Commit Graph

935 Commits

Author SHA1 Message Date
Goran Flegar
601b95cdbb [DEPS] bump LLVM version to llvm/llvm-project@49af650 (#2570)
Co-authored-by: Ashay Rane <ashay@users.noreply.github.com>
Co-authored-by: khasanovaa <khasanovaaliya19@gmail.com>
2023-10-31 12:06:25 -07:00
Zahi Moudallal
943330790a [FRONTEND] add do_not_specialize property back to JITFunction (#2573) 2023-10-31 12:02:45 -07:00
Nhat Nguyen
0cf3a67f04 [BUILD] Disable W503 in pyproject.toml (#2575)
This PR https://github.com/openai/triton/pull/2555 disabled `W503`
(meaning line breaks can now occur before a binary operator).

Surprisingly, the change didn't take effect or require any style
changes in the `triton` main `pre-commit` stage. But our `triton-shared`
[pipeline
run](https://github.com/microsoft/triton-shared/actions/runs/6710459100/job/18236352821)
(see `Check pre-commit` stage) picked this up correctly and complained
about formatting issues. I'm not entirely sure what could be the cause
for such difference, but if we also disable `W503` in `pyproject.toml`
then the rule is picked up correctly.
2023-10-31 11:57:02 -07:00
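
As an aside on what W503 governs (a toy snippet, not from the PR): with W503 disabled, flake8 accepts line breaks placed before a binary operator.

```python
first_term, second_term, third_term = 1.0, 2.0, 3.0

# With W503 disabled, wrapping a long expression with the operator at the
# start of the continuation line no longer triggers a lint error:
total = (first_term
         + second_term
         - third_term)
print(total)
```
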
Chris Jones
2398b82f18 [FRONTEND][BACKEND] add memory synchronization scope parameter to atomic ops. (#2562)
Co-authored-by: Keren Zhou <kerenzhou@openai.com>
2023-10-30 19:18:27 -07:00
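
A rough usage sketch of the new parameter; the keyword name and accepted values here are assumptions drawn from the commit title, not verified against this revision.

```python
import torch
import triton
import triton.language as tl


@triton.jit
def count_kernel(counter_ptr):
    # Assumed spelling: restrict the synchronization scope of the atomic to
    # the GPU rather than the whole system; "scope"/"gpu" are assumptions
    # based on the commit title.
    tl.atomic_add(counter_ptr, 1, sem="relaxed", scope="gpu")


counter = torch.zeros(1, dtype=torch.int32, device="cuda")
count_kernel[(128,)](counter)
print(counter.item())  # expected: 128
```
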
Keren Zhou
70fca00b67 [BACKEND] Fix device_print without arguments (#2566) 2023-10-30 20:04:44 -04:00
Keren Zhou
492886fcde [FRONTEND] Add reverse eq and ne (#2563) 2023-10-30 16:56:43 -04:00
Justin Lebar
12f906287f [FRONTEND] Refactor jit.py. (#2556)

The goal is to simplify the code and make it more flexible before we
change the kernel launch syntax to
`kernel[grid, compiler_flags(...)](...)`.

The main changes here are:

 - Get rid of the eval'ed code in make_launcher.  We can do everything
   using bind().
 - Add KernelParam and KernelArg classes, letting us get rid of the
   parallel arrays/dicts indexed by parameter index.
 - Get rid of duplicated kernel launch code in the cache-hit/cache-miss
   branches.
2023-10-30 13:14:51 -07:00
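
A minimal illustration (not the actual jit.py code) of the bind()-based approach: Python's inspect module can pair runtime arguments with parameter names without eval'ing a generated launcher.

```python
import inspect


def kernel_fn(x_ptr, y_ptr, n_elements, BLOCK_SIZE):
    pass  # stand-in for a @triton.jit kernel body


sig = inspect.signature(kernel_fn)

# bind() pairs each runtime argument with its parameter name, replacing the
# parallel per-index arrays/dicts mentioned in the commit message.
bound = sig.bind("X", "Y", 1024, BLOCK_SIZE=128)
bound.apply_defaults()
for name, value in bound.arguments.items():
    print(name, "=", value)
```
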
Justin Lebar
f88b01f558 Apply ruff pre-commit to python/triton/runtime. (#2558)
We're in the process of incrementally converting from autopep8 + flake8
+ isort to ruff, on a directory-by-directory basis.

The motivation to switch away from autopep8 is that I can't get it to
wrap long lines, even with -aaa.  This seems to be a known problem,
https://github.com/hhatto/autopep8/issues/497.

See more details about alternatives tried in
https://github.com/openai/triton/pull/2557.
2023-10-30 11:06:44 -07:00
Someone
cde42e6221 [BUILD] make cuda tools vendoring optional (#2546) 2023-10-26 23:16:41 -07:00
Dongdong Li
0469d5fccd [OPTIMIZER] Remove extra wgmma_wait_group in flash attention (#2399)
Co-authored-by: dongdongl <dongdongl@nvidia.com>
2023-10-26 16:35:36 +00:00
zhu jianjiang
cfae7e2a25 [BACKEND] Fix matmul downcast path (#2528)
Adds a regression test for https://github.com/openai/triton/issues/2523.

---------

Co-authored-by: Jokeren <robinho364@gmail.com>
Co-authored-by: Keren Zhou <kerenzhou@openai.com>
2023-10-26 09:43:49 -04:00
runseny
4c816c2f59 [OPS] enable flash_attention_v2 TMA (#2544) 2023-10-25 23:31:17 -07:00
Hongtao Yu
2323adb387 [BACKEND] Handle AtomicCASOp in GPU IR conversion (#2514)
Addressing https://github.com/openai/triton/issues/2011

Co-authored-by: Philippe Tillet <phil@openai.com>
Co-authored-by: Keren Zhou <kerenzhou@openai.com>
2023-10-25 15:20:07 -04:00
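
For reference, a small Triton kernel that exercises the op handled by this commit; a minimal sketch, not the test from the PR.

```python
import torch
import triton
import triton.language as tl


@triton.jit
def cas_kernel(lock_ptr, out_ptr):
    # Atomically replace the value at lock_ptr with 1 if it is currently 0;
    # the returned value is whatever was stored there before the swap.
    old = tl.atomic_cas(lock_ptr, 0, 1)
    tl.store(out_ptr + tl.program_id(0), old)


lock = torch.zeros(1, dtype=torch.int32, device="cuda")
out = torch.empty(64, dtype=torch.int32, device="cuda")
cas_kernel[(64,)](lock, out)
# Exactly one program should have observed the initial value 0.
print((out == 0).sum().item())  # expected: 1
```
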
Justin Lebar
e70e11e834 [BACKEND] Improve printf. (#2532)

Previously, we printed all of a GPU thread's values in a single printf()
call, and this, plus the user-specified prefix, was all we printed.

This caused a few problems.

 - nvptx printf can only handle 32 arguments; if you pass more than
   that, it prints garbage.  So if a thread had more than 32 values, you
   couldn't print them, issue #2486.

 - The order of the values within the Triton program (GPU thread block)
   is an implementation detail -- it depends on the layout the compiler
   assigns to a tensor.  So this also prevented you from interpreting
   the printed output.

To address this, we now print the Triton pid and multi-dimensional
Tensor index for each value.  And each value gets its own line to avoid
passing too many args to printf.

Example output:

```
pid (0, 1, 2) idx (36, 127) x: 42
```

If you want to observe all the values in a tensor in order, you can grep
and then sort the output.

We also make a UX enhancement to print: The printed label always ends
with ": "; you don't have to add it yourself.

Fixes #2486.
2023-10-25 08:47:55 +00:00
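
A minimal sketch of a kernel producing output in the new format; the exact lines printed depend on the launch configuration.

```python
import torch
import triton
import triton.language as tl


@triton.jit
def print_kernel(x_ptr, BLOCK_SIZE: tl.constexpr):
    offsets = tl.arange(0, BLOCK_SIZE)
    x = tl.load(x_ptr + offsets)
    # Prints one line per element, prefixed with the program id and the
    # tensor index, e.g. something like "pid (0, 0, 0) idx (3) x: 3.000000";
    # the ": " after the label is appended automatically.
    tl.device_print("x", x)


x = torch.arange(8, dtype=torch.float32, device="cuda")
print_kernel[(1,)](x, BLOCK_SIZE=8)
torch.cuda.synchronize()
```
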
Justin Lebar
9b4d91b132 Add TRITON_BUILD_WITH_ASAN envvar. (#2537)
Note that asan doesn't work with programs that use the GPU, so this is
only useful for running tools like triton-opt.

I was not able to get msan working.  libstdc++'s std::string
implementation seems to use uninitialized memory in a way that seems
safe but triggers an msan error.  I tried and gave up on switching to
libc++ and teaching msan to ignore this error.
2023-10-24 10:30:30 -07:00
Philippe Tillet
3f2b7263e8 Revert "[OPTIMIZER] Tweak warpsPerCTA based on the shape of MMA output (#2485)" (#2541)
Reverts openai/triton#2525
2023-10-24 10:23:19 -07:00
Sam Shleifer
12da43084b [TESTING] add diff column, option to return df in benchmark (#2469) 2023-10-24 05:17:00 +00:00
Philippe Tillet
8f467f1ea9 [OPTIMIZER] Tweak warpsPerCTA based on the shape of MMA output (#2485) (#2525)
Reverts openai/triton#2497
2023-10-23 21:50:58 -07:00
Adnan Akhundov
50add54334 [FRONTEND] Add input dtypes to autotuning key (#2534) 2023-10-24 03:29:30 +00:00
Thomas Raoux
cba7abd682 [BACKEND] Remove ttg.cmp and ttg.select and replace by arith op (#2526)
Now that the attribute-related bug is fixed in MLIR, we can use arith
ops for cmp and select ops.
2023-10-23 19:35:46 -07:00
Zahi Moudallal
b0c166b9e3 [BACKEND] Fixing bug in elementwise conversion (#2517) 2023-10-20 09:11:15 -07:00
runseny
dc9e3063d7 [HOPPER] Move to tl.make_block_ptr in flash_attention backward scripts (#2395) 2023-10-20 11:06:48 +08:00
Justin Lebar
30186f401e Fix segfault in assertion test. (#2520)

#### Commits in this PR
1. Fix segfault in assertion test.
    
The issue here is that we were not checking the return values of the
CUDA API calls we were making. We call one function and then use the
data it returns as input to another call. Obviously this doesn't work if
the first call returns an error and doesn't actually return meaningful
data.

I don't know why this was passing in CI, but it failed consistently for
me.

#### [PR chain](https://github.com/jlebar/git-pr-chain)
1. 👉 #2520 👈 **YOU ARE HERE**


2023-10-19 13:42:38 -07:00
Justin Lebar
bdf464e4a8 Make kernel_static_print test work when called twice. (#2518)

#### Commits in this PR
1. Make kernel_static_print test work when called twice.
    
This test is checking that a message is printed when the kernel is
compiled. But the test had nothing to force the kernel to be compiled
every time you ran the test. So after you ran it once, the test would
fail every time until you cleared the cache.

#### [PR chain](https://github.com/jlebar/git-pr-chain)
1. 👉 #2518 👈 **YOU ARE HERE**
1. #2520


2023-10-19 13:17:38 -07:00
Horace He
a4f373938c [RUNTIME] Filter out paths that don't exist in json group cache (#2511)
There's no guarantee that `/tmp/triton/*/*.json` existing means that the
corresponding `/tmp/triton/*/*.cubin` file also exists because the tmp
directory doesn't guarantee file stability.
2023-10-18 16:44:34 -04:00
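
A minimal sketch of the kind of existence check described above; the paths and file layout are illustrative, not the actual runtime code.

```python
import glob
import os

# Only keep cache entries whose referenced artifacts still exist; a *.json
# entry may outlive its sibling *.cubin under /tmp.
json_paths = glob.glob("/tmp/triton/*/*.json")
valid = [p for p in json_paths
         if os.path.exists(os.path.splitext(p)[0] + ".cubin")]
print(f"{len(valid)}/{len(json_paths)} cache entries still usable")
```
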
ian Bearman
768fc1fcd9 [FRONTEND] change hash to not require ptxas (#2476)
I noticed that Triton is using the `ptxas` version as part of the
version hash even for non-CUDA targets. This is an attempt at fixing
this. Moving the version calculation to the back-end makes sense to me
from an architectural standpoint, so that's my approach here. I'm not as
confident in the implementation, so please if folks have any feedback
let me know.
2023-10-17 10:28:51 -07:00
Thomas Raoux
376acb610b [BUILD] Fix macos x86 build (#2505)
There was a mismatch in the llvm link name
2023-10-17 09:49:09 -07:00
Mehdi Amini
721897fcc4 upgrade llvm to b1115f8c (NFC) (#2403)
Co-authored-by: Thomas Raoux <thomas.raoux@openai.com>
Co-authored-by: Keren Zhou <kerenzhou@openai.com>
Co-authored-by: Phil Tillet <phil@openai.com>
2023-10-16 16:38:49 -07:00
Zahi Moudallal
726bdb984f [FRONTEND][BACKEND] Fix constexpr assignment ; revert #2430 (#2496)
Without this change, a constexpr assignment (i.e. `A = B & C`, where `B`
and `C` are both constexpr) gets assigned to a triton tensor, which
becomes an issue when `A` is used as the condition of an `if`
statement.
Note: I had to add `not isinstance(node.value, ast.Constant)` to the
condition because if we are assigning `x = 0` then the assigned value is
also a constexpr, but in this case we do want to assign a triton tensor
to `x` so that we can do `x.to(tl.int64)` for example, which cannot be
done on a constexpr.

---------

Co-authored-by: Philippe Tillet <phil@openai.com>
2023-10-16 12:35:19 -07:00
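
A sketch of the pattern described above, assuming `B` and `C` arrive as tl.constexpr arguments; with the fix, `A` stays a constexpr and can drive the `if`.

```python
import torch
import triton
import triton.language as tl


@triton.jit
def kernel(out_ptr, B: tl.constexpr, C: tl.constexpr):
    A = B & C   # constexpr & constexpr should stay a constexpr...
    if A:       # ...so it can be used as a compile-time condition
        tl.store(out_ptr, 1)
    else:
        tl.store(out_ptr, 0)


out = torch.zeros(1, dtype=torch.int32, device="cuda")
kernel[(1,)](out, B=True, C=False)
print(out.item())  # expected: 0
```
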
Stewart Hall
29828fe491 [FRONTEND] add option to disable fp mul/add fusion (#2495)
By default, ptxas will enable fusion of mul/add to fma instructions. The
backend was also being configured unconditionally to enable this on
conversion from LLVM IR to PTX. This commit adds an option which can be
used to disable the FP fusion behavior in both locations.
2023-10-14 12:23:30 -07:00
Philippe Tillet
8db4fac3b0 Revert "[OPTIMIZER] Tweak warpsPerCTA based on the shape of MMA output (#2485)" (#2497)
Reverts openai/triton#2485
2023-10-13 23:32:59 -07:00
Weixing Zhang
76858bd917 [OPTIMIZER] Tweak warpsPerCTA based on the shape of MMA output (#2485)
In the current implementation, warpsPerCTA is always set to [numWarps, 1]
for the 2-tt.dot fusion scenario. But it is not optimal for cases where
tt.dot doesn't have enough parallelism on the row dimension but does on
the column dimension.
2023-10-12 22:25:42 -07:00
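
A toy sketch, in plain Python rather than the compiler code, of the kind of shape-aware split described above instead of always hard-coding [numWarps, 1].

```python
def pick_warps_per_cta(m, n, num_warps):
    # Toy heuristic: shift warps from the row dimension to the column
    # dimension while the per-warp row tile is smaller than the column tile.
    warps_m, warps_n = num_warps, 1
    while warps_m > 1 and m // warps_m < n // warps_n:
        warps_m //= 2
        warps_n *= 2
    return [warps_m, warps_n]


print(pick_warps_per_cta(128, 128, 4))  # [2, 2]
print(pick_warps_per_cta(16, 256, 4))   # [1, 4]
```
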
Keren Zhou
f81d9d876f [FRONTEND] Fix math for constant values (#2472)
https://github.com/openai/triton/issues/2470
2023-10-12 12:11:42 -07:00
Beal Wang
5812d970a8 [HOPPER][OPTIMIZER] remove divOp and remOp from gemm math loop (#2402)
This is just for Warp Specialization kernels on Hopper. Replace DivOp
and RemOp with SelectOp and AndOp/XorOp.
2023-10-09 14:42:06 +08:00
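
For the power-of-two case, div/rem can be expressed with cheaper bitwise ops; a plain-Python illustration of that equivalence (not the actual Hopper warp-specialization rewrite, which also uses SelectOp).

```python
# For a power-of-two divisor d, i % d == i & (d - 1) and i // d == i >> log2(d).
d = 4
for i in range(16):
    assert i % d == i & (d - 1)
    assert i // d == i >> 2
print("div/rem match their bitwise equivalents for d =", d)
```
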
Philippe Tillet
424e67e727 [FRONTEND] improved while loop error messages (#2463) 2023-10-06 18:37:52 -07:00
Lixun Zhang
ded79e87ee [TUTORIALS] Enable causal=False in FA fwd kernel (#2459) 2023-10-06 17:54:45 -07:00
Sam Shleifer
fb3c2f3b2b [TUTORIALS] attention: support torch 2.1 (#2461) 2023-10-06 17:50:11 -07:00
Zahi Moudallal
be19cf3103 [BACKEND] Enable reduce with 3D tensors and added tests (#2460) 2023-10-06 15:08:22 -07:00
Keren Zhou
a42d517021 [FRONTEND] Better warning on nested jit functions (#2453) 2023-10-06 14:22:51 -07:00
Thomas Raoux
24560b8152 Better tuning for H100 flash attention. (#2444)
Improves performance of the fwd pass from 420 to 440 TFLOPS.
2023-10-04 14:43:41 -07:00
Justin Lebar
71a8544ce7 Improve docs for atomic and load/store operations. (#2437)
- Move atomic_cas and atomic_xchg to "atomic ops" section of
  documentation.
- Don't talk about the `cmp` operand for operations which don't have it.
- Document the `sem` operand.
- :code:`foo` and ``foo`` don't work inside a :type: annotation,
  apparently.  (They are rendered literally, instead of being treated
  as a formatting command.)  Get rid of them.
- Format the bulleted lists in the load/store operations as intended.
2023-10-04 04:17:42 +00:00
Thomas Raoux
c656a139d3 [BACKEND] Fix for FP8 QK inputs in flash attention forward pass (#2435) 2023-10-03 21:02:13 -07:00
Philippe Tillet
a0025cfc44 [FRONTEND] add missing implicit constexpr conversion in dot (#2427) 2023-10-01 16:07:50 -07:00
Philippe Tillet
533efd0cac [FRONTEND][BACKEND] changed float8e4b15 clipping semantics from +-1.875 to +-1.75 (#2422)
Clipping float8e4b15 to +-1.875 is a bad idea because these values are
represented as 0x7f and 0xff, which are +-NaN on H100 for float8e4nv.
We lose two values, but this will make compatibility with float8e4nv
much less painful (it will just be a matter of adjusting the bias).
2023-09-29 23:33:28 -07:00
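
A small decoder sketch for the bit patterns mentioned above, assuming the e4b15 layout is 1 sign bit, 4 exponent bits with bias 15, and 3 mantissa bits (the layout is inferred from the type name, not from this PR).

```python
def fp8_e4b15_to_float(byte):
    # 1 sign bit, 4 exponent bits (bias 15), 3 mantissa bits.
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> 3) & 0xF
    man = byte & 0x7
    if exp == 0:
        return sign * (man / 8.0) * 2.0 ** (1 - 15)  # subnormal
    return sign * (1.0 + man / 8.0) * 2.0 ** (exp - 15)


print(fp8_e4b15_to_float(0x7F))  # 1.875: old clip value, aliases NaN under e4nv
print(fp8_e4b15_to_float(0x7E))  # 1.75:  new clip value
print(fp8_e4b15_to_float(0xFF))  # -1.875
```
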
Hongtao Yu
e0edb70f78 [BACKEND] support of Fp8E4M3Nv to Bf16 conversion (#2415) 2023-09-29 17:29:41 -07:00
Keren Zhou
e284112818 Revert "[TUTORIALS] Remove unneeded quantiles parameter (#2408)" (#2419)
This reverts commit 99af23f6f4.

`quantiles` shouldn't be the problem. The documentation workflow failed
because of other issues.
2023-09-29 14:24:50 -07:00
Keren Zhou
f2f5f1d457 [TUTORIALS] Add missing docstrings (#2420)
Depends on https://github.com/openai/triton/pull/2419 to fix the
documentation workflow.
2023-09-29 14:24:30 -07:00
Thomas Raoux
90bef57acf [BACKEND] turn on MMA V3 by default on Hopper (#2414) 2023-09-28 22:45:28 -07:00
evelynmitchell
99af23f6f4 [TUTORIALS] Remove unneeded quantiles parameter (#2408)
The fix is to remove the quantiles parameter in both the triton and
torch calls for the benchmark.
2023-09-28 13:48:38 -04:00
Thomas Raoux
721bdebee1 [OPTIMIZATION] Fix performance for attention backward path with mma v3 (#2411)
Support having a chain of MMAs with mixed sizes.
Serialize the different block calculations in backward attention to
work around a problem with ptxas and wgmma.
2023-09-28 10:29:08 -07:00