Fix more conflicts
Resolve merge conflicts
Some more build and conflict fixes
Resolve conflicts for 06-fused-attention.py
Resolve merge conflicts for the tutorial group GEMM example
Fixes for some LIT tests
resolve remaining conflicts in tests
Fix empty kernel
set capability 0
* optimize_epilogue
* Add config
* Remove licenses
* Comment out Hopper-specific parameters when printing out configs
* Add benchmark parameters from the flash-attention repo
* Add Z and H to the autotuner key
---------
Co-authored-by: Lixun Zhang <lixun.zhang@amd.com>
* Add two fp8 data types, `tl.float8e4b8` and `tl.float8e5b16`, to Triton (see the sketch below).
* Add SW type conversion between `tl.float8e4b8`/`tl.float8e5b16` and `fp16`.
* Change flash attention to support fp8 in q/k.
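A minimal sketch of how the new types can be exercised from a kernel (kernel and variable names here are illustrative, not taken from this change): data is loaded as `fp16`, routed through `tl.float8e4b8`, and converted back.

```python
import triton
import triton.language as tl

# Illustrative round-trip through the new AMD fp8 type (not this change's test code).
@triton.jit
def fp8_roundtrip_kernel(x_ptr, y_ptr, n_elements, BLOCK: tl.constexpr):
    offs = tl.program_id(0) * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n_elements
    x = tl.load(x_ptr + offs, mask=mask)   # fp16 input
    x_fp8 = x.to(tl.float8e4b8)            # SW conversion fp16 -> fp8
    y = x_fp8.to(tl.float16)               # SW conversion fp8 -> fp16
    tl.store(y_ptr + offs, y, mask=mask)
```

`tl.float8e5b16` is used the same way; this is the kind of conversion the fp8 q/k path in flash attention relies on.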
This is a combination of 4 commits.
Works standalone and as a backend
This is a combination of 13 commits.
Works standalone and as a backend
This is a combination of 7 commits.
backend set default dir with flag
move bitcode to backend dir
copy backend
save
empty test works in backend mode
enable backend mode when copying to upstream
clean up
fix failure
minimize diff
add skip function
fix bug with corrupted DWARF expression
match num_warps
fix multi threaded test issue
move bitcode file out of lib
move backend to python/triton/third_party/hip
move libhsa
backend works again
restart ci
clean upstream location first before copy
match scripts
fix new error
memoize backend stuff
fix bug
* This PR adds a third-party backend for Triton that works on AMD
* It exposes much of the work that has been done in our
[fork](https://github.com/ROCmSoftwarePlatform/triton)
* Most unit tests in `test_core.py` pass
* Some unit tests are skipped for various reasons
* We plan to follow up with more PRs improving functionality and
performance in the future
---------
Co-authored-by: Philippe Tillet <phil@openai.com>
* [MFMA] FP8 and BF8 support
This PR adds support for fp8 and bf8 in the AccelerateMatmul pass and
introduces generation of float8 MFMA instructions in the TTG-to-LLVM conversion (see the sketch after this list).
* add tests
* fix tests
* review fix: fix variable naming and dot operand promotion.
* review comments fixes
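Illustrative only (block sizes, names, and pointer math below are made up, not this PR's tests): the kind of fp8 `tl.dot` the pass can now lower to float8 MFMA instructions.

```python
import triton
import triton.language as tl

# Sketch of an fp8 tile-level dot; bf8 (tl.float8e5b16) works the same way.
@triton.jit
def fp8_dot_kernel(a_ptr, b_ptr, c_ptr,
                   BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr, BLOCK_K: tl.constexpr):
    offs_m = tl.arange(0, BLOCK_M)
    offs_n = tl.arange(0, BLOCK_N)
    offs_k = tl.arange(0, BLOCK_K)
    a = tl.load(a_ptr + offs_m[:, None] * BLOCK_K + offs_k[None, :])  # fp16 tile
    b = tl.load(b_ptr + offs_k[:, None] * BLOCK_N + offs_n[None, :])  # fp16 tile
    acc = tl.dot(a.to(tl.float8e4b8), b.to(tl.float8e4b8))            # fp8 x fp8 -> fp32
    tl.store(c_ptr + offs_m[:, None] * BLOCK_N + offs_n[None, :], acc)
```

Whether a given shape actually hits the MFMA path is still subject to the usual `tl.dot` size constraints.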
---------
Co-authored-by: Shucai Xiao <shucai.xiao@amd.com>
* rebase onto improve_fwd_fa
* Fixed a leftover from rebase
* rebase onto improve_fa_fwd
* Reduce tuning space
* Disable bwd with D=128
* Add test for d=128
* Fix an issue with get_best_config when there is only one config
* Added better configs for d=128
* Fix typos
---------
Co-authored-by: Lixun Zhang <lixun.zhang@amd.com>
Without this change, a constexpr assignment (i.e. `A = B & C`, where `B`
and `C` are both constexpr) gets assigned as a Triton tensor, which
becomes an issue when `A` is later used as the condition of an `if`
statement.
Note: I had to add `not isinstance(node.value, ast.Constant)` to the
condition, because if we are assigning `x = 0` then the assigned value is
also a constexpr; in that case we do want to assign a Triton tensor
to `x`, so that we can still do `x.to(tl.int64)` for example, which cannot
be done on a constexpr.
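A small sketch of the pattern this fixes (kernel and flag names are illustrative):

```python
import triton
import triton.language as tl

@triton.jit
def example_kernel(out_ptr, IS_A: tl.constexpr, IS_B: tl.constexpr):
    FLAG = IS_A & IS_B    # constexpr & constexpr: stays a constexpr after this change
    x = 0                 # plain literal: still assigned as a Triton tensor...
    x = x.to(tl.int64)    # ...so conversions like this keep working
    if FLAG:              # and FLAG can be used as a compile-time condition
        x += 1
    tl.store(out_ptr, x)
```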
---------
Co-authored-by: Philippe Tillet <phil@openai.com>
By default, ptxas will enable fusion of mul/add to fma instructions. The
backend was also being configured unconditionally to enable this on
conversion from LLVM IR to PTX. This commit adds an option which can be
used to disable the FP fusion behavior in both locations.
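A usage sketch, assuming the option is exposed as a launch-time flag named `enable_fp_fusion` (the flag name is an assumption here, not taken from this commit):

```python
import torch
import triton
import triton.language as tl

@triton.jit
def axpy_kernel(x_ptr, y_ptr, out_ptr, a, n, BLOCK: tl.constexpr):
    offs = tl.program_id(0) * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n
    x = tl.load(x_ptr + offs, mask=mask)
    y = tl.load(y_ptr + offs, mask=mask)
    tl.store(out_ptr + offs, a * x + y, mask=mask)  # mul+add: an fma candidate

x = torch.randn(1024, device="cuda")
y = torch.randn(1024, device="cuda")
out = torch.empty_like(x)
# With the flag off, the mul and add stay separate in both the LLVM backend
# and ptxas, which matters when unfused, bitwise-reproducible FP results are needed.
axpy_kernel[(4,)](x, y, out, 2.0, 1024, BLOCK=256, enable_fp_fusion=False)
```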
In the current implementation, warpsPerCTA is always set to [numWarps, 1]
for the 2-tt.dot fusion scenario. This is not optimal for cases where
tt.dot does not have enough parallelism on the row dimension but does on
the column dimension (see the sketch below).
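An illustrative heuristic (not the pass's actual code) for how warps could instead be split between the two dimensions based on the tile shape:

```python
# Illustration only: shift warps from rows to columns when the row dimension
# of the dot tile cannot keep [numWarps, 1] busy.
def pick_warps_per_cta(num_warps, tile_m, tile_n, min_rows_per_warp=16):
    warps_m, warps_n = num_warps, 1
    while (warps_m > 1
           and tile_m // warps_m < min_rows_per_warp
           and tile_n // (warps_n * 2) >= min_rows_per_warp):
        warps_m //= 2
        warps_n *= 2
    return [warps_m, warps_n]

# e.g. an (M=16, N=256) tile with 8 warps: [8, 1] starves the rows,
# while this returns [1, 8].
print(pick_warps_per_cta(8, 16, 256))
```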
* Attempt to absorb upstream's changes to improve causal=True
* Add autotuner
* Optimize for AMD MI250
- add pre_load_v as a tuning parameter
- do not define N_CTX as constexpr
- perform the second dot before sum
  - move qk_scale out of the inner loop
  - add more configs in the autotuner (see the autotuner sketch below)
Note that the bwd kernel is disabled for now: with autotuning enabled, the
grid becomes a function, so ctx.grid[0] no longer works.
* Enable bwd kernel
* enforce cc=None on ROCm
* Comment
* Update approach to ignore integer cc values
Co-authored-by: Alexander Efimov <efimov.alexander@gmail.com>
---------
Co-authored-by: Alexander Efimov <efimov.alexander@gmail.com>
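A sketch of the tuning setup described above; the config values and kernel signature below are examples, not the ones shipped in this change:

```python
import triton
import triton.language as tl

configs = [
    triton.Config({'BLOCK_M': 128, 'BLOCK_N': 64, 'pre_load_v': True},  num_warps=4, num_stages=1),
    triton.Config({'BLOCK_M': 128, 'BLOCK_N': 64, 'pre_load_v': False}, num_warps=4, num_stages=1),
    triton.Config({'BLOCK_M': 64,  'BLOCK_N': 64, 'pre_load_v': True},  num_warps=8, num_stages=1),
]

# Z and H in the key give per-shape tuning; N_CTX stays a runtime argument
# rather than a constexpr, and pre_load_v is a tunable constexpr.
@triton.autotune(configs=configs, key=['Z', 'H', 'N_CTX'])
@triton.jit
def attn_fwd(Q, K, V, Out, Z, H, N_CTX,
             BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr,
             pre_load_v: tl.constexpr):
    pass  # attention body elided in this sketch

# Because the kernel is autotuned, the grid must be a callable of the chosen
# META, which is why code reading ctx.grid[0] directly had to change, e.g.:
# grid = lambda META: (triton.cdiv(N_CTX, META['BLOCK_M']), Z * H, 1)
```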
* Restructure ROCM Library Search
Currently there are a handful of ROCm-dependent files which are required for
Triton to run: the linker (ld.lld), the include files, and multiple hip/hsa
shared objects.
This change provides three locations in which to search for these files, all
searched in the same order (see the sketch below):
1. third_party/rocm. This location is within the python/triton directory
   and is carried over when Triton is built. If all necessary files
   are in this location, there is no need to have ROCm installed at
   all on the system.
2. The $ROCM_PATH environment variable. If it exists, it will override
   all other locations to find the necessary ROCm files.
3. /opt/rocm. The default location for ROCm installations. Finding one
   here tells Triton that ROCm is installed in this environment.
To ease with step 3, a new script, scripts/amd/setup_rocm_libs.sh,
has been added to the repo. Executing this script downloads all necessary
ROCm files from their respective packages on repo.radeon.com
and installs them in third_party/rocm, allowing Triton to run without installing
the full ROCm stack. setup_rocm_libs.sh takes an env var ROCM_VERSION if a user
wishes to install a ROCm version other than the default (currently 5.4.2).
When Triton wheels are built to support PyTorch, method 3 will be used to stay in
sync with PyTorch's approach of bringing along any libraries needed and not
requiring ROCm to be installed.
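A sketch of the lookup order described above (the helper name is illustrative, not the function added by this change):

```python
import os

def find_rocm_dir(triton_pkg_dir):
    # 1. Bundled copy carried along inside the triton package.
    bundled = os.path.join(triton_pkg_dir, "third_party", "rocm")
    if os.path.isdir(bundled):
        return bundled
    # 2. Explicit override through the ROCM_PATH environment variable.
    rocm_path = os.environ.get("ROCM_PATH")
    if rocm_path and os.path.isdir(rocm_path):
        return rocm_path
    # 3. Default system-wide ROCm installation.
    if os.path.isdir("/opt/rocm"):
        return "/opt/rocm"
    raise RuntimeError(
        "ROCm files not found; run scripts/amd/setup_rocm_libs.sh "
        "or install ROCm under /opt/rocm")
```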
(cherry picked from commit e6aea90fb3e8218cb562e5d990719112d8282702)
* Fix default rocm path
Running into `fatal error: hip/hip_runtime.h: No such file or directory` with the latest wheel, due to an incorrect directory for the ROCm libs
(cherry picked from commit 292bae625b113eb65c66cfe4442da7a6456c988a)
* setup_rocm_libs.sh manylinux refactor
(cherry picked from commit f995f314ada4606cb78dc6233cd9c8effc356191)
* Set setup_rocm_libs.sh to be executable
(cherry picked from commit 05d67b9418cacda0d356c2102d7c1a887948b013)
* Revert to using numbered .so files to fix upstream
(cherry picked from commit 34f8189eae57a23cc15b4b4f032fe25757e0db8e)
* Remove drm script
---------
Co-authored-by: Jeff Daily <jeff.daily@amd.com>
- Move atomic_cas and atomic_xchg to the "atomic ops" section of the
  documentation.
- Don't talk about the `cmp` operand for operations which don't have
  it.
- Document the `sem` operand (see the usage sketch after this list).
- :code:`foo` and ``foo`` don't work inside a :type: annotation,
apparently. (They are rendered literally, instead of being treated
as a formatting command.) Get rid of them.
- Format the bulleted lists in the load/store operations as intended.
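For reference, a small usage sketch of the operations touched by these doc changes; the exact set of accepted `sem` strings may vary by version, and the spin-lock pattern is only illustrative:

```python
import triton
import triton.language as tl

@triton.jit
def locked_store(lock_ptr, out_ptr, val):
    # atomic_cas takes a `cmp` operand: swap 0 -> 1 only while the lock is free.
    while tl.atomic_cas(lock_ptr, 0, 1, sem="acquire") == 1:
        pass
    tl.store(out_ptr, val)
    # atomic_xchg has no `cmp` operand: it swaps unconditionally (release the lock).
    tl.atomic_xchg(lock_ptr, 0, sem="release")
```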