Commit Graph

652 Commits

Author SHA1 Message Date
Philippe Tillet
b5ba639bae [FRONTEND] fixed issue for fp64 literals and added tests (#1698)
fixes #1686
2023-05-20 18:36:28 -07:00
Keren Zhou
fb30d84069 [FRONTEND] Refactor contains_return_op into an independent AST (#1694)
https://github.com/openai/triton/issues/1690
2023-05-20 11:18:40 -07:00
Zahi Moudallal
34817ecc95 [BACKEND] Added support to convert shared to distributed layouts (#1682) 2023-05-17 17:20:29 -07:00
Daniil Fukalov
17eb982771 [OPS] Remove duplicated function already defined in triton module. (#1679) 2023-05-16 17:17:13 -07:00
Keren Zhou
3baab48eaf [FRONTEND] Differentiate between bool and int in the frontend (#1678)
`bool` is a subclass of `int`, so `isinstance(bool_var, int) == True`,
and a `bool` constant will be converted to an `int` constant.

In Triton specifically, if a bool var is treated as an integer, it
prevents us from using the `logical_and` operator, which requires both
operands to have the same bit length.

> Cannot bitcast data-type of size 32 to data-type of size 1

Differentiating int and bool makes the syntax closer to native Python.
We can now use `if bool_var and condition` to check truthiness, and
`if bool_var is True` to check identity.
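A minimal sketch of what this enables (the kernel and its names are hypothetical): a scalar bool argument can now be combined with another condition directly.

```python
import triton
import triton.language as tl

@triton.jit
def write_if(X, flag):
    # `flag` stays a 1-bit bool instead of decaying to int32, so the
    # logical and below needs no 32-bit -> 1-bit bitcast.
    first_block = tl.program_id(0) == 0
    if flag and first_block:
        tl.store(X, 1.0)
```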
2023-05-16 18:24:16 +00:00
Ingo Müller
0c4de8ab72 [DEPENDENCIES] Update LLVM to 17.0.0 (c5dede880d17) and port changes. (#1668)
This depends on a [pending LLVM
release](https://github.com/ptillet/triton-llvm-releases/pull/10).

* Implement setCalleeFromCallable in CallOp.
* Cast type to ShapedType for various getters.
* Improve TritonDialect::materializeConstant due to breaking change in
constructor of arith::ConstantOp.
* Add OpaqueProperties argument in inferReturnTypes.

Co-authored-by: Philippe Tillet <phil@openai.com>
2023-05-15 21:51:14 -07:00
Sophia Wisdom
9820899b38 [FRONTEND] Assert that for loop bounds must be ints (#1664) 2023-05-12 22:44:45 -07:00
George Karpenkov
3249d7a9b0 [FRONTEND] Do not use exceptions to guide control flow in compilation runtime (#1663)
The Triton runtime currently relies on KeyError to check whether a kernel
has been compiled. This results in somewhat confusing backtraces when
running the kernel crashes, as the stack trace includes not only the
actual crash but also the stack trace for the original KeyError that
was caught.
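A minimal sketch of the pattern change (the cache structure here is hypothetical, not Triton's actual one):

```python
cache = {}

def compile_kernel(src):      # stand-in for the real compilation step
    return object()

key, src = ("add_kernel", "fp32"), "..."

# Before: exception-driven lookup; if the kernel later crashes, the
# caught KeyError is chained into the backtrace as noise.
try:
    kernel = cache[key]
except KeyError:
    kernel = cache[key] = compile_kernel(src)

# After: an explicit membership test keeps tracebacks clean.
kernel = cache.get(key)
if kernel is None:
    kernel = cache[key] = compile_kernel(src)
```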
2023-05-12 22:03:26 -07:00
Keren Zhou
674f9bf7a6 [FRONTEND] Better error messages for noinline functions (#1657)
```
at 10:18:def val_multiplier_noinline(val, i):
    return val * i

           ^
Function val_multiplier_noinline is marked noinline, but was called with non-scalar argument val:fp32[constexpr[128]]
```
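For reference, a hypothetical reproducer of that diagnostic: a `noinline` function called with a tensor rather than a scalar.

```python
import triton
import triton.language as tl

@triton.jit(noinline=True)
def val_multiplier_noinline(val, i):
    return val * i

@triton.jit
def kernel(X, i, BLOCK: tl.constexpr):
    offs = tl.arange(0, BLOCK)
    val = tl.load(X + offs)                  # fp32[BLOCK] tensor
    res = val_multiplier_noinline(val, i)    # non-scalar arg -> error above
    tl.store(X + offs, res)
```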
2023-05-11 12:46:25 -07:00
Benjamin Chetioui
115964b780 [TESTS] Add regression test for issue #1601. (#1611)
Following up on #1603, I am adding a new file meant to contain
functional regression tests to the repository.
Let me know if another folder would be a more appropriate place for
these tests.

Co-authored-by: Philippe Tillet <phil@openai.com>
2023-05-10 23:30:36 -07:00
Natalia Gimelshein
0daee68d71 [FRONTEND] Don't call set_device in tl.dot (#1646)
This breaks multiprocess compilation
2023-05-10 20:39:27 -04:00
Zahi Moudallal
fb40bf1954 [TEST] Fixed and re-enabled reduce test (#1644)
Re-enabled the reduce test after fixing the %cst stride in the ttgir and
modifying the sweep parameters to make sure the shape per CTA is less
than or equal to the tensor shape.
2023-05-10 15:15:11 -07:00
Keren Zhou
147ec4384d [FRONTEND] Hotfix for contains_return_op (#1651)
`noinline` can be None, False, or True, so we have to check the callee
in the first two cases.
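A minimal sketch of the check (the data structures are hypothetical): recurse only into callees that will actually be inlined.

```python
def contains_return_op(fn) -> bool:
    # noinline may be None, False, or True; only the first two mean the
    # callee's body is inlined, so only then can its returns leak upward.
    for callee in fn.callees:
        if not callee.noinline and contains_return_op(callee):
            return True
    return any(op.is_return for op in fn.ops)
```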
2023-05-10 15:14:53 -07:00
Mario Lezcano Casado
6b1af5fe37 [FRONTEND] Add support for scalar conditions in device_assert (#1641)
This sometimes happens in TorchInductor. See
https://github.com/pytorch/pytorch/pull/100880.
More generally, it's useful to be able to write `tl.device_assert(False,
msg)`.
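A hypothetical usage sketch, mirroring the commit's example:

```python
import triton
import triton.language as tl

@triton.jit
def guard(X, N):
    pid = tl.program_id(0)
    if pid >= N:
        # scalar condition: a plain Python False is now accepted
        tl.device_assert(False, "launched more blocks than expected")
    x = tl.load(X + pid)
    tl.device_assert(x >= 0, "x must be non-negative")  # tensor condition
```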

Co-authored-by: Keren Zhou <kerenzhou@openai.com>
2023-05-09 23:05:00 -07:00
Keren Zhou
b19b274d93 [FRONTEND] Fix return op related control flow issues (#1637)
- Case 1: Return after static control flow is taken. Peel off
instructions after the first `return` for each basic block.

```python
if static_condition:
    tl.store(...)
    return
return
```

- Case 2: Return exists in both `if` and `else` branches of an inlined
`JITFunction` function

```python
def foo():
    if dynamic_condition:
        return a
    else:
        return b
```

- Case 3: Return exists in a `JITFunction` from another module

```python
import module
if cond:
    a = module.func()
```

- Case 4: A chain of calls through undefined local variables

```python
import module
if cond:
    a = x
    a = a.to(tl.int32).to(tl.int32)
```

- Case 5: Call a function `func` without returning variables. `func` is
recognized as an `Expr` first instead of a `Call`.

```python
if cond:
    foo()
else:
    bar()
```

- Case 6: Call a `noinline` function. We don't need to check if the
function contains any return op.
2023-05-09 12:51:14 -04:00
Michaël Benesty
858a2f0a5e [FRONTEND] Added interpreter mode (#1573)
A simple mechanism to run Triton kernels on PyTorch for debugging purposes
(upstreamed from Kernl).

Todo:
- random grid iteration
- support of atomic ops
- more unit tests
- cover new APIs?
2023-05-08 14:28:20 -07:00
q.yao
132fe1bb01 [DOCS] Fix docstrings for sphinx docs (#1635) 2023-05-08 08:41:46 -04:00
Philippe Tillet
d338521b65 [SETUP] Removing torch as a test dependency (#1632)
A circular dependency is causing trouble now that our interpreter depends
on torch 2.0 ...
2023-05-07 12:29:19 -07:00
Zahi Moudallal
125d9d1cc7 [TEST] Added convert layout test from/to sliced blocked/mma (#1620) 2023-05-06 00:20:52 +00:00
Keren Zhou
fd381e2336 [BACKEND] Allow noinline functions to return multiple values of primitive types (#1623)
Fix https://github.com/openai/triton/issues/1621
2023-05-05 19:25:58 +00:00
Zahi Moudallal
e2ae2c6c48 [BACKEND] Modified store op thread masking (#1605) 2023-05-04 17:15:05 -07:00
peterbell10
deb2c71fb4 [FRONTEND] Add tl.expand_dims (#1614)
This exposes `semantic.expand_dims` in the public API and builds upon it
with support for expanding multiple dimensions at once. e.g.
```python
tl.expand_dims(tl.arange(0, N), (0, -1))  # shape = [1, N, 1]
```

Compared to indexing with `None`, this API is useful because the
dimensions can be constexpr values rather than hard-coded into the
source. As a basic example
```python
@triton.jit
def max_keepdim(value, dim):
    res = tl.max(value, dim)
    return tl.expand_dims(res, dim)
```
2023-05-04 09:46:24 -07:00
Michaël Benesty
d196302cb0 [FRONTEND] make torch optional (#1604)
Make torch optional to fix the circular dependency issue.
2023-05-02 21:56:25 -07:00
Zahi Moudallal
3449a9d40d Zahi/slice reduce rebased (#1594)
[BACKEND] Enable slice layout support for reduce op
2023-05-01 18:00:23 -07:00
albanD
9d5354d991 [RUNTIME] Ensure we hold the GIL before calling into CPython API in cubin binding (#1583)
Formatting of the diff is not the best. I only indented the whole
function, moved the creation of the py::bytes and the return out of the
scope and declared and assigned the cubin variable appropriately.
Everything else is unchanged.

Today it triggers the following error on CPython debug build:
```
Fatal Python error: _PyMem_DebugMalloc: Python memory allocator called without holding the GIL
Python runtime state: initialized

```

---------

Co-authored-by: Keren Zhou <kerenzhou@openai.com>
Co-authored-by: Philippe Tillet <phil@openai.com>
2023-05-01 08:41:55 -07:00
Keren Zhou
3aff0102a3 [FRONTEND] Fix calling local variables’ attribute functions in the if statement (#1597)
If `node.func` is an `ast.Attribute`, it won't cause an early return.
(Not sure if I interpret it correctly)

https://github.com/openai/triton/issues/1591
2023-04-30 15:41:16 -07:00
David MacLeod
4b072516e7 [FRONTEND] add architecture to hash to avoid invalid image on cubin load (#1593)
Closes https://github.com/openai/triton/issues/1556
https://github.com/openai/triton/issues/1512

The current hash used for caching the cubin does not include the
architecture. This leads to the following error when compiling against
one arch and running against another (with no code changes to trigger a
recompilation).
```
RuntimeError: Triton Error [CUDA]: device kernel image is invalid
```
I was not sure what unit tests would be appropriate here (if any).
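A minimal sketch of the idea (the helper and its inputs are hypothetical, not Triton's actual key function): folding the architecture into the cache key forces a recompile whenever the device changes.

```python
import hashlib

def cubin_cache_key(src: str, arch: int, ptx_version: int) -> str:
    # With `arch` in the key, a cubin built for sm_80 can never be looked
    # up by a process running on sm_90, avoiding the invalid-image error.
    h = hashlib.sha256()
    for part in (src, str(arch), str(ptx_version)):
        h.update(part.encode())
        h.update(b"\0")                  # separator to avoid collisions
    return h.hexdigest()
```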

Co-authored-by: davidma <davidma@speechmatics.com>
2023-04-29 19:32:10 +00:00
Keren Zhou
ee864048b3 [FRONTEND][BACKEND] Add the noinline annotation for triton.jit (#1568)
# Introducing the `noinline` Parameter for the Triton JIT Decorator

We're excited to introduce a new parameter, `noinline`, that can be
added to the `jit` decorator in Triton. This parameter allows developers
to specify that a particular Triton function should not be inlined into
its callers. In this post, we'll dive into the syntax, purpose, and
implementation details of this new feature.

## Syntax

To use the `noinline` parameter, simply add `noinline=True` to the `jit`
decorator for the function that you don't want to be inlined. Here's an
example:

```python
@triton.jit(noinline=True)
def device_fn(x, y, Z):
    z = x + y
    tl.store(Z, z)

def test_noinline():
    @triton.jit
    def kernel(X, Y, Z):
        x = tl.load(X)
        y = tl.load(Y)
        device_fn(x, y, Z)
```

In this example, the `device_fn` function is decorated with
`@triton.jit(noinline=True)`, indicating that it should not be inlined
into its caller, `kernel`.

## Purpose

The `noinline` parameter serves several key purposes:

- Reducing code size: By preventing inlining, we can reduce the size of
the compiled code.
- Facilitating debugging: Keeping functions separate can make it easier
to debug the code.
- Avoiding common subexpression elimination (CSE) in certain cases:
preventing inlining keeps CSE from merging expressions across call
boundaries, which can reduce register pressure.
- Enabling dynamic linking: This parameter makes it possible to
dynamically link Triton functions.

## Implementation

The implementation of the `noinline` parameter involves significant
changes to three analysis modules in Triton: *Allocation*, *Membar*, and
*AxisInfo*. Prior to this update, these modules assumed that all Triton
functions had been inlined into the root kernel function. With the
introduction of non-inlined functions, we've had to rework these
assumptions and make corresponding changes to the analyses.

### Call Graph and Limitations

![figure 1](https://user-images.githubusercontent.com/2306281/234663904-12864247-3412-4405-987b-6991cdf053bb.png)

To address the changes, we build a call graph and perform all the
analyses on the call graph instead of a single function. The call graph
is constructed by traversing the call edges and storing them in an edge
map. Roots are extracted by checking nodes with no incoming edges.
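A minimal sketch of that construction (the data structures are hypothetical):

```python
from collections import defaultdict

def build_call_graph(functions, call_edges):
    edges = defaultdict(list)            # caller -> callees (the edge map)
    has_incoming = set()
    for caller, callee in call_edges:
        edges[caller].append(callee)
        has_incoming.add(callee)
    roots = [f for f in functions if f not in has_incoming]
    return edges, roots
```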

The call graph has certain limitations:

- It does not support recursive function calls, although this could be
implemented in the future.
- It does not support dynamic function calls, where the function name is
unknown at compilation time.

### Allocation

![figure 2](https://user-images.githubusercontent.com/2306281/234665110-bf6a2660-06fb-4648-85dc-16429439e72d.png)

In Triton, shared memory allocation is achieved through two operations:
`triton_gpu.convert_layout` and `triton_gpu.alloc_tensor`. The
`convert_layout` operation allocates an internal tensor, which we refer
to as a *scratch* buffer, while the `alloc_tensor` operation returns an
allocated tensor and is thus known as an *explicit* buffer.

To accommodate the introduction of function calls, we are introducing a
third type of buffer called a *virtual* buffer. Similar to scratch
buffers, virtual buffers are allocated internally within the scope of a
function call, and the buffers allocated by the called functions remain
invisible to subsequent operations in the calling function. However,
virtual buffers are distinct from scratch buffers in that the call
operation itself does not allocate memory—instead, it specifies the
total amount of memory required by all the child functions being called.
The actual allocation of buffers is performed by individual operations
within these child functions. For example, when invoking edge e1, no
memory is allocated, but the total amount of memory needed by function B
is reserved. Notably, the amount of shared memory used by function B
remains fixed across its call sites, because the analysis must account
for dynamic control flow within each function.

An additional challenge to address is the calculation of shared memory
offsets for functions within a call graph. While we can assume a shared
memory offset starting at 0 for a single root function, this is not the
case with a call graph, where we must determine each function's starting
offset based on the call path. Although each function has a fixed memory
consumption, the starting offset may vary. For instance, in Figure 2,
the starting offset of function C through edges e1->e2 differs from that
through edges e2->e4. To handle this, we accumulate the starting offset
at each call site and pass it as an argument to the called function.
Additionally, we amend both the function declaration and call sites by
appending an offset variable.
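A sketch of that bookkeeping (the IR helpers are hypothetical, and the real pass also accounts for which buffers are live at each call site): the accumulated offset is appended as an extra argument at every call.

```python
def assign_offsets(fn, base, edges, own_usage):
    # own_usage[fn]: the fixed shared memory consumed by fn's own buffers;
    # callees are placed past it so their buffers never overlap fn's.
    for call in edges[fn]:
        callee_base = base + own_usage[fn]
        call.append_operand(callee_base)        # amend the call site
        assign_offsets(call.callee, callee_base, edges, own_usage)
```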

### Membar

![figure 3](https://user-images.githubusercontent.com/2306281/234665157-844dd66f-5028-4ef3-bca2-4ca74b8f969d.png)

The membar pass is dependent on the allocation analysis. Once the offset
and size of each buffer are known, we conduct a post-order traversal of
the call graph and analyze each function on an individual basis. Unlike
previous analyses, we now return buffers that remain unsynchronized at
the end of functions, allowing the calling function to perform
synchronization in cases of overlap.
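A sketch of that traversal (the types and the per-op helper are hypothetical):

```python
def membar(fn, edges, analyzed):
    if fn in analyzed:                    # each function analyzed once
        return analyzed[fn]
    pending = set()
    for call in edges[fn]:
        # post-order: a callee's leftover unsynchronized buffers become
        # the caller's responsibility if they overlap later accesses
        pending |= membar(call.callee, edges, analyzed)
    analyzed[fn] = insert_barriers(fn, pending)  # hypothetical per-op pass
    return analyzed[fn]
```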

### AxisInfo

![figure 4](https://user-images.githubusercontent.com/2306281/234665183-790a11ac-0ba1-47e1-98b1-e356220405a3.png)

The AxisInfo analysis operates differently from both membar and
allocation, as it traverses the call graph in topological order. This is
necessary because function arguments may contain axis information that
will be utilized by callee functions. As we do not implement
optimizations like function cloning, each function has a single body,
and the axis information for an argument is the conservative join of
the axis information passed by all calling functions.
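A sketch of the topological pass (the lattice helpers `info_of` and `join` are hypothetical parameters):

```python
def axis_info_pass(topo_order, edges, info_of, join):
    arg_info = {}
    for fn in topo_order:                 # callers before callees
        for call in edges[fn]:
            for param, arg in zip(call.callee.params, call.args):
                prev = arg_info.get(param)
                cur = info_of(arg)        # e.g. divisibility, contiguity
                # conservative: join info from every call site
                arg_info[param] = cur if prev is None else join(prev, cur)
    return arg_info
```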

---------

Co-authored-by: Philippe Tillet <phil@openai.com>
2023-04-28 14:59:04 -07:00
Keren Zhou
e326ff74d1 [TEST] Fix test cache (#1588)
To avoid puzzling segmentation-fault problems caused by multiprocessing,
this PR:

- Uses "spawn" instead of "fork" (see the sketch after this list).
- Defines the `instance_descriptor` namedtuple globally.
- Makes the `kernel_sub` JITFunction defined by the child process only.
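The start-method switch itself is one line; "spawn" starts each child in a fresh interpreter instead of a forked copy of the parent's (possibly CUDA-initialized) state:

```python
import multiprocessing as mp

if __name__ == "__main__":
    mp.set_start_method("spawn", force=True)  # instead of the "fork" default
    with mp.Pool(2) as pool:
        pool.map(print, ["child-1", "child-2"])
```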
2023-04-28 07:39:06 -07:00
Philippe Tillet
8f47bdcc92 [OPTIMIZER] Added kWidth attribute to DotOperandEncoding (#1584)
This is a prerequisite for efficient mixed-precision matmul.
2023-04-26 23:03:18 -07:00
Keren Zhou
8f7ec23401 [FRONTEND] Refine arithmetic checks and corresponding tests for extern_elementwise (#1577)
The current main would fail on `math.scalbn` because we implicitly cast
the first argument from `int32` to `float32`, while the function only
accepts `int32` as the first argument and `float32` as the second
argument.

So we update the type-matching logic as follows:

1. Check whether there is a type tuple that exactly matches the types of
the input arguments.
2. If yes, skip the arithmetic check.
3. If not, do the arithmetic check to implicitly cast types among
arguments.
4. If we still don't find a corresponding function that accepts the cast
types, throw an error.
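A minimal sketch of steps 1-4 (the signature table and cast rule are hypothetical stand-ins):

```python
SIGNATURES = {"scalbn": [("fp32", "int32"), ("fp64", "int32")]}

def can_cast(src: str, dst: str) -> bool:       # hypothetical cast rule
    return (src, dst) in {("int32", "fp32"), ("fp32", "fp64")}

def resolve(name, arg_types):
    sigs = SIGNATURES[name]
    if tuple(arg_types) in sigs:                # 1-2: exact match, no cast
        return tuple(arg_types)
    for sig in sigs:                            # 3: try implicit casts
        if all(a == s or can_cast(a, s) for a, s in zip(arg_types, sig)):
            return sig
    raise TypeError(f"no overload of {name} accepts {arg_types}")  # 4
```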

---------

Co-authored-by: Philippe Tillet <phil@openai.com>
2023-04-25 14:25:45 -07:00
Philippe Tillet
d9020179ee [FRONTEND] libdevice path no longer part of the runtime driver (#1580) 2023-04-25 13:44:08 -07:00
Zahi Moudallal
4963d67cd3 [FRONTEND] Use ttgir module num-warps instead of default value (#1576)
Use the ttgir num-warps attribute instead of the default value.
2023-04-25 08:22:49 -07:00
Natalia Gimelshein
d5969b81fe [FRONTEND] Test pow with mixed dtypes (#1575)
Also reverts #1541 that breaks this test.
2023-04-24 21:38:40 -04:00
Philippe Tillet
ec242430d1 [THIRD_PARTY] bumped ptxas version to 12.1.105 (#1574) 2023-04-24 16:49:31 -07:00
Himanshu Pathak
6d226431b1 [FRONTEND] do not run AccelerateMatmul on pre-Volta GPUs (#1505)
Related to #1271. I am currently working on adding support for
pre-Volta GPUs in Triton.

---------

Co-authored-by: Himanshu Pathak <himanshu@mtatva.com>
Co-authored-by: Philippe Tillet <phil@openai.com>
2023-04-24 15:53:02 -07:00
Philippe Tillet
a359b62ef3 [RUNTIME] Lazy driver initialization (#1571) 2023-04-24 15:16:09 -07:00
Ian O'Connell
cd096afa58 [FRONTEND] don't hold a file lock (#1569)
We have occasionally had complaints/issues where a zombie Python process
is holding this lock. We don't need it, since renames are atomic on
POSIX, so refactor this to make temp files unique and then use replace
(https://docs.python.org/3/library/os.html#os.replace).
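A minimal sketch of the lock-free pattern (the helper name is hypothetical):

```python
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    # The temp file lives in the destination directory, so the final
    # os.replace is a same-filesystem rename: atomic on POSIX, no lock
    # file needed, and concurrent writers race harmlessly.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)
        raise
```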
2023-04-24 12:50:24 -07:00
Michaël Benesty
7d2a4d95c2 [DOCS] fixed num warps / stages in matmul (#1561) 2023-04-21 12:57:26 -07:00
peterbell10
c71bf73f24 [BUILD] Use a persistent directory for cmake (#1548)
Fixes #1545

`build_temp` is a temporary directory which `distutils` used to keep in
the `./build` directory, but when `pyproject.toml` is present `pip` now
puts it in `/tmp` and removes it at the end of the build.

Instead, this creates a new permanent directory like
`python/build/cmake.linux_x86_64-cpython-3.8` (the old name but with
cmake instead of temp).

While I was looking at the verbose pip output, I also noticed a bunch of
warnings like
```
Python recognizes 'triton/runtime.backends' as an importable package,
but it is not listed in the `packages` configuration of setuptools.

'triton/runtime.backends' has been automatically added to the distribution only
because it may contain data files, but this behavior is likely to change
in future versions of setuptools (and therefore is considered deprecated).
```

So I've also added these to the packages list.

---------

Co-authored-by: Keren Zhou <kerenzhou@openai.com>
2023-04-20 16:38:44 -07:00
cctry
3e213dccb1 [FRONTEND] Make lru_cache compatible for Python 3.7 or older (#1552)
Change the usage of the LRU cache decorator from @functools.lru_cache to
@functools.lru_cache().
The former raises TypeError('Expected maxsize to be an integer or None')
on Python 3.7 or older.
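For reference, the difference is just the parentheses; the bare-decorator form of `lru_cache` only appeared in Python 3.8:

```python
import functools

@functools.lru_cache()  # parenthesized: works on 3.7 and older (and newer)
def compiled(name: str) -> str:
    return name.upper()

# @functools.lru_cache  # bare form: TypeError before Python 3.8
```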
2023-04-20 16:14:32 -07:00
Keren Zhou
fef8150b65 [FRONTEND] Remove debug print in code_gen (#1550) 2023-04-19 17:13:01 -07:00
Da Yan
b42e3d06d4 [FRONTEND] fix type checking in extern_elementwise (#1541)
Some math ops accept inputs of different types (e.g., tl.math.jn).
We don't want to cast the scalar types of input operands of those math
ops.
2023-04-18 16:59:21 -07:00
Daniil Fukalov
a90a2d864f [BUILD] Add ability to build with clang+lld. (#1544)
This reduces build time with an assertions-enabled LLVM and dramatically
speeds up Triton's build with a "debug" LLVM.

Co-authored-by: Philippe Tillet <phil@openai.com>
2023-04-18 21:20:12 +00:00
Natalia Gimelshein
7d1a95b046 [TESTS] Added test for avg_pool_bwd kernel (#1540)
This kernel was briefly broken on main; this test prevents future
regressions.

---------

Co-authored-by: Keren Zhou <kerenzhou@openai.com>
2023-04-17 21:20:34 -07:00
peterbell10
a3c3e5a3a1 [TESTS][OPTIMIZER] enable tests for argmin/max and fix some bugs (#1537)
`argmin`/`argmax` is currently only tested in 1D; enabling the tests for
2D revealed a few bugs.
2023-04-17 18:47:31 -07:00
Sharad Vikram
cf26e05a8f [FRONTEND] remove debug print (#1538) 2023-04-17 15:17:19 -07:00
Philippe Tillet
608ec061c1 [TESTING] Added more tests for annotations and autotuner (#1533)
Essentially identical to #538, but it fails formatting tests and I don't
want to ping the author on a weekend.
2023-04-15 19:44:08 -07:00
Philippe Tillet
df6c2babbd [FRONTEND] Now using strings for annotations (#1529)
Works with `__future__` annotations and also avoids having to import
torch just for the sake of type annotations.
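A sketch of why strings help (the torch annotation is illustrative): with postponed evaluation, annotations are never executed at definition time, so the modules they name need not be importable.

```python
from __future__ import annotations   # every annotation becomes a string

def kernel(x: torch.Tensor) -> None:  # "torch.Tensor" is not evaluated,
    ...                               # so no `import torch` is required
```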
2023-04-15 15:32:22 -07:00
Philippe Tillet
f367647b38 [FRONTEND] Added tl.extra.cuda.smid (#1532) 2023-04-15 14:42:59 -07:00