# Adding a new accelerator to tinygrad
It's pretty easy to add a new accelerator to tinygrad. All you need to do is implement a total of 20 (optionally 21) low-level ops. tinygrad then takes care of the rest, handling derivatives and syntactic sugar.
## llops
These are the ops that you must implement for your accelerator of choice.
```
Buffer                                             # class of memory on this device
unary_op   (NOOP, EXP2, LOG2, CAST, SIN, SQRT)     # A -> A
reduce_op  (SUM, MAX)                              # A -> B (smaller size, B has 1 in shape)
binary_op  (ADD, SUB, MUL, DIV, CMPEQ, MAX)        # A + A -> A (all the same size)
load_op    (EMPTY, CONST, FROM, CONTIGUOUS, CUSTOM) # -> A (initialize data on device)
ternary_op (WHERE)                                 # A, A, A -> A
```
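To make the shape of these ops concrete, here is a minimal sketch of what a CPU backend for a few of them could look like. This uses numpy as the stand-in device, and the `CPUBuffer` class and function signatures are illustrative assumptions, not tinygrad's actual interface.

```python
import numpy as np

# Hypothetical numpy-backed accelerator; names are illustrative only.
class CPUBuffer:
    def __init__(self, data):
        self.data = np.asarray(data, dtype=np.float32)

def unary_op(op, a):
    # A -> A: elementwise, output has the same shape as the input
    fxn = {"NOOP": lambda x: x, "EXP2": np.exp2, "LOG2": np.log2,
           "SIN": np.sin, "SQRT": np.sqrt}[op]
    return CPUBuffer(fxn(a.data))

def binary_op(op, a, b):
    # A + A -> A: both inputs are already the same size (broadcast is done
    # earlier by expand), so no broadcasting logic is needed here
    fxn = {"ADD": np.add, "SUB": np.subtract, "MUL": np.multiply,
           "DIV": np.divide, "MAX": np.maximum,
           "CMPEQ": lambda x, y: (x == y).astype(np.float32)}[op]
    return CPUBuffer(fxn(a.data, b.data))

def reduce_op(op, a, new_shape):
    # A -> B: reduce along every axis where new_shape has a 1
    axes = tuple(i for i, (s, n) in enumerate(zip(a.data.shape, new_shape)) if s != n)
    fxn = {"SUM": np.sum, "MAX": np.max}[op]
    return CPUBuffer(fxn(a.data, axis=axes, keepdims=True))
```

Note the contract in the comments above: `binary_op` never broadcasts, and `reduce_op` keeps reduced dimensions as 1 in the output shape.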
## mlops
These are the mid-level ops that handle the derivatives.
```
Relu, Log, Exp, Sin                         # unary ops
Sum, Max                                    # reduce ops (with axis argument)
Maximum, Add, Sub, Mul, Pow, Div, Equal     # binary ops (no broadcasting, use expand)
Expand, Reshape, Permute, Pad, Shrink, Flip # movement ops
Where                                       # ternary ops
```
These are implemented in [mlops.py](/tinygrad/mlops.py).
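The key idea of an mlop is pairing a forward pass with its derivative. Here is a hedged sketch of what that looks like for Relu, using plain numpy arrays; the class shape follows the usual autograd pattern and is not tinygrad's exact `Function` API.

```python
import numpy as np

# Hypothetical mlop: forward computes the result, backward computes the
# gradient of the loss w.r.t. the input given the incoming gradient.
class Relu:
    def forward(self, x):
        self.saved = x                      # keep the input for backward
        return np.maximum(x, 0.0)

    def backward(self, grad_output):
        # d/dx relu(x) = 1 where x > 0, else 0
        return grad_output * (self.saved > 0.0)
```

Because every mlop carries its own backward rule like this, a new accelerator only supplies the llops; autograd comes for free.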
## hlops
These are the syntactic sugar. They are built on top of the mlops and support most of the things you would expect from a tensor library.
These are implemented in [tensor.py](/tinygrad/tensor.py).
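To illustrate why hlops need no new device code, here is a sketch of softmax composed purely from simpler ops (exp, max, sum, sub, div), with plain numpy standing in for tinygrad tensors; the function is an illustrative assumption, not tinygrad's implementation.

```python
import numpy as np

# Hypothetical hlop: softmax is just a composition of elementwise and
# reduce ops, so the accelerator never needs a dedicated softmax kernel.
def softmax(x):
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))  # shift for stability
    return e / np.sum(e, axis=-1, keepdims=True)
```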