* get basic ptx impl working
* test ops passing
* mypy
* dont hardcode target
* more walrus
* ptx in ci
* bool cast and f16 load/store
* weird numpy bug and f16 cast tolerance
* cast half to bool
* fix 1 byte load/store
* disable half for ptx
* fix args and enable xid
* fix non-ptr args
* allow bitcast
* mypy
* cleanups
* midcast use allclose
* add xor
* Revert "disable half for ptx"
This reverts commit 73391c05fd.
* enable float16
* mypy
* no more crashing in ci
* fix ci
* minor cleanups
* use new fn for ptx compiler
* no diskcache in ptx compile
* use rn instead of rz
* save some lines
* new DEFINE_GLOBAL syntax
* line length
* new llvm
* cmpeq
* minor fix
* cast in mulacc
* update test_recursive_add to check line count
* mypy
* remove llvmir.py
* fix bool const
* wip
* cleanups
* working
* llvm in separate pr
* cleanups
* more cleanups
* fix ci
* use in_features directly in nn.Linear.__init__ bound check (#3050)
* use in_features directly in nn.Linear.__init__ bound check
get rid of the unnecessary check of isinstance int
* that is always int
* long lines
* Device._buffers -> Device._devices (#3052)
backend devices used to be called buffers
* make Embedding device aware for multigpu (#3051)
* make Embedding device aware for multigpu
* split line instead of ignore because that's cheating
* add test incomplete
* add test complete
* remove comment
* fix white space
* remove nn.Embedding
* remove unused reciprocal (#3053)
* remove unused reciprocal
* comment
* unit tests for Device.canonicalize (#3055)
* add multigpu test for RMSNorm (#3056)
* need all gather
* add two multigpu test scenarios for RMSNorm
* No extra vars call (#3054)
* remove unused reciprocal
* comment
* remove unneeded call to vars
* free speedup
* explicit lazybuffer caching (#3058)
* hotfix: remove useless slow assert from ShapeTracker
* Speed tweaks (#3059)
* base doesn't have to be a function
* no double fetch
* pop, don't check
* make the gc happy
* avoid hasattr
* cache canonicalize
* remove assert, faster base
* don't redefine that every time
* fix gpt2 attention with start_pos = 0 (#3061)
* fix gpt2 attention with start_pos size 1
test cases taken from ll_transformer branch
* fix interpreted
* Tensor.cat with 0 shape tensors (#3062)
* Tensor.cat with 0 shape tensors
supports 0 in the cat axis (for a subset of inputs) and 0 in a non-cat axis (where all must be 0)
* no shp
* test scaled dot product attention (#3063)
* add test
* add initial test for scaled dot product attention
* test pass for scaled dot product attention
* cached size (#3060)
* cached size
* simplify simplify
* 0 doesn't have base
* fix test
* cleaner cache
* hmm, metal is flaky on this...might be real(ish) but useless as test
* short circuit reshape/expand properly
* better reshape bypass
* hotfix: use is for enum compare
* hotfix: use is for enum compare, a few more
* speedtweaks3: apply shouldn't use the tensor constructor (#3065)
* speedtweaks3: apply shouldn't use the tensor constructor
* replace 0 size with CONST, not 0 in shape
* update gh actions (#3033)
* update checkout actions
* update upload artifact
* update setup python
---------
Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
* unbind view or shapetracker also returns var_val (#3067)
* unbind view or shapetracker also returns var_val
4% faster for llama compile time
* one line less
* unbound_views
* hotfix: examples/transformer.py
* jit autorealizes output (#3069)
* early gate the graph (#3070)
* simpler idxs_to_idx (#3071)
* filter_strides -> canonicalize_strides (#3072)
* fix onehot and jit in examples/transformer (#3073)
trained to 0.999 in < 6 seconds on M1 Max consistently
* better test demonstration (#3077)
* a better test demonstration
* fix white space
* Tensor.expand resolves the new_shape before shortcut return (#3078)
similar to how reshape is done. also updated shrink shortcut criteria to read similar to pad
* minor cleanups of lazy.py (#3080)
* wmma: clean up device specific tensor core code (#3081)
* mem_estimate is always int, not symbolic (#3083)
* mem_estimate is always int, not symbolic
op_estimate can be symbolic, but mem_estimate is always int, thus we don't need to sym_infer it.
fixed some long lines too. update_stats is a very big function
* operator does not need underscores
* cat works (#3086)
* hotfix disable flaky mac runner wino cifar (#3087)
* remove the third merging state in view._merge_dims (#3085)
no logic depends on state == 0 or state == 2
* minor cleanup of View.reshape (#3088)
* minor cleanup of View.reshape
removed some redundant logic
* new_strides
* revert that
* use BEAM=2 instead of BEAM=4 in cuda ci gpt2 (#3089)
BEAM=2 is faster and takes less search time. investigating why BEAM2+BEAM4 is slower than BEAM2 alone
* use device from LinearizerOptions in kernel search (#3090)
* use device from LinearizerOptions in kernel search
removed all Device.DEFAULT in search.py
* pass device string for parallel pickle
* device for interpreted backends in LinearizerOptions
* update jit type annotation post lazy rewrite (#3091)
* add multigpu support for llama attention (#3064)
* add llama attention test for multigpu
* test fails
* kv cache trying to shrink on sharded axis
* mask None works for scale dot product
* kv cache seems to be working but scale dot product breaks
* scaled dot product works, but the last linear layer failed
* running into the reshape case where it could be wrong for multigpu
* making sure it was the reshape
* adding contiguous doesn't solve
* need to shard more properly
* remove reshape test
* minor adjustment to scale dot product attention test
* weights are sharded wrong
* continue fix new weight sharding
* clean up
* fix attention when start_pos is 0
* remove print
* add TODOs for the best multigpu interface
* bugfix do not reset shapetracker of 0 size lazybuffer (#3096)
it might be coming from an expand, and resetting results in an incorrect stride. caught by interpreted backend
* One hot in tensor.py (#3093)
* onehot in Tensor.py
* one_hot tests
* works for all shapes, not just 1
* pylint
* not a static method
* moved around, num_classes mandatory
* pylint
* pylint
* space & moving
* formatting
* moved tests
* fix broadcasted logic if there's 0 in shapes (#3097)
* fix broadcasted logic if there's 0 in shapes
should always expand into 0, not the other way around. fixed matmul with 0 in input shapes.
for forwards for now though, backward is more involved and would need to change 0 size shortcuts
* fix tests
* replace with tensor op (#3099)
* fix gpt2 with empty prompt (#3100)
logits would be empty so need to replace that with ones before sampling, also cannot reshape with -1 when there's 0 in other axes
* Revert "fix gpt2 with empty prompt" (#3101)
* fix gpt2 with empty prompt take 2 (#3102)
logits would be empty so need to replace that with ones before sampling, also cannot reshape with -1 when there's 0 in other axes
* wmma: enable METAL half tensor cores and clean up cstyle (#3095)
* wmma: enable METAL half tensor cores and clean up cstyle
* revert simple_matmul rand changes and break line in tensor
* added metal fp16->fp32 tensor core
* add half @ half to mac benchmark (#3103)
* flag to profile mixtral - 1.7 tok/s now (#3104)
* update NumNode.__hash__ to be hash(self.b) (#3105)
with this, `a:=NumNode(x) == b` implies `hash(a) == hash(b)`
* catch runtime error in search._time_program (#3106)
return inf if search encountered runtime errors.
* no exceptions in __del__ when module creation is failed in hip/cuda (#3107)
* failed test case due to cast resets shapetracker (#3109)
cast implicitly resets shapetracker and makes it contiguous (for disk tensor), which fails for Interpreted backend if inputs contain non-contiguous st.
* cleanup ops_disk type annotation and redundant str cast (#3110)
* minor cleanup of test_disk_tensor (#3112)
* add Tensor.var (#3114)
also updated MeanVarianceNormalization and made test_ops test tensors of var and std smaller
* move sample inside jit for beautiful_mnist (#3115)
also removed .realize() for jit functions since jit does it automatically now. a little more beautiful
* minor cleanups of onnx_ops (#3116)
* fix conversation: llama generates token not prob now (#3120)
* add device options for tests in multigpu (#3121)
* make DType a dataclass (#3111)
* remove np from DType
* convert to dataclass
* remove dunder hash, eq, ne overrides from ImageDType
* is dataclass required for PtrDType?
* fix GPU tests
* reduce lines
* revert changes to np
* minor cleanup
* hotfix: ptrdtype compare was broken
* move fromcpu out of lazy.py (#3122)
* move fromcpu out of lazy.py
* fix abstractions2
* remove numpy from device (#3123)
* remove numpy from device
* fix tests
* np item
* cleanups
* simplify with as_buffer
* no toCPU
* tinygradic
* cast to scalar
* remove numpy from ops_torch (#3124)
updated mnist test to cast label to int8 and avoid hacking cast issue of torch uint8
* Fix backward fn for `<` and `==` (#3037)
* fix no grad fn for < and ==
* remove 2 line breaks
* Remove deprecated autograd variable
---------
Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
* separate try except blocks in onnx2torch in model benchmark (#3126)
exceptions can be raised from either model conversion or an individual backend failure. openpilot on torch mps works, but does not work with torch cpu.
separate the exception blocks so that the benchmark can include torch mps for openpilot.
* update env_vars.md (#3127)
mostly removed deprecated ones. not clear how to maintain this especially for extra/examples
* update test_ptr_ne (#3130)
* remove np from metal graph (#3129)
* dtype fmt (#3132)
* dtype fmt
* three ways to access
* fix off-by-one error in st_equal (#3131)
* fix off by one error
* whitespace
* no numpy (#3134)
* fast resnet eval (#3135)
* fast resnet eval
* fix HIP multidevice graph
* neater expression for devices
* lines
* add decorator test
* remove LLVMOPT
* move ptx
* Update ops_cuda.py
---------
Co-authored-by: Christopher Milan <chrismilan@ucla.edu>
Co-authored-by: chenyu <chenyu@fastmail.com>
Co-authored-by: Yixiang Gao <yixiangg310573@gmail.com>
Co-authored-by: jxdv <virgoj@protonmail.com>
Co-authored-by: Francis Lam <flam@alum.mit.edu>
Co-authored-by: SnakeOnex <sheeproman@gmail.com>
Co-authored-by: nimlgen <138685161+nimlgen@users.noreply.github.com>
Co-authored-by: Jyotirmaya Mahanta <jyotirmaya.mahanta@gmail.com>
Co-authored-by: Guy Leroy <g.m.leroy@outlook.com>
Co-authored-by: Paul Gustafson <paul.gustafson@theambrusgroup.com>
tinygrad: For something between PyTorch and karpathy/micrograd. Maintained by tiny corp.
Homepage | Documentation | Examples | Showcase | Discord
This may not be the best deep learning framework, but it is a deep learning framework.
Due to its extreme simplicity, it aims to be the easiest framework to add new accelerators to, with support for both inference and training. If XLA is CISC, tinygrad is RISC.
tinygrad is still alpha software, but we raised some money to make it good. Someday, we will tape out chips.
Features
LLaMA and Stable Diffusion
tinygrad can run LLaMA and Stable Diffusion!
Laziness
Try a matmul. See how, despite the style, it is fused into one kernel with the power of laziness.
DEBUG=3 python3 -c "from tinygrad import Tensor;
N = 1024; a, b = Tensor.rand(N, N), Tensor.rand(N, N);
c = (a.reshape(N, 1, N) * b.T.reshape(1, N, N)).sum(axis=2);
print((c.numpy() - (a.numpy() @ b.numpy())).mean())"
And we can change DEBUG to 4 to see the generated code.
Neural networks
As it turns out, 90% of what you need for neural networks is a decent autograd/tensor library. Throw in an optimizer, a data loader, and some compute, and you have all you need.
from tinygrad import Tensor, nn

class LinearNet:
  def __init__(self):
    self.l1 = Tensor.kaiming_uniform(784, 128)
    self.l2 = Tensor.kaiming_uniform(128, 10)
  def __call__(self, x:Tensor) -> Tensor:
    return x.flatten(1).dot(self.l1).relu().dot(self.l2)

model = LinearNet()
optim = nn.optim.Adam([model.l1, model.l2], lr=0.001)

x, y = Tensor.rand(4, 1, 28, 28), Tensor([2,4,3,7])  # replace with real mnist dataloader

for i in range(10):
  optim.zero_grad()
  loss = model(x).sparse_categorical_crossentropy(y).backward()
  optim.step()
  print(i, loss.item())
See examples/beautiful_mnist.py for the full version that gets 98% in ~5 seconds
Accelerators
tinygrad already supports numerous accelerators, including:
And it is easy to add more! Your accelerator of choice only needs to support a total of ~25 low level ops. More information can be found in the documentation for adding new accelerators.
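For instance, you can check which backend tinygrad selected and pin a tensor to a specific one. A minimal sketch, assuming Device is exported from the top-level package in your version and that the CLANG (plain C) backend is available on your machine:

from tinygrad import Tensor, Device

print(Device.DEFAULT)  # the backend tinygrad picked for this machine, e.g. METAL or CUDA
t = Tensor([1.0, 2.0, 3.0], device="CLANG")  # pin this tensor to the CLANG backend
print((t * 2).numpy())  # the multiply kernel is compiled and executed by that backend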
Installation
The current recommended way to install tinygrad is from source.
From source
git clone https://github.com/tinygrad/tinygrad.git
cd tinygrad
python3 -m pip install -e .
Direct (master)
python3 -m pip install git+https://github.com/tinygrad/tinygrad.git
Documentation
Documentation along with a quick start guide can be found in the docs/ directory.
Quick example comparing to PyTorch
from tinygrad import Tensor
x = Tensor.eye(3, requires_grad=True)
y = Tensor([[2.0,0,-2.0]], requires_grad=True)
z = y.matmul(x).sum()
z.backward()
print(x.grad.numpy()) # dz/dx
print(y.grad.numpy()) # dz/dy
The same thing but in PyTorch:
import torch
x = torch.eye(3, requires_grad=True)
y = torch.tensor([[2.0,0,-2.0]], requires_grad=True)
z = y.matmul(x).sum()
z.backward()
print(x.grad.numpy()) # dz/dx
print(y.grad.numpy()) # dz/dy
Contributing
There has been a lot of interest in tinygrad lately. Following these guidelines will help your PR get accepted.
We'll start with what will get your PR closed with a pointer to this section:
- No code golf! While low line count is a guiding light of this project, anything that remotely looks like code golf will be closed. The true goal is reducing complexity and increasing readability, and deleting \ns does nothing to help with that.
- All docs and whitespace changes will be closed unless you are a well-known contributor. The people writing the docs should be those who know the codebase the absolute best. People who have not demonstrated that shouldn't be messing with docs. Whitespace changes are both useless and carry a risk of introducing bugs.
- Anything you claim is a "speedup" must be benchmarked. In general, the goal is simplicity, so even if your PR makes things marginally faster, you have to consider the tradeoff with maintainability and readability.
- In general, the code outside the core tinygrad/ folder is not well tested, so unless the current code there is broken, you shouldn't be changing it.
Now, what we want:
- Bug fixes (with a regression test) are great! This library isn't 1.0 yet, so if you stumble upon a bug, fix it, write a test, and submit a PR, this is valuable work.
- Solving bounties! tinygrad offers cash bounties for certain improvements to the library. All new code should be high quality and well tested.
- Features. However, if you are adding a feature, consider the line tradeoff. If it's 3 lines, there's less of a bar of usefulness it has to meet over something that's 30 or 300 lines. All features must have regression tests. In general with no other constraints, your feature's API should match torch or numpy.
- Refactors that are clear wins. In general, if your refactor isn't a clear win it will be closed. But some refactors are amazing! Think about readability in a deep core sense. A whitespace change or moving a few functions around is useless, but if you realize that two 100 line functions can actually use the same 110 line function with arguments while also improving readability, this is a big win.
- Tests/fuzzers. If you can add tests that are non-brittle, they are welcome. We have some fuzzers in here too, and there's a plethora of bugs that can be found with them and by improving them. Finding bugs, even writing broken tests (that should pass) with @unittest.expectedFailure is great; this is how we make progress (see the sketch after this list).
- Dead code removal from the core tinygrad/ folder. We don't care about the code in extra/, but removing dead code from the core library is great. Less for new people to read and be confused by.
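To illustrate the @unittest.expectedFailure point above, here is a minimal sketch of a known-bug test; the test class, method name, and assertion are hypothetical placeholders, not taken from the tinygrad test suite:

import unittest
from tinygrad import Tensor

class TestKnownBug(unittest.TestCase):
  @unittest.expectedFailure  # documents behavior we want; expected to fail until the bug is fixed
  def test_hypothetical_bug(self):
    # deliberately wrong expectation standing in for a real broken-but-desired behavior
    self.assertEqual((Tensor([2]) + 2).item(), 5)

if __name__ == "__main__":
  unittest.main()

Such a test is reported as an expected failure today and becomes an unexpected success the moment the bug is fixed, which flags the decorator for removal.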
Running tests
You should install the pre-commit hooks with pre-commit install. This will run the linter, mypy, and a subset of the tests on every commit.
For more examples on how to run the full test suite please refer to the CI workflow.
Some examples of running tests locally:
python3 -m pip install -e '.[testing]' # install extra deps for testing
python3 test/test_ops.py # just the ops tests
python3 -m pytest test/ # whole test suite
