Try a matmul. See how, despite the style, it is fused into one kernel with the power of laziness.
```sh
DEBUG=3 OPTLOCAL=1 GPU=1 python3 -c "from tinygrad.tensor import Tensor;
N = 1024; a, b = Tensor.randn(N, N), Tensor.randn(N, N);
c = (a.reshape(N, 1, N) * b.permute(1,0).reshape(1, N, N)).sum(axis=2);
print((c.numpy() - (a.numpy() @ b.numpy())).mean())"
```
Change to `DEBUG=4` to see the generated code.
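
For reference, here is the same computation as a standalone script rather than a shell one-liner. It is a sketch that uses only the `Tensor` methods from the command above; the comments mark where the lazy expression is actually realized into the single fused kernel.

```python
from tinygrad.tensor import Tensor

N = 1024
a, b = Tensor.randn(N, N), Tensor.randn(N, N)

# an explicit matmul: broadcast an elementwise multiply, then reduce over the shared axis
c = (a.reshape(N, 1, N) * b.permute(1, 0).reshape(1, N, N)).sum(axis=2)

# up to here nothing has been computed: c is a lazy expression.
# .numpy() realizes it, and the multiply and the sum are fused into one kernel.
print((c.numpy() - (a.numpy() @ b.numpy())).mean())
```

Running it with the same `DEBUG=3 OPTLOCAL=1 GPU=1` environment as above will show the kernels as they launch.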
## Neural networks?
It turns out a decent autograd tensor library is 90% of what you need for neural networks. Add an optimizer from `tinygrad.nn.optim` (SGD, RMSprop, and Adam are implemented), write some boilerplate minibatching code, and you have all you need.
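
As a concrete illustration, here is a minimal sketch of that recipe: a two-layer network built from raw `Tensor` weights, `SGD` from `tinygrad.nn.optim`, and a single training step on a fake minibatch. The network shape, the random data, and the `mul(y).mean()` NLL-style loss are illustrative choices, not something prescribed by the library.

```python
import numpy as np
from tinygrad.tensor import Tensor
from tinygrad.nn.optim import SGD

# a two-layer net: the weights are plain Tensors, so autograd tracks them
class TinyNet:
  def __init__(self):
    self.l1 = Tensor.uniform(784, 128)
    self.l2 = Tensor.uniform(128, 10)
  def forward(self, x):
    return x.dot(self.l1).relu().dot(self.l2).logsoftmax()

model = TinyNet()
opt = SGD([model.l1, model.l2], lr=0.001)

# one fake minibatch: 64 random "images", with labels encoded as -1 in a one-hot row
# so that mul(y).mean() below acts like an NLL loss on the logsoftmax output
x = Tensor(np.random.randn(64, 784).astype(np.float32))
labels = np.random.randint(0, 10, size=64)
y = np.zeros((64, 10), dtype=np.float32)
y[np.arange(64), labels] = -1.0
y = Tensor(y)

# the boilerplate training step: forward, loss, zero_grad, backward, step
out = model.forward(x)
loss = out.mul(y).mean()
opt.zero_grad()
loss.backward()
opt.step()
print(loss.numpy())
```

Swap the fake minibatch for real data and wrap the training step in a loop, and that is the whole training pipeline.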