Update index.md (#4315)
commit 40264c7d1e (parent 24a6342950), committed by GitHub
```diff
@@ -30,7 +30,7 @@ If you are migrating from PyTorch, welcome. Most of the API is the same. We hope
 
 ### tinygrad doesn't have nn.Module
 
-There's nothing special about a "Module" class in tinygrad, it's just a normal class. [`nn.state.get_parameters`](nn/#tinygrad.nn.state.get_parameters) can be used to recursively search normal claases for valid tensors. Instead of the `forward` method in PyTorch, tinygrad just uses `__call__`
+There's nothing special about a "Module" class in tinygrad, it's just a normal class. [`nn.state.get_parameters`](nn/#tinygrad.nn.state.get_parameters) can be used to recursively search normal classes for valid tensors. Instead of the `forward` method in PyTorch, tinygrad just uses `__call__`
 
 ### tinygrad is functional
 
```
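The pattern the corrected paragraph describes looks like this in practice. A minimal sketch, assuming current tinygrad APIs; the class name and layer sizes are illustrative, not from the commit:

```python
from tinygrad import Tensor, nn
from tinygrad.nn.state import get_parameters

# A plain Python class: no base class, no forward(), just __call__.
class TinyNet:
  def __init__(self):
    self.l1 = nn.Linear(784, 128)
    self.l2 = nn.Linear(128, 10)

  def __call__(self, x: Tensor) -> Tensor:
    return self.l2(self.l1(x).relu())

net = TinyNet()
# get_parameters recursively walks the object's attributes and
# collects every Tensor it finds (here: 2 weights + 2 biases).
print(len(get_parameters(net)))  # -> 4
```

Calling `net(x)` works because Python routes the call to `__call__`, which fills the role PyTorch's `forward` plays.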
```diff
@@ -42,4 +42,4 @@ When you do `a+b` in tinygrad, nothing happens. It's not until you [`realize`](t
 
 ### tinygrad requires @TinyJIT to be fast
 
-PyTorch spends a lot of development effort to make dispatch very fast. tinygrad doesn't. We have a simple decorator that will replay the kernels used in the decorated function.
+PyTorch spends a lot of development effort to make dispatch very fast. tinygrad doesn't. We have a simple decorator that will replay the kernels used in the decorated function.
```
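The context line of the second hunk refers to tinygrad's laziness. A minimal sketch of that behavior, assuming current tinygrad semantics:

```python
from tinygrad import Tensor

a, b = Tensor([1.0, 2.0]), Tensor([3.0, 4.0])
c = a + b        # nothing runs yet; c is a lazy node recording the addition
c = c.realize()  # now a kernel is scheduled and executed on the device
print(c.numpy()) # [4. 6.]
```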
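For the `@TinyJIT` section, a hedged sketch of how the decorator is typically applied; note the class is spelled `TinyJit` in the codebase, and the `step` function below is illustrative:

```python
from tinygrad import Tensor, TinyJit

@TinyJit
def step(x: Tensor) -> Tensor:
  # The first couple of calls run normally while the JIT captures the
  # kernels; later calls replay the captured kernels directly, skipping
  # Python-side dispatch.
  return (x @ x).relu().realize()

for _ in range(5):
  out = step(Tensor.rand(8, 8))
```

Because the JIT replays captured kernels, inputs on later calls must keep the same shapes and dtypes they had during capture.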