Update index.md (#4315)

This commit is contained in:
Victor Ziliang Peng
2024-04-27 00:12:44 -07:00
committed by GitHub
parent 24a6342950
commit 40264c7d1e


@@ -30,7 +30,7 @@ If you are migrating from PyTorch, welcome. Most of the API is the same. We hope
 ### tinygrad doesn't have nn.Module
-There's nothing special about a "Module" class in tinygrad, it's just a normal class. [`nn.state.get_parameters`](nn/#tinygrad.nn.state.get_parameters) can be used to recursively search normal claases for valid tensors. Instead of the `forward` method in PyTorch, tinygrad just uses `__call__`
+There's nothing special about a "Module" class in tinygrad, it's just a normal class. [`nn.state.get_parameters`](nn/#tinygrad.nn.state.get_parameters) can be used to recursively search normal classes for valid tensors. Instead of the `forward` method in PyTorch, tinygrad just uses `__call__`
 ### tinygrad is functional
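The hunk above describes the pattern: a tinygrad "module" is an ordinary class whose parameters are found by recursive attribute search, and it is invoked via `__call__` rather than a `forward` method. The following is an illustrative sketch of that pattern in plain Python with no tinygrad dependency; the `Tensor` class and `get_parameters` function here are toy stand-ins mimicking the behavior of tinygrad's real `Tensor` and `nn.state.get_parameters`, not the actual implementations.

```python
# Toy stand-in for a tensor; tinygrad's real Tensor is far richer.
class Tensor:
    def __init__(self, data):
        self.data = data

def get_parameters(obj):
    # Recursively collect Tensor attributes from a normal object,
    # mimicking what nn.state.get_parameters does for tinygrad classes.
    params = []
    for v in vars(obj).values():
        if isinstance(v, Tensor):
            params.append(v)
        elif hasattr(v, "__dict__"):
            params.extend(get_parameters(v))
    return params

class Linear:
    # Just a normal class: no base class, no registration.
    def __init__(self):
        self.weight = Tensor([1.0, 2.0])
        self.bias = Tensor([0.0])
    def __call__(self, x):  # __call__ instead of PyTorch's forward
        return x

class Net:
    def __init__(self):
        self.l1 = Linear()
        self.l2 = Linear()
    def __call__(self, x):
        return self.l2(self.l1(x))

net = Net()
print(len(get_parameters(net)))  # 4: weight and bias found in each Linear
```

Because there is no `Module` base class, nothing needs to be registered; the recursive search over `vars(obj)` is what discovers the parameters.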
@@ -42,4 +42,4 @@ When you do `a+b` in tinygrad, nothing happens. It's not until you [`realize`](t
 ### tinygrad requires @TinyJIT to be fast
-PyTorch spends a lot of development effort to make dispatch very fast. tinygrad doesn't. We have a simple decorator that will replay the kernels used in the decorated function.
+PyTorch spends a lot of development effort to make dispatch very fast. tinygrad doesn't. We have a simple decorator that will replay the kernels used in the decorated function.
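The "replay the kernels" idea from the hunk above can be sketched as a toy decorator: capture the sequence of kernels on the first call, then replay that sequence directly on later calls, skipping Python-side dispatch. This is a hypothetical illustration of the concept only, not tinygrad's actual TinyJit implementation.

```python
def toy_jit(build_kernels):
    # Toy replay decorator: build_kernels is traced once, and the
    # captured "kernels" (plain callables here, compiled GPU kernels
    # in the real thing) are replayed on every subsequent call.
    captured = []
    def wrapper(x):
        if not captured:
            captured.extend(build_kernels())  # trace on first call
        for kernel in captured:               # replay thereafter
            x = kernel(x)
        return x
    return wrapper

@toy_jit
def build_kernels():
    # Imagine each lambda is a compiled kernel produced by dispatch.
    return [lambda x: x + 1, lambda x: x * 2]

print(build_kernels(3))  # (3 + 1) * 2 = 8
print(build_kernels(5))  # kernels replayed, not rebuilt: (5 + 1) * 2 = 12
```

The point of the design is that the slow path (tracing and building kernels) runs once, while every later call is a tight loop over already-built kernels, which is why the decorator matters for speed.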