From 40264c7d1e76053f21598b52d33c3e02e71d064e Mon Sep 17 00:00:00 2001
From: Victor Ziliang Peng
Date: Sat, 27 Apr 2024 00:12:44 -0700
Subject: [PATCH] Update index.md (#4315)

---
 docs/index.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/index.md b/docs/index.md
index ff6ab5ed6e..82023371f0 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -30,7 +30,7 @@ If you are migrating from PyTorch, welcome. Most of the API is the same. We hope
 
 ### tinygrad doesn't have nn.Module
 
-There's nothing special about a "Module" class in tinygrad, it's just a normal class. [`nn.state.get_parameters`](nn/#tinygrad.nn.state.get_parameters) can be used to recursively search normal claases for valid tensors. Instead of the `forward` method in PyTorch, tinygrad just uses `__call__`
+There's nothing special about a "Module" class in tinygrad, it's just a normal class. [`nn.state.get_parameters`](nn/#tinygrad.nn.state.get_parameters) can be used to recursively search normal classes for valid tensors. Instead of the `forward` method in PyTorch, tinygrad just uses `__call__`
 
 ### tinygrad is functional
 
@@ -42,4 +42,4 @@ When you do `a+b` in tinygrad, nothing happens. It's not until you [`realize`](t
 
 ### tinygrad requires @TinyJIT to be fast
 
-PyTorch spends a lot of development effort to make dispatch very fast. tinygrad doesn't. We have a simple decorator that will replay the kernels used in the decorated function.
\ No newline at end of file
+PyTorch spends a lot of development effort to make dispatch very fast. tinygrad doesn't. We have a simple decorator that will replay the kernels used in the decorated function.
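
The "tinygrad doesn't have nn.Module" section this patch touches describes plain Python classes, `nn.state.get_parameters`, and `__call__`. A minimal sketch of that pattern, assuming the standard tinygrad API (`Tensor`, `nn.Linear`, `nn.state.get_parameters`); the `TinyNet` class and its layer sizes are made up for illustration:

```python
from tinygrad import Tensor, nn

class TinyNet:
  # just a normal Python class, no Module base class to inherit from
  def __init__(self):
    self.l1 = nn.Linear(784, 128)
    self.l2 = nn.Linear(128, 10)

  # __call__ takes the place of PyTorch's forward
  def __call__(self, x: Tensor) -> Tensor:
    return self.l2(self.l1(x).relu())

net = TinyNet()
# get_parameters recursively searches the object's attributes for Tensors
params = nn.state.get_parameters(net)
print(len(params))  # 4: two weight tensors and two bias tensors
```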
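
The "tinygrad is functional" paragraph says `a+b` computes nothing until `realize` (or something else that forces evaluation) is called. A small sketch of that lazy behavior, using only `Tensor`, `realize`, and `numpy` from the public API:

```python
from tinygrad import Tensor

a = Tensor([1.0, 2.0, 3.0])
b = Tensor([4.0, 5.0, 6.0])
c = a + b        # no kernel runs here, c is just a node in a lazy graph
c = c.realize()  # the add kernel is scheduled and executed at this point
print(c.numpy()) # [5. 7. 9.]
```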
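
The decorator behind the "tinygrad requires @TinyJIT to be fast" heading is spelled `TinyJit` in the tinygrad codebase. A rough sketch of wrapping a step function with it; the `step` function and its shapes are illustrative, not taken from the patched docs:

```python
from tinygrad import Tensor, TinyJit

@TinyJit
def step(x: Tensor) -> Tensor:
  # realize inside the jitted function so its kernels get captured
  return (x * 2 + 1).realize()

# the first couple of calls trace and capture the kernels; later calls replay them
for _ in range(5):
  x = Tensor.randn(4, 4).realize()
  out = step(x)
print(out.numpy())
```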