mirror of
https://github.com/tinygrad/tinygrad.git
synced 2026-01-08 22:48:25 -05:00
docs: fix mkdoc warnings and link to tensor.md (#4760)
@@ -9,7 +9,7 @@ There is a good [bunch of tutorials](https://mesozoic-egg.github.io/tinygrad-not
## Frontend
-Everything in [Tensor](tensor.md) is syntactic sugar around [function.py](function.md), where the forwards and backwards passes are implemented for the different functions. There's about 25 of them, implemented using about 20 basic ops. Those basic ops go on to construct a graph of:
+Everything in [Tensor](tensor/index.md) is syntactic sugar around [function.py](function.md), where the forwards and backwards passes are implemented for the different functions. There's about 25 of them, implemented using about 20 basic ops. Those basic ops go on to construct a graph of:
::: tinygrad.lazy.LazyBuffer
    options:
@@ -16,7 +16,7 @@ We also have [developer docs](developer.md), and Di Zhu has created a [bunch of
## tinygrad Usage
-The main class you will interact with is [Tensor](tensor.md). It functions very similarly to PyTorch, but has a bit more of a functional style. tinygrad supports [many datatypes](dtypes.md). All operations in tinygrad are lazy, meaning they won't do anything until you realize.
+The main class you will interact with is [Tensor](tensor/index.md). It functions very similarly to PyTorch, but has a bit more of a functional style. tinygrad supports [many datatypes](dtypes.md). All operations in tinygrad are lazy, meaning they won't do anything until you realize.
* tinygrad has a built-in [neural network library](nn.md) with some classes, optimizers, and load/save state management.
* tinygrad has a JIT to make things fast. Decorate your pure function with `TinyJit`.
@@ -36,11 +36,11 @@ There's nothing special about a "Module" class in tinygrad, it's just a normal c
### tinygrad is functional
-In tinygrad, you can do [`x.conv2d(w, b)`](tensor.md/#tinygrad.Tensor.conv2d) or [`x.sparse_categorical_cross_entropy(y)`](tensor.md/#tinygrad.Tensor.sparse_categorical_crossentropy). We do also have a [`Conv2D`](nn.md/#tinygrad.nn.Conv2d) class like PyTorch if you want a place to keep the state, but all stateless operations don't have classes.
+In tinygrad, you can do [`x.conv2d(w, b)`](tensor/ops.md/#tinygrad.Tensor.conv2d) or [`x.sparse_categorical_cross_entropy(y)`](tensor/ops.md/#tinygrad.Tensor.sparse_categorical_crossentropy). We do also have a [`Conv2D`](nn.md/#tinygrad.nn.Conv2d) class like PyTorch if you want a place to keep the state, but all stateless operations don't have classes.
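The functional style described in this hunk can be illustrated with a minimal, self-contained sketch. This is plain Python, not tinygrad's actual classes; `Value`, `scale`, and the `Scale` module are hypothetical names chosen for the illustration:

```python
# Toy illustration of tinygrad's functional style: stateless ops are plain
# methods, and a thin "module" class exists only to hold parameters.

class Value:
    def __init__(self, x):
        self.x = x

    # stateless operation: takes all its inputs as arguments, holds no state
    def scale(self, w, b):
        return Value(self.x * w + b)

class Scale:
    # stateful wrapper, analogous to nn.Conv2d holding its weights
    def __init__(self, w, b):
        self.w, self.b = w, b  # the only thing the class adds is state

    def __call__(self, v):
        return v.scale(self.w, self.b)  # delegates to the stateless op

out1 = Value(3).scale(2, 1)   # functional call: 3 * 2 + 1
out2 = Scale(2, 1)(Value(3))  # same computation via the stateful module
assert out1.x == out2.x == 7
```

Either style computes the same thing; the class form is only a convenience for keeping parameters in one place.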
### tinygrad is lazy
-When you do `a+b` in tinygrad, nothing happens. It's not until you [`realize`](tensor.md/#tinygrad.Tensor.realize) the Tensor that the computation actually runs.
+When you do `a+b` in tinygrad, nothing happens. It's not until you [`realize`](tensor/index.md/#tinygrad.Tensor.realize) the Tensor that the computation actually runs.
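The lazy-then-realize idea in the hunk above can be sketched with a self-contained toy. `Lazy` here is a hypothetical illustration, not tinygrad's `LazyBuffer`:

```python
# Toy sketch of lazy evaluation: `a + b` only records a graph node;
# the arithmetic runs when realize() walks the graph.

class Lazy:
    def __init__(self, op=None, srcs=(), value=None):
        self.op, self.srcs, self.value = op, srcs, value

    def __add__(self, other):
        return Lazy(op="add", srcs=(self, other))  # no computation here

    def realize(self):
        if self.value is None:  # compute once, on demand
            if self.op == "add":
                self.value = sum(s.realize() for s in self.srcs)
        return self.value

a, b = Lazy(value=2), Lazy(value=3)
c = a + b                  # nothing is computed yet
assert c.value is None
assert c.realize() == 5    # computation happens on realize
```

In the real system the recorded graph is what lets tinygrad fuse operations into kernels before anything runs.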
### tinygrad requires @TinyJit to be fast
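The benefit a JIT like `TinyJit` provides can be sketched with a toy caching decorator. This illustrates only the caching idea (pay the "compile" cost once per input shape, then replay), not tinygrad's implementation; `toy_jit` is a hypothetical name:

```python
import functools

# Toy sketch of the JIT idea: the first call with a given input "shape"
# does the expensive work (here, just counted); repeat calls reuse it.

compiles = {"count": 0}

def toy_jit(fn):
    cache = {}
    @functools.wraps(fn)
    def wrapped(*args):
        key = tuple(len(a) for a in args)  # cache per input shape
        if key not in cache:
            compiles["count"] += 1         # stands in for kernel compilation
            cache[key] = fn                # a real JIT would store kernels
        return cache[key](*args)
    return wrapped

@toy_jit
def add(xs, ys):
    return [x + y for x, y in zip(xs, ys)]

add([1, 2], [3, 4])
add([5, 6], [7, 8])  # same shape: no recompilation
assert compiles["count"] == 1
```

This is also why the decorated function should be pure: the JIT assumes a call with the same shapes can reuse the captured work.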
@@ -50,7 +50,7 @@ randn = Tensor.randn(2, 3) # create a tensor of shape (2, 3) filled with random
uniform = Tensor.uniform(2, 3, low=0, high=10) # create a tensor of shape (2, 3) filled with random values from a uniform distribution between 0 and 10
```
-There are even more of these factory methods, you can find them in the [Tensor](tensor.md) file.
+There are even more of these factory methods, you can find them in the [Tensor Creation](tensor/creation.md) file.
All the tensor creation methods can take a `dtype` argument to specify the data type of the tensor; the supported dtypes are listed in [dtypes](dtypes.md).
@@ -75,7 +75,7 @@ print(t6.numpy())
# [-56. -48. -36. -20. 0.]
```
-There are a lot more operations that can be performed on tensors, you can find them in the [Tensor](tensor.md) file.
+There are a lot more operations that can be performed on tensors, you can find them in the [Tensor Ops](tensor/ops.md) file.
Additionally, reading through [abstractions2.py](https://github.com/tinygrad/tinygrad/blob/master/docs/abstractions2.py) will help you understand how operations on these tensors make their way down to your hardware.
## Models
@@ -1 +0,0 @@
@@ -1,3 +1,5 @@
# Tensor
::: tinygrad.Tensor
    options:
        heading_level: 2