diff --git a/docs/index.md b/docs/index.md
index 381bb060db..fb9eb3de93 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -2,6 +2,14 @@ Welcome to the docs for tinygrad. This page is for users of the tinygrad library
 
 tinygrad is not 1.0 yet, but it will be soon. The API has been pretty stable for a while.
 
+While you can `pip install tinygrad`, we encourage you to install from source:
+
+```bash
+git clone https://github.com/tinygrad/tinygrad.git
+cd tinygrad
+python3 -m pip install -e .
+```
+
 ## tinygrad Usage
 
 The main class you will interact with is [Tensor](tensor.md). It functions very similarly to PyTorch, but has a bit more of a functional style. tinygrad supports [many datatypes](dtypes.md). All operations in tinygrad are lazy, meaning they won't do anything until you realize.
@@ -16,21 +24,19 @@ We have a [quickstart guide](quickstart.md) and a [showcase](showcase.md)
 
 ## Differences from PyTorch
 
-If you are migrating from PyTorch, welcome. We hope you will find tinygrad both familiar and somehow more "correct feeling"
+If you are migrating from PyTorch, welcome. Most of the API is the same. We hope you will find tinygrad both familiar and somehow more "correct feeling".
 
 ### tinygrad doesn't have nn.Module
 
-There's nothing special about a "Module" class in tinygrad, it's just a normal class. `get_parameter`
+There's nothing special about a "Module" class in tinygrad, it's just a normal class. [`nn.state.get_parameters`](nn/#tinygrad.nn.state.get_parameters) can be used to recursively search normal classes for valid tensors. Instead of the `forward` method in PyTorch, tinygrad just uses `__call__`.
 
 ### tinygrad is functional
 
-
-
-In tinygrad, you can do `x.conv2d(w, b)` or `x.sparse_categorical_cross_entropy(y)`
+In tinygrad, you can do [`x.conv2d(w, b)`](tensor/#tinygrad.Tensor.conv2d) or [`x.sparse_categorical_crossentropy(y)`](tensor/#tinygrad.Tensor.sparse_categorical_crossentropy). We also have a [`Conv2d`](nn/#tinygrad.nn.Conv2d) class like PyTorch if you want a place to keep the state, but stateless operations don't have classes.
 
 ### tinygrad is lazy
 
-When you do `a+b` in tinygrad, nothing happens.
+When you do `a+b` in tinygrad, nothing happens. It's not until you [`realize`](tensor/#tinygrad.Tensor.realize) the Tensor that the computation actually runs.
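+
+Here is a minimal sketch of the lazy behavior (assuming `Tensor` is imported from the top-level `tinygrad` package):
+
+```python
+from tinygrad import Tensor
+
+a = Tensor([1.0, 2.0])
+b = Tensor([3.0, 4.0])
+c = a + b        # lazy: no kernel has run yet
+c = c.realize()  # the addition actually runs here
+```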
 
 ### tinygrad requires @TinyJit to be fast
 
diff --git a/docs/tensor.md b/docs/tensor.md
index e2378336c4..bcf6f35f3a 100644
--- a/docs/tensor.md
+++ b/docs/tensor.md
@@ -16,6 +16,11 @@
 ::: tinygrad.Tensor.realize
 ::: tinygrad.Tensor.replace
 ::: tinygrad.Tensor.assign
+::: tinygrad.Tensor.detach
+::: tinygrad.Tensor.to
+::: tinygrad.Tensor.to_
+::: tinygrad.Tensor.shard
+::: tinygrad.Tensor.shard_
 ::: tinygrad.Tensor.contiguous
 ::: tinygrad.Tensor.contiguous_backward
 
@@ -39,6 +44,13 @@
 ::: tinygrad.Tensor.kaiming_uniform
 ::: tinygrad.Tensor.kaiming_normal
 
+## Data Access
+
+::: tinygrad.Tensor.data
+::: tinygrad.Tensor.item
+::: tinygrad.Tensor.tolist
+::: tinygrad.Tensor.numpy
+
 ## Movement (low level)
 
 ::: tinygrad.Tensor.reshape
diff --git a/tinygrad/tensor.py b/tinygrad/tensor.py
index b30fca1889..d690b123e7 100644
--- a/tinygrad/tensor.py
+++ b/tinygrad/tensor.py
@@ -144,6 +144,7 @@ class Tensor:
     run_schedule(*create_schedule_with_vars(flatten([x.lazydata.lbs for x in lst])))
 
   def realize(self) -> Tensor:
+    """Triggers the computation needed to create this Tensor. This is a light wrapper around corealize."""
    Tensor.corealize([self])
    return self
 
@@ -187,6 +188,7 @@ class Tensor:
     assert all_int(self.shape), f"no data if shape is symbolic, {self.shape=}"
     return self._data().cast(self.dtype.fmt, self.shape if len(self.shape) else (1,))
   def item(self) -> ConstType:
+    """Returns the value of this tensor as a standard Python number."""
     assert self.dtype.fmt is not None, f"no fmt dtype for {self.dtype}"
     assert self.numel() == 1, "must have one element for item"
     return self._data().cast(self.dtype.fmt)[0]
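
Taken together, the `realize` and `item` docstrings describe behavior that composes like this (a minimal sketch, assuming `Tensor` is importable from the top-level `tinygrad` package):

```python
from tinygrad import Tensor

t = Tensor([1, 2, 3, 4])
s = t.sum()      # lazy: builds the computation, nothing runs yet
s = s.realize()  # triggers the computation (a light wrapper around corealize)
print(s.item())  # 10, returned as a standard Python number
```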