New docs are in mkdocs (#4178)

* start mkdocs

* simple docs for tensor

* more docs

* move those back

* more docs

* copy markdown extensions

* docs legacy

* docs building workflow

* fix showcase links

* only that?

* install tinygrad

* add docs to setup.py

* Delete examples/llm.c/data
Author: George Hotz
Date: 2024-04-16 10:59:51 +04:00
Committed by: GitHub
Parent: aa093efa43
Commit: 8f749ae0eb
27 changed files with 379 additions and 6 deletions

.github/workflows/docs.yml (new file, 29 lines)

@@ -0,0 +1,29 @@
name: Deploy Docs
on:
  push:
    branches:
      - master
      - mkdocs
permissions:
  contents: write
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Configure Git Credentials
        run: |
          git config user.name github-actions[bot]
          git config user.email 41898282+github-actions[bot]@users.noreply.github.com
      - uses: actions/setup-python@v5
        with:
          python-version: 3.x
      - run: echo "cache_id=$(date --utc '+%V')" >> $GITHUB_ENV
      - uses: actions/cache@v4
        with:
          key: mkdocs-material-${{ env.cache_id }}
          path: .cache
          restore-keys: |
            mkdocs-material-
      - run: pip install -e .[docs]
      - run: mkdocs gh-deploy --force

.github/workflows/test.yml

@@ -91,7 +91,7 @@ jobs:
       run: python -m mypy --strict-equality
     - name: Test Docs
       run: |
-        python docs/abstractions2.py
+        python docs-legacy/abstractions2.py
     - name: Test Quickstart
       run: awk '/```python/{flag=1;next}/```/{flag=0}flag' docs/quickstart.md > quickstart.py && PYTHONPATH=. python quickstart.py
     - name: Fuzz Test symbolic
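
For readers not fluent in awk: the one-liner above just extracts the fenced python blocks from quickstart.md so CI can run them. A rough Python equivalent (a sketch, not what CI actually runs):

```python
import re
from pathlib import Path

# grab everything between an opening ```python fence and the next closing fence,
# mirroring the flag toggling in the awk filter above
src = Path("docs/quickstart.md").read_text()
blocks = re.findall(r"```python\n(.*?)```", src, flags=re.DOTALL)
Path("quickstart.py").write_text("\n".join(blocks))
```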

.gitignore (1 line changed)

@@ -51,3 +51,4 @@ quickstart.py
.hypothesis
weights
*.lprof
+site/

.pre-commit-config.yaml

@@ -21,7 +21,7 @@ repos:
       pass_filenames: false
     - id: docs2
       name: docs2
-      entry: python3 docs/abstractions2.py
+      entry: python3 docs-legacy/abstractions2.py
       language: system
       always_run: true
       pass_filenames: false

[Two binary image files relocated (538 B and 526 B; sizes unchanged before and after).]

docs/developer.md (new file, 7 lines)

@@ -0,0 +1,7 @@
## Frontend

Everything in [Tensor](tensor.md) is syntactic sugar around [function.py](function.md), where the forwards and backwards passes are implemented for the different ops. That goes on to construct a graph of

::: tinygrad.lazy.LazyBuffer
    options:
      show_source: false
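
To see that graph get built, a minimal sketch against this commit's API:

```python
from tinygrad import Tensor

a = Tensor([1.0, 2.0, 3.0])
b = Tensor([4.0, 5.0, 6.0])
c = (a + b).relu()  # only builds the graph; no kernel has run yet
print(c.lazydata)   # the LazyBuffer holding that graph
print(c.numpy())    # realizes the graph: [5. 7. 9.]
```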

docs/dtypes.md (new file, 3 lines)

@@ -0,0 +1,3 @@
::: tinygrad.dtypes
    options:
      members: true
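
A quick sketch of how these are used in practice (casts are lazy like everything else):

```python
from tinygrad import Tensor, dtypes

t = Tensor([1, 2, 3], dtype=dtypes.int32)
print(t.dtype)              # the dtype of the Tensor
h = t.cast(dtypes.float16)  # nothing runs until realized
print(h.dtype, h.numpy())
```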

docs/function.md (new file, 7 lines)

@@ -0,0 +1,7 @@
<!-- TODO: remove the imported members -->
<!-- TODO: move Function from tensor to function -->
::: tinygrad.function
    options:
      members: true
      inherited_members: false
      show_source: false
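
For a feel of the pattern, here is roughly what a Function looks like. This sketch is modeled on the real `Relu` in function.py at this commit; exact signatures may drift:

```python
from tinygrad.tensor import Function  # Function still lives in tensor.py, per the TODO above
from tinygrad.lazy import LazyBuffer
from tinygrad.ops import BinaryOps

class Relu(Function):
  def forward(self, x: LazyBuffer) -> LazyBuffer:
    self.ret = x.e(BinaryOps.MAX, x.const(0))  # elementwise max(x, 0)
    return self.ret
  def backward(self, grad_output: LazyBuffer) -> LazyBuffer:
    # gradient flows only where the input was positive
    return self.ret.const(0).e(BinaryOps.CMPLT, self.ret).cast(grad_output.dtype).e(BinaryOps.MUL, grad_output)
```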

docs/index.md (new file, 37 lines)

@@ -0,0 +1,37 @@
Welcome to the docs for tinygrad. This page is for users of the tinygrad library. We also have [developer docs](developer.md).

tinygrad is not 1.0 yet, but it will be soon. The API has been pretty stable for a while.

## tinygrad Usage

The main class you will interact with is [Tensor](tensor.md). It functions very similarly to PyTorch, but has a bit more of a functional style. tinygrad supports [many datatypes](dtypes.md). All operations in tinygrad are lazy, meaning they won't do anything until you realize.

* tinygrad has a built-in [neural network library](nn.md) with some classes, optimizers, and load/save state management.
* tinygrad has a JIT to make things fast. Decorate your pure function with `TinyJit`.
* tinygrad has amazing support for multiple GPUs, allowing you to shard your Tensors with `Tensor.shard`.

To understand what training looks like in tinygrad, you should read `beautiful_mnist.py`. We have a [quickstart guide](quickstart.md) and a [showcase](showcase.md).
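
To give a taste, a minimal training step might look like this (a sketch with stand-in data; `beautiful_mnist.py` is the real reference):

```python
from tinygrad import Tensor
from tinygrad.nn import Linear
from tinygrad.nn.optim import SGD
from tinygrad.nn.state import get_parameters

class TinyNet:  # just a plain class, no base class needed (see the nn.Module note below)
  def __init__(self): self.l1, self.l2 = Linear(784, 128), Linear(128, 10)
  def __call__(self, x: Tensor) -> Tensor: return self.l2(self.l1(x).relu())

net = TinyNet()
opt = SGD(get_parameters(net), lr=0.01)
x, y = Tensor.rand(64, 784), Tensor([i % 10 for i in range(64)])  # stand-in batch and labels
with Tensor.train():
  loss = net(x).sparse_categorical_crossentropy(y)
  opt.zero_grad()
  loss.backward()
  opt.step()
```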
## Differences from PyTorch

If you are migrating from PyTorch, welcome. We hope you will find tinygrad both familiar and somehow more "correct feeling".

### tinygrad doesn't have nn.Module

There's nothing special about a "Module" class in tinygrad; it's just a normal class. `nn.state.get_parameters` can recursively find the Tensors in any normal class.

### tinygrad is functional

<!-- link these methods -->
In tinygrad, ops are methods on `Tensor`: you can do `x.conv2d(w, b)` or `x.sparse_categorical_crossentropy(y)` directly, with no separate functional namespace.
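
For example (shapes are illustrative):

```python
from tinygrad import Tensor

x = Tensor.rand(1, 3, 32, 32)  # NCHW input
w = Tensor.rand(8, 3, 3, 3)    # 8 output channels, 3x3 kernel
b = Tensor.rand(8)
print(x.conv2d(w, b).shape)    # (1, 8, 30, 30)
```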
### tinygrad is lazy

When you do `a+b` in tinygrad, nothing happens. It's not until you realize the Tensor that the computation actually runs.
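
A sketch of what that means in practice:

```python
from tinygrad import Tensor

a, b = Tensor.rand(1024, 1024), Tensor.rand(1024, 1024)
c = a + b        # returns immediately: only the graph was built
c = c.realize()  # the add kernel actually runs here
```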
### tinygrad requires @TinyJit to be fast

PyTorch spends a lot of development effort to make dispatch very fast. tinygrad doesn't. Instead, we have a simple decorator that will replay the kernels used in the decorated function.
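
A minimal sketch (the exact capture/replay schedule is an implementation detail):

```python
from tinygrad import Tensor, TinyJit

@TinyJit
def step(x: Tensor) -> Tensor:
  return (x @ x).relu().realize()  # realize inside the JIT so the kernels get captured

for _ in range(4):
  out = step(Tensor.rand(8, 8).realize())  # traced on early calls, replayed afterwards
```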

docs/nn.md (new file, 17 lines)

@@ -0,0 +1,17 @@
## Neural Network classes

::: tinygrad.nn
    options:
      members: true

## Optimizers

::: tinygrad.nn.optim
    options:
      members: true

## Load/Save

::: tinygrad.nn.state
    options:
      members: true
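
A minimal save/load round trip using these helpers (the filename is arbitrary):

```python
from tinygrad.nn import Linear
from tinygrad.nn.state import get_state_dict, safe_save, safe_load, load_state_dict

model = Linear(10, 2)
safe_save(get_state_dict(model), "model.safetensors")   # writes safetensors to disk
load_state_dict(model, safe_load("model.safetensors"))  # restores the weights in place
```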

docs/showcase.md

@@ -19,7 +19,7 @@ python3 examples/efficientnet.py webcam
Take a look at [yolov8.py](/examples/yolov8.py).
-![yolov8 by tinygrad](/docs/showcase/yolov8_showcase_image.png)
+![yolov8 by tinygrad](showcase/yolov8_showcase_image.png)
## Audio
@@ -37,7 +37,7 @@ SMALL=1 python3 examples/whisper.py
Take a look at [mnist_gan.py](/examples/mnist_gan.py).
-![mnist gan by tinygrad](/docs/showcase/mnist_by_tinygrad.jpg)
+![mnist gan by tinygrad](showcase/mnist_by_tinygrad.jpg)
### Stable Diffusion
@@ -45,7 +45,7 @@ Take a look at [mnist_gan.py](/examples/mnist_gan.py).
python3 examples/stable_diffusion.py
```
-![a horse sized cat eating a bagel](/docs/showcase/stable_diffusion_by_tinygrad.jpg)
+![a horse sized cat eating a bagel](showcase/stable_diffusion_by_tinygrad.jpg)
*"a horse sized cat eating a bagel"*

docs/tensor.md (new file, 167 lines)

@@ -0,0 +1,167 @@
::: tinygrad.Tensor
    options:
      heading_level: 2
      members: false
      show_source: false
## Properties
::: tinygrad.Tensor.shape
::: tinygrad.Tensor.dtype
::: tinygrad.Tensor.device
## tinygrad ops
::: tinygrad.Tensor.corealize
::: tinygrad.Tensor.realize
::: tinygrad.Tensor.replace
::: tinygrad.Tensor.assign
::: tinygrad.Tensor.contiguous
::: tinygrad.Tensor.contiguous_backward
## Creation (basic)
::: tinygrad.Tensor.empty
::: tinygrad.Tensor.zeros
::: tinygrad.Tensor.ones
::: tinygrad.Tensor.full
::: tinygrad.Tensor.arange
::: tinygrad.Tensor.eye
## Creation (random)
::: tinygrad.Tensor.rand
::: tinygrad.Tensor.randn
::: tinygrad.Tensor.normal
::: tinygrad.Tensor.uniform
::: tinygrad.Tensor.scaled_uniform
::: tinygrad.Tensor.glorot_uniform
::: tinygrad.Tensor.kaiming_uniform
::: tinygrad.Tensor.kaiming_normal
## Movement (low level)
::: tinygrad.Tensor.reshape
::: tinygrad.Tensor.expand
::: tinygrad.Tensor.permute
::: tinygrad.Tensor.flip
::: tinygrad.Tensor.shrink
::: tinygrad.Tensor.pad
## Movement (high level)
::: tinygrad.Tensor.__getitem__
::: tinygrad.Tensor.slice
::: tinygrad.Tensor.gather
::: tinygrad.Tensor.cat
::: tinygrad.Tensor.stack
::: tinygrad.Tensor.repeat
::: tinygrad.Tensor.split
::: tinygrad.Tensor.chunk
::: tinygrad.Tensor.squeeze
::: tinygrad.Tensor.unsqueeze
::: tinygrad.Tensor.pad2d
::: tinygrad.Tensor.transpose
::: tinygrad.Tensor.flatten
::: tinygrad.Tensor.unflatten
## Reduce
::: tinygrad.Tensor.sum
::: tinygrad.Tensor.max
::: tinygrad.Tensor.min
::: tinygrad.Tensor.mean
::: tinygrad.Tensor.var
::: tinygrad.Tensor.std
::: tinygrad.Tensor.softmax
::: tinygrad.Tensor.log_softmax
::: tinygrad.Tensor.argmax
## Processing
::: tinygrad.Tensor.conv2d
::: tinygrad.Tensor.dot
::: tinygrad.Tensor.matmul
::: tinygrad.Tensor.einsum
::: tinygrad.Tensor.cumsum
::: tinygrad.Tensor.triu
::: tinygrad.Tensor.tril
::: tinygrad.Tensor.avg_pool2d
::: tinygrad.Tensor.max_pool2d
::: tinygrad.Tensor.conv_transpose2d
## Unary Ops (math)
::: tinygrad.Tensor.logical_not
::: tinygrad.Tensor.neg
::: tinygrad.Tensor.log
::: tinygrad.Tensor.log2
::: tinygrad.Tensor.exp
::: tinygrad.Tensor.exp2
::: tinygrad.Tensor.trunc
::: tinygrad.Tensor.ceil
::: tinygrad.Tensor.floor
::: tinygrad.Tensor.round
::: tinygrad.Tensor.lerp
::: tinygrad.Tensor.square
::: tinygrad.Tensor.clip
::: tinygrad.Tensor.abs
::: tinygrad.Tensor.sign
::: tinygrad.Tensor.reciprocal
## Unary Ops (activation)
::: tinygrad.Tensor.relu
::: tinygrad.Tensor.sigmoid
::: tinygrad.Tensor.elu
::: tinygrad.Tensor.celu
::: tinygrad.Tensor.swish
::: tinygrad.Tensor.silu
::: tinygrad.Tensor.relu6
::: tinygrad.Tensor.hardswish
::: tinygrad.Tensor.tanh
::: tinygrad.Tensor.sinh
::: tinygrad.Tensor.cosh
::: tinygrad.Tensor.atanh
::: tinygrad.Tensor.asinh
::: tinygrad.Tensor.acosh
::: tinygrad.Tensor.hardtanh
::: tinygrad.Tensor.gelu
::: tinygrad.Tensor.quick_gelu
::: tinygrad.Tensor.leakyrelu
::: tinygrad.Tensor.mish
::: tinygrad.Tensor.softplus
::: tinygrad.Tensor.softsign
## Elementwise Ops (broadcasted)
::: tinygrad.Tensor.add
::: tinygrad.Tensor.sub
::: tinygrad.Tensor.mul
::: tinygrad.Tensor.div
::: tinygrad.Tensor.xor
::: tinygrad.Tensor.pow
::: tinygrad.Tensor.maximum
::: tinygrad.Tensor.minimum
::: tinygrad.Tensor.where
## Neural Network Ops (functional)
::: tinygrad.Tensor.linear
::: tinygrad.Tensor.sequential
::: tinygrad.Tensor.layernorm
::: tinygrad.Tensor.batchnorm
::: tinygrad.Tensor.dropout
::: tinygrad.Tensor.one_hot
::: tinygrad.Tensor.scaled_dot_product_attention
::: tinygrad.Tensor.binary_crossentropy
::: tinygrad.Tensor.binary_crossentropy_logits
::: tinygrad.Tensor.sparse_categorical_crossentropy
## Casting Ops
::: tinygrad.Tensor.cast
::: tinygrad.Tensor.bitcast
::: tinygrad.Tensor.float
::: tinygrad.Tensor.half

mkdocs.yml (new file, 98 lines)

@@ -0,0 +1,98 @@
# pip install mkdocs mkdocs-material mkdocstrings[python]
site_name: tinygrad docs
site_url: https://docs.tinygrad.org/

nav:
  - Home: index.md
  - Tensor: tensor.md
  - dtypes: dtypes.md
  - Neural Networks: nn.md
  - Quickstart: quickstart.md
  - Showcase: showcase.md
  - Developer: developer.md
  - Function: function.md
  #- tinygrad: reference/

#extra_css:
#  - css/tinygrad.css

markdown_extensions:
  - attr_list
  - admonition
  - callouts
  - footnotes
  - pymdownx.details
  - pymdownx.emoji:
      emoji_index: !!python/name:material.extensions.emoji.twemoji
      emoji_generator: !!python/name:material.extensions.emoji.to_svg
  - pymdownx.highlight:
      pygments_lang_class: true
  - pymdownx.inlinehilite:
      style_plain_text: python
  - pymdownx.magiclink
  - pymdownx.snippets:
      base_path: [!relative $config_dir]
      check_paths: true
  - pymdownx.superfences
  - pymdownx.tabbed:
      alternate_style: true
      slugify: !!python/object/apply:pymdownx.slugs.slugify
        kwds:
          case: lower
  - pymdownx.tasklist:
      custom_checkbox: true
  - pymdownx.tilde
  - toc:
      permalink: "¤"

theme:
  name: material
  features:
    - announce.dismiss
    - content.action.edit
    - content.action.view
    - content.code.annotate
    - content.code.copy
    - content.tooltips
    - navigation.footer
    - navigation.indexes
    - navigation.sections
    - navigation.tabs
    - navigation.tabs.sticky
    - navigation.top
    - search.highlight
    - search.suggest
    - toc.follow
  palette:
    scheme: slate
    primary: black
    accent: lime

plugins:
  - search
  - mkdocstrings:
      handlers:
        python:
          import:
            - https://docs.python.org/3/objects.inv
          paths: [tinygrad]
          options:
            docstring_options:
              ignore_init_summary: true
            docstring_section_style: list
            filters: ["!^_"]
            heading_level: 3
            inherited_members: false
            merge_init_into_class: true
            separate_signature: true
            show_root_heading: true
            show_root_full_path: false
            show_signature_annotations: true
            show_symbol_type_heading: true
            show_symbol_type_toc: true
            show_source: true
            signature_crossrefs: true
            summary: true
  #- gen-files:
  #    scripts:
  #      - docs/gen_ref_pages.py
  #- literate-nav:
  #    nav_file: SUMMARY.md

ruff config

@@ -28,6 +28,7 @@ line-length = 150
 exclude = [
   "disassemblers/",
   "docs/",
+  "docs-legacy/",
   "examples/",
   "extra/",
   "openpilot/",

setup.py

@@ -53,6 +53,11 @@ setup(name='tinygrad',
"networkx",
"hypothesis",
],
'docs': [
"mkdocs-material",
"mkdocstrings[python]",
"markdown-callouts",
],
'testing_tf': [
"tensorflow==2.15.1",
"tensorflow_addons",

tinygrad/function.py

@@ -1,3 +1,4 @@
"""This is where the forwards and backwards passes live."""
import math
from typing import Tuple, Optional
from tinygrad.helpers import argsort

tinygrad/tensor.py

@@ -70,6 +70,7 @@ def _pad_left(*shps:Tuple[sint, ...], v=1): return tuple((v,) * (max(len(i_) for
 def broadcast_shape(*shps:Tuple[sint, ...]): return tuple(0 if any(sh_ == 0 for sh_ in sh) else max(sh) for sh in zip(*_pad_left(*shps)))

 class Tensor:
+  """A `Tensor` is a multi-dimensional matrix containing elements of a single data type."""
   __slots__ = "lazydata", "requires_grad", "grad", "_ctx"
   __deletable__ = ('_ctx',)
   training: ClassVar[bool] = False
@@ -836,7 +837,6 @@ class Tensor:
   def round(self: Tensor) -> Tensor:
     return ((self > 0) == ((b := self.cast(dtypes.int32) / 2.0).cast(dtypes.int32) == b)).where((self - 0.5).ceil(), (self + 0.5).floor())
   def lerp(self, end: Tensor, weight: Union[Tensor, float]) -> Tensor: return self + (end - self) * weight
   def square(self): return self*self
   def clip(self, min_, max_): return self.maximum(min_).minimum(max_)
   def abs(self): return self.relu() + (-self).relu()
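
As a sanity check on the ops in this hunk (a sketch; `round` implements round-half-to-even, matching Python's `round`):

```python
from tinygrad import Tensor

print(Tensor([2.5, 3.5, -2.5]).round().numpy())      # [ 2.  4. -2.] -> halves go to the even integer
a, b = Tensor([0.0, 1.0]), Tensor([10.0, 11.0])
print(a.lerp(b, 0.5).numpy())                        # [5. 6.] -> a + (b - a) * 0.5
print(Tensor([-2.5, 0.3, 4.0]).clip(-1, 1).numpy())  # [-1.  0.3  1.]
```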