
## Reduce

::: tinygrad.Tensor.sum
::: tinygrad.Tensor.prod
::: tinygrad.Tensor.max
::: tinygrad.Tensor.min
::: tinygrad.Tensor.any
::: tinygrad.Tensor.all
::: tinygrad.Tensor.isclose
::: tinygrad.Tensor.mean
::: tinygrad.Tensor.var
::: tinygrad.Tensor.std
::: tinygrad.Tensor.std_mean
::: tinygrad.Tensor.softmax
::: tinygrad.Tensor.log_softmax
::: tinygrad.Tensor.logsumexp
::: tinygrad.Tensor.logcumsumexp
::: tinygrad.Tensor.argmax
::: tinygrad.Tensor.argmin

## Processing

::: tinygrad.Tensor.avg_pool2d
::: tinygrad.Tensor.max_pool2d
::: tinygrad.Tensor.max_unpool2d
::: tinygrad.Tensor.conv2d
::: tinygrad.Tensor.conv_transpose2d
::: tinygrad.Tensor.dot
::: tinygrad.Tensor.matmul
::: tinygrad.Tensor.einsum
::: tinygrad.Tensor.cumsum
::: tinygrad.Tensor.cummax
::: tinygrad.Tensor.triu
::: tinygrad.Tensor.tril
::: tinygrad.Tensor.interpolate
::: tinygrad.Tensor.scatter
::: tinygrad.Tensor.scatter_reduce
::: tinygrad.Tensor.sort
::: tinygrad.Tensor.topk

## Neural Network (functional)

::: tinygrad.Tensor.linear
::: tinygrad.Tensor.sequential
::: tinygrad.Tensor.layernorm
::: tinygrad.Tensor.batchnorm
::: tinygrad.Tensor.dropout
::: tinygrad.Tensor.one_hot
::: tinygrad.Tensor.scaled_dot_product_attention
::: tinygrad.Tensor.binary_crossentropy
::: tinygrad.Tensor.binary_crossentropy_logits
::: tinygrad.Tensor.sparse_categorical_crossentropy
::: tinygrad.Tensor.cross_entropy
::: tinygrad.Tensor.nll_loss