## Reduce
::: tinygrad.Tensor.sum
::: tinygrad.Tensor.prod
::: tinygrad.Tensor.max
::: tinygrad.Tensor.min
::: tinygrad.Tensor.any
::: tinygrad.Tensor.all
::: tinygrad.Tensor.mean
::: tinygrad.Tensor.var
::: tinygrad.Tensor.std
::: tinygrad.Tensor.std_mean
::: tinygrad.Tensor.softmax
::: tinygrad.Tensor.log_softmax
::: tinygrad.Tensor.logsumexp
::: tinygrad.Tensor.logcumsumexp
::: tinygrad.Tensor.argmax
::: tinygrad.Tensor.argmin
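
A minimal sketch of a few of the reduce ops above (illustrative only; the values in the comments follow from the example input):

```python
from tinygrad import Tensor

t = Tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
print(t.sum().item())              # 21.0 (reduce over all axes)
print(t.max(axis=1).numpy())       # [3. 6.]
print(t.mean(axis=0).numpy())      # [2.5 3.5 4.5]
print(t.argmax(axis=1).numpy())    # [2 2]
print(t.softmax(axis=-1).numpy())  # each row sums to 1
```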
## Processing
::: tinygrad.Tensor.avg_pool2d
::: tinygrad.Tensor.max_pool2d
::: tinygrad.Tensor.conv2d
::: tinygrad.Tensor.conv_transpose2d
::: tinygrad.Tensor.dot
::: tinygrad.Tensor.matmul
::: tinygrad.Tensor.einsum
::: tinygrad.Tensor.cumsum
::: tinygrad.Tensor.cummax
::: tinygrad.Tensor.triu
::: tinygrad.Tensor.tril
::: tinygrad.Tensor.interpolate
::: tinygrad.Tensor.scatter
::: tinygrad.Tensor.scatter_reduce
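
An illustrative sketch of a few processing ops; the `scatter_reduce` call assumes the PyTorch-style argument order `(dim, index, src, reduce, include_self=True)` with `reduce` one of `"sum"`, `"prod"`, `"mean"`, `"amax"`, `"amin"`:

```python
from tinygrad import Tensor

a, b = Tensor.rand(2, 3), Tensor.rand(3, 4)
print(a.matmul(b).shape)         # (2, 4), same as a @ b or a.dot(b)

x = Tensor([[1.0, 2.0], [3.0, 4.0]])
print(x.cumsum(axis=1).numpy())  # [[1. 3.] [3. 7.]]

# scatter src values into a zero tensor along dim 0, summing collisions
target = Tensor.zeros(3)
index = Tensor([0, 1, 0, 2])
src = Tensor([1.0, 2.0, 3.0, 4.0])
print(target.scatter_reduce(0, index, src, reduce="sum").numpy())  # [4. 2. 4.]
```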
## Neural Network (functional)
::: tinygrad.Tensor.linear
::: tinygrad.Tensor.sequential
::: tinygrad.Tensor.layernorm
::: tinygrad.Tensor.batchnorm
::: tinygrad.Tensor.dropout
::: tinygrad.Tensor.one_hot
::: tinygrad.Tensor.scaled_dot_product_attention
::: tinygrad.Tensor.binary_crossentropy
::: tinygrad.Tensor.binary_crossentropy_logits
::: tinygrad.Tensor.sparse_categorical_crossentropy
::: tinygrad.Tensor.cross_entropy
::: tinygrad.Tensor.nll_loss
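
These are the stateless counterparts of the layers in `tinygrad.nn`. A brief illustrative sketch (shapes and values are arbitrary example inputs; the weight is laid out `(in_features, out_features)` for the functional `linear` call):

```python
from tinygrad import Tensor

x = Tensor.randn(4, 8)                  # batch of 4 feature vectors
w, b = Tensor.randn(8, 16), Tensor.zeros(16)
h = x.linear(w, b).layernorm()          # functional linear + layer norm
h = h.dropout(0.5)                      # identity unless Tensor.training is set

logits = Tensor.randn(4, 10)
labels = Tensor([1, 3, 0, 7])           # integer class ids
print(logits.sparse_categorical_crossentropy(labels).item())
```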