## Reduce
::: tinygrad.Tensor.sum
::: tinygrad.Tensor.prod
::: tinygrad.Tensor.max
::: tinygrad.Tensor.min
::: tinygrad.Tensor.any
::: tinygrad.Tensor.all
::: tinygrad.Tensor.isclose
::: tinygrad.Tensor.mean
::: tinygrad.Tensor.var
::: tinygrad.Tensor.std
::: tinygrad.Tensor.std_mean
::: tinygrad.Tensor.softmax
::: tinygrad.Tensor.log_softmax
::: tinygrad.Tensor.logsumexp
::: tinygrad.Tensor.logcumsumexp
::: tinygrad.Tensor.argmax
::: tinygrad.Tensor.argmin
## Processing
::: tinygrad.Tensor.avg_pool2d
::: tinygrad.Tensor.max_pool2d
::: tinygrad.Tensor.conv2d
::: tinygrad.Tensor.conv_transpose2d
::: tinygrad.Tensor.dot
::: tinygrad.Tensor.matmul
::: tinygrad.Tensor.einsum
::: tinygrad.Tensor.cumsum
::: tinygrad.Tensor.cummax
::: tinygrad.Tensor.triu
::: tinygrad.Tensor.tril
::: tinygrad.Tensor.interpolate
::: tinygrad.Tensor.scatter
::: tinygrad.Tensor.scatter_reduce
::: tinygrad.Tensor.topk
## Neural Network (functional)
::: tinygrad.Tensor.linear
::: tinygrad.Tensor.sequential
::: tinygrad.Tensor.layernorm
::: tinygrad.Tensor.batchnorm
::: tinygrad.Tensor.dropout
::: tinygrad.Tensor.one_hot
::: tinygrad.Tensor.scaled_dot_product_attention
::: tinygrad.Tensor.binary_crossentropy
::: tinygrad.Tensor.binary_crossentropy_logits
::: tinygrad.Tensor.sparse_categorical_crossentropy
::: tinygrad.Tensor.cross_entropy
::: tinygrad.Tensor.nll_loss