## Reduce
::: tinygrad.Tensor.sum
::: tinygrad.Tensor.prod
::: tinygrad.Tensor.max
::: tinygrad.Tensor.min
::: tinygrad.Tensor.any
::: tinygrad.Tensor.all
::: tinygrad.Tensor.isclose
::: tinygrad.Tensor.mean
::: tinygrad.Tensor.var
::: tinygrad.Tensor.std
::: tinygrad.Tensor.std_mean
::: tinygrad.Tensor.softmax
::: tinygrad.Tensor.log_softmax
::: tinygrad.Tensor.logsumexp
::: tinygrad.Tensor.logcumsumexp
::: tinygrad.Tensor.argmax
::: tinygrad.Tensor.argmin
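`Tensor.isclose` follows the PyTorch-style closeness test, including the `equal_nan` flag. As a hedged sketch of those semantics (the `rtol`/`atol` defaults below are the common PyTorch defaults, assumed here), in NumPy:

```python
import numpy as np

def isclose(a, b, rtol=1e-5, atol=1e-8, equal_nan=False):
    # elementwise test: |a - b| <= atol + rtol * |b|
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    close = np.abs(a - b) <= atol + rtol * np.abs(b)
    if equal_nan:
        # NaN compares unequal to everything, so opt in to NaN == NaN
        close |= np.isnan(a) & np.isnan(b)
    return close
```

Note that without `equal_nan=True`, two NaNs are never close, matching IEEE 754 comparison rules.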
## Processing
::: tinygrad.Tensor.avg_pool2d
::: tinygrad.Tensor.max_pool2d
::: tinygrad.Tensor.conv2d
::: tinygrad.Tensor.conv_transpose2d
::: tinygrad.Tensor.dot
::: tinygrad.Tensor.matmul
::: tinygrad.Tensor.einsum
::: tinygrad.Tensor.cumsum
::: tinygrad.Tensor.cummax
::: tinygrad.Tensor.triu
::: tinygrad.Tensor.tril
::: tinygrad.Tensor.interpolate
::: tinygrad.Tensor.scatter
::: tinygrad.Tensor.scatter_reduce
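Several of these ops are related: `einsum` generalizes `dot`/`matmul`, and `cumsum`/`cummax` scan along an axis instead of reducing it away. A minimal NumPy sketch of those relationships (illustrative only, not the tinygrad implementation):

```python
import numpy as np

A = np.arange(6).reshape(2, 3)
B = np.arange(12).reshape(3, 4)

# "ij,jk->ik" contracts the shared j index: ordinary matrix multiply
C = np.einsum("ij,jk->ik", A, B)
assert np.array_equal(C, A @ B)

# cumsum/cummax keep the scanned axis, producing running totals/maxima
x = np.array([3, 1, 4, 1, 5])
assert np.array_equal(np.cumsum(x), [3, 4, 8, 9, 14])
assert np.array_equal(np.maximum.accumulate(x), [3, 3, 4, 4, 5])
```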
## Neural Network (functional)
::: tinygrad.Tensor.linear
::: tinygrad.Tensor.sequential
::: tinygrad.Tensor.layernorm
::: tinygrad.Tensor.batchnorm
::: tinygrad.Tensor.dropout
::: tinygrad.Tensor.one_hot
::: tinygrad.Tensor.scaled_dot_product_attention
::: tinygrad.Tensor.binary_crossentropy
::: tinygrad.Tensor.binary_crossentropy_logits
::: tinygrad.Tensor.sparse_categorical_crossentropy
::: tinygrad.Tensor.cross_entropy
::: tinygrad.Tensor.nll_loss
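The loss functions here build on softmax: cross-entropy over integer class targets is the mean negative log-probability of the target class. A hedged NumPy sketch of that relationship (the helper names are illustrative, not tinygrad's API):

```python
import numpy as np

def softmax(x, axis=-1):
    # shift by the row max for numerical stability; the result is unchanged
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_entropy(logits, target):
    # mean negative log-likelihood of the integer target class, from raw logits
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(target)), target]))
```

With uniform logits over two classes, the loss is `log(2)`, the entropy of a fair coin.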