Commit Graph

14 Commits

Author SHA1 Message Date
Clément Verrier
a7f91224eb add Tensor.isclose() (#8844)
* add `Tensor.isclose()`

* support `equal_nan`

so as to match PyTorch's behavior

* update unit tests

* remove some tests temporarily

* re-enable one test

* re-enable other test

* try to fix failing tests during CI

* save one line of code

---------

Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
2025-02-17 10:11:40 -05:00
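
A minimal usage sketch of the `Tensor.isclose()` added in this commit, assuming it follows the PyTorch-style signature `isclose(other, rtol=1e-05, atol=1e-08, equal_nan=False)` implied by the commit messages:

```python
from tinygrad import Tensor

a = Tensor([1.0, 2.0, float("nan")])
b = Tensor([1.0, 2.0001, float("nan")])

# elementwise |a - b| <= atol + rtol * |b|; NaN != NaN by default
print(a.isclose(b).numpy())                  # [ True False False] (assumed defaults)
# equal_nan=True makes NaN positions compare equal, matching PyTorch
print(a.isclose(b, equal_nan=True).numpy())  # [ True False  True]
```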
Josh Moore
1f9d2442b9 Add Tensor.scatter_reduce (#8947)
* pytorch scatter -> scatter_reduce

* WIP scatter_reduce implementation

* _pre_scatter return type hint

* split out src, mask to satisfy linter

* Add src cast back in

* dict of lambdas instead of ifs

* sum and prod reduction ops with include_self

* add reduce arg error message

* add amax and amin reduction ops

* Fix include_self for higher dims

* Simplify

* Simplify amax and amin too

* Pull include_self logic out into _inv_mask function

* reduce arg cannot be None for scatter_reduce

* Fix self-mask issue

* Add mean reduce op

* Add tests

* any() not needed here

* remove comment

* End support for Tensor src with reduce arg in tinygrad scatter

* Process index, dim inside actual functions

* Add scatter_reduce to onnx

* Add excluded onnx ScatterElements reduction tests back in

* Save 2 lines on the mask helpers

* Update docs

* Add include_self=False tests

* cleanup

* Remove unneeded helper function

---------

Co-authored-by: chenyu <chenyu@fastmail.com>
2025-02-13 09:08:54 -05:00
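
A minimal sketch of the `Tensor.scatter_reduce` described above, assuming a PyTorch-style signature `scatter_reduce(dim, index, src, reduce, include_self=True)` with the reduce modes named in the commit (sum, prod, mean, amax, amin):

```python
from tinygrad import Tensor

self_t = Tensor([1.0, 2.0, 3.0, 4.0])
index  = Tensor([0, 0, 1, 1])
src    = Tensor([10.0, 20.0, 30.0, 40.0])

# reduce src into a copy of self_t at the positions given by index
out = self_t.scatter_reduce(0, index, src, reduce="sum")
print(out.numpy())   # [31. 72.  3.  4.]  (1+10+20, 2+30+40, untouched, untouched)

# include_self=False starts the reduction from src alone at scattered positions
out = self_t.scatter_reduce(0, index, src, reduce="amax", include_self=False)
print(out.numpy())   # [20. 40.  3.  4.]
```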
geohotstan
cea5853cfa add Tensor.scatter (#7737)
* working I think

* where are my onnx scatter tests??

* forward_only for now

* try if the nan hack fixes NV

* looks like issue is different... CUDA WHY

* oops that was wrong. Try if this fixes CUDA

* simpler multiply

* actually finish this up tomorrow morning :x

* fix tests?

* improve tests

* improve test and implementation

* fix ruff

* complete but lots of expected failure...

* reviewed tests

* add onnx tests

* is this a processing op?

* add return type to indicate that it's not in-place

* final cleanups

* use or and improve tests a little

* add masked_index_select

* call it masked_setitem instead

* try

* FIXED

---------

Co-authored-by: chenyu <chenyu@fastmail.com>
2024-11-27 10:52:04 -05:00
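
A minimal sketch of the out-of-place `Tensor.scatter` added here, assuming PyTorch-style `scatter(dim, index, src)` semantics (the commit notes the result is a new tensor, not an in-place write):

```python
from tinygrad import Tensor

base  = Tensor.zeros(3, 5)
index = Tensor([[0, 1, 2, 0]])
src   = Tensor([[1.0, 2.0, 3.0, 4.0]])

# for dim=0: out[index[i][j]][j] = src[i][j], everything else copied from base
out = base.scatter(0, index, src)
print(out.numpy())
# [[1. 0. 0. 4. 0.]
#  [0. 2. 0. 0. 0.]
#  [0. 0. 3. 0. 0.]]
```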
chenyu
3b26e51fce Tensor.cummax (#7854)
generalized the existing cumsum to take Ops.MAX in addition to Ops.ADD
2024-11-22 15:55:02 -05:00
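
A minimal sketch of `Tensor.cummax`, assuming it mirrors the existing `cumsum` interface (an axis argument) and returns only the running maxima, not the (values, indices) pair PyTorch returns:

```python
from tinygrad import Tensor

t = Tensor([2.0, 1.0, 3.0, 0.0, 5.0])
print(t.cumsum(0).numpy())   # [ 2.  3.  6.  6. 11.]
print(t.cummax(0).numpy())   # [2. 2. 3. 3. 5.]  running maximum along axis 0
```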
geohotstan
9c41c376d3 add Tensor.nll_loss (#7683)
* move nll_loss to new branch

* make nll_loss examples practical

* self *is*

* add to docs

* small
2024-11-13 13:12:13 -05:00
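
A minimal sketch of `Tensor.nll_loss`, assuming it consumes log-probabilities and integer class targets the way `torch.nn.functional.nll_loss` does:

```python
from tinygrad import Tensor

logits = Tensor([[2.0, 0.5, 0.1], [0.2, 3.0, 0.3]])
target = Tensor([0, 1])

# nll_loss expects log-probabilities, so take log_softmax first
loss = logits.log_softmax(axis=-1).nll_loss(target)
print(loss.item())   # mean negative log-likelihood of the target classes
```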
jeffzh4ng
19a7e41113 implement logcumsumexp (#6921)
* implement logcumsumexp

* change axis=None to axis=0
2024-10-06 10:45:36 -04:00
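
A minimal sketch of `logcumsumexp`, a numerically stable running log-sum-exp along an axis (default `axis=0` per the follow-up commit):

```python
from tinygrad import Tensor

t = Tensor([0.0, 1.0, 2.0, 3.0])

# logcumsumexp(t)[i] == log(sum(exp(t[:i+1]))), computed without overflowing exp
print(t.logcumsumexp().numpy())
# naive (less stable) reference for comparison
print(t.exp().cumsum(0).log().numpy())
```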
chenyu
590c0922b6 Tensor.prod (#6250)
* Tensor.prod

a new reduce op!

* onnx ReduceProd
2024-08-23 10:06:32 -04:00
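
A minimal sketch of the new `Tensor.prod` reduce op, assuming it follows the same axis/keepdim convention as the other reductions such as `sum`:

```python
from tinygrad import Tensor

t = Tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
print(t.prod().item())          # 720.0, product over all elements
print(t.prod(axis=1).numpy())   # [  6. 120.], per-row product
```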
Alessandro Benetti
9328248610 support for std_mean and cross_entropy (#6181)
* support for std_mean and cross_entropy (#3)

* Cross entropy and std mean support

* remove extra examples
2024-08-19 12:06:44 -07:00
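
A minimal sketch of the two additions, assuming `std_mean` returns a (std, mean) pair like `torch.std_mean` and `cross_entropy` takes raw logits plus integer class targets like `torch.nn.functional.cross_entropy`:

```python
from tinygrad import Tensor

t = Tensor([[1.0, 2.0], [3.0, 4.0]])
std, mean = t.std_mean()
print(std.item(), mean.item())               # std and mean over all elements

logits = Tensor([[2.0, 0.1, 0.1], [0.1, 0.1, 2.0]])
target = Tensor([0, 2])
print(logits.cross_entropy(target).item())   # mean cross-entropy loss
```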
George Hotz
0a8668cf30 improvements to docs 2024-08-07 09:57:24 -07:00
Eitan Turok
39c8c9c00a Add docs (#5942)
* init commit

* finish writing

* add to docs

* fix docs

* fix typo

* delete new line

* rename to tensor properties

---------

Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
2024-08-07 07:38:51 -07:00
chenyu
0afcbfae84 docs: add Tensor.interpolate to doc page (#5510) 2024-07-16 14:17:19 -04:00
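
A minimal sketch of `Tensor.interpolate`, assuming a `size=...` target shape applied to the trailing dimensions and a linear default mode, roughly analogous to `torch.nn.functional.interpolate`:

```python
from tinygrad import Tensor

t = Tensor([[[1.0, 2.0, 3.0, 4.0]]])   # shape (1, 1, 4): (batch, channel, width)
up = t.interpolate(size=(8,))          # resample the last dimension to length 8
print(up.shape)                        # (1, 1, 8)
print(up.numpy())
```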
chenyu
6856f915d6 Tensor.any and Tensor.all (#5320)
does not work in PTX yet due to how boolean tensors are handled
2024-07-07 14:36:00 -04:00
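
A minimal sketch of `Tensor.any` and `Tensor.all`, assuming they reduce over all elements by default and accept the usual axis argument:

```python
from tinygrad import Tensor

t = Tensor([[True, False], [True, True]])
print(t.any().item())           # True: at least one element is truthy
print(t.all().item())           # False: not every element is truthy
print(t.all(axis=1).numpy())    # [False  True], per-row reduction
```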
chenyu
c1e330f302 Tensor.int and Tensor.bool (#5317) 2024-07-07 11:52:58 -04:00
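
A minimal sketch of the `Tensor.int` and `Tensor.bool` helpers, assumed to be shorthand casts to an integer dtype (int32) and bool respectively:

```python
from tinygrad import Tensor

t = Tensor([0.0, 0.5, 2.0])
print(t.int().numpy())    # [0 0 2]   truncating cast to int
print(t.bool().numpy())   # [False  True  True]   nonzero -> True
```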
wozeparrot
b2b49cef6f split tensor docs (#4754) 2024-05-28 11:03:52 -07:00