* update the backend to fix torch deprecation warning
* use param_hook to avoid full backward hook needlessly firing on inputs which do not require gradients
* fix indentation
---------
Co-authored-by: chenyu <chenyu@fastmail.com>
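A minimal sketch of the per-parameter hook idea from the first commit above, using only stock torch API; the actual wiring in the backend may differ:

```python
import torch

# Illustration only: hooking gradients on the parameters themselves replaces a
# module-wide register_full_backward_hook, so inputs with requires_grad=False
# never trigger anything.
model = torch.nn.Linear(4, 2)
for p in model.parameters():
  p.register_hook(lambda grad: grad)  # fires once per parameter gradient

x = torch.randn(3, 4)                 # plain input, does not require grad
model(x).sum().backward()
```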
* Implement private _linalg_eigh function for tensor eigenvalue decomposition in torch backend
* Add unit test for linalg.eigh function in TestTorchBackend
This test verifies the eigenvalue decomposition of a 2x2 tensor using the linalg.eigh function, ensuring the computed eigenvalues and reconstructed tensor match the expected results.
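A hedged sketch of what such a 2x2 check looks like with stock torch calls (the actual assertions in extra/torch_backend/test.py may differ):

```python
import torch

# symmetric 2x2 input; eigh returns ascending eigenvalues and orthonormal eigenvectors
a = torch.tensor([[2.0, 1.0], [1.0, 2.0]])
w, v = torch.linalg.eigh(a)
assert torch.allclose(w, torch.tensor([1.0, 3.0]))
# reconstruction: a == V @ diag(w) @ V^T
assert torch.allclose(v @ torch.diag(w) @ v.T, a, atol=1e-6)
```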
- Implemented a new function `equal` in the torch backend to compare two tensors for equality.
- Added unit tests for the `equal` function to verify its correctness with different tensor inputs.
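A minimal sketch of the semantics `equal` has to provide (a single Python bool, true only for matching shape and elements); the real backend.py code may be written differently:

```python
import torch

def equal(a: torch.Tensor, b: torch.Tensor) -> bool:
  # same shape and every element identical
  return a.shape == b.shape and bool((a == b).all())

assert equal(torch.tensor([1, 2, 3]), torch.tensor([1, 2, 3]))
assert not equal(torch.tensor([1, 2, 3]), torch.tensor([1, 2, 4]))
assert not equal(torch.tensor([1, 2]), torch.tensor([1, 2, 3]))
```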
* Enhance tensor random functions with dtype support
- Updated `aten.uniform_` and `aten.normal_` to include dtype parameter in backend.py
- Added unit tests for uniform and normal tensor generation with specific dtypes in test.py
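A hedged sketch of the behavior those tests pin down (the requested dtype survives the in-place random fills); the exact test bodies in test.py are assumptions:

```python
import torch

x = torch.empty(4, 4, dtype=torch.float64)
x.uniform_(0, 1)
assert x.dtype == torch.float64 and float(x.min()) >= 0.0

y = torch.empty(4, 4, dtype=torch.float32)
y.normal_(mean=0.0, std=1.0)
assert y.dtype == torch.float32
```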
* Refactor test name for clarity
- Renamed `test_normal_dtype` to `test_normal` in `extra/torch_backend/test.py`
- Aims to improve readability and better reflect the test's purpose
* bug in div range folding
* simpler
* oh, this is right for indexing, but the div mod folding needs to be fixed
* reenable
* Passing test_complexity_w_unroll2 (#10068)
* Passing
* remove non_folded_divs
* Add check for negative term in div folding
* Add test
* bump that limit
* fix casted
---------
Co-authored-by: Sieds Lykles <93992551+S-Lykles@users.noreply.github.com>
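A plain-Python illustration (not tinygrad code) of why the negative-term check matters: folding a division such as (x*4 + y)//4 down to x is only sound when the leftover term is known to be non-negative and below the divisor:

```python
# counterexample: with a possibly-negative term the fold is wrong
x, y = 1, -1
assert (x*4 + y) // 4 == 0   # the folded result would wrongly claim 1

# once 0 <= y < 4 is known, the fold is valid
x, y = 1, 3
assert (x*4 + y) // 4 == x
```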
* add kernelize
* remove that
* kernelize returns self
* update abstractions2.py
* kernelize in test_schedule
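A hedged usage sketch for the kernelize commits above; since kernelize returns self it chains directly, but the exact call sites in abstractions2.py and test_schedule are not reproduced here:

```python
from tinygrad import Tensor

# kernelize returns self, so it slots into a chain of ops
a = (Tensor.ones(16) + 2).kernelize()
b = (a * 3).kernelize()
print(b.realize().tolist())
```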
* temp: assert BUFFER_VIEW's existence
* ASSIGN must have a buffer or subbuffer target
* assert and shrink
* fix
* padded setitem
* var
* toposort once
* extra
* base_buffer
* end with BUFFER_VIEW
* setitem for disk
* test_setitem_becomes_subbuffer
* mul slice test
* torch backend fix 1
* non-deterministic
* keep subbuffer
* Add amax support to Tensor operations
- Implemented amax function in backend.py for tensor max operations.
- Added unit tests for amax in test.py to ensure correct functionality.
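A hedged sketch of the behavior the amax tests cover, using the stock torch.amax semantics (maximum over the given dims) as the reference:

```python
import torch

x = torch.tensor([[1., 5., 3.],
                  [4., 2., 6.]])
assert torch.equal(torch.amax(x, dim=1), torch.tensor([5., 6.]))
assert torch.equal(torch.amax(x, dim=0), torch.tensor([4., 5., 6.]))
assert torch.amax(x, dim=(0, 1)).item() == 6.
```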
* Fix formatting in amax output function
- Adjusted spacing in the amax output lambda function in backend.py
- Improved code readability for better maintenance
* fix some tests in test_ops for torch backend(171 failing)
* fix more tests (135 failures)
* fix tests (126 failing)
* handle transposed convs (109 tests failing)
* fix slice
* fix lshift & rshift and more tests (87 tests failing)
* revert accidental change
* remove unnecessary changes (82 failures)
* fix backward for avg_pool2d (78 failures)
* fix backward for avg_pool2d (78 failures)
* fix replication backpass
* fix reflection pad back pass (71 failures)
* cummax with indices, aten.mv and move out methods (67 failures)
* extract avg_pool2d and avg_pool3d to separate functions (62 failures)
* revert changes for cat_out
* rewrite avg_pool and pad without repetition
* remove duplicates from decomps
* slice rewrite and add slice_backward (59 failures)
* add dtype fixup from https://github.com/tinygrad/tinygrad/pull/9297
* fix linter error and remove Tensor.pad (48 failures)
* add select_backward and index_put (40 failures)
* fix some more tests (36 failures)
* fix more tests (12 failures)
* some cleanups and fix couple more tests (10 failures)
* cleaner way to write upsample
* some more upsample cleanups
* use lambda for upsample
* add autowrapper for upsample forward
* cumsum and max_dim without aten functions
* revert _log_softmax
* fix more tests (1 failure)
* make linter happy
* move import to appropriate func
* make linter happy
* add codes for noqa
* some more refactors
* remove comment
* remove dependency on aten function for conv backward
* some more refactors
* add returns
* revert a change from merge
* some cleanups
* remove whitespace
* remove ruff change
* revert upsample
* add masked_fill_.Tensor and scatter.src_out
* add todo
* fix test_biased_conv2d
* fix test_var_one_in_axis & test_std_one_in_axis but break test_biased_conv2d :(
* revert torch_debug
* revert torch_debug
* skip test_gather_failure for the tiny backend
* make padding registration more concise
* add nonzero
* remove scatter_add since we already have the out
* fix scatter
* remove some repetition
* make upsample backward registrations more concise
* remove select.int
* use Tensor.cumsum
* realize conv2d outputs before backward to fix test_biased_conv2d
* add a todo for realize (1 failure)
* add new_empty and new_empty_strided
* make test_pad_circular_mode forward only and remove redundant stuff
* fix linter errors
* remove expect failure
* just tb
* slice is a view_op
* contiguous only when lazydata.is_realized
* fix backward for test_pad_circular_mode
* revert torch.nn.functional.pad override
* add transpose.int and make constant_pad_nd contiguous
* slice_backwards has no kwargs
---------
Co-authored-by: chenyu <chenyu@fastmail.com>
* probably how cumprod should look
* update _cumalu to work with MUL
* shorter
* cumprod testing
* clean
* more cleanup
* add cumprod to torch backend.
* make it look like cumsum
* mypy fix
---------
Co-authored-by: chenyu <chenyu@fastmail.com>
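A behavioral reference for the cumprod commits above: cumprod is cumsum with multiplication as the accumulator, which is what routing it through _cumalu with MUL amounts to. Stock torch is used purely as the reference here:

```python
import torch

x = torch.tensor([1., 2., 3., 4.])
# cumulative product vs. cumulative sum over the same axis
assert torch.equal(torch.cumprod(x, dim=0), torch.tensor([1., 2., 6., 24.]))
assert torch.equal(torch.cumsum(x, dim=0), torch.tensor([1., 3., 6., 10.]))
```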
* why does max_unpool2d feel slower than out.gradient ...
* slightly cleaner
* what happened to ruff
* need to think about this some more
* slightly faster now?
* clean up, 1 more failing edge case
* ok good
* working TINY_BACKEND
* nit doc wording
* retry CI
* wow argmax is so good
* 1 less line
* clean up and better variable names
* is this torch thing right...?
* add more tests
* slap a TODO on it
* clean ups
* prettier looking code and fix ceil mode test
* add return types and some docs
* ok that was a bad example since indices == value, just no example
* train_shakespeare_char.py works
* move aten.where.self_out to tiny_backend_out
* fix memory leak
* corealize in the backward_hook
* Update backend.py
---------
Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
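For reference, the torch behavior the max_unpool2d work above has to reproduce: pooled values are scattered back to the positions recorded by max_pool2d's indices, zeros everywhere else. Stock torch ops only, as an illustration:

```python
import torch
import torch.nn.functional as F

x = torch.tensor([[[[1., 2.],
                    [3., 4.]]]])
out, idx = F.max_pool2d(x, kernel_size=2, return_indices=True)
restored = F.max_unpool2d(out, idx, kernel_size=2)
# only the argmax position keeps its value
assert torch.equal(restored, torch.tensor([[[[0., 0.],
                                             [0., 4.]]]]))
```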
* poc
* repeated values fail, sigh
* is this being timed out?
* fix up/down names
* bitonic v2, does this run?
* bitonic v3, faster
* bitonic v3.1, faster
* bitonic v3.1.1, same speed unlucky
* support dim and indices
* bitonic v3.2, simpler code, TODO repeated indices
* bruv gimme green for once cmon
* cat (stack) implementation, slow but maybe one day when cat is fast meow
* revert to v3.2
* bitonic v4, who let the cats out edition
* clean up variable names
* figured out repeated indices :D
* ruff check --fix
* use sort for topk
* add Tensor.sort everywhere
* fix docs and add some types
* slightly better variable names
* am I doing torch inplace correctly?
* delegate sort to values_stable
* add a contig, faster first sort
* maybe don't test_inplace
---------
Co-authored-by: chenyu <chenyu@fastmail.com>
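A hedged sketch of "use sort for topk": once sort exists, topk is a sort plus a slice. Names and signature below follow the torch convention and are not the literal tinygrad code:

```python
import torch

def topk_via_sort(x: torch.Tensor, k: int, dim: int = -1, largest: bool = True):
  # sort along dim, then keep the first k entries
  values, indices = x.sort(dim=dim, descending=largest)
  return values.narrow(dim, 0, k), indices.narrow(dim, 0, k)

x = torch.tensor([3., 1., 4., 1., 5.])
v, i = topk_via_sort(x, 2)
assert torch.equal(v, torch.tensor([5., 4.])) and torch.equal(i, torch.tensor([4, 2]))
```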
* fix leak, realize everything on torch optim step
* only realize a subset
---------
Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
* add torch inplace tests
* first set of tests passing
* wrap all inplace funcs, add more tests
* fixes and wrap more functions
* fix all uint8 tests to avoid slow tests
* fix the one test
* another test, another fix
* and one more, works for ddp now
* something on contiguous, cleanup
---------
Co-authored-by: qazal <77887910+Qazalin@users.noreply.github.com>
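A heavily hedged sketch of one way to "wrap all inplace funcs": compute the result out-of-place, then copy it back into the destination so aliases observe the update. The wrapper name and the copy-back strategy are assumptions, not necessarily what backend.py does:

```python
import torch

def wrap_inplace(fn):
  # hypothetical helper: run the functional op, then write back into self
  def wrapped(self: torch.Tensor, *args, **kwargs):
    self.copy_(fn(self, *args, **kwargs))
    return self
  return wrapped

add_ = wrap_inplace(torch.add)
x = torch.ones(3)
add_(x, 2)
assert torch.equal(x, torch.tensor([3., 3., 3.]))
```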
* terrible but somewhat working impl
* linux behaves differently than macos?
* slightly better impl
* small clean up; haven't figured this out yet
* better
* torch has different behavior on linux and macos for duplicated values
* add sum docs
* fix test
* add torch return_type test
* add an exception test
* wrap_fxn instead, and move op lower in order
* better repeated values test
* rerun ci