* add default gate in index
* assert store
* add TestRendererFailures
- move test_gated_store_with_alu to the new TestRendererFailures class for tests that fail on multiple renderers
- add test_renderer_failures.py, run on Python CI (sketch of the suite's shape below)
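A minimal sketch of the shape such a suite could take (the renderer set and the renders_correctly helper are made up for illustration; the real tests build and run actual UOp patterns):

```python
import unittest

def renders_correctly(renderer: str, pattern: str) -> bool:
  # stand-in for: build the UOp pattern, render it with this backend,
  # run it, and compare against a reference result
  return True

class TestRendererFailures(unittest.TestCase):
  RENDERERS = ["clang", "metal", "ptx"]  # illustrative set
  def test_gated_store_with_alu(self):
    for r in self.RENDERERS:
      with self.subTest(renderer=r):
        self.assertTrue(renders_correctly(r, "gated_store_with_alu"))

if __name__ == "__main__":
  unittest.main()
```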
* add test for gated index in 2d
* test TestRendererFailures
Introduced in #9585, probably when I incorrectly resolved a merge conflict
while rebasing an old, MI300X-only branch. Seems to be the source of the
multi-GPU beam llama hangs.
* init lds noop and lds_0 spec
* refactor lds helper test
* fix typo
* test all lds at the same time
* change comment
* comment
* start test_lds_full
* test_lds_tc
* add tc spec
* AMDComputeQueue.wreg
Used to be part of #9428; I think it's much more readable than repeating
the ~same PM4 things over and over again, especially with separate .encode
calls (a sketch of the idea follows).
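A self-contained sketch of the wreg idea, not tinygrad's actual encoding (the header layout and the 0x76/0x2c00 constants follow AMD's public PM4 headers, but may differ here): one helper emits a SET_SH_REG packet so call sites stop hand-building the same dwords.

```python
PACKET3_SET_SH_REG, SET_SH_REG_START = 0x76, 0x2c00  # assumed, from AMD's PM4 headers

def packet3(op: int, count: int) -> int:
  # type-3 PM4 header: [31:30]=3, [29:16]=body dwords minus one, [15:8]=opcode
  return (3 << 30) | ((count & 0x3fff) << 16) | ((op & 0xff) << 8)

class AMDComputeQueue:
  def __init__(self): self.q: list[int] = []
  def wreg(self, reg: int, *values: int):
    # one SET_SH_REG packet: header, register offset from the SH base, then the values
    self.q += [packet3(PACKET3_SET_SH_REG, len(values)), reg - SET_SH_REG_START, *values]
    return self
```

Call sites then shrink to something like `q.wreg(reg, lo, hi)` instead of rebuilding the packet every time.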
* fix indentation
* fix some tests in test_ops for the torch backend (171 failing)
* fix more tests (135 failures)
* fix tests (126 failing)
* handle transposed convs (109 tests failing)
* fix slice
* fix lshift & rshift and more tests (87 tests failing)
* revert accidental change
* remove unnecessary changes (82 failures)
* fix backward for avg_pool2d (78 failures)
* fix replication pad backward pass
* fix reflection pad backward pass (71 failures)
* cummax with indices, aten.mv, and move out methods (67 failures)
* extract avg_pool2d and avg_pool3d to separate functions (62 failures)
* revert changes for cat_out
* rewrite avg_pool and pad without repetition
* remove duplicates from decomps
* slice rewrite and add slice_backward (59 failures)
* add dtype fixup from https://github.com/tinygrad/tinygrad/pull/9297
* fix linter error and remove Tensor.pad (48 failures)
* add select_backward and index_put (40 failures)
* fix some more tests (36 failures)
* fix more tests (12 failures)
* some cleanups and fix a couple more tests (10 failures)
* cleaner way to write upsample
* some more upsample cleanups
* use lambda for upsample
* add autowrapper for upsample forward
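Hedged sketch of the lambda-plus-autowrapper idea (the register helper, the registry, and the simplified aten signatures are assumptions; the real backend wires implementations through torch's dispatcher): near-identical upsample variants collapse into one-liners over Tensor.interpolate.

```python
from tinygrad import Tensor

TORCH_OPS = {}  # hypothetical registry: aten op name -> implementation
def register(name):
  def deco(fn):
    TORCH_OPS[name] = fn
    return fn
  return deco

# one lambda per variant instead of a near-identical def for each (signatures simplified)
register("aten::upsample_nearest2d")(lambda self, size: self.interpolate(size, mode="nearest"))
register("aten::upsample_bilinear2d")(lambda self, size, align_corners=False:
  self.interpolate(size, mode="linear", align_corners=align_corners))
```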
* cumsum and max_dim without aten functions
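max_dim plausibly falls straight out of tinygrad primitives rather than an aten decomposition (a sketch, assuming aten.max.dim's (values, indices) contract; cumsum maps onto Tensor.cumsum the same way):

```python
from tinygrad import Tensor

def max_dim(self: Tensor, dim: int, keepdim: bool = False) -> tuple[Tensor, Tensor]:
  # aten.max.dim returns both the max values and their indices along dim;
  # tinygrad's max/argmax provide exactly that without any aten fallback
  return self.max(axis=dim, keepdim=keepdim), self.argmax(axis=dim, keepdim=keepdim)
```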
* revert _log_softmax
* fix more tests (1 failure)
* make linter happy
* move import to appropriate func
* make linter happy
* add error codes to noqa comments
* some more refactors
* remove comment
* remove dependency on aten function for conv backward
* some more refactors
* add returns
* revert a change from merge
* some cleanups
* remove whitespace
* remove ruff change
* revert upsample
* add masked_fill_.Tensor and scatter.src_out
* add todo
* fix test_biased_conv2d
* fix test_var_one_in_axis & test_std_one_in_axis but break test_biased_conv2d :(
* revert torch_debug
* skip test_gather_failure for the tiny backend
* make padding registration more concise
* add nonzero
* remove scatter_add since we already have the .out variant
* fix scatter
* remove some repetition
* make upsample backward registrations more concise
* remove select.int
* use Tensor.cumsum
* realize conv2d outputs before backward to fix test_biased_conv2d
* add a todo for realize (1 failure)
* add new_empty and new_empty_strided
* make test_pad_circular_mode forward only and remove redundant stuff
* fix linter errors
* remove expect failure
* just tb
* slice is a view_op
* contiguous only when lazydata.is_realized
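The guard is presumably a one-liner like this (a sketch; lazydata.is_realized is the check the commit names, the helper name is made up):

```python
from tinygrad import Tensor

def maybe_contiguous(t: Tensor) -> Tensor:
  # only force a copy when the underlying buffer is already realized;
  # unrealized views can stay lazy
  return t.contiguous() if t.lazydata.is_realized else t
```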
* fix backward for test_pad_circular_mode
* revert torch.nn.functional.pad override
* add transpose.int and make constant_pad_nd contiguous
* slice_backward has no kwargs
---------
Co-authored-by: chenyu <chenyu@fastmail.com>