* LazyBuffer = UOp
* try 4 at this diff
* skip optimization tests p1
* raise kernel count expectations
* BIND isn't the _only_ uop that can become a tensor
* fix test_ones_sum on symbolic
* bump openpilot, correctness first
* offset on assign is fine
* uop is immutable
* what if this was higher
* more optimization skips
* instant fold const copy
* test_multitensor shouldn't expect buffer for unrealized
* move copy folder to upats
* start BUFFER_VIEW
* kinda BUFFER_VIEW
* Revert "kinda BUFFER_VIEW"
This reverts commit 94b4fe3040.
* BUFFER_VIEW try 2
* linter and missed _device
* pylint
* keep Ops.CONTIGUOUS
* always BUFFER_VIEW disk
* test
* cpu isn't a real device
* buffer references after del
* add that back
* start bringing some of these back
* more test updates
* simpler simplify copy
* subbuffer everything
* this is fine with buffer view
* cleanup the diff in test/ 1
* copy is one thing
* diff pruning
* diff pruning 2
* oh bind unbinds way too early
* extra
* more diff pruning
* more const folding
* experiment with symbolic here
* Revert "experiment with symbolic here"
This reverts commit cb87d61f7a.
* Revert "more const folding"
This reverts commit 2a7d258a2b.
* Revert VALID early folding
This reverts commit 4074f52317.
* storing const is fine
* fix test_prefer_half_buffer
* iterate on test_real_world
* this fixes test_train_mnist memory, breaks everything else
* Revert "this fixes test_train_mnist memory, breaks everything else"
This reverts commit dccfcbe068.
* always expect buffer to exist here
* temp debug: something is mutating lazydata in compile3
* Revert "temp debug: something is mutating lazydata in compile3"
This reverts commit 71400f0d55.
* everything back to normal
* compile3
* compile3 test
* start captured jit work, that test passes
* finalized memory skip set
* linter err
* back to base here
* tiny metaop cleanup
* print tensor
* 4th time this unbind got me
* green pickle
* tensor_variable sanity
* cast sanity
* link from the reds
* COPY sanity + minor repr change
* you can exist
* enable test_winograd
* bye bye nbytes
* danger, uop is mutating
* real become
* delete those from uop init
* put it in buffer init
* buffer inits with so much stuff
* buffer pickle try 2
* toposort can't be a cached property
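A generic illustration (the `Graph` class is hypothetical, not tinygrad's) of why a toposort over a mutating structure can't be a cached property: `functools.cached_property` stores the first result in the instance `__dict__` and never recomputes it, so later mutations go unseen.

```python
from functools import cached_property

class Graph:
    def __init__(self):
        self.nodes = [1, 2]

    @cached_property
    def toposort_cached(self):
        # computed once, then served stale from __dict__ forever
        return list(self.nodes)

    def toposort(self):
        # plain method: recomputed on every call, always current
        return list(self.nodes)

g = Graph()
_ = g.toposort_cached        # prime the cache
g.nodes.append(3)            # mutate the graph
assert g.toposort_cached == [1, 2]      # stale: mutation is invisible
assert g.toposort() == [1, 2, 3]        # fresh each call
```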
* fix test_schedule_gc_with_inputs
* remove all @unittest.skip(gc)
* Revert "remove all @unittest.skip(gc)"
This reverts commit 9d8d92dd85.
* reenable real world + test_schedule_gc
* test: RUN_PROCESS_REPLAY=0
* fix pickle jit
* test changes
* reenable test_lru_alloc and TestTrain
* fix imagedtype
* bring pr back
* reenable 3 gc tests
* test_schedule better diff
* disable SPLIT_REDUCEOP
* test_save_all_dtypes looks fixed
* fix metadata
* skip that one
* fix viz by not pickling buffers
* simple test for const folding
* bring split reduceop back
* add simplify_alu
* simplify_binop fixes a test
* fix cast folding
* disable that test
* that test looks fine
* changes from delete_lazy pruning p1
* cast folding and children base
* test: cast folding from pruning branch
* green test_sgd_4convs_fuse_conv_bw
* enable some indexing folding
* test_complex_backward is fixed
* prune more, 295 -> 233
* fix test_multi_const_folding_literal
* fix double copy
* early become test
* ooooops
* clean up ctx in all big_graph
* fix openpilot 208 kernels
* train_cifar is fine now
* fix CAST_BEFORE_VIEW
* even faker const
* back to 13
* mark expectedFailure
* fine don't create them
* test_multi_const_folding_tensor
---------
Co-authored-by: George Hotz <geohot@gmail.com>
Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
* newest newer than new refactor of getitem
* hmmm
* hmmmmmmmmmmmmmmmmm
* bro.
* ???
* small improvements
* cleaner, but why u gotta do this to me mypy
* fix, but still dunno about mypy
* even better
* try again? Passes locally
* use match
* fix mypy
* better
* broooooo check this out
* fix mypy
* bug fix
* fixed
* polish
* advanced setitem draft
* add setitem tests
* fix for tests
* small change
* handle repeated indices with test
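The repeated-index case in setitem has a classic pitfall, shown here with a generic NumPy sketch (not the actual tinygrad code): fancy-index assignment with duplicate indices keeps only the last write per index, while `np.add.at` accumulates one update per occurrence.

```python
import numpy as np

a = np.zeros(3)
a[[0, 0, 1]] += 1            # buffered: index 0 is written, not accumulated
assert a.tolist() == [1.0, 1.0, 0.0]

b = np.zeros(3)
np.add.at(b, [0, 0, 1], 1)   # unbuffered: index 0 receives both updates
assert b.tolist() == [2.0, 1.0, 0.0]
```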
* fix v broadcasting to mask
* clean up a bit
* open more tests
* clean up, fixes issue with scalar tensor index
* fix
* fix index_put_ and linter
* add type annotation
* done
* remove non contiguous hack
* woops linter
* name fix
* add back type notation
* more type notation
* final
* linter
* check lazydata not shared
* no numpy
* no numpy
* rename
* index benchmark
* linter
* no cloning time
* rm benchmark
* new function
* rm contiguous and cast early
---------
Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
Co-authored-by: chenyu <chenyu@fastmail.com>
* compile raises CompileError; skip only RuntimeError in multiprocess beam
A renderer error with multiprocess should not be skipped by beam.
* use `==` for dtype to dtype comparison
* that needs to be is
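A generic Python sketch of the `==` vs `is` distinction behind these two commits (the `DType` class here is hypothetical, standing in for any singleton-style dtype object): `==` can be satisfied by a structurally equal copy, while `is` only matches the canonical instance.

```python
class DType:
    # hypothetical dtype with value-based equality
    def __init__(self, name):
        self.name = name
    def __eq__(self, other):
        return isinstance(other, DType) and self.name == other.name
    def __hash__(self):
        return hash(self.name)

float32 = DType("float32")   # the "canonical" singleton
copy = DType("float32")      # a structurally equal impostor

assert float32 == copy       # equality: satisfied by the copy
assert float32 is not copy   # identity: only the canonical object matches
assert float32 is float32
```

If dtypes are interned as singletons, `is` is both stricter and faster; if copies can appear (e.g. after unpickling), `==` is the safe choice.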
* typo
* no ret value and just force contiguous
* ok revert contiguous stuff
* actually do force it contiguous
* revert again lol
* add simple regression test
* add assert for MLB
* guess we're contiguous everything from now on
* lol ugly af empty return...
* don't change order cuz i don't get disk
* init
* feat: add _to_const_val to getitem
* doc: changed docs
* docs: updated more docs
* merge: improved/fancy
* better error msg, minor cleanups
* feat: added index_put to test_indexing
* clean: test_indexing
* revert: gather changes lol
* refactor: use dict for tracking tensor indexing, also asserts for type
* oooooooooops
* ugh
* will revert this commit xD
* fix: removed asserts
* improvement: made in-line if statement clearer
* improved err message and improved slice_int tests
* fix: recover accidentally deleted line
* finishing touches
* reword some docs and del torch device tests in test_indexing
* del some redundant tests
* revert: gather asserts, do it in separate PR
* fix some data_ptr stuff
* done
* done done
* fix broadcasted logic if there's 0 in shapes
A size-1 dimension should always expand into 0, not the other way around. Fixed matmul with 0 in input shapes.
This covers forward only for now; backward is more involved and would need changes to the 0-size shortcuts.
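The rule being fixed can be checked with a generic NumPy sketch (NumPy is used here only as a reference for the broadcasting semantics): a dimension of size 1 broadcasts into a dimension of size 0, and matmul with a 0-size input yields a 0-size output rather than an error.

```python
import numpy as np

# size 1 expands into size 0, not the other way around
assert np.broadcast_shapes((2, 1), (2, 0)) == (2, 0)

# matmul with a 0-size operand produces a 0-size result
out = np.ones((0, 4)) @ np.ones((4, 3))
assert out.shape == (0, 3)
```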
* fix tests
* lazy rewrite, try 2
* min fix tests
* pass contig test
* put broken pads back
* move that to realize
* no contig child fixes array packing
* so wrong
* now that's correct
* base children
* fix bind issues
* disable to_image_idx
* fix tests
* that failure shouldn't break other tests
* more fixes
* fix torch
* skip failing tests in CI
* 1e-7
* half is broken
* 1e-6 margin of error
* enable test_index and test_advancedindex with pretty diff
* removed contig
* created set_ helper function
* comment change
* del empty line
---------
Co-authored-by: chenyu <chenyu@fastmail.com>
* add some helpers
* I think it should all work..
* fixed get_set_tensor
* done
* del import
* bye bye typing
* style
* remove empty lines lol
* deleted dtype arg
* del trailing space