* better support for platform-dependent flags
* osx test support
* removed unused import and made line length <150
* changed osx ci shm
* lstrip in case SharedMemory._name is passed
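A minimal sketch of what the lstrip guards against, assuming the issue is the leading "/" that POSIX SharedMemory keeps in its private `_name`; the helper name is hypothetical, not tinygrad's disk device code:

```python
from multiprocessing import shared_memory

def attach_shm(name: str) -> shared_memory.SharedMemory:
  # SharedMemory._name carries a leading "/" on POSIX (e.g. "/foo"), while attaching
  # expects the bare name, so strip it defensively in case _name was passed through.
  return shared_memory.SharedMemory(name=name.lstrip("/"))
```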
* lazy rewrite, try 2
* min fix tests
* pass contig test
* put broken pads back
* move that to realize
* no contig child fixes array packing
* so wrong
* now that's correct
* base children
* fix bind issues
* disable to_image_idx
* fix tests
* that failure shouldn't break other tests
* more fixes
* fix torch
* skip failing tests in CI
* 1e-7
* half is broken
* 1e-6 margin of error
* remove the all_int(shape) check in Tensor._loadop
we can support jittable symbolic shape random with custom rand now, and we can formalize it in the test after threefry is ready
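A rough sketch of the kind of guard being relaxed; `all_int` mirrors tinygrad's helper, and the symbolic dim below is only a placeholder to show which shapes the removed check used to reject:

```python
# all_int mirrors tinygrad's helper; the string dim is a stand-in for a symbolic
# Variable, illustrating the shapes that could not pass the old all_int(shape) check.
def all_int(t) -> bool:
  return all(isinstance(s, int) for s in t)

concrete_shape = (4, 8)
symbolic_shape = (4, "batch")  # placeholder for a symbolic Variable dim
assert all_int(concrete_shape) and not all_int(symbolic_shape)
```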
* MOCKHIP false positive
* move everything to code_for_op to reason about it
* loop the loopable parts
* it's not that unreadable
* these are loopable too
* nitpick
* tests p1: replace these with tests that run ALU ops through the actual compiler
* tests p2: compile test_dtype_alu in HIP!
+add to CI
* nobody liked test_renderer
* revert test_dtypes change
* isolated mockhip tests
* don't need the WHERE hack after #2782
+ruff
* bf16 is broken in HIP
job failed in: https://github.com/tinygrad/tinygrad/actions/runs/7232101987/job/19705951290?pr=2778#step:8:73
* picking this back up
* add compile tests for unary ops and binary ops
* MOD is only in ints
* CMPLT won't work after the dtypes PR is merged because it will always be bool
* test all combinations
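A hedged sketch of the op/dtype grid behind "test all combinations", skipping MOD on floats as noted above; the names and tables are illustrative, not the actual test_dtype_alu code:

```python
import itertools, operator

# Illustrative dtype and op tables; not tinygrad's actual test setup.
INT_DTYPES   = ["int8", "int16", "int32", "uint8", "uint16", "uint32"]
FLOAT_DTYPES = ["float16", "float32"]
BINARY_OPS   = {"ADD": operator.add, "SUB": operator.sub, "MUL": operator.mul,
                "MOD": operator.mod, "CMPLT": operator.lt}

def op_supported(op: str, dtype: str) -> bool:
  # MOD is only defined for integer dtypes; CMPLT returns bool once the dtypes PR lands.
  return dtype in INT_DTYPES if op == "MOD" else True

combos = [(op, dt) for op, dt in itertools.product(BINARY_OPS, INT_DTYPES + FLOAT_DTYPES)
          if op_supported(op, dt)]
assert ("MOD", "float32") not in combos and ("ADD", "float32") in combos
```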
* Update cstyle.py
* don't use vload
* no getenv
* set seed
---------
Co-authored-by: qazal <qazal.software@gmail.com>
Co-authored-by: qazal <77887910+Qazalin@users.noreply.github.com>
* invert (broken)
* decent invert
* shapetracker invert works
* plus is meh, invert is good
* support invert mask
* a few more invert tests
* shapetracker math invert test
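A small sketch of what "invert" means for the simplest movement op, a permute; this only shows the idea, not ShapeTracker's actual invert API:

```python
# Inverting a permute: the inverse permutation maps each output axis back to its
# input axis, so applying perm and then its inverse is a no-op. Illustrative only.
def invert_permute(perm: tuple) -> tuple:
  inv = [0] * len(perm)
  for out_axis, in_axis in enumerate(perm):
    inv[in_axis] = out_axis
  return tuple(inv)

assert invert_permute((2, 0, 1)) == (1, 2, 0)
assert invert_permute(invert_permute((2, 0, 1))) == (2, 0, 1)
```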
* Uncripple dtype tests; TestBFloat16DType never actually runs.
* Fix conversion from/to bfloat16.
Call cast() recursively, so that it works for any type combo.
* Run this test on the torch backend as well.
* Add torch.bfloat16.
* Add support for ushort and uint.
* Convert np.uint32 to np.int32 when loading.
* Fix warning.
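A hedged illustration of routing bfloat16 conversion through float32 bit patterns with numpy (which has no native bfloat16); it shows the round-trip idea, not tinygrad's cast() implementation, and uses simple truncation rather than round-to-nearest:

```python
import numpy as np

def f32_to_bf16_bits(x: np.ndarray) -> np.ndarray:
  # bfloat16 is the top 16 bits of a float32, so drop the low mantissa bits.
  return (x.astype(np.float32).view(np.uint32) >> 16).astype(np.uint16)

def bf16_bits_to_f32(b: np.ndarray) -> np.ndarray:
  # Widen back to float32 by shifting the bf16 bit pattern into the high half.
  return (b.astype(np.uint32) << 16).view(np.float32)

x = np.array([1.0, -2.5, 3.14159], dtype=np.float32)
assert np.allclose(bf16_bits_to_f32(f32_to_bf16_bits(x)), x, atol=1e-2)
```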
* the universe is flat as a 2D tensor
* try this
* TESTS
* less lines in test
* don't change all_int since other places use it
* add tests and delete the noqa by making non-aesthetic spacing LOOOOOL
* some reordering
* fix empty list and add tests
* more tests
* add list bool tensors
* clearer with the fewest lines added
* added bool
* oops
* more tests
* improved tests
* oops
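A rough sketch of the nested-list handling these commits are about: flatten the Python list fully and infer a shape, covering empty lists and bools; the helper names are hypothetical, not tinygrad's internals:

```python
# Hypothetical helpers showing the idea: walk the nesting to get a shape (stopping at
# an empty list) and flatten everything into one 1D Python list, bools included.
def infer_shape(data) -> tuple:
  shape = []
  while isinstance(data, list):
    shape.append(len(data))
    if not data: break
    data = data[0]
  return tuple(shape)

def flatten(data) -> list:
  return [y for x in data for y in flatten(x)] if isinstance(data, list) else [data]

assert infer_shape([[True, False], [False, True]]) == (2, 2)
assert infer_shape([]) == (0,)
assert flatten([[1, 2], [3, 4]]) == [1, 2, 3, 4]
```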