* move gpuctypes in tree
* fix mypy
* regex exclude
* autogen sh
* mypy exclude
* does that fix it
* fix mypy
* add hip confirm
* verify all autogens
* build clang2py
* opencl headers
* gpu on 22.04
* add onnx test_reduce_log_sum_exp
* more reuse
* more
* stuff
* good CenterCropPad
* imports
* good ArrayFeatureExtractor
* pretty good Pad
* stuff
* stuff
* onnx.py
* Atan
* pass int8 test
* dtype related
* fastmath stuff
* Resize linear
* fix CI
* move back
* init
* test: added dtype tests for maximum
* fix: separate maximum const and maximum tensors
* fix: del useless line
* fix: some dtypes
* code golf: shave off a few more lines
* fix: add a small helper function
* fix: some test refactoring
* done
* not done yet after all
* fix a missed assert
* still not sure about this one
* fix: save a line by dropping a redundant check
* revert: the line save
* fix: simplify test_broadcast because I'm stumped
* change some test names
* fix: bool max bool works
* test: add a maximum bool test
* test: make sure minimum also works with bool
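A minimal sketch of the behavior these bool tests pin down, assuming the usual top-level `Tensor` import (not the actual test code):

```python
from tinygrad import Tensor

# maximum/minimum on bool tensors should act like elementwise or/and
# and keep the bool dtype rather than silently upcasting
a = Tensor([True, False, False])
b = Tensor([True, True, False])
print(a.maximum(b).numpy())   # [ True  True False]
print(a.minimum(b).numpy())   # [ True False False]
print(a.maximum(b).dtype)     # dtypes.bool
```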
* fix: something like this? :s
* fix: maybe this?
* fix: how about this? tighter check
* fix: this.
* revert: never mind, mul(0.5) and div(2) have the same backward kernel
* fix: use .is_floating_point()
* revert: maximum and minimum and add cast
* fix: cover negative const case in test
* fix: use eq because I don't understand the clang behavior
* whoops
* try
* test: add logical_not tests
* oops, but this doesn't match the types for const()
* fix: can't we just do this?
* big change: not actually sure this is the right approach
* change a lot of things at once, probably going to revert later
* drop the noqa: E501 suppressions
* fix: fewer lines, and add a test
* fix: remove 2 redundant tests
* fix: eq with False so we don't unintentionally trigger an implicit upcast (it's bool anyway, so it hardly matters)
* clean up noop prefixes in _pool
make expand with dim=None a noop (in addition to -1), so that slice, reshape, and expand in _pool can share the same noop prefix
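Roughly the convention this relies on, shown with a hypothetical helper rather than tinygrad's actual _pool code: a None (or -1) entry in an expand argument means "keep this dimension", so an expand whose resolved shape equals the input shape is a pure noop.

```python
def resolve_expand(shape, arg):
  # None (and previously -1) keeps the dimension as-is
  return tuple(s if a in (None, -1) else a for s, a in zip(shape, arg))

assert resolve_expand((3, 4), (None, None)) == (3, 4)   # noop: shape unchanged
assert resolve_expand((3, 1), (None, 5)) == (3, 5)      # only dim 1 actually expands
```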
* nit
* something then reshape style
* that's repeat
- removed exact duplicate tests
- kept only one function when torch_fxn is the same as tinygrad_fxn
- used the tensor-method style instead of the class-method style (see the sketch after this list)
- replaced unneeded `lambda x: f(x)` with just `f`
- re-enabled commented-out tests that work now
- removed some forward_only now that 0-shape tensors can backward
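A hypothetical before/after for a couple of the items above (illustrative names only, not the real test file):

```python
import torch
from tinygrad import Tensor

x = Tensor([1.0, -2.0, 3.0])

# class-method style -> tensor-method style
y_old = Tensor.relu(x)
y_new = x.relu()

# `lambda x: f(x)` is just `f`, so pass the functions directly
torch_fxn = torch.relu          # instead of lambda t: torch.relu(t)
tinygrad_fxn = Tensor.relu      # instead of lambda t: Tensor.relu(t)
```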
* add operator.lt and operator.eq to test_dtype_alu
these should pass now that we broadcast before passing to lt and eq.
also updated the test-skipping criteria to reuse test_dtype.is_dtype_supported
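A rough illustration of what the new cases check, assuming the usual Tensor/dtypes imports (not the actual test_dtype_alu code):

```python
import operator
import numpy as np
from tinygrad import Tensor, dtypes

a = Tensor([[1, 2, 3]], dtype=dtypes.int32)   # shape (1, 3)
b = Tensor([[2], [0]], dtype=dtypes.int32)    # shape (2, 1)

# lt/eq broadcast to (2, 3) and should match numpy's elementwise comparison
na, nb = np.array([[1, 2, 3]]), np.array([[2], [0]])
np.testing.assert_equal(operator.lt(a, b).numpy(), na < nb)
np.testing.assert_equal(operator.eq(a, b).numpy(), na == nb)
```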
* llvm lt nan is incorrect
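For context, the IEEE-754 semantics the backend has to reproduce (plain Python, just stating the expected behavior): every ordered comparison involving NaN is false, and NaN compares unequal to everything, including itself.

```python
import math

nan = float("nan")
assert not (nan < 1.0) and not (1.0 < nan)   # lt with NaN is always False
assert not (nan == nan) and (nan != nan)     # NaN is unequal even to itself
assert math.isnan(nan)
```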
* enable truediv too
* Revert "enable truediv too"
This reverts commit df703235fb.
* just that
* move the reduce-over-0-length-axis logic to lazy.py
this fixes the uneven shard reduce case when the uneven shard has length 0
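The behavior the moved logic has to produce, illustrated with numpy rather than tinygrad's lazy.py: reducing over a 0-length axis returns the reduction's identity element for every output position.

```python
import numpy as np

x = np.zeros((4, 0, 3), dtype=np.float32)          # one axis has length 0
s = x.sum(axis=1)
assert s.shape == (4, 3) and (s == 0).all()        # identity of + is 0
m = x.max(axis=1, initial=-np.inf)
assert m.shape == (4, 3) and (m == -np.inf).all()  # identity of max is -inf
```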
* fix interpreted backends
* fix backward for 0-shape tensors too