* update mask function
* kept 94% with the new fetcher
* clean up batch fetcher
* 94.04% without cutmix
* 94.04% with cutmix
* move batch fetcher to avoid fetching an additional batch on the last step
* feat: train cifar using multigpu
* feat: split eval batch across 5 GPUs
* feat: cleaner allreduce
* feat: 93.88%
* feat: cleaner batch chunking from bert
* feat: cleaner grad sync
* feat: tinygrad argmax
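A common way to build argmax out of nothing but max/compare/arithmetic reductions (the kind of primitives such a framework exposes) is the reversed-arange trick. A minimal numpy sketch of the idea, not the actual tinygrad implementation:

```python
import numpy as np

def argmax_from_reductions(x: np.ndarray, axis: int) -> np.ndarray:
    n = x.shape[axis]
    # mask of positions equal to the max along `axis`
    m = (x == x.max(axis=axis, keepdims=True)).astype(x.dtype)
    # weight positions with a reversed arange so the FIRST max wins
    shape = [1] * x.ndim
    shape[axis] = n
    rev = np.arange(n - 1, -1, -1, dtype=x.dtype).reshape(shape)
    return (n - (m * rev).max(axis=axis) - 1).astype(np.int64)

x = np.array([[3., 7., 7., 1.]])
assert argmax_from_reductions(x, axis=1).tolist() == [1]  # matches np.argmax(x, axis=1)
```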
* feat: make it work with different gpu counts
* feat: move some stuff into the normal __init__
* feat: autodetect gpu count
* feat: move import inside
* add disk_tensor
* fix jit
* new baseline before whitening
* whitening through torch
* whitening done, currently at 91.65%
* 91.99%
* clean up mixup and 92.3%
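The mixup referred to here is the standard input/label mixing augmentation; a minimal numpy sketch, with alpha and the random pairing chosen for illustration rather than taken from this training script:

```python
import numpy as np

def mixup(x, y_onehot, alpha=0.2, rng=np.random.default_rng()):
    # convex-combine every sample with a randomly permuted partner
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    return lam * x + (1 - lam) * x[perm], lam * y_onehot + (1 - lam) * y_onehot[perm]
```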
* clean up 92.30%
* 92.49% before searching for new hyper-parameters
* fix CI
* fix white space
* add whitening init in test
* refactor, update hyperparams, 92.72%
* convert whitening to a tinygrad operation
* update CI kernels count for CIFAR
* add pad reflect
* add random crop 92.53%
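Pad-reflect plus random crop is the usual CIFAR augmentation pair: reflect-pad the image, then take a random crop back to the original size. A minimal numpy sketch (the pad of 4 is an assumption, the common CIFAR choice):

```python
import numpy as np

def random_crop(batch, pad=4, rng=np.random.default_rng()):
    # batch: (N, C, H, W); reflect-pad the spatial dims, then crop back to H x W
    n, c, h, w = batch.shape
    padded = np.pad(batch, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode="reflect")
    out = np.empty_like(batch)
    for i in range(n):
        y, x = rng.integers(0, 2 * pad + 1), rng.integers(0, 2 * pad + 1)
        out[i] = padded[i, :, y:y + h, x:x + w]
    return out
```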
* update hyperparams, 93%
* 93.15% on docker container, need to refactor the hyperparam assignment
* print out weights and biases to be separated
* bias/non-bias params separated
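Splitting bias-like parameters from weights is normally done so the two groups can get different weight decay or learning rates. A generic sketch, assuming the model exposes (name, tensor) pairs; the names and the 1-D rule are illustrative, not the script's exact logic:

```python
def split_params(named_params):
    # 1-D tensors (biases, batchnorm scale/shift) in one group, everything else in the other
    bias_like, weights = [], []
    for name, p in named_params:
        (bias_like if len(p.shape) == 1 or "bias" in name else weights).append(p)
    return weights, bias_like  # e.g. apply weight decay only to `weights`
```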
* fix whitespace
* clean up
* refactor hyper-param with dict
* refactor lr scheduler params
* fix whitespace
* fix cross entropy loss
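For reference, the numerically stable way to get cross entropy from logits is log-sum-exp with a max shift; a minimal numpy sketch of that formulation (not the specific fix made here):

```python
import numpy as np

def cross_entropy(logits, y_onehot):
    # stable log-softmax: subtract the row max before exponentiating
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -(y_onehot * log_probs).sum(axis=1).mean()
```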
* fix whitespace
* move opt hyp to hyp dict
* minor fixup
* adjust model, loss scaling
* 92.74% while using half the compute as before
* update hyp for cutmix
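Cutmix pastes a random rectangle from a partner image and mixes the labels by the pasted area; a minimal numpy sketch following the original CutMix paper's box sampling, not necessarily this script's hyperparameters:

```python
import numpy as np

def cutmix(x, y_onehot, alpha=1.0, rng=np.random.default_rng()):
    # x: (N, C, H, W); paste a random box from a permuted partner, mix labels by area
    n, _, h, w = x.shape
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(n)
    bh, bw = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = rng.integers(0, h), rng.integers(0, w)
    y0, y1 = np.clip(cy - bh // 2, 0, h), np.clip(cy + bh // 2, 0, h)
    x0, x1 = np.clip(cx - bw // 2, 0, w), np.clip(cx + bw // 2, 0, w)
    x = x.copy()
    x[:, :, y0:y1, x0:x1] = x[perm, :, y0:y1, x0:x1]
    lam = 1 - (y1 - y0) * (x1 - x0) / (h * w)  # recompute from the clipped box
    return x, lam * y_onehot + (1 - lam) * y_onehot[perm]
```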
* random shuffle during batches
* clean up
* updating the model
* update ConvGroup
* disable gradients for batchnorm layer weights
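Disabling gradients for the batchnorm scale parameters just means flagging them so the optimizer skips them; a sketch assuming parameters carry a requires_grad flag and batchnorm layers are identifiable by name (both assumptions, not the script's code):

```python
def freeze_batchnorm_weights(named_params):
    # turn off grad for batchnorm scale ("weight") parameters
    for name, p in named_params:
        if "bn" in name and name.endswith("weight"):
            p.requires_grad = False
```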
* whitespace
* 93.92%
* clean up
* finally 94%!
* rewrite whitening to remove dependency on torch
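Dropping torch here means building the whitening filters with numpy alone: take small image patches, eigendecompose their covariance, and scale the eigenvectors by 1/sqrt(eigenvalue). A sketch of one standard construction; patch size, epsilon, and the ZCA form are assumptions, not the exact filters used in the script:

```python
import numpy as np

def whitening_filters(images, patch=2, eps=1e-2):
    # images: (N, C, H, W) -> conv filters of shape (C*patch*patch, C, patch, patch)
    n, c, h, w = images.shape
    patches = np.lib.stride_tricks.sliding_window_view(images, (patch, patch), axis=(2, 3))
    patches = patches.transpose(0, 2, 3, 1, 4, 5).reshape(-1, c * patch * patch)
    cov = np.cov(patches, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    # decorrelate patch responses: V diag(1/sqrt(lambda)) V^T
    W = eigvec @ np.diag(1.0 / np.sqrt(eigval + eps)) @ eigvec.T
    return W.reshape(-1, c, patch, patch).astype(np.float32)
```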
* whitespace
* remove dependency on torch, 93.91%
* back to 94.03%
* clean up
* update test_real_world
* Rename in files
* Move files
* Moved to extra/datasets as suggested
* Changes to files
* Fixed stupid mistake
---------
Co-authored-by: terafo <terafo@protonmail.com>
* Fix examples
* Remove training in parameters
* Simplify a bit
* Remove extra import
* Fix linter errors
* factor out Device
* NumPy-like semantics for Tensor.__getitem__ (#506)
* Rewrote Tensor.__getitem__ to fix negative indices and add support for np.newaxis/None
* Fixed pad2d
* mypy doesn't know about mlops methods
* normal python behavior for out-of-bounds slicing
* type: ignore
* inlined idxfix
* added comment for __getitem__
* Better comments, better tests, and fixed bug in np.newaxis
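The behavior being matched is plain numpy indexing: negative indices count from the end, None/np.newaxis inserts a unit dimension, and out-of-bounds slices clamp instead of raising. The same semantics shown with numpy itself:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)
print(a[-1])             # last row: [ 8  9 10 11]
print(a[:, None].shape)  # None / np.newaxis adds a dim: (3, 1, 4)
print(a[1:100].shape)    # out-of-bounds slice clamps, no error: (2, 4)
```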
* update cpu and torch to hold buffers (#542)
* update cpu and torch to hold buffers
* save lines, and probably faster
* Mypy fun (#541)
* mypy fun
* things are just faster
* running fast
* mypy is fast
* compile.sh
* no gpu hack
* refactor ops_cpu and ops_torch to not subclass
* make weak buffer work
* tensor works
* fix test failing
* cpu/torch cleanups
* no | operator on dicts in Python 3.8
* that was junk
* fix warnings
* comment and touchup
* dyn add of math ops
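"dyn add of math ops" is the pattern of registering elementwise methods on a class in a loop instead of writing each one by hand; a generic sketch of that pattern with illustrative names, not the repo's actual code:

```python
import operator

class Buffer:
    def __init__(self, data): self.data = data

# attach __add__/__sub__/__mul__ dynamically rather than defining each method
for name, fxn in [("add", operator.add), ("sub", operator.sub), ("mul", operator.mul)]:
    setattr(Buffer, f"__{name}__",
            lambda self, other, fxn=fxn: Buffer(fxn(self.data, other.data)))

print((Buffer(3) * Buffer(4)).data)  # 12
```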
* refactor ops_cpu and ops_torch to not share code
* nn/optim.py compiles now
* Reorder imports
* call mkdir only if directory doesn't exist
---------
Co-authored-by: George Hotz <geohot@gmail.com>
Co-authored-by: Mitchell Goff <mitchellgoffpc@gmail.com>
Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
* fixes big KOPT, breaks opencl
* fix optimizer
* KernelCache
* oops, broke batchnorm
* hack to fix it
* fix llvm, less hacky gpu
* disable the cache
* cache just breaks things