* remove np from beautiful_cifar
* remove np from cifar
* rename variable and rename tensor.arange to just tensor.randperm
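A randperm is the natural way to shuffle dataset indices once per epoch. The tinygrad `Tensor.randperm` is not sketched here; instead this is a pure-Python stand-in (the `randperm` helper below is hypothetical) showing the shape of the operation it replaces an arange-plus-shuffle with:

```python
import random

def randperm(n, seed=None):
    # Return a random permutation of [0, n) -- the same contract as a
    # tensor-level randperm, done here with stdlib random for illustration.
    rng = random.Random(seed)
    idx = list(range(n))
    rng.shuffle(idx)
    return idx

# Shuffle sample indices once per epoch instead of iterating in order:
order = randperm(8, seed=0)
```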
---------
Co-authored-by: chenyu <chenyu@fastmail.com>
* enumerate cases of Tensors in the JIT
* optional fused optimizers
* add fused optimizer test
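"Fused" here means the optimizer issues all parameter updates as one combined step rather than one op per tensor. A toy sketch of that idea in plain Python (this is not tinygrad's scheduler, just the update rule applied across every parameter in a single pass; the function name is made up):

```python
def sgd_step_fused(params, grads, lr=0.1):
    # Apply the SGD update p <- p - lr * g to every parameter in one pass,
    # mimicking a fused optimizer that batches all updates into one step.
    return [p - lr * g for p, g in zip(params, grads)]

params = [1.0, 2.0, 3.0]
grads = [0.5, 0.5, 0.5]
params = sgd_step_fused(params, grads, lr=0.1)  # each p becomes p - lr * g
```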
* move that there
* ugh
* work on beautiful_cifar
* speed close to hlb_cifar
* schedule to corealize all
* one line sched step
* less lines
* move cifar into datasets
* support for pathlib Tensors, tar_extract, and fetch gunzip
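The gunzip part of a fetch-then-decompress path can be shown with the stdlib alone. A minimal sketch, assuming the helper simply decompresses downloaded gzip bytes (the `gunzip_bytes` name is hypothetical; a `tar_extract` counterpart would similarly wrap stdlib `tarfile`):

```python
import gzip

def gunzip_bytes(data: bytes) -> bytes:
    # Decompress gzip data, the kind of step a fetch-with-gunzip helper
    # would perform on a downloaded archive.
    return gzip.decompress(data)

# Round-trip demo: compress a payload, then recover it.
payload = b"cifar batch"
restored = gunzip_bytes(gzip.compress(payload))
```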
* too early for Device.DEFAULT
* simpler hlb_cifar + .to(None) is default
* new compiler failure, start beautiful_cifar
* beautiful cifar runs but is broken
* jit train step
* cleaner
* std_mean, not mean_std
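The point of the naming fix is the return order: torch's `std_mean` yields `(std, mean)`, not `(mean, std)`. A pure-Python sketch of that contract (the helper below is illustrative, not tinygrad's implementation):

```python
def std_mean(xs, correction=1):
    # Return (std, mean) in that order, matching the std_mean convention
    # rather than a (mean, std) pair.
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - correction)
    return var ** 0.5, mean

std, mean = std_mean([1.0, 2.0, 3.0, 4.0])
```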
* more correct
* fast indexing
* don't print that
* torch load broken
* add eval
* nicer bar
* decorators are the way to do this
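A decorator keeps the train step readable: the function stays plain Python and the wrapper handles capture and replay. This toy sketch only shows the decorator shape, not real kernel capture (TinyJit records GPU kernels; the cache here just memoizes outputs for identical inputs):

```python
import functools

def jit(fn):
    # Toy stand-in for a TinyJit-style decorator: run the wrapped function
    # once per distinct input, then replay the cached result.
    cache = {}
    @functools.wraps(fn)
    def wrapper(*args):
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]
    return wrapper

@jit
def train_step(x):
    # Stand-in for the real jitted train step.
    return x * 2
```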
* bounds check the load
* a few ops
* batchnorm bugfix: if track_running_stats is False, use the online estimate
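The bug being fixed: with `track_running_stats=False` there are no running statistics, so normalization must fall back to the current batch's own mean and variance. A minimal 1-D sketch of that branch (illustrative only, not tinygrad's BatchNorm):

```python
def batchnorm(x, running_mean=None, running_var=None,
              track_running_stats=True, eps=1e-5):
    # If running stats are tracked and available, use them; otherwise
    # normalize with the online (batch) estimate.
    n = len(x)
    if track_running_stats and running_mean is not None:
        mean, var = running_mean, running_var
    else:
        mean = sum(x) / n
        var = sum((v - mean) ** 2 for v in x) / n
    return [(v - mean) / (var + eps) ** 0.5 for v in x]
```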
* full timing
* fix fusion
* unneeded realize
* master tensor