* third try at torch loading
* numpy fixed
* fix enet compile
* load_single_weight supports empty weights
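A minimal sketch of what "supports empty weights" could mean: the loader short-circuits on zero-size arrays instead of attempting a copy. The `load_single_weight` name comes from the commit, but this numpy-based signature and body are assumptions, not the library's actual implementation.

```python
import numpy as np

def load_single_weight(target, source):
    # Hypothetical sketch: skip zero-size weights entirely,
    # otherwise copy the source buffer into the target in place.
    if source.size == 0:  # empty weight: nothing to copy
        return target
    target[...] = source.reshape(target.shape)
    return target
```

An empty source then leaves the target untouched rather than raising a reshape/broadcast error.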
* oops, CPU wasn't the default
* so many bugs
* simple convnext implementation
* shorter function names
* need to realize the random functions now
* creating an optimizer realizes all params
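A toy illustration of "creating an optimizer realizes all params": a lazy tensor only materializes its buffer when `.realize()` is called, and the optimizer forces that for every parameter at construction time. The classes below are illustrative stand-ins, not the library's real API.

```python
class LazyTensor:
    def __init__(self, shape):
        self.shape, self.data = shape, None  # no buffer allocated yet

    def realize(self):
        # materialize the buffer on first use
        if self.data is None:
            self.data = [0.0] * self.shape
        return self

class SGD:
    def __init__(self, params, lr=0.01):
        # force realization of every parameter up front
        self.params = [p.realize() for p in params]
        self.lr = lr

p = LazyTensor(4)
assert p.data is None      # still lazy before optimizer creation
opt = SGD([p])
assert p.data is not None  # realized as a side effect of SGD()
```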
* assign contiguous
* fix lazy lazy
* why was I doing that... add convnext to tests
* LazyNumpyArray
* enable assert + comment
* no two tiny
* New unittest for utils.py
Unit test fetch in basic ways. Would have tested more fetch cases, but downloading files during tests is annoying, and mocking would add more dependencies.
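A sketch of what such a basic test could look like without any real downloads: it exercises a `fetch(url)` helper against a `file://` URL pointing at a temp file. The helper's name and signature are assumptions based on the commit message, not the project's actual `utils.py`.

```python
import pathlib
import tempfile
import unittest
import urllib.request

def fetch(url):
    # illustrative stand-in for the project's fetch helper
    with urllib.request.urlopen(url) as r:
        return r.read()

class TestFetch(unittest.TestCase):
    def test_fetch_local_file(self):
        # write a known payload to a temp file and fetch it back
        with tempfile.NamedTemporaryFile(delete=False) as f:
            f.write(b"hello")
        url = pathlib.Path(f.name).as_uri()
        self.assertEqual(fetch(url), b"hello")

if __name__ == "__main__":
    unittest.main(argv=["fetch_test"], exit=False)
```

Using a local `file://` URI keeps the test hermetic while still going through the same code path as a remote fetch.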
* Remove unused imports
Because the weights are loaded from a third-party internet address, it's unsafe to use eval. This change also makes the code a little clearer, since it's now obvious which keys are being transformed.
Co-authored-by: Seba Kreft <sebastian.kreft@houm.com>
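A sketch of the eval-free approach described above: instead of eval-ing strings derived from downloaded weight files, the key transformations are spelled out in an explicit, auditable mapping. The specific key prefixes below are illustrative, not the actual ones in the repository.

```python
def remap_keys(state_dict):
    # explicit transformations make it obvious which keys change,
    # and nothing from the downloaded file is ever executed
    replacements = {
        "downsample_layers.0.": "stem.",   # illustrative prefix rename
        "stages.": "blocks.",              # illustrative prefix rename
    }
    out = {}
    for k, v in state_dict.items():
        for old, new in replacements.items():
            if k.startswith(old):
                k = new + k[len(old):]
        out[k] = v
    return out
```

This also fails loudly on unexpected input: unknown keys pass through unchanged instead of triggering arbitrary code.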
* added resnets
* fix minor
* fix minor
* resnet in models
* added resnet test
* added resnet train test
* added linear, conv2d nn tests
* fix minor in extra/training
* resnet in models
* fix minor
* fix tolerance for linear in nn test
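Loosening a tolerance like this usually means comparing a linear layer's output across precisions or backends with `np.testing.assert_allclose` instead of exact equality. The sketch below only illustrates the idea; the `atol`/`rtol` values and shapes are assumptions, not the ones changed in the commit.

```python
import numpy as np

np.random.seed(0)
x = np.random.randn(4, 8).astype(np.float32)
w = np.random.randn(8, 3).astype(np.float32)

# same linear layer computed in float32 and float64
out32 = x @ w
out64 = (x.astype(np.float64) @ w.astype(np.float64)).astype(np.float32)

# exact equality would be flaky; a small tolerance absorbs rounding noise
np.testing.assert_allclose(out32, out64, atol=1e-4, rtol=1e-4)
```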
* fix eval; this was causing the CPU and GPU unit tests to fail
* revert transformer test
* fix minor for CPU test
* improved model get_params for sequential layer
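A sketch of what "get_params for sequential layer" could look like: the parameter walk recurses into list/tuple attributes (a sequential stack of layers) rather than only scanning top-level tensors. The `Tensor` stand-in and attribute scan are illustrative, not the library's real implementation.

```python
class Tensor:
    # minimal stand-in for a parameter tensor
    def __init__(self, n):
        self.n = n

def get_params(model):
    params = []
    for attr in vars(model).values():
        if isinstance(attr, Tensor):
            params.append(attr)
        elif isinstance(attr, (list, tuple)):  # sequential layer
            for layer in attr:
                params.extend(get_params(layer))
    return params

class Sub:
    def __init__(self):
        self.b = Tensor(2)

class Model:
    def __init__(self):
        self.w = Tensor(1)
        self.layers = [Sub(), Sub()]  # sequential stack

assert len(get_params(Model())) == 3  # top-level w + one b per Sub
```

Without the list/tuple branch, the two `Sub` parameters would be silently skipped, which also fixes the parameter count mentioned in the next commit.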
* fix minor for params counting
* commented out broken ops tests
* improved train for resnet