Commit Graph

168 Commits

Author SHA1 Message Date
Daulet
c7e95ddb21 Add diamond model test (#181)
* add backward pass test for diamond model

* fix train_efficientnet example
2020-12-11 09:21:36 -08:00
Liam
89d0ff6989 Consistent testing (#137)
* Consistent GPU classes

Convert the existing GPU classes into one standard format.

Remove duplicated functions in `test_mnist` and create a TestMNISTGPU
class. This reduces line count and ensures consistency.

Use `@unittest.skipUnless(GPU, "Requires GPU")` instead of `if GPU:` to
skip GPU testing. This will ensure that skipped tests are displayed
accordingly in the pytest output.
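The skip pattern described here can be sketched as follows. This is a minimal illustration, not the repo's actual test file: the `GPU` flag and the `TestMNISTGPU` class body are stand-ins (in the real tests the flag comes from the environment), but the `@unittest.skipUnless` decorator usage is the standard library mechanism the commit refers to.

```python
import unittest

# Hypothetical flag standing in for the repo's GPU gate; the real tests
# derive it from the environment rather than hard-coding it.
GPU = False

class TestMNISTGPU(unittest.TestCase):
    @unittest.skipUnless(GPU, "Requires GPU")
    def test_mnist_gpu(self):
        # Body elided; with GPU=False the runner records this test as
        # skipped (with the reason shown), instead of the test silently
        # never existing as it would under a module-level `if GPU:`.
        self.assertTrue(True)
```

With `if GPU:` guarding the class definition, a CPU-only run shows no trace of the GPU tests; with the decorator, pytest and unittest both report them as skipped with the given reason.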

* Optim Testing now supports GPU

* Tensor testing now supports GPU

jacobian and gradcheck auto skipped until GPU float64 support added.

* GPU support for custom constructor methods

* Remove GPU flag from Model constructors

It was requested that the `gpu` kwarg be removed from the model
constructor. GPU conversion is now handled in the train function.

This also required the conversion of Optimizer parameters as they are
constructed prior to execution of the `train` function and are dependent
on the model GPU state.

* Fix typo: float32->float64

* Clean `get_parameters` utility

Just a quick refactor w/ the new support for optimizers.

* Remove GPU kwarg from TinyNet

Remove `gpu` kwarg from tiny net to match test_mnist `train` function.
2020-12-09 02:25:27 -08:00
George Hotz
13d34373d1 move gradcheck to extra, clean up unbroadcast 2020-11-16 08:03:31 -08:00
George Hotz
9ac1ad40d6 Add GPU Support! (do not merge yet) (#41)
* copy tensors to and from gpu

* add on GPU

* adding works

* we stick shapes in

* works on cpu and gpu

* test changes, not passing yet

* something else

* op tests pass

* add, mean, and sum have working forward/backward

* mul ops test

* no gpu support, no problem

* test pass, clean up later

* gpu cleanup

* cleanup test ops, don't let div fail

* revert more

* simpler dispatcher

* clean up grad

* GPU and

* grad is a Tensor now

* gate test on GPU

* cleanups

* late loading gpu

* GPU as input option

* last cleanups
2020-11-01 07:00:49 -08:00
Timothy Mc Alister
15e5988323 make default parameters work for functions 2020-10-26 12:43:36 +01:00
George Hotz
b27bcbe4b4 avgpool and test refactor 2020-10-25 18:40:01 -07:00
George Hotz
567707a5f6 rename max_pool2d to match torch, remove more fast conv crap 2020-10-25 17:16:47 -07:00
George Hotz
96f9cdb8a0 woah, fastconv is wrong 2020-10-25 12:56:42 -07:00
George Hotz
bb98cdfef7 improve conv testing 2020-10-25 12:46:04 -07:00
George Hotz
67506eb6ba fast im2col 2020-10-25 11:49:35 -07:00
George Hotz
935f5ddaaa always keep batch size out front 2020-10-25 08:14:07 -07:00
George Hotz
b91fd3afad maxpool 2020-10-25 07:43:34 -07:00
George Hotz
5756115e57 anyone else let down by the fast conv? 2020-10-23 09:09:29 -07:00
0xNaN
d95adbddb4 gradcheck now returns only a bool, refactoring of test_gradcheck 2020-10-22 01:28:52 +02:00
0xNaN
adbfc67456 test jacobian and numerical_jacobian against torch.autograd.functional.jacobian 2020-10-22 01:28:52 +02:00
0xNaN
1561d3b9c0 extracting jacobian and test_jacobian 2020-10-22 01:28:52 +02:00
0xNaN
93bc3c22a0 tiny gradcheck 2020-10-22 01:28:52 +02:00
Adrian Garcia Badaracco
5afe6b1f68 rename files 2020-10-21 11:28:03 -05:00