tinygrad/extra/models
David Hou 4b95350c41 fp16 resnet (without expand backwards sum in float, doesn't work) (#3816)
* fp16 resnet

* cast running mean and var back to default float

* extra cast

* check symbolic no overflow

* add linearizer failure

* loss scaler after grad contig

* oops

* I think this works

* don't loss scale fp32

* remove overflow test case

* remove symbolic bounds check

* loss scaler should be float

* temporarily disable padto because of a bug (shruggie)

* make running stats in batchnorm float32?

* calculate lars stuff in fp32?

* oops

* remove most changes

* move loss scaler out of optimizer

* no more FP16 var

* oops

---------

Co-authored-by: chenyu <chenyu@fastmail.com>
2024-03-28 01:25:37 -04:00
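The "loss scaler after grad contig" and "loss scaler should be float" commits refer to the standard loss-scaling trick for fp16 training: multiply the loss (and hence every gradient) by a large constant before the fp16 backward pass, then divide it back out in fp32 before the optimizer step, so tiny gradients don't flush to zero in half precision. A minimal NumPy sketch of the underflow problem and the fix — the scale value and helper name here are illustrative, not tinygrad's API:

```python
import numpy as np

# Hypothetical loss scale; real implementations often use 2**k or dynamic scaling.
LOSS_SCALE = np.float32(2.0 ** 10)

def backprop_through_fp16(grad_fp32, scale=np.float32(1.0)):
    """Scale in fp32, round-trip through fp16 (simulating the fp16 backward
    pass), then unscale in fp32 before the optimizer sees the gradient."""
    g16 = np.float16(grad_fp32 * scale)   # gradient materialized in fp16
    return np.float32(g16) / scale        # unscale back in fp32

tiny_grad = np.float32(1e-8)                              # below fp16's subnormal range
unscaled = backprop_through_fp16(tiny_grad)               # flushes to zero in fp16
scaled = backprop_through_fp16(tiny_grad, LOSS_SCALE)     # survives the fp16 round trip
```

The "don't loss scale fp32" commit follows naturally: when gradients are already fp32, the scale/unscale round trip is a no-op numerically and just wastes work.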
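Several commits ("cast running mean and var back to default float", "make running stats in batchnorm float32?") point at the other common fp16 pitfall: exponential-moving-average statistics accumulate rounding error if stored in half precision. A hedged sketch of the idea — class and method names are made up for illustration, not tinygrad's BatchNorm:

```python
import numpy as np

class BatchNormStats:
    """Keep batchnorm running statistics in fp32 even when activations are fp16."""
    def __init__(self, momentum=0.1):
        self.momentum = momentum
        self.running_mean = np.float32(0.0)   # stored in fp32, never fp16
        self.running_var = np.float32(1.0)

    def update(self, x_fp16):
        # Upcast the batch to fp32 so the mean/var reduction and the EMA
        # update both happen in full precision.
        x = x_fp16.astype(np.float32)
        m, v = x.mean(), x.var()
        self.running_mean = np.float32((1 - self.momentum) * self.running_mean + self.momentum * m)
        self.running_var = np.float32((1 - self.momentum) * self.running_var + self.momentum * v)
```

The "calculate lars stuff in fp32?" commit applies the same principle to the LARS optimizer: norms and trust ratios are reductions, so they are computed in fp32 even when the weights themselves are fp16.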