George Hotz
2a10116bfa
support drawing graphs
2022-01-16 10:45:58 -08:00
George Hotz
2cae2dfa07
don't crash the dataloader for imagenet
2022-01-16 08:41:26 -08:00
George Hotz
907ff7dbb6
imagenet is training
2022-01-15 23:27:31 -08:00
George Hotz
d1e082e0ef
add imagenet training support
2022-01-15 23:16:38 -08:00
George Hotz
ade2af7ac0
data doesn't require grad
2022-01-15 22:41:27 -08:00
George Hotz
0973e54eb0
fix DEBUG for GPU
2022-01-15 22:14:28 -08:00
George Hotz
e0bef0bd01
training is False by default
2022-01-15 19:57:41 -08:00
George Hotz
8ec2341cca
fix bn training
2022-01-15 19:47:01 -08:00
George Hotz
46bbbcf7f0
model touchups
2021-11-30 11:13:34 -05:00
George Hotz
b0f14b4af8
move datasets into datasets
2021-10-30 19:55:50 -07:00
iainwo
56d44637f3
fixed pylint, formatted python files with cblack on localhost (#204)
* fixed pylint, formatted python files with cblack on localhost
* Revert "fixed pylint, formatted python files with cblack on localhost"
This reverts commit 07e2b88466.
* dedented 4-spaces added linter
Co-authored-by: Iain Wong <iainwong@outlook.com>
2020-12-17 14:37:31 -08:00
George Hotz
1d10559d1d
tinygrad.utils -> extra.utils
2020-12-12 15:26:07 -08:00
Daulet
c7e95ddb21
Add diamond model test (#181)
* add backward pass test for diamond model
* fix train_efficientnet example
2020-12-11 09:21:36 -08:00
Marcel Bischoff
d204f09316
some progress on batchnorms (draft) (#147)
* no of categories for efficientnet
* need layer_init_uniform
* merge fail
* merge fail
* batchnorms
* needs work
* needs work how determine training
* pow
* needs work
* reshape was needed
* sum with axis
* sum with axis and tests
* broken
* works again
* clean up
* Update test_ops.py
* using sum
* don't always update running_stats
* space
* self
* default return running_stats
* passes test
* need to use mean
* merge
* testing
* fixing pow
* test_ops had a line dropped
* undo pow
* rebase
2020-12-09 22:14:27 -08:00
George Hotz
00312b8ad1
batchnorm work
2020-12-06 14:40:07 -08:00
George Hotz
102e6356e9
replace layer_init_uniform with .uniform
2020-12-06 13:44:31 -08:00
George Hotz
521098cc2f
se optional, track time better
2020-12-06 12:29:42 -08:00
George Hotz
609d11e699
trainer works with CIFAR
2020-12-06 12:20:14 -08:00
George Hotz
80a9c777ba
requires grad, optim in train enet
2020-12-06 11:10:30 -08:00
George Hotz
c66c27d22e
get parameters
2020-12-06 10:45:04 -08:00
George Hotz
51daaa43d4
fix memory leaks, add gc test
2020-12-06 10:34:40 -08:00
George Hotz
b8deb36e56
train BS=16 for 32 steps
2020-12-04 10:00:32 -08:00
adamritter
5797e63d9b
Train efficientnet should respect NUM environment variable (#122)
Co-authored-by: holonomicjl <58403584+holonomicjl@users.noreply.github.com>
2020-11-16 20:02:31 -08:00
George Hotz
2ffb8de1ea
move efficientnet to extra
2020-11-16 08:08:07 -08:00
George Hotz
55012d21bb
debug in backward pass too
2020-11-10 01:19:52 -08:00
George Hotz
7ac1b163a5
add backward to enet train
2020-11-09 16:05:52 -08:00
George Hotz
8ca9c0205f
train_efficientnet is broken still
2020-11-09 16:01:16 -08:00