George Hotz | 5e7e359706 | fix tests | 2020-10-29 08:19:07 -07:00
George Hotz | 9ae3e9daf3 | shape has to be a kwarg now, idk why this didn't break before | 2020-10-29 08:13:05 -07:00
George Hotz | f84f6c1edd | write sqrt and div using pow | 2020-10-29 07:57:25 -07:00
Göktuğ Karakaşlı | 4b163ee270 | efficient version of adam (#20) | 2020-10-27 15:54:40 -07:00
  * counteracted bias initialization
  * test new adam
  * add optimizer tests
  * rename helper function names to fix the test
  * remove redundant import
George Hotz | f9788eba14 | parameters, and start on efficientnet | 2020-10-27 08:53:35 -07:00
George Hotz | 1654008c1f | conv stride support | 2020-10-26 08:54:43 -07:00
George Hotz | 2a55d7402b | clean up ops, refactor pool backward. add stride test | 2020-10-26 08:47:11 -07:00
George Hotz | 93dceb4bee | fix kernel_size bug, name like torch, add test | 2020-10-26 08:38:53 -07:00
Timothy Mc Alister | 15e5988323 | make default parameters work for functions | 2020-10-26 12:43:36 +01:00
George Hotz | 2d37fd686b | test ops | 2020-10-25 19:03:49 -07:00
George Hotz | 2eebbd32c6 | ops test speed | 2020-10-25 19:01:02 -07:00
George Hotz | b27bcbe4b4 | avgpool and test refactor | 2020-10-25 18:40:01 -07:00
George Hotz | 4c42676cb6 | 400 -> 200 | 2020-10-25 17:19:59 -07:00
George Hotz | 567707a5f6 | rename max_pool2d to match torch, remove more fast conv crap | 2020-10-25 17:16:47 -07:00
George Hotz | ea41f5e1c1 | seems more generic | 2020-10-25 16:40:37 -07:00
George Hotz | 2333c4dea7 | no tqdm in actions | 2020-10-25 16:40:08 -07:00
George Hotz | ad48061927 | better sort in torch profiler | 2020-10-25 16:07:49 -07:00
George Hotz | 82f8e10813 | no hacks in that test | 2020-10-25 15:52:05 -07:00
George Hotz | 4baa4c041f | it's crazy how much faster pytorch is than numpy | 2020-10-25 15:42:33 -07:00
George Hotz | 5ddbd7f04b | 2 to 3x slower than torch | 2020-10-25 15:27:33 -07:00
George Hotz | f8311f5ecd | print fp/bp mnist | 2020-10-25 15:08:18 -07:00
George Hotz | 5c179d18ad | add profiling for mnist net | 2020-10-25 14:20:55 -07:00
George Hotz | 8fcada8071 | faster and better convnet | 2020-10-25 13:48:44 -07:00
George Hotz | 96f9cdb8a0 | woah, fastconv is wrong | 2020-10-25 12:56:42 -07:00
George Hotz | bb98cdfef7 | improve conv testing | 2020-10-25 12:46:04 -07:00
George Hotz | ef24aac09e | finally, fast convs | 2020-10-25 12:39:44 -07:00
George Hotz | 67506eb6ba | fast im2col | 2020-10-25 11:49:35 -07:00
George Hotz | c9968756d1 | allow the line profiler to work | 2020-10-25 11:13:40 -07:00
George Hotz | 5062c2c8ff | profile conv better | 2020-10-25 11:11:00 -07:00
George Hotz | c74764bac3 | oops, set to None | 2020-10-25 08:28:18 -07:00
George Hotz | 935f5ddaaa | always keep batch size out front | 2020-10-25 08:14:07 -07:00
George Hotz | b91fd3afad | maxpool | 2020-10-25 07:43:34 -07:00
George Hotz | 5216a1d9f3 | refactor into tensor and ops | 2020-10-23 10:34:21 -07:00
George Hotz | 9b9e47f369 | added conv profile test | 2020-10-23 09:46:10 -07:00
George Hotz | 5756115e57 | anyone else let down by the fast conv? | 2020-10-23 09:09:29 -07:00
George Hotz | bcb60e0b7c | wow, you have to name them test | 2020-10-23 06:33:18 -07:00
George Hotz | 2259c9faa1 | low lr improves rmsprop | 2020-10-23 06:22:32 -07:00
George Hotz | eda29fa0e0 | clean up test | 2020-10-23 06:11:38 -07:00
George Hotz | 373b4e341b | Merge pull request #15 from f0ti/master: added RMSprop optim | 2020-10-23 06:08:20 -07:00
f0ti | 0b87aaca1e | update rsmprop | 2020-10-23 14:46:45 +02:00
f0ti | c5f726ec2e | all three | 2020-10-23 11:53:01 +02:00
f0ti | 6a38ccb6b0 | update rmsprop and readme | 2020-10-23 11:49:43 +02:00
George Hotz | 21ebb0b769 | if you wait 24 seconds, that gets 98% | 2020-10-22 21:49:14 -07:00
George Hotz | 816f648161 | chans doesn't need to be in self | 2020-10-22 21:19:35 -07:00
George Hotz | 77251cc6c3 | 7x7 conv = more accuracy | 2020-10-22 21:10:27 -07:00
f0ti | 7e1eddb0c5 | added RMSprop optim | 2020-10-23 02:50:02 +02:00
0xNaN | d95adbddb4 | gradcheck now returns only a bool, refactoring of test_gradcheck | 2020-10-22 01:28:52 +02:00
0xNaN | adbfc67456 | test jacobian and numerical_jacobian against torch.autograd.functional.jacobian | 2020-10-22 01:28:52 +02:00
0xNaN | 1561d3b9c0 | extracting jacobian and test_jacobian | 2020-10-22 01:28:52 +02:00
0xNaN | 93bc3c22a0 | tiny gradcheck | 2020-10-22 01:28:52 +02:00