George Hotz
462f1ce0da
Remove Matmul (#323)
2022-06-10 19:26:23 -07:00
George Hotz
30ab2249eb
match order
2022-06-08 11:46:51 -07:00
George Hotz
4a9882d495
hlops
2022-06-08 11:46:09 -07:00
George Hotz
e046a2fd9f
readme fix typos
2022-06-08 11:43:05 -07:00
George Hotz
4b09ca90a1
readme: still WIP
2022-06-08 11:41:19 -07:00
George Hotz
f0fe37bd34
simpler graph demo
2022-06-05 12:40:12 -07:00
George Hotz
89acf6742d
more graph docs
2022-06-05 12:16:50 -07:00
George Hotz
88de42fb6e
document graph mode
2022-06-05 12:13:05 -07:00
George Hotz
d8d19ed468
wikimedia wasn't returning 200
2022-01-15 19:09:29 -08:00
George Hotz
a95ef16c8c
sub 1000 lines
2021-10-30 19:48:24 -07:00
George Hotz
844540a5ed
yolo in readme
2021-10-30 19:47:34 -07:00
George Hotz
121d5a17ee
use tinynn for Conv2d
2021-10-30 19:40:44 -07:00
George Hotz
114f6ca3fd
more readme cleanup
2021-10-30 16:51:25 -07:00
George Hotz
effd0dc833
update readme
2021-10-30 16:34:00 -07:00
George Hotz
2e71ae33f6
max op works
2021-06-17 17:01:21 -07:00
George Hotz
e8eb7d1b7e
max op
2021-06-17 16:20:56 -07:00
George Hotz
c1d469d440
sum op
2021-06-17 16:19:35 -07:00
George Hotz
ff3fdc58e5
risk -> cherry
2021-06-16 09:59:48 -07:00
George Hotz
1e62e45d67
better todo
2021-06-15 10:30:16 -07:00
George Hotz
9ca4388695
debug
2021-06-15 10:24:21 -07:00
George Hotz
3d44aab52c
more
2021-06-15 10:23:57 -07:00
George Hotz
4850d6eb43
update todo
2021-06-15 10:22:39 -07:00
George Hotz
508ced114c
readme
2021-06-13 17:17:44 -07:00
George Hotz
77ba198b57
Revert "Update README.md ( #259 )" ( #260 )
...
This reverts commit 5a69c5db6d .
2021-06-04 14:41:41 -07:00
Gabriel Rojas
5a69c5db6d
Update README.md (#259)
2021-06-04 14:41:07 -07:00
George Hotz
0702e0c763
nah, no sign, it's not what you want. use relu
2021-01-03 09:30:33 -08:00
George Hotz
c2eeb6950b
add support for sign. technically relu can be second class now
2021-01-03 08:29:57 -08:00
George Hotz
92abe43683
reduce before binary because of unbroadcasting
2020-12-31 09:49:52 -05:00
George Hotz
de7fe085de
no read out of bounds
2020-12-31 09:41:36 -05:00
George Hotz
30f8132646
reorder ops in ops cpu
2020-12-30 11:00:01 -05:00
George Hotz
e5b2803b5d
ops in readme
2020-12-30 10:48:55 -05:00
George Hotz
2d44bf7f1a
Dot -> Matmul
2020-12-30 10:41:51 -05:00
George Hotz
fcfe3dae01
write slice for CPU
2020-12-30 10:32:53 -05:00
George Hotz
1f5c9618ef
refactor in readme and issue #225
2020-12-29 17:30:04 -05:00
George Hotz
4bbad11afe
link to papers
2020-12-29 14:15:46 -05:00
George Hotz
3f8e137b6f
extra/transformer
2020-12-29 14:14:00 -05:00
George Hotz
8f9232d59b
readme
2020-12-29 13:40:34 -05:00
George Hotz
837aaacfbf
Unpad2D on GPU:
2020-12-29 13:16:14 -05:00
George Hotz
02655c07d5
break maxpool2d on GPU
2020-12-29 13:05:57 -05:00
George Hotz
061e37de39
touchups
2020-12-29 12:41:21 -05:00
George Hotz
a2e6562330
fix max op, less lines
2020-12-29 10:47:04 -05:00
George Hotz
628d21f899
doc touchup
2020-12-28 10:45:26 -05:00
George Hotz
fafece9db7
avgpool2d is a second class op
2020-12-28 10:41:59 -05:00
George Hotz
593233b668
log and exp are first class ops
2020-12-28 10:00:30 -05:00
Liam
bcf1518309
All devices are equal! (#196)
...
* Update all devices to be tested
ANE, CPU and OCL all now support all tests.
However, tests are not currently passing on GPU and I cannot test on CPU.
Failing GPU tests are not an issue caused by this update. Tests have not
been passing due to a missing required "six" installation.
OpenCL tests have not been run since commit: 1a1c63a08b
Devices have 3 types and are handled by a new DeviceTypes enum. (The goal
is to revert to Tensor.<type>, but this setup allows for keyword
argument defaults: `device=DeviceType.CPU`.) A standalone sketch of this
pattern follows this entry.
All references to Tensor.GPU/CPU/ANE have been converted to the
corresponding `DeviceTypes` enum.
Refactor of the conversion code to allow conversion from any device to
any device.
* Add six dependency in requirements.txt
* Resolve failure to run tests
Move six into the GPU required installs. Remove six from the standard
installation.
* Remove repeated data conversion
* Refactor method names
Also reduce code with .to and .to_
* Dynamic device handlers
* Refactor DeviceTypes -> Device
* Add mem copy profiling back
* test_backward_pass_diamond_model passing
* Resolve Sum issue on GPU
* Revert batchnorm2d tests
* Update README with updated API
* ANE testing with
* Last minute line gains
2020-12-15 23:44:08 -08:00
George Hotz
b86bbd2e72
readmes
2020-12-13 21:32:20 -08:00
George Hotz
4d8235d5f7
readme update
2020-12-13 20:24:33 -08:00
NeuralLink
1a1c63a08b
GAN is real... Look what tiny just generated! (#192)
...
* mode collapse solved
* add info
* delete unnecessary imports
* readme
2020-12-13 20:23:12 -08:00
George Hotz
f95e79dab7
update readme
2020-12-12 17:14:10 -08:00
George Hotz
a5aced8d47
30 MEGAReLUs. we need to lose 12 lines
2020-12-12 17:07:34 -08:00