George Hotz
686a74de92
fast zeros and ones
2023-02-27 06:46:26 -08:00
George Hotz
3a2a500e90
prevent race condition, external yolo test for now
2023-02-26 17:08:24 -08:00
Jacky Lee
0f58c4c648
Cleanup yolo and remove stateless classes (#604)
...
* Add AvgPool2d as a layer
* Clean up a bit
* Remove stateless layers in yolo_nn
* More cleanup
* Save label for test
* Add test for YOLO
* Test without cv2
* Don't fail if cv2 not installed
* Better import
* Fix image read
* Use opencv :)
* Don't download the file
* Fix errors
* Use same version
* Set higher confidence
* Why is the confidence so low?
* Start over
* Remove stateless layers
* Remove extra lines
* Revert changes
* Save a few more lines
2023-02-26 16:55:21 -08:00
George Hotz
1d01842232
remove fake test
2023-02-25 10:21:07 -08:00
George Hotz
8b96522e1d
instant identity removal
2023-02-25 09:46:04 -08:00
voidz
94bec40110
moved extras/jit.py -> tinygrad/jit.py (#599)
...
* moved extras/jit.py to tinygrad/jit.py
* fixed indent
* removed tinygrad.helpers.DEBUG from jit.py
2023-02-25 08:32:33 -08:00
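The move above brings the JIT out of extras and into the main package. A minimal usage sketch, assuming the TinyJit decorator keeps its extras-era shape (Tensor inputs in, a realized Tensor out):

```python
# Hedged sketch: TinyJit imported from its new home in tinygrad/jit.py.
# The decorator API (Tensor-in, realized-Tensor-out) is assumed from its extras-era usage.
from tinygrad.jit import TinyJit
from tinygrad.tensor import Tensor

@TinyJit
def double(x: Tensor) -> Tensor:
  return (x * 2).realize()   # the JIT captures kernels from realized outputs

for _ in range(3):           # later calls replay the captured kernels
  out = double(Tensor.randn(4, 4))
```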
George Hotz
2c5e13a513
Reluless (#600)
...
* replace relu with maximum
* fix for other backend
* clean up RELU and GT0
* tests for maximum
* had to clean that up
* why reverse a maximum?
2023-02-25 01:21:16 -08:00
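The Reluless change above expresses relu through maximum instead of a dedicated RELU op. A minimal sketch of the equivalence, assuming Tensor.maximum broadcasts against a Python scalar:

```python
# Hedged sketch of relu(x) == maximum(x, 0); scalar broadcasting is assumed.
from tinygrad.tensor import Tensor

x = Tensor([-2.0, -0.5, 0.0, 1.5])
print(x.relu().numpy())       # negative entries clamped to 0, 1.5 kept
print(x.maximum(0).numpy())   # same result, written as an elementwise max
```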
George Hotz
f3386c7f09
improve symbolic, hlop conv output is simple now
2023-02-24 22:20:40 -08:00
George Hotz
f8f026e8bb
oversized expand for HLOP convs
2023-02-24 21:48:47 -08:00
George Hotz
2edfe64512
improve shapetracker tests
2023-02-24 21:07:53 -08:00
George Hotz
da5643d024
rest of tests should be made to pass
2023-02-24 12:52:23 -08:00
George Hotz
85452fbaf3
onnx 58/109/208
2023-02-24 12:19:05 -08:00
George Hotz
e8a153e4e9
onnx: add a whole bunch of ops
2023-02-24 12:00:03 -08:00
George Hotz
f2486a7248
more onnx ops
2023-02-24 10:55:58 -08:00
George Hotz
4d0a3dd653
openpilot expand is bugged
2023-02-24 10:25:59 -08:00
George Hotz
2e56a4793e
rename log_softmax, support dim, fix onnx Softmax
2023-02-24 10:11:24 -08:00
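A small sketch of the renamed op above, assuming the new spelling Tensor.log_softmax and that the added dimension argument selects the reduction axis:

```python
# Hedged sketch: log_softmax under its new name, with the axis/dim argument assumed.
from tinygrad.tensor import Tensor

x = Tensor.randn(2, 10)
probs = x.log_softmax(1).exp()     # softmax over axis 1, via the renamed op
print(probs.sum(axis=1).numpy())   # each row should sum to ~1.0
```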
George Hotz
5cdfeffe2c
fix shape test
2023-02-24 09:36:32 -08:00
George Hotz
e263c0c628
onnx: another model test is passing
2023-02-24 09:22:58 -08:00
George Hotz
d3feea302d
much cleaner way to write onnx ops
2023-02-24 08:46:28 -08:00
George Hotz
f6d946853c
more bugfixes
2023-02-24 00:21:29 -08:00
George Hotz
b1b2d8f440
onnx: some op tests working
2023-02-23 23:58:13 -08:00
George Hotz
2d59b25ead
onnx backend test: enable only the model tests
2023-02-23 22:36:26 -08:00
George Hotz
5b10dfcab8
onnx tests: 22/175/208
2023-02-23 22:00:16 -08:00
George Hotz
d8b6f241f1
external_test_onnx_backend
2023-02-23 21:55:07 -08:00
George Hotz
758515dcc0
conv2d is an hlop (#589)
...
* conv2d is an hlop
* shorter conv
* KOPT=-1
* alt imp
* MULACC
* smarter mulacc
* pop conv
* 7x7 -> 5x5
* didn't fix, that's not going to work
* this is faster and matches old behavior
* oh, non-lazy just won't work with mulacc
* mulacc in torch
* bool types were creeping in
* optimizer is actually better with hlop conv
* fix pushing permutes issue
* refactor einsum_mulacc
* fix up readme
* update readme
* _image_conv2d
* fix bias addition location
* pushing permutes gets back to 200 kernels
* conv cleanup
* disable hlop conv
* don't hide that in helpers
2023-02-23 17:52:31 -08:00
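The hlop-conv work above (and its MULACC bullet) amounts to building convolution out of movement ops plus a multiply-accumulate. A conceptual illustration of that decomposition in plain NumPy, not the tinygrad diff itself:

```python
# Conceptual illustration only (plain NumPy): convolution as
# "gather shifted windows, multiply by the kernel, accumulate" -- the hlop + MULACC idea.
import numpy as np

def conv2d_as_mulacc(x, w):
  # x: (N, Cin, H, W), w: (Cout, Cin, kH, kW); stride 1, no padding assumed
  N, Cin, H, W = x.shape
  Cout, _, kH, kW = w.shape
  oh, ow = H - kH + 1, W - kW + 1
  # movement-op flavored step: stack the kH*kW shifted views of the input
  windows = np.stack([x[:, :, i:i+oh, j:j+ow] for i in range(kH) for j in range(kW)],
                     axis=-1).reshape(N, Cin, oh, ow, kH, kW)
  # the MULACC step: elementwise multiply with the kernel, then sum-reduce
  return np.einsum("nchwij,ocij->nohw", windows, w)

x, w = np.random.randn(1, 3, 8, 8), np.random.randn(4, 3, 3, 3)
print(conv2d_as_mulacc(x, w).shape)  # -> (1, 4, 6, 6)
```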
George Hotz
ab3a2ae9a2
fix test_resnet in onnx now that maxpool works
2023-02-23 08:41:47 -08:00
George Hotz
fd6082dcef
support all _pool2d. conv will eventually be an hlop
2023-02-23 08:19:47 -08:00
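A short usage sketch for the pooling hlops referenced above (the stride argument comes from the avg/max pool strides commit further down this log); the kernel_size/stride keyword names are assumed:

```python
# Hedged sketch of the pooling hlops; keyword names kernel_size/stride are assumed.
from tinygrad.tensor import Tensor

x = Tensor.randn(1, 3, 8, 8)                             # NCHW input
print(x.max_pool2d(kernel_size=(2, 2)).shape)            # -> (1, 3, 4, 4)
print(x.avg_pool2d(kernel_size=(2, 2), stride=2).shape)  # -> (1, 3, 4, 4)
```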
Mischa Untaga
5190784cbb
Fix Tensor random functions determinism with same seed (#580)
...
* fix Tensor random functions determinism with same seed
* long lived rng
* TIL ClassVar typing
2023-02-22 19:08:43 -08:00
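A hedged sketch of what the determinism fix above guarantees, assuming Tensor.manual_seed (re)seeds the long-lived RNG the commit body mentions:

```python
# Hedged sketch: same seed -> same samples, assuming Tensor.manual_seed drives the RNG.
from tinygrad.tensor import Tensor

Tensor.manual_seed(1337)
a = Tensor.randn(3).numpy()
Tensor.manual_seed(1337)
b = Tensor.randn(3).numpy()
assert (a == b).all()   # identical draws after reseeding
```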
George Hotz
c8d89eb20e
avg/max pool strides
2023-02-22 18:00:48 -08:00
George Hotz
628ce067a1
add tests to mypy
2023-02-22 07:07:38 -08:00
George Hotz
104c3c5e73
oops, forgot that debug
2023-02-22 06:58:27 -08:00
Connor Henderson
9670bf1fd1
Add unsqueeze (#574)
...
* Add unsqueeze
* remove UNSQUEEZE from llops part of readme
* make it an hlop
2023-02-20 20:14:59 -08:00
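A quick sketch of the new hlop above, assuming Tensor.unsqueeze(dim) inserts a size-1 dimension like its torch namesake:

```python
# Hedged sketch of unsqueeze: insert a size-1 dimension at the given position.
from tinygrad.tensor import Tensor

x = Tensor.randn(3, 4)
print(x.unsqueeze(0).shape)  # -> (1, 3, 4)
print(x.unsqueeze(1).shape)  # -> (3, 1, 4)
```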
George Hotz
60008e55cd
sick of that failing
2023-02-19 13:05:37 -08:00
Martin Loretz
7e9a5e3f31
Refactor graph (#560)
...
* Refactor graph
* Add graph tests
* Use CPUBuffer for graph tests
* Remove the use of GlobalCounters
2023-02-19 10:41:30 -08:00
Kirill
7944cfdadc
Remove Tensor.data (#565)
2023-02-18 16:36:12 -08:00
Jacky Lee
9fd41632c6
Import get_parameters from tinygrad.nn (#559)
...
* get_parameter is in optim
* Update all imports for get_parameters
* Clean up
* use optim.get_parameters
2023-02-17 15:22:26 -08:00
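A hedged sketch of the import the commit above settles on, assuming get_parameters lives under tinygrad.nn.optim at this point in the tree:

```python
# Hedged sketch: pulling model parameters via optim.get_parameters, as the commit body settles on.
from tinygrad.tensor import Tensor
from tinygrad.nn import optim

class TinyNet:
  def __init__(self):
    self.w = Tensor.randn(10, 2)
  def forward(self, x):
    return x.dot(self.w)

opt = optim.SGD(optim.get_parameters(TinyNet()), lr=0.01)
```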
George Hotz
fae7654924
fix sync issue
2023-02-17 12:42:45 -08:00
George Hotz
5e6265be6e
metal timing, fix speed test
2023-02-17 12:31:54 -08:00
George Hotz
121bd03cbd
metal globalcounters
2023-02-17 12:02:54 -08:00
Jacky Lee
e172f0087a
BatchNorm2D -> BatchNorm2d (#558)
...
* BatchNorm2D -> BatchNorm2d
* Fix typo
2023-02-16 12:31:49 -08:00
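After the rename above the layer is spelled BatchNorm2d; a minimal sketch, with the tinygrad.nn import path assumed:

```python
# Hedged sketch of the renamed layer (BatchNorm2D -> BatchNorm2d); import path assumed.
from tinygrad.nn import BatchNorm2d
from tinygrad.tensor import Tensor

bn = BatchNorm2d(16)
y = bn(Tensor.randn(2, 16, 8, 8))   # NCHW: (batch, channels, H, W)
print(y.shape)                      # -> (2, 16, 8, 8)
```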
George Hotz
20a03d5017
woah, don't sync torch if it's not torch
2023-02-12 07:48:56 -08:00
George Hotz
de71c13934
test speed v torch uses jit
2023-02-12 07:43:17 -08:00
George Hotz
446442dbb3
fix tests symbolic
2023-02-11 15:16:47 -08:00
George Hotz
7a7046f264
sum_combine_num
2023-02-11 14:48:31 -08:00
George Hotz
7d33f2d659
CL.CACHE is over, GlobalCounters.cache is it
2023-02-11 12:00:14 -08:00
George Hotz
0a2035e015
oops, GPU isn't defined
2023-02-11 10:10:02 -08:00
George Hotz
3421d4af10
the jit has a test
2023-02-11 10:04:03 -08:00
George Hotz
b9f02671d3
oops, broke torch speed test
2023-02-10 16:13:53 -06:00
Jacky Lee
5c51ae8dbf
Show where tinygrad is faster in speed test vs torch (#549)
...
* show where tinygrad is faster
* don't change text color
2023-02-10 14:01:07 -06:00
George Hotz
c3cf17c6d0
Symbolic render (#550)
...
* render symbolic
* valid
* fix shapetracker tests
* render_python is the default
* expr is gone
* remove legacy behavior
2023-02-10 13:22:26 -06:00
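A heavily hedged sketch of what the Symbolic render PR above describes: symbolic index Nodes get a render() method whose default is the Python renderer. The import path shown is an assumption (the symbolic module has moved around in the tree):

```python
# Hedged sketch: rendering a symbolic index expression to a Python-syntax string.
# Import path is an assumption; Variable(name, min, max) and Node.render() are the
# pieces the PR bullets describe ("render symbolic", "render_python is the default").
from tinygrad.shape.symbolic import Variable

idx = Variable("idx", 0, 127)
print(((idx * 4) + 2).render())   # e.g. "((idx*4)+2)"
```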