Commit Graph

1088 Commits

George Hotz
7a1d96fd76 No negative (#632)
* behavior is correct without VALIDHACKS

* simple div and mod

* fix tests

* no negative variables

* alt form is correct

* still correct

* bug in mulnode

* at least validhacks works now

* cleanups

* test validhacks, and to_image_idx

* cache compare key

* tests and __neg__
2023-03-03 16:48:14 -08:00
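For context on #632 above: the div/mod simplifications only become safe once every symbolic variable is known to be non-negative and bounded. A minimal, self-contained sketch of that property (illustrative only, not tinygrad's symbolic code):

```python
# Illustrative sketch only -- not tinygrad's symbolic.py. It shows the property the
# commit leans on: once variables are known non-negative and bounded, floor
# division of an affine index expression can be folded symbolically.

class Var:
    def __init__(self, name, vmin, vmax):
        assert 0 <= vmin <= vmax, "variables are assumed non-negative"
        self.name, self.vmin, self.vmax = name, vmin, vmax

def fold_div(x, mul, rem, divisor):
    """Try to simplify (x*mul + rem) // divisor to a single term."""
    if mul % divisor == 0 and rem.vmin >= 0 and rem.vmax < divisor:
        # rem can never reach the divisor, so it drops out of the quotient
        return f"{x.name}*{mul // divisor}"
    return None  # not safe to fold: rem can reach the divisor

x, y = Var("x", 0, 127), Var("y", 0, 3)
print(fold_div(x, 4, y, 4))               # "x*1": (x*4 + y) // 4 == x when 0 <= y < 4
print(fold_div(x, 4, Var("z", 0, 7), 4))  # None: z can exceed the divisor
```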
George Hotz
999b44c274 fix external test + speed 2023-03-03 06:46:16 -08:00
George Hotz
459488bba2 fix linter (#630)
* fix linter

* no imports okay

* explicit bases

* disable in pylintrc
2023-03-02 20:06:20 -08:00
George Hotz
bfcec234a2 Refactor ASTs (#622)
* ugh worst branch name

* compiler refactor continues

* scc -> cloc

* buf -> _buf

* finish _buf, and program -> runtime

* gpu is still working, clang isn't

* clang in new style

* ops_metal

* something broke it

* improve metal

* clean up tons of cl crap

* hack fix sync

* cleaner gpu

* gpu metal clang

* cleanups

* minor refactor

* GPUCodegen

* fix up LLVM

* blind CUDA refactor

* codegen / runtime

* keep ops naming

* linter passes

* woah, llvm was allocing 4x what it needed to

* bugfixes

* fix openpilot compiler

* fix compile_efficientnet

* method cache should fix tests

* deal with duped functions
2023-03-01 18:57:29 -08:00
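The "codegen / runtime" split this refactor moves toward can be illustrated with a toy example (not the tinygrad code): a Codegen object only renders source for a kernel, and a Runtime object only compiles and executes it. The sketch below emits plain Python so it runs without any device; a GPU backend would render OpenCL, Metal, or CUDA source instead.

```python
# Toy illustration of a codegen/runtime split (not tinygrad's classes).

class Codegen:
    def __init__(self, op, n):
        self.op, self.n = op, n
    def render(self):
        # emit plain Python source for an elementwise kernel
        return (f"def kernel(out, a, b):\n"
                f"    for i in range({self.n}):\n"
                f"        out[i] = a[i] {self.op} b[i]\n")

class Runtime:
    def __init__(self, src):
        scope = {}
        exec(compile(src, "<generated>", "exec"), scope)  # the "compile" step
        self.fxn = scope["kernel"]
    def __call__(self, *bufs):
        return self.fxn(*bufs)

src = Codegen("+", 4).render()
out, a, b = [0] * 4, [1, 2, 3, 4], [10, 20, 30, 40]
Runtime(src)(out, a, b)
print(out)   # [11, 22, 33, 44]
```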
George Hotz
3c8da6bd03 add typing 2023-02-28 10:54:46 -08:00
George Hotz
d584bae5c0 fine, openpilot can have 197 kernels 2023-02-27 11:48:36 -08:00
George Hotz
c9252d38b2 mypy cache breaks if you sometimes check untyped defs, no checking tests for now 2023-02-27 09:57:33 -08:00
George Hotz
e74779f19d typing fixup 2023-02-27 09:52:04 -08:00
George Hotz
edc8fbfff2 woah, why isn't OPT=2 2023-02-27 08:03:31 -08:00
George Hotz
f4ee7d2cad back to 196 kernels 2023-02-25 18:25:34 -08:00
George Hotz
6e98a172a0 fix broken contiguous 2023-02-25 17:41:49 -08:00
George Hotz
a44e8e4385 discard children on mop shuffle, 200 -> 196 kernels 2023-02-25 10:51:07 -08:00
George Hotz
758515dcc0 conv2d is an hlop (#589)
* conv2d is an hlop

* shorter conv

* KOPT=-1

* alt imp

* MULACC

* smarter mulacc

* pop conv

* 7x7 -> 5x5

* didn't fix, that's not going to work

* this is faster and matches old behavior

* oh, non lazy just won't work with mulacc

* mulacc in torch

* bool types were creeping in

* optimizer is actually better with hlop conv

* fix pushing permutes issue

* refactor einsum_mulacc

* fix up readme

* update readme

* _image_conv2d

* fix bias addition location

* pushing permutes gets back to 200 kernels

* conv cleanup

* disable hlop conv

* don't hide that in helpers
2023-02-23 17:52:31 -08:00
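The "conv2d is an hlop" idea — building the convolution out of movement ops plus a single multiply-accumulate rather than a dedicated low-level op — can be sketched in numpy (illustrative only, not tinygrad's actual implementation):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv2d_hlop(x, w):
    # x: (N, Cin, H, W), w: (Cout, Cin, KH, KW); stride 1, no padding
    kh, kw = w.shape[2], w.shape[3]
    # movement op: windows has shape (N, Cin, OH, OW, KH, KW)
    windows = sliding_window_view(x, (kh, kw), axis=(2, 3))
    # multiply-accumulate: broadcast multiply, then reduce over Cin, KH, KW
    return np.einsum("nchwij,ocij->nohw", windows, w)

x = np.random.randn(1, 3, 8, 8).astype(np.float32)
w = np.random.randn(4, 3, 3, 3).astype(np.float32)
print(conv2d_hlop(x, w).shape)   # (1, 4, 6, 6)
```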
George Hotz
628ce067a1 add tests to mypy 2023-02-22 07:07:38 -08:00
George Hotz
714bf4b108 clang backend (#572)
* start clang backend

* mostly working

* no group for reduce w clang

* it compiles

* compiles

* a11y

* minor fixups

* formatting

* add a test

* rename test
2023-02-20 18:18:18 -08:00
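A clang-style CPU backend boils down to: render C source, compile it to a shared object, and call the symbol through ctypes. A minimal sketch of that flow (illustrative only; it assumes a clang binary is on PATH and is not the code added in #572):

```python
import ctypes, os, subprocess, tempfile

src = """
void add(float *out, const float *a, const float *b, int n) {
  for (int i = 0; i < n; i++) out[i] = a[i] + b[i];
}
"""

with tempfile.TemporaryDirectory() as d:
    c_path, so_path = os.path.join(d, "kernel.c"), os.path.join(d, "kernel.so")
    with open(c_path, "w") as f:
        f.write(src)
    # compile the rendered source to a shared object
    subprocess.check_call(["clang", "-shared", "-O2", "-fPIC", "-o", so_path, c_path])
    lib = ctypes.CDLL(so_path)

    n = 4
    Buf = ctypes.c_float * n
    out, a, b = Buf(), Buf(1, 2, 3, 4), Buf(10, 20, 30, 40)
    lib.add(out, a, b, ctypes.c_int(n))
    print(list(out))   # [11.0, 22.0, 33.0, 44.0]
```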
James Roberts
0d405fd5bc Parallelize CI tests (#535) 2023-02-06 15:27:44 -06:00
George Hotz
90529d3750 tests are 20% faster (#529)
* pytorch CPU

* no cache, it's slower

* pytorch cpu for real

* remove double onnx
2023-02-06 09:56:14 -06:00
George Hotz
6eb0e6a650 shuffle deps: always tqdm, make linting category 2023-02-06 09:27:01 -06:00
George Hotz
1d80639646 make linter test install testing deps 2023-02-06 09:21:48 -06:00
George Hotz
60bb64811c merge mypy into linters, no useless package update 2023-02-06 09:14:00 -06:00
Martin Loretz
97f0a82be7 Cache pip packages in github actions (#522)
* Cache pip dependencies in github actions

* Add setup.py as cache-dependency-path

* Test caching

* Test caching

* Upgrade setup python action

* Test caching

* Remove setup.py from cache-dependency-path

* Don't remove cache-dependency-path

* Don't cache linter package's

* Test caching

* Test caching

* Test caching

* Upgrade actions/checkout to v3
2023-02-03 20:04:20 -08:00
George Hotz
e313c8af20 update openpilot tests from OPENCL to GPU 2023-01-24 14:05:20 -08:00
George Hotz
49c6e6d472 Latest attempt to add image (#462)
* add image

* load + store + boring stuff:

* image tests pass

* thneed print GFLOPS

* op conv test

* more debugging

* hack for multiview image

* shapetracker creates less views

* disable image tests

* working better

* ugh, lkey not key

* print in DEBUG, and allow views

* works

* simple padding conv2d

* use index for image

* that was bad code

* debug print

* fix types

* less lines

* save lines
2023-01-12 17:36:30 -08:00
George Hotz
27211103ae docker: no -it 2023-01-09 12:49:59 -08:00
George Hotz
d6e86a29a8 docker: forgot to checkout code 2023-01-09 12:48:03 -08:00
George Hotz
73ce9a771e that fix it 2023-01-09 12:46:33 -08:00
George Hotz
bfd4f4e35c testdocker 2023-01-09 12:41:52 -08:00
George Hotz
5e07d4669d the speedy chonker is going to replace the old chonker (#432)
* bringing back reshape and permute

* done with E701

* 4x4 works in generic way

* max and sum not vectorizing...

* special case single float

* support comparing to MPS

* improve matmul speed, consider generic principles

* GlobalCounter

* fix op tracking

* faster

* comment that out for now

* err, it needs that

* fix minor issues

* fix global_mem
2022-11-11 18:34:24 -08:00
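The GlobalCounter mentioned above is bookkeeping: each kernel launch adds its op and byte counts to global totals so overall throughput can be reported. A toy version of that idea (not the real tinygrad class):

```python
import time

class GlobalCounters:
    global_ops = 0   # floating point ops issued so far
    global_mem = 0   # bytes read + written so far

def run_kernel(fxn, ops, mem_bytes, *args):
    GlobalCounters.global_ops += ops
    GlobalCounters.global_mem += mem_bytes
    return fxn(*args)

N = 128
A = [[1.0] * N for _ in range(N)]
B = [[2.0] * N for _ in range(N)]
st = time.perf_counter()
# a (slow) pure-Python matmul: 2*N^3 flops, 3 buffers of N*N float32 touched
run_kernel(lambda: [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
                    for i in range(N)], 2 * N**3, 3 * N * N * 4)
dt = time.perf_counter() - st
print(f"{GlobalCounters.global_ops / dt / 1e9:.4f} GFLOPS, "
      f"{GlobalCounters.global_mem / 1e6:.2f} MB moved")
```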
George Hotz
b8c94a67c9 Simple chonker (#431)
* chonker will make llvm fast

* work

* better speed tests, we will make them fast

* with the cache add is the same speed

* relu and neg are fast

* fix sum speed

* maximum maxnum?

* hack for gemm opt

* gemm very slow

* zeros like

* test_permute

* shapetracker returns self

* fix shapetracker factorization

* err, int strides

* permutes are faster now in tinygrad than pytorch

* support -1 in expand

* gemm unrolled

* improve final test case

* WIP GEMM

* why isn't GEMM fast?

* revert cache dim

* ffp contract works on clang, not llvm?

* ignore llvm ir

* this makes fma work at least, but no faster

* USE_4x4

* 63 GFLOPS

* 87 GFLOPS

* that wasn't matmul, 44 GFLOPS now

* 82 GFLOPS permuted

* this permute too

* a little speed for the convs

* 45 GFLOPS

* speed tests pass again

* clean up prints

* fix FMA WHAT A WASTE OF TIME

* colors

* moar fair

* GPU

* useless on chonker

* cleanups

* improve factorized shapetracker

* better threshold

* label conv

* work

* ops test pass again

* hot load the index

* run the last view, no need to create

* ZeroView needs a repr for the key to work

* fix segfault on out of bounds

* one more test

* start amx, and llvm.initialize_native_asmparser

* amx works

* nice AMX class

* nicer AMX class

* refactor get_idxs

* amx working

* is slower...

* useless flip

* cache

* SZ_X

* AMX_SZ_X/Y work alone

* Contiguous mlop

* test gemm packed

* PREPARE in packed

* use_amx factor

* prefetch isn't faster

* loop

* same 3ms

* 2.24 ms

* allow double on store in TG

* amx reduce is the same speed as non amx reduce

* include memory bandwidth

* clean up shapetracker

* flip returns stride

* prepare for upstream

* Update ops_llvm.py (#426)

* permutes are yellow and green now

* faster conv

* llvm cleanups

* Show optimised IR under debug 4 (#428)

* ASTKernel class

* Make tinygrad work with older python version (#427)

* Make tinygrad work with older python version

* Use partialmethod instead of partial

* smiple chonker is chonking

* remove junk from test speed vs torch

* fix linker and types

* AMX is only here now

* add LLVM tests, it's a valid backend now

* oops, run llvm test

* contiguous_op

* fix loadops compare

* dedup reduceops

Co-authored-by: calledit <1573053+calledit@users.noreply.github.com>
2022-11-10 23:17:09 -08:00
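GEMM GFLOPS figures like the ones quoted in these bullets are conventionally computed as 2*N^3 floating-point ops divided by wall time, and the "include memory bandwidth" bullet adds the bytes moved. A small numpy sketch of that measurement, useful as a reference point (it is not the AMX kernel itself):

```python
import time
import numpy as np

N = 1024
a = np.random.randn(N, N).astype(np.float32)
b = np.random.randn(N, N).astype(np.float32)
a @ b                           # warm up BLAS / thread pool
st = time.perf_counter()
c = a @ b
dt = time.perf_counter() - st

flops = 2 * N**3                # one multiply + one add per inner-product element
bytes_moved = 3 * N * N * 4     # read a, read b, write c (float32)
print(f"{flops / dt / 1e9:6.1f} GFLOPS, {bytes_moved / dt / 1e9:5.2f} GB/s effective")
```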
Liam
8dc28dd733 Create python-publish.yml (#163) 2022-11-08 08:45:01 -08:00
George Hotz
d02f8f9bc0 can we lose the lines with E701 still there? 2022-10-28 08:36:03 -07:00
George Hotz
ef62db3186 cleanups, remove E701 2022-10-28 08:28:56 -07:00
George Hotz
b65b70812a Exec AST (#404)
* working exec ast

* exec_ast is staticmethod

* GenericExecAST

* fold that sometimes

* ExplicitExecAST

* exec_ast for GPU

* gpu working

* get_lazyop_shape

* now gpubuffer is ExplicitExecAST

* dedup

* add a type

* RESHAPE in opencl code

* fix linter

* that too for linter

* cleanups

* remove dead code

* GenericShape is less lines

* add ALLOWED_KERNEL_COUNT to tests

* fix mypy

* that's gotta be recursive

* fix opencl shape processing

* remove unneeded lambda
2022-10-28 08:27:03 -07:00
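The exec_ast pattern in #404: a lazy op is a tree of (op, sources), and a buffer class realizes it by recursing into the sources and then applying the op to concrete data. A toy numpy version (names borrowed from the bullets above; the implementation is purely illustrative):

```python
from collections import namedtuple
import numpy as np

LazyOp = namedtuple("LazyOp", ["op", "src"])   # src holds LazyOps or realized buffers

class NumpyBuffer:
    OPS = {"ADD": np.add, "MUL": np.multiply, "NEG": np.negative}

    def __init__(self, data):
        self.data = np.asarray(data, dtype=np.float32)

    @staticmethod
    def exec_ast(ast):
        if isinstance(ast, NumpyBuffer):           # leaf: already realized
            return ast
        srcs = [NumpyBuffer.exec_ast(s) for s in ast.src]   # gotta be recursive
        return NumpyBuffer(NumpyBuffer.OPS[ast.op](*[s.data for s in srcs]))

a, b = NumpyBuffer([1, 2, 3]), NumpyBuffer([10, 20, 30])
ast = LazyOp("MUL", (LazyOp("ADD", (a, b)), LazyOp("NEG", (a,))))
print(NumpyBuffer.exec_ast(ast).data)   # (a+b) * (-a) -> [-11. -44. -99.]
```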
George Hotz
3b9b7eda48 remove run_thneed dead code 2022-10-20 17:24:18 -07:00
YassineYousfi
ae0f9b17df openpilot: new models and onnx ops (#401)
* ngrl stuff

* fngrl

* fix typo in compile script

* workflow dispatch

* new models in tests

* dont need to up this threshold

Co-authored-by: HaraldSchafer <harald.the.engineer@gmail.com>
2022-10-20 11:49:19 -07:00
George Hotz
8382c51c12 always MATMUL, test the ops in OPENCL 2022-10-01 13:31:29 -04:00
George Hotz
e737513c52 external_test_opt 2022-09-28 23:29:41 -04:00
George Hotz
f215534a64 1100 lines, but sane linter rules 2022-09-06 13:47:45 -07:00
George Hotz
fa33ba716e flip that 2022-08-28 11:27:33 -07:00
George Hotz
227937d6d2 fix test maybe 2022-08-22 09:50:14 -07:00
George Hotz
11626053b0 run_thneed with test 2022-08-22 09:45:46 -07:00
George Hotz
0aca848d10 enable the openpilot test 2022-08-21 12:05:48 -07:00
George Hotz
a8734df030 add openpilot tests to tinygrad 2022-08-21 12:03:37 -07:00
George Hotz
ccd539e93e maybe that's a better way to do this 2022-08-20 08:25:12 -07:00
George Hotz
b7d782b921 hmm, with the new reduce, we have to opt 3 for memory usage 2022-08-20 08:16:15 -07:00
George Hotz
b132de677d tinygrad.nn (#367)
* tinygrad.nn

* flake8

* working on pylint

* more pylint

* more pylint

* pylint passes

* networkx

* mypy can't infer that type

* junk
2022-08-18 07:41:00 -07:00
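A module like tinygrad.nn collects small layer classes that own their parameters and expose a call. A generic numpy sketch of that shape of API (the Linear layer here is a made-up example, not necessarily what the module contained at this commit):

```python
import numpy as np

class Linear:
    def __init__(self, in_features, out_features):
        bound = 1.0 / np.sqrt(in_features)
        self.weight = np.random.uniform(-bound, bound,
                                        (out_features, in_features)).astype(np.float32)
        self.bias = np.zeros(out_features, dtype=np.float32)
    def __call__(self, x):
        return x @ self.weight.T + self.bias

layer = Linear(8, 4)
print(layer(np.ones((2, 8), dtype=np.float32)).shape)   # (2, 4)
```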
George Hotz
ef1100fdff touchups 2022-07-19 09:30:06 -07:00
George Hotz
ef4afdb5d2 tests maybe 2022-07-18 08:24:14 -07:00
George Hotz
a2c4bcf313 disable opencl tests 2022-07-18 08:17:21 -07:00
George Hotz
762e859089 testopencl 2022-07-17 11:56:40 -07:00