Commit Graph

587 Commits

geohotstan
074a67a6eb combine get inputs and type_parse function in onnx (#8069)
* 1 is simpler than 2

* variable name

* change error wording

* shapes for sequence type must be homogeneous
2024-12-06 07:42:35 -05:00
chenyu
5c6ed5dba6 lower test_conv_3x3_256_32_32_256_256 expectation (#8060)
failed https://github.com/tinygrad/tinygrad/actions/runs/12182799887/job/33982676812#step:9:210
2024-12-05 10:30:56 -05:00
George Hotz
20878be2af lower test_gemv_4096_16384 expectations 2024-12-05 12:08:26 +08:00
geohotstan
5ce8090d42 simple onnx_ops cleanups (#8003)
* simple clean ups first

* more work

* kinda have adam

* ooo momentum worked nicely

* almost there

* wow.. is the onnx test wrong

* nicer optim stuff

* just skip that test

* small comment changes

* use naming convention from other parts of codebase

---------

Co-authored-by: chenyu <chenyu@fastmail.com>
2024-12-04 15:33:03 -05:00
chenyu
0693158d28 lower v_theoretical gemv on red (#8042)
tiny7 is still slower https://github.com/tinygrad/tinygrad/actions/runs/12166149038/job/33931736130#step:8:209
2024-12-04 13:59:40 -05:00
George Hotz
08657cb7b0 hotfix: bump expectations in speed_v_theoretical 2024-12-04 19:00:33 +08:00
George Hotz
ea65c79ba2 hotfix: don't spam BEAM debug in speed_v_theoretical 2024-12-04 18:47:16 +08:00
George Hotz
09b00b1b04 hotfix: use kernel timings instead of python timings in speed_v_theoretical 2024-12-04 18:36:17 +08:00
uuuvn
e9c5b23ba1 Use MTLCompiler directly (v2) (#7920)
* Use MTLCompiler directly (v2)

* to_block_literal and REQUEST_TYPE_COMPILE

* Rewrite command encoding

* Revert to_block_literal

* Maybe that's more readable to some people?

* Typo and comment about stdlib caching

* Update ops_metal.py

* Update ops_metal.py

* Update ops_metal.py

---------

Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
2024-12-04 16:36:48 +08:00
George Hotz
09eac42fd6 cache indexed uops in st [pr] (#8008)
* cache indexed uops in st [pr]

* remove arg from range
2024-12-03 21:27:07 +08:00
George Hotz
b8bf5b2787 minor uop speedups [pr] (#8002)
* minor uop cleaner [pr]

* free uop creation speed by removing WeakValueDictionary

* a lil faster

* disable that test

* lines

* and it doesn't print non hit patterns
2024-12-03 17:04:48 +08:00
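The "removing WeakValueDictionary" bullet above refers to a common interning pattern: caching live instances in a `weakref.WeakValueDictionary` so equal constructions return the same object, at the cost of a dict lookup on every creation. A minimal sketch of that trade-off (the `Interned` class and its fields are hypothetical, not tinygrad's actual UOp):

```python
import weakref

class Interned:
    # Cache of live instances keyed by constructor args; entries vanish
    # automatically once the last strong reference dies, so it can't leak.
    _cache: "weakref.WeakValueDictionary[tuple, Interned]" = weakref.WeakValueDictionary()

    def __new__(cls, op: str, arg: int):
        key = (op, arg)
        found = cls._cache.get(key)
        if found is not None:
            return found  # deduplicated: same args, same object
        inst = super().__new__(cls)
        inst.op, inst.arg = op, arg
        cls._cache[key] = inst
        return inst

a = Interned("ADD", 1)
b = Interned("ADD", 1)
assert a is b  # identical args intern to one object
```

Dropping the cache trades that dedup guarantee for faster object creation, which is presumably the speedup the commit measured.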
George Hotz
0905f87b68 hotfix: print only kernel time 2024-12-03 14:25:08 +08:00
chenyu
b91fa24387 script to run regressed sd conv on metal (#7995)
* script to run regressed sd conv on metal

This and other similar `conv2d + add` kernels accounted for most of the speed regression

* # ruff: noqa: E501
2024-12-02 15:34:27 -05:00
qazal
b797aee720 uop global buf number tracking try 2 [pr] (#7912)
* uop buffer init small refactor [pr]

* add early

* this way it doesn't need late

* buffer_num

* itertools.count

* count from 0

* down to 380
2024-12-02 14:45:17 +08:00
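The `buffer_num` / `itertools.count` / "count from 0" bullets above suggest handing each buffer a global, monotonically increasing number at creation time. A minimal sketch of that scheme (the `Buffer` class here is hypothetical, not tinygrad's):

```python
import itertools

# Global counter starting at 0; next() is handed out once per buffer.
_buffer_nums = itertools.count(0)

class Buffer:
    def __init__(self, size: int):
        self.size = size
        self.num = next(_buffer_nums)  # unique, increasing id

bufs = [Buffer(16) for _ in range(3)]
assert [b.num for b in bufs] == [0, 1, 2]
```

`itertools.count` keeps the bookkeeping to one line and needs no explicit increment or lock-free compare-and-swap in single-threaded use.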
George Hotz
cbcc1c20eb second try at block linearize (#7892)
* second try at block linearize

* weeee, works for lil matmul

* it's so beautiful

* test tiny passes

* fix bugs

* combine matching BLOCKENDS

* wrapping

* test lin failures passes

* those failures were fake

* flip sort order

* fix ptx tests

* deal with store better

* dumb ptx fix

* expect less

* reduce lines

* reduce lines

* less lines and cleaner

* no defaultdict

* tighter

* simpler block_parent_count
2024-12-02 13:43:09 +08:00
George Hotz
6c1efb9a72 hotfix: amd gemv was flaky 2024-12-02 11:08:24 +08:00
chenyu
bb23469f93 lower conv threshold on red (#7948) 2024-11-28 13:31:06 -05:00
chenyu
f54508549f don't search conv weight init in speed_v_theoretical (#7943) 2024-11-28 10:03:18 -05:00
geohotstan
cea5853cfa add Tensor.scatter (#7737)
* working I think

* where are my onnx scatter tests??

* forward_only for now

* try if nan hack fix NV

* looks like issue is different... CUDA WHY

* oops that was wrong. Try if this fixes CUDA

* simpler multiply

* actually finish this up tmrw morning :x

* fix tests?

* improve tests

* improve test and implementation

* fix ruff

* complete but lots of expected failure...

* reviewed tests

* add onnx tests

* is this a processing op?

* add return type to indicate that it's not in-place

* final cleanups

* use or and improve tests a little

* add masked_index_select

* call it masked_setitem instead

* try

* FIXED

---------

Co-authored-by: chenyu <chenyu@fastmail.com>
2024-11-27 10:52:04 -05:00
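For readers unfamiliar with scatter semantics, the 1-D case can be sketched in plain Python; this `scatter_1d` helper is an illustration only, not tinygrad's `Tensor.scatter` API:

```python
def scatter_1d(target, index, src):
    # out[index[i]] = src[i]; returns a new list rather than mutating,
    # mirroring the "not in-place" note in the commit above.
    out = list(target)
    for i, idx in enumerate(index):
        out[idx] = src[i]
    return out

assert scatter_1d([0, 0, 0, 0], [3, 1], [9, 7]) == [0, 7, 0, 9]
```

The real op generalizes this to an arbitrary axis of an n-dimensional tensor, with `index` selecting positions along that axis element-wise.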
George Hotz
9d0038bccb small changes from block linearizer [pr] (#7888)
* small changes from block linearizer [pr]

* fix test_gc
2024-11-25 15:27:04 +08:00
chenyu
5c5b1b994c less flaky benchmarks (#7855)
JIT=2 for metal cifar with HALF, and lower tflops for nv test_gemm_4096. Failures in https://github.com/tinygrad/tinygrad/actions/runs/11980239535/job/33404098428?pr=7830
2024-11-22 16:39:39 -05:00
qazal
9828277c03 view doesn't have buffer, fix the tests [pr] (#7841)
* view doesn't have buffer, fix the tests [pr]

* need assigns
2024-11-22 20:41:55 +08:00
George Hotz
e9ae2ccd09 _prg to match _buf [pr] (#7816) 2024-11-21 12:44:48 +08:00
George Hotz
c5d458ce02 BufferSpec and ProgramSpec [pr] (#7814)
* BufferSpec and ProgramSpec [pr]

* delete preallocate, it's unused

* Revert "delete preallocate, it's unused"

This reverts commit dcfcfaccde.
2024-11-21 12:18:05 +08:00
George Hotz
9df5a62c5e unify to HWQueue [pr] (#7812)
* unify to HWCommandQueue [pr]

* all is HWQueue
2024-11-21 10:33:08 +08:00
chenyu
11cea00090 lower vs_theoretical conv tflops threshold for nv (#7811)
less flaky
2024-11-20 20:03:49 -05:00
George Hotz
eb0bb7dc0b final dname to device [pr] (#7806)
* final dname to device [pr]

* oops, fix nv
2024-11-20 20:20:28 +08:00
George Hotz
bc977fec53 dname -> device [pr] (#7804)
* dname -> device [pr]

* a few more

* only one left
2024-11-20 17:57:14 +08:00
George Hotz
d71fe7faa5 rename allocator methods to not conflict [pr] (#7788)
* rename allocator methods to not conflict [pr]

* forgot those

* transfer + offset
2024-11-20 00:10:29 +08:00
qazal
1e31b5ba6b hotfix: ctx doesn't impact process replay [pr] (#7785) 2024-11-19 20:17:01 +08:00
ignaciosica
597a239e28 Remove UnaryOps, BinaryOps, TernaryOps, MetaOps [pr] (#7725)
* remove unaryops

* remove ternaryops

* remove metaops

* hotfix

* remove binaryops

* hotfix: test_pattern_matcher

---------

Co-authored-by: qazal <77887910+Qazalin@users.noreply.github.com>
2024-11-16 20:56:56 +08:00
qazal
e84d089ef1 delete ReduceOps, only use REDUCE_AXIS (#7667) 2024-11-13 19:04:27 +08:00
chenyu
1884f021e3 add conv3x3 to speed_v_theoretical (#7658)
* add conv3x3 to speed_v_theoretical

* show test duration
2024-11-12 16:41:56 -05:00
chenyu
962dafb467 use randn in speed_v_theoretical instead of rand (#7656)
* use randn in speed_v_theoretical instead of rand

this made green gemv 20% faster... but why?

* update threshold
2024-11-12 15:00:32 -05:00
chenyu
6159790ab8 add gemv to speed_v_theoretical (#7654)
* add gemv to speed_v_theoretical

getting ~300GB/s if we just count the memory of inputs and output

* better green numbers

* flip
2024-11-12 11:19:35 -05:00
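The "~300GB/s if we just count the memory of inputs and output" note above is a back-of-envelope effective-bandwidth calculation. A sketch of that accounting (the function name and the example timing are illustrative assumptions, not measured numbers from the test):

```python
def gemv_bandwidth_gbps(m: int, n: int, seconds: float, bytes_per_el: int = 2) -> float:
    # Bytes moved if the matrix (m*n), input vector (n), and output
    # vector (m) are each touched exactly once, in e.g. half precision.
    total_bytes = (m * n + n + m) * bytes_per_el
    return total_bytes / seconds / 1e9

# e.g. a hypothetical 4096x16384 half-precision gemv finishing in 450 us
bw = gemv_bandwidth_gbps(4096, 16384, 450e-6)  # roughly 300 GB/s
```

Since gemv does only two flops per matrix element, it is memory-bound, so bandwidth rather than tflops is the natural "v theoretical" metric here.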
chenyu
99f29e50b2 update speed_v_theoretical numbers (#7647)
better amd numbers after setting compute profile
2024-11-11 20:05:13 -05:00
chenyu
773d5b60bf beam benchmark tests (#7638)
* beam benchmark tests

* lower AMD number somehow

* less flaky
2024-11-11 18:11:18 -05:00
nimlgen
4d81b7952a qcom match texture/sampler descriptors to OpenCL (#7622)
* qcom ioctl compare more regs

* bug fix
2024-11-11 21:56:51 +03:00
chenyu
8ca422e21a script to compare kernel opt with BEAM (#7604)
interesting that on m1 max hcopt beats BEAM 2 about 20% of the time
2024-11-08 17:40:28 -05:00
Harald Schäfer
e7cbc29f48 openpilot benchmark: add cast from numpy to benchmark (#7593)
* openpilot benchmark: add cast from numpy to benchmark

* whitespace

* comment
2024-11-08 19:31:00 +08:00
George Hotz
205befa788 move is_dtype_supported to device [pr] (#7575) 2024-11-07 20:38:03 +08:00
Carl Basho
630a7f37cf update tests (#7554)
Co-authored-by: John Doe <null@mail.com>
Co-authored-by: chenyu <chenyu@fastmail.com>
2024-11-05 11:35:15 -05:00
chenyu
207bca6cea set PAGE_SIZE=1 and generate new dataset (#7559)
13080 rows in total. Both generating and loading this are pretty broken now; filters are wrong, for example.
2024-11-05 11:25:01 -05:00
George Hotz
99bd4372a5 Ops.ALU is no more, the arg is just an op (#7525)
* op arg alu [pr]

* more

* more passing

* fix more tests

* more tests passing

* fix single failing test

* so much cleaner

* noop to not have process replay trigger

* fix ptx
2024-11-05 00:22:22 +08:00
George Hotz
0c19b6298b rename ops to have unique names (#7522) 2024-11-04 17:09:45 +08:00
George Hotz
c8bf09b7d4 s/UOps/Ops (#7500)
* s/UOps/Ops [pr]

* fix
2024-11-03 11:26:10 +08:00
qazal
e955aa1bee hotfix: process replay (#7418) 2024-10-30 22:45:40 +02:00
George Hotz
4e2895f8d2 safe changes from new dtype branch [pr] (#7397)
* safe changes from new dtype branch [pr]

* only image test on GPU
2024-10-30 17:18:48 +08:00
qazal
51c0c8d27e cachable small graph rewrite (#7371) 2024-10-29 22:28:13 +08:00
qazal
e46edc22aa use unittest helpers in TestTensorMetadata [pr] (#7329)
* use unittest helpers in TestTensorMetadata [pr]

* fix that

* 5 args
2024-10-28 18:38:30 +08:00