Commit Graph

7115 Commits

Author SHA1 Message Date
qazal
0356657ced move view_supported_devices to device [pr] (#8085) 2024-12-06 16:44:15 +02:00
Ahmed Harmouche
fad3eaa35e Use atomicLoad builtin when loading atomic type (#8084) 2024-12-06 15:33:11 +01:00
qazal
79966fade0 free up lines for const_arg [pr] (#8083) 2024-12-06 16:28:51 +02:00
Ahmed Harmouche
ba35c4138b Use matching JS TypedArray for buffer dtype (#8080) 2024-12-06 14:52:23 +01:00
geohotstan
a684d72e55 add ceil_mode for avg_pool and max_pool (#7579)
* wip pool

* check CI for remove alternative implementation

* Revert "check CI for remove alternative implementation"

This reverts commit 7b1bb900e5.

* fix test

* tests tests tests

* slap a resolve on it

* fix comment

* a little simpler pool

* check CI for removal again

* Revert "check CI for removal again"

This reverts commit be798b7857.

* small

* update

* some ez tests

* english

* clean up code

* fix ruff

* how did I +25 lines?

* small clean ups

* moar clean ups

* try test_avgpool2d_failure2 in CI

* final clean up

* exclude bug fix

* avg underscore pool

* no more edge case stuff

* add better comments for explanation

* add test cases for decreasing end padding

* address feedback

* improve test coverage

* tiny more polish as we wait for lines :D

* more readable code ordering

* add to documentation

* oops

* set to False instead

---------

Co-authored-by: chenyu <chenyu@fastmail.com>
2024-12-06 08:34:14 -05:00
chenyu
b73d9a7d24 Revert "combine get inputs and type_parse function in onnx (#8069)" (#8079)
This reverts commit 074a67a6eb.
2024-12-06 08:04:21 -05:00
Sieds Lykles
c8313a3669 Cleaner rule for mul/idiv by power of two [pr] (#8076)
* Cleaner rule for mul/idiv by power of two

* Change comment
2024-12-06 08:02:24 -05:00
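The rule above is an instance of classic strength reduction; a minimal sketch of the arithmetic it relies on (illustrative helpers, not tinygrad's actual pattern code):

```python
# For a power-of-two constant c = 2**k, multiplication and floor division
# reduce to shifts. Python shifts are arithmetic and // floors, so the
# identities hold for ints of either sign.
def is_power_of_two(c: int) -> bool:
  return c > 0 and (c & (c - 1)) == 0

def mul_pow2(x: int, c: int) -> int:
  assert is_power_of_two(c)
  return x << (c.bit_length() - 1)   # x * c

def idiv_pow2(x: int, c: int) -> int:
  assert is_power_of_two(c)
  return x >> (c.bit_length() - 1)   # x // c
```

A simplifier only needs to check that the constant is an exact power of two before rewriting; `c.bit_length() - 1` recovers `k`.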
chenyu
a77ee72d11 clean up reshape size check [pr] (#8067)
removed a resolve, and removed the special case for the 0-size assert since it's covered by the generic size check
2024-12-06 07:51:19 -05:00
geohotstan
074a67a6eb combine get inputs and type_parse function in onnx (#8069)
* 1 is simpler than 2

* variable name

* change error wording

* shapes for sequence type must be homogeneous
2024-12-06 07:42:35 -05:00
nimlgen
c0240855b9 qcom has not transfer (#8075)
* qcom alloc is not hcq alloc

* maybe base?

* test
2024-12-06 14:45:01 +03:00
Ahmed Harmouche
ce72fe1411 u32 to f16 in tinygrad (#8074)
* f16 decompression in tinygrad

* Typing and cleanup
2024-12-06 12:00:13 +01:00
George Hotz
e37bff6c19 fix bug in jit prune with copy [pr] (#8073) 2024-12-06 18:38:23 +08:00
George Hotz
aae8557ada test copy inside jit [pr] (#8072) 2024-12-06 17:51:50 +08:00
George Hotz
e2fe7f0d2f hotfix: actually fix pylint, it's a python 3.10 issue 2024-12-06 13:53:46 +08:00
George Hotz
b28d660172 update self_tokenize, fix pylint maybe 2024-12-06 13:49:41 +08:00
George Hotz
344fd4845c example: self_tokenize. someday tinygrad will be recursively self improving 2024-12-06 13:35:02 +08:00
JaSpa99
3c5d5f9414 mypy==1.13.0 (#7990)
* explicit instantiation and narrowing asserts

* explicit cast

* bump

* one line assert

* handle case for no copy_queue_t

* Revert "handle case for no copy_queue_t"

This reverts commit 38347806ca.

* more readable control flow

---------

Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
2024-12-06 12:09:14 +08:00
leopf
65b6696f3b refactor safe_load (#8035)
* refactor safe_load

* cleanup
2024-12-06 12:08:21 +08:00
chenyu
e7d5fe4a32 improve idiv _min_max (#8066)
for the cases where we don't know the exact bounds, we might still know the sign. with this, we can remove some resolves for the symbolic shapetracker
2024-12-05 23:02:16 -05:00
chenyu
13b954f22c unify expand conditions [pr] (#8065)
same condition (check if old == new or old == 1) in tensor and view. also renamed _pad_left to _align_left because it's not really a pad
2024-12-05 21:40:14 -05:00
chenyu
aefdff4ef5 reshape mask cleanups [pr] (#8064)
don't need canonicalize_st because we always merge 1 in `_merge_dims`
2024-12-05 20:20:43 -05:00
chenyu
05dba6e4ee minor to_indexed_uops cleanup [pr] (#8063) 2024-12-05 17:15:03 -05:00
chenyu
b2dd703592 fix typing of UOp.range [pr] (#8062)
start/end should not be float or bool
2024-12-05 14:56:34 -05:00
Sieds Lykles
49c6dab74b Add pattern for div mod recombine with gcd (#8061)
Co-authored-by: chenyu <chenyu@fastmail.com>
2024-12-05 13:16:58 -05:00
geohotstan
707e9a9c8e add _one_hot_along_dim helper for Tensor.arange masking (#8039)
* feelsbadman

* feelsextrabadman

* make sure indices is on same device as self Tensor

* renamed to _one_hot_along_dim

* revert onnx change will do them in onnx only PRs

* address feedback

* add onnx changes here too

* make pad arg better

* revert pad arg

* maybe still keep dim

* simplify onehot onnx ops more

---------

Co-authored-by: chenyu <chenyu@fastmail.com>
2024-12-05 12:43:00 -05:00
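The helper itself is internal, but the idea can be sketched in plain Python (a hypothetical, list-based stand-in for the Tensor version): compare an arange over the class dimension against each index, producing a one-hot mask that can drive gather-style selection.

```python
def one_hot_along_dim(indices: list[int], num_classes: int) -> list[list[bool]]:
  # mask[i][j] is True exactly where j == indices[i]
  return [[j == idx for j in range(num_classes)] for idx in indices]
```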
chenyu
3c5983473a combine parentless reduce rule [pr] (#8059) 2024-12-05 11:28:35 -05:00
chenyu
87594a8153 simpler dtypes.max for int [pr] (#8058) 2024-12-05 10:31:41 -05:00
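The simpler closed form presumably follows directly from bit width and signedness; an illustrative version (not the actual dtypes code):

```python
def int_max(bits: int, signed: bool) -> int:
  # signed ints reserve one bit for the sign; unsigned use all bits
  return (1 << (bits - 1)) - 1 if signed else (1 << bits) - 1
```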
geohotstan
66b8242375 Simple onnx.py clean ups (#8054)
* start

* simplify ops

* why did this not work before

* will split buffer parse to separate pr

* flip the error order

* only this much for now

* to_python_const clean up

* minimize diff

* move tensor_methods into onnx.py

* improve some type signatures

---------

Co-authored-by: chenyu <chenyu@fastmail.com>
2024-12-05 10:31:26 -05:00
chenyu
5c6ed5dba6 lower test_conv_3x3_256_32_32_256_256 expectation (#8060)
failed https://github.com/tinygrad/tinygrad/actions/runs/12182799887/job/33982676812#step:9:210
2024-12-05 10:30:56 -05:00
Ahmed Harmouche
c6f5bb03fa YoloV8 WebGPU fixes (#8057)
* Bump up input size to 416, show if webgpu is not supported

* Minor fix in export_model
2024-12-05 16:23:45 +01:00
nimlgen
78c01a5c2b amd general _gpu_alloc (#8056)
* amd general _gpu_alloc

* hmm

* ops
2024-12-05 15:50:23 +03:00
nimlgen
8071600897 nv one _gpu_alloc (#8055) 2024-12-05 15:22:03 +03:00
Ahmed Harmouche
ff9a89f714 Proper dtypes for input/output of exported WebGPU model (#8053)
* Respect input/output dtypes in exported WebGPU model

* Add some comments about skipped dtypes
2024-12-05 10:38:05 +01:00
qazal
435a51e10c reduce folding simple tests [pr] (#8040)
* reduce folding simple tests [pr]

* test for view and realized src pattern

* realize / buffer behavior
2024-12-05 12:22:45 +08:00
George Hotz
20878be2af lower test_gemv_4096_16384 expectations 2024-12-05 12:08:26 +08:00
George Hotz
83aecbdc70 do gpuocelot copy manually [pr] (#8050) 2024-12-05 11:51:20 +08:00
George Hotz
4a208bfb28 bump download cache version 2024-12-05 11:42:34 +08:00
George Hotz
df18e7cc37 accept filename decorator [pr] (#8049)
* accept filename decorator [pr]

* add test for safe_load

* bring old tar tests back
2024-12-05 11:40:59 +08:00
Francis Lata
c3187087f7 QwQ-32B-Preview support (#7962)
* load weights with some debugging

* start running a prompt

* cleanup

* optionally permute layers and cleanup

* add validation for simple prompt

* small cleanup

* minor cleanup with formatting download links

* add a longer prompt

* add timing option

* some typings

* remove unused arg

* reset GlobalCounters

* minor cleanups
2024-12-04 21:46:37 -05:00
chenyu
b3220ca7b1 test cases of always True/False lt (#8048)
* test cases of always True/False lt

* one more
2024-12-04 20:38:40 -05:00
chenyu
8bb806888b hook_overflow -> safe_exp2 [pr] (#8047)
that's the only use case, so no need for indirection
2024-12-04 19:05:38 -05:00
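Specializing the generic overflow hook to its single caller might look like this (names and behavior are illustrative, not tinygrad's actual code): catch the float overflow at the call site instead of going through a wrapper.

```python
def safe_exp2(x: float) -> float:
  # 2.0 ** x overflows a float for large x; fall back to inf directly
  try:
    return 2.0 ** x
  except OverflowError:
    return float("inf")
```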
chenyu
99abdc6d39 minor push_swizzle_down_through_elementwise cleanup [pr] (#8046)
used a walrus, and if the x's are the same, prod(x) must be the same
2024-12-04 17:22:37 -05:00
chenyu
5933ec8dc3 use argfix in smax/smin and remove if [pr] (#8045) 2024-12-04 17:06:13 -05:00
chenyu
4e518334b8 minor get_grouped_dims cleanup [pr] (#8044) 2024-12-04 16:22:51 -05:00
geohotstan
5ce8090d42 simple onnx_ops cleanups (#8003)
* simple clean ups first

* more work

* kinda have adam

* ooo momentum worked nicely

* almost there

* wow.. is the onnx test wrong

* nicer optim stuff

* just skip that test

* small comment changes

* use naming convention from other parts of codebase

---------

Co-authored-by: chenyu <chenyu@fastmail.com>
2024-12-04 15:33:03 -05:00
Sieds Lykles
70db1bab5c Fold nested div with const (#8010)
* Rebase nested div and with const

* Update the ordering

* return None on vectors

Fixes cpu test

---------

Co-authored-by: chenyu <chenyu@fastmail.com>
2024-12-04 14:59:09 -05:00
chenyu
0693158d28 lower v_theoretical gemv on red (#8042)
tiny7 is still slower https://github.com/tinygrad/tinygrad/actions/runs/12166149038/job/33931736130#step:8:209
2024-12-04 13:59:40 -05:00
chenyu
5c2b1089b2 vectorized input in div_and_mod_folding returns None [pr] (#8041) 2024-12-04 13:36:41 -05:00
qazal
ff6def9ffb simple contiguous_while_contiguous prereqs [pr] (#8038)
* simple contiguous_while_contiguous prereqs [pr]

* early realize

* fine if it's folding a non-contig buffer
2024-12-04 23:00:28 +08:00
Ahmed Harmouche
c9e7701417 Fast YoloV8 on WebGPU (#8036)
* Fast yolov8 with downscaled input

* Faster + FPS meter

* Add loader while model is downloading/compiling

* Title touchup
2024-12-04 15:23:09 +01:00