qazal
9defbc7d54
add symbolic_simple to the scheduler [pr] (#8419)
2024-12-26 20:05:08 +08:00
Sieds Lykles
6bb54eb532
Add variations for some ADD patterns (#8393)
...
* Add variations for some ADD patterns
* Add test and remove redundant rule
2024-12-25 19:49:39 -05:00
nimlgen
a562ee2c6e
BumpAllocator rename start -> base (#8415)
2024-12-25 23:12:55 +03:00
nimlgen
9ed064710a
hcq remove old profiler lines (#8414)
2024-12-25 23:12:28 +03:00
chenyu
4712847766
make self_tokenize output more like a python file (#8411)
...
use a comment for the file name and join with newlines instead of null bytes when exporting to a file
2024-12-25 14:16:30 -05:00
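The output format that commit describes can be sketched as follows (`concat_sources` is a hypothetical helper for illustration, not tinygrad's actual self_tokenize code):

```python
# Hedged sketch of the described format: each file starts with a comment
# naming it, and files are joined with newlines rather than a null byte,
# so the combined output reads like one python file.
def concat_sources(files: dict[str, str]) -> str:
    parts = [f"# {name}\n{src}" for name, src in files.items()]
    return "\n".join(parts)

print(concat_sources({"a.py": "x = 1", "b.py": "y = 2"}))
```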
George Hotz
c1cd94baf6
precompute excluded nodes in VIZ [pr] (#8410)
2024-12-25 11:02:04 -08:00
George Hotz
8aa68e8d5c
hotfix: render full int in int const in VIZ
2024-12-25 10:41:19 -08:00
nimlgen
393d39da58
do not cache subgraphs in toposort [pr] (#8403)
...
* is it correct?
* style
* set
2024-12-25 20:48:30 +03:00
qazal
313bdfa43f
Add View lt support back [pr] (#8407)
...
* Revert "remove unused View.t and lt [pr] (#8374)"
This reverts commit 8fdcb60461.
* green test_masked_const_elementwise
2024-12-26 01:09:59 +08:00
qazal
4cbe5919d6
tensor uops symbolic folding spec [pr] (#8406)
2024-12-26 00:26:41 +08:00
qazal
6422936b62
fix pre-commit ruff error [pr] (#8405)
2024-12-26 00:12:57 +08:00
chenyu
3f46425f1e
typos found by gemini [pr] (#8400)
...
not very effective... maybe due to tokenizer
2024-12-24 22:32:25 -05:00
chenyu
a35eef8d58
optionally output to file in self_tokenize.py (#8399)
...
can paste the whole tinygrad in gemini this way
2024-12-24 21:09:26 -05:00
chenyu
de3705168e
update idiv doc and test cases (#8398)
...
test more cases where the numerator or denominator is negative, with and without a remainder
2024-12-24 17:03:18 -05:00
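The sign/remainder cases that commit tests matter because python's `//` floors toward negative infinity while C-style integer division truncates toward zero, and the two only disagree when the operands' signs differ and there is a remainder. A minimal illustration (`trunc_div` is an illustrative helper, not tinygrad's actual idiv implementation):

```python
# Truncating division as in C: quotient by magnitude, sign from operands.
def trunc_div(a: int, b: int) -> int:
    q = abs(a) // abs(b)
    return q if (a < 0) == (b < 0) else -q

# floor and trunc agree unless signs differ AND there is a remainder
for a, b in [(7, 2), (-7, 2), (7, -2), (-7, -2), (-6, 2)]:
    print(f"{a:3d} / {b:2d}: floor={a // b:3d} trunc={trunc_div(a, b):3d}")
```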
nimlgen
a647f3dd2c
move mockgpu to tests [pr] (#8396)
...
* move mockgpu to tests
* linter
* i'm so sorry
* sorry, python
* path
2024-12-24 23:48:02 +03:00
chenyu
2c93f27652
remove explicit np.array and np.int32 in test_div_int [pr] (#8395)
...
vals now load as int32 by default in test_ops
2024-12-24 13:09:30 -05:00
qazal
5c2fe04bb6
a few more UPat.var -> UPat.cvar in the scheduler [pr] (#8391)
...
* a few more UPat.var -> UPat.cvar in the scheduler [pr]
* keep it assert
* minimal diff
2024-12-24 20:36:24 +08:00
qazal
3273972f44
delete is_unrealized_const, it's just CONST [pr] (#8390)
2024-12-24 16:46:12 +08:00
qazal
3a556a7e8b
fully local tensor const representation: CONST(VIEW(DEVICE)) [pr] (#8389)
2024-12-24 16:15:56 +08:00
George Hotz
b589dec06e
remove some VIEWs we don't need [pr] (#8353)
...
* remove some VIEWs we don't need [pr]
* unmasked view and movement op on BUFFER are a part of the spec
---------
Co-authored-by: qazal <77887910+Qazalin@users.noreply.github.com>
Co-authored-by: qazal <qazal.software@gmail.com>
2024-12-24 14:57:42 +08:00
chenyu
0d6fe6200c
test case view from an empty view (#8388)
...
currently it behaves differently depending on the first view somehow
2024-12-23 17:40:49 -05:00
chenyu
c587b3b08c
test case view the padded area of a view (#8386)
...
these cases view the padded area of the first view
2024-12-23 16:47:31 -05:00
Francis Lata
239d2a7214
explicitly check value for not None (#8382)
2024-12-23 11:12:39 -05:00
geohotstan
78cb47dfc5
docs and tests clean ups (#8383)
2024-12-23 11:12:13 -05:00
chenyu
a556adf028
add test for Tensor silu and swish (#8381)
...
was only tested in onnx, added to test_ops for completeness
2024-12-22 21:08:59 -05:00
chenyu
572ebd9f27
unneeded TYPE_CHECKING in uopgraph.py [pr] (#8379)
...
can just import. 3 TYPE_CHECKING uses left: 2 are about python 3.12 and format strings, and those are fine; the last one is ops <-> device and shapetracker
2024-12-22 12:58:08 -05:00
qazal
8d4439282b
swizzle assign [pr] (#8378)
2024-12-22 18:17:50 +02:00
qazal
e6f4c24619
try 2 on VIEW(BUFFER, <op>) scheduling + spec [pr] (#8377)
...
* second iteration on VIEW(BUFFER, <op>) scheduling + spec [pr]
* image
* notes
2024-12-22 16:30:35 +02:00
chenyu
b7397c1322
more typing cleanups [pr] (#8376)
...
List, Tuple, DefaultDict
2024-12-22 05:21:03 -05:00
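The List/Tuple/DefaultDict cleanups above move toward the modern (python 3.9+) typing style, where builtin generics replace the `typing` aliases. A minimal sketch (illustrative function, not code from the PR):

```python
# list[...] and tuple[...] replace typing.List/typing.Tuple, and
# collections.defaultdict can be subscripted directly instead of
# using typing.DefaultDict.
from collections import defaultdict

def group(pairs: list[tuple[str, int]]) -> defaultdict[str, list[int]]:
    out: defaultdict[str, list[int]] = defaultdict(list)
    for k, v in pairs:
        out[k].append(v)
    return out

print(group([("a", 1), ("a", 2), ("b", 3)]))
```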
chenyu
afcd70af97
remove untriggered/untested code [pr] (#8375)
...
ran with `coverage` on tests. not sure if we still want max_var_const, so just commented it out
2024-12-22 04:07:53 -05:00
chenyu
8fdcb60461
remove unused View.t and lt [pr] (#8374)
2024-12-22 02:26:54 -05:00
chenyu
7ea633f94f
remove from __future__ import annotations from runtimes [pr] (#8373)
...
it's not needed if we move the Device before Program and Allocator, which need Device.
not updating hcq because it has a lot more stuff, and CLDevice requires CLDevice
2024-12-21 23:46:07 -05:00
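Why reordering definitions makes the `__future__` import unnecessary: without postponed evaluation of annotations, an annotation is evaluated when the `def` statement runs, so any class used in one must already be defined. A hedged sketch (`Device`/`Allocator` here are stand-ins, not tinygrad's real classes):

```python
# Defining Device first means the annotation below resolves at class-body
# evaluation time, with no need for `from __future__ import annotations`.
class Device:
    pass

class Allocator:
    def __init__(self, dev: Device):  # Device already exists: no NameError
        self.dev = dev

print(type(Allocator(Device()).dev).__name__)
```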
chenyu
e934f987c6
minor cstyle cleanup [pr] (#8371)
...
* minor cstyle cleanup [pr]
* *
2024-12-21 22:17:45 -05:00
qazal
514a6740e4
Revert "CONST(VIEW(DEVICE)) (#8365)" (#8372)
...
This reverts commit 83284985f0.
2024-12-22 04:44:34 +02:00
qazal
83284985f0
CONST(VIEW(DEVICE)) (#8365)
2024-12-22 04:18:35 +02:00
qazal
88bc51385c
scheduler: don't trade complexity for speed (#8370)
...
* scheduler: don't trade complexity for speed
* don't need is_scheduled
* make those tests real world
* graph_rewrite dedup
2024-12-22 03:30:51 +02:00
qazal
991b91d4d6
fix string repr of arg in viz and print [pr] (#8369)
2024-12-21 23:44:10 +02:00
ignaciosica
ba0c844a83
special tol when f16 and bf16 are tc input dtypes (#8183)
2024-12-21 11:32:26 -05:00
geohotstan
3f83748661
update onnx and onnx_ops to 3.10+ typing (#8360)
...
* fixed mypy and updated to modern typing
* selective ruff check changes (all except E501)
* some more clean ups
* fix comment
* small nit
2024-12-21 11:17:47 -05:00
qazal
72aa38aa3b
BIND in tensor_uop_spec + cleanups [pr] (#8363)
...
* Ops.BIND pattern in tensor_uop_spec + cleanups [pr]
* use metaops there
2024-12-21 21:26:47 +08:00
qazal
4e8812db37
asserts to prepare for Tensor BIND proposal [pr] (#8362)
2024-12-21 20:35:08 +08:00
chenyu
1ce9851ba6
import and type cleanups [pr] (#8359)
...
Dict and DefaultDict and some imports
2024-12-20 21:52:02 -05:00
chenyu
18dca3c3d7
isolate train_gpt2 slow kernels [pr] (#8358)
...
also fixed run_linearizer with var_vals=None
2024-12-20 17:59:01 -05:00
George Hotz
9f62c80f68
hotfix: this is a loan
2024-12-20 14:47:04 -08:00
qazal
2649e87546
delete the fake buffer from const (#8355)
...
* delete the fake buffer from const
* fix test_sink_childless_const_alt
* it should be CONST(VIEW(DEVICE))
2024-12-21 04:20:28 +08:00
George Hotz
b7499764f5
hotfix: have viz hide the stupid -1 BUFFERs
2024-12-20 10:47:44 -08:00
chenyu
cd79a904c5
add back explicit dict[DType, str] in ptx [pr] (#8352)
2024-12-20 13:19:48 -05:00
George Hotz
074315ec08
hotfix: simpler test_mnist_model
2024-12-20 10:18:17 -08:00
chenyu
20eebbc61a
minor PTX cleanups [pr] (#8351)
2024-12-20 12:52:53 -05:00
qazal
59f4b8da95
Tensor uop spec (#8311)
...
* Tensor uop spec
* minor
* feedback
* restrict ShapeTracker of VIEW(BUFFER) to contiguous
* in image base mutates, how do we rewrite the view?
* cast post realize
* now ucache errors
* how strict can this be?
* put constraints on EMPTY
* merge
* save lines
* import import
* overloaded assign target
* more strict
* fine don't overload it
* more
* actually, this is better
* and it even exists
* this way it works for BUFFER
* Revert "this way it works for BUFFER"
This reverts commit 71c15f6b14.
* make it like linearize.py
* assign take 4
* minor
* all int, space and that's already base
* target
---------
Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
2024-12-20 23:47:40 +08:00