Christopher Milan
fdb30cba96
DEV is a ContextVar ( #15505 )
2026-03-27 00:57:09 -04:00
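The change above makes DEV a ContextVar. tinygrad defines its own ContextVar helper, so the sketch below uses Python's stdlib `contextvars` only to illustrate the general pattern (names and defaults here are assumptions, not tinygrad's actual code):

```python
from contextvars import ContextVar, copy_context

# Hypothetical stand-in for a DEV debug knob as a context variable.
DEV = ContextVar("DEV", default=0)

def kernel_debug_level() -> int:
    # readers see the value of the current context, not a global
    return DEV.get()

DEV.set(2)
assert kernel_debug_level() == 2

# A copied context snapshots the value; later mutation in the
# original context does not leak into the copy.
ctx = copy_context()
DEV.set(3)
assert ctx.run(kernel_debug_level) == 2
```

The payoff over a plain global is that concurrent or nested scopes each see their own value without locking.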
Christopher Milan
67a50fb738
move where on load with casts ( #15492 )
2026-03-26 22:11:27 -04:00
Christopher Milan
bc180a963c
deprecate <dev>=1 in favor of DEV=<dev> ( #15467 )
* start work on target
* add test
* update actions to use DEV
* update docs
* update readmes
* tests need that too
* update example
* update tests (comments)
* fix that test
* ruff
* mypy
* oops
* remove getenvs
* don't add Target yet
* and the test
* lint
* and docs
* more stuff
* assert
* few more fixes
* test assert
2026-03-26 03:48:03 -04:00
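The deprecation above replaces per-device flags with a single DEV variable. A hedged sketch of what that looks like from the process side; the device name `CUDA` and the script names in comments are illustrative examples, not a specific tinygrad invocation:

```python
import os

# Deprecated style: the device name itself used as a flag,
#   CUDA=1 python3 run.py
# New style: one DEV variable naming the device,
#   DEV=CUDA python3 run.py
os.environ["DEV"] = "CUDA"  # what `DEV=CUDA python3 ...` sets up

def selected_device(default: str = "CPU") -> str:
    # hypothetical helper: read the backend name from DEV
    return os.environ.get("DEV", default)

assert selected_device() == "CUDA"
```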
chenyu
7c8f992894
move EXPAND dtype cast back to gradient.py ( #15481 )
only a concern for gradient, not mixin
2026-03-25 19:25:26 -04:00
chenyu
713b322e70
add weakint to promo_lattice ( #15463 )
sits between bool and the smallest int
2026-03-25 00:27:34 -04:00
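A sketch of what "sits between bool and the smallest int" means for type promotion. The names, ranks, and `promote` function below are illustrative assumptions, not tinygrad's real promo_lattice:

```python
# Illustrative promotion chain (NOT tinygrad's actual lattice):
# weakint slots above bool and below the smallest concrete int,
# so bool mixed with weakint gives weakint, while weakint mixed
# with any concrete int gives that int.
RANK = {"bool": 0, "weakint": 1, "int8": 2, "uint8": 2, "int16": 3, "int32": 4}

def promote(a: str, b: str) -> str:
    # the result of mixing two dtypes is the higher one in the chain
    return a if RANK[a] >= RANK[b] else b

assert promote("bool", "weakint") == "weakint"
assert promote("weakint", "int8") == "int8"
```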
George Hotz
fe2690399b
llm: support assistant prefill + refactor to TransformerConfig ( #15457 )
* llm: support assistant prefill
* refactor to ModelConfig
* TransformerConfig
* more
2026-03-25 10:50:48 +08:00
qazal
652bab8aad
viz: support nested track_rewrites ( #15454 )
* simple test
* stack active groups
2026-03-25 05:01:30 +09:00
chenyu
b7960841af
support shape broadcast in UOp.alu ( #15442 )
I think it can integrate tighter, but Tensor now also does ufix from UOp and implicit dtype upcast
2026-03-24 10:14:57 -04:00
George Hotz
a33ac869aa
llm server: temperature + test client ( #15444 )
* improvements to the llm server
* eval script
* eval llm
* better eval gets 58.71
* cleanups
* add temperature, but multinomial is absurdly slow
* claude is so smart
* lint
* remove slop
* no more stop
2026-03-24 21:07:15 +08:00
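The "add temperature" bullet above refers to temperature sampling. A minimal self-contained sketch of the idea (not tinygrad's multinomial implementation): logits are divided by T before softmax, so small T sharpens toward argmax and large T flattens the distribution.

```python
import math, random

def sample(logits, temperature=1.0, rng=random.random):
    # scale logits by 1/T, then softmax with max-subtraction for stability
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # inverse-CDF draw: walk the cumulative distribution
    r, acc = rng(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# near-zero temperature behaves like argmax
assert sample([10.0, 0.0, 0.0], temperature=0.01, rng=lambda: 0.999) == 0
```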
George Hotz
85dee83f5d
amd flash attention cleanups + emulator fixes ( #15431 )
* amd flash attention cleanups
* simpler
* params
* fix emulator bugs
* fix idiv bug
* remove that test
* more emu fixes
2026-03-24 10:10:46 +08:00
nimlgen
fa4cdb422e
memplan on linears ( #15422 )
* memplan
* test
* x
* arenas
* correct
* set any size
* ugh
* make hevc happy
* x
* x
* held
* rm old
* del
* x
* fu
* f
* cl
* cl
* ok
2026-03-23 19:50:16 +08:00
George Hotz
c62dea6881
ai slop flash attention (it works) ( #15401 )
* ai slop flash attention (it works)
* speed up, 2 TFLOPS + 7 GB/s
* simpler
* simpler
* optimize
* faster
* warp shuffle
* sqtt: link dispatch to exec (#15396 )
* sqtt packet linking infra
* python
* javascript
* ~doubly linked list
* ui works
* work
* exec can also highlight the pc, coloring work
* more work
* rm sqtt/model.py, doesn't need to be upstreamed
* viz: no context enters in cli, update llama profile (#15404 )
* removed unused named arg in rules [pr] (#15414 )
* viz: sqtt printer in viz/cli.py (#15411 )
* work
* sqtt timeline in CLI
* format all printers nicely
* s/Showed/Printed
* ansistrip
* sys.exit
* keep colors in list
* work from amd_copy_matmul
* has_more always gets returned
* linter
* don't print colors
* more colors
* wow this is so deep
* work
* minor details
* selected
* improve progress bar
* remove it
* 22, global_load_vaddr is so long
* remove *0 hack in sign, gradient materializes zeros for unconnected nodes (#15416 )
Amp-Thread-ID: https://ampcode.com/threads/T-019d1612-6322-706b-a94d-a812400a55cb
Co-authored-by: Amp <amp@ampcode.com>
* works
* cnt=20
* revert that
* uop slice tests
* simpler
---------
Co-authored-by: qazal <77887910+Qazalin@users.noreply.github.com>
Co-authored-by: chenyu <chenyu@fastmail.com>
Co-authored-by: gg <ggordbegli@gmail.com>
Co-authored-by: Amp <amp@ampcode.com>
2026-03-23 16:15:10 +08:00
nimlgen
9656d97d97
jit: captures linears, not execitems ( #15399 )
* jit: captures linears, not execitems
* x
* um
* tests
* mockcuda
2026-03-21 16:32:12 +08:00
Christopher Milan
1560b534a5
remove IMAGE=2 ( #15312 )
2026-03-20 06:26:52 -04:00
Christopher Milan
0c89340a1e
automatically emulate unsupported (tiny) floats [skip_process_replay] ( #15366 )
2026-03-20 02:31:44 -04:00
chenyu
da1700e16b
dtypes.index -> dtypes.weakint ( #15377 )
2026-03-20 01:08:46 -04:00
nimlgen
86eec01f97
limit gl*lc ( #15359 )
2026-03-19 12:38:55 +08:00
nimlgen
d4836ddbb0
canonicalize device from tuple ( #15348 )
* will it fix CI?
* test
* um
2026-03-18 20:35:52 +08:00
George Hotz
5524916e39
llama compute gradients explicitly + 243 GB of RAM on MP=8 ( #15343 )
* llama compute gradients explicitly
* apply grads
* fix multi issue
* multi BUFFER_VIEW support
* simpler
* skip the flaky test
2026-03-18 19:54:40 +08:00
nimlgen
f853371c83
fix compilers autoselect ( #15346 )
2026-03-18 18:19:53 +08:00
chenyu
94926d00d8
fix rand > uint32.max ( #15330 )
need to keep low and high as 1D tensors.
`PYTHONPATH=. LLAMA3_SIZE=405B python3 examples/mlperf/models/flat_llama.py` works now
2026-03-17 22:00:01 -04:00
qazal
00817cf65e
viz: all tests can run on the NULL device ( #15328 )
* remove that
* move to test_viz
* get_cfg
* do not use os.environ
* hm
* it's always on NULL
* import renderer
* no import *
2026-03-18 04:14:20 +09:00
chenyu
14eb8170e4
skip TestRunAsModule if libclang is loaded ( #15323 )
reverse of the TestAutogen skip rule; otherwise `NULL=1 python -m pytest test/null/test_autogen.py test/null/test_device.py` crashes for me
2026-03-17 06:02:53 -04:00
Christopher Milan
9047249a7c
m.where(x.pad_to(m.shape), Invalid) ranges shrink ( #15275 )
2026-03-14 07:26:36 -04:00
Christopher Milan
dabdc986df
shrink guarded ranges, try 2 ( #15272 )
2026-03-14 04:24:05 -04:00
Christopher Milan
7cf4b16c91
Revert "shrink guarded ranges" ( #15271 )
2026-03-14 03:44:38 -04:00
Christopher Milan
d9951e2f8e
shrink guarded ranges ( #15263 )
2026-03-14 03:38:48 -04:00
chenyu
90b7f4341d
failed two level divmod recombine case ( #15233 )
2026-03-12 04:04:36 -04:00
chenyu
842c978df3
remove staticmethod dtypes.max/min ( #15227 )
always use x.dtype.max/min
2026-03-11 23:11:24 -04:00
chenyu
fce87f19a8
better fold_add_divmod_recombine ( #15214 )
2026-03-10 23:24:22 -04:00
chenyu
df8deec949
test for nest_by_factor selection ( #15213 )
2026-03-10 22:41:31 -04:00
chenyu
be6b0bce1f
variations of (x%c)+(x//c)*c ( #15212 )
put those into one function
2026-03-10 22:41:14 -04:00
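The identity behind these divmod-recombine rewrites can be sanity-checked directly in Python, whose `//` and `%` share floor-division semantics; this is only a numeric check of the identity, not tinygrad's rewrite rule:

```python
# For any integers x and c != 0 (floor-division semantics):
#   (x % c) + (x // c) * c == x
def recombine(x: int, c: int) -> int:
    return (x % c) + (x // c) * c

# exhaustive check over a small range, including negatives
assert all(recombine(x, c) == x for x in range(-20, 21) for c in (1, 2, 3, 7, -3))
```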
chenyu
8389a8d7c5
remove_nested_mod can work with negative ( #15205 )
2026-03-10 03:10:08 -04:00
Christopher Milan
ffaafd391a
Invalid in Tensor ( #15154 )
2026-03-10 02:49:54 -04:00
chenyu
68c7c3ca84
divmod test_gcd_with_remainder ( #15204 )
test cases for gcd_with_remainder
2026-03-09 23:51:47 -04:00
chenyu
60215deb60
tiebreak in fold_divmod_congruence ( #15190 )
need to try both directions
2026-03-09 03:40:39 -04:00
chenyu
a8d8351e5a
match IDIV and MOD in nest_by_factor ( #15188 )
2026-03-09 00:50:38 -04:00
chenyu
83b80da8f3
even more divmod recombine ( #15163 )
2026-03-08 23:52:26 -04:00
qazal
25e82a9aca
viz: exclude redundant traceback from SDMA ( #15185 )
* viz: exclude redundant traceback from SDMA
* ctx
* cpu_profile
2026-03-09 05:12:14 +09:00
nimlgen
6ac99fd4c9
memplanner opt copy bufs ( #15110 )
* mtp
* x
* tests
* ss
* simp
* less slop
* x
* cleaner
* rm
* m
* c
* x
* f
2026-03-08 22:28:01 +03:00
Roelof van Dijk
4ed8bb7445
tie break for divmod ( #15169 )
2026-03-06 08:05:38 -05:00
George Hotz
6fd18ef875
rename CAT to VCAT ( #15167 )
2026-03-06 18:46:28 +08:00
chenyu
da61088ca4
more divmod recombine ( #15162 )
2026-03-05 12:53:22 -05:00
chenyu
167a1d56a6
improve divmod folding ( #15148 )
canonicalize to div rather than mod, which enables more simplification
2026-03-05 10:07:36 -05:00
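A hedged illustration of why canonicalizing toward div helps: mod is expressible in terms of div, so simplification rules written for `//` automatically cover `%`. Floor-division semantics are assumed, and this is not tinygrad's actual rule set:

```python
# x % c == x - (x // c) * c, so a simplifier that rewrites MOD in
# terms of IDIV only needs folding rules for the division.
def mod_via_div(x: int, c: int) -> int:
    return x - (x // c) * c

assert all(mod_via_div(x, c) == x % c for x in range(-50, 50) for c in (2, 3, 7, -4))
```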
qazal
5bf542469d
viz: python traceback for USER device ( #15160 )
* start
* ux
* unittests
2026-03-05 20:22:09 +09:00
George Hotz
8a82b26522
llm: print the prefill cache size ( #15146 )
* print the llm prefill cache size
* mock that too
2026-03-05 12:13:28 +08:00
George Hotz
72a9ed6e23
fix render depth bug + add warmup to serve + no realize default ( #15144 )
* fix render depth bug + add warmup to serve
* make realize not the default
2026-03-05 11:21:16 +08:00
chenyu
04da527a7a
minor div_and_mod_symbolic cleanups ( #15138 )
2026-03-04 19:05:44 -05:00
chenyu
4cce283790
relax test_tqdm_perf ( #15134 )
2026-03-04 12:58:47 -05:00
Christopher Milan
592f9bf6c6
set OPENPILOT_HACKS=1 to enable replace assign ( #15123 )
2026-03-04 05:26:04 -05:00