nimlgen
86eec01f97
limit gl*lc ( #15359 )
2026-03-19 12:38:55 +08:00
nimlgen
d4836ddbb0
canonicalize device from tuple ( #15348 )
* will it fix CI?
* test
* um
2026-03-18 20:35:52 +08:00
George Hotz
5524916e39
llama compute gradients explicitly + 243 GB of RAM on MP=8 ( #15343 )
* llama compute gradients explicitly
* apply grads
* fix multi issue
* multi BUFFER_VIEW support
* simpler
* skip the flaky test
2026-03-18 19:54:40 +08:00
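A minimal sketch of what "compute gradients explicitly" could look like, assuming tinygrad's Tensor.gradient API (which returns gradients as values instead of populating .grad through backward); the shapes and names are illustrative, not the PR's code:

```python
from tinygrad import Tensor

# hedged sketch: gradients are requested explicitly as return values, rather
# than relying on loss.backward() to mutate .grad on every parameter
w = Tensor.randn(4, 4, requires_grad=True)
x = Tensor.randn(1, 4)
loss = x.matmul(w).relu().sum()
(dw,) = loss.gradient(w)  # assumed API: returns the gradient, mutates nothing
print(dw.shape)
```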
nimlgen
f853371c83
fix compiler autoselect ( #15346 )
2026-03-18 18:19:53 +08:00
chenyu
94926d00d8
fix rand > uint32.max ( #15330 )
need to keep low and high as 1D tensors (see the sketch after this entry).
`PYTHONPATH=. LLAMA3_SIZE=405B python3 examples/mlperf/models/flat_llama.py` works now
2026-03-17 22:00:01 -04:00
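A hedged illustration (NumPy here, not tinygrad's code) of why ranges above uint32.max need 64 bits of randomness: two independent 32-bit draws are combined into one 64-bit value before reducing into [low, high):

```python
import numpy as np

def randint64(shape, low, high, rng=np.random.default_rng(0)):
  # two independent 32-bit draws packed into one uniform 64-bit value
  hi = rng.integers(0, 2**32, size=shape, dtype=np.uint64)
  lo = rng.integers(0, 2**32, size=shape, dtype=np.uint64)
  bits = (hi << np.uint64(32)) | lo
  # reduce into [low, high); modulo bias is ignored in this sketch
  return (bits % np.uint64(high - low)).astype(np.int64) + low

print(randint64((3,), 0, 2**40))  # values a single uint32 draw could not reach
```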
qazal
00817cf65e
viz: all tests can run on the NULL device ( #15328 )
* remove that
* move to test_viz
* get_cfg
* do not use os.environ
* hm
* it's always on NULL
* import renderer
* no import *
2026-03-18 04:14:20 +09:00
chenyu
14eb8170e4
skip TestRunAsModule if libclang is loaded ( #15323 )
reverse the TestAutogen skip rule (sketched below); otherwise `NULL=1 python -m pytest test/null/test_autogen.py test/null/test_device.py` crashes for me
2026-03-17 06:02:53 -04:00
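The reversed skip rule in stdlib terms, as a sketch only; the actual module-name check in the PR may differ:

```python
import sys, unittest

# hedged sketch: skip when libclang is already loaded in this process
# (the inverse of TestAutogen's rule, which skips when it is not)
@unittest.skipIf(any("clang" in m for m in sys.modules), "libclang already loaded")
class TestRunAsModule(unittest.TestCase):
  def test_noop(self): self.assertTrue(True)
```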
Christopher Milan
9047249a7c
m.where(x.pad_to(m.shape), Invalid) ranges shrink ( #15275 )
2026-03-14 07:26:36 -04:00
Christopher Milan
dabdc986df
shrink guarded ranges, try 2 ( #15272 )
2026-03-14 04:24:05 -04:00
Christopher Milan
7cf4b16c91
Revert "shrink guarded ranges" ( #15271 )
2026-03-14 03:44:38 -04:00
Christopher Milan
d9951e2f8e
shrink guarded ranges ( #15263 )
2026-03-14 03:38:48 -04:00
chenyu
90b7f4341d
failing two-level divmod recombine case ( #15233 )
2026-03-12 04:04:36 -04:00
chenyu
842c978df3
remove staticmethod dtypes.max/min ( #15227 )
always use x.dtype.max/min (migration sketched below)
2026-03-11 23:11:24 -04:00
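The migration in one line; `dtypes.max(...)` stands for the removed static helper:

```python
from tinygrad import Tensor, dtypes

t = Tensor([1, 2, 3], dtype=dtypes.int32)
# before (removed): dtypes.max(t.dtype), dtypes.min(t.dtype)
# after: the dtype itself carries its bounds
print(t.dtype.max, t.dtype.min)
```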
chenyu
fce87f19a8
better fold_add_divmod_recombine ( #15214 )
2026-03-10 23:24:22 -04:00
chenyu
df8deec949
test for nest_by_factor selection ( #15213 )
2026-03-10 22:41:31 -04:00
chenyu
be6b0bce1f
variations of (x%c)+(x//c)*c ( #15212 )
put those into one function; the underlying identity is checked in the sketch below
2026-03-10 22:41:14 -04:00
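The identity the recombine rules rest on, checked numerically under Python's floor-division semantics; this is the math, not the PR's pattern-match code:

```python
# for any int x and c > 0: (x % c) + (x // c) * c == x, so a symbolic
# simplifier can recombine a matching mod/div pair back into plain x
for x in range(-20, 20):
  for c in (1, 2, 3, 7):
    assert (x % c) + (x // c) * c == x
    assert (x // c) * c == x - (x % c)  # div-only variation
    assert x % c == x - (x // c) * c    # mod-only variation
```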
chenyu
8389a8d7c5
remove_nested_mod can work with negative ( #15205 )
2026-03-10 03:10:08 -04:00
Christopher Milan
ffaafd391a
Invalid in Tensor ( #15154 )
2026-03-10 02:49:54 -04:00
chenyu
68c7c3ca84
divmod test_gcd_with_remainder ( #15204 )
test cases for gcd_with_remainder
2026-03-09 23:51:47 -04:00
chenyu
60215deb60
tiebreak in fold_divmod_congruence ( #15190 )
need to try both directions
2026-03-09 03:40:39 -04:00
chenyu
a8d8351e5a
match IDIV and MOD in nest_by_factor ( #15188 )
2026-03-09 00:50:38 -04:00
chenyu
83b80da8f3
even more divmod recombine ( #15163 )
2026-03-08 23:52:26 -04:00
qazal
25e82a9aca
viz: exclude redundant traceback from SDMA ( #15185 )
* viz: exclude redundant traceback from SDMA
* ctx
* cpu_profile
2026-03-09 05:12:14 +09:00
nimlgen
6ac99fd4c9
memplanner opt copy bufs ( #15110 )
* mtp
* x
* tests
* ss
* simp
* less slop
* x
* cleaner
* rm
* m
* c
* x
* f
2026-03-08 22:28:01 +03:00
Roelof van Dijk
4ed8bb7445
tie break for divmod ( #15169 )
2026-03-06 08:05:38 -05:00
George Hotz
6fd18ef875
rename CAT to VCAT ( #15167 )
2026-03-06 18:46:28 +08:00
chenyu
da61088ca4
more divmod recombine ( #15162 )
2026-03-05 12:53:22 -05:00
chenyu
167a1d56a6
improve divmod folding ( #15148 )
canonicalize to div rather than mod, which enables more simplification (see the sketch after this entry)
2026-03-05 10:07:36 -05:00
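A sketch of why the div form folds better, assuming nothing about the actual rewrite rules: (6*x) % 3 written through div exposes an exact division that collapses the whole expression to 0.

```python
# hedged sketch: canonicalize (6*x) % 3 into (6*x) - ((6*x) // 3) * 3, then a
# div-folding rule simplifies (6*x) // 3 -> 2*x and the mod collapses to 0
def fold(x):
  div = (6 * x) // 3        # folds exactly to 2 * x
  return (6 * x) - div * 3  # 6*x - 6*x == 0

assert all(fold(x) == (6 * x) % 3 == 0 for x in range(-10, 10))
```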
qazal
5bf542469d
viz: python traceback for USER device ( #15160 )
* start
* ux
* unittests
2026-03-05 20:22:09 +09:00
George Hotz
8a82b26522
llm: print the prefill cache size ( #15146 )
* print the llm prefill cache size
* mock that too
2026-03-05 12:13:28 +08:00
George Hotz
72a9ed6e23
fix render depth bug + add warmup to serve + no realize default ( #15144 )
* fix render depth bug + add warmup to serve
* make realize not the default
2026-03-05 11:21:16 +08:00
chenyu
04da527a7a
minor div_and_mod_symbolic cleanups ( #15138 )
2026-03-04 19:05:44 -05:00
chenyu
4cce283790
relax test_tqdm_perf ( #15134 )
2026-03-04 12:58:47 -05:00
Christopher Milan
592f9bf6c6
set OPENPILOT_HACKS=1 to enable replace assign ( #15123 )
2026-03-04 05:26:04 -05:00
wozeparrot
759c7fc81c
failing test for allreduce memory usage ( #15106 )
2026-03-03 23:38:38 -08:00
wozeparrot
529318259c
fix: fix null tests to actually use null device ( #15104 )
2026-03-03 02:05:47 -08:00
wozeparrot
92c16810ac
feat: per device mem_used ( #15100 )
2026-03-03 01:31:28 -08:00
George Hotz
d483e4153a
buffer view is like buffer ( #15082 )
* buffer view is like buffer
* fix
* swap_reshape_shrink
* contiguous on gguf, fix overlap
* revert that
* _device_supports_view
* this
* fix that test
* 0 buffers
* that test was wrong
* this
* check correct size
* contig BUFFER_VIEW
* this
* fix tests
* buffer view tests
* om
* fix torch
* no MOCKGPU
* skip
2026-03-03 09:52:33 +08:00
chenyu
14d1c5fdfd
assign fusion tests on detach and contiguous_backward ( #15092 )
2026-03-02 15:21:51 -05:00
qazal
f7aeff6061
viz: cli.py cleanups, do not require PYTHONPATH ( #15085 )
* cleanup the print
* sys.exit
* equal check
* cleanup unpacker
* cli doesn't need PYTHONPATH
* no semicolons
* %s/PYTHONPATH=. //g
2026-03-02 19:24:38 +09:00
chenyu
fe0fa8333b
Revert "improve Tensor.sort indices ( #15070 )" ( #15072 )
This reverts commit e3003631f2.
2026-02-28 14:40:30 -05:00
chenyu
e3003631f2
improve Tensor.sort indices ( #15070 )
* improve Tensor.sort indices
instead of an N^2 match at the end, start from an arange and go through the same N(logN)^2 path (see the sketch after this entry)
* contiguous
2026-02-28 14:16:16 -05:00
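The idea behind the change (reverted in the entry above), sketched in plain Python rather than tinygrad ops: pair the values with an arange of their positions up front, so one sort yields both outputs and no quadratic matching pass is needed.

```python
vals = [30, 10, 20, 10]
pairs = sorted(zip(vals, range(len(vals))))  # (value, original_index) pairs
sorted_vals = [v for v, _ in pairs]
indices = [i for _, i in pairs]
print(sorted_vals, indices)  # [10, 10, 20, 30] [1, 3, 0, 2]
```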
chenyu
d345f7f5dc
remove _pending_assigns ( #15040 )
2026-02-26 22:38:10 -05:00
George Hotz
e3fa9896b7
start function and add walk rewrite ( #14992 )
* start function and add walk rewrite
* work
* add function on feed_forward
* llm progress
* stuff
* none of that
2026-02-25 13:56:27 +08:00
George Hotz
b643fca51e
clean up complete_create_schedule_with_vars ( #14980 )
* clean up complete_create_schedule_with_vars
* transform_to_call
* update viz tests
2026-02-24 16:12:36 +08:00
ttomsa
0366474089
Bool cast to cmpne ( #14544 )
* test
* rm in llvmir
* rm in ptx and nir
* hmmmm
* rm in decompositions
* skip tests
* add test
* just this
* rm comment
---------
Co-authored-by: chenyu <chenyu@fastmail.com>
2026-02-23 10:31:36 -05:00
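The equivalence the title names, in one line; per the bullets, backends then need no dedicated bool-cast lowering:

```python
# casting a number to bool is the comparison x != 0 (a CMPNE against zero)
def cast_bool(x): return x != 0

assert cast_bool(0) is False and cast_bool(3) is True and cast_bool(-1) is True
```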
George Hotz
b824490e3f
allocate generates a call ( #14958 )
* allocate generates a call
* symbolic works too
* DEFINE_VAR is param
* replace param later
* apply buffers
* name
* upd
* this was a bug...
2026-02-23 15:59:20 +08:00
chenyu
4424757b9a
update test_sharded_memory ( #14956 )
cleaned up and moved to test/null
2026-02-22 16:56:08 -05:00
qazal
c5029fa460
jit case with Tensor.empty input, realized means allocated ( #14930 )
* simple failing jit test case with Tensor.empty
* this used to exist in ops.py...
* Revert "removed if self.buffer.is_allocated() in realized (#14836)"
This reverts commit 72cf603805.
2026-02-21 16:33:55 +09:00
George Hotz
df7774661a
remove late numbering of UOps ( #14923 )
* remove late numbering of UOps
* stupid fix
* dead code
2026-02-21 09:18:48 +08:00