chenyu
92dfef8060
Tensor(uop) does not need explicit device ( #15361 )
2026-03-19 00:44:33 -04:00
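A minimal sketch of what this and #15340 (below) enable, assuming tinygrad's public `Tensor` API and its `.uop` attribute:

```python
from tinygrad import Tensor

# the underlying UOp already carries a device, so Tensor(uop) can
# infer it instead of requiring an explicit device= argument
t = Tensor.ones(4)
t2 = Tensor(t.uop)            # device inferred from t.uop
assert t2.device == t.device
```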
nimlgen
f32c2e43a7
memory: use pfree ( #15360 )
2026-03-19 12:39:23 +08:00
nimlgen
86eec01f97
limit gl*lc ( #15359 )
2026-03-19 12:38:55 +08:00
chenyu
b39816e998
failed test case for Tensor(np, "bf16") ( #15358 )
2026-03-18 23:40:14 -04:00
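A hedged sketch of the construction this test presumably exercises; since numpy has no native bfloat16, the cast has to happen on the tinygrad side:

```python
import numpy as np
from tinygrad import Tensor, dtypes

# the shape of the failing case: build a bf16 tensor from float32 numpy data
t = Tensor(np.arange(4, dtype=np.float32), dtype=dtypes.bfloat16)
t.realize()
```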
chenyu
e407ee410c
cosmetic Tensor._do_reduction cleanups ( #15357 )
2026-03-18 22:27:50 -04:00
chenyu
6aebf95dac
move neg and invert to mixin ( #15356 )
2026-03-18 22:03:41 -04:00
wozeparrot
f6687d1ffc
feat: sd seed0 update ( #15354 )
2026-03-18 18:42:00 -07:00
wozeparrot
c45a606750
feat: no if in rand ( #15333 )
2026-03-18 15:09:51 -07:00
qazal
23e0431848
viz: switch sqtt sidebar to a simple asm list ( #15350 )
* work
* something like this
* Revert "something like this"
This reverts commit 6c45098d2b.
* less
* path includes
* scroll only jumps up and down
* it's only pc and line now
2026-03-19 01:40:25 +09:00
qazal
709fc52d7b
viz: fix auto zoom range in sqtt, include endpgm packet ( #15349 )
* viz: fix automatic zoom range in sqtt packets
* it's x+width
* include s_endpgm
* endpgm also doesn't have exec
2026-03-18 22:52:32 +09:00
nimlgen
d4836ddbb0
canonicalize device from tuple ( #15348 )
* will it fix ci?
* test
* um
2026-03-18 20:35:52 +08:00
George Hotz
5524916e39
llama compute gradients explicitly + 243 GB of RAM on MP=8 ( #15343 )
* llama compute gradients explicitly
* apply grads
* fix multi issue
* multi BUFFER_VIEW support
* simpler
* skip the flaky test
2026-03-18 19:54:40 +08:00
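The explicit-gradient style this refers to, sketched with `Tensor.gradient` in place of `loss.backward()`; the shapes are illustrative, not llama's:

```python
from tinygrad import Tensor

w = Tensor.randn(8, 8, requires_grad=True)
x = Tensor.randn(2, 8)
loss = (x @ w).relu().sum()
(gw,) = loss.gradient(w)          # gradient computed explicitly, no .backward()
w.assign(w.detach() - 1e-2 * gw)  # apply the update directly
```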
nimlgen
ff004d2114
remote: fix mmio ( #15347 )
2026-03-18 18:20:39 +08:00
nimlgen
f853371c83
fix compilers autoselect ( #15346 )
2026-03-18 18:19:53 +08:00
chenyu
761ce8c0d3
fix Invalid combine rules ( #15345 )
* fix Invalid combine rules
wrong conditions broke setitem into invalids
* fix
2026-03-18 04:58:02 -04:00
nimlgen
c0499ca3e8
nv: use mmio iface ( #15342 )
* nv: use mmio iface
* nv: use mmio iface
* revert
* f
2026-03-18 16:53:09 +08:00
Christopher Milan
499ad9a356
benchmark openpilot 0.11.0 ( #15341 )
2026-03-18 03:28:43 -04:00
George Hotz
6e196195d8
add test for flat llama ( #15327 )
* add test for flat llama
* simpler
* back to split w1/w3
* env
* still too much ram
* invalid
2026-03-18 15:16:33 +08:00
chenyu
fceb21c315
Tensor(uop) uses device from uop ( #15340 )
2026-03-18 02:56:06 -04:00
George Hotz
6109117af1
anonymous buffers are Invalid ( #15336 )
* anonymous buffers are Invalid
* unique_const
* work
* remove invalid writes
* test_anonymous_buffers_in_function
2026-03-18 14:52:56 +08:00
chenyu
e644e1cb6a
less Tensor(...).uop indirection in Tensor.__init__ ( #15339 )
2026-03-18 02:17:38 -04:00
nimlgen
0315faf938
remote bench ( #15331 )
2026-03-18 14:03:51 +08:00
nimlgen
d720d50e12
memory: traverse all valid ranges only ( #15338 )
* memory: traverse all valid ranges only
* x
2026-03-18 14:03:39 +08:00
chenyu
ac7a348d06
dtypes.as_const -> DType.const ( #15337 )
does not need to be a staticmethod
2026-03-18 00:48:41 -04:00
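The rename in call form, a sketch assuming the value argument is otherwise unchanged:

```python
from tinygrad import dtypes

# was: dtypes.as_const(2.7, dtypes.int32)
# now a method on the DType instance itself
v = dtypes.int32.const(2.7)   # coerced to the dtype: 2
```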
Christopher Milan
864d3917d5
add openpilot onnx parser test ( #15334 )
2026-03-18 00:12:02 -04:00
Christopher Milan
0222bfdf69
Revert "don't use intermediate dict in onnx parse" ( #15332 )
2026-03-17 23:46:30 -04:00
chenyu
94926d00d8
fix rand > uint32.max ( #15330 )
need to keep low and high as 1D tensors.
`PYTHONPATH=. LLAMA3_SIZE=405B python3 examples/mlperf/models/flat_llama.py` works now
2026-03-17 22:00:01 -04:00
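A sketch of the regime this fixes, assuming the threshold is the uint32 counter range (the allocation is enormous, so this is illustrative rather than a smoke test):

```python
from tinygrad import Tensor

# more elements than a uint32 counter can index; before the fix the
# counter wrapped past 2**32 and values repeated
t = Tensor.rand(2**32 + 1)
t.realize()
```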
wozeparrot
b45edeb965
fix: rand supports large tensors ( #15329 )
2026-03-17 15:45:41 -07:00
qazal
00817cf65e
viz: all tests can run on the NULL device ( #15328 )
* remove that
* move to test_viz
* get_cfg
* do not use os.environ
* hm
* it's always on NULL
* import renderer
* no import *
2026-03-18 04:14:20 +09:00
George Hotz
2605840ee2
flat llama ( #15324 )
* FlatTransformer
* works
* pass in buffer views
* print stuff
* print
* bugfixes
2026-03-17 19:39:55 +08:00
nimlgen
0a641ce17d
system: remote ( #15318 )
* system: remote
* listen
* print
* fix
* minor
2026-03-17 19:25:37 +08:00
Christopher Milan
69eefdca20
images with height=1 have less strict width rules ( #15325 )
2026-03-17 07:07:22 -04:00
chenyu
14eb8170e4
skip TestRunAsModule if libclang is loaded ( #15323 )
reverse of the TestAutogen skip rule, otherwise `NULL=1 python -m pytest test/null/test_autogen.py test/null/test_device.py` crashes for me
2026-03-17 06:02:53 -04:00
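A sketch of the skip condition as described, assuming the libclang Python bindings show up in `sys.modules` as `clang.cindex`:

```python
import sys, unittest

# skip when libclang is already loaded into this process,
# the reverse of the condition TestAutogen uses
@unittest.skipIf("clang.cindex" in sys.modules, "libclang already loaded")
class TestRunAsModule(unittest.TestCase):
  def test_run_as_module(self): ...
```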
qazal
e7c26b6319
viz: rename to Start Cycle for the sqtt graph ( #15320 )
2026-03-17 18:53:06 +09:00
nimlgen
e89a103984
remove dmaref ( #15321 )
* remove dmaref
* imports
2026-03-17 17:52:09 +08:00
chenyu
3090d4a6e0
disallow reshape from None shape [pr] ( #15322 )
test_multigpu_clip_score works without it now
2026-03-17 05:46:53 -04:00
nimlgen
a50fdb0528
nvcc macos ( #15308 )
* fix nvcc install macos
* um
* arm
* per
* tm
2026-03-17 17:25:33 +08:00
George Hotz
9d95321be3
set allow_implicit=False by default ( #15319 )
* set allow_implicit=False by default
* modernize beautiful mnist
2026-03-17 17:14:38 +08:00
nimlgen
e1c2d09720
system: rebar to remote devs ( #15316 )
2026-03-17 16:09:12 +08:00
chenyu
79d2e83853
tighter ALU/variable min==max -> CONST rule [pr] ( #15317 )
only check Ops that can be simplified through this rule; this halved the time for that rule in `PYTHONPATH=. TRACK_MATCH_STATS=2 python3 -O test/external/external_benchmark_schedule.py`
2026-03-17 03:44:24 -04:00
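The rule itself, sketched against UOp's range analysis; `vmin`, `vmax`, and `const_like` are existing UOp attributes, while the pattern-matcher wiring around them is elided:

```python
# any op whose computed value range collapses to a single point
# can be replaced with a constant
def fold_if_pointlike(u):
  if u.vmin == u.vmax: return u.const_like(u.vmin)
  return None   # no match; let other rules run
```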
George Hotz
584ec75aa2
precompile backward ( #15311 )
* add precompile backward support
* cleanups
* fix
* compact grad
* split v not split
* simpler
* no NOOPT
2026-03-17 15:28:40 +08:00
chenyu
6b6d1814ca
update no_vectorized_index [pr] ( #15313 )
combine no_vectorized_index and no_vectorized_index_broadcast
2026-03-17 03:05:23 -04:00
b1tg
856a839efc
llm: fix qwen3 moe topk renormalization ( #15201 )
2026-03-17 12:57:33 +08:00
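The renormalization in question, in generic form; `gate_logits` and `k` are illustrative names:

```python
from tinygrad import Tensor

# after selecting the top-k expert probabilities per token, rescale
# them to sum to 1 before mixing the expert outputs
def topk_renorm(gate_logits: Tensor, k: int) -> tuple[Tensor, Tensor]:
  weights, idx = gate_logits.softmax(-1).topk(k)
  return weights / weights.sum(-1, keepdim=True), idx
```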
chenyu
1283b57b4e
update fix_store_after_hazard ( #15309 )
the actual gate is just `not CONTIGUOUS`; also no need to check against the full backward_slice
2026-03-16 23:55:59 -04:00
Christopher Milan
575b40b93a
determine image shapes before index devectorization ( #15304 )
2026-03-16 23:16:33 -04:00
George Hotz
3ff03be413
call always has tuple ( #15297 )
* call always has tuple
* fix pre-commit and simplify
* update
* fix
* move that assert
* tuple
* fix multi
* cleanups
* fix merge
2026-03-17 10:58:46 +08:00
chenyu
1b8b151195
simpler Tensor.assign ( #15302 )
2026-03-16 22:37:25 -04:00
wozeparrot
674c760974
embedded bwd vocab shard ( #15001 )
* fix: remove more multi from call
* feat: embedding bwd vocab sharding
* clean: unused import
* clean: don't actually need this pattern
2026-03-16 19:37:16 -07:00
Christopher Milan
62bfd48d95
smarter padding in image_conv2d ( #15289 )
2026-03-16 22:17:48 -04:00
chenyu
e1fab4d2a9
UOp.store is always void [pr] ( #15301 )
2026-03-16 21:58:05 -04:00