nimlgen
ff004d2114
remote: fix mmio ( #15347 )
2026-03-18 18:20:39 +08:00
nimlgen
f853371c83
fix compilers autoselect ( #15346 )
2026-03-18 18:19:53 +08:00
chenyu
761ce8c0d3
fix Invalid combine rules ( #15345 )
* fix Invalid combine rules
wrong conditions broke setitem into invalids
* fix
2026-03-18 04:58:02 -04:00
nimlgen
c0499ca3e8
nv: use mmio iface ( #15342 )
* nv: use mmio iface
* nv: use mmio iface
* revert
* f
2026-03-18 16:53:09 +08:00
Christopher Milan
499ad9a356
benchmark openpilot 0.11.0 ( #15341 )
2026-03-18 03:28:43 -04:00
George Hotz
6e196195d8
add test for flat llama ( #15327 )
* add test for flat llama
* simpler
* back to split w1/w3
* env
* still too much ram
* invalid
2026-03-18 15:16:33 +08:00
chenyu
fceb21c315
Tensor(uop) uses device from uop ( #15340 )
2026-03-18 02:56:06 -04:00
George Hotz
6109117af1
anonymous buffers are Invalid ( #15336 )
* anonymous buffers are Invalid
* unique_const
* work
* remove invalid writes
* test_anonymous_buffers_in_function
2026-03-18 14:52:56 +08:00
chenyu
e644e1cb6a
less Tensor(...).uop indirection in Tensor.__init__ ( #15339 )
2026-03-18 02:17:38 -04:00
nimlgen
0315faf938
remote bench ( #15331 )
2026-03-18 14:03:51 +08:00
nimlgen
d720d50e12
memory: traverse all valid ranges only ( #15338 )
* memory: traverse all valid ranges only
* x
2026-03-18 14:03:39 +08:00
chenyu
ac7a348d06
dtypes.as_const -> DType.const ( #15337 )
does not need to be a staticmethod
2026-03-18 00:48:41 -04:00
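The commit above is the classic staticmethod-to-method refactor: when a function's first argument is always an instance of the class it lives on, it can become a regular method. A minimal standalone sketch of that move, using a hypothetical `MiniDType` class (not tinygrad's actual `DType` implementation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MiniDType:
  name: str

  # before: a staticmethod that takes the dtype explicitly
  # @staticmethod
  # def as_const(val, dtype): return (dtype.name, val)

  # after: the dtype is already `self`, so the extra parameter disappears
  def const(self, val):
    return (self.name, val)

int32 = MiniDType("int32")
print(int32.const(7))  # ('int32', 7)
```

The call site shrinks from `dtypes.as_const(val, dtype)` to `dtype.const(val)`, which is the whole point of the rename.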
Christopher Milan
864d3917d5
add openpilot onnx parser test ( #15334 )
2026-03-18 00:12:02 -04:00
Christopher Milan
0222bfdf69
Revert "don't use intermediate dict in onnx parse" ( #15332 )
2026-03-17 23:46:30 -04:00
chenyu
94926d00d8
fix rand > uint32.max ( #15330 )
need to keep low and high as 1D tensors.
`PYTHONPATH=. LLAMA3_SIZE=405B python3 examples/mlperf/models/flat_llama.py` works now
2026-03-17 22:00:01 -04:00
wozeparrot
b45edeb965
fix: rand supports large tensors ( #15329 )
2026-03-17 15:45:41 -07:00
qazal
00817cf65e
viz: all tests can run on the NULL device ( #15328 )
* remove that
* move to test_viz
* get_cfg
* do not use os.environ
* hm
* it's always on NULL
* import renderer
* no import *
2026-03-18 04:14:20 +09:00
George Hotz
2605840ee2
flat llama ( #15324 )
* FlatTransformer
* works
* pass in buffer views
* print stuff
* print
* bugfixes
2026-03-17 19:39:55 +08:00
nimlgen
0a641ce17d
system: remote ( #15318 )
* system: remote
* listen
* print
* fix
* minor
2026-03-17 19:25:37 +08:00
Christopher Milan
69eefdca20
images with height=1 have less strict width rules ( #15325 )
2026-03-17 07:07:22 -04:00
chenyu
14eb8170e4
skip TestRunAsModule if libclang is loaded ( #15323 )
reverses the TestAutogen skip rule; otherwise `NULL=1 python -m pytest test/null/test_autogen.py test/null/test_device.py` crashes for me
2026-03-17 06:02:53 -04:00
qazal
e7c26b6319
viz: rename to Start Cycle for the sqtt graph ( #15320 )
2026-03-17 18:53:06 +09:00
nimlgen
e89a103984
remove dmaref ( #15321 )
* remove dmaref
* imports
2026-03-17 17:52:09 +08:00
chenyu
3090d4a6e0
disallow reshape from None shape [pr] ( #15322 )
test_multigpu_clip_score works without it now
2026-03-17 05:46:53 -04:00
nimlgen
a50fdb0528
nvcc macos ( #15308 )
* fix nvcc install macos
* um
* arm
* per
* tm
2026-03-17 17:25:33 +08:00
George Hotz
9d95321be3
set allow_implicit=False by default ( #15319 )
* set allow_implicit=False by default
* modernize beautiful mnist
2026-03-17 17:14:38 +08:00
nimlgen
e1c2d09720
system: rebar to remote devs ( #15316 )
2026-03-17 16:09:12 +08:00
chenyu
79d2e83853
tighter ALU/variable min==max -> CONST rule [pr] ( #15317 )
only check Ops that can be simplified through this rule. Halved the time for that rule in `PYTHONPATH=. TRACK_MATCH_STATS=2 python3 -O test/external/external_benchmark_schedule.py`
2026-03-17 03:44:24 -04:00
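The min==max -> CONST rule above is range-based constant folding: if interval analysis proves a value's minimum equals its maximum, the expression can only ever take one value and is replaced by a constant. A hedged generic sketch of the idea (plain interval arithmetic, not tinygrad's pattern matcher):

```python
def add_range(a, b):
  """Interval arithmetic for addition: ranges propagate through ALU ops."""
  (amin, amax), (bmin, bmax) = a, b
  return (amin + bmin, amax + bmax)

def fold(rng):
  """If min == max the value is fully determined: fold to that constant."""
  lo, hi = rng
  return lo if lo == hi else None

x = (0, 0)  # e.g. an index already proven to be 0
y = (3, 3)  # a size that is known exactly
z = (0, 7)  # a genuinely variable range

assert fold(add_range(x, y)) == 3     # (3, 3) collapses to the constant 3
assert fold(add_range(x, z)) is None  # (0, 7) stays symbolic
```

The tightening described in the commit is about where this check runs: only ops whose ranges can actually collapse need to be tested, which avoids evaluating min/max on everything else.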
George Hotz
584ec75aa2
precompile backward ( #15311 )
* add precompile backward support
* cleanups
* fix
* compact grad
* split v not split
* simpler
* no NOOPT
2026-03-17 15:28:40 +08:00
chenyu
6b6d1814ca
update no_vectorized_index [pr] ( #15313 )
combine no_vectorized_index and no_vectorized_index_broadcast
2026-03-17 03:05:23 -04:00
b1tg
856a839efc
llm: fix qwen3 moe topk renormalization ( #15201 )
2026-03-17 12:57:33 +08:00
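For context on the fix above: mixture-of-experts routers typically renormalize the selected top-k expert weights so they sum to 1 after selection; skipping that step skews the weighted sum of expert outputs. A generic sketch of top-k renormalization (plain Python, not the qwen3 model code):

```python
import math

def topk_renormalize(logits, k):
  """Softmax over all experts, keep top-k, renormalize kept weights to sum to 1."""
  m = max(logits)
  exps = [math.exp(l - m) for l in logits]   # shift by max for stability
  total = sum(exps)
  probs = [e / total for e in exps]
  top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
  kept = sum(probs[i] for i in top)
  return {i: probs[i] / kept for i in top}   # weights over selected experts

weights = topk_renormalize([2.0, 1.0, 0.5, -1.0], k=2)
assert abs(sum(weights.values()) - 1.0) < 1e-9
```

Without the final division by `kept`, the selected weights sum to less than 1 and the expert mixture is systematically underscaled.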
chenyu
1283b57b4e
update fix_store_after_hazard ( #15309 )
the actual gate is just not CONTIGUOUS; also no need to check against the full backward_slice
2026-03-16 23:55:59 -04:00
Christopher Milan
575b40b93a
determine image shapes before index devectorization ( #15304 )
2026-03-16 23:16:33 -04:00
George Hotz
3ff03be413
call always has tuple ( #15297 )
* call always has tuple
* fix pre-commit and simplify
* update
* fix
* move that assert
* tuple
* fix multi
* cleanups
* fix merge
2026-03-17 10:58:46 +08:00
chenyu
1b8b151195
simpler Tensor.assign ( #15302 )
2026-03-16 22:37:25 -04:00
wozeparrot
674c760974
embedded bwd vocab shard ( #15001 )
* fix: remove more multi from call
* feat: embedding bwd vocab sharding
* clean: unused import
* clean: don't actually need this pattern
2026-03-16 19:37:16 -07:00
Christopher Milan
62bfd48d95
smarter padding in image_conv2d ( #15289 )
2026-03-16 22:17:48 -04:00
chenyu
e1fab4d2a9
UOp.store is always void [pr] ( #15301 )
2026-03-16 21:58:05 -04:00
chenyu
02afb45f29
remove UOp.assign [pr] ( #15300 )
* remove UOp.assign [pr]
it's all store and after; UOp is immutable
* fix test
2026-03-16 21:45:41 -04:00
qazal
33bd33e783
sqtt: add CDNA ops enum, show in viz ( #15140 )
2026-03-17 09:38:42 +09:00
chenyu
3e2b7803e6
view assign replaces at buffer identity ( #15298 )
matches what functions capture
2026-03-16 19:58:38 -04:00
qazal
346596cdce
viz: nanoseconds time axis in sqtt ( #15299 )
* ui
* secondaryTick is optional
* shader markers data
* instSt infra
* path forward
* details
2026-03-17 07:20:18 +09:00
nimlgen
1bc4cb254c
signed tinygpu as default ( #15296 )
* signed tinygpu as default
* f
* no sip
2026-03-16 19:29:41 +08:00
Christopher Milan
0de519c7c2
[pr] fewer simplify calls in image_fixup ( #15283 )
2026-03-16 06:57:52 -04:00
nimlgen
27e29127b5
system: remote prereqs ( #15290 )
* x
* new format for apl
* this
* typing
* rpc
* tuple
* linter+new tinygpu
2026-03-16 18:45:41 +08:00
chenyu
837b06c609
style cleanups in allocations.py [pr] ( #15295 )
2026-03-16 05:45:24 -04:00
George Hotz
476276f4b4
support grads on tuples ( #15287 )
* support grads on tuples
* simpler
* grad_fxn works
* cleanups
* unused
2026-03-16 17:39:34 +08:00
chenyu
20799df10b
remove Ops.ASSIGN [pr] ( #15294 )
goodbye
2026-03-16 05:22:21 -04:00
chenyu
b3378e7022
UOp.assign is store+after [pr] ( #15292 )
2026-03-16 04:51:50 -04:00
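The commit above (and the `remove Ops.ASSIGN` follow-up) replaces a dedicated assign op with a composition: since graph nodes are immutable, assignment is modeled not as mutation but as a store node plus an ordering dependency ("after") that sequences later reads behind the write. A rough, hypothetical sketch of that decomposition (toy node type, not tinygrad's actual UOp API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
  op: str
  srcs: tuple = ()

def store(buf, val):
  # write `val` into `buf`; the store itself produces no value
  return Node("STORE", (buf, val))

def after(buf, dep):
  # a view of `buf` ordered after `dep` has happened
  return Node("AFTER", (buf, dep))

def assign(buf, val):
  # assign = store + after: later uses read `buf` sequenced after the store
  return after(buf, store(buf, val))

buf = Node("BUFFER")
out = assign(buf, Node("CONST"))
assert out.op == "AFTER" and out.srcs[1].op == "STORE"
```

Expressing assignment this way lets the rest of the compiler reason about only two primitives (a write and an ordering edge) instead of a special-cased mutating op.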
George Hotz
2e1c81c23f
allow_implicit to disable implicit params ( #15291 )
* allow_implicit to disable implicit params
* get both Tensor and UOp
* no implicits in llm
2026-03-16 16:40:14 +08:00