Commit Graph

8186 Commits

George Hotz
68053d0510 dsp stuff / sniff ioctls from snpe (#9490)
* sniff ioctls from snpe

* dump input buffers

* snpe logs from dsp

* NHWC support

* knum 3

* this run?

* revert those

---------

Co-authored-by: Comma Device <device@comma.ai>
2025-03-20 10:38:23 +08:00
qazal
2223b93338 add UPat.or_casted [pr] (#9513) 2025-03-20 10:08:32 +08:00
qazal
1839e8c9b3 place masks in INDEX for TestGatedStoreRewrite [pr] (#9512) 2025-03-20 09:46:53 +08:00
b1tg
bd731a8624 AMDCompiler refactor (no_comgr prereq) (#9497)
* add amdgpu_disassemble to helpers

* refactor hip compiler

---------

Co-authored-by: b1tg <b1tg@users.noreply.github.com>
2025-03-20 09:44:07 +08:00
geohotstan
8c0d0a122c Add return_indices to max_pool (#9506)
* wow argmax is so good

* 1 less line

* clean up and better variable names

* is this torch thing right...?

* add more tests

* slap a TODO on it

* clean ups

* prettier looking code and fix ceil mode test

* add return types and some docs

* ok that was a bad example since indices == value, just no example
2025-03-19 15:25:37 -04:00
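A minimal usage sketch for the new kwarg from #9506, assuming it mirrors torch's `max_pool2d(..., return_indices=True)`; the exact layout of the returned indices is an assumption here:

```python
from tinygrad import Tensor

# hypothetical usage: pooled values plus the flat indices of the winning
# elements in each window (per the commit, computed internally via argmax)
x = Tensor.arange(16).reshape(1, 1, 4, 4).float()
values, indices = x.max_pool2d(kernel_size=2, return_indices=True)
print(values.numpy())   # the max of each 2x2 window
print(indices.numpy())  # where each max came from
```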
chenyu
189f62d44f add rounding to tqdm unit scale (#9507)
fixed `AssertionError: ' 1.00/10.0  1000it/s]' != ' 1.00/10.0  1.00kit/s]'`
2025-03-19 12:08:46 -04:00
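The fix in #9507 is about keeping the displayed rate and the chosen SI prefix consistent. A minimal standalone sketch of the idea, not tinygrad's actual tqdm helper (names and precision are illustrative):

```python
def fmt_rate(rate: float) -> str:
  rate = round(rate)  # the added rounding: 999.97 becomes 1000 and crosses into the "k" bucket
  for prefix in ["", "k", "M", "G", "T"]:
    if rate < 1000: return f"{rate:.2f}{prefix}it/s"
    rate /= 1000
  return f"{rate:.2f}Pit/s"

print(fmt_rate(999.97))  # "1.00kit/s"; without the round() it would print "999.97it/s"
```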
nimlgen
a5c971ff3a am: prereqs for rdna4 1/n (#9495)
* am: ip_ver rename for acc

* am: refactor this

* fix version

* ugh
2025-03-19 17:14:57 +08:00
Francis Lam
1e5d9ad8f7 extra/gemm/max_matmul: start of custom kernels for GEMM (#6926)
* extra/gemm/max_matmul: start of custom kernels for GEMM

* add an unoptimized FP16/FP16 MMA example

* add slow 3-stage fp16 acc example

* add correct 3-stage pipeline with unswizzled/flat smem input (slow)

* add acc fp16 example with 3 stages and swizzle (no bank conflicts)

* add max version of NV fp16_fp16_fp16

* fix up comments and removed unused code in max variations

* add start of no_xor example

* fix to account for UOps to Ops
2025-03-19 15:04:57 +08:00
George Hotz
865f23dd7b olmoe memory usage cleanups 2025-03-19 12:28:18 +08:00
b1tg
2c87a22cf2 fix prg size calculation when there are adjacent mapped ranges (#9498)
Co-authored-by: b1tg <b1tg@users.noreply.github.com>
2025-03-19 11:55:03 +08:00
b1tg
1d71436e6a use libllvm19 in ci (#9494)
Co-authored-by: b1tg <b1tg@users.noreply.github.com>
2025-03-19 11:53:32 +08:00
b1tg
a95b489a55 nanoGPT train works with tiny torch backend (#9283)
* train_shakespeare_char.py works

* move aten.where.self_out to tiny_backend_out

* fix memory leak

* corealize in the backward_hook

* Update backend.py

---------

Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
2025-03-19 11:51:02 +08:00
chenyu
f8976dd2eb enable more webgpu tests (#9502)
OSX has a larger limit on the number of buffers, and it supports fp16 now
2025-03-18 23:03:54 -04:00
qazal
ae688e4103 simple failing test for scheduling parallel reduce [pr] (#9501)
* simple failing test for scheduling parallel reduce [pr]

* atol
2025-03-19 10:52:13 +08:00
leopf
e4dad99145 nn.state docs cleanup (#8332)
* doc cleanup

* extension cleanup

* manual definition

* bring back accept_filename for gguf_load

---------

Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
Co-authored-by: chenyu <chenyu@fastmail.com>
2025-03-18 17:16:40 -04:00
chenyu
1ea4876dfa olmoe touchups (#9499)
GlobalCounters.reset() and only validate if temperature is 0
2025-03-18 15:25:45 -04:00
geohotstan
f7506c6c25 JIT OLMoE (#9396)
* jit the forward

* might timeout, idk just send it

* this is dumb

* naive bitonic lol

* idk if this is correct, but that squeeze before is definitely not

* vectorized bitonic sort, but still slow

* yay 1 layer is correct

* alright its pretty good

* good enough

* rerun CI

* nit improve comment
2025-03-18 14:49:02 -04:00
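For context on "jit the forward" in #9396, a hedged sketch of how a forward pass is typically wrapped with TinyJit; the model and shapes here are placeholders, not the OLMoE code:

```python
from tinygrad import Tensor, TinyJit

W = Tensor.rand(16, 16)

@TinyJit
def forward(x: Tensor) -> Tensor:
  # the first couple of calls capture the kernels; later calls replay them
  return (x @ W).relu().realize()

for _ in range(3):
  out = forward(Tensor.rand(1, 16))
```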
Ignacio Sica
5c56cac0a0 MI300 mfma support (#9417)
* add f16/f32 mfma support for MI300

- add 16x16 mfma shape support for f16 with f32 acc
- add ops_python mfma emulation
- add arch to AMDRenderer

* minor cleanup

* minor cleanup

* add mfma emulation task to ci

* add back todo

* hotfix: comment

* add tc=3 job to ci
2025-03-18 14:33:30 -03:00
hooved
5500887eed improve reproducibility of WebGPU CI puppeteer test (#9496)
* try to make CI test fail with slow JS import

* prevent race between model import and reference

* revert artificial delay in JS module import
2025-03-18 09:27:38 -04:00
qazal
cde4fd3be3 do not view_left assign + elementwise sources always have a shape [pr] (#9491) 2025-03-18 17:42:51 +08:00
George Hotz
117b7a16ef VALIDATE_WITH_CPU [pr] (#9488)
* VALIDATE_WITH_CPU [pr]

* fix test
2025-03-18 15:15:04 +08:00
qazal
935cd01f56 simple failing test for graph_rewrite children [pr] (#9489)
* simple failing test for graph_rewrite children [pr]

* lint

* update too
2025-03-18 13:07:21 +08:00
George Hotz
d20494e6d7 move buffer logic to Buffer [pr] (#9487)
* move buffer logic to Buffer [pr]

* pass shape into as_typed_buffer

* pass shape into as_typed_buffer

* work

* cleaner

* fix tests
2025-03-18 11:21:21 +08:00
qazal
3be228182f unbind Tensor variables last [pr] (#9486)
* reorder do_realize [pr]

* move merge_views

* unbind all variables at the end [pr]
2025-03-18 09:52:01 +08:00
qazal
b44f9c409a reorder do_realize [pr] (#9485)
* reorder do_realize [pr]

* move merge_views
2025-03-18 09:30:10 +08:00
nimlgen
a82c9332d3 am: rename soc21 to soc (#9482) 2025-03-18 08:54:26 +08:00
qazal
b100fc0b20 split the rule that uses context in scheduler simplifier [pr] (#9484)
* split the rule that uses context in scheduler simplifier [pr]

* add
2025-03-18 08:12:26 +08:00
Anish Umale
5e58f4b65b Tiny backend test_ops fix part 3 (#9483)
* extract straightforward things from https://github.com/tinygrad/tinygrad/pull/9302

* pass dtype and device for ones_like
2025-03-17 18:01:51 -04:00
TJ
9fcef4d009 add masked_select to tensor.py (#9468)
* add masked_select to tensor.py

* fix tests

---------

Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
2025-03-17 16:05:36 -04:00
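A usage sketch for the new method from #9468, assuming it follows torch's masked_select semantics (a flattened 1-D result containing the elements where the mask is True):

```python
from tinygrad import Tensor

x = Tensor([[1, 2], [3, 4]])
mask = Tensor([[True, False], [False, True]])
# expected under the torch-like semantics assumed above: [1, 4]
print(x.masked_select(mask).numpy())
```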
chenyu
4f8eac59ea failed test case for threefry (#9469)
* failed test case for threefry

not sure if it's always like this, but incrementing before _threefry_random_bits is incorrect. the counts should start at the number of random numbers generated so far.

used jax to generate 20 + 20 + 10 random numbers; the first 20 + 20 match and the last 10 are different. just moving the increment after _threefry_random_bits matches the numbers, but the jit test fails

* workaround

* why is this different?

* revert those

* and that
2025-03-17 14:52:10 -04:00
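A hedged sketch of the property the failing test probes, not the test itself: with a counter-based generator like threefry, drawing numbers in chunks should reproduce the same stream as drawing them all at once, provided each chunk's counter starts at the number of values already generated.

```python
import numpy as np
from tinygrad import Tensor

Tensor.manual_seed(1337)
chunks = np.concatenate([Tensor.rand(20).numpy(), Tensor.rand(20).numpy(), Tensor.rand(10).numpy()])

Tensor.manual_seed(1337)
whole = Tensor.rand(50).numpy()

# per the commit message, the last chunk diverged before the increment was moved
print(np.allclose(chunks, whole))
```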
b1tg
6dd8e5ba7c refactor llvm compiler (#9403)
* refactor LLVMCompiler

* new interface

* automatic configuration

---------

Co-authored-by: b1tg <b1tg@users.noreply.github.com>
Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
2025-03-18 00:13:49 +08:00
geohotstan
53d6f1e1bb Add bitonic cat sort (#9422)
* poc

* repeated values fail, sigh

* is this being timed out?

* fix up down names

* bitonic v2, does this run?

* bitonic v3, faster

* bitonic v3.1, faster

* bitonic v3.1.1, same speed unlucky

* support dim and indices

* bitonic v3.2, simpler code, TODO repeated indices

* bruv gimme green for once cmon

* cat (stack) implementation, slow but maybe one day when cat is fast meow

* revert to v3.2

* bitonic v4, who let the cats out edition

* clean up variable names

* figured out repeated indices :D

* ruff check --fix

* use sort for topk

* add Tensor.sort everywhere

* fix docs and add some types

* slightly better variable names

* am I doing torch inplace correctly?

* delegate sort to values_stable

* add a contig, faster first sort

* maybe don't test_inplace

---------

Co-authored-by: chenyu <chenyu@fastmail.com>
2025-03-17 12:01:23 -04:00
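Since #9422 adds Tensor.sort (and rebuilds topk on top of it), a short usage sketch, assuming the signatures mirror torch's sort/topk:

```python
from tinygrad import Tensor

t = Tensor([[3.0, 1.0, 2.0], [9.0, 7.0, 8.0]])
values, indices = t.sort(dim=-1, descending=False)  # bitonic sort under the hood
print(values.numpy(), indices.numpy())

top_vals, top_idx = t.topk(2, dim=-1)                # topk now delegates to sort
```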
chenyu
f53be010d7 lower bert learning rate (#9481)
slightly better. first sub 3hr run https://wandb.ai/chenyuxyz/MLPerf-BERT/runs/0or96ink/overview
2025-03-17 10:49:56 -04:00
qazal
e03c0aacf2 more explicit DONT_PUSH_VIEWS [pr] (#9479)
* more explicit DONT_PUSH_VIEWS [pr]

* update tests to not handcode ast

* lint

* test_recursive_swizzle and test_simple_store_reshape
2025-03-17 20:43:21 +08:00
qazal
3b00a778ba fix view_left for unsafe pad ops [pr] (#9478) 2025-03-17 19:02:02 +08:00
qazal
813f713edc merge_views for buffer ops + create valids last (#9472)
* merge_views for buffer ops + create valids last

* view.arg

* pass
2025-03-17 17:15:44 +08:00
qazal
bd1f71c1e2 simple failing test for extra ops in VALID [pr] (#9474)
* simple failing test for extra valids [pr]

* this has DEBUG=4
2025-03-17 17:02:40 +08:00
qazal
e26caf4c3a hotfix: skip test_mean_half_precision_underflow on amd ci (#9476)
The global size is very large (781250 gidx) and the emulated version takes more than 1
minute to execute the kernel.
2025-03-17 16:47:48 +08:00
George Hotz
824c5f41ac dsp work try 3 (#9475)
* dsp work try 3

* padding
2025-03-17 16:42:12 +08:00
George Hotz
242daa4f9a ptrcat (#9473) 2025-03-17 16:06:37 +08:00
George Hotz
52ae9af4dd Fast DSP for MobileNetV2 (try 2) (#9467)
* Fast DSP for MobileNetV2 (try 2)

* enable fast path on uchar

* fix tests
2025-03-17 15:10:36 +08:00
George Hotz
15ee742afa add get_children_map to uop (#9470)
* add get_children_map to uop

* update_children

* fix new children
2025-03-17 14:36:13 +08:00
chenyu
d2cfbd8a4d bert lower learning rate and total steps (#9466)
closer to the other submission with BS=240. converged with 10% fewer epochs
2025-03-16 17:21:20 -04:00
George Hotz
09e7708b49 minimum change for rdna4 [pr] (#9455) 2025-03-16 13:39:24 +08:00
qazal
be2161652b reorder into swizzler + ast_fixup [pr] (#9456) 2025-03-15 09:00:14 +01:00
George Hotz
cb7a7f69c7 quantization preprocessor from DSP, should be universal (#9437)
* quantization preprocessor from DSP, should be universal

* touchups

* fix tests
2025-03-15 07:49:37 +08:00
chenyu
ca5064a5b6 remove Kernel.float4_axis [pr] (#9448) 2025-03-14 17:54:32 -04:00
chenyu
0e591baf43 redo simple_matmul change (#9450)
numpy does not support bfloat16
2025-03-14 17:53:52 -04:00
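For the simple_matmul revert in #9450, a hedged sketch of the constraint: numpy can generate float32/float16 inputs directly but has no native bfloat16, so bf16 inputs need another path (the shapes and names below are illustrative):

```python
import numpy as np
from tinygrad import Tensor, dtypes

N = 1024
rng = np.random.default_rng(0)
a = Tensor(rng.random((N, N), dtype=np.float32))  # fine: numpy supports float32
b = Tensor(rng.random((N, N), dtype=np.float32))
c = (a @ b).realize()

# no np.bfloat16 exists, so bf16 inputs are generated on-device instead
a_bf16 = Tensor.rand(N, N, dtype=dtypes.bfloat16)
```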
chenyu
b0f63d3c04 Revert "simple_matmul.py uses np to generate random (#9438)" (#9449)
This reverts commit 14018050c1.
2025-03-14 17:14:22 -04:00
Ignacio Sica
14018050c1 simple_matmul.py uses np to generate random (#9438)
* np generates randoms

* hotfix: use generator for int dtype

* float32 as default dtype for float generator

* use np.float32 instead of string

* add dtype= to integers generator

* change import _to_np_dtype source
2025-03-14 17:36:50 -03:00