George Hotz
ec00cefa5b
llm is the only app (#15779)
...
* tinygrad/llm is the only app
* upd pyproject
* claude refs
* scoping
* min diff
2026-04-17 10:44:48 +08:00
chenyu
f0c12a2004
another form of assign to itself (#15770)
2026-04-16 15:17:19 -04:00
chenyu
d147e2a549
update test_nested_after_contiguous_store (#15763)
...
add kernel counts and some TODOs
2026-04-16 09:59:26 -04:00
George Hotz
f57380cbc2
simplify GatedDeltaNetBlock using two state tensors (#15704)
...
* test double after
* simpler ssm
* no double test
2026-04-16 21:14:00 +08:00
George Hotz
d1cce7a476
put the ranges on store instead of after (#15759)
...
* put the ranges on store instead of after
* better assert
* fix stuff
* comment out slow rules i don't understand
* simpler rule
* closer
* return false for store
* fix loop
* only a few schedule failures remain
* remove stores to self
* all tests pass locally
* remove junk
* regression test and fix
* better test, bump broken torch count
* bugfix with regression test
* new fusion is better
2026-04-16 19:06:40 +08:00
George Hotz
d24466c844
CALL with return value is FUNCTION (#15758)
...
* CALL with return value is FUNCTION (GPT try)
* cleanups
2026-04-16 13:25:07 +08:00
chenyu
10c262ced8
update tests that use UOp.size (#15753)
2026-04-15 21:58:27 -04:00
George Hotz
1ae6528bb6
move schedule into schedule (#15736)
...
* move schedule into schedule
* callify to root
* sched docs
2026-04-15 11:03:25 +08:00
wozeparrot
2b8d303f75
allreduce in precast dtype (#15689)
2026-04-13 20:24:12 -07:00
George Hotz
7610bdc59e
block multistore, it's not supported (#15708)
2026-04-13 20:57:59 +08:00
George Hotz
4c1fb18a09
Revert "Revert "Tests for GatedDeltaNetBlock + fix multi after assign issue (…" (#15703)
...
This reverts commit 0cec42db71.
2026-04-13 19:09:38 +08:00
George Hotz
0cec42db71
Revert "Tests for GatedDeltaNetBlock + fix multi after assign issue (#15700)" (#15702)
...
This reverts commit 6f5d756282.
2026-04-13 19:06:44 +08:00
George Hotz
6f5d756282
Tests for GatedDeltaNetBlock + fix multi after assign issue (#15700)
...
* broken after/assign test
* test for GatedDeltaNet
* better comments
* fix issue 1 with multi kernel
* fix 2
* fix
* linter
* public api + cleanup
2026-04-13 18:43:23 +08:00
chenyu
e706f408cb
suppress test warnings from numpy (#15688)
2026-04-11 22:33:20 -04:00
Graham Robbins
4ca844e96b
add Q1_0 gguf type (#15683)
...
* add Q1_0
* better description
* fix trailing whitespace
---------
Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
2026-04-11 18:17:24 +08:00
wozeparrot
457508d5a0
llama: save more 2 (#15681)
2026-04-11 01:03:36 -07:00
George Hotz
b5a9465b13
llm: add support for moonlight (deepseek MLA) (#15466)
...
* add gguf Q5_0
* it works
* rebase
* simpler test
* class
* less diff
* dicts
* normal names
* simplify
* this
* simpler
* work
* work
2026-04-11 10:32:48 +08:00
George Hotz
9092f2a8c0
llm: add shared_expert and rope_dim support from qwen35 (#15673)
...
* llm: add shared_expert and rope_dim support from qwen35
* refactor into FFNBlock and TransformerBlock
* norms where they belong
2026-04-10 19:18:27 +08:00
George Hotz
35e3983840
Add Q5_0, Q5_1, and bfloat16 GGUF types (#15644)
2026-04-08 17:16:19 +08:00
Christopher Milan
acf239e4d2
specify renderer in DEV, <dev>_<ren>=1 is deprecated (#15551)
2026-03-31 18:35:14 -04:00
nimlgen
5181c8e23a
llm: fix nan in kvcache (#15552)
2026-04-01 00:38:45 +03:00
b1tg
a63392a565
llm: pairwise ranking topk for MoE expert selection (#15499)
2026-03-31 12:46:39 +08:00
nimlgen
0d6fc0f571
jit: graphing in uops (#15489)
...
* jit: graphing as rewrite rule
* f
* +metal,cuda
* x
* cl
* x
* x
* simpler
* f
* m
* x
* revert?
* revert2
* back
* back
* t
* x
* m
* x
* c
* x
* l
* x
* comment
* smaller
* rv
* x
* x
2026-03-27 19:09:02 +03:00
Christopher Milan
bc180a963c
deprecate <dev>=1 in favor of DEV=<dev> (#15467)
...
* start work on target
* add test
* update actions to use DEV
* update docs
* update readmes
* tests need that too
* update example
* update tests (comments)
* fix that test
* ruff
* mypy
* oops
* remove getenvs
* don't add Target yet
* and the test
* lint
* and docs
* more stuff
* assert
* few more fixes
* test assert
2026-03-26 03:48:03 -04:00
George Hotz
fe2690399b
llm: support assistant prefill + refactor to TransformerConfig (#15457)
...
* llm: support assistant prefill
* refactor to ModelConfig
* TransformerConfig
* more
2026-03-25 10:50:48 +08:00
George Hotz
a33ac869aa
llm server: temperature + test client (#15444)
...
* improvements to the llm server
* eval script
* eval llm
* better eval gets 58.71
* cleanups
* add temperature, but multinomial is absurdly slow
* claude is so smart
* lint
* remove slop
* no more stop
2026-03-24 21:07:15 +08:00
chenyu
c491345766
pass device into Tensor._frompy (#15385)
...
* pass device into Tensor._frompy
with this, canonicalize_device is the only usage of Device in tensor.py
* export_model.py
2026-03-20 05:09:01 -04:00
George Hotz
3b75d8a7a2
fix double after bug in rangeify (#15381)
2026-03-20 14:53:46 +08:00
Christopher Milan
0c89340a1e
automatically emulate unsupported (tiny) floats [skip_process_replay] (#15366)
2026-03-20 02:31:44 -04:00
chenyu
bf33c5f796
remove gradient materialize_grads (#15367)
...
effectively default to True
and removed *0 hack in Tensor.copysign. now dy/dx=0 if y does not depend on x
remove
2026-03-19 23:36:03 -04:00
chenyu
fceb21c315
Tensor(uop) uses device from uop (#15340)
2026-03-18 02:56:06 -04:00
George Hotz
9d95321be3
set allow_implicit=False by default (#15319)
...
* set allow_implicit=False by default
* modernize beautiful mnist
2026-03-17 17:14:38 +08:00
George Hotz
584ec75aa2
precompile backward (#15311)
...
* add precompile backward support
* cleanups
* fix
* compact grad
* split v not split
* simpler
* no NOOPT
2026-03-17 15:28:40 +08:00
b1tg
856a839efc
llm: fix qwen3 moe topk renormalization (#15201)
2026-03-17 12:57:33 +08:00
George Hotz
3ff03be413
call always has tuple (#15297)
...
* call always has tuple
* fix pre-commit and simplify
* update
* fix
* move that assert
* tuple
* fix multi
* cleanups
* fix merge
2026-03-17 10:58:46 +08:00
wozeparrot
674c760974
embedded bwd vocab shard (#15001)
...
* fix: remove more multi from call
* feat: embedding bwd vocab sharding
* clean: unused import
* clean: don't actually need this pattern
2026-03-16 19:37:16 -07:00
chenyu
02afb45f29
remove UOp.assign [pr] (#15300)
...
* remove UOp.assign [pr]
it's all store and after, UOp is immutable
* fix test
2026-03-16 21:45:41 -04:00
chenyu
3e2b7803e6
view assign replaces at buffer identity (#15298)
...
matches what functions capture
2026-03-16 19:58:38 -04:00
George Hotz
476276f4b4
support grads on tuples (#15287)
...
* support grads on tuples
* simpler
* grad_fxn works
* cleanups
* unused
2026-03-16 17:39:34 +08:00
George Hotz
08662bc4ab
add TUPLE/GETTUPLE, simple tests pass (#15286)
...
* simple tuple stuff passes
* resolved
2026-03-16 15:06:02 +08:00
chenyu
cd14e8e64b
allocations contiguous is store+after (#15280)
2026-03-15 11:58:40 -04:00
Sieds Lykles
4b59083d7c
assign into empty works (#15256)
2026-03-13 10:24:29 -04:00
chenyu
018c01508d
test case for call precompile multi (#15254)
2026-03-13 06:28:43 -04:00
b1tg
18dc77ccab
add fp8 fnuz dtypes with PYTHON backend support (#14945)
...
* add fp8 fnuz dtypes with PYTHON backend support
* rm emu related change
* clarify fp8 fnuz zero handling
* Revert "rm emu related change"
This reverts commit efa4763c22.
---------
Co-authored-by: b1tg <b1tg@users.noreply.github.com>
Co-authored-by: chenyu <chenyu@fastmail.com>
2026-03-11 22:30:18 -04:00
George Hotz
4f3f55328b
do not patch on invalid tensor tests (#15226)
...
* do not patch on invalid tensor tests
* cleanup
2026-03-12 09:35:20 +08:00
Christopher Milan
2fb8a7f60f
fix test_invalid_tensor when before values are nan (#15215)
2026-03-10 23:51:19 -04:00
Christopher Milan
ffaafd391a
Invalid in Tensor (#15154)
2026-03-10 02:49:54 -04:00
chenyu
a53187eef7
fix TestPartialAssignToSharedBuffer (#15202)
...
bufferize_to_store issue with assign
2026-03-09 23:14:23 -04:00
b1tg
891a73befc
llm: fix chunked prefill (#15182)
...
* llm: fix chunked prefill
* less lines
---------
Co-authored-by: b1tg <b1tg@users.noreply.github.com >
2026-03-07 22:08:31 +08:00
Ananta Ranganathan
5bdad8ee41
update mxfp4 tests to use the same patterns as the others (#15177)
...
* update mxfp4 tests to use the same patterns as the others
* fix typo in test call not sure how it committed
2026-03-06 13:21:40 -05:00