Commit Graph

8072 Commits

Author SHA1 Message Date
chenyu
9eb45eb629 add a flag to skip bert train (#9349) 2025-03-04 17:13:00 -05:00
nimlgen
14c88abf27 add some options to allreduce bench (#9348) 2025-03-04 23:46:36 +03:00
nimlgen
993ef42bd5 am: hdp cg (#9346) 2025-03-04 20:44:09 +03:00
chenyu
e301f21f63 CI ubuntu-20.04 -> ubuntu-22.04 (#9345)
20.04 is removed now
2025-03-04 11:39:12 -05:00
hooved
01f7a4fadc tinychat in browser, Part 2: model export (#9274)
* load llama3-1B to WEBGPU device

* include compile script for loading llama3 to WEBGPU

* parametrize max_context in build_transformer fxn

* jit_model with two different args sets

* compile for webgpu, split weights

* load model weight parts in browser

* export all tensors from initialized transformer

* run transformer inference in browser

* enable tiktoken with llama bpe in browser

* count total tokens on client with tiktoken.js

* full client-side chat streaming, eliminate server

* revert change that enabled jitting with 2 argsets

* llama without Variable or cache_kv, for webgpu

* have client use mask tokens / whole context

* cleanup staged weights

* add tiktoken.js build script, README

* export CLANG for Q6_k to float32 decompression

* fix and test exported CLANG code for Q6_k to fp32

* revert changes to jit and export_model

* isolate clang export

* test Q6_K to float32 decompression in browser

* gguf_load now also returns t_infos and data_start

* prepare llama-1B Q6_K gguf chunks for browser

* cache and decompress quantized llama in browser

* enable separate deployment of large files

* fix kv cache and symbolic with llama wgpu

* eliminate browser lag during decompression

* hash metadata and weight chunks

* delete obsolete indexeddb cache to free disk

* add progress bar, track model download/decompress

* refactor progress callback

* skip buffer hash verification for speed

* Display progress for entire loading scope

* Report page load errors to user

* actually display errors

* skip prompt tokens already seen by model

* skip prefilling with last assistant message tokens

* on page load tell user if webgpu not enabled

* push deployed URL root to window.history

* make note of bug sources with TODO items

* isolate bug in CLANG with BEAM=2

* remove clang_bug.py from diff

* decompress q6k to f32 on webgpu instead of clang

* remove unused code

* inter-weight decomp with larger wgpu kernels

* parallelize decompression submissions

* refactor dequantize scheduling

* add progress bar back

* fix bug

* temp fix for loading GGUF Q6_K to fp16 not fp32

* fix rendering of exported CLANG

* remove weight casts, sketch js functions for clang

* get symbolic vars from jit_cache for model export

* include symbolic vars in exported CLANG

* render js for clang transformer

* toggle clang/webgpu deployment; refactor decomp

* compile and render clang Q6_K->fp16 and int8 quant

* fix rendered clang for abs(fp16), to work in wasm

* simplify clang js wrapping

* run compiled clang in worker

* prepare llama weights in workers, q6k to int8/fp16

* tinychat on clang in browser, f32/int8 weights

* move wasm inference to (now flexible) worker

* don't load redundant embeddings

* modest wasm perf gain with compile flags

* set default backend, enable backend choice/backup

* render symbolic vars in exported WEBGPU

* quantize webgpu llama to int8/f32

* improve UX arising from rendered WEBGPU

* clean up webgpu launch

* new weights split: smaller chunks, tinygrad quant.

* switch webgpu inference to int8 quant

* remove unneeded clang decompression

* eliminate unneeded kv cache transfer to wasm

* use 1 worker for simplified clang decompression

* display launch errors

* refactor: stream load weight chunks to WebGPU

* show loading chunk completion

* quantize embeddings to int8

* test float16 as input for quantization

* webgpu: use f16 source, int8 embed, eliminate q6k

* simplify split weights prep: all from state_dict

* revert change to nn.state.gguf_load

* remove unneeded decompression from webgpu client

* remove unneeded code

* decrease dl chunks from 47 to 16 MiB

* improve stability of webgpu loading on mobile

* autodetect mobile, improve load stability

* refactor: progress closure

* refactor: one unified progress bar

* remove unneeded code

* revert changes to tinygrad core library

* enforce ios18.3 nerfed max buf size

* BEAM=3 webgpu

* cache integrity, mobile save throttling

* improve mobile UX - no autozoom on prompt box

* clang: int8 from f16, remove q6k

* reduce concurrent dls on mobile to 2 for stability

* refactor: wasm backend with stream loading

* prevent race between wasm load and indexeddb save

* split wasm kernels into separate modules

* js wrapper for multiple wasm module inference

* revert multi-module wasm to single module

* make mobile wasm load more stable/fast

* refactor: copy weights into wasm without crashes

* fix bug in download queue; increase mobile dls

* refactor exported clang wrapper, split weights

* remove unnecessary code

* greatly improve int8 quant quality with rounding

* eliminate mobile throttling

* increase webgpu context to 4096 tokens

* export webgpu js functions

* enable separate hosted weights for mobile/pc

* enable prompt-thread switching during generation

* stop generation when max_context is reached

* show progress bar for prefill

* tell user if webgpu fails, while wasm loads

* make loading messages more concise

* update font

* revert changes to tinychat python app launch

* cleanup quantization, add scale_dtype param

* cleanup kv cache code

* cleanup compile code

* link tok_embeddings with output in webgpu export

* refactor: export_model webgpu: symbolic vars

* refactor: export_model weight loading

* forgot to commit export_model.py

* change CLANG to CPU

* deal with pylint incorrectly failing tests

* simplify f-strings for older CI python version

* fix pre-python3.12 parser errors

* [Int32Array] not Int32Array

* cleanup webgpu compile after refactor export_model

* refactor WASM export into export_model

* merge WebGPU/WASM compile scripts

* simplify max_contexts for local deployment

* fix parser issues and whitespace

* deduplicate variable defs for non-wasm clang export

* cleanup code

* cleanup compile scripts

* simplify wasm inference wrapping

* simplify webgpu symbolic vars export

* refactor: unify export of symbolic variables

* simplify WASM export

* simplify clang/wasm export

* update README and build scripts

* separate files for browser/python apps

* restore original python tinychat app files

* browser and python tinychats share assets

* minor cleanup

* isolate compile/export model

---------

Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
2025-03-04 15:53:30 +08:00
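Note on the weight-splitting bullets above ("new weights split: smaller chunks", "decrease dl chunks from 47 to 16 MiB"): below is a minimal sketch of the chunking idea, not the PR's code. The `split_weights` helper and file names are hypothetical; only the 16 MiB figure comes from the commit message.

```python
# Hypothetical sketch: split an exported weights blob into fixed-size pieces
# so a browser client can download and cache them independently.
CHUNK_MIB = 16  # the PR shrank download chunks from 47 MiB to 16 MiB

def split_weights(path: str, chunk_mib: int = CHUNK_MIB) -> list[bytes]:
  chunk_size = chunk_mib * 1024 * 1024
  with open(path, "rb") as f: data = f.read()
  return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

if __name__ == "__main__":
  for i, chunk in enumerate(split_weights("llama_weights.bin")):  # hypothetical file name
    with open(f"llama_weights.chunk{i}", "wb") as f: f.write(chunk)
```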
Friedrich Carl Eichenroth
94db8426cb Add consistent typing for python 3.10 in tensor.py (#9326)
* add consistent typing for python 3.10 in tensor.py

* pull

* Tensor.copysign (#9329)

* fast amd gemm (#9318)

* 50 TFLOP AMD gemm

* add lds tiling

* register tiling

* flip locals

* work

* comment

* remove those

* fix Tensor.view with a tuple arg (#9330)

* reorder binops (#9328)

* reorder binops

* test improvements + fix string tests

* ugh, okay this

* Make const moving not depend on the order (#9245)

Since floats are not being flipped anymore this should help with const
folding for floats

* use empty for test instead of rand (#9332)

* linter

---------

Co-authored-by: chenyu <chenyu@fastmail.com>
Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
Co-authored-by: Sieds Lykles <93992551+S-Lykles@users.noreply.github.com>
2025-03-03 16:00:17 -05:00
chenyu
019417743c ruff torch backend (#9341) 2025-03-03 15:15:23 -05:00
nimlgen
f9e4c638f1 torch_hook fixes (#9334) 2025-03-03 23:07:30 +03:00
chenyu
40619a4bbc separate workflow for TINY_BACKEND=1 mnist (#9339)
* separate workflow for TINY_BACKEND=1 mnist

* rebalance
2025-03-03 13:05:24 -05:00
Anish Umale
bafa40fe12 Tiny backend test_ops fix part1 (#9338)
* extract name methods from https://github.com/tinygrad/tinygrad/pull/9302

* t.grad.numpy() -> t.grad.cpu().numpy()

* revert TORCH_DEBUG change

* revert dtype change in aten.sum
2025-03-03 12:36:51 -05:00
George Hotz
0d4ba7dd87 import tinygrad.frontend.torch (#9337)
* import tinygrad.frontend.torch

* type ignore
2025-03-04 00:15:29 +08:00
Friedrich Carl Eichenroth
b4028e48ae Torch Backend Refinement (#9327)
* fix some torch tests

* fixup

* small change

* fixup

* fix test

* use default function

* add todo

* bunch of small changes

* fix tests

* more tests

* fix

* fix

* test fix

* simplify
2025-03-03 10:24:02 -05:00
qazal
23084fd850 merge merge_views and remove_movement_ops [pr] (#9333)
* merge merge_views and remove_movement_ops [pr]

* fix that assert
2025-03-03 12:38:59 +01:00
George Hotz
ece0a0f305 use empty for test instead of rand (#9332) 2025-03-03 16:19:06 +08:00
Sieds Lykles
27e899aea5 Make const moving not depend on the order (#9245)
Since floats are not being flipped anymore this should help with const
folding for floats
2025-03-03 16:09:27 +08:00
George Hotz
2cc4cb74f0 reorder binops (#9328)
* reorder binops

* test improvements + fix string tests

* ugh, okay this
2025-03-03 14:58:18 +08:00
chenyu
146eb73790 fix Tensor.view with a tuple arg (#9330) 2025-03-02 23:35:23 -05:00
George Hotz
a73d8717f3 fast amd gemm (#9318)
* 50 TFLOP AMD gemm

* add lds tiling

* register tiling

* flip locals

* work

* comment

* remove those
2025-03-03 12:01:14 +08:00
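As a sanity check on the "50 TFLOP AMD gemm" figure: a square matmul of size N performs roughly 2·N³ floating-point operations, so the achieved rate is that count divided by the measured kernel time. The N and runtime below are assumptions for illustration, not measurements from the PR.

```python
# Back-of-the-envelope TFLOPS for an N x N x N matmul: ~2*N^3 FLOPs
# (N multiplies and N-1 adds per output element, N*N outputs).
N = 4096                       # assumed problem size
seconds = 2.7e-3               # assumed kernel time; substitute a measured value
tflops = (2 * N**3) / seconds / 1e12
print(f"{tflops:.1f} TFLOPS")  # ~50.9 with these assumed numbers
```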
chenyu
ba4b8c2c23 Tensor.copysign (#9329) 2025-03-02 21:33:49 -05:00
Friedrich Carl Eichenroth
06ef9cc9f4 aten leaky_relu, div.out_mode, clamp_max, clamp_min, copysign (#9323)
* fix some torch tests

* fixup

* small change

* fixup

* fix test

* use default function

* add todo
2025-03-02 19:12:16 -05:00
Friedrich Carl Eichenroth
ac9c96dae1 add tensor.py typing (#9325) 2025-03-02 16:24:39 -05:00
George Hotz
cd03458ab3 fix barrier + comgr asm + mini viz fix (#9322)
* fix barrier + comgr asm + mini viz fix

* delete peephole
2025-03-02 23:35:00 +08:00
nimlgen
8cae00833c flaky test in ci (#9321) 2025-03-02 16:27:22 +03:00
nimlgen
91c421fb7d adaptive am_smi (#9319) 2025-03-02 15:45:07 +03:00
Ali Ladjevardi
00028e87bb Failing test for not realizing intermediate expand in multi-GPU (#9320) 2025-03-02 12:54:48 +01:00
George Hotz
ba97fd0b9c hotfix: add test/external/external_benchmark_disk_raw 2025-03-02 02:32:15 +00:00
chenyu
cc2bbb0bf1 Tensor.isfinite (#9316) 2025-03-01 19:58:56 -05:00
geohotstan
d9ec05cea6 Test Onnx quantization behavior (#9301)
* add DynamicDequantizeLinear and corresponding tests

* wow, QLinear ops are round-away-from-zero

* this passes locally...

* again

* try

* try separate test

* round to even again

* also add QLinearMul

---------

Co-authored-by: chenyu <chenyu@fastmail.com>
2025-03-01 19:21:58 -05:00
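The "round away from zero" vs "round to even" bullets above refer to how the quantized linear ops resolve ties at .5. A small numpy illustration of the two conventions (not the test code from the PR):

```python
import numpy as np

x = np.array([0.5, 1.5, 2.5, -0.5, -2.5])

# round-half-to-even ("banker's rounding"), which is what np.round does
print(np.round(x))                                # [ 0.  2.  2. -0. -2.]

# round-half-away-from-zero, the behavior the QLinear* ops were observed to use
print(np.copysign(np.floor(np.abs(x) + 0.5), x))  # [ 1.  2.  3. -1. -3.]
```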
uuuvn
a8a6e22cbd Do not "oob"-write zero to regCOMPUTE_THREAD_TRACE_ENABLE every exec (#9314)
```python
self.pkt3(amd_gpu.PACKET3_SET_SH_REG, gfxreg(amd_gpu.regCOMPUTE_RESTART_X), 0, 0, 0, 0)
```

This will write zero to regCOMPUTE_RESTART_{X,Y,Z} and regCOMPUTE_THREAD_TRACE_ENABLE because of:

```python
regCOMPUTE_RESTART_X = 0x1bbb
regCOMPUTE_RESTART_Y = 0x1bbc
regCOMPUTE_RESTART_Z = 0x1bbd
regCOMPUTE_THREAD_TRACE_ENABLE = 0x1bbe
```

There is also a similar "oob"-write to regCOMPUTE_PIPELINESTAT_ENABLE and regCOMPUTE_PERFCOUNT_ENABLE in:

```python
self.pkt3(amd_gpu.PACKET3_SET_SH_REG, gfxreg(amd_gpu.regCOMPUTE_START_X), 0, 0, 0, *local_size, 0, 0)
```

But it doesn't seem to be strictly required for SQTT, so I left it alone.
2025-03-01 17:07:58 +03:00
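A tiny sketch of the overrun described above: PACKET3_SET_SH_REG writes its payload to consecutive register offsets, so four zeros starting at regCOMPUTE_RESTART_X also land on regCOMPUTE_THREAD_TRACE_ENABLE. The register values are the ones quoted in the commit; the rest is illustrative.

```python
regCOMPUTE_RESTART_X           = 0x1bbb
regCOMPUTE_RESTART_Y           = 0x1bbc
regCOMPUTE_RESTART_Z           = 0x1bbd
regCOMPUTE_THREAD_TRACE_ENABLE = 0x1bbe

payload = [0, 0, 0, 0]  # the four zeros passed to pkt3 in the snippet above
touched = [regCOMPUTE_RESTART_X + i for i in range(len(payload))]
assert regCOMPUTE_THREAD_TRACE_ENABLE in touched  # the unintended extra write
```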
Priyank Patel
f4148ac46a torch fix casting and add ops for sd vae(s) (#9297)
* torch fix copy casting and add upsample op

* update cast and add test

* fix lint

* add pad for sdxl vae to work
2025-03-01 08:49:10 -05:00
qazal
845814f396 revert buffer_view change (#9311)
* Revert "BUFFER_VIEW is a node in the kernel graph + delete ViewOp (#9298)"

This reverts commit 3210b656b6.

* Revert "substitute ast from kernel op [pr] (#9293)"

This reverts commit 5a9c788ae6.
2025-03-01 11:00:12 +01:00
nimlgen
80b8756150 add npy as_buffer (#9309)
* npy -> dev copies faster

* rv and cpout
2025-03-01 12:34:29 +03:00
chenyu
fe0f860209 update test_ops for tensors from torch (#9308)
a few detach().numpy() -> detach().cpu().numpy()
2025-02-28 15:57:25 -05:00
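The pattern behind this change (and the similar one in #9338): with the tinygrad torch backend, tensors live on a non-CPU device, and torch only converts CPU tensors to numpy, so `.cpu()` has to come before `.numpy()`. A small plain-torch illustration:

```python
import torch, numpy as np

t = torch.arange(4.0, requires_grad=True)
(t * t).sum().backward()

# t.grad.numpy() raises when the tensor is not on the CPU device (as with the
# tinygrad torch backend), so move it to host memory first:
grad = t.grad.detach().cpu().numpy()
np.testing.assert_allclose(grad, 2 * np.arange(4.0))
```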
chenyu
38d7aae3b7 onnx fmod (#9307) 2025-02-28 14:09:22 -05:00
chenyu
7c7db78feb support float mod (#9306)
also added a spec check that Ops.MOD takes ints only
2025-02-28 13:33:58 -05:00
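For context on the fmod/mod pair of commits (#9307, #9306): the two common conventions for a floating-point remainder differ in which operand's sign the result takes. A plain-Python illustration of the difference (it does not assert which convention tinygrad uses where):

```python
import math

x, y = -5.5, 2.0
print(math.fmod(x, y))  # -1.5  (C-style fmod: result takes the sign of x)
print(x % y)            #  0.5  (floored mod: result takes the sign of y)
```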
chenyu
90808e2dd0 div rounding_mode (#9304) 2025-02-28 11:38:25 -05:00
chenyu
3ae66e59a3 least_upper_float is at least default_float (#9303)
* least_upper_float is at least default_float

En route to div rounding_mode: the dtype of true int division changes from int32 to default_float, which matches torch as well.

* fix bert acc
2025-02-28 10:41:56 -05:00
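The torch behavior this commit matches can be checked directly: true division of integer tensors promotes to the default float dtype, while floor division stays integral.

```python
import torch

a = torch.tensor([3], dtype=torch.int32)
b = torch.tensor([2], dtype=torch.int32)
print((a / b).dtype)  # torch.float32 -- true int division promotes to default float
print(a // b)         # tensor([1], dtype=torch.int32) -- floor division stays int
```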
qazal
3210b656b6 BUFFER_VIEW is a node in the kernel graph + delete ViewOp (#9298) 2025-02-28 12:15:04 +02:00
qazal
5a9c788ae6 substitute ast from kernel op [pr] (#9293)
* substitute ast from kernel op [pr]

* add buffer_map

* without tensor_map + complete in_degree/children graph

* count groups + append assigns

* first
2025-02-28 11:44:48 +02:00
nimlgen
052722a7bc torch hook: address comments (#9295)
* torch hook: address comments

* failed test
2025-02-28 11:51:52 +03:00
Eitan Turok
d657d5f754 [Bounty] Vectorize Transcendental (#9058)
* init

* cast everything right

* more casting

* install pillow in test

* quick tests

* simplify

* quick tests

* delete test

* tests

* fix import error

* add vec to ldexp3k

* vec for bitcast

* some helper tests

* high level tests

* clean tests

* change tolerance so cuda passes

* ruff passes

* remove tests for transcendental helpers

* ruff passes

* make exponent in power vectorized

* fix pow test

* add newline

* add vec dtype to ilogb2k

* comment + clean up

* ruff

---------

Co-authored-by: chenyu <chenyu@fastmail.com>
Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
2025-02-28 15:47:25 +08:00
Priyank Patel
8ae215dd3d torch backend fix manual seed warning (#9292) 2025-02-28 13:45:32 +08:00
George Hotz
5aa80cb602 add CUSTOMI for inline custom (#9291) 2025-02-28 11:11:30 +08:00
George Hotz
ac40316692 hotfix: group cpu functions in torch backend 2025-02-28 10:39:00 +08:00
George Hotz
b32595dbbc torch examples (#9290)
* torch, fix examples/mnist

* fix vae torch example

* where out
2025-02-28 10:16:06 +08:00
ZwX1616
c977781b3c no numpy change if no NPY (#9281)
* skip np change check if no NPY

* use any
2025-02-28 09:32:35 +08:00
hooved
3b9950241e tinychat in browser, Part 1: llama (#9273)
* load llama3-1B to WEBGPU device

* include compile script for loading llama3 to WEBGPU

* parametrize max_context in build_transformer fxn

* jit_model with two different args sets

* compile for webgpu, split weights

* load model weight parts in browser

* export all tensors from initialized transformer

* run transformer inference in browser

* enable tiktoken with llama bpe in browser

* count total tokens on client with tiktoken.js

* full client-side chat streaming, eliminate server

* revert change that enabled jitting with 2 argsets

* llama without Variable or cache_kv, for webgpu

* have client use mask tokens / whole context

* cleanup staged weights

* add tiktoken.js build script, README

* export CLANG for Q6_k to float32 decompression

* fix and test exported CLANG code for Q6_k to fp32

* revert changes to jit and export_model

* isolate clang export

* test Q6_K to float32 decompression in browser

* gguf_load now also returns t_infos and data_start

* prepare llama-1B Q6_K gguf chunks for browser

* cache and decompress quantized llama in browser

* enable separate deployment of large files

* fix kv cache and symbolic with llama wgpu

* eliminate browser lag during decompression

* hash metadata and weight chunks

* delete obsolete indexeddb cache to free disk

* add progress bar, track model download/decompress

* refactor progress callback

* skip buffer hash verification for speed

* Display progress for entire loading scope

* Report page load errors to user

* actually display errors

* skip prompt tokens already seen by model

* skip prefilling with last assistant message tokens

* on page load tell user if webgpu not enabled

* push deployed URL root to window.history

* make note of bug sources with TODO items

* isolate bug in CLANG with BEAM=2

* remove clang_bug.py from diff

* decompress q6k to f32 on webgpu instead of clang

* remove unused code

* inter-weight decomp with larger wgpu kernels

* parallelize decompression submissions

* refactor dequantize scheduling

* add progress bar back

* fix bug

* temp fix for loading GGUF Q6_K to fp16 not fp32

* fix rendering of exported CLANG

* remove weight casts, sketch js functions for clang

* get symbolic vars from jit_cache for model export

* include symbolic vars in exported CLANG

* render js for clang transformer

* toggle clang/webgpu deployment; refactor decomp

* compile and render clang Q6_K->fp16 and int8 quant

* fix rendered clang for abs(fp16), to work in wasm

* simplify clang js wrapping

* run compiled clang in worker

* prepare llama weights in workers, q6k to int8/fp16

* tinychat on clang in browser, f32/int8 weights

* move wasm inference to (now flexible) worker

* don't load redundant embeddings

* modest wasm perf gain with compile flags

* set default backend, enable backend choice/backup

* render symbolic vars in exported WEBGPU

* quantize webgpu llama to int8/f32

* improve UX arising from rendered WEBGPU

* clean up webgpu launch

* new weights split: smaller chunks, tinygrad quant.

* switch webgpu inference to int8 quant

* remove unneeded clang decompression

* eliminate unneeded kv cache transfer to wasm

* use 1 worker for simplified clang decompression

* display launch errors

* refactor: stream load weight chunks to WebGPU

* show loading chunk completion

* quantize embeddings to int8

* test float16 as input for quantization

* webgpu: use f16 source, int8 embed, eliminate q6k

* simplify split weights prep: all from state_dict

* revert change to nn.state.gguf_load

* remove unneeded decompression from webgpu client

* remove unneeded code

* decrease dl chunks from 47 to 16 MiB

* improve stability of webgpu loading on mobile

* autodetect mobile, improve load stability

* refactor: progress closure

* refactor: one unified progress bar

* remove unneeded code

* revert changes to tinygrad core library

* enforce ios18.3 nerfed max buf size

* BEAM=3 webgpu

* cache integrity, mobile save throttling

* improve mobile UX - no autozoom on prompt box

* clang: int8 from f16, remove q6k

* reduce concurrent dls on mobile to 2 for stability

* refactor: wasm backend with stream loading

* prevent race between wasm load and indexeddb save

* split wasm kernels into separate modules

* js wrapper for multiple wasm module inference

* revert multi-module wasm to single module

* make mobile wasm load more stable/fast

* refactor: copy weights into wasm without crashes

* fix bug in download queue; increase mobile dls

* refactor exported clang wrapper, split weights

* remove unnecessary code

* greatly improve int8 quant quality with rounding

* eliminate mobile throttling

* increase webgpu context to 4096 tokens

* export webgpu js functions

* enable separate hosted weights for mobile/pc

* enable prompt-thread switching during generation

* stop generation when max_context is reached

* show progress bar for prefill

* tell user if webgpu fails, while wasm loads

* make loading messages more concise

* update font

* revert changes to tinychat python app launch

* cleanup quantization, add scale_dtype param

* cleanup kv cache code

* cleanup compile code

* link tok_embeddings with output in webgpu export

* refactor: export_model webgpu: symbolic vars

* refactor: export_model weight loading

* forgot to commit export_model.py

* change CLANG to CPU

* deal with pylint incorrectly failing tests

* simplify f-strings for older CI python version

* fix pre-python3.12 parser errors

* [Int32Array] not Int32Array

* cleanup webgpu compile after refactor export_model

* refactor WASM export into export_model

* merge WebGPU/WASM compile scripts

* simplify max_contexts for local deployment

* fix parser issues and whitespace

* deduplicate variable defs for non-wasm clang export

* cleanup code

* cleanup compile scripts

* simplify wasm inference wrapping

* simplify webgpu symbolic vars export

* refactor: unify export of symbolic variables

* simplify WASM export

* simplify clang/wasm export

* update README and build scripts

* separate files for browser/python apps

* restore original python tinychat app files

* browser and python tinychats share assets

* minor cleanup

* isolate diffs to llama files

* minor cleanup

* set default scale_dtype

* set default scale_dtype for NF4 quantization

* make quantization of tok_embeds optional

* match output with tok_embeds if not quantizing

* minor change
2025-02-27 15:57:37 -05:00
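On the "greatly improve int8 quant quality with rounding" step that appears in both tinychat parts: below is a minimal numpy sketch of symmetric int8 quantization showing why rounding to the nearest level beats truncation. It is not the exported model's quantizer; the `quantize_int8` helper is hypothetical.

```python
import numpy as np

def quantize_int8(w: np.ndarray, round_values: bool = True):
  scale = np.abs(w).max() / 127.0          # symmetric per-tensor scale: w ~ scale * q
  q = w / scale
  q = np.round(q) if round_values else np.trunc(q)
  return q.astype(np.int8), scale

w = np.random.randn(4096).astype(np.float32)
for rv in (False, True):
  q, s = quantize_int8(w, rv)
  err = np.abs(w - q.astype(np.float32) * s).mean()
  print(f"round={rv}: mean abs error {err:.5f}")  # rounding roughly halves the error
```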
chenyu
184030168d fix aten.reflection_pad2d (#9289)
tested the torch doc example
2025-02-27 15:53:46 -05:00
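For reference, reflection padding mirrors the interior rows and columns around the border (excluding the edge itself); the behavior being matched can be reproduced with plain torch, e.g.:

```python
import torch
import torch.nn.functional as F

x = torch.arange(9, dtype=torch.float32).reshape(1, 1, 3, 3)
y = F.pad(x, (1, 1, 1, 1), mode="reflect")  # reflection_pad2d with padding=1
print(y[0, 0])
# first padded row is [4., 3., 4., 5., 4.]: values mirrored around the border
```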
chenyu
0de6585df0 fix aten.normal_ arg (#9288)
should be mean and std.
2025-02-27 15:36:25 -05:00
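The signature being matched here is torch's in-place `Tensor.normal_(mean, std)`; a quick check of the argument order:

```python
import torch

t = torch.empty(100_000)
t.normal_(mean=5.0, std=2.0)            # aten.normal_ takes mean, then std
print(t.mean().item(), t.std().item())  # roughly 5.0 and 2.0
```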
chenyu
8ee2b460ee Tensor.var_mean (#9287) 2025-02-27 15:15:31 -05:00