Commit Graph

6279 Commits

Author SHA1 Message Date
nimlgen
159ee04489 include qcom in view_supported_devices (#6985)
* include qcom in view_supported_devices

* ignore images
2024-10-11 01:10:51 +03:00
nimlgen
f9d454aed5 correct kernargs alignment (#6984) 2024-10-11 00:06:28 +03:00
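The commit above doesn't show the fix itself, but "correct kernargs alignment" generally means padding kernel-argument sizes or offsets up to a required boundary. A generic sketch of that round-up (an illustration, not the actual change in this commit):

```python
def round_up(n: int, align: int) -> int:
    # Round n up to the next multiple of align (align > 0) -- the usual way
    # kernel-argument buffers are padded to an alignment boundary.
    return (n + align - 1) // align * align

# e.g. a 17-byte argument block padded to a 16-byte boundary occupies 32 bytes
```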
qazal
2b17279d4e viz don't default open the browser [pr] (#6983)
* viz don't default open the browser [pr]

* move st

* scale down
2024-10-10 22:12:18 +03:00
qazal
4f60252210 reduce scheduler process replay overhead [pr] (#6981) 2024-10-10 20:03:38 +03:00
Friedrich Carl Eichenroth
859d6d0407 Fix mypy examples/beautiful_*.py (#6978)
* fix mypy examples/beautiful_*.py

* backwards

* add test

* Revert "add test"

This reverts commit 4d88845ba3.

---------

Co-authored-by: chenyu <chenyu@fastmail.com>
2024-10-10 11:34:29 -04:00
qazal
4ef5310039 track viz context even if rewrite errors [pr] (#6976) 2024-10-10 18:33:15 +03:00
chenyu
592e5f1df2 skip test_viz test_no_dedup_different_opts (#6979) 2024-10-10 11:10:24 -04:00
chenyu
e3dc10f8f6 improve fold_unrolled_divs (#6977)
addressed #6935
The first few terms in fold_unrolled_divs might have been folded already, so the check should first try adding those terms back. There is also a case where every term but one is folded, leaving something that is no longer an add chain; that is added as a failing test case for now.
2024-10-10 10:52:05 -04:00
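The fold above rests on an arithmetic identity: an unrolled chain of floor divisions with consecutive offsets sums back to the plain numerator. A minimal sketch verifying that identity (not tinygrad's actual fold_unrolled_divs implementation):

```python
# For integer x and n > 0:  sum((x + i) // n for i in range(n)) == x.
# This is why an add chain (x+0)//n + (x+1)//n + ... + (x+n-1)//n can be
# folded to x -- and why, if some terms were folded away earlier, the
# checker must add them back before it can match the full chain.

def unrolled_div_sum(x: int, n: int) -> int:
    return sum((x + i) // n for i in range(n))

for n in (2, 3, 4, 8):
    for x in range(-20, 21):
        assert unrolled_div_sum(x, n) == x
print("identity holds")
```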
qazal
3481468702 bring viz to core (#6970)
* move viz to core

* pathfix

* move test_viz to core

* cleanup test_viz diff

* use contextvars
2024-10-10 16:56:26 +03:00
nimlgen
fad575ec76 qcom tiny cleanups (#6973) 2024-10-10 12:26:41 +03:00
qazal
3724a66716 move test_viz to test/, prereq for tinygrad/viz [pr] (#6972) 2024-10-10 11:40:46 +03:00
Kinvert
960c495755 added beautiful fashion mnist and example (#6961)
* added beautiful fashion mnist and example

* fixing whitespace

* refactor Fashion MNIST to fewer lines

* fix newline to reduce diff

* Update beautiful_mnist.py

* Update beautiful_mnist.py

---------

Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
2024-10-10 12:01:07 +08:00
chenyu
b5546912e2 10% more TRAIN_STEPS for bert (#6971)
got two very close runs; adding more steps as a buffer
2024-10-09 19:21:43 -04:00
nimlgen
f90d8493cc add HCQDEV_WAIT_TIMEOUT_MS (#6968) 2024-10-09 19:50:00 +03:00
chenyu
35cf48659b limit beam param for bert on green (#6966)
seems to mitigate the crash
2024-10-09 11:48:18 -04:00
mesozoic-egg
0e8bcda07e get readable error from wait_check (#6965)
Co-authored-by: Mesozoic Egg <mesozoic.egg@proton.me>
2024-10-09 17:28:58 +03:00
qazal
20d3c2d113 unify UOps.SHAPETRACKER and UOps.SWIZZLE with UOps.VIEW (#6955)
* add UOps.VIEW

* update hardcoded asts

* update sops.gz
2024-10-09 02:00:17 +08:00
nimlgen
137ad5519f amd fix cwsr for gfx11 (#6950)
* amd cwsr

* ()
2024-10-08 17:44:29 +03:00
nimlgen
0d526e251e nv sync on gpu before local update (#6954) 2024-10-08 17:43:58 +03:00
qazal
2800520dd5 even smaller process_replay.py [pr] (#6941)
* even smaller process_replay.py [pr]

* delete those tests

* dedup asts
2024-10-08 20:43:22 +08:00
qazal
851f39653a rename to BUFFER_VIEW + MetaOps cleanup (#6953) 2024-10-08 20:09:22 +08:00
chenyu
1ff2c98f8a fix logfile name for bert red (#6952) 2024-10-08 05:37:52 -04:00
czhu
08bfa8632b embedding shape (#6930) 2024-10-08 14:42:20 +08:00
vladov
20a9683403 Make self.fd Optional. (#6855)
* Make self.fd Optional.

* Fix io_uring when missing fd.

* Compress io_uring fast path code.
2024-10-08 13:25:34 +08:00
chenyu
a78c96273a update bert epoch logging (#6940)
* update bert epoch logging

epoch for bert is simply the number of examples seen (which is used for the RCP check)

* update total steps too

* more changes
2024-10-08 00:34:06 -04:00
George Hotz
0498e846a5 break out metaops (#6948) 2024-10-08 12:08:54 +08:00
nimlgen
42609300ff hcq no timeline signals in init (#6944) 2024-10-07 23:36:19 +03:00
qazal
0ecc417dd2 prep for viz move to core [pr] (#6938)
* prep for viz move to core [pr]

* polish
2024-10-07 23:24:04 +08:00
chenyu
e4c0743188 failed example for logcumsumexp (#6936)
need cummax for numerical stability
2024-10-07 10:55:45 -04:00
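The commit body notes that logcumsumexp needs a cumulative max for numerical stability: a naive log(cumsum(exp(x))) overflows for large inputs. A hedged pure-Python sketch of the stable formulation using a running max (an illustration of the technique, not the tinygrad implementation):

```python
import math

def logcumsumexp(xs):
    # Stable log-cumsum-exp: carry a running max m (the "cummax" the commit
    # refers to) so exp() only ever sees non-positive arguments.
    out, running, m = [], 0.0, -math.inf
    for x in xs:
        new_m = max(m, x)
        # rescale the accumulated sum to the new max, then add this term
        running = running * math.exp(m - new_m) + math.exp(x - new_m)
        m = new_m
        out.append(m + math.log(running))
    return out
```

For xs = [1000.0, 1000.0] the naive version overflows at exp(1000), while this returns [1000.0, 1000.0 + log 2] exactly as expected.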
chenyu
102dfe5510 back to 2**10 for bert loss scaler (#6934)
got NaN twice with this; reverting back to 2**10
2024-10-07 10:17:21 -04:00
qazal
9250452da4 no codegen import in ops [pr] (#6888)
* no codegen import in ops [pr]

* @track_rewrites

* all functions need this

* polish
2024-10-07 20:54:21 +08:00
George Hotz
f7f94cd62f bitcast cleanup [pr] (#6933) 2024-10-07 19:16:16 +08:00
chenyu
0cf815a93a bert use BS=66 and update hparams (#6932)
with the dropout memory improvement, we can fit BS=66 now. also reverts back to the hparams in #5891
2024-10-07 05:08:27 -04:00
ignaciosica
32ac24c45c Generic wmma rendering for cuda, ptx [run_process_replay] (#6838)
* generic wmma rendering for cuda, ptx

- also adds wmma generic shape ops_python support

* hotfix: fixed values in ops_python

* hotfix: more fixed values

* hotfix: revert changes in ops_python

* refactor wmma rendering

* hotfix: get n_args directly

* hotfix: use n_args[0] for a

* hotfix: simplify

* hotfix: add args_slices

* hotfix: rename args back to operands

* hotfix: fix spacing

* hotfix: rename upc to sz

* hotfix: rename args to operands in assembly

* hotfix: space

* hotifx: add comment for literal 4

* hotfix: rename some variables and change for clarity
2024-10-07 16:36:36 +08:00
qazal
b82023c97e process replay cleanup to generic _pmap [pr] (#6929)
* process replay cleanup to generic _pmap [pr]

* delete `COMPARE_SCHEDULE`
2024-10-07 13:57:05 +08:00
qazal
16312b4c59 rip out old scheduler process replay stuff, diff pure UOps [pr] (#6927) 2024-10-07 13:20:35 +08:00
chenyu
999e3780e9 dropout contiguous after >= p (#6892)
make it a bool buffer
2024-10-06 19:40:42 -04:00
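The commit formulates the dropout keep-mask as a boolean buffer via a `>= p` comparison. A hedged NumPy sketch of that formulation (illustrative, not the tinygrad code):

```python
import numpy as np

def dropout(x: np.ndarray, p: float, rng: np.random.Generator) -> np.ndarray:
    # Draw uniforms and compare against p: the result is a bool buffer
    # (as in the commit), True where the element survives.
    mask = rng.random(x.shape) >= p
    # Rescale survivors by 1/(1-p) so the expected value is unchanged.
    return np.where(mask, x / (1.0 - p), 0.0)
```

Keeping the mask as a bool (rather than a float 0/1 tensor) is what makes it cheap to store contiguously.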
wozeparrot
9eb6eef441 seed in tensor (#6869) 2024-10-06 14:46:58 -04:00
Tobias Fischer
f9e32f2bb2 clip device fix (#6924) 2024-10-07 00:47:32 +08:00
chenyu
01a2d7316d dtype=float in bert log_softmax for loss and accuracy (#6916) 2024-10-06 11:15:56 -04:00
jeffzh4ng
19a7e41113 implement logcumsumexp (#6921)
* implement logcumsumexp

* change axis=None to axis=0
2024-10-06 10:45:36 -04:00
George Hotz
f588169fdc hotfix: ad for DEBUG=2 in the mnist tutorial 2024-10-06 21:05:48 +08:00
qazal
10ff1d6fb9 viz prep refactor for tracked scope decorator [pr] (#6920)
* viz prep refactor for tracked scope decorator [pr]

* fix fuzzer
2024-10-06 16:02:09 +03:00
qazal
837f9c6832 new viz fuzz tests, track multiple contexts (#6913)
* add FUZZ_VIZ option

* add FUZZ_VIZ=1 tests

* use .replace

* rewrites test

* add rewrite_stack

* add FUZZ_VIZ to ops

* what if FUZZ_VIZ was up there

* leave fuzz_viz for now
2024-10-06 14:58:15 +03:00
chenyu
75d9dcf000 support dtype in softmax and log_softmax (#6914)
matches torch. for mixed precision training, we want to use float for softmax
2024-10-06 07:18:15 -04:00
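The dtype argument above exists so half-precision activations can run the softmax reduction in float, as torch does. A minimal NumPy sketch of why and how (an illustration under that assumption, not tinygrad's implementation):

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1, dtype=np.float32) -> np.ndarray:
    # Upcast before exp/sum so float16 inputs don't lose precision (or
    # overflow) in the reduction -- the mixed-precision use case the
    # commit describes.
    xf = x.astype(dtype)
    xf = xf - xf.max(axis=axis, keepdims=True)  # stabilize
    e = np.exp(xf)
    return e / e.sum(axis=axis, keepdims=True)
```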
chenyu
718b959349 log epoch start and stop for bert (#6912) 2024-10-06 06:39:46 -04:00
qazal
b066ef2282 small changes from the viz_rewrite branch [pr] (#6907)
* simpler replace

* dont show shapetracker consts

* changed_nodes shouldn't exist for the first sink
2024-10-06 12:00:55 +03:00
chenyu
16c1fa4208 use BEAM=3 for red box bert runs (#6904)
BEAM=4 slightly exceeded the 30-minute setup budget
2024-10-05 09:21:12 -04:00
chenyu
0e706227a2 add seed to bert result log filename (#6903)
* add seed to bert result log filename

* different name for different benchmark
2024-10-05 09:15:24 -04:00
George Hotz
8ed3a00c9c ceildiv helper [pr] (#6899) 2024-10-05 14:59:10 +08:00
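A ceildiv helper like the one this commit adds is usually written branch-free on top of Python's floor division. A hedged sketch of that common formulation (illustrative; the commit itself defines the actual helper):

```python
def ceildiv(num: int, amt: int) -> int:
    # Python's // floors, so dividing by the negated denominator and
    # negating the result rounds toward +infinity instead.
    return -(num // -amt)

# e.g. ceildiv(7, 2) == 4, while 7 // 2 == 3
```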