George Hotz
a71bb09ec3
remove symbolic file [pr] ( #7012 )
2024-10-12 18:44:44 +08:00
George Hotz
5ae2de9845
UOp.variable ( #7010 )
* UOp.variable [pr]
* fix tests
* clean
* improve name rendering
* last bug
2024-10-12 18:20:44 +08:00
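
A minimal sketch of the new API, assuming UOp is importable from tinygrad.ops, that variable() mirrors the old Variable(name, min, max) signature, and that bind() attaches a concrete value; none of this is taken from the diff itself:

    from tinygrad.ops import UOp  # import path assumed

    i = UOp.variable("i", 1, 10)   # bounded symbolic variable; bounds let rewrites reason about comparisons
    expr = i * 4 + 2               # arithmetic on UOps stays symbolic until bound
    bound = i.bind(7)              # assumed API: pair the variable with a concrete value
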
Markiian Novosad
8831c691e2
Add slice parameter type checking to disallow Tensor usage for slices ( #6967 )
* add support for single el tensors for slices
* rm trailing spaces
* cleanup long lines
* remove tensor in slice support, add comprehensive err msg
* cleanup getitem, add slice type check
* Edit err message
2024-10-11 16:20:21 -04:00
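
An illustrative sketch of the behavior this adds: plain int/slice indexing keeps working, while a Tensor used as a slice bound is rejected with an explicit error (the exact exception type and message are assumptions, not copied from the diff):

    from tinygrad import Tensor

    t = Tensor.arange(10)
    print(t[2:5].numpy())               # plain int slices still work
    try:
        _ = t[Tensor(1):Tensor(4)]      # Tensors as slice bounds are now disallowed
    except (TypeError, IndexError) as e:
        print("rejected:", e)
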
qazal
7451812bbf
delete AST_REWRITE ctx var ( #6995 )
2024-10-11 11:33:16 +03:00
George Hotz
e7a0ffe46a
break out linearization [pr] ( #6994 )
2024-10-11 15:27:33 +08:00
George Hotz
e441794c4b
remove custom op support, we waste time maintaining this ( #6991 )
* remove custom op support, we waste time maintaining this
* customop is over
2024-10-11 14:31:09 +08:00
George Hotz
c08521e823
minor cleanups from toonygrad ( #6990 )
2024-10-11 14:19:10 +08:00
George Hotz
f50d0e0ee0
cloud device [pr] ( #6964 )
* first try at cloud device [pr]
* real separation
* we're free
* clang works
* unhappy with timeout
* better timeouts and free
* unrelated
* use http verbs + add test
* lines + better test
* fix DELETE
* shorter cloud
* split key
* fix sending renderer
* PTXRenderer serialization
* add sessions
* http.client
* minor timeout bump
* fix keep-alive
* inc server timeout
* real fix timeout
* that one too
2024-10-11 12:24:06 +08:00
Bhavya Gada
23c09f4b4c
add support for padding='same' in nn.conv ( #6975 )
* add support for padding='same' in nn.conv
* express concisely
* simplify loop
* test same padding with dilation and conv1d
* fix bad indentation
* make loop one liner
2024-10-11 11:39:07 +08:00
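
A quick usage sketch of the feature: with padding='same' and stride 1, spatial dims are preserved (shapes below follow from the definition of 'same' padding, not from the PR's tests):

    from tinygrad import Tensor, nn

    conv = nn.Conv2d(3, 16, kernel_size=3, padding='same')
    x = Tensor.rand(1, 3, 32, 32)
    print(conv(x).shape)   # (1, 16, 32, 32): height/width unchanged
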
qazal
4ef5310039
track viz context even if rewrite errors [pr] ( #6976 )
2024-10-10 18:33:15 +03:00
chenyu
592e5f1df2
skip test_viz test_no_dedup_different_opts ( #6979 )
2024-10-10 11:10:24 -04:00
chenyu
e3dc10f8f6
improve fold_unrolled_divs ( #6977 )
addressed #6935
the first few terms in fold_unrolled_divs might have been folded already, so the check should first try to add those terms back. There is a case where all but one term are already folded, which is no longer an add chain; that one is just added as a failing test case for now.
2024-10-10 10:52:05 -04:00
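
For context, the arithmetic identity this rewrite exploits (as I read the rule; checked numerically below, independent of tinygrad's rewriter): summing the c shifted floor-divisions (x+0)//c through (x+c-1)//c gives back x, which is what lets a fully unrolled chain of divs fold to a single term.

    # numeric check of the identity behind fold_unrolled_divs
    def folded(x: int, c: int) -> int:
        return sum((x + i) // c for i in range(c))

    assert all(folded(x, 4) == x for x in range(100))
    assert all(folded(x, 7) == x for x in range(100))
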
qazal
3481468702
bring viz to core ( #6970 )
* move viz to core
* pathfix
* move test_viz to core
* cleanup test_viz diff
* use contextvars
2024-10-10 16:56:26 +03:00
qazal
3724a66716
move test_viz to test/, prereq for tinygrad/viz [pr] ( #6972 )
2024-10-10 11:40:46 +03:00
qazal
20d3c2d113
unify UOps.SHAPETRACKER and UOps.SWIZZLE with UOps.VIEW ( #6955 )
* add UOps.VIEW
* update hardcoded asts
* update sops.gz
2024-10-09 02:00:17 +08:00
qazal
2800520dd5
even smaller process_replay.py [pr] ( #6941 )
* even smaller process_replay.py [pr]
* delete those tests
* dedup asts
2024-10-08 20:43:22 +08:00
czhu
08bfa8632b
embedding shape ( #6930 )
2024-10-08 14:42:20 +08:00
chenyu
e4c0743188
failed example for logcumsumexp ( #6936 )
need cummax for numerical stability
2024-10-07 10:55:45 -04:00
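
Why a running max is needed, sketched in plain numpy rather than tinygrad: the direct log(cumsum(exp(x))) overflows for large inputs, while shifting each prefix by its max keeps every exp in range.

    import numpy as np

    x = np.array([1000.0, 1000.5, 1001.0])
    naive = np.log(np.cumsum(np.exp(x)))   # exp(1000.) overflows float64 -> inf everywhere
    stable = np.array([x[:i+1].max() + np.log(np.exp(x[:i+1] - x[:i+1].max()).sum())
                       for i in range(len(x))])
    print(naive)    # [inf inf inf]
    print(stable)   # finite: roughly [1000.0, 1000.97, 1001.68]
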
qazal
b82023c97e
process replay cleanup to generic _pmap [pr] ( #6929 )
* process replay cleanup to generic _pmap [pr]
* delete `COMPARE_SCHEDULE`
2024-10-07 13:57:05 +08:00
qazal
16312b4c59
rip out old scheduler process replay stuff, diff pure UOps [pr] ( #6927 )
2024-10-07 13:20:35 +08:00
wozeparrot
9eb6eef441
seed in tensor ( #6869 )
2024-10-06 14:46:58 -04:00
jeffzh4ng
19a7e41113
implement logcumsumexp ( #6921 )
* implement logcumsumexp
* change axis=None to axis=0
2024-10-06 10:45:36 -04:00
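
Hedged usage sketch of the new op (axis passed explicitly since the default changed per the second bullet; checked against a small-value numpy reference where the naive formula is safe):

    import numpy as np
    from tinygrad import Tensor

    x = np.array([[0.1, 0.2, 0.3], [1.0, 2.0, 3.0]], dtype=np.float32)
    out = Tensor(x).logcumsumexp(axis=1).numpy()
    ref = np.log(np.cumsum(np.exp(x), axis=1))   # safe here: values are small
    assert np.allclose(out, ref, atol=1e-5)
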
chenyu
75d9dcf000
support dtype in softmax and log_softmax ( #6914 )
matches torch. For mixed precision training, we would want to use float for softmax.
2024-10-06 07:18:15 -04:00
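
Usage sketch for the new argument, assuming it behaves like torch's dtype parameter (activations stay in half while the softmax itself is computed in float32); the parameter names here are assumptions:

    from tinygrad import Tensor, dtypes

    logits = Tensor.rand(4, 10, dtype=dtypes.half)
    probs = logits.softmax(axis=-1, dtype=dtypes.float32)   # do the reduce in float32
    print(probs.dtype)
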
chenyu
08414d7b7c
cleanup test_uop_symbolic.py ( #6894 )
no more test_symbolic for reference, so force expected output to be exact instead of a set
2024-10-04 20:53:10 -04:00
ignaciosica
555bcb5e54
static access for code_for_op ( #6889 )
2024-10-05 07:38:01 +08:00
vladov
5f6b6162b3
Suppress warnings in transcendental tests. ( #6891 )
2024-10-05 07:37:17 +08:00
George Hotz
4df5c7a4ef
move lazy to engine [pr] ( #6886 )
* move lazy to engine [pr]
* engine.lazy
2024-10-04 23:19:26 +08:00
George Hotz
6b063450df
move hcq device to runtime [pr] ( #6879 )
* things that are only used in one place don't belong in helpers [pr]
* start moving hcq device [pr]
* fix paths
2024-10-04 22:26:50 +08:00
George Hotz
8ca506ee37
remove the magic methods for moving between devices [pr] ( #6881 )
* remove the magic methods for moving between devices [pr]
* remove unneeded clang
2024-10-04 20:27:52 +08:00
George Hotz
a0cb16ac61
node cleanup + local metal test speed [pr] ( #6880 )
* node cleanup [pr]
* fix tests, including the double one on metal
* no time tqdm tests
2024-10-04 18:14:23 +08:00
George Hotz
cdff1d75b6
things that are only used in one place don't belong in helpers [pr] ( #6878 )
* things that are only used in one place don't belong in helpers [pr]
* pretty print moved
2024-10-04 17:27:38 +08:00
George Hotz
f4ec39fe58
switch symbolic from old to uops, final PR ( #6872 )
* switch symbolic from old to uops, final PR
* two wrong answers
* not needed resolves
* symbolic ops passes
* symbolic ops passes
* progress
* tests pass (almost)
* fix last test
* fix some tests
* global binding and unbinding
* Revert "global binding and unbinding"
This reverts commit 9456725630.
* that test works now
* vars on uop doesn't recurse
* fix fuzzer
* update
* fix type
* fix gpt, it's UOp now
* ssimplify symbolics
2024-10-04 16:42:27 +08:00
George Hotz
738a5794a9
last update for new symbolic [pr] ( #6877 )
2024-10-04 14:58:51 +08:00
qazal
17068410e6
give EXT schedules metadata [pr] ( #6865 )
2024-10-03 20:14:18 +08:00
George Hotz
e10245909a
explore global uop cache [pr] ( #6863 )
* explore global uop cache
* wvd uops
* remove useless lru caches
* key is is
* simpler rewriter
2024-10-03 13:08:13 +08:00
chenyu
c3c93f332a
symbolic bool raise ValueError when not sure [pr] ( #6853 )
2024-10-02 09:10:58 -04:00
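
The design choice in a toy class (illustrative names only, not tinygrad's): truth-testing a symbolic comparison whose answer depends on the variable's value raises instead of silently picking a branch, so callers have to resolve it explicitly.

    # toy illustration of "raise ValueError when not sure"; not tinygrad's classes
    class SymBool:
        def __init__(self, known=None): self.known = known   # None = undetermined
        def __bool__(self):
            if self.known is None: raise ValueError("symbolic bool is not statically known")
            return self.known

    cond = SymBool(None)   # e.g. (x < 4) where x can fall on either side of 4
    try:
        if cond: pass
    except ValueError as e:
        print(e)           # caller must decide explicitly instead of guessing
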
George Hotz
7214450c23
little symbolic changes [pr] ( #6849 )
* little symbolic changes [pr]
* symbolic needs resolve too
* no resolve
* less change
2024-10-02 17:12:30 +08:00
George Hotz
be12409b51
changes for symbolic ( #6844 )
* changes for symbolic
* only for ints
* check int first
2024-10-02 12:57:16 +08:00
George Hotz
100ce7a684
hotfix: min/max on CMPNE was wrong
2024-10-02 10:15:03 +08:00
George Hotz
1ac83aaa4b
lil sym changes ( #6837 )
* lil sym changes [pr]
* fix inf crap
* Update ops.py
* remove that, it's wrong
2024-10-02 09:54:17 +08:00
George Hotz
84726e8855
good changes from symbolic removal [run_process_replay] ( #6835 )
* good changes from symbolic removal [run_process_replay]
* fix __ne__
2024-10-01 18:49:09 +08:00
qazal
c5b252cdb3
add pr alias [pr] ( #6834 )
2024-10-01 18:48:44 +08:00
George Hotz
e907b25792
move some pm rules to uopgraph.py [run_process_replay] ( #6831 )
* move some pm rules to uopgraph.py [run_process_replay]
* move more
* move lt and clean
* end maybe
* put back
2024-10-01 18:28:41 +08:00
vladov
501cfde7e6
Fix GPT2 with OpenCL backend. ( #6821 )
* Fix GPT2 with OpenCL backend.
* Add test for unaligned copies into OpenCL buffers.
2024-10-01 16:57:22 +08:00
qazal
a16a8c5958
color process replay stats [run_process_replay] ( #6830 )
2024-10-01 15:29:11 +08:00
George Hotz
547733e57c
stunning_mnist [run_process_replay] ( #6828 )
* stunning_mnist [run_process_replay]
* add loss to stunning mnist
2024-10-01 15:00:48 +08:00
qazal
391497a311
schedule independent of Device [run_process_replay] ( #6829 )
2024-10-01 14:46:26 +08:00
George Hotz
8a93c48901
pickle main pattern matcher [run_process_replay] ( #6827 )
* pickle main pattern matcher [run_process_replay]
* del line
2024-10-01 13:58:42 +08:00
George Hotz
d726eb6f48
uop resolve [run_process_replay] ( #6826 )
* uop bool and int and stuff [run_process_replay]
* add ne support
* can't even be None anymore
* BinaryOps.AND support
* less compare
2024-10-01 13:11:42 +08:00
George Hotz
50dd6bd951
move cmp tuple out [run_process_replay] ( #6825 )
* move cmp tuple out [run_process_replay]
* was unneeded
2024-10-01 10:38:28 +08:00