Commit Graph

10417 Commits

George Hotz
46b05daf7c simple display_name (#2416)
* simple display_name

* name functions

* self.global_size [1]

* CompiledASTRunner display_name

* assert sizes are len 3

* 3 dims for GPU

* auto self.global_size
2023-11-23 19:50:23 -08:00
George Hotz
12023b6824 onnx ops cleanup (#2413)
* onnx ops cleanup

* revert those
2023-11-23 18:39:49 -08:00
George Hotz
8f89e21fca torch and numpy don't share ops anymore (#2412)
* torch and numpy don't share ops anymore

* that should be filtered out elsewhere

* still const

* graph + enet example cleanup

* hmm, we do still need it because of symbolic
2023-11-23 16:58:10 -08:00
George Hotz
193be14b6c that had bugs, force an order (#2411) 2023-11-23 15:52:16 -08:00
George Hotz
65f4e6971b beautiful_mnist.py link 2023-11-23 14:58:22 -08:00
George Hotz
1b3b8de5e2 update readme examples 2023-11-23 14:54:52 -08:00
George Hotz
5bb720a777 Cocoa is no longer used 2023-11-23 14:31:21 -08:00
George Hotz
095e2ced61 add name support to fetch (#2407)
* add name support

* use fetch in gpt2

* remove requests from main lib, networkx also optional

* umm, keep that assert

* updates to fetch

* i love the walrus so much

* stop bundling mnist with tinygrad

* err, https

* download cache names

* add DOWNLOAD_CACHE_VERSION

* need env.

* ugh, wrong path

* replace get_child
2023-11-23 14:16:17 -08:00
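The fetch rework above streams downloads and mentions loving "the walrus" — Python's `:=` assignment expression, which folds the classic read-then-test download loop into one line. A minimal illustrative sketch of that chunked-copy idiom (not tinygrad's actual `fetch` code):

```python
import io

def copy_chunks(src: io.BufferedIOBase, dst: io.BufferedIOBase, bufsize: int = 16384) -> int:
    # The walrus operator assigns the chunk and tests it for EOF (empty
    # bytes are falsy) in a single loop header.
    total = 0
    while chunk := src.read(bufsize):
        dst.write(chunk)
        total += len(chunk)
    return total

out = io.BytesIO()
assert copy_chunks(io.BytesIO(b"x" * 40000), out) == 40000
assert out.getvalue() == b"x" * 40000
```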
nimlgen
397c093656 fix wait in jit (#2408) 2023-11-23 13:54:13 -08:00
qazal
b927942d58 Move HIP render logic to its dedicated place (#2394)
* update HIP language

* vectorized render_cast with special treatment for hip only

* test coverage for all cases

---------

Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
2023-11-23 13:03:29 -08:00
Francis Lata
6d672785db Update Whisper to use fetch helper (#2401)
* update whisper to use new fetch helper

* simplify file opening

* update name

* update key name to "downloads-cache"
2023-11-23 12:59:59 -08:00
George Hotz
0505c5ea50 remove force_wait, refactor to graph (#2405)
* remove force_wait

* refactor

* get rid of stupid ASTRunner

* fix del in diskbuffer

* BufferOps.FROM_UNDERLYING

* put offset in the rawbuffer

* fix bugs

* use exec
2023-11-23 12:46:07 -08:00
Ivan Beňovic
c5d585ea35 Fix Triton README broken link (#2406)
* Remove triton from README

* Fix broken link
2023-11-23 12:38:17 -08:00
chenyu
b27c845531 minor cleanup for View strides (#2404) 2023-11-23 13:40:01 -05:00
chenyu
64aa2f4156 clean up to_shape_strides (#2402) 2023-11-23 13:04:00 -05:00
George Hotz
e4026dc197 don't pass lazybuffer to rawbuffer (#2400)
* don't pass lazybuffer to rawbuffer

* tensor comments
2023-11-23 09:40:28 -08:00
Ryan Dorrington
aefa97a962 Remove runtime imports in realize (#2157)
* steal from https://github.com/PalauReq

* tests passing but not correct

* move _realize_from if statements to lib.py

* oneline

* cleanup

* remove imports & add P2P back in

* cleanup

* fromBuffer & call fromCPU rather than super().fromBuffer

* remove whitespace

* move RawBufferMapped.fromBuffer functionality to RawDiskBuffer

* remove classmethod and realize

---------

Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
2023-11-23 09:17:04 -08:00
George Hotz
4f8f0ac139 minor cleanups, remove dead files (#2398)
* minor cleanups, remove dead files

* s.name

* use disk

* pytest passes on mac
2023-11-23 09:01:50 -08:00
George Hotz
66c75f30c6 remove triton (#2396) 2023-11-23 07:40:59 -08:00
George Hotz
8656eebb42 jit doesn't use named tensors (#2393)
* jit doesn't use named tensors

* move to compile2

* remove broken single root junk

* explicit float32

* skip slow test
2023-11-23 00:13:18 -08:00
George Hotz
80e4ad8bf5 faster get_recursive_parents (#2392)
* faster get_recursive_parents

* skip test for those

* full sum works everywhere

* timing

* debug print
2023-11-22 20:37:19 -08:00
chenyu
8798d120bb autopad shapetracker for BEAM (#2375)
* autopad shapetracker for BEAM

* OptOps.PADTO

* skip that test for now

* correct padding reduce axis

* just 32

* avoid more than double the FLOPs

* cleanups

* test case

* no support for triton and llvm yet

* typos

* symbolic shape would not work

* cannot PADTO with MAX kernel

* advance db version

* no breaking change - don't advance db version

* is triton just python?

* Revert "is triton just python?"

This reverts commit 17e776c25587615e33a3634c2fb0bb8591ce65d4.

* Revert "Revert "is triton just python?""

This reverts commit 6c434c01e1c4b0ea0431ec18632cd859fb3cf260.

* support llvm

* is it really passing in CI only?

* update tests

* oh triton test passed

* simpler

* revert that, with a test

* check if st are the same

* Revert "check if st are the same"

This reverts commit d2a5eac110a5da1af82a2728c883779ef69c3cad.

* update the db version

* rebase artifact
2023-11-22 21:05:25 -05:00
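The autopad commit above pads shapes to a friendly size ("just 32") while refusing to "more than double the FLOPs". A hypothetical sketch of that padding rule — the function name and threshold here are illustrative, not tinygrad's actual `OptOps.PADTO` logic:

```python
def padto(dim: int, multiple: int = 32, max_blowup: float = 2.0) -> int:
    # Round dim up to the next multiple of `multiple`, but keep the
    # original size when padding would more than double the work.
    padded = -(-dim // multiple) * multiple  # ceiling division
    return padded if padded / dim <= max_blowup else dim

assert padto(30) == 32   # small pad, accepted
assert padto(33) == 64   # 64/33 ~ 1.94x, still under the 2x guard
assert padto(15) == 15   # 32/15 ~ 2.13x, padding rejected
```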
Tiny Box
162db466c3 hotfix: fix hip WMMA casting hack 2023-11-22 17:58:08 -08:00
George Hotz
6ceecc961e hotfix: scalar 2023-11-22 17:48:24 -08:00
qazal
0eda545946 dtypes.float.vec(sz) (#2386)
* replace all _dtypen with dtype.vec(n)

fix: print works

* conceptual refactor of cstyle render_load logic

* linearizer GEP is explicit that its dtype is the scalar version of localtype

* vectorized global_store and load don't need a conditional
2023-11-22 17:43:14 -08:00
George Hotz
cbb8486779 ResNet training changes (update benchmark) (#2390)
* default arg for chunk

* bring back to_

* good changes

* new set

* unused hash

* fix optim

* new torch loader

* fix test lr scheduler
2023-11-22 17:41:12 -08:00
George Hotz
2dec86970a hotfix: default remains gen 1 llama 2023-11-21 14:43:02 -08:00
mmmkkaaayy
7f0cc4a4e8 whisper: support audio >30s (#2378)
* whisper: support audio >30s

* make prompt indexing consistent with reference repo

* fix online
2023-11-21 14:37:51 -08:00
Oleg Rybalko
7220f5c9fc fixed hf convert and now it's working with tinyllama (#2374)
* fixed hf convert and now it's working with tinyllama

* added tinyllama config

* refactored code and made it work with all llama models

* prettier order

* prettier order

* fixed suffix for tinyllama and refactored convert_from_hf

* dynamically update help if MODEL_PARAMS changes and default size is the 1st
2023-11-21 14:36:52 -08:00
chenyu
d0f966b320 add a segfault linearizer test case (#2383)
* add a segfault linearizer test case

* another interesting one
2023-11-21 15:06:41 -05:00
chenyu
9eeba968cd fix the variable arg order (#2382) 2023-11-21 12:02:31 -05:00
nimlgen
c5f429a40a Fix linearizer cache (#2371)
* fix linearizer cache

* better comments

* a bit cleaner
2023-11-21 07:58:35 -08:00
Umut Zengin
0da72119bb Readable and Faster Union of Vars (#2380)
* functools reduce to set.union

* flake8
2023-11-21 09:45:19 -05:00
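The commit above collapses a manual union loop into a `functools.reduce` over `set.union`. A self-contained sketch of the pattern (generic names, not tinygrad's `Node` internals):

```python
import functools
from typing import Iterable, Set

def union_of_vars(var_sets: Iterable[Set[str]]) -> Set[str]:
    # Fold set.union across all per-node variable sets; the empty-set
    # seed makes the zero-input case well defined.
    return functools.reduce(set.union, var_sets, set())

assert union_of_vars([{"a", "b"}, {"b", "c"}, {"c"}]) == {"a", "b", "c"}
assert union_of_vars([]) == set()
```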
qazal
15c316b9b1 add marker (#2379) 2023-11-21 09:44:15 -05:00
wozeparrot
fb0d650b25 feat: don't optimize buffers when its not an astrunner (#2377) 2023-11-20 22:07:31 -08:00
wozeparrot
abbcc7aefa missed cleanup from cache_id removal (#2376) 2023-11-21 01:03:43 -05:00
Duc TranMinh
179551a55c remove file writing in metal ops (#2369)
* remove file writing in metal ops

* remove unused import

---------

Co-authored-by: ductm104 <ductm>
2023-11-20 19:24:39 -08:00
chenyu
c4cc4966ed update some test_tensor.py cases with 0 in shape (#2368) 2023-11-19 20:35:05 -05:00
chenyu
6add808f6a support tuple shape input for rand and empty (#2367) 2023-11-19 20:20:39 -05:00
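Supporting tuple shape input means a call like `rand((2, 3))` works alongside `rand(2, 3)`. A minimal sketch of the usual normalization trick, assuming a varargs signature (illustrative only, not tinygrad's actual implementation):

```python
from typing import Tuple

def normalize_shape(*shape) -> Tuple[int, ...]:
    # Accept both rand(2, 3) and rand((2, 3)): if the single argument
    # is itself a tuple/list, unwrap it as the full shape.
    if len(shape) == 1 and isinstance(shape[0], (tuple, list)):
        return tuple(shape[0])
    return tuple(shape)

assert normalize_shape(2, 3) == (2, 3)
assert normalize_shape((2, 3)) == (2, 3)
assert normalize_shape([4]) == (4,)
```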
chenyu
e9847be790 remove whisper +1-1 hack (#2360)
* remove whisper +1-1 hack

* Revert "remove whisper +1-1 hack"

This reverts commit 5db3800f09.

* update whisper tests

* comment context
2023-11-19 17:56:36 -05:00
George Hotz
a0890f4e6c move fetch to helpers (#2363)
* switch datasets to new fetch

* add test_helpers

* fix convnext and delete old torch load
2023-11-19 12:29:51 -08:00
chenyu
03968622a2 Pretty multinomial (#2365)
* pretty multinomial

p, cdf_normalized -> weight, cdf
symmetric unsqueeze / squeeze
check num_sample > 0

TODO: how do we want to handle 0/0 in general?

* no 0-dim input

* single sum
2023-11-19 15:10:10 -05:00
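The multinomial cleanup above renames `p, cdf_normalized` to `weight, cdf` and adds a `num_samples > 0` check. The underlying technique is inverse-CDF sampling; a pure-Python sketch under those renamed variables (illustrative, not the tensor-level tinygrad code):

```python
import random
from typing import List

def multinomial(weight: List[float], num_samples: int) -> List[int]:
    # Sample indices in proportion to weight by inverting the
    # normalized cumulative distribution function (CDF).
    assert num_samples > 0, "num_samples must be positive"
    total, cdf, acc = sum(weight), [], 0.0
    for w in weight:
        acc += w
        cdf.append(acc / total)  # normalized running sum, ends at 1.0
    # for each uniform draw, pick the first index whose CDF exceeds it
    return [next(i for i, c in enumerate(cdf) if random.random() < c)
            for _ in range(num_samples)]

assert multinomial([0.0, 1.0], 5) == [1, 1, 1, 1, 1]
assert all(0 <= i < 3 for i in multinomial([1.0, 2.0, 3.0], 10))
```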
Friedrich Carl Eichenroth
0eb0defa6f remove unused key properties (#2359) 2023-11-18 23:30:21 -08:00
Friedrich Carl Eichenroth
b3a21eee7d just new types (#2358) 2023-11-18 23:29:46 -08:00
chenyu
f203d37258 retry test_webgpu.js 3 times (#2362) 2023-11-18 21:24:47 -05:00
mmmkkaaayy
08d09eb666 Enable whisper test in CI for more backends (#2355) 2023-11-18 17:52:50 -05:00
chenyu
d7d078c7f9 Node.vars() returns a set and properly dedup (#2356)
* dedup RedNode.vars()

* vars returns a set

* fix more vars

* unused import

* update to_movement_ops

* comment
2023-11-18 17:44:52 -05:00
chenyu
0443cbfbb9 fix shm path test on macos (#2357)
AttributeError: 'PosixPath' object has no attribute 'startswith'
2023-11-18 17:37:42 -05:00
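The `AttributeError: 'PosixPath' object has no attribute 'startswith'` above is the classic symptom of calling a `str` method on a `pathlib` object; the standard fix is normalizing with `os.fspath` (or `str`) first. A hypothetical helper showing the pattern (not the actual test code):

```python
import os
from pathlib import PurePosixPath

def is_shm_path(p) -> bool:
    # Path objects have no .startswith; os.fspath accepts both str and
    # any path-like object, so the check works for either input.
    return os.fspath(p).startswith("/dev/shm")

assert is_shm_path(PurePosixPath("/dev/shm/cache"))
assert is_shm_path("/dev/shm/cache")
assert not is_shm_path("/tmp/cache")
```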
chenyu
f02e17a967 Variable.num -> NumNode (#2354) 2023-11-18 15:45:52 -05:00
George Hotz
40246d35bc ops_shm removed (#2351)
* ops_shm removed

* buf.cast

* err, forgot those
2023-11-18 11:41:58 -08:00