chenyu
b7397c1322
more typing cleanups [pr] ( #8376 )
...
List, Tuple, DefaultDict
2024-12-22 05:21:03 -05:00
chenyu
afcd70af97
remove untriggered/untested code [pr] ( #8375 )
...
ran with `coverage` on tests. not sure if we still want max_var_const, so just commented it out
2024-12-22 04:07:53 -05:00
chenyu
8fdcb60461
remove unused View.t and lt [pr] ( #8374 )
2024-12-22 02:26:54 -05:00
chenyu
7ea633f94f
remove from __future__ import annotations from runtimes [pr] ( #8373 )
...
it's not needed if we move the Device before Program and Allocator, which need Device.
not updating hcq because it has a lot more stuff, and CLDevice requires CLDevice
2024-12-21 23:46:07 -05:00
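A minimal sketch of the ordering idea behind the commit above (class names and bodies here are illustrative, not tinygrad's actual runtime code): once the class that others annotate against is defined first, its name resolves directly, so the `from __future__ import annotations` shim is no longer needed.

```python
# illustrative only: defining Device before Program and Allocator means their
# annotations can reference Device directly, so there is no need for
# `from __future__ import annotations` (which defers annotations to strings).

class Device:
  def __init__(self, name:str): self.name = name

class Allocator:
  # Device is already defined above, so this annotation resolves at class-creation time
  def __init__(self, dev:Device): self.dev = dev

class Program:
  def __init__(self, dev:Device, src:str): self.dev, self.src = dev, src
```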
chenyu
e934f987c6
minor cstyle cleanup [pr] ( #8371 )
...
* minor cstyle cleanup [pr]
2024-12-21 22:17:45 -05:00
qazal
514a6740e4
Revert "CONST(VIEW(DEVICE)) ( #8365 )" ( #8372 )
...
This reverts commit 83284985f0.
2024-12-22 04:44:34 +02:00
qazal
83284985f0
CONST(VIEW(DEVICE)) ( #8365 )
2024-12-22 04:18:35 +02:00
qazal
88bc51385c
scheduler: don't trade complexity for speed ( #8370 )
...
* scheduler: don't trade complexity for speed
* don't need is_scheduled
* make those tests real world
* graph_rewrite dedup
2024-12-22 03:30:51 +02:00
qazal
991b91d4d6
fix string repr of arg in viz and print [pr] ( #8369 )
2024-12-21 23:44:10 +02:00
ignaciosica
ba0c844a83
special tol when f16 and bf16 are tc input dtypes ( #8183 )
2024-12-21 11:32:26 -05:00
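A hedged sketch of the general idea in the commit above: tensor cores accumulate f16/bf16 inputs with fewer mantissa bits than float32, so tests comparing against a float32 reference need looser tolerances. The helper name and the tolerance values below are assumptions for illustration, not the values used in the PR.

```python
import numpy as np

# hypothetical helper: pick looser atol/rtol when the matmul inputs are half/bfloat16
def tolerances_for(input_dtype:str) -> tuple[float, float]:
  if input_dtype in ("half", "bfloat16"): return 1e-2, 1e-2   # assumed relaxed tolerance
  return 1e-4, 1e-4                                           # assumed default tolerance

def check_close(result:np.ndarray, expected:np.ndarray, input_dtype:str):
  atol, rtol = tolerances_for(input_dtype)
  np.testing.assert_allclose(result, expected, atol=atol, rtol=rtol)
```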
geohotstan
3f83748661
update onnx and onnx_ops to 3.10+ typing ( #8360 )
...
* fixed mypy and updated to modern typing
* selective ruff check changes (all except E501)
* some more clean ups
* fix comment
* small nit
2024-12-21 11:17:47 -05:00
qazal
72aa38aa3b
BIND in tensor_uop_spec + cleanups [pr] ( #8363 )
...
* Ops.BIND pattern in tensor_uop_spec + cleanups [pr]
* use metaops there
2024-12-21 21:26:47 +08:00
qazal
4e8812db37
asserts to prepare for Tensor BIND proposal [pr] ( #8362 )
2024-12-21 20:35:08 +08:00
chenyu
1ce9851ba6
import and type cleanups [pr] ( #8359 )
...
Dict, DefaultDict, and some imports
2024-12-20 21:52:02 -05:00
chenyu
18dca3c3d7
isolate train_gpt2 slow kernels [pr] ( #8358 )
...
also fixed run_linearizer with var_vals=None
2024-12-20 17:59:01 -05:00
George Hotz
9f62c80f68
hotfix: this is a loan
2024-12-20 14:47:04 -08:00
qazal
2649e87546
delete the fake buffer from const ( #8355 )
...
* delete the fake buffer from const
* fix test_sink_childless_const_alt
* it should be CONST(VIEW(DEVICE))
2024-12-21 04:20:28 +08:00
George Hotz
b7499764f5
hotfix: have viz hide the stupid -1 BUFFERs
2024-12-20 10:47:44 -08:00
chenyu
cd79a904c5
add back explicit dict[DType, str] in ptx [pr] ( #8352 )
2024-12-20 13:19:48 -05:00
George Hotz
074315ec08
hotfix: simpler test_mnist_model
2024-12-20 10:18:17 -08:00
chenyu
20eebbc61a
minor PTX cleanups [pr] ( #8351 )
2024-12-20 12:52:53 -05:00
qazal
59f4b8da95
Tensor uop spec ( #8311 )
...
* Tensor uop spec
* minor
* feedback
* restrict ShapeTracker of VIEW(BUFFER) to contiguous
* in image base mutates, how do we rewrite the view?
* cast post realize
* now ucache errors
* how strict can this be?
* put constraints on EMPTY
* merge
* save lines
* import import
* overloaded assign target
* more strict
* fine don't overload it
* more
* actually, this is better
* and it even exists
* this way it works for BUFFER
* Revert "this way it works for BUFFER"
This reverts commit 71c15f6b14.
* make it like linearize.py
* assign take 4
* minor
* all int, space and that's already base
* target
---------
Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
2024-12-20 23:47:40 +08:00
qazal
5776ea9386
hotfix: account for all changes in process_replay early stopping [pr] ( #8348 )
2024-12-20 23:46:46 +08:00
chenyu
e63c7818dc
few type cleanups [pr] ( #8347 )
2024-12-20 01:56:01 -05:00
George Hotz
82833f1b3c
a little more typing [pr] ( #8346 )
...
* a little more typing [pr]
* few more
2024-12-19 22:09:52 -08:00
George Hotz
62e5d96446
more typing work [pr] ( #8345 )
2024-12-19 21:46:35 -08:00
George Hotz
9c77e9f9b7
replace Tuple with tuple [pr] ( #8344 )
...
* replace Tuple with tuple [pr]
* replace List with list [pr]
* replace Dict with dict [pr]
* replace Set with set [pr]
2024-12-19 21:27:56 -08:00
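For context, the typing commits in this stretch move from the capitalized `typing` aliases to the 3.9+ builtin generics (PEP 585). A small before/after sketch, illustrative code rather than a diff from the PR:

```python
# before: capitalized aliases imported from typing
from typing import Dict, List, Tuple

def group(pairs: List[Tuple[str, int]]) -> Dict[str, int]:
  out: Dict[str, int] = {}
  for k, v in pairs: out[k] = out.get(k, 0) + v
  return out

# after: builtin generics, no typing import needed for these annotations
def group_new(pairs: list[tuple[str, int]]) -> dict[str, int]:
  out: dict[str, int] = {}
  for k, v in pairs: out[k] = out.get(k, 0) + v
  return out
```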
George Hotz
adcdc583a2
small cleanups [pr] ( #8343 )
...
* small cleanups [pr]
* GPU suppress
2024-12-19 21:20:46 -08:00
George Hotz
9f306e12ac
hotfix: test_net_speed can't backward before realize
2024-12-19 20:32:59 -08:00
George Hotz
aa9462c29b
fix (some) requires_grad [pr] ( #8342 )
2024-12-19 19:34:14 -08:00
Harald Schäfer
7059459648
Openpilot compile: fix for openpilot use ( #8338 )
...
* compile3 changes
* merge conflict
* merge conflict
* give dm npy for now
* Revert "give dm npy for now"
This reverts commit bfd980da7d2c2bab5b073127442c361922032ba1.
* updates
* Always float32 floats
* Update compile3.py
* Update compile3.py
---------
Co-authored-by: ZwX1616 <zwx1616@gmail.com>
2024-12-19 19:43:15 -05:00
chenyu
7153f7709f
update test_merge_view_recursion_err2 [pr] ( #8339 )
...
the view was not created through View.create; updated the test to show the expected behavior
2024-12-19 18:29:34 -05:00
chenyu
2bf47b75da
temp fix for symbolic shape view add [pr] ( #8337 )
...
something is still wrong with symbolic shape shrink, but it should not recurse forever
2024-12-19 16:10:42 -05:00
chenyu
791a80a1c7
add failed merge view example to test_simplify_valid_idx [pr] ( #8334 )
...
* add failed merge view example to test_simplify_valid_idx [pr]
* !=True is fine
2024-12-19 12:54:03 -05:00
qazal
8e266091fb
tensor const spec [pr] ( #8331 )
2024-12-19 22:41:30 +08:00
George Hotz
0ad264ed2d
new from uops [pr] ( #8330 )
...
* new from uops [pr]
* mem_estimate is its own thing
2024-12-18 23:42:58 -08:00
George Hotz
2aa39d03cd
cleanups from Estimate [pr] ( #8329 )
2024-12-18 23:01:14 -08:00
George Hotz
3a9ca62b9e
get_single_element [pr] ( #8328 )
2024-12-18 22:23:45 -08:00
geohotstan
423d823c50
add GatherND and ScatterND to onnx ops ( #8241 )
...
* implemented
* this implementation is now correct
* this is fine I guess
* better variable names
* finally correct gathernd
* add a note
* eh just leave it at this for now
* teeny adjustment
2024-12-19 00:35:04 -05:00
chenyu
accc186c8b
remove a leading 1 check in _reshape_mask [pr] ( #8327 )
...
the only possible mask for it is either (0, 0) or (0, 1), so the logic is a no-op
2024-12-18 19:30:10 -05:00
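To unpack the reasoning in the commit above: a mask is a per-dimension `(begin, end)` half-open range, so a dimension of size 1 can only be fully kept, `(0, 1)`, or fully masked out, `(0, 0)`, which is why a special check for it can't change anything. A tiny numpy illustration of the same fact (not tinygrad's `_reshape_mask` itself):

```python
import numpy as np

# illustrative only: for a size-1 dimension, the only (begin, end) masks are
# "keep nothing" (0, 0) and "keep the single element" (0, 1)
x = np.arange(3).reshape(1, 3)
assert x[0:0].shape == (0, 3)   # mask (0, 0): nothing kept
assert x[0:1].shape == (1, 3)   # mask (0, 1): the whole size-1 dimension kept
```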
chenyu
8a8eaa1ed9
minor change to _reshape_mask [pr] ( #8324 )
...
formatting before logic change
2024-12-18 16:29:12 -05:00
George Hotz
6608ba316d
add size of the buffer to the ptr dtype ( #8322 )
2024-12-18 12:46:35 -08:00
George Hotz
52243b258c
move flops_mem to renderer [pr] ( #8320 )
2024-12-18 12:13:17 -08:00
chenyu
d2ee304337
minor cleanup to _reshape_mask [pr] ( #8321 )
...
removed unused mask check and combined if blocks
2024-12-18 15:09:33 -05:00
chenyu
b4bb8de7f4
remove Sigmoid from function.py [pr] ( #8318 )
2024-12-18 13:23:38 -05:00
George Hotz
8f95b578f6
use Estimates class [pr] ( #8319 )
...
* use Estimates class [pr]
* frozen dataclass
2024-12-18 10:19:32 -08:00
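A hedged sketch of the "frozen dataclass" pattern mentioned in the bullets above; the field names are assumptions for illustration, not necessarily those of tinygrad's Estimates class:

```python
from dataclasses import dataclass

# frozen=True makes instances immutable and hashable, which suits a value object
# that only carries per-kernel cost estimates (field names are illustrative)
@dataclass(frozen=True)
class Estimates:
  ops: int = 0   # estimated arithmetic op count
  mem: int = 0   # estimated bytes moved to/from memory
  lds: int = 0   # estimated bytes of loads/stores issued

e = Estimates(ops=1024, mem=4096, lds=4096)
# e.ops = 0  # would raise dataclasses.FrozenInstanceError
```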
chenyu
63f195729d
add gguf_load to doc [pr] ( #8314 )
...
Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
2024-12-18 12:44:09 -05:00
George Hotz
bd9c015b09
tests from grad uop path [pr] ( #8313 )
2024-12-18 09:25:05 -08:00
George Hotz
6a1987f9f9
hotfix: detach is not a metaop
2024-12-18 09:23:42 -08:00
qazal
fddaeb6344
scheduler deduping spec and asserts [pr] ( #8307 )
...
Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
2024-12-18 09:21:41 -08:00