qazal
ed672881b0
remove additions/deletions in pr + check uops are equal [pr] (#8779)
* use warnings there [pr]
* remove those + move assert_diff [pr]
* warn after log
* remove
* back
2025-01-28 08:57:34 +02:00
Ignacio Sica
2c71c60719
opt arg is int or tuple (#8780)
2025-01-28 11:02:32 +09:00
George Hotz
62655e4999
move multi into engine [pr] (#8778)
* move multi into engine [pr]
* all runtime is one sz
2025-01-28 09:15:29 +09:00
nimlgen
299fa8f37b
am: unset high clocks for sleep (#8775)
2025-01-28 01:15:56 +03:00
chenyu
c99ae81f63
update default resnet LOSS_SCALER to 256 [pr] (#8774)
2025-01-27 16:59:05 -05:00
nimlgen
1c608ae34f
am_smi: better spacing (#8773)
* am_smi: better spacing
* not used
2025-01-27 23:01:02 +03:00
Ignacio Sica
b240f12593
[TIP-9] rename Opt's amt to arg 2 (#8770)
* rename Opt amt to arg
* ignore_beam_cache for test_tiny
* move ignore_beam_cache to test_tiny
* move to separate pr
* revert space change
---------
Co-authored-by: chenyu <chenyu@fastmail.com>
2025-01-27 14:19:04 -05:00
chenyu
9760688e7f
use IGNORE_BEAM_CACHE in search [pr] (#8772)
2025-01-27 13:41:01 -05:00
Ignacio Sica
ed1b573868
ignore beam cache in test_tiny for stateless beam (#8771)
2025-01-27 12:56:30 -05:00
George Hotz
3ed146a5ff
Revert "rename Opt amt to arg (#8767)" (#8769)
This reverts commit bf041659a5.
2025-01-27 23:46:37 +09:00
Ignacio Sica
bf041659a5
rename Opt amt to arg (#8767)
2025-01-27 23:36:47 +09:00
George Hotz
96bff0b4f7
contiguous is no longer needed in SGD [pr] (#8760)
* contiguous is no longer needed in SGD [pr]
* add allow condition
2025-01-27 15:19:11 +09:00
b1tg
efc7971090
add windows test to ci (#8761)
Co-authored-by: b1tg <b1tg@users.noreply.github.com>
2025-01-27 14:53:21 +09:00
George Hotz
a9d9f98d05
hotfix: those tests fail locally on mac due to buffer count
2025-01-27 07:53:48 +09:00
George Hotz
2454bf01c3
hotfix: remove shapetracker spam in viz
2025-01-27 07:20:21 +09:00
qazal
d488bbb1ec
share merge_views/valid creation for CONST/DEFINE_VAR (#8758)
* share valid creation behavior for CONST/DEFINE_VAR
* work
2025-01-26 17:41:54 +02:00
qazal
bbb2dd8141
move VALID creation after merging the views (#8757)
* do valid creation later
* work for view_left
* only view(const) makes valids in view_left
* cleaner bind diff
2025-01-26 16:58:05 +02:00
George Hotz
a6e496b195
remove Function class [pr] (#8753)
* remove Function class [pr]
* actually remove function
* fix docs
2025-01-26 18:58:02 +09:00
qazal
ac70f63d4b
tensor_map cleanups [pr] (#8754)
* tensor_map cleanups [pr]
* update test_schedule too
2025-01-26 11:41:54 +02:00
George Hotz
b53fe7c2fc
remove unused ctx [pr] (#8751)
* remove unused ctx [pr]
* fix test
2025-01-26 17:59:15 +09:00
qazal
06b58aa7ec
move unneeded fields out of ScheduleContext [pr] (#8752)
2025-01-26 10:36:15 +02:00
George Hotz
1b4618e257
gradient cleanup (#8750)
* switch backward to use gradient [pr]
* set device correctly, dedup
* why does that fail?
* add noop cast
* simple backward
* fix beautiful_mnist
* touchups
* set in compute_gradient
* uop_count
* uop_count was wrong
* collections
* no note
* skip that test
* update sched kernel counts
* train mnist is 65
* fix metadata and gc
* fixes
* materialize_grads
* no pathlib stuff
* add contiguous_backward, fix bugs
* add some realize
* fix multi
* remove unused backward passes [pr]
* lower line count
2025-01-26 09:30:55 +09:00
George Hotz
b4bf6a7dea
switch backward to use gradient [pr] (#8235)
* switch backward to use gradient [pr]
* set device correctly, dedup
* why does that fail?
* add noop cast
* simple backward
* fix beautiful_mnist
* touchups
* set in compute_gradient
* uop_count
* uop_count was wrong
* collections
* no note
* skip that test
* update sched kernel counts
* train mnist is 65
* fix metadata and gc
* fixes
* materialize_grads
* no pathlib stuff
* add contiguous_backward, fix bugs
* add some realize
* fix multi
2025-01-26 09:12:16 +09:00
George Hotz
0ffd572e1e
fix multi with no real srcs (#8749)
2025-01-26 08:41:00 +09:00
qazal
0e42befc6e
viz cleanups 2 [pr] (#8748)
* viz cleanups 2 [pr]
* test_viz updates
2025-01-25 19:41:57 +02:00
nimlgen
c74c5901a8
am disable bind (#8747)
2025-01-25 19:06:35 +03:00
qazal
a037201168
test_viz cleanups + move to /unit directory (#8746)
* test_viz cleanups + move to /unit directory
* lint
2025-01-25 14:33:31 +02:00
chenyu
e2b380b743
make UOp.multi real a tuple instead of list [pr] (#8744)
tuple is immutable. also updated test_rand_like_from_alu test
2025-01-24 20:47:27 -05:00
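Aside: a minimal sketch (not from the commit) of why an immutable tuple suits a field like `real` that may be hashed or deduplicated, while a list does not:

```python
# tuples are immutable and hashable; lists are neither, so a list-valued
# field can be mutated behind a cache's back and cannot serve as a dict key.
real = (True, False)                 # tuple: fixed contents, usable as a key
cache = {real: "cached result"}
print(cache[(True, False)])          # hits: equal tuples hash the same
try:
    cache[[True, False]]             # list: unhashable, raises TypeError
except TypeError as e:
    print("list is unhashable:", e)
```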
George Hotz
cb0978b377
add Ops.CONTIGUOUS_BACKWARD (#8743)
2025-01-25 07:28:43 +09:00
nimlgen
2f06eccf1d
am: script and vfio msg (#8742)
* am: script and vfio msg
* use sysfs bars always for now
* tiny changes
2025-01-25 00:33:00 +03:00
chenyu
0c759e1ff6
add bert to benchmark ci (#8741)
with `DISABLE_DROPOUT=1 BERT_LAYERS=2` for now
2025-01-24 14:45:11 -05:00
chenyu
e0e176efbc
failed test case for multi rand_like [pr] (#8740)
new multi broke multi device dropout
2025-01-24 13:56:51 -05:00
nimlgen
dc10187fc0
am: add am_smi (#8739)
* am: start monitor
* cleanups
* fixes
* hmm
* progress
* cleanup
2025-01-24 20:16:19 +03:00
George Hotz
7a2223a6c6
add merge views to ops_folding [pr] (#8051)
Co-authored-by: qazal <qazal.software@gmail.com>
2025-01-24 17:45:11 +02:00
qazal
0814a79cb4
cleanup the merge_views upats [pr] (#8738)
2025-01-24 16:49:54 +02:00
qazal
07069b9988
rename to tensor_uop [pr] (#8737)
2025-01-24 13:42:25 +02:00
George Hotz
e82ba1454b
MultiLazyBuffer is UOp [pr] (#8662)
* MultiLazyBuffer is UOp [pr]
* this is new mlb
* this is the idea
* progress
* multitensor works
* more movement ops
* this
* MultiLazyBuffer is UOp
* cleanups
* multi axis
* fix more tests
* work
* not that
* add multi grad and move shard to ops
* mops not views
* no double contig
* sweet, all mt tests passing
* port old logic
* remove lbs
* fix realized
* whitespace
* assign tweak
* test_assign_kv_cache_multi passes
* fix is_realized
* fix JIT for multi
* just a few more lines i'll pay them back soon i swear please bro just a few more
* no split reduceop for multi
2025-01-24 13:28:55 +09:00
chenyu
eb77488f85
update llama3 70B to use R1 (#8733)
2025-01-23 19:06:05 -05:00
George Hotz
3e987fc856
add device print with -m tinygrad.device [pr] (#8729)
* add device print with -m tinygrad.device [pr]
* fix linter
2025-01-24 05:46:27 +09:00
geohotstan
04846b91aa
reorder and categorize onnx_ops (#8731)
* new order
* remove a todo
* constant node is definitely requires_grad false
* one new line spacing
* property and graph
* oops linter
2025-01-23 13:18:54 -05:00
qazal
8e5bd0cd7a
fix buffer init and skip test_swizzle_failure_permute [pr] (#8732)
* fix buffer init and skip test_swizzle_failure_permute [pr]
* replace preload with just load
* add
2025-01-23 17:21:38 +02:00
nimlgen
e4512baea4
am: cleanup mm (#8730)
* am: cleanup mm
* cle
* ops
* entries
2025-01-23 15:49:37 +03:00
qazal
07ec99001a
keep VIEW in big_sink + copy of buffer view spec [pr] (#8727)
* keep views in sink [pr]
* tests
* things from the gpt2 bug
2025-01-23 11:29:30 +02:00
qazal
6cb74bb630
fix using clone with shrink [pr] (#8724)
* fix using clone with shrink [pr]
* remove extra arg, add test_clone_with_shrink_realized
2025-01-23 08:28:07 +02:00
chenyu
af65331b76
update beam params for bert green [pr] (#8726)
increased the BEAM_UPCAST_MAX and BEAM_LOCAL_MAX defaults to match red. 3% faster step
2025-01-22 22:00:05 -05:00
qazal
907dfa0e82
image buffer realization spec [pr] (#8420)
* image buffer realization spec [pr]
* redo the spec
* work
2025-01-22 20:25:22 +02:00
chenyu
49b914ee69
simpler bert acc [pr] (#8714)
`logit.log_softmax().argmax(-1)` is equivalent to `logit.argmax(-1)`
2025-01-22 10:32:19 -05:00
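Aside: the equivalence noted above holds because log_softmax(x) = x - logsumexp(x) only shifts every element along the axis by the same constant, so the argmax index cannot move. A minimal sketch, assuming the stock tinygrad Tensor API:

```python
from tinygrad import Tensor

# subtracting a per-row constant (the logsumexp) is monotone, so the
# position of the row maximum is identical before and after log_softmax.
logit = Tensor([[2.0, -1.0, 0.5], [0.1, 3.0, -2.0]])
assert logit.log_softmax(-1).argmax(-1).tolist() == logit.argmax(-1).tolist()
print("argmax unchanged:", logit.argmax(-1).tolist())
```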
nimlgen
93fb50ce77
allreduce: add flags (#8713)
2025-01-22 17:44:31 +03:00
qazal
891436853d
remove buffer size check in schedule item [pr] (#8712)
2025-01-22 13:36:30 +02:00
qazal
2dae467b75
scheduler + process_replay import cleanup (#8711)
2025-01-22 12:44:07 +02:00