* minor cleanups
* docs and logs
* shorter
* comma
* s/print/logging.info [run_process_replay]
* use logging.warn
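The print-to-logging swap above can be sketched as follows (the function and module names here are illustrative, not the project's actual files; note that `logging.warn` is a deprecated alias for `logging.warning`):

```python
import logging

# Minimal stand-in for the project's real logging setup.
logging.basicConfig(level=logging.INFO, format="%(levelname)s: %(message)s")

def compile_kernel(name: str, src: str) -> str:
    # before: print(f"compiling {name}")
    logging.info("compiling %s", name)
    if not src.strip():
        # logging.warning is the non-deprecated spelling of logging.warn
        logging.warning("empty source for %s", name)
    return src

compile_kernel("gemm", "/* kernel body */")
```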
* process name is noise
* revert lowerer change [run_process_replay]
* render lidx starting with 0
changed from
```
int gidx0 = gid.x; /* 4096 */
int lidx4 = lid.x; /* 8 */
int gidx1 = gid.y; /* 7 */
int lidx5 = lid.y; /* 8 */
int gidx2 = gid.z; /* 7 */
int lidx6 = lid.z; /* 2 */
```
to
```
int gidx0 = gid.x; /* 4096 */
int lidx0 = lid.x; /* 8 */
int gidx1 = gid.y; /* 7 */
int lidx1 = lid.y; /* 8 */
int gidx2 = gid.z; /* 7 */
int lidx2 = lid.z; /* 2 */
```
the existing code continued lidx numbering from the pre-limited global dims, which skips numbers when there are more than 3 global dims
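The renumbering can be sketched like this (a hypothetical stand-in for the renderer, not the actual tinygrad lowerer): gidx and lidx each get their own counter starting at 0, instead of lidx continuing after the global dims.

```python
def render_special(global_dims, local_dims):
    # Emit one gidx/lidx declaration per launch axis; both index kinds
    # are numbered independently from 0.
    lines, axes = [], "xyz"
    for i, (g, l) in enumerate(zip(global_dims, local_dims)):
        lines.append(f"int gidx{i} = gid.{axes[i]}; /* {g} */")
        lines.append(f"int lidx{i} = lid.{axes[i]}; /* {l} */")
    return lines

print("\n".join(render_special([4096, 7, 7], [8, 8, 2])))
```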
* don't need start_dim
---------
Co-authored-by: qazal <77887910+Qazalin@users.noreply.github.com>
* add changed
* env var
* more early exit
* simpler?
* Revert "Merge branch 'lidx0' into process_replay_limit"
This reverts commit cbadcfa5e9, reversing
changes made to fc9bf37ee7.
* minor cleanup
---------
Co-authored-by: chenyu <chenyu@fastmail.com>
* beam compare 2
* found issue maybe
* correct, not fail
* full rand
* less numpy
* extra simplify doesn't fix it
* reorder
* no numpy
* check in reverse
* test new tensor behavior
* better error msg
* remove check_process_replay
* that can go to the top
* add assert back
* [run_process_replay]
* checkout code [run_process_replay]
* temp [run_process_replay]
* revert temp [run_process_replay]
* ahh this is why [run_process_replay]
* revert temp [run_process_replay]
* [Patch] removed weird NaN handling in xlog2 that resulted in different output around 1e-203
* Patch: compare the value of xlog(x) using y, allowing x <= 1e-200
* mypy
* fuzzer tests for log2
* fix tests: use approximate dbl_min, fp64 fails at nv
* update: gradually increment the scale (if y is not inf)
* fixes on transcendental: fix for fp64 payne hanek, refactor for fp16 sin
* revert the changes on test
* refactor on xsin: removed cody_waite_reduction, always use payne_hanek
* Revert "refactor on xsin: removed cody_waite_reduction, always use payne_hanek"
This reverts commit 2fd401f251.
* still need cody_waite_reduction for the very small range
* test: added a regression test for transcendental sin
* test: found the worst case ulp 3.5 only in numpy
* give the input as a valid dtype
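A ULP-error check of the kind these fuzzer tests imply can be sketched like this (a generic sketch, not the repo's actual test harness; `fuzz_log2` and its sampling range are assumptions). Measuring error in units-in-the-last-place of the reference is what makes fractional results like "3.5 ulp" possible.

```python
import math
import random

def ulp_error(approx: float, ref: float) -> float:
    # Absolute error measured in units-in-the-last-place of the reference.
    if ref == 0.0:
        return abs(approx) / math.ulp(0.0)
    return abs(approx - ref) / math.ulp(ref)

def fuzz_log2(fn, n=1000, lo=1e-200, hi=1e200, seed=0):
    # Sample positive inputs log-uniformly across a wide range and report
    # the worst ulp error against math.log2 as the reference.
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(n):
        x = 10.0 ** rng.uniform(math.log10(lo), math.log10(hi))
        worst = max(worst, ulp_error(fn(x), math.log2(x)))
    return worst

# sanity check: the reference against itself has 0 ulp error
print(fuzz_log2(math.log2))
```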
---------
Co-authored-by: chenyu <chenyu@fastmail.com>
* replays
* what's in there
* can it be up there
* sha is enough
* insert sha as the key
* fix str
* update reset utils
* that nested try/except was terrible
* github_context can go
* test: use const
* hotfix: base
* asserts
* dont push through reshape
* cleanup
* dont need the cache
* test_reduceop_reshape_dont_push and test_index_fused are next
* improve single kernel indexing
* metadata in graph (#5399)
* indexing is O(1)
* add failing test
* ugh, that all needs to be replaced with symbolic
* broken on ptx, it's fine
---------
Co-authored-by: wozeparrot <wozeparrot@gmail.com>
* indexing getting better [run_process_replay] [no_assert]
* fix test
* test_arange_2_reduce is a simpler test
* put that print back, NOOPT
* don't merge reduces (they could be different reduces)
* FUSE_AS_ONE_KERNEL
* fix tests
* fix test_var_multireduce
* w/e put that there
* fails on others too
* fix test, revert UNMUL change
* in case order matters
* one kernel indexing works
* one kernel indexing works (test other)