This is a prereq refactor for cloud multi, which will make it possible to
use multiple devices from the cloud host instead of just one.
I will do that by changing a session to be a `tuple[token, dev_idx]`.
Previously the session was in cookies; this is a problem because a single
HTTP request can contain many RemoteRequests with potentially different
devices.
The alternatives are either:
- sending commands for different devices in separate HTTP requests (slow)
- only adding an idx in RemoteRequest, in basically the same way I added the
session here, keeping the session a cookie and concatenating them in the
server. This is how I've done it previously, and it looks strictly worse
than having it all in the same place.
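
As a rough illustration of the new shape, here is a minimal sketch under assumed names (`Session`, `payload`, `dispatch` are placeholders, not the actual cloud code): each RemoteRequest carries its own `(token, dev_idx)` session, so one HTTP request can batch commands for several devices and the server can route each one independently.

```python
# minimal sketch only -- names and structure are assumptions, not the real cloud code
from dataclasses import dataclass, field
from typing import Any

Session = tuple[str, int]  # (token, dev_idx)

@dataclass(frozen=True)
class RemoteRequest:
  # previously only a token lived in a cookie shared by the whole HTTP request;
  # carrying (token, dev_idx) per request lets one batch target different devices
  session: Session
  payload: dict[str, Any] = field(default_factory=dict)

def dispatch(batch: list[RemoteRequest], devices: dict[int, Any]) -> list[Any]:
  # the server routes each request in the same HTTP body to its own device
  return [devices[req.session[1]] for req in batch]
```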
* work on minrf example
* more
* jit sample
* t is tensor not const
* fixes
* more convs
* fix dropout
* don't print
* 504
* big patch
* onehot
* touch
* use embeddings
* dumb uses final layer
* act
* non fl
* match
* tp
* 3
* of
* ppsz
* normal
* add adln
* no t
* weird transformer
* weird transformer
* contig
* actual speed fix
* dumb
* cb
* 0
* t is 0
* mort-t
* args
* dumb days are over
* readable
* contig
* no more t mask
* mask_t
* init to zero
* clean
* steps
* work
* tt
* t
* solid
* even spacing in viz nodes
* precise dy value
* dominant-baseline text-after-edge
* add STROKE_WIDTH constant, delete dominant_baseline attr
---------
Co-authored-by: qazal <77887910+Qazalin@users.noreply.github.com>
* Enhance tensor random functions with dtype support
- Updated `aten.uniform_` and `aten.normal_` to include dtype parameter in backend.py
- Added unit tests for uniform and normal tensor generation with specific dtypes in test.py
* Refactor test name for clarity
- Renamed `test_normal_dtype` to `test_normal` in `extra/torch_backend/test.py`
- Aims to improve readability and better reflect the test's purpose
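
As context for the dtype tests above, a hedged sketch of the kind of check they add (the real tests in `extra/torch_backend/test.py` may differ; the device string below is a placeholder, not necessarily the backend's registered device name):

```python
# sketch of a dtype-respecting uniform/normal check -- device and names are assumptions
import torch

DEVICE = "cpu"  # placeholder; the real tests target the tinygrad torch backend device

def test_uniform():
  t = torch.empty(4, 4, dtype=torch.float32, device=DEVICE)
  t.uniform_(0.0, 1.0)
  assert t.dtype == torch.float32
  assert (t >= 0).all() and (t <= 1).all()

def test_normal():
  t = torch.empty(4, 4, dtype=torch.float64, device=DEVICE)
  t.normal_(mean=0.0, std=1.0)
  assert t.dtype == torch.float64
  assert torch.isfinite(t).all()
```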
* minor grouper + viz fixup [pr]
* gitignore mypy_cache
* reorder create_kernels
* replace with realized
* use tensor_map + viz before spec
* lint
* add that back
* use function for infinity instead of uniform
* test infinity math locally
* test infinity math in CI
* make pytest available to MacOS (WebGPU)
* revert to master except failing webgpu test