Fixes issue:
```
loss_cpu = loss.detach().numpy()[0]
~~~~~~~~~~~~~~~~~~~~~^^^
IndexError: too many indices for array: array is 0-dimensional, but 1 were indexed
```
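`numpy()` on a scalar loss returns a 0-dimensional array, so indexing it with `[0]` raises the IndexError above. A minimal sketch of the fix, using `.item()` to pull out the scalar instead of indexing:
```
# a 0-d numpy array cannot be indexed with [0]; .item() extracts the scalar
loss_cpu = loss.detach().numpy().item()
```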
Signed-off-by: David Heidelberg <david@ixit.cz>
* use scaled attn from Tensor
* add a test for bert
* linter
* no more tokenizer
* without loading weights
* remove prints
* tribute to linter lords
* smaller input and less runs
* small bert
* feat: world
* feat: tests
* feat: no more backwards
* feat: recv into
* feat: whoops
* feat: test in ci
* feat: some debug logging
* feat: workflow naming
* feat: need to set pythonpath
* feat: just send to same device
* feat: allreduce
* feat: test
* feat: need contiguous
* feat: test in ci
* feat: exit with correct code
* feat: don't need that
* feat: opencl wait_for just doesn't work
* feat: synchronize on out
* feat: try?
* feat: try again?
* feat: add extra realizes
* feat: print
* feat: seed
* feat: tol
* feat: test ones and zeros
* feat: remove print
* feat: are you just flaky
* feat: separate scatter and gather?
* feat: just try synchronizing
* feat: remove print again
* feat: bring back difference
* feat: no sync
* feat: revert that
* feat: back to wait_for
* fix: typo
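The `feat:` commits above build up a small distributed layer: point-to-point send/recv, an allreduce on top, contiguous buffers before transfer, and extra realize/synchronize calls to work around `wait_for`. A minimal sketch of the naive allreduce these commits describe, with hypothetical `send`/`recv` callables (the real tinygrad dist API may differ):
```
def allreduce_sum(t, rank, world_size, send, recv):
  # naive allreduce: broadcast this rank's tensor to every other rank,
  # then sum everything received; assumes send is non-blocking
  out = t.contiguous()            # buffers must be contiguous before sending
  for other in range(world_size):
    if other != rank: send(out, other)
  for other in range(world_size):
    if other != rank: out = out + recv(other)  # "recv into" a buffer shaped like t
  return out.realize()            # extra realize, per the commits above
```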
* Refactor AttnBlock, CrossAttention, CLIPAttention to share code
* Reshape and transpose in loop
* Bugfix on attention mask
Co-authored-by: Jacky Lee <39754370+jla524@users.noreply.github.com>
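A minimal sketch of the shared attention helper this refactor suggests, with the q/k/v reshape and transpose folded into one loop (names are illustrative, not the exact AttnBlock/CrossAttention/CLIPAttention code):
```
def attention(q, k, v, num_heads, mask=None):
  bs, seq, dim = q.shape
  head_dim = dim // num_heads
  # reshape and transpose q, k, v in a loop instead of three copies of the code;
  # -1 lets k and v carry a different sequence length (cross-attention)
  q, k, v = [x.reshape(bs, -1, num_heads, head_dim).transpose(1, 2) for x in (q, k, v)]
  out = q.scaled_dot_product_attention(k, v, attn_mask=mask)
  return out.transpose(1, 2).reshape(bs, seq, dim)
```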
* Implement scaled_dot_product_attention and test
* Support attn_mask
* Support is_causal too
* Use in llama
* Don't forget to reshape
* Set requires_grad=False for causal
* Remove staticmethod
* Remove extra spaces
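A minimal sketch of the semantics being implemented, written as a numpy reference (matching the PyTorch definition this mirrors, not the exact tinygrad source):
```
import numpy as np

def sdpa(q, k, v, attn_mask=None, is_causal=False):
  # scaled dot-product attention: softmax(q @ k^T / sqrt(d) + mask) @ v
  if is_causal:
    # causal mask: position i may only attend to positions <= i
    L, S = q.shape[-2], k.shape[-2]
    attn_mask = np.where(np.tril(np.ones((L, S))) == 1, 0.0, -np.inf)
  scores = q @ np.swapaxes(k, -2, -1) / np.sqrt(q.shape[-1])
  if attn_mask is not None: scores = scores + attn_mask
  w = np.exp(scores - scores.max(-1, keepdims=True))
  return (w / w.sum(-1, keepdims=True)) @ v
```
In the Tensor version the causal mask is a constant, which is why the commit above sets `requires_grad=False` on it.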
* add disk_tensor
* fix jit
* new baseline before whitening
* whitening through torch
* whitening done, currently at 91.65%
* 91.99%
* clean up mixup and 92.3%
* clean up 92.30%
* 92.49% before searching for new hyper-parameters
* fix CI
* fix white space
* add whitening init in test
* refactor, update hyperparameters, 92.72%
* convert whitening to a tinygrad operation
* update CI kernels count for CIFAR
* add pad reflect
* add random crop 92.53%
* update hyperparameters, 93%
* 93.15% in the docker container; need to refactor the hyperparameter assignment
* print out weights and biases to verify they are separated
* bias/non-bias params separated
* fix whitespace
* clean up
* refactor hyperparameters into a dict
* refactor lr scheduler params
* fix whitespace
* fix cross entropy loss
* fix whitespace
* move optimizer hyperparameters to the hyp dict
* minor fixup
* adjust model, loss scaling
* 92.74% while using half as much compute as before
* update hyp for cutmix
* random shuffle during batches
* clean up
* updating the model
* update ConvGroup
* disable gradients for batchnorm layer weights
* whitespace
* 93.92%
* clean up
* finally 94%!
* rewrite whitening to remove dependency on torch
* whitespace
* remove dependency on torch, 93.91%
* back to 94.03%
* clean up
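A minimal sketch of the torch-free whitening init these commits converge on: eigendecompose the covariance of small image patches with numpy and use the scaled eigenvectors as fixed first-layer conv filters (names and the eps value are illustrative, not the exact training-script code):
```
import numpy as np

def whitening_filters(X, kernel_size=2, eps=1e-2):
  # X: training images, shape (N, C, H, W); extract every k x k patch
  C = X.shape[1]
  patches = np.lib.stride_tricks.sliding_window_view(X, (kernel_size, kernel_size), axis=(2, 3))
  patches = patches.transpose(0, 2, 3, 1, 4, 5).reshape(-1, C * kernel_size * kernel_size)
  eigval, eigvec = np.linalg.eigh(np.cov(patches, rowvar=False))
  # scale each eigenvector by 1/sqrt(eigenvalue); eps keeps tiny eigenvalues stable
  W = eigvec / np.sqrt(eigval + eps)
  # each row becomes one fixed (C, k, k) conv filter; gradients stay disabled,
  # like the batchnorm weights above
  return W.T.reshape(-1, C, kernel_size, kernel_size)
```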
* update test_real_world
* add stable diffusion and llama
* pretty in CI
* was CI not true
* that
* CI=true, wtf
* pythonpath
* debug=1
* oops, wrong place
* uops test broken for wgpu
* wgpu tests flaky