Commit Graph

10417 Commits

Yixiang Gao
a8f2c16f8e add contiguous (#1246) 2023-07-15 08:36:34 -07:00
Stan
872e2198fe Added nn.ConvTranspose1d (#1243)
Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
2023-07-15 00:42:42 -07:00
Oddity
7399f6dad7 display sass for both cuda code and ptx (#1240)
* skip nvcc compile target cubin when using PTX

* actually we should generate sass for both ptx and cuda code

* Fixed formatting, should print the error anyway

* ensure subprocess.run throws exception

* fixed linting errors and checked before commit this time
2023-07-15 00:36:04 -07:00
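The last bullet hinges on a small subprocess detail: `subprocess.run` only raises on a non-zero exit code when `check=True` is passed. A minimal sketch of dumping SASS from a compiled cubin, assuming the CUDA toolkit's `nvdisasm` is on PATH (the helper name is hypothetical, not this commit's code):

```python
import subprocess

def dump_sass(cubin_path: str) -> str:
    # check=True makes subprocess.run raise CalledProcessError on failure
    # instead of silently returning a non-zero exit code
    proc = subprocess.run(["nvdisasm", cubin_path],
                          check=True, capture_output=True, text=True)
    return proc.stdout
```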
Stan
264d467f2b Added tensor.squeeze and support for testing exceptions (#1241)
* WIP: `tensor.squeeze` function

* Added `test_except` param to `helper_test_op` to avoid false positives

* Extracted new method `helper_test_exception` for testing exceptions

* Made `squeeze` not throw IndexError when ndim == 0 and dim <= 0 to match PyTorch
2023-07-15 00:33:24 -07:00
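A minimal numpy sketch of the squeeze semantics described in the bullets above, including the PyTorch-compatible no-op for 0-dim inputs; tinygrad's actual implementation differs in detail:

```python
import numpy as np

def squeeze(x: np.ndarray, dim=None) -> np.ndarray:
    if dim is None:  # drop every size-1 axis
        return x.reshape([s for s in x.shape if s != 1])
    if x.ndim == 0 and dim <= 0:  # match PyTorch: no-op instead of IndexError
        return x
    if not -x.ndim <= dim < x.ndim:
        raise IndexError(f"dim {dim} out of range for {x.ndim}-d array")
    dim %= x.ndim
    if x.shape[dim] != 1:  # nothing to squeeze on that axis
        return x
    return x.reshape(x.shape[:dim] + x.shape[dim + 1:])
```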
Stan
a8f3b3f4ed Added test for nn.Conv1d (#1242) 2023-07-15 00:30:50 -07:00
David Hou
9c135c9450 add sqrt to ptx (#1236) 2023-07-13 07:26:11 -07:00
chenyu
32be39554c Simplify symbolic.SumNode.__floordiv__ logic (#1220) 2023-07-12 12:54:12 -07:00
Diogo
a9a1df785f Webgpu support (#1077)
* initial commit

* 81 passing

* 105 passing tests

* 148 passing

* CI tests

* install dep on ci

* try opencl pkgs

* try using vulkan

* down to only 6 failing

* refactor

* cleaning up

* another test skipped due to buffer limit

* linter

* segfault

* indent fix

* another segfault found

* small touchups

* Fix max and maxpool tests

* Add constant folding

* Add javascript export script

* better asserts in codegen

* manual upcasting

* reverted token type change

* skip safetensor test due to unsupported type

* Fix efficientnet and all other model tests

* Remove np copy

* fixed indent and missing import

* manually destroy the buffer

* revert back to length

* linter errors

* removed extra val

* skip broken tests

* skipping more tests

* Make the page pretty

* Save model weights as safetensor

* Fix imagenet to c test

* Fix second imagenet to c bug

* Async and parallel kernel compilation

* workgroup support

* reversed local size

* fixed non local bug

* correct local groups

* ci experiment

* removed typo

* Fix define local by using shared memory

* Refactor

* try running on mac

* match metal tests

* add more workers

* scope down tests

* trying windows runner

* fixed windows env

* see how many it can do

* merged master

* refactor

* missed refactor

* increase test suite coverage

* missing import

* whitespace in test_efficientnet.py

* getting there

* fixed reset

* fixed bufs

* switched to cstyle

* cleanup

* min/max rename

* one more linter issue

* fixed demo

* linter

* testing ci chrome

* add unsafe webgpu arg

* add build step

* remove WEBGPU from cmd line

* use module

* try forcing directx

* trying forced metal backend

* temp disable conv2d for CI

* disable conv_transpose2d

---------

Co-authored-by: 0x4d - Martin Loretz <20306567+martinloretzzz@users.noreply.github.com>
Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
2023-07-12 12:52:06 -07:00
Yosef Frost
613bcd945d Added Test Coverage to Int32 and Make Sure Tests Succeed (#1174)
* Added test coverage for int32 in `test/test_dtype.py`

Tests for int32 include:
- testing that int32 can be converted into a numpy array
- testing that float and int64 can be cast into int32
- testing that int32 can be cast into float and int64
- testing addition, multiplication, and matrix multiplication with int32
- testing that addition, multiplication, and matrix multiplication between int32 and either float or int64 get successfully cast into float and int64, respectively

Additional changes include testing that int8 casts into int32 and testing that float16 casts into int32

* Added type casting to the add, subtract, and divide binary operations

* Added automatic type casting when types differ to FusedOps.MULACC

I moved the match_types function back so that I could call it in einsum_mulacc, where it casts the types of the MULACC operands to be the same

* Added unit test for match_types and added type hints to the parameters

* Added tests for ops_cpu.match_types

* Changed ops_cpu.einsum logic to play nicely with PyTorch

Changed `tinygrad.runtime.ops_cpu.einsum_mulacc` logic to not perform type matching. Type matching was instead moved to the numpy_fxn_for_op dictionary in the ops_cpu file. Since ops_torch uses the same einsum_mulacc function, this should fix all the broken pytorch tests.

* empty commit to rerun ci

* reverting PR#1213 in an attempt to fix broken test

* Removed all tests I added to see if they are causing CI issues

* Added back type matching tests

* removed type matching tests and added back int tests

* added back part of the type matching tests

* removed breaking type matching tests

* empty commit for testing

* added test back but inside comment

* removed a test from the comment to see if it breaks CI

* removed another function

* more testing

* emptied test comment

* cleaned up comments

* Added optimize=True flag to einsum_mulacc in ops_cpu.py

* Removed unnecessary imports from tests

* optimized match_types by removing unnecessary array copying
2023-07-12 10:29:15 -07:00
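The `match_types` helper referenced above boils down to promoting both operands to a common dtype before the binary op runs. A sketch using numpy's promotion rules (tinygrad's actual helper may differ):

```python
import numpy as np

def match_types(x: np.ndarray, y: np.ndarray):
    # np.promote_types picks the common dtype, e.g. (int32, float32) -> float32
    # and (int32, int64) -> int64, matching the casts the tests above check
    up = np.promote_types(x.dtype, y.dtype)
    # copy=False skips the copy when an operand already has the target dtype
    return x.astype(up, copy=False), y.astype(up, copy=False)
```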
Roelof van Dijk
8f2e2f5ee2 style: else-after-return (#1216)
Co-authored-by: Roelof van Dijk <roelof.van.dijk@vitestro.com>
2023-07-12 10:26:38 -07:00
George Hotz
ab663c46e8 tensor cores: don't upcast if we can't. fix stable diffusion 2023-07-12 10:21:02 -07:00
Hey
4f72eb823c Outdated repository URL (#1218)
* Update outdated repo URL

* Update more outdated repo URLs
2023-07-11 23:14:19 -07:00
Roelof van Dijk
d0e21a7398 ci: don't install recommended packages for GPU (#1215)
Co-authored-by: Roelof van Dijk <roelof.van.dijk@vitestro.com>
2023-07-11 15:38:49 -07:00
Francis Lam
df86672bd4 Fix LazyBuffer SHUFFLE_PAD_OPS to prevent invalid pad movement (#1223)
In addition to div, any ops that will generate non-zero outputs from
zero inputs need to be guarded.
2023-07-11 15:30:35 -07:00
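The guard condition is concrete: a zero-pad may only be reordered past an elementwise op f when f(0) == 0, otherwise the padded region picks up wrong values. A quick numpy illustration:

```python
import numpy as np

x = np.array([1.0, 2.0])

# safe: relu(0) == 0, so pad-then-op equals op-then-pad
assert (np.maximum(np.pad(x, 1), 0) == np.pad(np.maximum(x, 0), 1)).all()

# unsafe: exp(0) == 1, so the padded zeros become ones if the pad moves first
assert (np.exp(np.pad(x, 1)) != np.pad(np.exp(x), 1)).any()
```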
AN Long
f75de602df fix typo in stable diffusion example (#1219) 2023-07-11 15:26:40 -07:00
chenyu
ab645317c9 Fix constant folding for Tensor([3]) (#1227)
* Fix constant folding for Tensor([3])

* Remove duplicated prod import

* load in the same device

* better numpy

* add constant fold shape test cases

* improve tests
2023-07-11 14:01:32 -07:00
Carson Radtke
e2f6b09ffd [perf] optimize=True kwarg for np.einsum (#1213) 2023-07-09 18:31:04 -07:00
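For reference, the flag in question: with optimize=True, numpy plans the contraction order (and can dispatch to BLAS) instead of evaluating the expression naively left to right.

```python
import numpy as np

a, b, c = np.random.rand(64, 32), np.random.rand(32, 16), np.random.rand(16, 8)
out = np.einsum("ij,jk,kl->il", a, b, c, optimize=True)
```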
madt2709
bb316a42af Fix pow to work with negative tensors (#1191) 2023-07-09 17:33:04 -07:00
George Hotz
43385c7dbf remove contiguous on full (#1212) 2023-07-09 17:31:15 -07:00
Carson Radtke
13a1abf9e7 remove tuple from type annotation in Tensor.__init__ (#1211) 2023-07-09 16:27:07 -07:00
Roelof van Dijk
e27f098946 View as namedtuple, cached methods (#1075)
Co-authored-by: Roelof van Dijk <roelof.van.dijk@vitestro.com>
2023-07-09 14:26:02 -07:00
Carson Radtke
1eb0e0cb3f implement common subexpression elimination (#1204)
* implement common subexpr elimination

* Revert "implement common subexpr elimination"

This reverts commit 40c5487d20.

* move cse to ast_parse + add type annotations

* oneline if

* improve saved_exprs lookup
2023-07-09 14:22:53 -07:00
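The idea behind the commit, as a toy sketch: while rendering the AST, cache each emitted expression string and reuse the variable that already holds it (hypothetical renderer, not tinygrad's actual `ast_parse`):

```python
saved_exprs: dict[str, str] = {}
lines: list[str] = []

def render(expr: str) -> str:
    # later occurrences of the same expression reuse the variable emitted
    # for the first one, eliminating the duplicate computation
    if expr not in saved_exprs:
        var = f"t{len(saved_exprs)}"
        lines.append(f"float {var} = {expr};")
        saved_exprs[expr] = var
    return saved_exprs[expr]

render("a*b+c")  # emits: float t0 = a*b+c;
render("a*b+c")  # emits nothing, returns "t0"
```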
George Hotz
beb4d3ab01 Tensor Cores 2: Local Buffers Edition (#1057)
* local buffers

* work

* works

* invert_strides

* work

* non tc

* fix shapetracker bug

* stride priority

* touchups

* gate tensor cores

* tensor core conv

* cleanups

* bug fixes

* fix metal_matmul

* fast tensor cores

* more speed

* buffer selection bug fix

* fix CI maybe

* ugh, CI is set to true, not 1

* tc allowed

* add_gl_dimension

* split out padding conv tests

* does padding add fail

* test_padded_conv2d_1x1

* skip metal ci stuff

* more strict on yellow

* float2

* strip parens

* fix float2

* touch up

* dtype

* strip parens

* no alias

* bugfix

* cast float2 and test tensor core ops

* oops, don't hardcode 4
2023-07-09 09:06:00 -07:00
George Hotz
67e34b356a good stuff from tensor cores branch (#1199) 2023-07-08 16:58:26 -07:00
George Hotz
7151382364 Refactor load/store before tensor cores (#1193)
* minor cleanups

* render_const

* now that's a nice refactor

* clean up vload/vstore

* clean up render_load

* debugs there

* dumb

* err, this?

* const float4

* what's failing

* bugfix

* statement includes semicolon

* bugfix
2023-07-08 15:54:58 -07:00
fluffy χατγιρλ
ef1909500e remove superfluous parentheses (#1197) 2023-07-08 15:11:02 -07:00
fluffy χατγιρλ
628ee46627 Fix bug where Tensor.randn returns inf (#1192)
* fix randn inf bug

* add test

* more compact test

* clarify test purpose
2023-07-08 12:03:46 -07:00
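The usual cause of this bug is Box-Muller sampling: if the uniform draw is exactly 0, log(0) = -inf. A sketch of the standard fix (not necessarily this commit's exact change), shifting the sample into (0, 1]:

```python
import numpy as np

def randn(*shape):
    # np.random.rand returns values in [0, 1); 1 - u maps them to (0, 1],
    # so np.log can never see 0 and the output can never be inf
    u1 = 1.0 - np.random.rand(*shape).astype(np.float32)
    u2 = np.random.rand(*shape).astype(np.float32)
    return np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * np.pi * u2)
```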
George Hotz
d9c1d81e99 Revert "feat: cancel previous workflow runs on new commits (#1184)" (#1194)
This reverts commit d66a0c285d.
2023-07-08 11:26:13 -07:00
George Hotz
52600d532e add 20 minute timeout 2023-07-07 23:02:28 -07:00
wozeparrot
d66a0c285d feat: cancel previous workflow runs on new commits (#1184) 2023-07-07 22:55:35 -07:00
Jacky Lee
e0c2ae8984 Update file paths (#1179) 2023-07-07 18:41:58 -07:00
George Hotz
0ad99038ef Revert "Revert "Fix ShapeTracker mismatch in LazyBuffer.fromCPU (#1156)" (#1181)" + add test
This reverts commit a374b62bfe.
2023-07-07 18:37:04 -07:00
George Hotz
2952b8e7a8 Fix up abstractions.py to include the Linearizer (#1177)
* fix up docs

* remove pow, add sqrt
2023-07-07 18:33:51 -07:00
George Hotz
a374b62bfe Revert "Fix ShapeTracker mismatch in LazyBuffer.fromCPU (#1156)" (#1181)
This reverts commit 8ff7184b1b.
2023-07-07 18:29:05 -07:00
fluffy χατγιρλ
8ff7184b1b Fix ShapeTracker mismatch in LazyBuffer.fromCPU (#1156)
* init shape tracker with strides to fix mismatch

Author:    sekstini <sekstinilol@gmail.com>

* fix whitespace

* add tests
2023-07-07 18:28:21 -07:00
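The fix initializes the view with explicit strides; for a contiguous buffer they follow directly from the shape. A minimal sketch:

```python
def strides_for_shape(shape):
    # row-major (C-contiguous): the last axis moves fastest
    strides, acc = [], 1
    for s in reversed(shape):
        strides.append(acc)
        acc *= s
    return tuple(reversed(strides))

assert strides_for_shape((2, 3, 4)) == (12, 4, 1)
```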
George Hotz
b8dfbba703 hip_matmul: f16 gemm 2048x2048 gets 36 TFLOPS 2023-07-08 00:35:45 +00:00
Stan
69d33cab0d Fix: auto create parent dir when downloading file (#1173)
* Fix: auto create parent dir when downloading file

also removed duplicate import `os`

* Added test for auto parent dir creation when downloading file
2023-07-07 13:40:29 -07:00
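The standard pattern for this fix, sketched with a hypothetical `download` helper:

```python
import os
import urllib.request

def download(url: str, path: str):
    # create any missing parent directories first; exist_ok=True keeps
    # this safe when the directory already exists
    parent = os.path.dirname(path)
    if parent:
        os.makedirs(parent, exist_ok=True)
    urllib.request.urlretrieve(url, path)
```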
Stan
f40f8cd055 Initialise numpy arrays as float32 in DDPG (#1171)
float64 is not supported by tinygrad
2023-07-07 12:05:31 -07:00
cloud11665
884b5965de ops_cuda fix race condition on cubin file read when testing with multiple cores (#1172) 2023-07-07 12:05:16 -07:00
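One standard way to make a compile cache safe for concurrent readers (a sketch of the general technique, not necessarily this commit's fix): write the cubin to a temp file and atomically rename it into place.

```python
import os
import tempfile

def atomic_write(path: str, data: bytes):
    # os.replace is atomic on POSIX: a concurrent reader sees either the
    # old complete file or the new complete file, never a partial write
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    os.replace(tmp, path)
```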
terafo
aa60feda48 Fix naming conflict with huggingface datasets (#1161)
* Rename in files

* Move files

* Moved to extra/datasets as suggested

* Changes to files

* Fixed stupid mistake

---------

Co-authored-by: terafo <terafo@protonmail.com>
2023-07-07 10:43:44 -07:00
Yahya Lmallas
fd66d1ca00 fix Tensor.manual_seed() default to wrong type (#1168)
* fix Tensor.manual_seed() defaulting to the wrong type None when it should be int

* remove those tests
2023-07-07 10:42:48 -07:00
Stan
9b6e57eccd helpers.py: improved test coverage + exception handling (#1165)
* Fixes + improved test coverage for helpers.py

- added exception handling in `proc`; previously, if an exception was thrown, the thread would hang
- made `_early_exec_process` catch any Exception; before, if an exception was thrown before the process was started, it would hang the thread

* Made `_early_exec_process` catch any Exception

Otherwise, if an exception was thrown before the process was started, it would hang the thread. For example, a TypeError for an argument passed to `subprocess.check_output`.

* Fixed `from tinygrad.helpers import Timing` import

oops, for some reason my IDE cleaned that import from extra/helpers.

* Fixed import in llama.py

Another one that I skipped by accident, my bad

* Extracted a class for tests of early exec

* Normalize line endings, Windows uses \r\n

* Made `cross_process` not a daemon
2023-07-07 10:26:05 -07:00
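The hang described above is the classic worker-process failure mode: if the child dies before replying, the parent blocks forever on the queue. A sketch of the defensive pattern (a hypothetical reimplementation, not extra/helpers verbatim):

```python
from multiprocessing import Process, Queue

def _worker(fn, args, q):
    try:  # catch ANY exception and ship it back, or the parent hangs on q.get()
        q.put((True, fn(*args)))
    except Exception as e:
        q.put((False, e))

def cross_process(fn, *args):
    q = Queue()
    # daemon=False so the child isn't killed mid-reply when the parent exits
    p = Process(target=_worker, args=(fn, args, q), daemon=False)
    p.start()
    ok, result = q.get()
    p.join()
    if not ok:
        raise result
    return result
```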
Kunwar Raj Singh
8391648822 Over 90% on CIFAR with examples/hlb_cifar10.py (#1073)
* fix eval, lr decay, best eval

* 82.27

* 82.64

* 82.79, reproducible

* add lr sched, 85.26

* 87.42

* 87.94

* 87.42

* tta with flip

* training flip aug

* refactor

* using Tensor for LR is faster

* 89.5

* refactor, flip only train set

* 90.01

* 90.64

* eval jit

* refactor

* only JIT model

* fix eval JIT

* fix eval JIT

* 90.82

* STEPS=900 reaches 90.22

* TTA envvar

* TTA default 0

* fully jit training

* refactor optim

* fix sched

* add label smoothing

* param changes

* partial gelu

* OneCycle with pause

* gelu maybe works

* 90.12

* remove pause lr

* maybe fix lr schedulers

* scheduler test passing

* comments

* try mixup

* shuffle!

* add back the missing last eval

* fix shuffle bugs

* add mixup prob

* fix mixup prob

* 90.19

* correct mixup

* correct mixup

* correct mixup

* 90.24

* 90.33

* refactor, add type hints

* add gradient clipping

* maybe fix test

* full JIT

* back to relu for now

* pass mixup prob as param

* add typehints

* maybe CI works

* try erf gelu

* CI, types

* remove useless import

* refactor optim

* refactor optim

* try leakyrelu

* try celu

* gelu

* 90.67

* remove grad clip

* remove grad clip tests

* revert params

* add test for OneCycleLR

* 90.62

* fix eval timing

* fix eval timing again

* so where I calculate mixup_prob matters

---------

Co-authored-by: Kunwar Raj Singh <kunwar31@pop-os.localdomain>
2023-07-06 20:46:22 -07:00
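Several bullets above add mixup; the technique itself is small enough to sketch (the alpha default and the one-hot label assumption are mine, not the commit's):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2):
    # blend two training examples; labels must be one-hot (or smoothed)
    # vectors so they can be mixed with the same weight as the inputs
    lam = np.random.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```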
Barath
c5aea13a65 Fix evaluation stage in examples/transformer.py when using CUDA (#1150)
* make test data as contiguous array

* standardise contiguous array for all input data in cuda ops

* swap to x.ravel
2023-07-06 18:07:10 -07:00
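The fix matters because a transposed or sliced numpy array is not C-contiguous, while a raw device buffer upload expects one flat, C-ordered block:

```python
import numpy as np

x = np.arange(12, dtype=np.float32).reshape(3, 4).T  # a view, not C-contiguous
assert not x.flags["C_CONTIGUOUS"]
x = np.ascontiguousarray(x)  # copies only when the layout is wrong
assert x.flags["C_CONTIGUOUS"]
```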
Rayan Hatout
9975f24452 Fold expand preceding reduce if the reduction is on the same axis as the expansion (#1134)
* fold expands that precede a reduce if the reduction is on the same axis as the expansion

* add deterministic test for SIMPLIFY_SUM_RESHAPE_EXPAND_SUM optimization

* add a test case to make sure we don't fold reduce-expand-reduce on different axes
2023-07-06 13:41:05 -07:00
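The identity behind the fold: summing over an axis that was just expanded equals multiplying by the expansion factor, so the broadcast never needs to materialize. In numpy terms:

```python
import numpy as np

a = np.random.rand(3, 1).astype(np.float32)
folded = np.broadcast_to(a, (3, 4)).sum(axis=1, keepdims=True)
assert np.allclose(folded, a * 4)  # expand-then-sum == scale by the factor
```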
cheeetoo
f109af3cbb Don't save parents unless needed (#1142)
* don't save parents unless requires grad

* keep del ctx since idk
2023-07-05 18:11:57 -07:00
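A sketch of the idea (a hypothetical autograd Function, close in spirit to tinygrad's): keep references to parent tensors only when backward will need them, so inference doesn't pin the whole graph in memory.

```python
class Function:
    def __init__(self, *tensors):
        self.needs_input_grad = [t.requires_grad for t in tensors]
        self.requires_grad = any(self.needs_input_grad)
        # without this guard every intermediate tensor stays alive until
        # the output is freed, even in pure-inference runs
        self.parents = tensors if self.requires_grad else None
```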
Eli Frigo
801564f31b Remove POW llop and add SQRT llop (#1104)
* fixed division by zero for fast operations

* made et closer to 0

* replace POW llop with SQRT

* updated mlops to swap SQRT and POW llops

* updated hlops to swap POW and SQRT

* added sqrt llop to cpu runtime

* added sqrt llop to cstyle codegen

* added POW llop to llvm ir codegen

* added SQRT llop to torch runtime

* moved pow from mlops to hlops

* found a better way to do reverse pow

* fixed indentation

* added SQRT llop to triton

* update docs to match new llops

* removed POW operator from assembly codegen

* added sqrt and rsqrt to pow hlop

* rewrote pow function in tensor.py

* Adjust tolerance

* Adjust for adamw

* Reduce for Adam too

* removed accidental leftover code

* removed all of accidental code

* added rsqrt test

* removed pow from mlops again

it was added back when resolving merge conflicts

---------

Co-authored-by: Jacky Lee <jla524@sfu.ca>
2023-07-05 18:07:58 -07:00
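With POW gone from the llops, the pow hlop has to be assembled from SQRT, EXP, and LOG. A scalar sketch of the decomposition (the special cases are illustrative; the general branch assumes a positive base):

```python
import math

def pow_(x: float, y: float) -> float:
    if y == 0.5:
        return math.sqrt(x)            # the new SQRT llop
    if y == -0.5:
        return 1.0 / math.sqrt(x)      # rsqrt
    if y == int(y) and abs(y) <= 3:
        return x ** int(y)             # small integer powers: repeated multiply
    return math.exp(y * math.log(x))   # general case, valid for x > 0
```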
cloud11665
b7369ffcff add ptx formatter + syntax highlighter (#1128) 2023-07-05 17:56:09 -07:00
Reza Rezvan
d1356cac27 Fix: Jacobian tests [WIP] (#1126)
* Fix: Jacobian tests; num_jacobian is either buggy or not accurate enough;

* Fix: Jacobian tests;

* Fix: Gradcheck;
2023-07-05 15:36:22 -07:00
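For context, the numerical Jacobian these tests compare against is a finite-difference estimate, which is why its accuracy (and the choice of eps) matters; a sketch:

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-3):
    # central differences: J[i, j] ~= (f(x + eps*e_j) - f(x - eps*e_j))_i / (2*eps)
    # eps too small drowns in float noise; too large biases the slope
    y = f(x)
    J = np.zeros((y.size, x.size))
    for j in range(x.size):
        xp, xm = x.ravel().copy(), x.ravel().copy()
        xp[j] += eps
        xm[j] -= eps
        J[:, j] = (f(xp.reshape(x.shape)) - f(xm.reshape(x.shape))).ravel() / (2 * eps)
    return J
```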
nimlgen
d363d25ee2 fix imports for examples/transformer.py (#1136) 2023-07-05 08:15:13 -07:00