Commit Graph

469 Commits

Author SHA1 Message Date
geohotstan
cf1ec90ad4 add inverse trig functions to Tensor (#7805)
* implement inverse trig functions

* guess we should still test nans?

* magnitude as variable name :D

* reorder onnx_ops ops

* approximation -> x for consistency

* address feedback

* simpler acos

* improvement?

* actually just have asin depend on atan

* actually this is nicer

* remove a comment

---------

Co-authored-by: chenyu <chenyu@fastmail.com>
2024-11-21 09:13:36 -05:00
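The bullets above suggest asin is built on atan and acos on asin. A minimal plain-Python sketch of those identities (the actual tinygrad implementation may differ):

    import math

    def asin_ref(x: float) -> float:
        # identity hinted at by "have asin depend on atan": asin(x) = atan(x / sqrt(1 - x^2)), |x| < 1
        return math.atan(x / math.sqrt(1.0 - x * x))

    def acos_ref(x: float) -> float:
        # "simpler acos": acos(x) = pi/2 - asin(x)
        return math.pi / 2 - asin_ref(x)

    assert abs(asin_ref(0.5) - math.asin(0.5)) < 1e-12
    assert abs(acos_ref(0.5) - math.acos(0.5)) < 1e-12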
geohotstan
66a069ee25 add replicate mode to Tensor.pad (#7802)
* base implementation

* add tests

* actually remove the assertionerror test

* good
2024-11-20 08:39:58 -05:00
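Taken together with #7608 and #7677 below, Tensor.pad presumably ends up with a mode keyword; a hedged usage sketch:

    from tinygrad import Tensor

    x = Tensor([[1., 2.], [3., 4.]])
    # "replicate" repeats the edge values; "reflect" mirrors the interior values
    print(x.pad(((1, 1), (1, 1)), mode="replicate").numpy())
    print(x.pad(((1, 1), (1, 1)), mode="reflect").numpy())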
geohotstan
8100109c9d Add replicate mode to Tensor.pad (#7608)
* base implementation

* add tests

* actually remove the assertionerror test

* actually only have reflect for this pr

* change the 4 if-else one liner

* maybe use a lambda

* fix

* maybe a lil cleaner

* fix tests

* complete

* small change

---------

Co-authored-by: chenyu <chenyu@fastmail.com>
2024-11-18 10:55:38 -05:00
chenyu
df817297b6 fix: passing acc_dtype="" to Tensor.prod should fail (#7750)
similar to sum
2024-11-17 11:38:13 -05:00
chenyu
55707fd00d fix: passing sum_acc_dtype="" to Tensor.sum should fail (#7748) 2024-11-17 10:58:41 -05:00
chenyu
a15a900415 fix Tensor.meshgrid for 1D input and check indexing (#7740) 2024-11-16 23:39:30 -05:00
geohotstan
72a41095bc add Tensor.meshgrid (#7714)
* initial implementation and test

* some other places that can use meshgrid

* revert the onnx_ops change

* add to docs

* revert interpolate too

* update

* improve edge case test

* might as well test grad

* add to tests; can improve docs

---------

Co-authored-by: chenyu <chenyu@fastmail.com>
2024-11-16 23:06:47 -05:00
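A usage sketch, assuming the torch-like indexing keyword that the follow-up fix (#7740) validates:

    from tinygrad import Tensor

    x, y = Tensor([1, 2, 3]), Tensor([4, 5])
    gx, gy = x.meshgrid(y, indexing="ij")
    print(gx.shape, gy.shape)  # (3, 2) (3, 2)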
chenyu
f1efd84c92 fix repeat_interleave with negative dim (#7734) 2024-11-16 10:15:29 -05:00
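A small check of the fixed behavior, assuming the usual repeats/dim signature:

    from tinygrad import Tensor

    t = Tensor([[1, 2], [3, 4]])
    # a negative dim should now resolve the same way as the positive one
    print(t.repeat_interleave(2, dim=-1).numpy())  # [[1 1 2 2] [3 3 4 4]]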
chenyu
22da31b223 clean up Tensor.dot (#7728)
more docs (similar to numpy) and removed the many confusing `-min(n2, 2)` expressions
2024-11-15 18:21:15 -05:00
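The numpy-like semantics the new docs refer to, sketched:

    from tinygrad import Tensor

    a, b = Tensor([1., 2., 3.]), Tensor([4., 5., 6.])
    print(a.dot(b).item())  # 1-D x 1-D: inner product -> 32.0
    print(Tensor.ones(2, 3).dot(Tensor.ones(3, 4)).shape)  # 2-D x 2-D: matmul -> (2, 4)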
chenyu
4338c450ac fix max_pool2d for int tensor with padding (#7726)
padding with inf messed up the output dtype
2024-11-15 16:22:11 -05:00
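Max pooling presumably pads with -inf so the padding never wins the max, and that float pad value is what leaked into integer outputs. A hedged check of the fix:

    from tinygrad import Tensor, dtypes

    t = Tensor([[[[1, 2], [3, 4]]]], dtype=dtypes.int32)
    out = t.max_pool2d(kernel_size=2, padding=1)
    assert out.dtype == dtypes.int32  # padding no longer upcasts int inputs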
chenyu
9fb396f660 test_ops maxpool2d -> max_pool2d (#7696)
and avgpool2d -> avg_pool2d for easier grepping of the tests
2024-11-14 10:39:12 -05:00
geohotstan
f8056a74d6 combine pad2d with pad (#7677)
* I have pad2d, I have pad, uuh~, pad2dpad~

* fix some small things

* strategically placed cast hack

* fix more

* fix more more

* tests

* periods
2024-11-14 17:56:02 +08:00
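After the merge, a single pad presumably accepts both the old torch-style flat tuple and explicit per-axis pairs; a hedged sketch:

    from tinygrad import Tensor

    x = Tensor.ones(1, 1, 2, 2)
    a = x.pad((1, 1, 1, 1))                      # torch order: last dims first
    b = x.pad(((0, 0), (0, 0), (1, 1), (1, 1)))  # explicit per-axis pairs
    assert a.shape == b.shape == (1, 1, 4, 4)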
chenyu
333f5f9f8b Tensor.bitwise_not (#7688)
implemented with xor in tensor for now to avoid adding another op; also used it in Tensor.min to fix the int dtype edge case at -2**31
2024-11-13 16:31:52 -05:00
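A sketch of the xor trick the commit message describes (the bitwise_not_sketch helper is illustrative, not tinygrad's code):

    from tinygrad import Tensor, dtypes

    def bitwise_not_sketch(x: Tensor) -> Tensor:
        # xor against all ones: True flips bools, -1 is all ones in two's complement
        return x ^ True if x.dtype == dtypes.bool else x ^ -1

    print(bitwise_not_sketch(Tensor([0, 1, -2], dtype=dtypes.int32)).numpy())  # [-1 -2  1]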
chenyu
fb933b79a6 add test case for nll_loss with input > 2D (#7685)
* failed test case for nll_loss with input > 2D

* fixed

* add more
2024-11-13 14:34:07 -05:00
geohotstan
9c41c376d3 add Tensor.nll_loss (#7683)
* move nll_loss to new branch

* make nll_loss examples practical

* self *is*

* add to docs

* small
2024-11-13 13:12:13 -05:00
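A usage sketch, assuming a torch-like nll_loss that consumes log-probabilities:

    from tinygrad import Tensor

    logits = Tensor([[2.0, 0.5, 0.3], [0.1, 1.5, 0.2]])
    target = Tensor([0, 1])
    # log_softmax + nll_loss together give cross entropy
    print(logits.log_softmax(axis=1).nll_loss(target).item())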
chenyu
3c6fe4b79a fix Tensor.bitwise_and and Tensor.bitwise_or to support bool (#7684) 2024-11-13 13:10:39 -05:00
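A quick check of the fixed behavior on plain boolean masks:

    from tinygrad import Tensor

    a, b = Tensor([True, False]), Tensor([True, True])
    print((a & b).numpy())  # [ True False]
    print((a | b).numpy())  # [ True  True]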
James
d4e4a084a1 fix: Tensor min function for unsigned ints (#7675)
* add failing tests for uint8 `min()`

* fix unsigned data type min()

* fix test data

* fix whitespace

---------

Co-authored-by: rezaarezvan <reza@rezvan.xyz>
Co-authored-by: Jamesb <experimentallearning0@gmail.com>
2024-11-13 11:04:27 -05:00
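Why unsigned min breaks: reducing min as -max(-x) relies on negation reversing order, which wraps around for unsigned dtypes (and overflows at -2**31 for int32). The later bitwise_not commit (#7688) points at the integer-safe identity min(x) == ~max(~x). A hedged check:

    from tinygrad import Tensor, dtypes

    t = Tensor([3, 250, 7], dtype=dtypes.uint8)
    assert t.min().item() == 3  # previously wrong: -250 wraps around under uint8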
Reza Rezvan
23363dee55 Add: failing tests for uint8 min() (#7669)
* add failing tests for uint8 `min()`

* mark as expected failure
2024-11-13 22:12:53 +08:00
chenyu
c06a5a9c72 Tensor.linspace raises for dtype.bool (#7649)
also fixed an assert when passing a str dtype to randint
2024-11-11 23:05:14 -05:00
geohotstan
5eef59d732 add Tensor.linspace (#7609)
* add linspace

* shave off tests; crap, forgot to add to docs

* WHOOPS

* better tests
2024-11-12 10:29:36 +08:00
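A usage sketch, assuming a numpy/torch-like (start, stop, steps) signature; per #7649 above, a bool dtype now raises:

    from tinygrad import Tensor

    print(Tensor.linspace(0, 1, 5).numpy())  # [0.   0.25 0.5  0.75 1.  ]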
George Hotz
745316493c hotfix: add test_simple_conv2d_bias 2024-11-10 18:36:42 +08:00
George Hotz
205befa788 move is_dtype_supported to device [pr] (#7575) 2024-11-07 20:38:03 +08:00
geohotstan
934fb73994 fix test_schedule conv2d bug (#7549)
* tests tests tests

* slap a resolve on it

* fix comment
2024-11-05 09:07:25 -05:00
Ahmed Harmouche
36488a2a43 Use is_dtype_supported in more places in tests (#7529) 2024-11-04 09:21:15 -05:00
geohotstan
b1866cbfd9 failure test case for pool ops (#7483)
* add failure test case

* minimum case
2024-11-02 12:13:38 -04:00
geohotstan
585f3a0f24 Add isinf and isnan ops to Tensor (#7484)
* move isinf and isnan to new branch

* sneak a roll documentation fix in

* add to docs

* update test coverage for detect_positive and detect_negative

* add types to isinf args
2024-11-02 12:12:52 -04:00
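A usage sketch; per the bullets, isinf presumably takes detect_positive/detect_negative flags to match only one sign:

    from tinygrad import Tensor

    x = Tensor([1.0, float("inf"), float("-inf"), float("nan")])
    print(x.isinf().numpy())  # [False  True  True False]
    print(x.isnan().numpy())  # [False False False  True]  (the classic nan != nan trick)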
geohotstan
6513690223 Add Tensor.hardsigmoid (#7433)
* move hardsigmoid to new branch

* add to test

* add NOTE to mention differing values for alpha and beta that match torch

* shift from relu6

* correct shift implementation

* or we just use relu? no more 666
2024-11-01 08:36:52 -04:00
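The NOTE in the bullets refers to torch's definition, hardsigmoid(x) = relu6(x + 3) / 6, i.e. (alpha*x + beta).clamp(0, 1) with alpha=1/6 and beta=0.5:

    from tinygrad import Tensor

    x = Tensor([-4.0, 0.0, 4.0])
    print(x.hardsigmoid().numpy())  # [0.  0.5 1. ]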
chenyu
fb694a63eb Tensor.erf (#7419)
the same approximation used in onnx and in bert.
2024-10-30 18:12:28 -04:00
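"The same one used in onnx" likely refers to the Abramowitz & Stegun 7.1.26 polynomial; a plain-Python reference of that formula (not necessarily tinygrad's exact code):

    import math

    def erf_approx(x: float) -> float:
        # A&S 7.1.26: max absolute error about 1.5e-7
        a1, a2, a3, a4, a5 = 0.254829592, -0.284496736, 1.421413741, -1.453152027, 1.061405429
        t = 1.0 / (1.0 + 0.3275911 * abs(x))
        y = 1.0 - ((((a5 * t + a4) * t + a3) * t + a2) * t + a1) * t * math.exp(-x * x)
        return math.copysign(y, x)

    assert abs(erf_approx(1.0) - math.erf(1.0)) < 2e-7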
George Hotz
f3bd5cbf78 simplest migration of indexing [pr] (#7402)
* simplest migration of indexing [pr]

* fix locals/barrier
2024-10-30 20:58:18 +08:00
chenyu
f389e1a8a0 test more special values for sin/cos/tan [pr] (#7386) 2024-10-29 21:13:37 -04:00
George Hotz
3989bd2682 idiv + reciprocal [pr] (#7354)
* idiv + reciprocal

* remove upcast from div

* fix docs
2024-10-29 15:54:19 +08:00
George Hotz
d9d4dd6756 faster ci [pr] (#7348) 2024-10-29 14:01:44 +08:00
chenyu
0843734927 clean up nan handling in transcendental (#7332)
* clean up nan handling in transcendental

* skip remu crash
2024-10-28 16:21:49 -04:00
chenyu
cb5702f170 tiny cleanup to transcendental xexp2 (#7326)
also added test for exp and log of nan and inf
2024-10-27 21:54:20 -04:00
George Hotz
3c31497f55 instant isn't actually used [pr] (#7299)
* instant isn't actually used [pr]

* tolerance bump
2024-10-25 21:01:29 +08:00
chenyu
13575f080a remove bitcast backward in function.py (#7031)
bitcast cannot backward
2024-10-13 10:08:27 -04:00
Markiian Novosad
8831c691e2 Add slice parameter type checking to disallow Tensor usage for slices (#6967)
* add support for single el tensors for slices

* rm trailing spaces

* cleanup long lines

* remove tensor in slice support, add comprehensive err msg

* cleanup getitem, add slice type check

* Edit err message
2024-10-11 16:20:21 -04:00
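A hedged illustration of the new guard (the exact exception type and message may differ):

    from tinygrad import Tensor

    t = Tensor.arange(10)
    try:
        t[Tensor(3):5]  # Tensors are no longer accepted as slice bounds
    except TypeError as e:
        print(e)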
chenyu
e4c0743188 failed example for logcumsumexp (#6936)
need cummax for numerical stability
2024-10-07 10:55:45 -04:00
jeffzh4ng
19a7e41113 implement logcumsumexp (#6921)
* implement logcumsumexp

* change axis=None to axis=0
2024-10-06 10:45:36 -04:00
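Definition-wise, logcumsumexp(x) = log(cumsum(exp(x))). A hedged sketch that stabilizes with the global max along the axis, which is exactly the weakness #6936 above demonstrates (a running cummax is needed instead):

    from tinygrad import Tensor

    def logcumsumexp_sketch(x: Tensor, axis: int = 0) -> Tensor:
        m = x.max(axis=axis, keepdim=True)  # global max: stable only when values are comparable
        return (x - m).exp().cumsum(axis=axis).log() + m

    print(logcumsumexp_sketch(Tensor([0.0, 1.0, 2.0])).numpy())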
George Hotz
c178dc1071 faster uops ci [run_process_replay] (#6774) 2024-09-26 20:15:01 +08:00
George Hotz
e945fa9c5c put local on the PtrDtype [run_process_replay] (#6656)
* put local on the PtrDtype [run_process_replay]

* those are local too
2024-09-23 10:29:17 +08:00
Gaétan Lepage
f214bb140d test: relax tolerance of test_broadcastdot (#6560) 2024-09-17 03:26:39 -04:00
chenyu
b2c286f567 fix typing for test_ops (#6520)
mostly passed TYPED=1 python3 -m pytest -n=auto test/test_ops.py.

one last test specifically sets an invalid value to test the exception; to ignore that we would need to import typeguard, and to get a working version of typeguard we would have to drop the dependency on tensorflow_addons, which requires a very old version of typeguard
2024-09-15 06:18:36 -04:00
chenyu
7df4373fd9 tensor reduction touchup (#6402)
- fixing spacing
- use get_args to get valid Literal values and raise ValueError to match, and a test for that
- use `Y` to be consistent
2024-09-08 03:55:51 -04:00
Irakli Salia
2e01efc35f tensor roll (#6375)
* tensor roll function and tests

* fix type annotations

* reduce line count

* more readable
2024-09-07 05:14:28 +08:00
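A usage sketch, assuming a torch-like roll(shifts, dims):

    from tinygrad import Tensor

    t = Tensor([1, 2, 3, 4, 5])
    print(t.roll(2, 0).numpy())  # elements wrap around: [4 5 1 2 3]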
Tim Becker
dfb818788e Support reduction parameter in more loss functions (#6302) 2024-09-07 05:11:20 +08:00
Oleg Rybalko
64f1384f5b Einsum ellipsis support (#6333)
* working ellipsis expansion

* refactor

* fix commas in output

* add capital letters

* refactor
2024-09-05 10:08:55 +08:00
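With this PR, "..." presumably stands for any leading broadcast dims, as in numpy/torch:

    from tinygrad import Tensor

    a, b = Tensor.ones(4, 2, 3), Tensor.ones(4, 3, 5)
    print(Tensor.einsum("...ij,...jk->...ik", a, b).shape)  # batched matmul -> (4, 2, 5)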
nimlgen
326a77336e qcom remove some tests skips (#6353) 2024-09-04 15:38:18 +03:00
Vyacheslav Pachkov
4c33192a8b add qcom runtime (#5213)
* qcom: driver init

* autogen stubs for msm_kgsl; also fix up ioctls to show numbers instead of _IOW macros

* autogen: add adreno commands and registers

* ops_qcom: QcomAllocator + signals

* fix EDEADLK in hwqueue, init timestamps, use opencl compiler for qcom

* qcom: we do not really need all these constants; input/output is enough

* qcom: perfctr for CS (do not really need all the rest)

* qcom: HALFREGFOOTPRINT and FULLREGFOOTPRINT are set to be around max

* qcom: explicitly set instruction len based on the shader size

* ops_qcom: Program init

extracts shader from open cl binary
sets input/output buffers
allocates stack
sets cs mode
runs shader

* use data64_le from helpers

* ops_qcom: use fill_kernargs for filling i/o buffers

* ops_qcom: add QcomCopyQueue just for api & set kernargs_args_offset

* new signals & fix exec

* add QCOM to the list of supported devices

* correct QcomComputeQueue._wait using CP_WAIT_REG_MEM

* fix exec, synchronize before copyout

* correct setting num_units for ST_SHADER

* fix gpu hangs on sigs with CP_MEM_WRITE, it is uncached mem anyway

* extract offsets to kernel arguments from opencl binary

* extract constants values and offsets from opencl binary

* handle KGSL_MEMFLAGS_USE_CPU_MAP correctly

* align kernel name to 4 bytes when skipping kernel opencl struct

* skip to consts directly using an offset from opencl binary header

* fix alloc

* get halfreg and fullreg from opencl bin

* set unmultiplied global sizes as kernel group in HLSQ_CS_NDRANGE

* parse prg offset from open cl binary

* save loc with HLSQ_CS_CNTL. set this with HLSQ_CONTROL_2_REG

* support for vals in _fill_kernargs

* support 16-bit constants

* use KGSL_CONTEXT_NO_FAULT_TOLERANCE for contexts

this helps avoid falling over when executing big kernels

    /* Don't time out if the context has disabled it */
    if (drawobj->context->flags & KGSL_CONTEXT_NO_FAULT_TOLERANCE)
        return;

* minor changes of _exec

* QCOMRenderer

* disable HCQGraph for demo. TODO: support HCQ update api

* support HCQ

- remove copy queue
- add updates
- add strides for buffs and vars for QCOM

* bufs_stride

* clean ups

* linter

* call super().__init__(value) in QcomSignal

* disable=unused-import

* mypy

* type ignore when queue is on the device

* fix

* query gpu_id.
Will be useful for selecting commands e.g. CP_EVENT_WRITE vs
CP_EVENT_WRITE7

* working timestamps

* free context after device is done

* move gpu stack to the device

* reserve some space with lib_gpu for gpu to write to

this fixes test_interpolate_bilinear

* exclude tests that fail with GPU=1 on qualcomm

* lint

* unmap mem in _gpu_free

* ctxt priority and preemption policy

* remove old qcom

* pass size to self.device.allocator.free

* skip tests only on qcom

* use kgsl and adreno defines instead of numeric vals

* use allocator for allocating lib_gpu

* update to QcomArgsState from master

* intermediate commit while conquering images

* enable image tests on qcom

* fix shader disasm size, dump textures stuff

* working images

* allow signals to be 0

* set branchstack from OpenCL binary

Co-authored-by: nimlgen <138685161+nimlgen@users.noreply.github.com>

* set shared memory size from OpenCL binary

Co-authored-by: nimlgen <138685161+nimlgen@users.noreply.github.com>

* update images in QcomArgsState & less loc for images

* set stack sizes from OpenCL binary

Co-authored-by: nimlgen <138685161+nimlgen@users.noreply.github.com>

* stack allocation based on OpenCL binary

Co-authored-by: nimlgen <138685161+nimlgen@users.noreply.github.com>

* better autogen for kgsl and adreno. no more bitshifts

Co-authored-by: nimlgen <138685161+nimlgen@users.noreply.github.com>

* cleanup commit for parse cl lib

Co-authored-by: nimlgen <138685161+nimlgen@users.noreply.github.com>

* dont forget actual generated files

* refactor + less loc

Co-authored-by: nimlgen <138685161+nimlgen@users.noreply.github.com>

* device.py back

* lint

* ruff

* timestamp divisor

Co-authored-by: nimlgen <138685161+nimlgen@users.noreply.github.com>

* fix tex fmt & round global size

Co-authored-by: nimlgen <138685161+nimlgen@users.noreply.github.com>

* dtypes

* 19.2MHz

* -1 loc in _update_exec

* remove noqa

---------

Co-authored-by: George Hotz <72895+geohot@users.noreply.github.com>
Co-authored-by: nimlgen <138685161+nimlgen@users.noreply.github.com>
2024-09-02 19:35:47 +03:00
pedro
7de4eac8f7 add support and tests for nearest modes in interpolate, adapt uint8 bilinear to torch implementation (#6308)
* add `nearest` mode to interpolate

matching pytorch `nearest`, which is known to be buggy

+ relevant TestsOps

* add `nearest-exact` mode to interpolate

matching pytorch `nearest-exact`

+ relevant TestOps

* fix uint8 bilinear interpolation

by matching torch's custom implementation

* implement uint8 lerp with torch interpolation trick

without converting it to float
2024-08-28 21:59:51 -07:00
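A usage sketch of the new modes, assuming torch-like mode names on Tensor.interpolate:

    from tinygrad import Tensor

    x = Tensor.arange(16).float().reshape(1, 1, 4, 4)
    print(x.interpolate(size=(8, 8), mode="nearest").shape)        # (1, 1, 8, 8)
    print(x.interpolate(size=(8, 8), mode="nearest-exact").shape)  # (1, 1, 8, 8)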