* quick math: 0 + x = x.
* gradient w.r.t. x using cherry for conv
* gradient w.r.t. w for conv on cherry but doing vector dot products
* small optimization
* [cherry] optimize conv backpass for large channel count
* get rid of numpy einsum
* added resnets
* fix minor
* fix minor
* resnet in models
* added resnet test
* added resnet train test
* added linear, conv2d nn tests
* fix minor in extra/training
* resnet in models
* fix minor
* fix tolerance for linear in nn test
* fix eval; this was causing CPU and GPU UT failures
* revert transformer test
* fix minor for CPU test
* improved model get_params for sequential layer
* fix minor for params counting
* commented broken ops tests
* improved train for resnet
* ops_risk
* risk sim
* guessing is for winners
* minor
* better
* matmul with risk
* conv doesn't work
* closer
* conv2d works
* ops_risk
* opt2 works
* opt1 may not be possible
* opt1 is a mulacc
* arty
* attosoc example building on mac
* minor
* riscv assembler
* gucci gang
* we got C code
* not a scam
* hello
* make risk mergeable into master
* unop support
* Split tests
Split tests into "Test CPU" and "Test GPU".
Add a test flag "TEST_DEVICES", which is a comma-separated list of devices:
CPU,GPU,ANE
* Run tests based on provided TEST_DEVICES flag
By default, all of "CPU,GPU,ANE" will run.
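Roughly, the flag could be consumed like this (a minimal sketch; `device_enabled` is an illustrative helper, not the actual harness code):

```python
import os

# TEST_DEVICES is a comma-separated list, e.g. TEST_DEVICES=CPU,GPU
# When unset, every device is tested.
TEST_DEVICES = os.getenv("TEST_DEVICES", "CPU,GPU,ANE").split(",")

def device_enabled(name):
  return name in TEST_DEVICES
```

e.g. `TEST_DEVICES=CPU pytest -s -v` would then run only the CPU suite.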
* fix bad quote
* Revert changes and use GPU=1
This is done by setting the default Tensor device to Device.CPU unless
GPU=1 is set.
Run GPU tests: GPU=1 pytest -s -v
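What the GPU=1 switch boils down to, as a sketch (the actual wiring of the default Tensor device lives in the library):

```python
import os

# GPU=1 in the environment selects the GPU for the whole test run;
# anything else (or an unset flag) keeps the default on the CPU.
GPU = os.getenv("GPU", None) is not None
DEFAULT_DEVICE = "GPU" if GPU else "CPU"
```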
* 2serious
* load/save
* fixing GPU
* added DEBUG
* needs BatchNorm or doesn't learn anything
* old file not needed
* added conv biases
* added extra/training.py and checkpoint
* assert in test only
* save
* padding
* num_classes
* checkpoint
* checkpoints for padding
* training was broken
* merge
* rotation augmentation
* more aug
* needs testing
* streamline augment; augmentation is fast, so use bicubic
* tidying up
* transformer eval
* axis=-1
* transpose
* test for permutation using torch.movedims
* another test
* line
* Update all devices to be tested
ANE, CPU and OCL all now support all tests.
However, tests are not currently passing on GPU and I cannot test on CPU.
The failing GPU tests are not caused by this update; they have not been
passing due to a missing required "six" installation.
OpenCL Tests have not been run since commit: 1a1c63a08b
Devices have 3 types and are handled by a new DeviceTypes enum. (The goal
is to revert to Tensor.<type>, but this current setup allows for keyword
argument defaults: `device=DeviceTypes.CPU`.)
All references to Tensor.GPU/CPU/ANE have been converted to the
corresponding `DeviceTypes` enum.
Refactored the conversion code to allow any-device-to-any-device
conversion.
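A minimal sketch of the enum and the keyword-argument default it enables; `randn` here is only an illustrative constructor, not the library's API:

```python
from enum import Enum

class DeviceTypes(Enum):  # later renamed to plain Device
  CPU = 0
  GPU = 1
  ANE = 2

# Enum members can be used as keyword-argument defaults, which is the
# reason given above for not reverting to Tensor.<type> yet.
def randn(*shape, device=DeviceTypes.CPU):
  print(f"allocating {shape} on {device.name}")

randn(3, 3)                          # defaults to CPU
randn(3, 3, device=DeviceTypes.GPU)  # explicit device
```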
* Add six dependency in requirements.txt
* Resolve failure to run tests
Move six into the GPU required installs. Remove six from the standard
installation.
* Remove repeated data conversion
* Refactor method names
Also reduce code with `.to` and `.to_`.
* Dynamic device handlers
* Refactor DeviceTypes -> Device
* Add mem copy profiling back
* test_backward_pass_diamond_model passing
* Resolve Sum issue on GPU
* Revert batchnorm2d tests
* Update README with updated API
* ANE testing with
* Last minute line gains
* Consistent GPU classes
Convert the existing GPU classes into one standard format.
Remove duplicated functions in `test_mnist` and create a TestMNISTGPU
class. This reduces line count and ensures consistency.
Use `@unittest.skipUnless(GPU, "Requires GPU")` instead of `if GPU:` to
skip GPU testing. This will ensure that skipped tests are displayed
accordingly in the pytest output.
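The skip pattern in a nutshell (only the decorator and the class name come from the change above; the test method is illustrative):

```python
import os
import unittest

GPU = os.getenv("GPU", None) is not None

@unittest.skipUnless(GPU, "Requires GPU")
class TestMNISTGPU(unittest.TestCase):
  def test_conv_gpu(self):  # illustrative test body
    ...
```

Unlike an `if GPU:` guard, the decorated class shows up as skipped in the pytest output instead of silently disappearing.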
* Optim Testing now supports GPU
* Tensor testing now supports GPU
jacobian and gradcheck are auto-skipped until GPU float64 support is added.
* GPU support for custom constructor methods
* Remove GPU flag from Model constructors
It was requested that the `gpu` kwarg be removed from the model
constructor. GPU conversion is now handled in the train function.
This also required converting the Optimizer parameters, as they are
constructed prior to execution of the `train` function and are dependent
on the model's GPU state.
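A self-contained sketch of the resulting flow; apart from the dropped `gpu` kwarg on the model and the `get_parameters`/`.to_` names, everything here is a stand-in:

```python
from enum import Enum

class Device(Enum):             # stand-in for the device enum above
  CPU = 0
  GPU = 1

class Param:                    # stand-in for a Tensor parameter
  def __init__(self): self.device = Device.CPU
  def to_(self, device): self.device = device  # in-place move

def get_parameters(obj):        # stand-in for the repo's utility
  return list(getattr(obj, "params", []))

def train(model, optimizer, steps, gpu=False):
  # The optimizer captured its parameter list at construction time, so its
  # parameters must be moved to the GPU together with the model's.
  if gpu:
    for p in get_parameters(model) + get_parameters(optimizer):
      p.to_(Device.GPU)
  for _ in range(steps):
    pass                        # forward / backward / optimizer.step() here
```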
* Fix typo: float32->float64
* Clean `get_parameters` utility
Just a quick refactor w/ the new support for optimizers.
* Remove GPU kwarg from TinyNet
Remove the `gpu` kwarg from TinyNet to match the test_mnist `train` function.
* streamlined numerical_jacobian
* Got rid of the g loop in Conv2D.forward
* erased stupid line
* nothing
* no loops in Conv2D forward
* Conv2D backprop improved
* stupid things in examples
* alternative to einsum
* Conv2D backward einsum alternative
* tidying up
* tidied up
* no ravel
* got rid of print
* Update efficientnet.py
* Update efficientnet.py
* Update efficientnet.py
* only tensordot
* 255.0
* whitespace
* aspect ratio error in efficientnet
* noprint
* efficientnet wrong strides
* broadcasting for backward ops
* Update ops.py
* Update ops.py
- was wrong
* broadcast test for backward enabled
* function adBC + not summing over already 1 axis
* spacing
Co-authored-by: Marcel Bischoff <marcel@Marcels-iMac.local>
* allow for general broadcasting of binary operations. can handle any situation where corresponding dimensions between the tensors match, or at least one of them is of size 1. if a tensor has fewer dimensions than the other, then its shape is padded with 1s until they have the same number of dimensions (see the shape sketch just below). also refactored buffer_zeros() by creating a function buff() that makes a buffer from a numpy array
* remove extra tabs
Co-authored-by: phillip <phillip_bement@reedbement.com>
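The broadcasting rule from the entry above, as a shape-only sketch (NumPy is used only to cross-check the result; the actual kernels run on the GPU):

```python
import numpy as np

def broadcast_shape(xs, ys):
  # Pad the shorter shape with leading 1s so both have the same rank.
  xs, ys = tuple(xs), tuple(ys)
  if len(xs) < len(ys): xs = (1,) * (len(ys) - len(xs)) + xs
  if len(ys) < len(xs): ys = (1,) * (len(xs) - len(ys)) + ys
  out = []
  for a, b in zip(xs, ys):
    # Corresponding dims must match, or at least one of them must be 1.
    assert a == b or a == 1 or b == 1, f"cannot broadcast {a} and {b}"
    out.append(max(a, b))
  return tuple(out)

# (3, 1, 4) against (7, 4): pad to (1, 7, 4), broadcast to (3, 7, 4)
assert broadcast_shape((3, 1, 4), (7, 4)) == (3, 7, 4)
assert broadcast_shape((3, 1, 4), (7, 4)) == np.broadcast(np.empty((3, 1, 4)), np.empty((7, 4))).shape
```

On the backward pass each input's gradient is summed back over the axes where that input had size 1, which restores the original shapes.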
* Pad2d backward pass on GPU
* Faster Pad2D GPU backward pass (no zeroing needed)
* Fix out of bounds error
* Don't save prg
* Let compiler optimize division by 1
* More generic broadcasting (1s at the start)
* Bug fix
* Add comment
* Try to fix flaky test with other method
* Add mixed broadcast support
* 1kernel
* Separate broadcast tests
Co-authored-by: holonomicjl <58403584+holonomicjl@users.noreply.github.com>