* added SPPF module from yolov8
* added conv_block, bottleneck modules
* cleaned modules
* c2f example
* SPPF changes
* C2f
* fixed and tested bottleneck
* improved detect class
* tested SPPF and conv
* checked c2f
* DFL structure
* fixed dfl
* added dist2bbox function
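A minimal sketch of what a dist2bbox-style helper does: it turns per-anchor distances to the four box edges into box coordinates. Names, the tuple-based signature, and the `xywh` flag are illustrative, not the exact tinygrad API.

```python
def dist2bbox(distance, anchor_point, xywh=True):
    # distance: (left, top, right, bottom) offsets from the anchor point
    l, t, r, b = distance
    ax, ay = anchor_point
    x1, y1 = ax - l, ay - t  # top-left corner
    x2, y2 = ax + r, ay + b  # bottom-right corner
    if xywh:
        # return center-x, center-y, width, height
        return ((x1 + x2) / 2, (y1 + y2) / 2, x2 - x1, y2 - y1)
    return (x1, y1, x2, y2)
```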
* added and tested make_anchors function for the head
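The make_anchors helper for the head can be sketched like this: one anchor center per grid cell for each feature-map level, plus the matching stride. Plain-Python lists stand in for tensors; the signature is an assumption.

```python
def make_anchors(feat_shapes, strides, grid_cell_offset=0.5):
    # feat_shapes: list of (h, w) per detection level; strides: matching ints
    anchor_points, stride_list = [], []
    for (h, w), s in zip(feat_shapes, strides):
        for y in range(h):
            for x in range(w):
                # anchor sits at the center of each grid cell
                anchor_points.append((x + grid_cell_offset, y + grid_cell_offset))
                stride_list.append(s)
    return anchor_points, stride_list
```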
* keeping functions above
* creating the detection head
* fixing head
* untested blocks a. scale_boxes b. clip_boxes c. xywh2xyxy d. box_iou
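Two of the blocks named above can be sketched in plain Python (tuples instead of tensors; signatures illustrative): xywh2xyxy converts center/size boxes to corner format, and box_iou is intersection over union.

```python
def xywh2xyxy(box):
    # (center-x, center-y, w, h) -> (x1, y1, x2, y2)
    cx, cy, w, h = box
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

def box_iou(a, b):
    # a, b in (x1, y1, x2, y2) format
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)
```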
* head works
* structure fix
* added darknet (backbone)
* yolov8 neck, and initialize bias function for detection
* fixed spacing
* yolov8 class, init bias, and fixed c2f
* forward pass almost working
* fixed net structure
* init bias not needed, forward pass working
* load weights boilerplate
* load weights done?
* all variants loading!
* post process: clip_boxes, scale_boxes, xywh2xyxy, and box_iou(untested)
* fix scale_boxes
* box_iou fixed and tested
* created the pre nms function
* fix nms
* fixed load weights, apparently the latest commit broke something, excluding num_batches_tracked
* added letterbox and pre_transform for pre_process function
* fixed letterbox, pre_transform and added preprocess function
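The core arithmetic of a letterbox preprocess is computing one scale ratio and symmetric padding so the image fits the target size without changing aspect ratio. A sketch of that computation only (function name and return shape are assumptions):

```python
def letterbox_params(shape, new_shape=(640, 640)):
    # shape: (h, w) of the source image
    # scale by the smaller ratio so the whole image fits inside new_shape
    r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
    unpad = (round(shape[0] * r), round(shape[1] * r))
    dh, dw = new_shape[0] - unpad[0], new_shape[1] - unpad[1]
    # half the leftover space is padded on each side
    return r, (dh / 2, dw / 2)
```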
* custom NMS done, integrated prepare_boxes and nms, improved box_iou
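A custom NMS of the kind described is typically greedy IoU suppression: keep the highest-scoring box, drop everything overlapping it above a threshold, repeat. A minimal sketch under those assumptions (not the exact tinygrad implementation):

```python
def nms(boxes, scores, iou_thres=0.45):
    # boxes: list of (x1, y1, x2, y2); returns indices kept, best score first
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        ua = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
        return inter / (ua + 1e-9)
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        i = order.pop(0)          # best remaining box survives
        keep.append(i)
        # suppress boxes that overlap it too much
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thres]
    return keep
```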
* added postprocess function till parsing
* added draw_bounding_boxes_and_save function
* testing full flow
* using fetch for class names
* fixed make_anchors + all tinygrad now
* added command line arguments, weight downloading
* single image for now only
* made draw boxes more efficient
* made NMS functions efficient
* made compute_transform better
* v8 working now, inference is done
* prints objects detected in console now
* fixed image loading (pre processing)
* batch post processing
* created initial tests
* fixed bounding box thickness AND added get_detected_classes_with_frequency function
* cleaning for testing
* two tests
* added url option for image, removed need for specifying arguments
* tests complete, but lots of things are printed on screen by ultralytics
* remove parse arguments
* fixed weight location
* fixed colours of classes, and black font when high brightness
* minor changes
* TODOs for later
* removed use of torch, using .npz weights
* fixed tests
* one path for fetch
* preprocess now in tinygrad, plus test fix for that
* updated tests
* fix tests
* no class labels needed
* Add files via upload
* Update showcase.md
* Update showcase.md
* added safetensors as weights, and tests fix for that
* safetensors test
* using safe_load
* using tinygrad functions now to load weights
* update tests
---------
Co-authored-by: r3sist-uniq <amanmatreja@gmail.com>
Co-authored-by: r3sist <72573738+r3sist-uniq@users.noreply.github.com>
* Revert "Revert "ops rdna""
This reverts commit 0400315078.
* Revert "Revert "writing 2""
This reverts commit 325a3bf2cf.
* no dump
* 2x 2
* simple asm
* local size
* sub
* lil work
* support args != 3
* assembler work
* generate that
* ptx assembler
* begin index renderer
* max
* ptx loops
* gemms work
* valid works
* asm working a bit more
* close
* passing all ops tests
* ptx is a codegen only, not a backend
* ptx
* float16 support
* rdna goes here
* install types
* make amd disassemble
* ansilen for pretty print
* fix ptx log2/exp2
* assemblyinstruction
* new asm
* working gemm
* fix cmp
* more passing
* mod
* ptx works again
* rdna3 add works
* log exp
* sin is sin 2pi
* fix types
* progress
* loops work
* rdna xyz
* better addressing
* cleanups
* handle exception in early process
* div support
* rdna float4
* locals work
* fix neg index
* cast
* smaller diff
* yaml
* import only if selected
* fromimport
* types
* this all needs rewriting
* a few more
* initial commit
* added osx check for opencl
* added llvm f64 conversions
* typo in llvmir
* more tests and modified unsupported error
* fixed linting error
* added pragma fp64
* simplified exclusion for OSX
* fixed device check and also added it to cast func
* added ifdef check for fp16 in ops_gpu
* Revert "added ifdef check for fp16 in ops_gpu"
This reverts commit 92de754d48.
* f64 prekernel signature match f16
* moved condition to buffer init
* resolved some slice test errors and added some more debugging logs
* use same device in cumsum
* increased float priority
* onnx debug output matches input
* add cumsum with n-dim inputs, over arbitrary axis + relevant tests
* increased rtol for cumsum test
* move test_cumsum into test_ops
* skip arange test for images as relies on cumsum
* Fix typo
* rewrite cumsum to work with images
* safetensors test
* safe_save
* load back with real safetensors
* bugfix in device name. add simple torch_load
* it works for llama, but it's slower...
* mmap
* no intermediate
* load mmaped
* readinto speed
* not ready yet
* revert that
* add and reorganize test_slice_* tests
* refactor Tensor.__getitem__()
* preliminary tests for 1) 0D tensors and 2) varargs for Tensor.zeros and Tensor.ones
* always compare shapes of the numpy arrays obtained from tinygrad and torch tensors
* add more tests for 0D support
* remove test_tensor.test_slicing(). All slicing tests at test/test_ops.py
* add zero-dim support
* make test_end2end.py consistent with 0dim support
* add test for tensor with zero in shape
* don't simplify ones if shape is ()
* skip tests that need zero-size tensor support.
- zero-size tensor support not related to 0dim tensors.
* add tests for __getitem__() supporting strides >= 1
* refactor __getitem__: support for strides >= 1
* minor refactors and add comments to __getitem__
* add tests for slices with negative steps
* add support for slices with negative strides
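One standard way to support negative-stride slices in a `__getitem__` that only handles positive strides is to decompose them into a flip plus a positive-step slice. A sketch of that decomposition for explicit non-negative bounds (function name and constraints are assumptions):

```python
def slice_neg_step(seq, start, stop, step):
    # express seq[start:stop:step] (step < 0, 0 <= stop < start < len(seq))
    # as a flip followed by a positive-step slice
    assert step < 0 and 0 <= stop < start < len(seq)
    n = len(seq)
    flipped = seq[::-1]
    # element at original index i lives at flipped index n - 1 - i
    new_start = n - 1 - start
    new_stop = n - 1 - stop
    return flipped[new_start:new_stop:-step]
```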
* Added few missing return typehints for tensor.py
* added test for empty tensor for Tensor.numel()
* fixed missing numel call in test_numel
---------
Co-authored-by: deefi <dee7ine@gmail.com>
* added metal int64 and some simple tests
* removed bool return type def
* typo in test
* also missing in clang and gpu runtimes
* switched order for opencl
* increased atol and removed new line in kernel prefix
* added kaiming_uniform init for conv2d and linear layers
* fix: set getattr
* up
* fix: set getattr
* fix comments
* better does not mean it is good
* more nonlinearities
* added test
checks the distribution of default relu option
* prettier
* fix kernel size
* edit distribution of returned tensor
* complete tests and fix fan_mode
* added higher dim test
* prettier test
* fix silly blank
* just leaky_relu mode
* default fan in and leaky relu
* update params
* fix test
* shorter
* generalize Tensor.uniform and adjust kaiming init
- added low and high parameters to Tensor.uniform function, so it can have a specific range (default is 0 to 1)
- adjusted return line of kaiming_uniform
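The arithmetic behind a kaiming_uniform built on a generalized uniform is a single bound: sample from U(-bound, bound) with bound = gain * sqrt(3 / fan_in), where for the default leaky_relu mode gain = sqrt(2 / (1 + a^2)). A sketch of just that bound (names are illustrative):

```python
import math

def kaiming_uniform_bound(fan_in, a=0.01):
    # gain for leaky_relu with negative slope a
    gain = math.sqrt(2.0 / (1.0 + a * a))
    # uniform range is (-bound, bound)
    return gain * math.sqrt(3.0 / fan_in)
```

With this bound in hand, a range-aware `Tensor.uniform(low=-bound, high=bound)` produces the kaiming init directly.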
* range from -1 to 1
* delete comment
* adjusted test_uniform
* fixed
* delete comment
* use tensor dtype for zeros_like()
* add tests for zeros_like dtype
* iterate over dtypes
* remove space
* remove print
* fix test, iterate over a list
* feat: int8 support
* feat: uint8 support
* feat: int8 tests
* fix: fix uint8 on clang
* feat: test casting between int8/uint8/float16/float32
* clean: way cleaner dtype tests
* feat: preprocess_imagenet using the correct dtype
* feat: add test for overflow between uint8 and int8
* Add ResNet inference test and cannon
* Test with ResNet50
* test_car works with resnet fix
* Add KiTS19 dataset
* KiTS19: Implement iterate
* No batch load for this dataset
* Save results on iterate
* Implement dice score
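The dice score being implemented is 2|A∩B| / (|A| + |B|) over binary masks. A flat-list sketch (the epsilon guard against empty masks is an assumption):

```python
def dice_score(pred, target, eps=1e-6):
    # pred, target: flat 0/1 masks of equal length
    inter = sum(p * t for p, t in zip(pred, target))
    return (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)
```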
* Add data prep and eval functions
* Resolve shape issue
* Conversion works but wrong values
* Segfaults when load_from_pretrained is called
* Fix segfault and assign properly
* Final result generated, though very slow
* Store and load final result to save time
* Fix typo in finalize
* Score computes
* More bug fixes, dice score is very low
* Working broken code
* Assign output values to result
* Getting a much higher score now
* Fix dataset preprocessing
* Mean DICE score of 88.5
* Ugh, typo
* Attempt to reimplement model
* Rename layers
* Tiny model works, kinda
* Accuracy? gone
* Implement InstanceNorm and match torch
* Test instance norm 2d and 3d
* Combined input block with downsample block
* Tiny model works, support strided convtranspose
* Commands to download dataset
* Clean up a bit
* unet3d_v2 -> unet3d
* Remove duplicated code
* Oops, put tests back
* lr schedulers + test
* lr scheduler test moved + integration test
* integration test for all lr scheduler
* lr scheduler test now deterministic
* changed optimizer + parameters for lr sched test
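One of the simplest lr schedulers of the kind being tested is step decay: multiply the learning rate by gamma every step_size epochs. A minimal sketch (class shape is illustrative, not the tinygrad extra API):

```python
class StepLR:
    # step-decay scheduler: lr * gamma ** (epoch // step_size)
    def __init__(self, lr, step_size, gamma=0.1):
        self.base_lr, self.step_size, self.gamma = lr, step_size, gamma
        self.epoch = 0

    def step(self):
        # advance one epoch and return the current learning rate
        self.epoch += 1
        return self.base_lr * (self.gamma ** (self.epoch // self.step_size))
```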
* optimizations in symbolic.py
* fix infinite recursion when expanding sums
* add test case to make sure NumNodes are hoisted up in cases where MulNodes cancel each other out