Commit Graph

4147 Commits

Author SHA1 Message Date
nmarwell26
12ce68c1ee Renamed examples/yolo to examples/vgg7_helpers because that directory contains no yolo-related code and only helper code for vgg7. This was confusing to new users trying to understand the examples. (#1086) 2023-07-01 12:04:28 -07:00
Rob Grossman
2533a992e7 remove unused imports in models (#1088) 2023-07-01 12:04:19 -07:00
geohotstan
575f75f613 hello (#1084) 2023-07-01 01:29:35 -07:00
foreign-sub
574cbda979 Quickstart (#1015)
* fix quickstart md

* add quickstart to ci
2023-06-29 13:26:58 -07:00
Roelof van Dijk
542b2d93a5 Perf/cache string ops (#1078)
* perf: remove extra function, include in cached getitem

* perf: only calculate hash once per node

---------

Co-authored-by: Roelof van Dijk <roelof.van.dijk@vitestro.com>
2023-06-29 13:23:11 -07:00
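A minimal sketch of the idea behind the "only calculate hash once per node" change above, assuming a node whose identity comes from its rendered string (the class here is hypothetical, not the one in symbolic.py): compute the hash once at construction so repeated dict/set lookups reuse it.

```python
class Node:
    def __init__(self, expr: str):
        self.expr = expr
        # Hash the rendered string once at construction; later lookups reuse it.
        self._hash = hash(expr)

    def __hash__(self) -> int:
        return self._hash

    def __eq__(self, other) -> bool:
        return isinstance(other, Node) and self.expr == other.expr
```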
George Hotz
e234bf2298 hip matmul : add K support 2023-06-28 19:54:33 +00:00
George Hotz
0e93b9642a hip matmul 2023-06-28 19:21:01 +00:00
Jacky Lee
754e54ebb9 Fix Tensor ceil and floor for whole numbers (#1071)
* Works on non-special numbers

* Test different cases
2023-06-27 23:22:17 -07:00
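For context on why whole numbers are the special case for ceil and floor: a naive implementation that truncates and then always steps by one is off by one exactly when the input is already an integer. A plain-Python illustration (not the tinygrad Tensor code):

```python
def floor(x: float) -> float:
    t = float(int(x))                            # truncate toward zero
    return t if x >= 0 or x == t else t - 1.0    # only step down for negative non-integers

def ceil(x: float) -> float:
    t = float(int(x))
    return t if x <= 0 or x == t else t + 1.0    # only step up for positive non-integers

assert floor(2.0) == 2.0 and ceil(-3.0) == -3.0  # whole numbers map to themselves
assert floor(-2.5) == -3.0 and ceil(2.5) == 3.0
```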
George Hotz
1f5d45ca8c imagenet loader minor cleanups 2023-06-28 05:08:09 +00:00
George Hotz
6ec0a24706 imagenet eval in 1 min 28 sec 2023-06-28 04:23:26 +00:00
George Hotz
9fabdbd054 speed (#1070) 2023-06-27 20:28:57 -07:00
George Hotz
d16c16ec28 new upcast works (#1066)
* new upcast works

* float4 try

* fix unaligned float4

* disallow unaligned access

* upcast dim

* maybe good now

* fix gpu half

* vstore_half4

* fix deep image bugs

* improve symbolic to fix issues

* fix symbolic

* cl test

* this maybe

* gcd of 1 is 1

* real fix for old python

* improve fuzzer
2023-06-27 19:34:53 -07:00
ernie
4d703be6d7 fix typo (#1065) 2023-06-27 10:56:54 -07:00
George Hotz
70c07dfea5 5k line max (#1064) 2023-06-27 10:53:18 -07:00
George Hotz
c8d87eb8d4 strip whitespace 2023-06-27 10:11:43 -07:00
Rayan Hatout
23648538fa fix folding of float4 add/mul (#1060) 2023-06-26 20:59:29 -07:00
George Hotz
a98e361da0 torch speed test, add add 2023-06-26 18:55:27 -07:00
George Hotz
3e33befc1d realize hotspots (#1059)
* realize hotspots

* no str check

* minor changes

* make this an assert

* faster and more readable

* nicer self.buffers

* tests for weak op + LAZYCACHE=0
2023-06-26 18:31:18 -07:00
George Hotz
2977fb17f6 various touchups (#1058)
* op isn't optional

* barrier + named local buffers

* end global and local loop together to avoid useless if statement

* better comments
2023-06-26 15:41:23 -07:00
George Hotz
f265e8523a movement ops aren't really ops (#1056) 2023-06-26 15:01:28 -07:00
Rayan Hatout
65cbaa3429 no need to slice A and B twice in LLaMa complex multiplication (#1054) 2023-06-26 14:42:58 -07:00
George Hotz
571089f10e Back off minor speed stuff for simplicity (#1053)
* passing in buffers doesn't increase speed

* functools.reduce

* no more get_buffers
2023-06-26 14:42:17 -07:00
Rayan Hatout
dedbd970aa Optimizations in lazy.py (#987)
* optimizations in lazy.py

* make mypy happy with stubs and fix the graph import hack

* merge conflict in helpers.py
2023-06-26 13:55:42 -07:00
Roelof van Dijk
8bea6b6d35 perf/refactor_weakops (#1052)
Co-authored-by: Roelof van Dijk <roelof.van.dijk@vitestro.com>
2023-06-26 10:13:33 -07:00
Roelof van Dijk
8c65f9324c refactor: print formatting for llama timing (#1050)
* refactor: print formatting for llama timing, report median and individual runs

* feat: back to mean

* fix: whitespace

* fix: add mean to print

---------

Co-authored-by: Roelof van Dijk <roelof.van.dijk@vitestro.com>
2023-06-26 09:49:31 -07:00
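An illustration of the reporting style described above (mean plus the individual runs); the numbers are made up for the example, not real measurements:

```python
timings_ms = [142.1, 138.7, 140.3]          # illustrative per-run times, not real measurements
mean_ms = sum(timings_ms) / len(timings_ms)
runs = ", ".join(f"{t:.1f}" for t in timings_ms)
print(f"mean {mean_ms:.2f} ms per token (runs: {runs})")
```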
Roelof van Dijk
c604ef4beb symbolic.py: faster Node.sum, faster SumNode.div (#1014)
* refactor: replace isinstance with class check where possible

* refactor: faster partition

* fix: flake8

* feat: rework node.sum, correct list typing

* fix: typo

* feat: refactor sum

* fix: pylint

* refactor: simpler sum and factorize

* feat: clean up sumnode div, all cpu tests pass

* feat: simplify floordiv, cache factorization

* don't factor numnodes at all

* python 3.8 functools does not yet have @cache

* fix: restore assert

* refactor, fix failing tests

* fix: address review comments

* feat: rework, add specialization, remove cache

* fix: remove specialization

* feat: no tuple conversion, faster loop

---------

Co-authored-by: Roelof van Dijk <roelof.van.dijk@vitestro.com>
2023-06-26 09:47:17 -07:00
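One bullet above notes that Python 3.8's functools has no @cache and another mentions caching the factorization; a sketch of the usual 3.8-compatible fallback, with a hypothetical helper standing in for the real SumNode code:

```python
import functools
import math

# functools.cache landed in Python 3.9; on 3.8 an unbounded lru_cache is the equivalent.
cache = getattr(functools, "cache", functools.lru_cache(maxsize=None))

@cache
def coefficient_gcd(coeffs: tuple) -> int:
    # Memoized so repeated simplification of the same sum does not redo the gcd work.
    return functools.reduce(math.gcd, coeffs, 0)

print(coefficient_gcd((4, 6, 8)))  # 2, computed once and cached for later calls
```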
Casey Primozic
52b7105f87 Dedup params in Optimizer (#1047)
* Dedup params in optimizer

 * Passing the same tensor multiple times in the set of learnable params passed to optimizers can result in models completely failing to learn, but no errors are produced.  This dedups tensors to avoid the problem.

* Fix types

* Use new variable to satisfy linter

* Use `helpers.dedup` instead of `set()` to dedup params

* Add test for duped params in optimizers
2023-06-26 00:49:23 -07:00
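A self-contained sketch of the problem and fix described above (not the actual helpers.dedup or Optimizer code, just an illustration): when the same tensor object appears more than once in the parameter list, duplicates are collapsed before the optimizer stores them, so each tensor is updated once per step.

```python
def dedup(params):
    # Identity-based dedup: keep only the first occurrence of each object.
    seen, unique = set(), []
    for p in params:
        if id(p) not in seen:
            seen.add(id(p))
            unique.append(p)
    return unique

class SGD:
    def __init__(self, params, lr=0.01):
        self.params, self.lr = dedup(params), lr

w = object()                 # stand-in for a learnable tensor
opt = SGD([w, w])            # the same tensor passed twice
assert len(opt.params) == 1  # duplicates collapse, avoiding silently broken training
```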
Kunwar Raj Singh
5d3310ce56 MaskRCNN Inference (#884)
* MaskRCNN weights loading

* backbone maybe works

* backbone works, but resnet body atol 1e-3

* RPN Call, but veryy wrong output

* fixed topk

* RPN maybe works, not sure about nms

* Fix cursed modules

* add back editorconfig

* Full call, wrong output

* Full call works

* fix mask

* use NMS from retinanet

* Removing extra funcs

* refactor

* readable

* Add example to run model

* remove filter

* Fix split, batched inference is worse

* Fix image sizes

* Matching reference

* merge master

* add filter on top detections

* cuda backend fixed

* add model eval and spec

* convert images to rgb

* fix eval

* simplify examples code

* remove extra code

* meshgrid using tinygrad

* removing numpy

* roi align, floor, ceil

* remove numpy from level_mapper

* remove numpy from pooler

* Revert "Merge branch 'master' of github.com:kunwar31/tinygrad into mrcnn-inference"

This reverts commit 4b95a3cb49, reversing changes made to 98f2b1fa2e.

* roi align gather

* fix master merge

* revert to old floor, ceil as ints present in domain

* use log2 op

* fix indexes

* weird bug with ints and gpu

* weird bug with ints and gpu

* refactors, add env var for gather

* floor with contiguous, where

* refactor topk, sort

* remove staticmethod

* refactor stride

* remove log2 mlop

* realize -> contiguous

* refactor forward

* remove num_classes, stride_in_1x1 from state

* refactor forward

* refactoring

* flake8

* removing numpy in anchor gen, use numpy for gather, nonzero, optimize topk

* keep using tinygrad for smaller gathers

* fix empty tensors

* comms

* move from tensor.py

* resnet test passing

* add coco dataset back

* fix spaces

* add test for log2

* no need to create Tensors

* no need to create Tensors

---------

Co-authored-by: Kunwar Raj Singh <kunwar31@pop-os.localdomain>
2023-06-25 15:37:51 -07:00
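On the "meshgrid using tinygrad" / "removing numpy" bullets above: the usual trick is to build the grid from reshape plus broadcast, which maps directly onto movement ops in a tensor library. A rough sketch with NumPy as a stand-in (not the code from the PR):

```python
import numpy as np

def meshgrid(xs: np.ndarray, ys: np.ndarray):
    # Broadcast-based meshgrid: no Python loops, only reshape + broadcast.
    grid_x = np.broadcast_to(xs.reshape(1, -1), (ys.size, xs.size))
    grid_y = np.broadcast_to(ys.reshape(-1, 1), (ys.size, xs.size))
    return grid_x, grid_y

gx, gy = meshgrid(np.arange(3), np.arange(2))
assert gx.shape == gy.shape == (2, 3)
```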
George Hotz
0f281e7b18 touchups 2023-06-25 15:24:26 -07:00
George Hotz
c8fbdeb48e test speed llama (#1046)
* test speed llama

* oops, put it back

* uses the real device codegen

* just do it on the mac

* pp

* is faster?

* Revert "is faster?"

This reverts commit 42db542010.

* disable docker again for less load on CI
2023-06-25 15:22:56 -07:00
Jacky Lee
5d16cc283f Docker fix (#1039)
* Docker test

* Remove extra installs

* Don't run full test

* No need for testing dependencies
2023-06-25 10:38:58 -07:00
Francesco Castelli
6ff720103e Reduce tensor dot line count and fixed 1d tensor dot (#1045)
* fixed tensor.dot

* no 1d dot for image=1

* shorter lines

* add 3d dot tests
2023-06-25 10:32:45 -07:00
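As a reference point for the 1-D dot fix above, the expected semantics (shown with NumPy as a stand-in, not the tinygrad implementation): a 1-D × 1-D dot is the inner product, and it can be routed through 2-D matmul by adding singleton dimensions.

```python
import numpy as np

def dot1d(a: np.ndarray, b: np.ndarray) -> float:
    # Lift both vectors to 2-D, matmul, then squeeze back to a scalar.
    return (a.reshape(1, -1) @ b.reshape(-1, 1)).item()

x = np.array([0.0, 1.0, 2.0])
assert dot1d(x, x) == np.dot(x, x) == 5.0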
George Hotz
9c6e507518 move accel into extra 2023-06-23 16:38:15 -07:00
Yair Lifshitz
7f73d6a4da Fix input path in examples/compile_efficientnet.py, examples/efficientnet.py. (#1034) 2023-06-23 16:34:33 -07:00
兰天游
0222ee7bd2 feat: fix shell alias on readme (#1022)
* feat: fix shell alias on readme

* feat: edit the install command
2023-06-23 00:00:34 -07:00
cloud11665
264b1e5f48 cache gpuocelot build in cuda CI (#1032) 2023-06-22 17:42:12 -07:00
cloud11665
2407690d82 add cuda on cpu tests (#1020) 2023-06-22 14:15:50 -07:00
Eli Frigo
e09219df0f fixed division by zero for fast kernels (#1021)
* fixed division by zero for fast operations

* made et closer to 0
2023-06-22 14:02:53 -07:00
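The commit above guards a division by a measured elapsed time that can be ~0 for very fast kernels; a hypothetical sketch of that kind of guard (the names and the epsilon are assumptions, not the actual code):

```python
def gflops(op_estimate: int, et_seconds: float) -> float:
    # Clamp elapsed time away from zero so kernels faster than the timer
    # resolution do not blow up the throughput calculation.
    return op_estimate / max(et_seconds, 1e-9) / 1e9

print(gflops(2_000_000, 0.0))  # finite instead of ZeroDivisionError
```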
George Hotz
18892242b0 global -> group (#1007)
* global -> group

* allow None for local_size in custom function

* lil local

* comment on shape

* fix cuda

* smart local cast

* better local heuristic

* fix ptx, and work_dim cleanup

* fix metal

* fix ops test

* fix openpilot jit

* no more optlocal

* might fix metal tests

* try metal now

* see generated metal code

* test free removal. REVERT THIS

* mergable
2023-06-21 11:50:43 -07:00
Casey Primozic
aab9ee0fca Add RDNA3 assembler UOps.CAST partial support + other fixes/improvements (#1012)
* Add support for one case of `UOps.CAST` for RDNA3 assembler

 * Adds support for casting from `bool` -> `float32`.  Seems like a very common operation that is required in many places.
 * Fix bool register definition for vector operations
   * Use `vcc_lo` instead of `vcc` which seems to be required since it's configured to use wavefront_size=32
 * Add vector support for some places that were scalar only in register definition and comparison ops
 * Fix some issues in what seems to be defunct `external_test_image.py`
   * Some tests still don't pass for other reasons, but it at least runs now and one broken test is now fixed

* Refactor RDNA3 assembler register definition

 * Unify multi-register code between dtypes and combine with single-register allocation since they're all untyped registers at the end of the day
2023-06-20 11:34:10 -07:00
Diogo
57d3aa76a5 Windows & Ubuntu CLANG CI support (#1011)
* matrix strategy

* push env to GITHUB_ENV

* use printf instead of echo

* use temp helper function for cross os paths

* use path join

* switched to using temp helper function

* skip test on windows due to memory limit

* small fix

* removed semi

* touchups

* clean up

* separate tests

* test changes to test_utils on windows

* small refactor

* more cleanups

* undo helpers change

* only skip if in CI and WINDOWS
2023-06-19 09:33:24 -07:00
George Hotz
0d4c4f4e9e metal ci attempt (#1010)
* metal ci attempt

* skip failing ops tests

* skip in the ops test

* no dtype test
2023-06-19 09:23:55 -07:00
George Hotz
0ac84d5e94 exclude a few more onnx tests 2023-06-19 08:51:29 -07:00
George Hotz
0fd648dff4 exclude more dumb onnx tests 2023-06-19 08:51:29 -07:00
Pasan Perera
b6102ba4ac added CUDA and PTX to env_vars.md (#1009) 2023-06-19 08:47:44 -07:00
Sayantan Das
e829e0e718 Update CONTRIBUTING.md (#1008) 2023-06-18 22:09:03 -07:00
George Hotz
d84c600e5d contributing 2023-06-18 21:48:18 -07:00
Casey Primozic
651d6ea457 Minor improvements + cleanup to ops_gpu.py (#1006)
* Minor improvements + cleanup to `ops_gpu.py`

 * Add some previously undocumented environment variables from `ops_gpu.py` to `env_vars.md`
 * Update debug print for OpenCL to print the devices that will be used post-filtering with `CL_EXCLUDE`
 * Remove a couple unused or superfluous variables and assignments
 * Use `fromimport` shorthand to shave off a couple precious LOC
 * Couple small whitespace changes to clean things up

* Revert change to ordering of OpenCL devices

* Small refactor for OpenCL context creation
2023-06-18 21:26:40 -07:00
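For reference on the CL_EXCLUDE behaviour mentioned above, a rough sketch of environment-variable device filtering; the exact matching rule in ops_gpu.py may differ, so treat this as an assumption:

```python
import os

def visible_devices(device_names):
    # Drop any device whose name appears in the comma-separated CL_EXCLUDE list.
    excluded = {d for d in os.environ.get("CL_EXCLUDE", "").split(",") if d}
    kept = [d for d in device_names if d not in excluded]
    if os.environ.get("DEBUG"):
        print(f"using devices: {kept}")  # debug print shows the devices post-filtering
    return kept

print(visible_devices(["gfx1100", "gfx900"]))
```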
George Hotz
5428b5d774 good changes from tensor_cores branch (#1005)
* good changes from tensor_cores branch

* touchups

* real_strides fixup

* refactor merge_views
2023-06-18 20:28:06 -07:00
Yann Huynh
ccb51ff5b0 "Fixed argument passing in example yolov8" (#1004)
"Fixed argument passing in example yolov8"
2023-06-18 14:29:39 -07:00