Bruce Wayne
0955ca4ee0
kernel count not relevant if speed is good
2025-10-17 14:35:40 -07:00
Bruce Wayne
075efea4ca
use later models
2025-10-17 14:35:40 -07:00
Bruce Wayne
d3c33917aa
absurd tolerance
2025-10-17 14:35:40 -07:00
Bruce Wayne
577537845d
allow test disable
2025-10-17 14:35:40 -07:00
Bruce Wayne
b9450f811d
self-test FP16 too
2025-10-17 14:35:40 -07:00
Bruce Wayne
4df51166e3
onnx file is still fp16
2025-10-17 14:35:40 -07:00
Bruce Wayne
23fb4633e8
Simpler compile3
2025-10-17 14:35:40 -07:00
nimlgen
658c566e22
vars in gated_read_image_count (#12486)
* vars in gated_read_image_count
* nc
2025-10-09 14:54:15 +08:00
chenyu
be05028419
move ASSERT_MIN_STEP_TIME to compile3 (#12535)
threshold is current time +20%
2025-10-08 22:16:59 -04:00
George Hotz
0f25b4b289
move frontend dir to nn [pr] (#12470)
2025-10-07 10:42:22 +08:00
chenyu
85ddd72038
simpler grouptop in hcopt (#11219)
* simpler grouptop in hcopt
keep only the perf-relevant conditions; the rest is handled by try/except
* update openpilot read image count
2025-07-13 16:06:09 -04:00
geohotstan
5ce278b245
OnnxRunner file as input (#10789)
* file path as input and have parse be in OnnxRunner.__init__
* modelproto_to_onnxrunner -> modelproto_to_runner
* whoops, fix import
* oh flakiness again, is it because it's getting gc-ed?
* small changes
* CI flaky so just move compile4 fix in
* copy typing of onnx_load
* actually can just import onnx_load instead of onnx.load
* fix external_benchmark_openpilot
* fix onnx_runner test to use onnx_helper
* rerun CI
* try run_modelproto
* spam CI a few times
* revert run_modelproto since that's flaky also
* no external onnx_load usage except onnx.py
* cursor tab complete is evil. Snuck a darn sorted in. But does order change result? Why?
* model_benchmark 193s -> 80s, add OnnxRunner.to()...
* minimize diff and clean up
* device can be None, weird but eh
---------
Co-authored-by: chenyu <chenyu@fastmail.com>
2025-07-12 14:27:46 -04:00
b1tg
24d328e313
onnx parser (#10435)
* onnx parser
* fix compile, lint
* onnx.load -> onnx_load
* compatible with ModelProto
* fix test external_test_onnx_ops.py
* fix tests
* fix signed int
* reduce to 261 lines
* fix TypeProto.Optional
* debug for _parse_message, add TypeProto.Sequence, cleanup
* onnx_load from Tensor
* remove BufferedReader
* 174 lines and reduce tensor copy
* cleanup
* use onnx_load in external_model_benchmark.py
* fix qcom test
* [onnx] parser support external data
---------
Co-authored-by: b1tg <b1tg@users.noreply.github.com>
Co-authored-by: chenyu <chenyu@fastmail.com>
2025-06-09 12:44:28 -04:00
George Hotz
74d98eafb8
add onnx frontend stub [pr] (#9558)
2025-03-24 12:24:34 +08:00
ZwX1616
c977781b3c
no numpy change if no NPY (#9281)
* skip np change check if no NPY
* use any
2025-02-28 09:32:35 +08:00
George Hotz
8b16c65bca
add compile3 benchmark [pr] (#8929)
2025-02-06 22:49:31 +08:00
geohotstan
dd82b4c913
make onnx runner a class (#8647)
* this
* clean up
* more clean ups and improve debug msg
* more correct training toggler
* remove manual training toggling
* change some variable names
* actually just add the training toggle for LIMIT envvar too
* more refinement
* __call__ and OnnxRunner
* fix half pylint, other half is importing from onnx while this file is onnx.py, figure out later
* ahhhh found another mistake
* remove limit from __call__
---------
Co-authored-by: chenyu <chenyu@fastmail.com>
2025-01-20 10:11:05 -08:00
Harald Schäfer
7059459648
Openpilot compile: fix for openpilot use (#8338)
* compile3 changes
* merge conflict
* merge conflict
* give dm npy for now
* Revert "give dm npy for now"
This reverts commit bfd980da7d2c2bab5b073127442c361922032ba1.
* updates
* Always float32 floats
* Update compile3.py
* Update compile3.py
---------
Co-authored-by: ZwX1616 <zwx1616@gmail.com>
2024-12-19 19:43:15 -05:00
chenyu
26e049ab40
add ALLOWED_READ_IMAGE=2131 to openpilot (#8166)
added as an exact-number check for now, since it's not clear whether more or fewer reads than allowed is any better
2024-12-11 12:14:17 -08:00
George Hotz
f83d715f41
move checks into compile3, delete compile2 [pr] (#8127)
* move checks into compile3 [pr]
* test_vs_onnx
* test v torch works
* float16 won't compile on compile3
* actually delete compile2
2024-12-09 14:21:42 -08:00
George Hotz
00ac0db9d4
np tensors have the memory from numpy in compile3 [pr] (#8098)
2024-12-07 14:01:51 +08:00
George Hotz
22feb3a2f1
move copy into the JIT for openpilot compile3 (#7937)
* move copy into the JIT, test fails
* ahh, prune was the issue
2024-12-07 13:26:26 +08:00
George Hotz
fbb4099b3c
add test for compile3 [pr] (#7783)
Co-authored-by: qazal <77887910+Qazalin@users.noreply.github.com>
2024-11-19 19:26:51 +08:00
Harald Schäfer
e7cbc29f48
openpilot benchmark: add cast from numpy to benchmark (#7593)
* openpilot benchmark: add cast from numpy to benchmark
* whitespace
* comment
2024-11-08 19:31:00 +08:00
George Hotz
72a9ac27e9
support image dtype in cloud [pr] (#7482)
* support image dtype in cloud [pr]
* remove outdated osx hack
* unused imports
2024-11-02 23:54:27 +08:00
George Hotz
5c9f76e274
hotfix: openpilot compile3 compare to i==1
2024-10-12 09:44:24 +08:00
George Hotz
f45d178a55
hotfix: support JIT_BATCH_SIZE=0, make that the default
2024-09-25 10:36:04 +08:00
George Hotz
b9e6d42a1f
Revert "gated native math in OpenCL (#6683)" (#6691)
This reverts commit 2fe3eeed17.
2024-09-24 08:48:10 +08:00
George Hotz
2fe3eeed17
gated native math in OpenCL (#6683)
* gated native math
* Update cstyle.py
2024-09-23 19:22:13 +08:00
George Hotz
d02bb270b7
add copyin copyout for image on GPU [run_process_replay] (#6580)
* add copyin copyout for image on GPU [run_process_replay]
* add timing
* enqueue vs total run
* it's failing but that's fine
2024-09-18 16:06:20 +08:00
George Hotz
d4b662c318
new openpilot compile (#6573)
* new openpilot compile
* note, copyout doesn't work for images
2024-09-18 14:22:50 +08:00