OnnxRunner file as input (#10789)

* accept a file path as input and move parsing into OnnxRunner.__init__
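
  The shape of this change can be sketched as a constructor that accepts either a path or an already-parsed model. This is a minimal illustration of the pattern only; `parse_model` and the `Runner` class here are stand-ins, not tinygrad's actual internals.

  ```python
  from pathlib import Path

  def parse_model(path):
      # stand-in for onnx_load / protobuf parsing of an .onnx file
      return {"path": str(path), "graph": []}

  class Runner:
      def __init__(self, model):
          # accept a path (str or Path) and parse it here, so callers
          # no longer need to call the loader themselves
          if isinstance(model, (str, Path)):
              model = parse_model(model)
          self.model = model
  ```

  Callers then write `Runner("net.onnx")` instead of `Runner(load("net.onnx"))`, while passing a pre-parsed model still works.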

* modelproto_to_onnxrunner -> modelproto_to_runner

* whoops, fix import

* oh flakiness again, is it because it's getting gc-ed?

* small changes

* CI flaky so just move compile4 fix in

* copy typing of onnx_load

* actually can just import onnx_load instead of onnx.load

* fix external_benchmark_openpilot

* fix onnx_runner test to use onnx_helper

* rerun CI

* try run_modelproto

* spam CI a few times

* revert run_modelproto since that's flaky also

* no external onnx_load usage except onnx.py

* cursor tab complete is evil, it snuck a stray `sorted` in. But does order change the result? Why?

* model_benchmark 193s -> 80s, add OnnxRunner.to()...
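
  A plausible reading of the speedup: an `OnnxRunner.to(device)` method moves the model's tensors to the target device once up front, instead of paying a transfer on every run. The sketch below shows only that idea; `FakeTensor` and these names are illustrative stand-ins, not tinygrad's API.

  ```python
  class FakeTensor:
      # stand-in for a device-aware tensor
      def __init__(self, data, device="CPU"):
          self.data, self.device = data, device
      def to(self, device):
          return FakeTensor(self.data, device)

  class Runner:
      def __init__(self, tensors, device=None):
          # device can be None: tensors then keep their default placement
          self.tensors = tensors
          if device is not None:
              self.to(device)
      def to(self, device):
          # relocate every cached tensor once, then reuse across runs
          self.tensors = {k: t.to(device) for k, t in self.tensors.items()}
          return self
  ```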

* minimize diff and clean up

* device can be None, weird but eh

---------

Co-authored-by: chenyu <chenyu@fastmail.com>
Author: geohotstan
Date: 2025-07-13 02:27:46 +08:00
Committed by: GitHub
Parent: 110cff3f2e
Commit: 5ce278b245

16 changed files with 71 additions and 85 deletions


@@ -1,6 +1,6 @@
 from tinygrad import Tensor
 from tinygrad.tensor import _to_np_dtype
-from tinygrad.frontend.onnx import OnnxRunner, onnx_load
+from tinygrad.frontend.onnx import OnnxRunner
 from extra.onnx import OnnxValue
 import numpy as np
 import onnxruntime as ort
@@ -46,7 +46,7 @@ def get_example_inputs(graph_inputs:dict[str, OnnxValue], config={}):
   return ret
 def validate(onnx_file, inputs, rtol=1e-5, atol=1e-5):
-  run_onnx = OnnxRunner(onnx_load(onnx_file))
+  run_onnx = OnnxRunner(onnx_file)
   ort_options = ort.SessionOptions()
   ort_options.log_severity_level = 3