Running SharkInference on CPUs, GPUs, and macOS.
Run the seq_classification.py script. The supported models are Hugging Face sequence-classification models:
./seq_classification.py --hf_model_name="hf_model" --device="cpu" # Use gpu | vulkan
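For example, assuming a Hugging Face checkpoint such as distilbert-base-uncased (the model name here is only illustrative; any sequence-classification checkpoint should work):

./seq_classification.py --hf_model_name="distilbert-base-uncased" --device="vulkan"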
Once the model is compiled for the chosen device, you can pass in text and get back the classification logits.
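As a point of reference for what "pass in text and get the logits" means, the sketch below runs the same tokenization and classification step through the stock Hugging Face PyTorch path; a SHARK-compiled module is expected to return the same logits for the same tokenized inputs. The model name and example sentence are placeholders, not values taken from seq_classification.py.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder checkpoint; substitute the --hf_model_name you compiled with SHARK.
hf_model_name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(hf_model_name)
model = AutoModelForSequenceClassification.from_pretrained(hf_model_name)
model.eval()

# Tokenize a sentence and run the sequence-classification head.
inputs = tokenizer("SHARK makes sequence classification fast.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, num_labels)

predicted_id = int(logits.argmax(dim=-1).item())
print(logits, model.config.id2label[predicted_id])
```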