Each model should be a clean single file.
They are imported from the top level `models` directory.

It should be capable of loading weights from the reference implementation.

We will focus on these 5 models:

# Resnet50-v1.5 (classic) -- 8.2 GOPS/input
# Retinanet
# 3D UNET (upconvs)
# RNNT
# BERT-large (transformer)

They are used in both the training and inference benchmarks:
https://mlcommons.org/en/training-normal-21/
https://mlcommons.org/en/inference-edge-30/
And we will submit to both.

NOTE: we submit in the Edge category since we don't have ECC RAM.
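
As an illustration of the "clean single file" convention, below is a minimal sketch (not one of the five models above) of a model class with a weight-loading helper. It assumes the current tinygrad `nn` layers and the `nn.state` helpers (`torch_load`, `load_state_dict`); the layer shapes, class name, and checkpoint path are hypothetical.

```python
# A minimal sketch of the "clean single file" model convention, assuming the
# current tinygrad API. Layer sizes and checkpoint path are hypothetical.
from tinygrad import Tensor, nn
from tinygrad.nn.state import torch_load, load_state_dict

class TinyClassifier:
  def __init__(self, num_classes: int = 1000):
    # a tiny stand-in backbone: one conv block plus a linear head
    self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
    self.bn1 = nn.BatchNorm2d(64)
    self.fc = nn.Linear(64, num_classes)

  def __call__(self, x: Tensor) -> Tensor:
    x = self.bn1(self.conv1(x)).relu()
    x = x.mean(axis=(2, 3))          # global average pool over H, W
    return self.fc(x)

  def load_from_pretrained(self, ckpt_path: str):
    # torch_load reads a PyTorch checkpoint from the reference implementation;
    # load_state_dict copies its tensors onto this model's attributes by name
    # (a real model file would remap reference names to tinygrad attributes first).
    load_state_dict(self, torch_load(ckpt_path), strict=False)

if __name__ == "__main__":
  model = TinyClassifier()
  # model.load_from_pretrained("reference_weights.pth")  # hypothetical checkpoint
  out = model(Tensor.rand(1, 3, 224, 224))
  print(out.shape)  # (1, 1000)
```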