Phaneesh Barwaria
831f206cd0
Revert "Add target triple selection for multiple cards" (#655)
This reverts commit acb905f0cc.
2022-12-16 15:01:45 -08:00
Gaurav Shukla
72648aa9f2
Revert "[SD][WEB] Deduce vulkan-target-triple in the presence of multiple cards"
This reverts commit 35e623deaf.
2022-12-17 04:28:18 +05:30
Gaurav Shukla
35e623deaf
[SD][WEB] Deduce vulkan-target-triple in the presence of multiple cards
1. Get the correct vulkan-target-triple for a specified device in the
presence of multiple cards.
2. Use tuned unet model for rdna3 cards.
Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-12-17 03:04:47 +05:30
Anush Elangovan
6263636738
Fix more lints
2022-12-16 13:26:15 -08:00
yzhang93
c73eed2e51
Add VAE winograd tuned model (#647)
2022-12-16 13:01:45 -08:00
Anush Elangovan
30fdc99f37
Set to enable llpc
Use an env var to enable llpc
2022-12-16 12:57:30 -08:00
PhaneeshB
acb905f0cc
Add target triple selection for multiple cards
2022-12-17 02:24:37 +05:30
Gaurav Shukla
bba06d0142
[SD][WEB] Avoid passing args to utils APIs
Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-12-17 01:41:33 +05:30
Ean Garvey
a14a47af12
Move most xfails to entries in tank/all_models.csv and temporarily remove multiprocessing and TF gpu support. (#646)
- Adds date variable back to nightly.yml so shark_tank uploads are dated again.
- Added specification for nightly pytests to not run tests on metal (vulkan is sufficient).
- Added some paths/filetypes to be ignored when triggering workflow runs (no test-models on changes to .md files or anything in the shark/examples/ directory or its subdirectories).
- pytest only picks up tank/test_models.py, so there is no need to specify which file to run when running pytest from the SHARK base directory.
- Cleaned up xfails so that they can be added to models as csv entries. Columns 7-9 in all_models.csv trigger xfails with cpu, cuda, and vulkan, respectively, and column 10 can be populated with a reason for the xfail.
- Fixed a few defaults for shark_args and pytest args (defined in conftest.py).
- Fixes --update_tank option in shark_downloader.
- Removes some multiprocessing in pytest / TF+CUDA support because it breaks pytest and causes false passes, leaving regressions at large.
- Adds xfails for albert torch and removes it from the gen_sharktank list (tank/torch_model_list.csv).
- Cleans up old xfails for cpu, cuda, vulkan.
2022-12-16 12:56:32 +05:30
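The CSV-driven xfail scheme that commit describes could be parsed roughly like this. This is a hedged sketch: the column positions (7-9 for cpu/cuda/vulkan flags, 10 for the reason) follow the commit message, but the helper name `load_xfails` and the assumption that the model name sits in column 1 are illustrative, not the repository's actual code.

```python
import csv

def load_xfails(csv_path):
    """Hypothetical reader for tank/all_models.csv.

    Columns 7-9 (1-indexed) hold per-backend xfail flags for cpu, cuda,
    and vulkan; column 10 holds the reason. Column 1 is assumed to be
    the model name.
    """
    xfails = {}
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            if len(row) < 10:
                continue  # row too short to carry xfail columns
            model_name = row[0]
            flags = {
                "cpu": row[6].strip().lower() == "true",
                "cuda": row[7].strip().lower() == "true",
                "vulkan": row[8].strip().lower() == "true",
            }
            reason = row[9].strip()
            xfails[model_name] = (flags, reason)
    return xfails
```

A pytest hook could then look up the current model/device pair in this dict and apply `pytest.xfail(reason)` when the flag is set.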
Phaneesh Barwaria
73457336bc
add flag for toggling vulkan validation layers (#624)
* add vulkan_validation_layers flag
* categorize SD flags
* stringify true and false for flag
2022-12-15 20:40:59 -06:00
nirvedhmeshram
2928179331
Add more NVIDIA targets (#640)
2022-12-15 11:24:38 -06:00
Stanley Winata
24a16a4cfe
[Stable Diffusion] Disable binding fusion to work with moltenVK on mac. (#639)
Co-authored-by: Stanley <stanley@MacStudio.lan>
2022-12-16 00:22:49 +07:00
yzhang93
6508e3fcc9
Update tuned model SD v2.1base (#634)
2022-12-14 16:02:35 -05:00
Prashant Kumar
898bc9e009
Add the stable diffusion v2.1 version.
2022-12-14 20:19:41 +05:30
Gaurav Shukla
e67ea31ee2
[SHARK][SD] Add --local_tank_cache flag in the stable diffusion
This flag can be used to set the local shark_tank cache directory.
Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-12-14 20:00:25 +05:30
Gaurav Shukla
986c126a5c
[SHARK][SD] Add support for negative prompts
Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-12-14 18:20:09 +05:30
powderluv
5ddce749b8
lint fix
2022-12-13 22:02:32 -08:00
powderluv
d946cffabc
Revert "Move most xfails to entries in tank/all_models.csv and temporarily remove multiprocessing and TF gpu support. (#602)" (#622)
This reverts commit fe618811ee.
2022-12-13 21:49:46 -08:00
Ean Garvey
fe618811ee
Move most xfails to entries in tank/all_models.csv and temporarily remove multiprocessing and TF gpu support. (#602)
* Move most xfails to entries in tank/all_models.csv
* enable usage of pytest without specifying tank/test_models.py
* add dict_configs.py to gitignore.
* Pin versions for runtimes and torch-mlir for setup.
2022-12-13 18:11:17 -08:00
powderluv
09c45bfb80
clean up cache printf
2022-12-13 14:11:14 -08:00
Boian Petkantchin
e9e9ccd379
Add stress test
2022-12-13 13:21:51 -08:00
Boian Petkantchin
a9b27c78a3
Return dynamic model if specified when downloading from the tank
2022-12-13 13:21:51 -08:00
Boian Petkantchin
bc17c29b2e
In get_iree_runtime_config get the specific device instead of the default
2022-12-13 13:21:51 -08:00
Boian Petkantchin
aaf60bdee6
Simplify iree_device_map
2022-12-13 13:21:51 -08:00
Gaurav Shukla
d913453e57
[WEB] Update models to 8dec and also default values (#620)
1. Update the models to 8 Dec.
2. Precision defaults to `fp16` in the CLI.
3. Version defaults to `v2.1base` in the CLI as well as the web.
4. The default scheduler is now `EulerDiscrete`.
Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
2022-12-13 13:08:33 -08:00
powderluv
08e373aef4
Update stable_diffusion_amd.md
2022-12-13 11:47:29 -08:00
Prashant Kumar
4cb50a3d06
Update the models to 8th Dec version.
2022-12-14 00:01:46 +05:30
Prashant Kumar
8ae76d18b5
Add euler scheduler. Also, make it default for sd2.1.
2022-12-13 00:03:45 +05:30
Prashant Kumar
e5be1790e5
Enable the v2.1 base version with --version="v2.1base". (#611)
2022-12-12 07:02:01 -08:00
Ean Garvey
616ee9b824
Don't include baseline benchmarks if setup without IMPORTER=1. (#607)
2022-12-10 14:58:29 -06:00
Stanley Winata
57c94f8f80
[vulkan] Add "radeon" check to the default AMD triple (#604)
2022-12-10 09:05:48 -08:00
powderluv
2a59c4f670
Update stable_diffusion_amd.md
2022-12-09 16:54:47 -08:00
Ean Garvey
0225292a44
Remove print statements from compile utils (#593)
2022-12-08 13:40:47 -08:00
Ean Garvey
589a7ed02f
Print a message when a model is downloaded via shark_downloader. (#595)
2022-12-08 15:27:58 -06:00
Quinn Dawkins
b3a42cd0b1
Don't do nchw-to-nhwc transpose for stable diffusion models (#592)
2022-12-08 12:19:23 -05:00
Ean Garvey
1699db79b5
Disable SHARK-Runtime flags if USE_IREE=1 specified during setup. (#588)
* Disable SHARK-Runtime flags if USE_IREE=1 specified during setup.
* Update setup_venv.sh
* Autodetect cpu count for runtime flags.
2022-12-08 02:31:31 -06:00
Ean Garvey
40eea21863
Enable conv nchw-to-nhwc flag by default for most models + minor fixes (#584)
2022-12-07 16:24:02 -08:00
Ean Garvey
d2475ec169
Add mnasnet to torch models and minor fixes. (#577)
* Minor fixes to benchmark runner
* Add Mnasnet to tank.
2022-12-07 22:30:58 +05:30
Stanley Winata
6049f86bc4
[Vulkan][Utils] Automatic platform/OS detection (#569)
To enable AMD GPUs on macOS, we need this detection to let the compiler know that MoltenVK is required to use the GPU.
2022-12-07 12:05:00 +07:00
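The detection that commit describes could be sketched roughly as below. This is a hedged illustration, not the repository's actual code: the helper name `needs_moltenvk` and the substring match on the device name are assumptions; only the underlying fact (Vulkan on macOS runs over Metal via MoltenVK) comes from the commit message and general MoltenVK behavior.

```python
import platform

def needs_moltenvk(os_name: str, device_name: str) -> bool:
    # MoltenVK layers Vulkan over Metal on macOS ("Darwin"), so the
    # compiler must be told to target it for AMD GPUs there; elsewhere
    # native Vulkan is assumed. (Illustrative helper, not the repo's API.)
    return os_name == "Darwin" and "amd" in device_name.lower()

# The current OS can be fed in from the standard library:
# needs_moltenvk(platform.system(), "AMD Radeon Pro W6800")
```

Passing the OS name as a parameter rather than reading it inside the function keeps the check easy to test on any host.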
Phaneesh Barwaria
1096936a15
Enable f32 path for SD (#567)
2022-12-06 19:29:12 +05:30
Stanley Winata
c4444ff695
[vulkan][utils] Add rdna3 detection (#565)
2022-12-05 23:56:06 -08:00
powderluv
2b8d784660
update latest sd build
2022-12-05 22:16:13 -08:00
Daniel Garvey
18f447d8d8
fix hash comparison (#563)
Co-authored-by: dan <dan@nod-labs.com>
2022-12-05 21:43:05 -08:00
Daniel Garvey
d7e1078d68
remove nodcloud from client (#562)
Co-authored-by: dan <dan@nod-labs.com>
2022-12-05 23:13:19 -06:00
Daniel Garvey
6be592653f
remove gsutil_flags and fix download (#559)
2022-12-05 20:29:00 -08:00
Daniel Garvey
8859853b41
Revert "Revert "find gsutil on linux (#557)" (#560)" (#561)
This reverts commit 3c46021102.
2022-12-05 20:27:43 -08:00
Daniel Garvey
3c46021102
Revert "find gsutil on linux (#557)" (#560)
This reverts commit bba8646669.
2022-12-05 21:53:47 -06:00
Daniel Garvey
bba8646669
find gsutil on linux (#557)
* find gsutil on linux
* cleaned up downloader and ditched gsutil
Co-authored-by: dan <dan@nod-labs.com>
2022-12-05 19:03:48 -08:00
Daniel Garvey
b0dc19a910
revert parallel downloads to 1 (#555)
Co-authored-by: dan <dan@nod-labs.com>
2022-12-05 15:42:42 -08:00
Daniel Garvey
df79ebd0f2
replace gsutil with variable path for pyinstaller (#541)
Co-authored-by: dan <dan@nod-labs.com>
2022-12-05 15:08:57 -08:00