powderluv
3a24cff901
change binary names
20230707.804
2023-07-06 23:59:14 -07:00
powderluv
1f72907886
Fix the pyinstaller for chatbots (#1631)
2023-07-06 23:30:01 -07:00
Daniel Garvey
06c8aabd01
remove local-sync from webui (#1629)
20230706.802
20230706.801
2023-07-06 13:58:59 -07:00
Phaneesh Barwaria
55a12cc0c4
cpu name in device (#1628)
* show cpu name in devices
* change device order for chatbot
2023-07-06 12:00:09 -07:00
Ean Garvey
7dcbbde523
Xfail models for data tiling flag changes (#1624)
2023-07-06 06:57:17 -07:00
Abhishek Varma
1b62dc4529
[Vicuna] Revert the formatting for Brevitas op (#1626)
-- This commit reverts the formatting for Brevitas op.
-- It also excludes vicuna.py script from `black` formatter.
Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2023-07-06 06:56:17 -07:00
Daniel Garvey
c5a47887f4
Revert revert negative prompt change (#1625)
* revert default flag changes
* revert revert negative prompt change
* revert revert negative prompt change
2023-07-05 22:09:06 -07:00
Daniel Garvey
c72d0eaf87
revert default flag changes (#1622)
20230705.800
2023-07-05 15:43:26 -05:00
powderluv
c41f58042a
Update compile_utils.py (#1617)
* Update compile_utils.py
* Update compile_utils.py
* Update compile_utils.py
2023-07-05 10:06:48 -07:00
xzuyn
043e5a5c7a
fix a mistake I made, and more formatting changes, and add ++/Karras (#1619)
* fixed missing line break in `stablelm_ui.py` `start_message`
- also more formatting changes
* fix variable spelling mistake
* revert some formatting cause black wants it different
* one less line, still less than 79
* add ++, karras, and karras++ types of dpmsolver.
* black line length 79
---------
Co-authored-by: powderluv <powderluv@users.noreply.github.com>
2023-07-05 09:00:16 -07:00
Abhishek Varma
a1b1ce935c
int8 e2e for WebUI (#1620)
2023-07-05 07:08:36 -07:00
jinchen62
bc6fee1a0c
Add int4/int8 vicuna (#1598)
2023-07-05 07:01:51 -07:00
xzuyn
91ab594744
minor fix, some changes, some additions, and cleaning up (#1618)
* - fix overflowing text (a janky fix)
- add DEISMultistep scheduler as an option
- set default scheduler to DEISMultistep
- set default CFG to 3.5
- set default steps to 16
- add `xzuyn/PhotoMerge` as a model option
- add 3 new example prompts (which work nicely with PhotoMerge)
- formatting
* Set DEISMultistep in the cpu_only list instead
* formatting
* formatting
* modify prompts
* resize window to 81% & 85% monitor resolution instead of (WxH / 1.0625).
* increase steps to 32 after some testing. somewhere in between 16 and 32 is best compromise on speed/quality for DEIS, so 32 steps to play it safe.
* black line length 79
* revert settings DEIS as default scheduler.
* add more schedulers & revert accidental DDIM change
- add DPMSolverSingleStep, KDPM2AncestralDiscrete, & HeunDiscrete.
- did not add `DPMSolverMultistepInverse` or `DDIMInverse` as they only output latent noise, there are a few I did not try adding yet.
- accidentally set `upscaler_ui.py` to EulerDiscrete by default last commit while reverting DEIS changes.
- add `xzuyn/PhotoMerge-inpainting` as an in or out painting model.
* black line length 79
* add help section stuff and some other changes
- list the rest of the schedulers in argument help section.
- replace mutable default arguments.
- increased default window height to 91% to remove any scrolling for the main txt2img page (tested on a 1920x1080 monitor). width is the same as its just enough to have the image output on the side instead of the bottom.
- cleanup
20230704.799
2023-07-04 18:51:23 -07:00
Eliasj42
4015793f84
changed method of compiling vicuna to remove first and second vicuna (#1611)
Co-authored-by: Elias Joseph <elias@nod-labs.com>
Co-authored-by: powderluv <powderluv@users.noreply.github.com>
20230703.798
2023-07-03 12:12:43 -07:00
Ean Garvey
d63ce76dd8
Use sortable image filenames for SD outputs. (#1528)
2023-07-03 10:30:47 -07:00
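The commit above targets the classic pitfall that bare integer indices do not sort lexicographically. The log does not show SHARK's actual naming scheme; the snippet below is a minimal stdlib sketch of the general technique, and the `sortable_filename` helper and its fields are illustrative, not the repository's real code:

```python
def sortable_filename(stamp: str, index: int, slug: str, ext: str = "png") -> str:
    # Zero-pad the per-batch index so plain string sorting matches
    # creation order; "img_10" would otherwise sort before "img_9".
    return f"{stamp}_{index:04d}_{slug}.{ext}"

# Filenames created out of order still sort back into creation order.
names = [sortable_filename("20230703_103047", i, "sd_output") for i in (10, 2, 9)]
```

Combining a fixed-width timestamp prefix with a zero-padded counter keeps outputs ordered in any file browser without needing metadata.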
Prashant Kumar
1c32915570
Add the shark compile downstream due to https://github.com/pytorch/pytorch/pull/104185#issuecomment-1615110613 (#1615)
20230702.797
20230701.796
2023-07-01 08:30:58 -07:00
Ean Garvey
6d286c0609
Enable tuning for rectangle sizes on rdna2. (#1608)
2023-06-30 22:28:24 -07:00
Stefan Kapusniak
7392b22731
UI/Web Reduce animation of default --progress_bars (#1613)
20230630.795
2023-06-30 21:12:10 -07:00
jinchen62
534de05791
Update precision check for vicuna (#1610)
20230629.794
2023-06-29 16:16:33 -05:00
Daniel Garvey
5779e8c039
int4/int8 vicuna download support (#1609)
* set task_topology_max_group to cpu_count by default. Can be overridden with a flag of the same name.
* add download for int4/int8 mlir
2023-06-29 13:35:51 -07:00
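The `task_topology_max_group` flag name above comes straight from the commit message, but the wiring below is only a guess at the usual pattern for a "defaults to CPU count, overridable on the command line" option, not SHARK's actual CLI code:

```python
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument(
    "--task_topology_max_group",
    type=int,
    default=os.cpu_count(),  # default to the machine's CPU count
    help="Maximum task-topology group count (defaults to CPU count).",
)

args = parser.parse_args([])                                      # no override
override = parser.parse_args(["--task_topology_max_group", "4"])  # explicit value
```

Using `os.cpu_count()` as the argparse default means the help text and the runtime behavior stay in sync without any post-parsing fixup.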
Abhishek Varma
d496053590
[SHARK] Add a compile API to use for quick testing of inference (#1606)
20230628.793
2023-06-28 08:40:28 -07:00
gpetters94
6274a813c9
Add unet512 support for the other StableDiffusion pipelines (#1602)
20230627.792
2023-06-27 12:28:57 -07:00
Gaurav Shukla
1d6a1f9f8a
[vicuna] Add tokens streaming (step=3) (#1600)
Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
20230627.791
2023-06-27 08:59:27 -07:00
Daniel Garvey
75672c0e28
set task_topology_max_group to cpu_count (#1594)
by default. Can be overridden with a flag of the same name.
20230626.790
2023-06-26 14:54:06 -07:00
Prashant Kumar
74a7202173
Make the tensors contiguous.
2023-06-26 17:29:54 +05:30
Prashant Kumar
27a08735db
Add the shark backend for torch.compile API. (#1596)
2023-06-26 03:53:32 -07:00
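A `torch.compile` backend is just a callable that receives the captured FX graph and returns something executable; the SHARK backend added above presumably lowers that graph to its own compiler. As a hedged illustration of the extension point only (not the SHARK backend itself), a pass-through backend looks like this:

```python
import torch

def passthrough_backend(gm: torch.fx.GraphModule, example_inputs):
    # A backend gets the traced graph plus sample inputs and must return a
    # callable. A real backend (like SHARK's) would compile the graph here;
    # this sketch just executes it unchanged.
    return gm.forward

@torch.compile(backend=passthrough_backend)
def f(x):
    return x * 2 + 1
```

Registering a custom backend this way lets the same model code swap compilers without touching the model definition.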
Stefan Kapusniak
eaa49cce17
UI/App - Allow text selection (#1593)
* When run in app mode on Windows, allows selection of text from non-input controls, which is the same behaviour as web mode.
2023-06-26 02:16:53 -07:00
powderluv
10657d6fb1
Disable upx
20230625.789
20230625.788
2023-06-25 07:28:52 -07:00
Stefan Kapusniak
e3ab844cd1
Fix output gallery for csv format inc. VAE & LoRA (#1591)
20230624.787
2023-06-24 06:20:53 -07:00
powderluv
5ce6001b41
Update stablelm_ui.py to default to fp16
20230623.786
2023-06-23 22:55:47 -07:00
powderluv
501d0ca52e
Add sentencepiece to webui for pyinstaller
2023-06-23 22:52:06 -07:00
powderluv
b444528715
Pin torch-mlir for windows too
20230623.785
20230623.784
2023-06-23 19:19:28 -07:00
Ean Garvey
6e6c90f62b
Pin torch-mlir and use local-task in OPT. (#1592)
2023-06-23 19:17:05 -07:00
AyaanShah2204
8cdb38496e
Final REST API Fixes (#1590)
* fixed outpaint api and added tests
* fixed text2img api
* more elegant generator to subscriptable conversion
* final fixes
2023-06-23 16:46:47 -07:00
powderluv
726d73d6ba
Revert "[vicuna] Add streaming of tokens (#1587)" (#1588)
This reverts commit 4d55e51d46.
2023-06-23 10:29:00 -07:00
Gaurav Shukla
4d55e51d46
[vicuna] Add streaming of tokens (#1587)
Signed-off-by: Gaurav Shukla <gaurav@nod-labs.com>
2023-06-23 08:20:46 -07:00
Prashant Kumar
6ef78ee7ba
Add cpu compile time flags. (#1585)
2023-06-23 07:23:26 -07:00
jinchen62
4002da7161
Add int4/int8 options to chatbot webui (#1586)
2023-06-23 07:18:34 -07:00
powderluv
ecb5e8e5d8
Update txt2img_ui.py
2023-06-23 06:42:12 -07:00
PhaneeshB
28e0919321
Add AMD cpu device
2023-06-23 18:47:04 +05:30
Daniel Garvey
28f4d44a6b
downloader was double downloading (#1580)
20230622.782
2023-06-22 18:30:27 -07:00
AyaanShah2204
97f7e79391
[Blender Integration] Fixed Inpainting REST API (#1577)
* fixed inpaint api
* added inpainting test
* fixed linter errors
---------
Co-authored-by: powderluv <powderluv@users.noreply.github.com>
20230622.781
2023-06-22 16:08:26 -07:00
Nelson Sharpe
44a8f2f8db
Include VAE & LoRA data into PNG metadata (#1573)
* include custom lora and vae data in png metadata
* include pycharm settings
* lint with black
2023-06-22 16:05:54 -07:00
Eliasj42
8822b9acd7
added ability to use config file to shard vicuna (#1565)
Co-authored-by: Elias Joseph <elias@nod-labs.com>
2023-06-22 17:40:35 -05:00
Daniel Garvey
0ca3b9fce3
fix some mmap and vicuna bugs (#1576)
2023-06-22 17:39:55 -05:00
Nithin Meganathan
045f2bb147
Add dispatch-level config file generator for manual annotation (#1566)
2023-06-22 15:11:41 -07:00
Prashant Kumar
a811b867b9
Add shark_eager mode.
-- Eager mode with step by step op compilation and execution.
2023-06-22 22:59:14 +05:30
Abhishek Varma
cdd505e2dd
[SharkInference-SharkRuntime] Adds capability to mmap vmfbs
-- This commit is based on [VmModule.mmap() API](https://github.com/openxla/iree/pull/14124).
-- It thereby adds capability to mmap vmfbs in SHARK.
Signed-off-by: Abhishek Varma <abhishek@nod-labs.com>
2023-06-22 20:43:40 +05:30
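The commit above goes through IREE's VmModule.mmap() API (linked in the commit body). The snippet below is not SHARK's code; it is a stdlib-only sketch of why memory-mapping a compiled artifact helps: the OS pages the file in lazily and can share the mapping, instead of copying the whole `.vmfb` into a heap buffer up front. The file and its contents here are stand-ins:

```python
import mmap
import os
import tempfile

# Stand-in for a compiled .vmfb artifact on disk.
payload = b"\x00vmfb" + b"\x00" * 64

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(payload)
    path = f.name

with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mapped:
    # Slicing reads only the touched pages; no full-file copy is made.
    header = bytes(mapped[:5])

os.remove(path)
```

For large compiled modules this keeps process memory proportional to the pages actually executed rather than the artifact size.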
powderluv
1b0f39107c
Move torch_mlir import to the top (#1574)
2023-06-21 22:31:35 -07:00
powderluv
b9b8955f74
exclude vulkan on macos
2023-06-21 22:22:27 -07:00