| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| David Burnett | afc9d3b98f | more ruff formatting | 2025-01-07 20:18:19 -05:00 |
| David Burnett | 7ddc757bdb | ruff format changes | 2025-01-07 20:18:19 -05:00 |
| David Burnett | d8da9b45cc | Fix for DEIS / DPM clash | 2025-01-07 20:18:19 -05:00 |
| Ryan Dick | 607d19f4dd | We should not trust the value of model.device, since the model could be partially loaded. | 2025-01-07 19:22:31 -05:00 |
| Ryan Dick | 85eb4f0312 | Fix an edge case with model offloading from VRAM to RAM: if a GGML-quantized model is offloaded from VRAM inside a torch.inference_mode() context manager, it fails with "RuntimeError: Cannot set version_counter for inference tensor". | 2025-01-07 15:59:50 +00:00 |
| Ryan Dick | 71b97ce7be | Reduce the likelihood of encountering https://github.com/invoke-ai/InvokeAI/issues/7513 by eliminating places where the door was left open for this to happen. | 2025-01-07 01:20:15 +00:00 |
| Ryan Dick | 4abfb35321 | Tune SD3 VAE decode working memory estimate. | 2025-01-07 01:20:15 +00:00 |
| Ryan Dick | cba6528ea7 | Add a 20% buffer to all VAE decode working memory estimates. | 2025-01-07 01:20:15 +00:00 |
| Ryan Dick | 6a5cee61be | Tune the working memory estimate for FLUX VAE decoding. | 2025-01-07 01:20:15 +00:00 |
| Ryan Dick | bd8017ecd5 | Update working memory estimate for VAE decoding when tiling is being applied. | 2025-01-07 01:20:15 +00:00 |
| Ryan Dick | 299eb94a05 | Estimate the working memory required for VAE decoding, since this operation tends to be memory-intensive. | 2025-01-07 01:20:15 +00:00 |
| Ryan Dick | bcd29c5d74 | Remove all cases where we check 'model.device'; this is no longer trustworthy now that partial loading is permitted. | 2025-01-07 00:31:00 +00:00 |
| Riku | f4f7415a3b | fix(app): remove obsolete DEFAULT_PRECISION variable | 2025-01-06 11:14:58 +11:00 |
| Ryan Dick | 477d87ec31 | Fix layer patch dtype selection for CLIP text encoder models. | 2024-12-29 21:48:51 +00:00 |
| Ryan Dick | 6d7314ac0a | Consolidate the LayerPatching patching modes into a single implementation. | 2024-12-24 15:57:54 +00:00 |
| Ryan Dick | 80db9537ff | Rename model_patcher.py -> layer_patcher.py. | 2024-12-24 15:57:54 +00:00 |
| Ryan Dick | 61253b91f1 | Enable LoRAPatcher.apply_smart_lora_patches(...) throughout the stack. | 2024-12-24 15:57:54 +00:00 |
| Riku | 525cb38c71 | fix(app): fixed InputField default values | 2024-12-20 09:30:56 +11:00 |
| Mary Hipp | c154d833b9 | raise error if control lora used with schnell | 2024-12-18 10:19:28 -05:00 |
| Ryan Dick | a463e97269 | Bump FluxControlLoRALoaderInvocation version. | 2024-12-17 13:36:10 +00:00 |
| Ryan Dick | b272d46056 | Enable ability to control the weight of FLUX Control LoRAs. | 2024-12-17 13:36:10 +00:00 |
| Ryan Dick | dd09509dbd | Rename ModelPatcher -> LayerPatcher to avoid conflicts with another ModelPatcher definition. | 2024-12-17 13:20:19 +00:00 |
| Ryan Dick | 7fad4c9491 | Rename LoRAModelRaw to ModelPatchRaw. | 2024-12-17 13:20:19 +00:00 |
| Ryan Dick | b820862eab | Rename ModelPatcher methods to reflect that they are general model patching methods and are not LoRA-specific. | 2024-12-17 13:20:19 +00:00 |
| Ryan Dick | c604a0956e | Rename LoRAPatcher -> ModelPatcher. | 2024-12-17 13:20:19 +00:00 |
| Ryan Dick | 41664f88db | Rename backend/patches/conversions/ to backend/patches/lora_conversions/ | 2024-12-17 13:20:19 +00:00 |
| Ryan Dick | 42f8d6aa11 | Rename backend/lora/ to backend/patches | 2024-12-17 13:20:19 +00:00 |
| Ryan Dick | a4bed7aee3 | Minor tidy of FLUX control LoRA implementation (mostly documentation). | 2024-12-17 07:28:45 -05:00 |
| Ryan Dick | d84adfd39f | Clean up FLUX control LoRA pre-processing logic. | 2024-12-17 07:28:45 -05:00 |
| Ryan Dick | ac82f73dbe | Make FluxControlLoRALoaderOutput.control_lora non-optional. | 2024-12-17 07:28:45 -05:00 |
| Brandon Rising | 046d19446c | Rename Structural Lora to Control Lora | 2024-12-17 07:28:45 -05:00 |
| Brandon Rising | f53da60b84 | Lots of updates centered around using the LoRA patcher rather than changing the modules in the transformer model. | 2024-12-17 07:28:45 -05:00 |
| Brandon Rising | 5a035dd19f | Support bnb-quantized NF4 FLUX models, use the ControlNet VAE, and support only one structural LoRA per transformer; various other refactors and bugfixes. | 2024-12-17 07:28:45 -05:00 |
| Brandon Rising | f3b253987f | Initial setup for FLUX tools control LoRAs. | 2024-12-17 07:28:45 -05:00 |
| Eugene Brodsky | d06232d9ba | (config) ensure legacy model configs and node templates are writable by the user even if the source files are read-only | 2024-12-04 17:02:08 +00:00 |
| Jonathan | 6012b0f912 | Update flux_text_encoder.py (updated version number for FLUX Text Encoding). | 2024-11-30 08:29:21 -05:00 |
| Jonathan | bb0ed5dc8a | Update flux_denoise.py (updated node version for FLUX Denoise). | 2024-11-30 08:29:21 -05:00 |
| psychedelicious | 5910892c33 | Merge remote-tracking branch 'origin/main' into ryan/flux-regional-prompting | 2024-11-29 15:19:39 +10:00 |
| Ryan Dick | 8cfb032051 | Add utility ImagePanelLayoutInvocation for working with In-Context LoRA workflows. | 2024-11-26 20:58:31 -08:00 |
| Ryan Dick | 06a9d4e2b2 | Use a Textarea component for the FluxTextEncoderInvocation prompt field. | 2024-11-26 20:58:31 -08:00 |
| Ryan Dick | 53abdde242 | Update Flux RegionalPromptingExtension to prepare both a mask with restricted image self-attention and a mask with unrestricted image self-attention. | 2024-11-25 22:04:23 +00:00 |
| Ryan Dick | 3741a6f5e0 | Fix device handling for regional masks and apply the attention mask in the FLUX double stream block. | 2024-11-25 16:02:03 +00:00 |
| Ryan Dick | 20356c0746 | Fix up the logic for preparing FLUX regional prompt attention masks. | 2024-11-21 22:46:25 +00:00 |
| Ryan Dick | fda7aaa7ca | Pass RegionalPromptingExtension down to the CustomDoubleStreamBlockProcessor in FLUX. | 2024-11-20 19:48:04 +00:00 |
| Ryan Dick | 85c616fa34 | WIP - Pass prompt masks to FLUX model during denoising. | 2024-11-20 18:51:43 +00:00 |
| psychedelicious | f01210861b | chore: ruff | 2024-11-19 07:02:37 -08:00 |
| psychedelicious | 872a6ef209 | tidy(nodes): extract slerp from lblend to util fn | 2024-11-19 07:02:37 -08:00 |
| psychedelicious | 4267e5ffc4 | tidy(nodes): bring masked blend latents masking logic into invoke core | 2024-11-19 07:02:37 -08:00 |
| Brandon Rising | 1fd80d54a4 | Run Ruff | 2024-11-19 07:02:37 -08:00 |
| Brandon Rising | 991f63e455 | Store CIELab_to_UPLab.icc within the repo | 2024-11-19 07:02:37 -08:00 |