Commit Graph

1931 Commits

Author SHA1 Message Date
Brandon Rising
2bfb0ddff5 Initial GGUF support for flux models 2024-10-02 18:33:05 -04:00
Ryan Dick
807f458f13 Move FLUX_LORA_TRANSFORMER_PREFIX and FLUX_LORA_CLIP_PREFIX to a shared location. 2024-10-01 10:22:11 -04:00
Ryan Dick
68dbe45315 Fix regression with FLUX diffusers LoRA models where lora keys were not given the expected prefix. 2024-10-01 10:22:11 -04:00
Mary Hipp
c224971cb4 feat(ui,api): add guidance as a default setting option for FLUX models 2024-09-30 17:15:33 -04:00
Ryan Dick
c256826015 Whoops, the 'lora_te1' prefix in FLUX kohya models refers to the CLIP text encoder - not the T5 as previously assumed. Update everything accordingly. 2024-09-30 07:59:14 -04:00
Ryan Dick
7d38a9b7fb Add prefix to distinguish FLUX LoRA submodels. 2024-09-30 07:59:14 -04:00
Ryan Dick
d332d81866 Add ability to load FLUX kohya LoRA models that include patches for both the transformer and T5 models. 2024-09-30 07:59:14 -04:00
Ryan Dick
bdeec54886 Remove FLUX TrajectoryGuidanceExtension and revert to the InpaintExtension. Keep the improved inpaint gradient mask adjustment behaviour. 2024-09-26 19:54:28 -04:00
Ryan Dick
8d50ecdfc3 Update docs explaining inpainting trajectory guidance. 2024-09-26 19:54:28 -04:00
Ryan Dick
ba07e255f5 Add support for fractional denoise start and end with FLUX. 2024-09-26 19:54:28 -04:00
Ryan Dick
fae96f3b9f Remove trajectory_guidance_strength parameter. 2024-09-26 19:54:28 -04:00
psychedelicious
dc10197615 fix(app): step callbacks for SD, FLUX, MultiDiffusion
Each of these was a bit off:
- The SD callback started at `-1` and ended at `i`. Combined w/ the weird math on the previous `calc_percentage` util, this caused the progress bar to never finish.
- The MultiDiffusion callback had the same problems as SD.
- The FLUX callback didn't emit a pre-denoising step 0 image. It also reported total_steps as 1 higher than the actual step count.

Each of these now emit the expected events to the frontend:
- The initial latents at 0%
- Progress at each step, ending at 100%
2024-09-22 21:20:32 +03:00
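The off-by-one fix described in this commit can be sketched roughly as follows. This is a hypothetical illustration, not the actual InvokeAI callback code; the function and parameter names (`calc_percentage`, `run_denoise`, `emit`) are assumptions:

```python
def calc_percentage(step: int, total_steps: int) -> float:
    """Map step 0 (the initial latents) .. total_steps onto 0.0 .. 1.0."""
    if total_steps <= 0:
        return 1.0
    return step / total_steps

def run_denoise(total_steps: int, emit) -> None:
    """Emit the pre-denoising image at 0%, then one event per step, ending at 100%."""
    emit(step=0, percentage=calc_percentage(0, total_steps))  # initial latents, 0%
    for i in range(1, total_steps + 1):
        # ... run one denoising step here ...
        emit(step=i, percentage=calc_percentage(i, total_steps))  # final step hits 1.0
```

Starting the loop at 1 (not -1) and dividing by the true step count is what lets the progress bar actually reach 100%.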
Ryan Dick
a43a045b04 Fix preview image to work well with FLUX trajectory guidance. 2024-09-20 21:08:41 +00:00
Ryan Dick
cd3a7bdb5e Assert that change_ratio is in the expected range in TrajectoryGuidanceExtension. 2024-09-20 20:34:49 +00:00
Ryan Dick
16ca540ece Pre-compute trajectory guidance schedule params rather than calculating on each step. 2024-09-20 20:18:06 +00:00
Ryan Dick
2f82171dff Tidy up the logic for inpainting mask adjustment in FLUX TrajectoryGuidanceExtension. 2024-09-20 14:48:06 +00:00
Ryan Dick
b6748fb1e1 Fix typo 2024-09-20 14:15:59 +00:00
Ryan Dick
f0aad5882d Fixup docs in the TrajectoryGuidanceExtension. 2024-09-20 14:04:53 +00:00
Ryan Dick
e8357afd3a Add traj_guidance_strength to FluxDenoiseInvocation. 2024-09-20 02:41:52 +00:00
Ryan Dick
93c15c9958 Rough draft of TrajectoryGuidanceExtension. 2024-09-20 02:21:47 +00:00
Ryan Dick
97de521c70 Add build_line(...) util function. 2024-09-20 01:01:37 +00:00
Ryan Dick
3d6f60f63e Merge branch 'main' into ryan/flux-lora-quantized 2024-09-18 13:22:39 -04:00
maryhipp
8916036ed3 fix progress image for FLUX inpainting 2024-09-17 06:41:32 +03:00
psychedelicious
0fd430fc20 fix(nodes): add thresholding to lineart & lineart anime nodes
The lineart model often outputs a lot of almost-black noise. SD1.5 ControlNets seem to be OK with this, but SDXL ControlNets are not - they need a cleaner map. 12 was experimentally determined to be a good threshold, eliminating all the noise while keeping the actual edges. Other approaches to thresholding may be better, for example stretching the contrast or removing noise.

I tried:
- Simple thresholding (as implemented here) - works fine.
- Adaptive thresholding - doesn't work, because the thresholding is done in the context of small blocks, while we want thresholding in the context of the whole image.
- Gamma adjustment - alters the white values too much. Hard to tune.
- Contrast stretching, with and without pre-simple-thresholding - this allows us to threshold out the noise, then stretch everything above the threshold down to almost-zero. So you have a smoother gradient of lightness near zero. It works, but it also stretches contrast near white down a bit, which is probably undesired.

In the end, simple thresholding works fine and is very simple.
2024-09-17 04:04:11 +03:00
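The simple thresholding this commit settled on can be sketched like so. This is a minimal illustration assuming a grayscale uint8 lineart map; the function name and the at-or-below cutoff semantics are assumptions, only the threshold value 12 comes from the commit message:

```python
import numpy as np

NOISE_THRESHOLD = 12  # experimentally determined value from the commit message

def threshold_lineart(image: np.ndarray, threshold: int = NOISE_THRESHOLD) -> np.ndarray:
    """Zero out near-black noise: pixels at or below the threshold become 0,
    leaving the actual edge pixels untouched."""
    out = image.copy()
    out[out <= threshold] = 0
    return out
```

Because only values at or below the cutoff are touched, the white end of the range is left alone, avoiding the washed-out highlights the contrast-stretching approach produced.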
Ryan Dick
2934e31620 Fix bug when applying multiple LoRA models via apply_lora_sidecar_patches(), and add unit tests for the stacked LoRA case. 2024-09-16 14:48:39 +00:00
Ryan Dick
e88d3cf2f7 Assume alpha=rank for FLUX diffusers PEFT LoRA models. 2024-09-16 13:57:07 +00:00
Ryan Dick
d51f2c5e00 Add bias to LoRA sidecar layer unit tests. 2024-09-15 04:39:56 +03:00
Ryan Dick
78efed4499 Revert the change that made all LoRA layers torch.nn.Module subclasses. While the code is uglier, it turns out that the Module implementation of some ops like .to(...) is noticeably slower. 2024-09-15 04:39:56 +03:00
Ryan Dick
02f27c750a Add unit tests for LoRAPatcher.apply_lora_sidecar_patches(...) and fixup dtype handling in the sidecar layers. 2024-09-15 04:39:56 +03:00
Ryan Dick
ba3ba3c23a Add unit tests for LoRALinearSidecarLayer and ConcatenatedLoRALinearSidecarLayer. 2024-09-15 04:39:56 +03:00
Ryan Dick
61d3d566de Minor cleanup and documentation updates. 2024-09-15 04:39:56 +03:00
Ryan Dick
ae41651346 Remove LoRA conv sidecar layers until they are needed and properly tested. 2024-09-15 04:39:56 +03:00
Ryan Dick
5bb0c79c14 Add links to test models for loha, lokr, ia3. 2024-09-15 04:39:56 +03:00
Ryan Dick
9438ea608c Update all lycoris layer types to use the new torch.nn.Module base class. 2024-09-15 04:39:56 +03:00
Ryan Dick
81fbaf2b8b Assume LoRA alpha=8 for FLUX diffusers PEFT LoRAs. 2024-09-15 04:39:56 +03:00
Ryan Dick
10c3c61cb2 Get diffusers FLUX LoRA working as sidecar patch on quantized model. 2024-09-15 04:39:56 +03:00
Ryan Dick
45bc8fcd7f WIP - Implement sidecar LoRA layers using functional API. 2024-09-15 04:39:56 +03:00
Ryan Dick
f5f894437c Bug fixes to get LoRA sidecar patching working for the first time. 2024-09-15 04:39:56 +03:00
Ryan Dick
3e12ac9740 WIP - LoRA sidecar layers. 2024-09-15 04:39:56 +03:00
Ryan Dick
049ce1826c WIP - adding LoRA sidecar layers 2024-09-15 04:39:56 +03:00
Ryan Dick
2ff4dae5ce Add util functions calc_tensor_size(...) and calc_tensors_size(...). 2024-09-15 04:39:56 +03:00
Ryan Dick
705173b575 Remove unused layer_key property from LoRALayerBase. 2024-09-15 04:39:56 +03:00
Ryan Dick
fef26a5f2f Consolidate all LoRA patching logic in the LoRAPatcher. 2024-09-15 04:39:56 +03:00
Ryan Dick
ee5d8f6caf lora_layer_from_state_dict(...) -> any_lora_layer_from_state_dict(...) 2024-09-15 04:39:56 +03:00
Ryan Dick
ddda60c1a2 Rename peft/ -> lora/ 2024-09-15 04:39:56 +03:00
Ryan Dick
aac97e105a General cleanup/documentation. 2024-09-15 04:39:56 +03:00
Ryan Dick
552a5b06a4 Add a check that all keys are handled in the FLUX Diffusers LoRA loading code. 2024-09-15 04:39:56 +03:00
Ryan Dick
5800e60b06 Add model probe support for FLUX LoRA models in Diffusers format. 2024-09-15 04:39:56 +03:00
Ryan Dick
31a8757e6b Add is_state_dict_likely_in_flux_diffusers_format(...) function with unit test. 2024-09-15 04:39:56 +03:00
Ryan Dick
534e938a62 Add unit test for lora_model_from_flux_diffusers_state_dict(...). 2024-09-15 04:39:56 +03:00