Brandon Rising
d328eaf743
Remove no longer used dequantize_tensor function
2024-10-02 18:33:05 -04:00
Brandon Rising
0f333388bb
Add comment describing why we're not using the meta device during probing of gguf files
2024-10-02 18:33:05 -04:00
Ryan Dick
bc63e2acc5
Add workaround for FLUX GGUF models with incorrect img_in.weight shape.
2024-10-02 18:33:05 -04:00
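A hedged sketch of the kind of shape workaround this commit describes. The helper name and signature are hypothetical, and the repo's actual fix may differ (e.g. it may operate on quantized data differently); the idea is just: if the stored img_in.weight has the wrong shape but the right element count, reshape it to what the FLUX img_in Linear layer expects.

```python
import torch

def fix_img_in_weight(weight: torch.Tensor, expected_shape: tuple[int, int]) -> torch.Tensor:
    # Hypothetical helper: reshape a mis-shaped img_in.weight from a GGUF export
    # when the total number of elements matches the expected Linear weight shape.
    if tuple(weight.shape) == expected_shape:
        return weight
    if weight.numel() == expected_shape[0] * expected_shape[1]:
        return weight.reshape(expected_shape)
    raise ValueError(f"Unexpected img_in.weight shape: {tuple(weight.shape)}")
```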
Ryan Dick
ec7e771942
Add a compute_dtype field to GGMLTensor.
2024-10-02 18:33:05 -04:00
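A minimal sketch of what a compute_dtype field buys on a quantized tensor wrapper. This is not the repo's GGMLTensor API; the class and method names here are illustrative stand-ins. The quantized payload stays compact at rest, while compute_dtype records which dtype to dequantize into when the tensor participates in math.

```python
import torch

class QuantizedTensorSketch:
    """Illustrative stand-in for a GGMLTensor-like wrapper (names hypothetical)."""

    def __init__(self, quantized_data: torch.Tensor, compute_dtype: torch.dtype):
        self.quantized_data = quantized_data
        self.compute_dtype = compute_dtype

    def dequantize(self) -> torch.Tensor:
        # Real GGML dequantization is block-wise; a plain cast stands in for it here.
        return self.quantized_data.to(self.compute_dtype)
```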
Ryan Dick
fe84013392
Add unit tests for GGMLTensor.
2024-10-02 18:33:05 -04:00
Ryan Dick
710f81266b
Fix type errors in GGMLTensor.
2024-10-02 18:33:05 -04:00
Brandon Rising
446e2884bc
Remove no longer used code paths, general cleanup of new dequantization code, update probe
2024-10-02 18:33:05 -04:00
Brandon Rising
7d9f125232
Run ruff and update imports
2024-10-02 18:33:05 -04:00
Brandon Rising
66bbd62758
Run ruff and fix typing in torch patcher
2024-10-02 18:33:05 -04:00
Brandon Rising
0875e861f5
Various updates to GGUF performance
2024-10-02 18:33:05 -04:00
Brandon
0267d73dfc
Update invokeai/backend/model_manager/load/model_loaders/flux.py
...
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2024-10-02 18:33:05 -04:00
Ryan Dick
f06765dfba
Get alternative GGUF implementation working... barely.
2024-10-02 18:33:05 -04:00
Ryan Dick
f347b26999
Initial experimentation with Tensor-like extension for GGUF.
2024-10-02 18:33:05 -04:00
Lincoln Stein
c665cf3525
recognize .gguf files when scanning a folder for import
2024-10-02 18:33:05 -04:00
Brandon Rising
2bfb0ddff5
Initial GGUF support for FLUX models
2024-10-02 18:33:05 -04:00
Ryan Dick
807f458f13
Move FLUX_LORA_TRANSFORMER_PREFIX and FLUX_LORA_CLIP_PREFIX to a shared location.
2024-10-01 10:22:11 -04:00
Ryan Dick
68dbe45315
Fix regression with FLUX diffusers LoRA models where lora keys were not given the expected prefix.
2024-10-01 10:22:11 -04:00
Mary Hipp
c224971cb4
feat(ui,api): add guidance as a default setting option for FLUX models
2024-09-30 17:15:33 -04:00
Ryan Dick
c256826015
Whoops, the 'lora_te1' prefix in FLUX kohya models refers to the CLIP text encoder - not the T5 as previously assumed. Update everything accordingly.
2024-09-30 07:59:14 -04:00
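A sketch of the prefix-based key routing this correction implies: kohya 'lora_te1' keys belong to the CLIP text encoder, not the T5. FLUX_LORA_TRANSFORMER_PREFIX and FLUX_LORA_CLIP_PREFIX are named in this branch, but their values are assumed here, as is the router function itself.

```python
FLUX_LORA_TRANSFORMER_PREFIX = "lora_transformer-"  # assumed value
FLUX_LORA_CLIP_PREFIX = "lora_clip-"  # assumed value

def route_kohya_key(key: str) -> str:
    # Hypothetical router: prefix kohya FLUX LoRA keys by target submodel.
    if key.startswith("lora_unet"):
        return FLUX_LORA_TRANSFORMER_PREFIX + key
    if key.startswith("lora_te1"):
        return FLUX_LORA_CLIP_PREFIX + key  # CLIP, not T5 (the fix above)
    raise ValueError(f"Unrecognized kohya LoRA key: {key}")
```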
Ryan Dick
7d38a9b7fb
Add prefix to distinguish FLUX LoRA submodels.
2024-09-30 07:59:14 -04:00
Ryan Dick
d332d81866
Add ability to load FLUX kohya LoRA models that include patches for both the transformer and T5 models.
2024-09-30 07:59:14 -04:00
Ryan Dick
bdeec54886
Remove FLUX TrajectoryGuidanceExtension and revert to the InpaintExtension. Keep the improved inpaint gradient mask adjustment behaviour.
2024-09-26 19:54:28 -04:00
Ryan Dick
8d50ecdfc3
Update docs explaining inpainting trajectory guidance.
2024-09-26 19:54:28 -04:00
Ryan Dick
ba07e255f5
Add support for fractional denoise start and end with FLUX.
2024-09-26 19:54:28 -04:00
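A minimal sketch of what fractional denoise start/end means in practice, assuming the common approach of clipping the timestep schedule to the requested fraction of the denoising process. The function name and rounding policy are assumptions, not the repo's actual utility.

```python
def clip_timesteps_fractional(
    timesteps: list[float], denoise_start: float, denoise_end: float
) -> list[float]:
    # Keep the slice of the schedule inside [denoise_start, denoise_end], where
    # both are fractions (0.0..1.0) of the full denoising process. Note that
    # Python's round() uses banker's rounding; a real implementation may differ.
    assert 0.0 <= denoise_start <= denoise_end <= 1.0
    total = len(timesteps) - 1  # one more boundary than there are steps
    start_idx = round(denoise_start * total)
    end_idx = round(denoise_end * total)
    return timesteps[start_idx : end_idx + 1]
```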
Ryan Dick
fae96f3b9f
Remove trajectory_guidance_strength parameter.
2024-09-26 19:54:28 -04:00
psychedelicious
dc10197615
fix(app): step callbacks for SD, FLUX, MultiDiffusion
...
Each of these was a bit off:
- The SD callback started at `-1` and ended at `i`. Combined with the weird math in the previous `calc_percentage` util, this caused the progress bar to never finish.
- The MultiDiffusion callback had the same problems as SD.
- The FLUX callback didn't emit a pre-denoising step 0 image, and it reported `total_steps` as 1 higher than the actual step count.
Each of these now emits the expected events to the frontend (see the sketch after this entry):
- The initial latents at 0%
- Progress at each step, ending at 100%
2024-09-22 21:20:32 +03:00
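A sketch of the corrected progress math described above. The body names the previous `calc_percentage` util as part of the problem; the helper below only mirrors the described behavior (step 0 at 0%, final step at 100%) and is not the app's actual code.

```python
def calc_percentage(step: int, total_steps: int) -> float:
    # Step 0 is the pre-denoising (initial latents) event at 0%;
    # the final step reports exactly 100%.
    if total_steps <= 0:
        return 1.0
    return step / total_steps

# Expected sequence for a 4-step run: 0.0, 0.25, 0.5, 0.75, 1.0
assert [calc_percentage(s, 4) for s in range(5)] == [0.0, 0.25, 0.5, 0.75, 1.0]
```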
Ryan Dick
a43a045b04
Fix preview image to work well with FLUX trajectory guidance.
2024-09-20 21:08:41 +00:00
Ryan Dick
cd3a7bdb5e
Assert that change_ratio is in the expected range in TrajectoryGuidanceExtension.
2024-09-20 20:34:49 +00:00
Ryan Dick
16ca540ece
Pre-compute trajectory guidance schedule params rather than calculating on each step.
2024-09-20 20:18:06 +00:00
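A hedged sketch of the precomputation pattern this commit describes: compute the per-step guidance params once up front instead of on every step. The class, field names, and the linear ramp are all placeholders; the real extension's schedule is not shown here (and was later removed, per the 2024-09-26 commits above).

```python
class TrajectoryGuidanceScheduleSketch:
    """Illustrative precompute (names hypothetical)."""

    def __init__(self, num_steps: int, strength: float):
        # One change_ratio per step, precomputed; a linear ramp stands in
        # for whatever schedule the real extension used.
        self.change_ratios = [
            strength * (1.0 - i / max(num_steps - 1, 1)) for i in range(num_steps)
        ]

    def change_ratio(self, step: int) -> float:
        return self.change_ratios[step]
```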
Ryan Dick
2f82171dff
Tidy up the logic for inpainting mask adjustment in FLUX TrajectoryGuidanceExtension.
2024-09-20 14:48:06 +00:00
Ryan Dick
b6748fb1e1
Fix typo
2024-09-20 14:15:59 +00:00
Ryan Dick
f0aad5882d
Fixup docs in the TrajectoryGuidanceExtension.
2024-09-20 14:04:53 +00:00
Ryan Dick
e8357afd3a
Add traj_guidance_strength to FluxDenoiseInvocation.
2024-09-20 02:41:52 +00:00
Ryan Dick
93c15c9958
Rough draft of TrajectoryGuidanceExtension.
2024-09-20 02:21:47 +00:00
Ryan Dick
97de521c70
Add build_line(...) util function.
2024-09-20 01:01:37 +00:00
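A plausible shape for the build_line(...) util named above: return the linear function passing through two points. The signature is assumed from the name, not copied from the repo.

```python
from typing import Callable

def build_line(x1: float, y1: float, x2: float, y2: float) -> Callable[[float], float]:
    # Return f(x) for the line through (x1, y1) and (x2, y2).
    slope = (y2 - y1) / (x2 - x1)
    return lambda x: slope * (x - x1) + y1

# Example: the line through (0, 0) and (1, 10) evaluated at x = 0.5
assert build_line(0.0, 0.0, 1.0, 10.0)(0.5) == 5.0
```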
Ryan Dick
3d6f60f63e
Merge branch 'main' into ryan/flux-lora-quantized
2024-09-18 13:22:39 -04:00
maryhipp
8916036ed3
fix progress image for FLUX inpainting
2024-09-17 06:41:32 +03:00
psychedelicious
0fd430fc20
fix(nodes): add thresholding to lineart & lineart anime nodes
...
The lineart model often outputs a lot of almost-black noise. SD1.5 ControlNets seem to be OK with this, but SDXL ControlNets are not - they need a cleaner map. 12 was experimentally determined to be a good threshold, eliminating all the noise while keeping the actual edges. Other approaches to thresholding may be better, for example stretching the contrast or removing noise.
I tried:
- Simple thresholding (as implemented here) - works fine.
- Adaptive thresholding - doesn't work, because the thresholding is done in the context of small blocks, while we want thresholding in the context of the whole image.
- Gamma adjustment - alters the white values too much. Hard to tune.
- Contrast stretching, with and without pre-simple-thresholding - this lets us threshold out the noise, then stretch everything above the threshold down to almost-zero, giving a smoother gradient of lightness near zero. It works, but it also pulls the contrast near white down a bit, which is probably undesired.
In the end, simple thresholding works fine and is very simple (see the sketch after this entry).
2024-09-17 04:04:11 +03:00
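A minimal sketch of the simple-thresholding approach the body above settles on, assuming a uint8 grayscale numpy array; the node's actual implementation may differ in representation and wiring.

```python
import numpy as np

def threshold_lineart(image: np.ndarray, threshold: int = 12) -> np.ndarray:
    # Zero out near-black noise below the experimentally chosen threshold (12),
    # leaving the actual edges untouched.
    out = image.copy()
    out[out < threshold] = 0
    return out
```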
Ryan Dick
2934e31620
Fix bug when applying multiple LoRA models via apply_lora_sidecar_patches(), and add unit tests for the stacked LoRA case.
2024-09-16 14:48:39 +00:00
Ryan Dick
e88d3cf2f7
Assume alpha=rank for FLUX diffusers PEFT LoRA models.
2024-09-16 13:57:07 +00:00
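For context on the alpha=rank assumption: standard LoRA scaling multiplies the low-rank product by alpha/rank, so when a PEFT checkpoint omits alpha, assuming alpha == rank makes the scale exactly 1.0. The helper name below is illustrative.

```python
import torch

def lora_weight_delta(
    up: torch.Tensor, down: torch.Tensor, alpha: float | None, rank: int
) -> torch.Tensor:
    # delta_W = (alpha / rank) * up @ down; missing alpha => assume alpha=rank => scale=1.0
    scale = (alpha / rank) if alpha is not None else 1.0
    return scale * (up @ down)
```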
Ryan Dick
d51f2c5e00
Add bias to LoRA sidecar layer unit tests.
2024-09-15 04:39:56 +03:00
Ryan Dick
78efed4499
Revert the change that made all LoRA layers torch.nn.Modules. While the code is uglier, it turns out that the Module implementation of some ops like .to(...) is noticeably slower.
2024-09-15 04:39:56 +03:00
Ryan Dick
02f27c750a
Add unit tests for LoRAPatcher.apply_lora_sidecar_patches(...) and fixup dtype handling in the sidecar layers.
2024-09-15 04:39:56 +03:00
Ryan Dick
ba3ba3c23a
Add unit tests for LoRALinearSidecarLayer and ConcatenatedLoRALinearSidecarLayer.
2024-09-15 04:39:56 +03:00
Ryan Dick
61d3d566de
Minor cleanup and documentation updates.
2024-09-15 04:39:56 +03:00
Ryan Dick
ae41651346
Remove LoRA conv sidecar layers until they are needed and properly tested.
2024-09-15 04:39:56 +03:00
Ryan Dick
5bb0c79c14
Add links to test models for loha, lokr, ia3.
2024-09-15 04:39:56 +03:00
Ryan Dick
9438ea608c
Update all lycoris layer types to use the new torch.nn.Module base class.
2024-09-15 04:39:56 +03:00
Ryan Dick
81fbaf2b8b
Assume LoRA alpha=8 for FLUX diffusers PEFT LoRAs.
2024-09-15 04:39:56 +03:00
Ryan Dick
10c3c61cb2
Get diffusers FLUX LoRA working as sidecar patch on quantized model.
2024-09-15 04:39:56 +03:00
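A minimal sketch of the sidecar-patch idea behind this commit: rather than merging LoRA weights into a quantized base weight (which would require dequantizing it), wrap the base layer and add the low-rank path's output at runtime, leaving the quantized weights untouched. Names are assumed; the repo's LoRALinearSidecarLayer differs.

```python
import torch

class LoRALinearSidecarSketch(torch.nn.Module):
    """Illustrative sidecar wrapper around a (possibly quantized) base layer."""

    def __init__(self, base: torch.nn.Module, down: torch.Tensor, up: torch.Tensor, scale: float = 1.0):
        super().__init__()
        self.base = base
        self.down = down  # (rank, in_features)
        self.up = up      # (out_features, rank)
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus the scaled low-rank correction; base weights untouched.
        residual = (x @ self.down.T) @ self.up.T
        return self.base(x) + self.scale * residual
```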