Ryan Dick
0b84f567f1
Fix type errors and improve docs for ControlNetFlux.
2024-10-10 07:59:29 -04:00
Ryan Dick
69c0d7dcc9
Remove gradient checkpointing from ControlNetFlux.
2024-10-10 07:59:29 -04:00
Ryan Dick
5307248fcf
Remove ControlNetFlux logic related to attn processor overrides.
2024-10-10 07:59:29 -04:00
Ryan Dick
2efaea8f79
Remove duplicate FluxParams class.
2024-10-10 07:59:29 -04:00
Ryan Dick
c1dfd9b7d9
Fix FLUX module imports for ControlNetFlux.
2024-10-10 07:59:29 -04:00
Ryan Dick
c594ef89d2
Copy ControlNetFlux model from 47495425db/src/flux/controlnet.py.
2024-10-10 07:59:29 -04:00
Eugene Brodsky
b6c7949bb7
feat(backend): prefer xformers based on cuda compute capability
2024-10-09 22:46:18 -04:00
Kent Keirsey
969f8b8e8d
ruff update
2024-10-08 08:56:26 -04:00
David Burnett
ccb5f90556
Get Flux working on MPS when torch 2.5.0 test or nightlies are installed.
2024-10-08 08:56:26 -04:00
Ryan Dick
3d4bd71098
Update test_probe_handles_state_dict_with_integer_keys() to make sure that it is still testing what it's intended to test. Previously, we were skipping an important part of the test by using a fake file path.
2024-10-02 18:33:05 -04:00
Brandon Rising
814be44cd7
Ignore paths that don't exist in probe for unit tests
2024-10-02 18:33:05 -04:00
Brandon Rising
d328eaf743
Remove no longer used dequantize_tensor function
2024-10-02 18:33:05 -04:00
Brandon Rising
0f333388bb
Add comment describing why we're not using the meta device during probing of gguf files
2024-10-02 18:33:05 -04:00
Ryan Dick
bc63e2acc5
Add workaround for FLUX GGUF models with incorrect img_in.weight shape.
2024-10-02 18:33:05 -04:00
Ryan Dick
ec7e771942
Add a compute_dtype field to GGMLTensor.
2024-10-02 18:33:05 -04:00
Ryan Dick
fe84013392
Add unit tests for GGMLTensor.
2024-10-02 18:33:05 -04:00
Ryan Dick
710f81266b
Fix type errors in GGMLTensor.
2024-10-02 18:33:05 -04:00
Brandon Rising
446e2884bc
Remove no longer used code paths, general cleanup of new dequantization code, update probe
2024-10-02 18:33:05 -04:00
Brandon Rising
7d9f125232
Run ruff and update imports
2024-10-02 18:33:05 -04:00
Brandon Rising
66bbd62758
Run ruff and fix typing in torch patcher
2024-10-02 18:33:05 -04:00
Brandon Rising
0875e861f5
Various updates to gguf performance
2024-10-02 18:33:05 -04:00
Brandon
0267d73dfc
Update invokeai/backend/model_manager/load/model_loaders/flux.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2024-10-02 18:33:05 -04:00
Ryan Dick
f06765dfba
Get alternative GGUF implementation working... barely.
2024-10-02 18:33:05 -04:00
Ryan Dick
f347b26999
Initial experimentation with Tensor-like extension for GGUF.
2024-10-02 18:33:05 -04:00
Lincoln Stein
c665cf3525
recognize .gguf files when scanning a folder for import
2024-10-02 18:33:05 -04:00
Brandon Rising
2bfb0ddff5
Initial GGUF support for flux models
2024-10-02 18:33:05 -04:00
Ryan Dick
807f458f13
Move FLUX_LORA_TRANSFORMER_PREFIX and FLUX_LORA_CLIP_PREFIX to a shared location.
2024-10-01 10:22:11 -04:00
Ryan Dick
68dbe45315
Fix regression with FLUX diffusers LoRA models where lora keys were not given the expected prefix.
2024-10-01 10:22:11 -04:00
Mary Hipp
c224971cb4
feat(ui,api): add guidance as a default setting option for FLUX models
2024-09-30 17:15:33 -04:00
Ryan Dick
c256826015
Whoops, the 'lora_te1' prefix in FLUX kohya models refers to the CLIP text encoder - not the T5 as previously assumed. Update everything accordingly.
2024-09-30 07:59:14 -04:00
Ryan Dick
7d38a9b7fb
Add prefix to distinguish FLUX LoRA submodels.
2024-09-30 07:59:14 -04:00
Ryan Dick
d332d81866
Add ability to load FLUX kohya LoRA models that include patches for both the transformer and T5 models.
2024-09-30 07:59:14 -04:00
Ryan Dick
bdeec54886
Remove FLUX TrajectoryGuidanceExtension and revert to the InpaintExtension. Keep the improved inpaint gradient mask adjustment behaviour.
2024-09-26 19:54:28 -04:00
Ryan Dick
8d50ecdfc3
Update docs explaining inpainting trajectory guidance.
2024-09-26 19:54:28 -04:00
Ryan Dick
ba07e255f5
Add support for fractional denoise start and end with FLUX.
2024-09-26 19:54:28 -04:00
Ryan Dick
fae96f3b9f
Remove trajectory_guidance_strength parameter.
2024-09-26 19:54:28 -04:00
psychedelicious
dc10197615
fix(app): step callbacks for SD, FLUX, MultiDiffusion
Each of these was a bit off:
- The SD callback started at `-1` and ended at `i`. Combined w/ the weird math on the previous `calc_percentage` util, this caused the progress bar to never finish.
- The MultiDiffusion callback had the same problems as SD.
- The FLUX callback didn't emit a pre-denoising step 0 image. It also reported total_steps as 1 higher than the actual step count.
Each of these now emit the expected events to the frontend:
- The initial latents at 0%
- Progress at each step, ending at 100%
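The corrected progress math described above can be sketched as follows (a minimal illustration of the intended event sequence, not the actual callback code; the function name is hypothetical):

```python
def progress_percentages(total_steps: int) -> list[float]:
    # Emit an initial 0% event for the pre-denoising latents,
    # then one event per step, ending at exactly 100%.
    events = [0.0]
    for step in range(1, total_steps + 1):
        events.append(step / total_steps)
    return events

# For a 4-step denoise this yields [0.0, 0.25, 0.5, 0.75, 1.0]:
# the initial latents at 0%, then progress at each step up to 100%.
```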
2024-09-22 21:20:32 +03:00
Ryan Dick
a43a045b04
Fix preview image to work well with FLUX trajectory guidance.
2024-09-20 21:08:41 +00:00
Ryan Dick
cd3a7bdb5e
Assert that change_ratio is in the expected range in TrajectoryGuidanceExtension.
2024-09-20 20:34:49 +00:00
Ryan Dick
16ca540ece
Pre-compute trajectory guidance schedule params rather than calculating on each step.
2024-09-20 20:18:06 +00:00
Ryan Dick
2f82171dff
Tidy up the logic for inpainting mask adjustment in FLUX TrajectoryGuidanceExtension.
2024-09-20 14:48:06 +00:00
Ryan Dick
b6748fb1e1
Fix typo
2024-09-20 14:15:59 +00:00
Ryan Dick
f0aad5882d
Fixup docs in the TrajectoryGuidanceExtension.
2024-09-20 14:04:53 +00:00
Ryan Dick
e8357afd3a
Add traj_guidance_strength to FluxDenoiseInvocation.
2024-09-20 02:41:52 +00:00
Ryan Dick
93c15c9958
Rough draft of TrajectoryGuidanceExtension.
2024-09-20 02:21:47 +00:00
Ryan Dick
97de521c70
Add build_line(...) util function.
2024-09-20 01:01:37 +00:00
Ryan Dick
3d6f60f63e
Merge branch 'main' into ryan/flux-lora-quantized
2024-09-18 13:22:39 -04:00
maryhipp
8916036ed3
fix progress image for FLUX inpainting
2024-09-17 06:41:32 +03:00
psychedelicious
0fd430fc20
fix(nodes): add thresholding to lineart & lineart anime nodes
The lineart model often outputs a lot of almost-black noise. SD1.5 ControlNets seem to be OK with this, but SDXL ControlNets are not - they need a cleaner map. 12 was experimentally determined to be a good threshold, eliminating all the noise while keeping the actual edges. Other approaches to thresholding may be better, for example stretching the contrast or removing noise.
I tried:
- Simple thresholding (as implemented here) - works fine.
- Adaptive thresholding - doesn't work, because the thresholding is done in the context of small blocks, while we want thresholding in the context of the whole image.
- Gamma adjustment - alters the white values too much. Hard to tune.
- Contrast stretching, with and without pre-simple-thresholding - this allows us to threshold out the noise, then stretch everything above the threshold down to almost-zero. So you have a smoother gradient of lightness near zero. It works but it also stretches contrast near white down a bit, which is probably undesired.
In the end, simple thresholding works fine and is very simple.
2024-09-17 04:04:11 +03:00
Ryan Dick
2934e31620
Fix bug when applying multiple LoRA models via apply_lora_sidecar_patches(), and add unit tests for the stacked LoRA case.
2024-09-16 14:48:39 +00:00