mirror of
https://github.com/invoke-ai/InvokeAI.git
synced 2026-04-23 03:00:31 -04:00
* WIP: Add FLUX.2 Klein LoRA support (BFL PEFT format)

  Initial implementation for loading and applying LoRA models trained in
  BFL's PEFT format for FLUX.2 Klein transformers.

  Changes:
  - Add LoRA_Diffusers_Flux2_Config and LoRA_LyCORIS_Flux2_Config
  - Add BflPeft format to the FluxLoRAFormat taxonomy
  - Add flux_bfl_peft_lora_conversion_utils for weight conversion
  - Add Flux2KleinLoraLoaderInvocation node

  Status: Work in progress - not yet fully tested

  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* feat(flux2): add LoRA support for FLUX.2 Klein models

  Add BFL PEFT LoRA support for FLUX.2 Klein, including runtime conversion
  of BFL-format keys to diffusers format with fused QKV splitting, improved
  detection of Klein 4B LoRAs via an MLP-ratio check, and frontend graph
  wiring.

* feat(flux2): detect Klein LoRA variant (4B/9B) and filter by compatibility

  Auto-detect the FLUX.2 Klein LoRA variant from tensor dimensions during
  model probing, warn on variant mismatch at load time, and filter the LoRA
  picker to show only variant-compatible LoRAs.

* Chore: Ruff

* Chore: pnpm

* Fix detection and loading of 3 unrecognized FLUX.2 Klein LoRA formats

  Three FLUX.2 Klein LoRAs were either unrecognized or misclassified due to
  gaps in format detection:

  1. The PEFT-wrapped BFL format (base_model.model.* prefix) was not
     recognized because the detector only accepted the diffusion_model.*
     prefix.
  2. Klein 4B LoRAs with hidden_size=3072 were misidentified as Flux.1
     because a break statement exited the detection loop before the
     txt_in/vector_in dimensions could be checked.
  3. The FLUX.2 native diffusers format (to_qkv_mlp_proj, ff.linear_in) was
     not detected because the detector only checked for Flux.1 diffusers
     keys.

  Also handles mixed PEFT/standard LoRA suffix formats within the same file.

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
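The fused-QKV splitting mentioned above can be sketched as follows. This is a hypothetical helper (not the actual `flux_bfl_peft_lora_conversion_utils` code), assuming the fused up-projection stacks the Q, K, V output rows contiguously and the down-projection is shared across all three:

```python
import numpy as np

def split_fused_qkv_lora(lora_A: np.ndarray, lora_B: np.ndarray, hidden_size: int):
    """Split a LoRA trained on a fused QKV projection into per-projection pairs.

    lora_A: (rank, in_features) down-projection; shared by Q, K and V, so it
            is reused unchanged for each split.
    lora_B: (3 * hidden_size, rank) up-projection; rows assumed ordered Q, K, V.
    Returns a dict mapping hypothetical diffusers-style names to (A, B) pairs.
    """
    if lora_B.shape[0] != 3 * hidden_size:
        raise ValueError("fused up-projection must have 3 * hidden_size rows")
    split = {}
    for i, name in enumerate(("to_q", "to_k", "to_v")):
        # Slice the contiguous block of output rows belonging to this projection.
        split[name] = (lora_A, lora_B[i * hidden_size : (i + 1) * hidden_size])
    return split
```

Splitting only `lora_B` is sufficient because the low-rank product `B @ A` decomposes row-wise: each projection's delta is its slice of `B` times the shared `A`.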
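The first two detection gaps fixed above (the unrecognized base_model.model.* prefix, and the early `break` that hid the txt_in/vector_in evidence) can be illustrated with a minimal sketch. The function names and marker set here are hypothetical, not InvokeAI's actual probe code:

```python
# Wrapper prefixes a FLUX.2 Klein LoRA state dict may carry: PEFT-wrapped
# checkpoints use base_model.model.*, others use diffusion_model.*.
KNOWN_PREFIXES = ("base_model.model.", "diffusion_model.")

def normalize_key(key: str) -> str:
    """Strip a recognized wrapper prefix from a state-dict key, if present."""
    for prefix in KNOWN_PREFIXES:
        if key.startswith(prefix):
            return key[len(prefix):]
    return key

def collect_marker_evidence(state_dict_keys):
    """Scan EVERY key before deciding which modules are present.

    An early `break` inside this loop is exactly the bug described above:
    the deciding txt_in/vector_in keys may appear late in the state dict.
    """
    markers = {"txt_in": False, "vector_in": False}
    for key in state_dict_keys:
        base = normalize_key(key)
        for marker in markers:
            if base.startswith(marker + "."):
                markers[marker] = True
        # Deliberately no `break` here: keep scanning to the end.
    return markers
```

With all evidence collected, a detector can then inspect the marker tensors' dimensions to distinguish Klein variants instead of bailing out on the first plausible match.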
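Handling mixed PEFT/standard suffixes within one file, as the last commit describes, amounts to mapping both naming conventions onto one: PEFT names the low-rank factors `lora_A`/`lora_B`, while kohya/LyCORIS-style files use `lora_down`/`lora_up`. A minimal sketch (the normalizer itself is hypothetical):

```python
# Map PEFT factor names onto the kohya/LyCORIS convention so the rest of the
# loader only ever sees one suffix style, even when a file mixes both.
SUFFIX_MAP = {
    ".lora_A.weight": ".lora_down.weight",
    ".lora_B.weight": ".lora_up.weight",
}

def normalize_suffix(key: str) -> str:
    """Rewrite a PEFT-style key suffix to the standard style; pass others through."""
    for peft_suffix, std_suffix in SUFFIX_MAP.items():
        if key.endswith(peft_suffix):
            return key[: -len(peft_suffix)] + std_suffix
    return key
```

Because keys already in the standard style fall through unchanged, applying this to every key of a mixed-format state dict is safe and idempotent.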