translationBot(ui): update translation (French)

Currently translated at 24.1% (351 of 1452 strings)

translationBot(ui): update translation (French)

Currently translated at 17.9% (261 of 1452 strings)

translationBot(ui): update translation (French)

Currently translated at 17.8% (259 of 1452 strings)

translationBot(ui): update translation (French)

Currently translated at 17.5% (255 of 1452 strings)

translationBot(ui): update translation (French)

Currently translated at 10.3% (150 of 1452 strings)

Co-authored-by: Thomas Bolteau <thomas.bolteau50@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/fr/
Translation: InvokeAI/Web UI
translationBot(ui): update translation (German)

Currently translated at 62.0% (901 of 1452 strings)

translationBot(ui): update translation (German)

Currently translated at 56.4% (819 of 1452 strings)

translationBot(ui): update translation (German)

Currently translated at 53.8% (782 of 1452 strings)

Co-authored-by: Ettore Atalan <atalanttore@googlemail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
translationBot(ui): update translation (German)

Currently translated at 56.4% (819 of 1452 strings)

translationBot(ui): update translation (German)

Currently translated at 53.8% (782 of 1452 strings)

translationBot(ui): update translation (German)

Currently translated at 45.3% (658 of 1451 strings)

Co-authored-by: B N <berndnieschalk@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
## Summary
This PR adds support for FLUX LoRA models in kohya format with `lora_te1`
layers (i.e., CLIP LoRA layers). Previously, only transformer LoRA layers
were supported.
An example LoRA model in this format:
https://huggingface.co/cocktailpeanut/optimus
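
For background, here is a minimal sketch of how kohya-format FLUX LoRA keys can be separated into transformer and CLIP groups. The `lora_unet_`/`lora_te1_` prefixes are standard kohya naming conventions; the helper function itself is hypothetical, not the code added in this PR:

```python
from typing import Dict, Tuple

import torch


def split_kohya_flux_lora(
    state_dict: Dict[str, torch.Tensor],
) -> Tuple[Dict[str, torch.Tensor], Dict[str, torch.Tensor]]:
    """Split a kohya-format FLUX LoRA state dict into transformer and CLIP groups."""
    transformer_layers: Dict[str, torch.Tensor] = {}
    clip_layers: Dict[str, torch.Tensor] = {}
    for key, tensor in state_dict.items():
        if key.startswith("lora_unet_"):  # FLUX transformer layers
            transformer_layers[key] = tensor
        elif key.startswith("lora_te1_"):  # CLIP text-encoder layers (new here)
            clip_layers[key] = tensor
        else:
            raise ValueError(f"Unexpected key in kohya FLUX LoRA: '{key}'")
    return transformer_layers, clip_layers
```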
### Example
Prompt: `optimus is playing tennis in a tennis court`
Seed: 0
Without LoRA:

With LoRA:

## QA Instructions
I tested the following:
- [x] The optimus LoRA (with CLIP layers) can be applied.
- [x] FLUX LoRAs without CLIP layers still work.
- [x] Loading the optimus LoRA but applying it to the transformer _only_
produces a different result, i.e., I verified that patching the CLIP
layers does _something_ (see the sketch after this list). Ironically, the
results seem better without applying the CLIP layers; they appear to pull
in more background concepts. Regardless, it works.
- [x] The optimus LoRA can be applied via the Linear UI, and the output
matches results from manually constructing the workflow graph.
- [x] FLUX LoRAs without CLIP layers still work via the Linear UI.
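
To illustrate the transformer-only comparison above, here is a toy example (not InvokeAI code) showing that including a CLIP (`lora_te1`) LoRA delta changes a layer's output for the same input. The layer shapes, alpha, and rank are arbitrary stand-ins:

```python
import torch

torch.manual_seed(0)

# Stand-in for a CLIP linear layer and a rank-4 kohya LoRA entry for it.
base = torch.nn.Linear(8, 8, bias=False)
lora_down = torch.randn(4, 8)  # (rank, in_features)
lora_up = torch.randn(8, 4)    # (out_features, rank)
alpha, rank = 4.0, 4

x = torch.randn(1, 8)
out_unpatched = base(x)

# Standard LoRA update: W' = W + (alpha / rank) * (up @ down)
patched_weight = base.weight + (alpha / rank) * (lora_up @ lora_down)
out_patched = x @ patched_weight.T

# The CLIP patch changes the output, mirroring the QA observation above.
assert not torch.allclose(out_unpatched, out_patched)
```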
## Checklist
- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_