Compare commits

...

12 Commits

Author SHA1 Message Date
Jonathan
7a6760acad Update presets.py (#8846) 2026-02-06 08:52:24 -05:00
Jonathan
91c1e64f0b Add DyPE area option (#8844)
* Add DyPE area option

* Added tests and fixed frontend build

* Made more pythonic
2026-02-06 08:50:24 -05:00
Lincoln Stein
cbe528eef7 chore(release): prep for 6.11.1 bugfix release 2026-02-06 08:10:22 -05:00
Alexander Eichhorn
4081f8701e fix(flux2): Fix FLUX.2 Klein image generation quality (#8838)
* fix(flux2): Fix image quality degradation at resolutions > 1024x1024

This commit addresses severe quality degradation and artifacts when
generating images larger than 1024x1024 with FLUX.2 Klein models.

Root causes fixed:

1. Dynamic max_image_seq_len in scheduler (flux2_denoise.py)
   - Previously hardcoded to 4096 (1024x1024 only)
   - Now dynamically calculated based on actual resolution
   - Allows proper schedule shifting at all resolutions

2. Smoothed mu calculation discontinuity (sampling_utils.py)
   - Eliminated 40-50% mu value drop at seq_len 4300 threshold
   - Implemented smooth cosine interpolation (4096-4500 transition zone)
   - Gradual blend between low-res and high-res formulas
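The 4096–4500 blend described in the bullets above can be sketched as follows (a standalone illustration with rounded placeholder coefficients, not the InvokeAI code):

```python
import math

def mu_low(seq_len: float) -> float:
    # Placeholder low-resolution mu formula (rounded coefficients)
    return 8.74e-05 * seq_len + 1.90

def mu_high(seq_len: float) -> float:
    # Placeholder high-resolution mu formula (rounded coefficients)
    return 1.69e-04 * seq_len + 0.46

def smooth_mu(seq_len: float, lo: float = 4096.0, hi: float = 4500.0) -> float:
    """Cosine-interpolate between the two formulas inside [lo, hi]."""
    if seq_len <= lo:
        return mu_low(seq_len)
    if seq_len >= hi:
        return mu_high(seq_len)
    # t runs 0 -> 1 across the transition zone; the cosine easing removes
    # the hard jump (and its derivative discontinuity) at the threshold
    t = (seq_len - lo) / (hi - lo)
    w = 0.5 - 0.5 * math.cos(math.pi * t)
    return (1.0 - w) * mu_low(seq_len) + w * mu_high(seq_len)
```

At the zone boundaries the blend reduces exactly to the low-res and high-res formulas, which is what eliminates the reported 40-50% drop at the old hard threshold.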

Impact:
- FLUX.2 Klein 9B: Major quality improvement at high resolutions
- FLUX.2 Klein 4B: Improved quality at high resolutions
- Baseline 1024x1024: Unchanged (no regression)
- All generation modes: T2I and Kontext (reference images)

Fixes: Community-reported quality degradation issue
See: Discord discussions in #garbage-bin and #devchat

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* fix(flux2): Fix high-resolution quality degradation for FLUX.2 Klein

  Fixes grid/diamond artifacts and color loss at resolutions > 1024x1024.

  Root causes identified and fixed:
  - BN normalization was incorrectly applied to random noise input
    (diffusers only normalizes image latents from VAE.encode)
  - BN denormalization must be applied to output before VAE decode
  - mu parameter was resolution-dependent causing over-shifted schedules
    at high resolutions (now fixed to 2.02, matching ComfyUI)

  Changes:
  - Remove BN normalization on noise input (not needed for N(0,1) noise)
  - Preserve BN denormalization on denoised output (required for VAE)
  - Fix mu to constant 2.02 for all resolutions (matches ComfyUI)

  Tested at 2048x2048 with FLUX.2 Klein 4B
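The normalization policy in this commit can be sketched in isolation (scalar stand-ins for latent tensors; the function names are assumptions, not the InvokeAI API):

```python
def bn_normalize(latent: float, bn_mean: float, bn_std: float) -> float:
    """Map a VAE-encoded latent into the transformer's normalized space."""
    return (latent - bn_mean) / bn_std

def bn_denormalize(latent: float, bn_mean: float, bn_std: float) -> float:
    """Inverse transform, applied to the denoised output before VAE decode."""
    return latent * bn_std + bn_mean

def prepare_input(value: float, is_vae_latent: bool, bn_mean: float, bn_std: float) -> float:
    # Per the commit message above: random N(0, 1) noise is left untouched;
    # only image latents produced by VAE.encode get normalized.
    return bn_normalize(value, bn_mean, bn_std) if is_vae_latent else value
```

The key invariant is that denormalization is the exact inverse of normalization, so img2img latents round-trip cleanly through the transformer's normalized space.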

* Chore: Ruff

---------

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Co-authored-by: Jonathan <34005131+JPPhoto@users.noreply.github.com>
2026-02-06 08:04:03 -05:00
Lincoln Stein
5649b60672 chore(release): bump version to 6.11.1
This is a bugfix release that contains fixes for bugs in 6.11.0, as
well as updates to the Russian language translation.

No new user-facing features are included in this release.
2026-02-04 15:29:26 -05:00
Weblate (bot)
714eeed74d translationBot(ui): update translation (Russian) (#8830)
Currently translated at 59.7% (1344 of 2249 strings)


Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI

Co-authored-by: DustyShoe <warukeichi@gmail.com>
2026-02-04 15:28:27 -05:00
Alexander Eichhorn
656b50e6ad fix(ui): remove duplicate DyPE preset dropdown in generation settings (#8831)
The ParamFluxDypePreset component was rendered twice in the FLUX
generation settings accordion, causing the DyPE dropdown to appear
both after the scheduler and after the guidance slider.

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2026-02-04 15:27:24 -05:00
Alexander Eichhorn
0263f4032c fix(ui): reset seed variance toggle when recalling images without that metadata (#8829)
When recalling an image that lacks `z_image_seed_variance_enabled` metadata
(e.g. older images), the toggle now defaults to off instead of retaining the
previous state.
2026-02-04 15:27:03 -05:00
Alexander Eichhorn
dd87e0a946 The FLUX.2 Klein PR (b92c6ae63) replaced the user's denoising strength (#8828)
setting with hardcoded full denoising (start=0, end=1) in addOutpaint.
This caused denoising strength to be completely ignored whenever the
canvas bbox extended beyond the raster layer content, triggering outpaint
mode. The issue affected all model types (SDXL, SD1.5, FLUX, etc.).

Restore the original behavior by reading denoising_start/end from the
user's img2imgStrength setting via getDenoisingStartAndEnd().
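The mapping from an img2img strength slider to denoising_start/denoising_end can be sketched as a minimal Python analogue of the getDenoisingStartAndEnd() helper named above (the exact frontend logic may differ):

```python
def get_denoising_start_and_end(img2img_strength: float) -> tuple[float, float]:
    # strength 1.0 -> denoise from pure noise (start=0, the old hardcoded
    # outpaint behavior); strength 0.3 -> skip the first 70% of the
    # schedule, preserving most of the existing image content.
    strength = max(0.0, min(img2img_strength, 1.0))
    return 1.0 - strength, 1.0
```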

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2026-02-04 15:26:39 -05:00
Alexander Eichhorn
438eea1159 fix(flux2): support Heun scheduler for FLUX.2 Klein models (#8794)
* fix(flux2): support Heun scheduler for FLUX.2 Klein models

FlowMatchHeunDiscreteScheduler does not support dynamic shifting parameters
(use_dynamic_shifting, base_shift, max_shift, etc.) or sigmas/mu in set_timesteps.
This caused FLUX.2 Klein to fail when using Heun scheduler.

- Create Heun scheduler with only num_train_timesteps and shift parameters
- Use num_inference_steps instead of sigmas for Heun's set_timesteps call
- Euler and LCM schedulers continue to use full dynamic shifting support

* fix(flux2): fix Heun scheduler detection using inspect.signature

The previous hasattr check for state_in_first_order failed because
the attribute doesn't exist before set_timesteps() is called. Now
using inspect.signature to check for sigmas parameter support,
matching the FLUX.1 implementation.
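The detection strategy described above can be sketched with stand-in scheduler classes (illustrative only; the real diffusers schedulers take many more parameters):

```python
import inspect

class EulerLike:
    # Stand-in for a scheduler whose set_timesteps accepts sigmas/mu
    def set_timesteps(self, num_inference_steps=None, sigmas=None, mu=None, device=None):
        ...

class HeunLike:
    # Stand-in for a scheduler that only accepts num_inference_steps
    def set_timesteps(self, num_inference_steps=None, device=None):
        ...

def supports_sigmas(scheduler) -> bool:
    # Inspect the method signature instead of probing an attribute that
    # only exists after set_timesteps() has been called
    sig = inspect.signature(scheduler.set_timesteps)
    return "sigmas" in sig.parameters
```

Because the check reads the signature rather than runtime state, it works before the scheduler has ever been stepped.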

---------

Co-authored-by: Jonathan <34005131+JPPhoto@users.noreply.github.com>
2026-02-04 15:26:12 -05:00
Alexander Eichhorn
d93e451831 fix(ui): only show FLUX.1 VAEs when a FLUX.1 main model is selected (#8821)
Use useFlux1VAEModels() instead of useFluxVAEModels() in the FLUX VAE
selector, which was incorrectly returning both FLUX.1 and FLUX.2 VAEs.
Remove the now-unused useFluxVAEModels hook.

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2026-02-04 15:25:48 -05:00
Lincoln Stein
efc7a262b7 chore(release): bump version to 6.11.0 2026-01-31 17:27:56 -05:00
20 changed files with 211 additions and 112 deletions

View File

@@ -329,15 +329,13 @@ class Flux2DenoiseInvocation(BaseInvocation):
noise_packed = pack_flux2(noise)
x = pack_flux2(x)
# Apply BN normalization BEFORE denoising (as per diffusers Flux2KleinPipeline)
# BN normalization: y = (x - mean) / std
# This transforms latents to normalized space for the transformer
# IMPORTANT: Also normalize init_latents and noise for inpainting to maintain consistency
if bn_mean is not None and bn_std is not None:
x = self._bn_normalize(x, bn_mean, bn_std)
if init_latents_packed is not None:
init_latents_packed = self._bn_normalize(init_latents_packed, bn_mean, bn_std)
noise_packed = self._bn_normalize(noise_packed, bn_mean, bn_std)
# BN normalization for txt2img:
# - DO NOT normalize random noise (it's already N(0,1) distributed)
# - Diffusers only normalizes image latents from VAE (for img2img/kontext)
# - Output MUST be denormalized after denoising before VAE decode
#
# For img2img with init_latents, we should normalize init_latents on unpacked
# shape (B, 128, H/16, W/16) - this is handled by _bn_normalize_unpacked below
# Verify packed dimensions
assert packed_h * packed_w == x.shape[1]
@@ -366,16 +364,24 @@ class Flux2DenoiseInvocation(BaseInvocation):
if self.scheduler in FLUX_SCHEDULER_MAP and not is_inpainting:
# Only use scheduler for txt2img - use manual Euler for inpainting to preserve exact timesteps
scheduler_class = FLUX_SCHEDULER_MAP[self.scheduler]
scheduler = scheduler_class(
num_train_timesteps=1000,
shift=3.0,
use_dynamic_shifting=True,
base_shift=0.5,
max_shift=1.15,
base_image_seq_len=256,
max_image_seq_len=4096,
time_shift_type="exponential",
)
# FlowMatchHeunDiscreteScheduler only supports num_train_timesteps and shift parameters
# FlowMatchEulerDiscreteScheduler and FlowMatchLCMScheduler support dynamic shifting
if self.scheduler == "heun":
scheduler = scheduler_class(
num_train_timesteps=1000,
shift=3.0,
)
else:
scheduler = scheduler_class(
num_train_timesteps=1000,
shift=3.0,
use_dynamic_shifting=True,
base_shift=0.5,
max_shift=1.15,
base_image_seq_len=256,
max_image_seq_len=4096,
time_shift_type="exponential",
)
# Prepare reference image extension for FLUX.2 Klein built-in editing
ref_image_extension = None

View File

@@ -57,20 +57,6 @@ class Flux2VaeDecodeInvocation(BaseInvocation, WithMetadata, WithBoard):
# Decode using diffusers API
decoded = vae.decode(latents, return_dict=False)[0]
# Debug: Log decoded output statistics
print(
f"[FLUX.2 VAE] Decoded output: shape={decoded.shape}, "
f"min={decoded.min().item():.4f}, max={decoded.max().item():.4f}, "
f"mean={decoded.mean().item():.4f}"
)
# Check per-channel statistics to diagnose color issues
for c in range(min(3, decoded.shape[1])):
ch = decoded[0, c]
print(
f"[FLUX.2 VAE] Channel {c}: min={ch.min().item():.4f}, "
f"max={ch.max().item():.4f}, mean={ch.mean().item():.4f}"
)
# Convert from [-1, 1] to [0, 1] then to [0, 255] PIL image
img = (decoded / 2 + 0.5).clamp(0, 1)
img = rearrange(img[0], "c h w -> h w c")

View File

@@ -71,7 +71,7 @@ from invokeai.backend.util.devices import TorchDevice
title="FLUX Denoise",
tags=["image", "flux"],
category="image",
version="4.5.0",
version="4.5.1",
)
class FluxDenoiseInvocation(BaseInvocation):
"""Run denoising process with a FLUX transformer model."""
@@ -176,7 +176,10 @@ class FluxDenoiseInvocation(BaseInvocation):
# DyPE (Dynamic Position Extrapolation) for high-resolution generation
dype_preset: DyPEPreset = InputField(
default=DYPE_PRESET_OFF,
description="DyPE preset for high-resolution generation. 'auto' enables automatically for resolutions > 1536px. '4k' uses optimized settings for 4K output.",
description=(
"DyPE preset for high-resolution generation. 'auto' enables automatically for resolutions > 1536px. "
"'area' enables automatically based on image area. '4k' uses optimized settings for 4K output."
),
ui_order=100,
ui_choice_labels=DYPE_PRESET_LABELS,
)

View File

@@ -10,11 +10,13 @@ from invokeai.backend.flux.dype.base import DyPEConfig
from invokeai.backend.flux.dype.embed import DyPEEmbedND
from invokeai.backend.flux.dype.presets import (
DYPE_PRESET_4K,
DYPE_PRESET_AREA,
DYPE_PRESET_AUTO,
DYPE_PRESET_LABELS,
DYPE_PRESET_MANUAL,
DYPE_PRESET_OFF,
DyPEPreset,
get_dype_config_for_area,
get_dype_config_for_resolution,
)
@@ -25,7 +27,9 @@ __all__ = [
"DYPE_PRESET_OFF",
"DYPE_PRESET_MANUAL",
"DYPE_PRESET_AUTO",
"DYPE_PRESET_AREA",
"DYPE_PRESET_4K",
"DYPE_PRESET_LABELS",
"get_dype_config_for_area",
"get_dype_config_for_resolution",
]

View File

@@ -1,17 +1,19 @@
"""DyPE presets and automatic configuration."""
import math
from dataclasses import dataclass
from typing import Literal
from invokeai.backend.flux.dype.base import DyPEConfig
# DyPE preset type - using Literal for proper frontend dropdown support
DyPEPreset = Literal["off", "manual", "auto", "4k"]
DyPEPreset = Literal["off", "manual", "auto", "area", "4k"]
# Constants for preset values
DYPE_PRESET_OFF: DyPEPreset = "off"
DYPE_PRESET_MANUAL: DyPEPreset = "manual"
DYPE_PRESET_AUTO: DyPEPreset = "auto"
DYPE_PRESET_AREA: DyPEPreset = "area"
DYPE_PRESET_4K: DyPEPreset = "4k"
# Human-readable labels for the UI
@@ -19,6 +21,7 @@ DYPE_PRESET_LABELS: dict[str, str] = {
"off": "Off",
"manual": "Manual",
"auto": "Auto (>1536px)",
"area": "Area (auto)",
"4k": "4K Optimized",
}
@@ -88,6 +91,50 @@ def get_dype_config_for_resolution(
)
def get_dype_config_for_area(
    width: int,
    height: int,
    base_resolution: int = 1024,
) -> DyPEConfig | None:
    """Automatically determine DyPE config based on target area.

    Uses sqrt(area/base_area) as an effective side-length ratio.
    DyPE is enabled only when target area exceeds base area.

    Returns:
        DyPEConfig if DyPE should be enabled, None otherwise
    """
    area = width * height
    base_area = base_resolution**2
    if area <= base_area:
        return None

    area_ratio = area / base_area
    effective_side_ratio = math.sqrt(area_ratio)  # 1.0 at base, 2.0 at 2K (if base is 1K)

    # Strength: 0 at base area, 8 at sat_area, clamped thereafter.
    sat_area = 2027520  # Determined by experimentation where a vertical line appears
    sat_side_ratio = math.sqrt(sat_area / base_area)
    dynamic_dype_scale = 8.0 * (effective_side_ratio - 1.0) / (sat_side_ratio - 1.0)
    dynamic_dype_scale = max(0.0, min(dynamic_dype_scale, 8.0))

    # Continuous exponent schedule:
    # r=1 -> 0.5, r=2 -> 1.0, r=4 -> 2.0 (exact), smoothly varying in between.
    x = math.log2(effective_side_ratio)
    dype_exponent = 0.25 * (x**2) + 0.25 * x + 0.5
    dype_exponent = max(0.5, min(dype_exponent, 2.0))

    return DyPEConfig(
        enable_dype=True,
        base_resolution=base_resolution,
        method="vision_yarn",
        dype_scale=dynamic_dype_scale,
        dype_exponent=dype_exponent,
        dype_start_sigma=1.0,
    )
def get_dype_config_from_preset(
preset: DyPEPreset,
width: int,
@@ -133,6 +180,14 @@ def get_dype_config_from_preset(
activation_threshold=1536,
)
if preset == DYPE_PRESET_AREA:
# Area-based preset - custom values are ignored
return get_dype_config_for_area(
width=width,
height=height,
base_resolution=1024,
)
# Use preset configuration (4K etc.) - custom values are ignored
preset_config = DYPE_PRESETS.get(preset)
if preset_config is None:

View File

@@ -4,6 +4,7 @@ This module provides the denoising function for FLUX.2 Klein models,
which use Qwen3 as the text encoder instead of CLIP+T5.
"""
import inspect
import math
from typing import Any, Callable
@@ -87,11 +88,18 @@ def denoise(
# The scheduler will apply dynamic shifting internally using mu (if enabled in scheduler config)
sigmas = np.array(timesteps[:-1], dtype=np.float32) # Exclude final 0.0
# Pass mu if provided - it will only be used if scheduler has use_dynamic_shifting=True
if mu is not None:
# Check if scheduler supports sigmas parameter using inspect.signature
# FlowMatchHeunDiscreteScheduler and FlowMatchLCMScheduler don't support sigmas
set_timesteps_sig = inspect.signature(scheduler.set_timesteps)
supports_sigmas = "sigmas" in set_timesteps_sig.parameters
if supports_sigmas and mu is not None:
# Pass mu if provided - it will only be used if scheduler has use_dynamic_shifting=True
scheduler.set_timesteps(sigmas=sigmas.tolist(), mu=mu, device=img.device)
else:
elif supports_sigmas:
scheduler.set_timesteps(sigmas=sigmas.tolist(), device=img.device)
else:
# Scheduler doesn't support sigmas (e.g., Heun, LCM) - use num_inference_steps
scheduler.set_timesteps(num_inference_steps=len(sigmas), device=img.device)
num_scheduler_steps = len(scheduler.timesteps)
is_heun = hasattr(scheduler, "state_in_first_order")
user_step = 0

View File

@@ -108,33 +108,27 @@ def unpack_flux2(x: torch.Tensor, height: int, width: int) -> torch.Tensor:
def compute_empirical_mu(image_seq_len: int, num_steps: int) -> float:
"""Compute empirical mu for FLUX.2 schedule shifting.
"""Compute mu for FLUX.2 schedule shifting.
This matches the diffusers Flux2Pipeline implementation.
The mu value controls how much the schedule is shifted towards higher timesteps.
Uses a fixed mu value of 2.02, matching ComfyUI's proven FLUX.2 configuration.
The previous implementation (from diffusers' FLUX.1 pipeline) computed mu as a
linear function of image_seq_len, which produced excessively high values at
high resolutions (e.g., mu=3.23 at 2048x2048). This over-shifted the sigma
schedule, compressing almost all values above 0.9 and forcing the model to
denoise everything in the final 1-2 steps, causing severe grid/diamond artifacts.
ComfyUI uses a fixed shift=2.02 for FLUX.2 Klein at all resolutions and produces
artifact-free images even at 2048x2048.
Args:
image_seq_len: Number of image tokens (packed_h * packed_w).
num_steps: Number of denoising steps.
image_seq_len: Number of image tokens (packed_h * packed_w). Currently unused.
num_steps: Number of denoising steps. Currently unused.
Returns:
The empirical mu value.
The mu value (fixed at 2.02).
"""
a1, b1 = 8.73809524e-05, 1.89833333
a2, b2 = 0.00016927, 0.45666666
if image_seq_len > 4300:
mu = a2 * image_seq_len + b2
return float(mu)
m_200 = a2 * image_seq_len + b2
m_10 = a1 * image_seq_len + b1
a = (m_200 - m_10) / 190.0
b = m_200 - 200.0 * a
mu = a * num_steps + b
return float(mu)
return 2.02
def get_schedule_flux2(
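The mu=3.23 figure cited in this commit can be re-derived from the removed formula's high-resolution coefficients, assuming the usual 16x token downsampling (8x VAE plus 2x2 packing):

```python
a2, b2 = 0.00016927, 0.45666666   # old high-res coefficients (seq_len > 4300 branch)
seq_len = (2048 // 16) ** 2       # 16384 image tokens at 2048x2048
old_mu = a2 * seq_len + b2        # ~3.23: over-shifts the sigma schedule
new_mu = 2.02                     # fixed value matching ComfyUI
```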
@@ -169,11 +163,14 @@ def get_schedule_flux2(
def generate_img_ids_flux2(h: int, w: int, batch_size: int, device: torch.device) -> torch.Tensor:
"""Generate tensor of image position ids for FLUX.2.
"""Generate tensor of image position ids for FLUX.2 with RoPE scaling.
FLUX.2 uses 4D position coordinates (T, H, W, L) for its rotary position embeddings.
This is different from FLUX.1 which uses 3D coordinates.
RoPE Scaling: For resolutions >1536x1536, position IDs are scaled down using
Position Interpolation to prevent RoPE degradation and diamond/grid artifacts.
IMPORTANT: Position IDs must use int64 (long) dtype like diffusers, not bfloat16.
Using floating point dtype for position IDs can cause NaN in rotary embeddings.
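The dtype warning above can be demonstrated without torch: bfloat16 keeps only 8 significand bits, so grid coordinates above 256 start to collide. A crude truncation-based emulation (illustrative; real bfloat16 conversion rounds rather than truncates):

```python
import struct

def to_bfloat16(x: float) -> float:
    # Emulate bfloat16 by keeping only the top 16 bits of the float32
    # bit pattern (1 sign + 8 exponent + 7 significand bits)
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]
```

Here to_bfloat16(257.0) comes back as 256.0, so two distinct position IDs would map to the same rotary angle; int64 IDs represent every grid coordinate exactly and avoid this.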

View File

@@ -16799,6 +16799,12 @@
"title": "DownloadStartedEvent",
"type": "object"
},
"DyPEPreset": {
"description": "Predefined DyPE configurations.",
"enum": ["off", "auto", "area", "4k"],
"title": "DyPEPreset",
"type": "string"
},
"DynamicPromptInvocation": {
"category": "prompt",
"class": "invocation",

View File

@@ -5,7 +5,7 @@
"reportBugLabel": "Сообщить об ошибке",
"settingsLabel": "Настройки",
"img2img": "Изображение в изображение (img2img)",
"nodes": "Рабочие процессы",
"nodes": "Схемы",
"upload": "Загрузить",
"load": "Загрузить",
"statusDisconnected": "Отключен",
@@ -25,7 +25,7 @@
"communityLabel": "Сообщество",
"batch": "Пакетный менеджер",
"modelManager": "Менеджер моделей",
"controlNet": "Controlnet",
"controlNet": "ControlNet",
"advanced": "Расширенные",
"t2iAdapter": "T2I Adapter",
"checkpoint": "Checkpoint",
@@ -34,7 +34,7 @@
"folder": "Папка",
"inpaint": "Перерисовать",
"updated": "Обновлен",
"on": "На",
"on": "Вкл",
"save": "Сохранить",
"created": "Создано",
"error": "Ошибка",
@@ -60,24 +60,24 @@
"input": "Вход",
"details": "Детали",
"or": "или",
"aboutHeading": "Владей своей творческой силой",
"aboutHeading": "Управляй своей творческой силой",
"red": "Красный",
"green": "Зеленый",
"blue": "Синий",
"alpha": "Альфа",
"toResolve": "Чтоб решить",
"copy": "Копировать",
"aboutDesc": "Используя Invoke для работы? Проверьте это:",
"aboutDesc": "Используете Invoke в работе? Ознакомьтесь:",
"add": "Добавить",
"beta": "Бета",
"selected": "Выбрано",
"positivePrompt": "Позитивный запрос",
"negativePrompt": "Негативный запрос",
"positivePrompt": "Позитивный промпт",
"negativePrompt": "Негативный промпт",
"editor": "Редактор",
"tab": "Вкладка",
"enabled": "Включено",
"disabled": "Отключено",
"dontShowMeThese": "Не показывай мне это",
"dontShowMeThese": "Больше не показывать",
"apply": "Применить",
"loadingImage": "Загрузка изображения",
"off": "Выкл",
@@ -761,8 +761,8 @@
"createIssue": "Сообщить о проблеме",
"about": "О программе",
"submitSupportTicket": "Отправить тикет в службу поддержки",
"toggleRightPanel": "Переключить правую панель (G)",
"toggleLeftPanel": "Переключить левую панель (T)",
"toggleRightPanel": "Показать / скрыть правую панель (G)",
"toggleLeftPanel": "Показать / скрыть левую панель (T)",
"uploadImages": "Загрузить изображения"
},
"nodes": {
@@ -896,46 +896,46 @@
},
"boards": {
"autoAddBoard": "Коллекция для автодобавления",
"topMessage": "Эта доска содержит изображения, используемые в следующих функциях:",
"topMessage": "Этот выбор содержит изображения, используемые в следующих функциях:",
"move": "Перемещение",
"menuItemAutoAdd": "Авто добавление на эту доску",
"myBoard": "Моя Доска",
"searchBoard": "Поиск Доски...",
"noMatching": "Нет подходящих Досок",
"selectBoard": "Выбрать Доску",
"menuItemAutoAdd": "Авто добавление в эту коллекцию",
"myBoard": "Моя коллекция",
"searchBoard": "Поиск коллекции...",
"noMatching": "Нет подходящих коллекций",
"selectBoard": "Выбрать коллекцию",
"cancel": "Отменить",
"addBoard": "Добавить коллекцию",
"bottomMessage": "Удаление этой доски и ее изображений приведет к сбросу всех функций, использующихся их в данный момент.",
"bottomMessage": "Удаление изображений приведёт к сбросу всех функций, которые их используют.",
"uncategorized": "Без категории",
"changeBoard": "Сменить коллекцию",
"loading": "Загрузка...",
"clearSearch": "Очистить поиск",
"deleteBoardOnly": "Удалить только коллекцию",
"movingImagesToBoard_one": "Перемещение {{count}} изображения на доску:",
"movingImagesToBoard_few": "Перемещение {{count}} изображений на доску:",
"movingImagesToBoard_many": "Перемещение {{count}} изображений на доску:",
"downloadBoard": "Скачать доску",
"movingImagesToBoard_one": "Перемещение {{count}} изображения в коллекцию:",
"movingImagesToBoard_few": "Перемещение {{count}} изображений в коллекцию:",
"movingImagesToBoard_many": "Перемещение {{count}} изображений в коллекцию:",
"downloadBoard": "Скачать коллекцию",
"deleteBoard": "Удалить коллекцию",
"deleteBoardAndImages": "Удалить коллекцию и изображения",
"deletedBoardsCannotbeRestored": "Удаленные доски не могут быть восстановлены. Выбор «Удалить только доску» переведет изображения в состояние без категории.",
"assetsWithCount_one": "{{count}} актив",
"assetsWithCount_few": "{{count}} актива",
"assetsWithCount_many": "{{count}} активов",
"deletedBoardsCannotbeRestored": "Удалённые коллекции и изображения нельзя восстановить. При выборе «Удалить только коллекцию» изображения будут перемещены в раздел «Без категории».",
"assetsWithCount_one": "{{count}} ресурс",
"assetsWithCount_few": "{{count}} ресурса",
"assetsWithCount_many": "{{count}} ресурсов",
"imagesWithCount_one": "{{count}} изображение",
"imagesWithCount_few": "{{count}} изображения",
"imagesWithCount_many": "{{count}} изображений",
"archiveBoard": "Архивировать коллекцию",
"archived": "Заархивировано",
"unarchiveBoard": "Разархивировать доску",
"unarchiveBoard": "Разархивировать коллекцию",
"selectedForAutoAdd": "Выбрано для автодобавления",
"addSharedBoard": "Добавить общую коллекцию",
"boards": "Коллекции",
"addPrivateBoard": "Добавить личную коллекцию",
"private": "Личные доски",
"shared": "Общие доски",
"noBoards": "Нет досок {{boardType}}",
"deletedPrivateBoardsCannotbeRestored": "Удаленные доски не могут быть восстановлены. Выбор «Удалить только доску» переведет изображения в приватное состояние без категории для создателя изображения.",
"updateBoardError": "Ошибка обновления доски"
"private": "Личные коллекции",
"shared": "Общие коллекции",
"noBoards": "Нет коллекций {{boardType}}",
"deletedPrivateBoardsCannotbeRestored": "Удалённые коллекции и изображения нельзя восстановить. При выборе «Удалить только коллекцию» изображения будут перемещены в личный раздел «Без категории» автора изображения.",
"updateBoardError": "Ошибка обновления коллекции"
},
"dynamicPrompts": {
"seedBehaviour": {

View File

@@ -29,6 +29,7 @@ import type {
ParameterCLIPGEmbedModel,
ParameterCLIPLEmbedModel,
ParameterControlLoRAModel,
ParameterFluxDypePreset,
ParameterGuidance,
ParameterModel,
ParameterNegativePrompt,
@@ -72,7 +73,7 @@ const slice = createSlice({
setFluxScheduler: (state, action: PayloadAction<'euler' | 'heun' | 'lcm'>) => {
state.fluxScheduler = action.payload;
},
setFluxDypePreset: (state, action: PayloadAction<'off' | 'manual' | 'auto' | '4k'>) => {
setFluxDypePreset: (state, action: PayloadAction<ParameterFluxDypePreset>) => {
state.fluxDypePreset = action.payload;
},
setFluxDypeScale: (state, action: PayloadAction<number>) => {

View File

@@ -623,9 +623,14 @@ const ZImageSeedVarianceEnabled: SingleMetadataHandler<boolean> = {
[SingleMetadataKey]: true,
type: 'ZImageSeedVarianceEnabled',
parse: (metadata, _store) => {
const raw = getProperty(metadata, 'z_image_seed_variance_enabled');
const parsed = z.boolean().parse(raw);
return Promise.resolve(parsed);
try {
const raw = getProperty(metadata, 'z_image_seed_variance_enabled');
const parsed = z.boolean().parse(raw);
return Promise.resolve(parsed);
} catch {
// Default to false when metadata doesn't contain this field (e.g. older images)
return Promise.resolve(false);
}
},
recall: (value, store) => {
store.dispatch(setZImageSeedVarianceEnabled(value));

View File

@@ -72,7 +72,7 @@ export const zFluxSchedulerField = z.enum(['euler', 'heun', 'lcm']);
export const zZImageSchedulerField = z.enum(['euler', 'heun', 'lcm']);
// Flux DyPE (Dynamic Position Extrapolation) preset options for high-resolution generation
export const zFluxDypePresetField = z.enum(['off', 'manual', 'auto', '4k']);
export const zFluxDypePresetField = z.enum(['off', 'manual', 'auto', 'area', '4k']);
// Flux DyPE scale (magnitude λs) - 0.0-8.0, default 2.0
export const zFluxDypeScaleField = z.number().min(0).max(8);

View File

@@ -6,6 +6,7 @@ import { selectCanvasSettingsSlice } from 'features/controlLayers/store/canvasSe
import { selectParamsSlice } from 'features/controlLayers/store/paramsSlice';
import type { Graph } from 'features/nodes/util/graph/generation/Graph';
import {
getDenoisingStartAndEnd,
getInfill,
getOriginalAndScaledSizesForOtherModes,
isMainModelWithoutUnet,
@@ -45,12 +46,9 @@ export const addOutpaint = async ({
modelLoader,
seed,
}: AddOutpaintArg): Promise<Invocation<'invokeai_img_blend' | 'apply_mask_to_image'>> => {
// For outpainting, always use full denoising (from pure noise) because:
// - New areas should be fully generated
// - Existing areas are preserved by the inpaint mask
// The strength setting doesn't make sense for outpainting.
denoise.denoising_start = 0;
denoise.denoising_end = 1;
const { denoising_start, denoising_end } = getDenoisingStartAndEnd(state);
denoise.denoising_start = denoising_start;
denoise.denoising_end = denoising_end;
const params = selectParamsSlice(state);
const canvasSettings = selectCanvasSettingsSlice(state);

View File

@@ -12,6 +12,7 @@ const FLUX_DYPE_PRESET_OPTIONS: ComboboxOption[] = [
{ value: 'off', label: 'Off' },
{ value: 'manual', label: 'Manual' },
{ value: 'auto', label: 'Auto (> 1536px)' },
{ value: 'area', label: 'Area (auto)' },
{ value: '4k', label: '4K Optimized' },
];

View File

@@ -6,14 +6,14 @@ import { fluxVAESelected, selectFLUXVAE } from 'features/controlLayers/store/par
import { zModelIdentifierField } from 'features/nodes/types/common';
import { memo, useCallback } from 'react';
import { useTranslation } from 'react-i18next';
import { useFluxVAEModels } from 'services/api/hooks/modelsByType';
import { useFlux1VAEModels } from 'services/api/hooks/modelsByType';
import type { VAEModelConfig } from 'services/api/types';
const ParamFLUXVAEModelSelect = () => {
const dispatch = useAppDispatch();
const { t } = useTranslation();
const vae = useAppSelector(selectFLUXVAE);
const [modelConfigs, { isLoading }] = useFluxVAEModels();
const [modelConfigs, { isLoading }] = useFlux1VAEModels();
const _onChange = useCallback(
(vae: VAEModelConfig | null) => {

View File

@@ -84,7 +84,6 @@ export const GenerationSettingsAccordion = memo(() => {
<FormControlGroup formLabelProps={formLabelProps}>
{!isFLUX && !isFlux2 && !isSD3 && !isCogView4 && !isZImage && <ParamScheduler />}
{isFLUX && <ParamFluxScheduler />}
{isFLUX && <ParamFluxDypePreset />}
{isZImage && <ParamZImageScheduler />}
<ParamSteps />
{(isFLUX || isFlux2) && modelConfig && !isFluxFillMainModelModelConfig(modelConfig) && <ParamGuidance />}

View File

@@ -56,7 +56,6 @@ export const useCLIPEmbedModels = () => buildModelsHook(isCLIPEmbedModelConfigOr
export const useSpandrelImageToImageModels = buildModelsHook(isSpandrelImageToImageModelConfig);
export const useEmbeddingModels = buildModelsHook(isTIModelConfig);
export const useVAEModels = () => buildModelsHook(isVAEModelConfigOrSubmodel)();
export const useFluxVAEModels = () => buildModelsHook(isFluxVAEModelConfig)();
export const useFlux1VAEModels = () => buildModelsHook(isFlux1VAEModelConfig)();
export const useFlux2VAEModels = () => buildModelsHook(isFlux2VAEModelConfig)();
export const useZImageDiffusersModels = () => buildModelsHook(isZImageDiffusersMainModelConfig)();

View File

@@ -8743,11 +8743,11 @@ export type components = {
kontext_conditioning?: components["schemas"]["FluxKontextConditioningField"] | components["schemas"]["FluxKontextConditioningField"][] | null;
/**
* Dype Preset
* @description DyPE preset for high-resolution generation. 'auto' enables automatically for resolutions > 1536px. '4k' uses optimized settings for 4K output.
* @description DyPE preset for high-resolution generation. 'auto' enables automatically for resolutions > 1536px. 'area' enables automatically based on image area. '4k' uses optimized settings for 4K output.
* @default off
* @enum {string}
*/
dype_preset?: "off" | "manual" | "auto" | "4k";
dype_preset?: "off" | "manual" | "auto" | "area" | "4k";
/**
* Dype Scale
* @description DyPE magnitude (λs). Higher values = stronger extrapolation. Only used when dype_preset is not 'off'.
@@ -8937,11 +8937,11 @@ export type components = {
kontext_conditioning?: components["schemas"]["FluxKontextConditioningField"] | components["schemas"]["FluxKontextConditioningField"][] | null;
/**
* Dype Preset
* @description DyPE preset for high-resolution generation. 'auto' enables automatically for resolutions > 1536px. '4k' uses optimized settings for 4K output.
* @description DyPE preset for high-resolution generation. 'auto' enables automatically for resolutions > 1536px. 'area' enables automatically based on image area. '4k' uses optimized settings for 4K output.
* @default off
* @enum {string}
*/
dype_preset?: "off" | "manual" | "auto" | "4k";
dype_preset?: "off" | "manual" | "auto" | "area" | "4k";
/**
* Dype Scale
* @description DyPE magnitude (λs). Higher values = stronger extrapolation. Only used when dype_preset is not 'off'.

View File

@@ -1 +1 @@
__version__ = "6.11.0.rc1"
__version__ = "6.11.1"

View File

@@ -13,10 +13,12 @@ from invokeai.backend.flux.dype.base import (
from invokeai.backend.flux.dype.embed import DyPEEmbedND
from invokeai.backend.flux.dype.presets import (
DYPE_PRESET_4K,
DYPE_PRESET_AREA,
DYPE_PRESET_AUTO,
DYPE_PRESET_MANUAL,
DYPE_PRESET_OFF,
DYPE_PRESETS,
get_dype_config_for_area,
get_dype_config_for_resolution,
get_dype_config_from_preset,
)
@@ -264,6 +266,35 @@ class TestDyPEPresets:
assert config_4k is not None
assert config_4k.dype_scale > config_2k.dype_scale
def test_get_dype_config_for_area_below_threshold(self):
"""When area is below threshold area, should return None."""
config = get_dype_config_for_area(
width=1024,
height=1024,
)
assert config is None
def test_get_dype_config_for_area_above_threshold(self):
"""When area is above threshold area, should return config."""
config = get_dype_config_for_area(
width=2048,
height=1536,
base_resolution=1024,
)
assert config is not None
assert config.enable_dype is True
assert config.method == "vision_yarn"
def test_get_dype_config_from_preset_area(self):
"""Preset AREA should use area-based config."""
config = get_dype_config_from_preset(
preset=DYPE_PRESET_AREA,
width=2048,
height=1536,
)
assert config is not None
assert config.enable_dype is True
def test_get_dype_config_from_preset_off(self):
"""Preset OFF should return None."""
config = get_dype_config_from_preset(