Mirror of https://github.com/invoke-ai/InvokeAI.git
synced 2026-04-23 03:00:31 -04:00
* feat: initial external model support
* feat: support reference images for external models
* fix: sorting lint error
* chore: hide Reidentify button for external models
* review: enable auto-install/remove for external models
* feat: show external model name during install
* review: model descriptions
* review: implemented review comments
* review: added optional seed control for external models
* chore: fix linter warning
* review: save api keys to a separate file
* docs: updated external model docs
* chore: fix linter errors
* fix: sync configured external starter models on startup
* feat(ui): add provider-specific external generation nodes
* feat: expose external panel schemas in model configs
* feat(ui): drive external panels from panel schema
* docs: sync app config docstring order
* feat: add gemini 3.1 flash image preview starter model
* feat: update gemini image model limits
* fix: resolve TypeScript errors and move external provider config to api_keys.yaml
  Add 'external', 'external_image_generator', and 'external_api' to Zod enum schemas (zBaseModelType, zModelType, zModelFormat) to match the generated OpenAPI types. Remove redundant union workarounds from component prop types and Record definitions. Fix type errors in ModelEdit (react-hook-form Control invariance), parsing.tsx (model identifier narrowing), buildExternalGraph (edge typing), and ModelSettings import/export buttons. Move external_gemini_base_url and external_openai_base_url into api_keys.yaml alongside the API keys so all external provider config lives in one dedicated file, separate from invokeai.yaml.
* feat: add resolution presets and imageConfig support for Gemini 3 models
  Add a combined resolution preset selector for external models that maps aspect ratio + image size to fixed dimensions. Gemini 3 Pro and 3.1 Flash now send imageConfig (aspectRatio + imageSize) via generationConfig instead of the text-based aspect ratio hints used by Gemini 2.5 Flash.
  Backend: ExternalResolutionPreset model, resolution_presets capability field, image_size on ExternalGenerationRequest, and Gemini provider imageConfig logic. Frontend: ExternalSettingsAccordion with combo resolution select, dimension slider disabling for fixed-size models, and panel schema constraint wiring for Steps/Guidance/Seed controls.
* Remove unused external model fields and add provider-specific parameters
  - Remove negative_prompt, steps, guidance, reference_image_weights, reference_image_modes from external model nodes (unused by any provider)
  - Remove supports_negative_prompt, supports_steps, supports_guidance from ExternalModelCapabilities
  - Add provider_options dict to ExternalGenerationRequest for provider-specific parameters
  - Add OpenAI-specific fields: quality, background, input_fidelity
  - Add Gemini-specific fields: temperature, thinking_level
  - Add new OpenAI starter models: GPT Image 1.5, GPT Image 1 Mini, DALL-E 3, DALL-E 2
  - Fix OpenAI provider to use output_format (GPT Image) vs response_format (DALL-E) and send model ID in requests
  - Add fixed aspect ratio sizes for OpenAI models (bucketing)
  - Add ExternalProviderRateLimitError with retry logic for 429 responses
  - Add provider-specific UI components in ExternalSettingsAccordion
  - Simplify ParamSteps/ParamGuidance by removing dead external overrides
  - Update all backend and frontend tests
* Chore Ruff check & format
* Chore typegen
* feat: full canvas workflow integration for external models
  - Add missing aspect ratios (4:5, 5:4, 8:1, 4:1, 1:4, 1:8) to the type system for external model support
  - Sync canvas bbox when an external model resolution preset is selected
  - Use params preset dimensions in buildExternalGraph to prevent "unsupported aspect ratio" errors
  - Lock all bbox controls (resize handles, aspect ratio select, width/height sliders, swap/optimal buttons) for external models with fixed dimension presets
  - Disable denoise strength slider for external models (not applicable)
  - Sync bbox aspect ratio changes back to paramsSlice for external models
  - Initialize bbox dimensions when switching to an external model
* Chore typegen Linux separator
* feat: full canvas workflow integration for external models
  - Update buildExternalGraph test to include dimensions in mock params
* Merge remote-tracking branch 'upstream/main' into external-models
* Chore pnpm fix
* add missing parameter
* docs: add External Models guide with Gemini and OpenAI provider pages
* fix(external-models): address PR review feedback
  - Gemini recall: write temperature, thinking_level, image_size to image metadata; wire the external graph as a metadata receiver; add recall handlers.
  - Canvas: gate regional guidance, inpaint mask, and control layer for external models.
  - Canvas: throw a clear error on outpainting for external models (was falling back to inpaint and hitting an API-side mask/image size mismatch).
  - Workflow editor: add a ui_model_provider_id filter so OpenAI and Gemini nodes only list their own provider's models.
  - Workflow editor: silently drop seed when the selected model does not support it instead of raising a capability error.
  - Remove the legacy external_image_generation invocation and the graph-builder fallback; providers must register a dedicated node.
  - Regenerate schema.ts.
  - Remove Gemini debug dumps to outputs/external_debug
* fix(external-models): resolve TSC errors in metadata parsing and external graph
  - Export imageSizeChanged from paramsSlice (required by the new ImageSize recall handler).
  - Emit the external graph's metadata model entry via zModelIdentifierField since ExternalApiModelConfig is not part of the AnyModelConfig union.
* chore: prettier format ModelIdentifierFieldInputComponent
* fix: remove unsupported thinkingConfig from Gemini image models and restrict GPT Image models to txt2img
* chore typegen
* chore(docs): regenerate settings.json for external provider fields
* fix(external): fix mask handling and mode support for external providers
  - Remove img2img and inpaint modes from Gemini models (Gemini has no bitmap mask or dedicated edit API; image editing works via reference images in the UI)
  - Fix DALL-E 2 inpainting: convert the grayscale mask to RGBA with alpha channel transparency (OpenAI expects transparent = edit area) and convert the init image to RGBA when a mask is present
* fix(external): update mode support and UI for external providers
  - Remove DALL-E 2 from starter models (deprecated, shutdown May 12 2026)
  - Enable img2img for GPT Image 1/1.5/1-mini (supports the edits endpoint)
  - Set Gemini models to txt2img only (no mask/edit API; editing via ref images)
  - Hide mode/init_image/mask_image fields on the Gemini node (not usable)
  - Hide the mask_image field on the OpenAI node (no model supports inpaint)
* Chore typegen
* fix(external): improve OpenAI node UX and disable cache by default
  - Hide the OpenAI node's mode and init_image fields: OpenAI's API has no img2img/inpaint distinction (the edits endpoint is invoked automatically when reference images are provided). init_image is functionally identical to a reference image and was misleading users.
  - Default use_cache to False for external image generation nodes: external API calls are non-deterministic and incur usage costs. Cache hits returned stale image references that did not produce new gallery entries on repeat invokes.
* fix(external): duplicate cached images on cache hit instead of skipping
  External image generation nodes use the standard invocation cache, but returning the cached output (with stale image_name references) on cache hits resulted in no new gallery entries: the Invoke button would spin indefinitely on repeat invokes with identical parameters. Override invoke_internal so that on a cache hit, the cached images are loaded and re-saved as new gallery entries. The expensive API call is still skipped (cost saving), but the user sees a new image as expected.
* Chore typegen + ruff
* Chore ruff format
* fix(external): restore OpenAI advanced settings on Remix recall
  Remix recall iterates through ImageMetadataHandlers, but only Gemini's temperature handler was wired up: OpenAI's quality, background, and input_fidelity were stored in image metadata but never parsed back into the params slice. Add the three missing handlers so Remix restores these settings as expected.
---------
Co-authored-by: Alexander Eichhorn <alex@eichhorn.dev>
Co-authored-by: Alexander Eichhorn <alex@code-with.us>
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
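The resolution-preset feature described in the changelog (mapping an aspect ratio plus an image-size tier to fixed pixel dimensions, so the provider never receives an unsupported size) can be sketched as a simple lookup table. The preset names and dimension values below are illustrative assumptions, not InvokeAI's actual tables:

```python
# Hypothetical sketch of the aspect-ratio + image-size -> fixed-dimensions
# mapping described above. Concrete values are illustrative assumptions.
RESOLUTION_PRESETS: dict[tuple[str, str], tuple[int, int]] = {
    ("1:1", "1K"): (1024, 1024),
    ("1:1", "2K"): (2048, 2048),
    ("16:9", "1K"): (1344, 768),
    ("9:16", "1K"): (768, 1344),
}


def resolve_preset(aspect_ratio: str, image_size: str) -> tuple[int, int]:
    """Return the fixed (width, height) for a preset combination.

    Raises ValueError for combinations the provider does not support,
    mirroring the "unsupported aspect ratio" errors the UI now prevents.
    """
    try:
        return RESOLUTION_PRESETS[(aspect_ratio, image_size)]
    except KeyError:
        raise ValueError(f"Unsupported aspect ratio/size: {aspect_ratio} @ {image_size}")
```

Keying on the combined (aspect ratio, size) pair, rather than computing dimensions, is what lets the UI lock the bbox controls to values the external API is guaranteed to accept.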
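The retry behaviour added for HTTP 429 responses can be illustrated with a generic exponential-backoff loop. The exception name comes from the changelog above; the function signature, attempt count, and backoff schedule are assumptions for the sketch, not the actual provider code:

```python
import time


class ExternalProviderRateLimitError(Exception):
    """Illustrative stand-in for the provider's 429 (rate limit) error."""


def call_with_retry(request_fn, max_attempts=3, base_delay=1.0, sleep=time.sleep):
    """Retry request_fn with exponential backoff on rate-limit errors (sketch).

    The final attempt re-raises so callers still see the error when the
    provider stays rate-limited.
    """
    for attempt in range(max_attempts):
        try:
            return request_fn()
        except ExternalProviderRateLimitError:
            if attempt == max_attempts - 1:
                raise
            sleep(base_delay * (2**attempt))  # back off: 1s, 2s, 4s, ...
```

Injecting `sleep` keeps the helper testable; a real implementation would also honor the `Retry-After` header when the provider sends one.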
189 lines
7.8 KiB
Python
"""Utilities for processing images with ControlNet processors."""
from datetime import datetime
from typing import Any, Optional

from invokeai.app.invocations.fields import ImageField
from invokeai.app.services.invoker import InvocationServices
from invokeai.app.services.session_queue.session_queue_common import SessionQueueItem
from invokeai.app.services.shared.graph import Graph, GraphExecutionState
from invokeai.app.services.shared.invocation_context import InvocationContextData, build_invocation_context


def _get_processor_invocation_class(processor_type: str):
    """Get the invocation class for a processor type."""
    # Import processor invocation classes on demand
    processor_class_map = {
        "canny_image_processor": lambda: (
            __import__(
                "invokeai.app.invocations.canny", fromlist=["CannyEdgeDetectionInvocation"]
            ).CannyEdgeDetectionInvocation
        ),
        "hed_image_processor": lambda: (
            __import__(
                "invokeai.app.invocations.hed", fromlist=["HEDEdgeDetectionInvocation"]
            ).HEDEdgeDetectionInvocation
        ),
        "mlsd_image_processor": lambda: (
            __import__("invokeai.app.invocations.mlsd", fromlist=["MLSDDetectionInvocation"]).MLSDDetectionInvocation
        ),
        "depth_anything_image_processor": lambda: (
            __import__(
                "invokeai.app.invocations.depth_anything", fromlist=["DepthAnythingDepthEstimationInvocation"]
            ).DepthAnythingDepthEstimationInvocation
        ),
        "normalbae_image_processor": lambda: (
            __import__("invokeai.app.invocations.normal_bae", fromlist=["NormalMapInvocation"]).NormalMapInvocation
        ),
        "pidi_image_processor": lambda: (
            __import__(
                "invokeai.app.invocations.pidi", fromlist=["PiDiNetEdgeDetectionInvocation"]
            ).PiDiNetEdgeDetectionInvocation
        ),
        "lineart_image_processor": lambda: (
            __import__(
                "invokeai.app.invocations.lineart", fromlist=["LineartEdgeDetectionInvocation"]
            ).LineartEdgeDetectionInvocation
        ),
        "lineart_anime_image_processor": lambda: (
            __import__(
                "invokeai.app.invocations.lineart_anime", fromlist=["LineartAnimeEdgeDetectionInvocation"]
            ).LineartAnimeEdgeDetectionInvocation
        ),
        "content_shuffle_image_processor": lambda: (
            __import__(
                "invokeai.app.invocations.content_shuffle", fromlist=["ContentShuffleInvocation"]
            ).ContentShuffleInvocation
        ),
        "dw_openpose_image_processor": lambda: (
            __import__(
                "invokeai.app.invocations.dw_openpose", fromlist=["DWOpenposeDetectionInvocation"]
            ).DWOpenposeDetectionInvocation
        ),
        "mediapipe_face_processor": lambda: (
            __import__(
                "invokeai.app.invocations.mediapipe_face", fromlist=["MediaPipeFaceDetectionInvocation"]
            ).MediaPipeFaceDetectionInvocation
        ),
        # Note: zoe_depth_image_processor doesn't have a processor invocation implementation
        "color_map_image_processor": lambda: (
            __import__("invokeai.app.invocations.color_map", fromlist=["ColorMapInvocation"]).ColorMapInvocation
        ),
    }

    if processor_type in processor_class_map:
        return processor_class_map[processor_type]()
    return None


# Map processor type names to their default parameters
PROCESSOR_DEFAULT_PARAMS = {
    "canny_image_processor": {"low_threshold": 100, "high_threshold": 200},
    "hed_image_processor": {"scribble": False},
    "mlsd_image_processor": {"detect_resolution": 512, "thr_v": 0.1, "thr_d": 0.1},
    "depth_anything_image_processor": {"model_size": "small"},
    "normalbae_image_processor": {"detect_resolution": 512},
    "pidi_image_processor": {"detect_resolution": 512, "safe": False},
    "lineart_image_processor": {"detect_resolution": 512, "coarse": False},
    "lineart_anime_image_processor": {"detect_resolution": 512},
    "content_shuffle_image_processor": {},
    "dw_openpose_image_processor": {"draw_body": True, "draw_face": True, "draw_hands": True},
    "mediapipe_face_processor": {"max_faces": 1, "min_confidence": 0.5},
    "zoe_depth_image_processor": {},
    "color_map_image_processor": {"color_map_tile_size": 64},
}


def process_controlnet_image(image_name: str, model_key: str, services: InvocationServices) -> Optional[dict[str, Any]]:
    """
    Process a controlnet image using the appropriate processor based on the model's default settings.

    Args:
        image_name: The filename of the image to process
        model_key: The model key to look up default processor settings
        services: The invocation services providing access to models and images

    Returns:
        A dictionary with the processed image data (image_name, width, height) or None if processing fails
    """
    logger = services.logger

    try:
        # Get model config to find default processor
        model_record = services.model_manager.store.get_model(model_key)
        if not model_record or not model_record.default_settings:
            logger.info(f"No default processor settings found for model {model_key}")
            return None

        preprocessor = model_record.default_settings.preprocessor
        if not preprocessor:
            logger.info(f"No preprocessor configured for model {model_key}")
            return None

        # Get the invocation class for this processor
        invocation_class = _get_processor_invocation_class(preprocessor)
        if not invocation_class:
            logger.info(f"No processor mapping found for preprocessor '{preprocessor}'")
            return None

        # Get default parameters for this processor
        default_params = PROCESSOR_DEFAULT_PARAMS.get(preprocessor, {})
        logger.info(f"Processing image {image_name} with processor {preprocessor}")

        # Create a minimal context to run the invocation
        # We need a fake queue item and session for the context
        fake_session = GraphExecutionState(graph=Graph())
        now = datetime.now()

        # Create the invocation instance first so we have its ID
        invocation_params = {"image": ImageField(image_name=image_name), **default_params}
        invocation = invocation_class(**invocation_params)

        # Add the invocation ID to the session's prepared_source_mapping
        # This is required for the invocation context to emit progress events
        fake_session.prepared_source_mapping[invocation.id] = invocation.id

        fake_queue_item = SessionQueueItem(
            item_id=0,
            session_id=fake_session.id,
            queue_id="default",
            batch_id="recall_processor",
            field_values=None,
            session=fake_session,
            status="in_progress",
            created_at=now,
            updated_at=now,
            started_at=now,
            completed_at=None,
        )

        context_data = InvocationContextData(
            invocation=invocation,
            source_invocation_id=invocation.id,
            queue_item=fake_queue_item,
        )

        context = build_invocation_context(
            data=context_data,
            services=services,
            is_canceled=lambda: False,
        )

        # Invoke the processor
        output = invocation.invoke(context)

        # Get the processed image DTO
        processed_image_dto = services.images.get_dto(output.image.image_name)

        logger.info(f"Successfully processed image {image_name} -> {processed_image_dto.image_name}")

        return {
            "image_name": processed_image_dto.image_name,
            "width": processed_image_dto.width,
            "height": processed_image_dto.height,
        }

    except Exception as e:
        logger.error(f"Error processing controlnet image {image_name}: {e}", exc_info=True)
        return None