Compare commits

...

23 Commits

Author SHA1 Message Date
psychedelicious
0cfd713b93 fix(ui): typo 2025-03-06 08:52:10 +11:00
psychedelicious
45f5d7617a chore: bump version to v5.7.0 2025-03-06 08:38:59 +11:00
psychedelicious
f49df7d327 chore(ui): update whats new 2025-03-06 08:38:59 +11:00
Linos
87ed0ed48a translationBot(ui): update translation (Vietnamese)
Currently translated at 100.0% (1802 of 1802 strings)

Co-authored-by: Linos <linos.coding@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/vi/
Translation: InvokeAI/Web UI
2025-03-06 08:00:35 +11:00
Riccardo Giovanetti
d445c88e4c translationBot(ui): update translation (Italian)
Currently translated at 98.8% (1782 of 1802 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.8% (1782 of 1802 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2025-03-06 08:00:35 +11:00
Riku
c15c43ed2a translationBot(ui): update translation (German)
Currently translated at 67.2% (1212 of 1802 strings)

Co-authored-by: Riku <riku.block@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2025-03-06 08:00:35 +11:00
psychedelicious
d2f8db9745 tidy: remove unused utils 2025-03-06 07:49:35 +11:00
psychedelicious
c1cf01a038 tests: use dangerously_run_function_in_subprocess to fix configure_torch_cuda_allocator tests 2025-03-06 07:49:35 +11:00
psychedelicious
2bfb4fc79c tests: add util to run a function in separate process
This allows our tests to run in an isolated environment. For tests that implicitly depend on import behaviour, this can prevent side-effects.

The function should only be used for tests.
2025-03-06 07:49:35 +11:00
psychedelicious
d037d8f9aa tests: update tests for configure_torch_cuda_allocator 2025-03-06 07:49:35 +11:00
psychedelicious
d5401e8443 tests: add testing utils to set/unset env var 2025-03-06 07:49:35 +11:00
psychedelicious
d193e4f02a feat(app): log warning instead of raising if PYTORCH_CUDA_ALLOC_CONF is already set 2025-03-06 07:49:35 +11:00
psychedelicious
ec493e30ee feat(app): make logger a required arg in configure_torch_cuda_allocator 2025-03-06 07:49:35 +11:00
Jonathan
081b931edf Update util.py
Changed string to a literal
2025-03-05 14:39:17 +11:00
Jonathan
8cd7035494 Fixed validation of begin and end steps
Fixed logic to match the error message - begin should be <= end.
2025-03-05 14:39:17 +11:00
Eugene Brodsky
4de6fd3ae6 chore(docker): reduce size between docker builds (#7571)
by adding a layer with all the pytorch dependencies that don't change
most of the time.

## Summary

Every time the [`main` docker
images](https://github.com/invoke-ai/InvokeAI/pkgs/container/invokeai)
rebuild and I pull `main-cuda`, it gets another 3+ GB, which seems like
about a zillion times too much since most things don't change from one
commit on `main` to the next.

This is an attempt to follow the guidance in [Using uv in Docker:
Intermediate
Layers](https://docs.astral.sh/uv/guides/integration/docker/#intermediate-layers)
so there's one layer that installs all the dependencies—including
PyTorch with its bundled nvidia libraries—_before_ the project's own
frequently-changing files are copied into the image.


## Related Issues / Discussions

- [Improved docker layer cache with
uv](https://discord.com/channels/1020123559063990373/1329975172022927370)
- [astral: Can `uv pip install` torch, but not `uv sync`
it](https://discord.com/channels/1039017663004942429/1329986610770612347)


## QA Instructions

Hopefully the CI system building the docker images is sufficient.

But there is one change to `pyproject.toml` related to xformers, so it'd
be worth checking that `python -m xformers.info` still says it has
triton on the platforms that expect it.
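
A quick way to spot-check this beyond reading the full `python -m xformers.info` output (a hypothetical snippet, not part of this PR):

```python
# Hypothetical QA check: is triton importable where xformers expects it?
# On CUDA Linux images this should print True; on macOS/CPU images, False is expected.
import importlib.util

print(importlib.util.find_spec("triton") is not None)
```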


## Merge Plan

I don't expect this to be a disruptive merge.

(An earlier revision of this PR moved the venv, but I've reverted that
change at ebr's recommendation.)


## Checklist

- [ ] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
- [ ] _Updated `What's New` copy (if doing a release after this PR)_
2025-03-04 20:42:28 -05:00
Eugene Brodsky
3feb1a6600 Merge branch 'main' into build/docker-dependency-layer 2025-03-04 20:33:24 -05:00
psychedelicious
ea2320c57b feat(ui): add button ref image layer empty state to pull bbox 2025-03-05 08:00:20 +11:00
Eugene Brodsky
0362bd5a06 Merge branch 'main' into build/docker-dependency-layer 2025-03-03 09:32:04 -05:00
Kevin Turner
80d38c0e47 chore(docker): include fewer files while installing dependencies
including just invokeai/version seems sufficient to appease uv sync here. including everything else would invalidate the cache we're trying to establish.
2025-02-16 12:31:14 -08:00
Kevin Turner
22362350dc chore(docker): revert to keeping venv in /opt/venv 2025-02-16 11:26:06 -08:00
Kevin Turner
275d891f48 Merge branch 'main' into build/docker-dependency-layer 2025-02-16 10:34:17 -08:00
Kevin Turner
3848e1926b chore(docker): reduce size between docker builds
by adding a layer with all the pytorch dependencies that don't change most of the time.
2025-01-18 09:10:54 -08:00
14 changed files with 344 additions and 71 deletions

View File

@@ -13,48 +13,63 @@ RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
     git

 # Install `uv` for package management
-COPY --from=ghcr.io/astral-sh/uv:0.5.5 /uv /uvx /bin/
+COPY --from=ghcr.io/astral-sh/uv:0.6.0 /uv /uvx /bin/

 ENV VIRTUAL_ENV=/opt/venv
 ENV PATH="$VIRTUAL_ENV/bin:$PATH"
 ENV INVOKEAI_SRC=/opt/invokeai
 ENV PYTHON_VERSION=3.11
+ENV UV_PYTHON=3.11
+ENV UV_COMPILE_BYTECODE=1
+ENV UV_LINK_MODE=copy
+ENV UV_PROJECT_ENVIRONMENT="$VIRTUAL_ENV"
+ENV UV_INDEX="https://download.pytorch.org/whl/cu124"

 ARG GPU_DRIVER=cuda
 ARG TARGETPLATFORM="linux/amd64"
 # unused but available
 ARG BUILDPLATFORM

 # Switch to the `ubuntu` user to work around dependency issues with uv-installed python
 RUN mkdir -p ${VIRTUAL_ENV} && \
     mkdir -p ${INVOKEAI_SRC} && \
-    chmod -R a+w /opt
+    chmod -R a+w /opt && \
+    mkdir ~ubuntu/.cache && chown ubuntu: ~ubuntu/.cache
 USER ubuntu

-# Install python and create the venv
-RUN uv python install ${PYTHON_VERSION} && \
-    uv venv --relocatable --prompt "invoke" --python ${PYTHON_VERSION} ${VIRTUAL_ENV}
+# Install python
+RUN --mount=type=cache,target=/home/ubuntu/.cache/uv,uid=1000,gid=1000 \
+    uv python install ${PYTHON_VERSION}

 WORKDIR ${INVOKEAI_SRC}
-COPY invokeai ./invokeai
-COPY pyproject.toml ./

-# Editable mode helps use the same image for development:
-# the local working copy can be bind-mounted into the image
-# at path defined by ${INVOKEAI_SRC}
+# Install project's dependencies as a separate layer so they aren't rebuilt every commit.
+# bind-mount instead of copy to defer adding sources to the image until next layer.
+#
 # NOTE: there are no pytorch builds for arm64 + cuda, only cpu
 # x86_64/CUDA is the default
 RUN --mount=type=cache,target=/home/ubuntu/.cache/uv,uid=1000,gid=1000 \
+    --mount=type=bind,source=pyproject.toml,target=pyproject.toml \
+    --mount=type=bind,source=invokeai/version,target=invokeai/version \
     if [ "$TARGETPLATFORM" = "linux/arm64" ] || [ "$GPU_DRIVER" = "cpu" ]; then \
-        extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/cpu"; \
+        UV_INDEX="https://download.pytorch.org/whl/cpu"; \
     elif [ "$GPU_DRIVER" = "rocm" ]; then \
-        extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/rocm6.1"; \
-    else \
-        extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/cu124"; \
+        UV_INDEX="https://download.pytorch.org/whl/rocm6.1"; \
     fi && \
-    uv pip install --python ${PYTHON_VERSION} $extra_index_url_arg -e "."
+    uv sync --no-install-project
+
+# Now that the bulk of the dependencies have been installed, copy in the project files that change more frequently.
+COPY invokeai invokeai
+COPY pyproject.toml .
+
+RUN --mount=type=cache,target=/home/ubuntu/.cache/uv,uid=1000,gid=1000 \
+    --mount=type=bind,source=pyproject.toml,target=pyproject.toml \
+    if [ "$TARGETPLATFORM" = "linux/arm64" ] || [ "$GPU_DRIVER" = "cpu" ]; then \
+        UV_INDEX="https://download.pytorch.org/whl/cpu"; \
+    elif [ "$GPU_DRIVER" = "rocm" ]; then \
+        UV_INDEX="https://download.pytorch.org/whl/rocm6.1"; \
+    fi && \
+    uv sync

 #### Build the Web UI ------------------------------------

@@ -98,6 +113,7 @@ RUN apt update && apt install -y --no-install-recommends \
 ENV INVOKEAI_SRC=/opt/invokeai
 ENV VIRTUAL_ENV=/opt/venv
+ENV UV_PROJECT_ENVIRONMENT="$VIRTUAL_ENV"
 ENV PYTHON_VERSION=3.11
 ENV INVOKEAI_ROOT=/invokeai
 ENV INVOKEAI_HOST=0.0.0.0

@@ -109,7 +125,7 @@ ENV CONTAINER_GID=${CONTAINER_GID:-1000}
 # Install `uv` for package management
 # and install python for the ubuntu user (expected to exist on ubuntu >=24.x)
 # this is too tiny to optimize with multi-stage builds, but maybe we'll come back to it
-COPY --from=ghcr.io/astral-sh/uv:0.5.5 /uv /uvx /bin/
+COPY --from=ghcr.io/astral-sh/uv:0.6.0 /uv /uvx /bin/
 USER ubuntu
 RUN uv python install ${PYTHON_VERSION}
 USER root

View File

@@ -9,6 +9,6 @@ def validate_weights(weights: Union[float, list[float]]) -> None:

 def validate_begin_end_step(begin_step_percent: float, end_step_percent: float) -> None:
-    """Validate that begin_step_percent is less than end_step_percent"""
-    if begin_step_percent >= end_step_percent:
+    """Validate that begin_step_percent is less than or equal to end_step_percent"""
+    if begin_step_percent > end_step_percent:
         raise ValueError("Begin step percent must be less than or equal to end step percent")

View File

@@ -1,20 +1,31 @@
 import logging
 import os
+import sys


-def configure_torch_cuda_allocator(pytorch_cuda_alloc_conf: str, logger: logging.Logger | None = None):
+def configure_torch_cuda_allocator(pytorch_cuda_alloc_conf: str, logger: logging.Logger):
     """Configure the PyTorch CUDA memory allocator. See
     https://pytorch.org/docs/stable/notes/cuda.html#optimizing-memory-usage-with-pytorch-cuda-alloc-conf for supported
     configurations.
     """
-    # Raise if the PYTORCH_CUDA_ALLOC_CONF environment variable is already set.
+    if "torch" in sys.modules:
+        raise RuntimeError("configure_torch_cuda_allocator() must be called before importing torch.")
+
+    # Log a warning if the PYTORCH_CUDA_ALLOC_CONF environment variable is already set.
     prev_cuda_alloc_conf = os.environ.get("PYTORCH_CUDA_ALLOC_CONF", None)
     if prev_cuda_alloc_conf is not None:
-        raise RuntimeError(
-            f"Attempted to configure the PyTorch CUDA memory allocator, but PYTORCH_CUDA_ALLOC_CONF is already set to "
-            f"'{prev_cuda_alloc_conf}'."
-        )
+        if prev_cuda_alloc_conf == pytorch_cuda_alloc_conf:
+            logger.info(
+                f"PYTORCH_CUDA_ALLOC_CONF is already set to '{pytorch_cuda_alloc_conf}'. Skipping configuration."
+            )
+            return
+        else:
+            logger.warning(
+                f"Attempted to configure the PyTorch CUDA memory allocator with '{pytorch_cuda_alloc_conf}', but PYTORCH_CUDA_ALLOC_CONF is already set to "
+                f"'{prev_cuda_alloc_conf}'. Skipping configuration."
+            )
+            return

     # Configure the PyTorch CUDA memory allocator.
     # NOTE: It is important that this happens before torch is imported.
@@ -38,5 +49,4 @@ def configure_torch_cuda_allocator(pytorch_cuda_alloc_conf: str, logger: logging.Logger):
         "not imported before calling configure_torch_cuda_allocator()."
     )
-    if logger is not None:
-        logger.info(f"PyTorch CUDA memory allocator: {torch.cuda.get_allocator_backend()}")
+    logger.info(f"PyTorch CUDA memory allocator: {torch.cuda.get_allocator_backend()}")

View File

@@ -107,7 +107,13 @@
     "min": "Min",
     "max": "Max",
     "resetToDefaults": "Auf Standard zurücksetzen",
-    "seed": "Seed"
+    "seed": "Seed",
+    "row": "Reihe",
+    "column": "Spalte",
+    "end": "Ende",
+    "layout": "Layout",
+    "board": "Ordner",
+    "combinatorial": "Kombinatorisch"
   },
   "gallery": {
     "galleryImageSize": "Bildgröße",
@@ -616,7 +622,9 @@
     "hfTokenUnableToVerify": "HF-Token kann nicht überprüft werden",
     "hfTokenUnableToVerifyErrorMessage": "HuggingFace-Token kann nicht überprüft werden. Dies ist wahrscheinlich auf einen Netzwerkfehler zurückzuführen. Bitte versuchen Sie es später erneut.",
     "hfTokenSaved": "HF-Token gespeichert",
-    "hfTokenRequired": "Sie versuchen, ein Modell herunterzuladen, für das ein gültiges HuggingFace-Token erforderlich ist."
+    "hfTokenRequired": "Sie versuchen, ein Modell herunterzuladen, für das ein gültiges HuggingFace-Token erforderlich ist.",
+    "urlUnauthorizedErrorMessage2": "Hier erfahren wie.",
+    "urlForbidden": "Sie haben keinen Zugriff auf dieses Modell"
   },
   "parameters": {
     "images": "Bilder",
@@ -683,7 +691,8 @@
     "iterations": "Iterationen",
     "guidance": "Führung",
     "coherenceMode": "Modus",
-    "recallMetadata": "Metadaten abrufen"
+    "recallMetadata": "Metadaten abrufen",
+    "gaussianBlur": "Gaußsche Unschärfe"
   },
   "settings": {
     "displayInProgress": "Zwischenbilder anzeigen",
@@ -883,7 +892,8 @@
     "canvas": "Leinwand",
     "prompts_one": "Prompt",
     "prompts_other": "Prompts",
-    "batchSize": "Stapelgröße"
+    "batchSize": "Stapelgröße",
+    "confirm": "Bestätigen"
   },
   "metadata": {
     "negativePrompt": "Negativ Beschreibung",
@@ -1298,7 +1308,13 @@
     "noBatchGroup": "keine Gruppe",
     "generatorNoValues": "leer",
     "generatorLoading": "wird geladen",
-    "generatorLoadFromFile": "Aus Datei laden"
+    "generatorLoadFromFile": "Aus Datei laden",
+    "showEdgeLabels": "Kantenbeschriftungen anzeigen",
+    "downloadWorkflowError": "Fehler beim Herunterladen des Arbeitsablaufs",
+    "nodeName": "Knotenname",
+    "description": "Beschreibung",
+    "loadWorkflowDesc": "Arbeitsablauf laden?",
+    "loadWorkflowDesc2": "Ihr aktueller Arbeitsablauf enthält nicht gespeicherte Änderungen."
   },
   "hrf": {
     "enableHrf": "Korrektur für hohe Auflösungen",

View File

@@ -1911,7 +1911,7 @@
     "resetGenerationSettings": "Reset Generation Settings",
     "replaceCurrent": "Replace Current",
     "controlLayerEmptyState": "<UploadButton>Upload an image</UploadButton>, drag an image from the <GalleryButton>gallery</GalleryButton> onto this layer, or draw on the canvas to get started.",
-    "referenceImageEmptyState": "<UploadButton>Upload an image</UploadButton> or drag an image from the <GalleryButton>gallery</GalleryButton> onto this layer to get started.",
+    "referenceImageEmptyState": "<UploadButton>Upload an image</UploadButton>, drag an image from the <GalleryButton>gallery</GalleryButton> onto this layer, or <PullBboxButton>pull the bounding box into this layer</PullBboxButton> to get started.",
     "warnings": {
       "problemsFound": "Problems found",
       "unsupportedModel": "layer not supported for selected base model",
@@ -2306,8 +2306,8 @@
   "whatsNew": {
     "whatsNewInInvoke": "What's New in Invoke",
     "items": [
-      "Workflow Editor: New drag-and-drop form builder for easier workflow creation.",
-      "Other improvements: Faster batch queuing, better upscaling, improved color picker, and metadata nodes."
+      "Memory Management: New setting for users with Nvidia GPUs to reduce VRAM usage.",
+      "Performance: Continued improvements to overall application performance and responsiveness."
     ],
     "readReleaseNotes": "Read Release Notes",
     "watchRecentReleaseVideos": "Watch Recent Release Videos",

View File

@@ -1766,7 +1766,12 @@
     "both": "Entrambi",
     "emptyRootPlaceholderViewMode": "Fare clic su Modifica per iniziare a creare un modulo per questo flusso di lavoro.",
     "textPlaceholder": "Testo vuoto",
-    "workflowBuilderAlphaWarning": "Il generatore di flussi di lavoro è attualmente in versione alpha. Potrebbero esserci cambiamenti radicali prima della versione stabile."
+    "workflowBuilderAlphaWarning": "Il generatore di flussi di lavoro è attualmente in versione alpha. Potrebbero esserci cambiamenti radicali prima della versione stabile.",
+    "heading": "Intestazione",
+    "divider": "Divisore",
+    "container": "Contenitore",
+    "text": "Testo",
+    "numberInput": "Ingresso numerico"
   }
 },
 "accordions": {
@@ -2189,7 +2194,7 @@
     "controlLayerEmptyState": "<UploadButton>Carica un'immagine</UploadButton>, trascina un'immagine dalla <GalleryButton>galleria</GalleryButton> su questo livello oppure disegna sulla tela per iniziare.",
     "useImage": "Usa immagine",
     "resetGenerationSettings": "Ripristina impostazioni di generazione",
-    "referenceImageEmptyState": "Per iniziare, <UploadButton>carica un'immagine</UploadButton> oppure trascina un'immagine dalla <GalleryButton>galleria</GalleryButton> su questo livello.",
+    "referenceImageEmptyState": "Per iniziare, <UploadButton>carica un'immagine</UploadButton>, trascina un'immagine dalla <GalleryButton>galleria</GalleryButton>, oppure <PullBboxButton>trascina il riquadro di delimitazione in questo livello</PullBboxButton> su questo livello.",
     "asRasterLayer": "Come $t(controlLayers.rasterLayer)",
     "asRasterLayerResize": "Come $t(controlLayers.rasterLayer) (Ridimensiona)",
     "asControlLayer": "Come $t(controlLayers.controlLayer)",

View File

@@ -2167,7 +2167,7 @@
     "parameterSetDesc": "Gợi lại {{parameter}}",
     "loadedWithWarnings": "Đã Tải Workflow Với Cảnh Báo",
     "outOfMemoryErrorDesc": "Thiết lập tạo sinh hiện tại đã vượt mức cho phép của thiết bị. Hãy điều chỉnh thiết lập và thử lại.",
-    "setNodeField": "Đặt làm vùng cho node",
+    "setNodeField": "Đặt làm vùng node",
     "problemRetrievingWorkflow": "Có Vấn Đề Khi Lấy Lại Workflow",
     "somethingWentWrong": "Có Vấn Đề Phát Sinh",
     "problemDeletingWorkflow": "Có Vấn Đề Khi Xoá Workflow",
@@ -2259,7 +2259,7 @@
     "uploadAndSaveWorkflow": "Tải Lên Thư Viện",
     "openLibrary": "Mở Thư Viện",
     "builder": {
-      "resetAllNodeFields": "Tải Lại Các Vùng Cho Node",
+      "resetAllNodeFields": "Tải Lại Các Vùng Node",
       "builder": "Trình Tạo Vùng Nhập",
       "layout": "Bố Cục",
       "row": "Hàng",
@@ -2274,15 +2274,19 @@
       "slider": "Thanh Trượt",
       "both": "Cả Hai",
       "emptyRootPlaceholderViewMode": "Chọn Chỉnh Sửa để bắt đầu tạo nên một vùng nhập cho workflow này.",
-      "emptyRootPlaceholderEditMode": "Kéo thành phần vùng nhập hoặc vùng cho node vào đây để bắt đầu.",
+      "emptyRootPlaceholderEditMode": "Kéo thành phần vùng nhập hoặc vùng node vào đây để bắt đầu.",
       "containerPlaceholder": "Hộp Chứa Trống",
       "headingPlaceholder": "Đầu Dòng Trống",
-      "textPlaceholder": "Mô Tả Trống",
+      "textPlaceholder": "Văn Bản Trống",
       "column": "Cột",
       "deleteAllElements": "Xóa Tất Cả Thành Phần",
-      "nodeField": "Vùng Cho Node",
-      "nodeFieldTooltip": "Để thêm vùng cho node, bấm vào dấu cộng nhỏ trên vùng trong Trình Biên Tập Workflow, hoặc kéo vùng theo tên của nó vào vùng nhập.",
-      "workflowBuilderAlphaWarning": "Trình tạo vùng nhập đang trong giai đoạn alpha. Nó có thể xuất hiện những thay đổi đột ngột trước khi chính thức được phát hành."
+      "nodeField": "Vùng Node",
+      "nodeFieldTooltip": "Để thêm vùng node, bấm vào dấu cộng nhỏ trên vùng trong Trình Biên Tập Workflow, hoặc kéo vùng theo tên của nó vào vùng nhập.",
+      "workflowBuilderAlphaWarning": "Trình tạo vùng nhập đang trong giai đoạn alpha. Nó có thể xuất hiện những thay đổi đột ngột trước khi chính thức được phát hành.",
+      "container": "Hộp Chứa",
+      "heading": "Đầu Dòng",
+      "text": "Văn Bản",
+      "divider": "Gạch Chia"
     }
   },
   "upscaling": {
},
"upscaling": {

View File

@@ -2,6 +2,7 @@ import { Button, Flex, Text } from '@invoke-ai/ui-library';
 import { useAppDispatch } from 'app/store/storeHooks';
 import { useImageUploadButton } from 'common/hooks/useImageUploadButton';
 import { useEntityIdentifierContext } from 'features/controlLayers/contexts/EntityIdentifierContext';
+import { usePullBboxIntoGlobalReferenceImage } from 'features/controlLayers/hooks/saveCanvasHooks';
 import { useCanvasIsBusy } from 'features/controlLayers/hooks/useCanvasIsBusy';
 import type { SetGlobalReferenceImageDndTargetData } from 'features/dnd/dnd';
 import { setGlobalReferenceImageDndTarget } from 'features/dnd/dnd';
@@ -27,6 +28,7 @@ export const IPAdapterSettingsEmptyState = memo(() => {
   const onClickGalleryButton = useCallback(() => {
     dispatch(activeTabCanvasRightPanelChanged('gallery'));
   }, [dispatch]);
+  const pullBboxIntoIPAdapter = usePullBboxIntoGlobalReferenceImage(entityIdentifier);

   const dndTargetData = useMemo<SetGlobalReferenceImageDndTargetData>(
     () => setGlobalReferenceImageDndTarget.getData({ entityIdentifier }),
@@ -41,8 +43,11 @@ export const IPAdapterSettingsEmptyState = memo(() => {
       GalleryButton: (
         <Button onClick={onClickGalleryButton} isDisabled={isBusy} size="sm" variant="link" color="base.300" />
       ),
+      PullBboxButton: (
+        <Button onClick={pullBboxIntoIPAdapter} isDisabled={isBusy} size="sm" variant="link" color="base.300" />
+      ),
     }),
-    [isBusy, onClickGalleryButton, uploadApi]
+    [isBusy, onClickGalleryButton, pullBboxIntoIPAdapter, uploadApi]
   );

   return (

View File

@@ -2,6 +2,7 @@ import { Button, Flex, IconButton, Spacer, Text } from '@invoke-ai/ui-library';
 import { useAppDispatch } from 'app/store/storeHooks';
 import { useImageUploadButton } from 'common/hooks/useImageUploadButton';
 import { useEntityIdentifierContext } from 'features/controlLayers/contexts/EntityIdentifierContext';
+import { usePullBboxIntoRegionalGuidanceReferenceImage } from 'features/controlLayers/hooks/saveCanvasHooks';
 import { useCanvasIsBusy } from 'features/controlLayers/hooks/useCanvasIsBusy';
 import { rgIPAdapterDeleted } from 'features/controlLayers/store/canvasSlice';
 import type { SetRegionalGuidanceReferenceImageDndTargetData } from 'features/dnd/dnd';
@@ -36,6 +37,7 @@ export const RegionalGuidanceIPAdapterSettingsEmptyState = memo(({ referenceImag
   const onDeleteIPAdapter = useCallback(() => {
     dispatch(rgIPAdapterDeleted({ entityIdentifier, referenceImageId }));
   }, [dispatch, entityIdentifier, referenceImageId]);
+  const pullBboxIntoIPAdapter = usePullBboxIntoRegionalGuidanceReferenceImage(entityIdentifier, referenceImageId);

   const dndTargetData = useMemo<SetRegionalGuidanceReferenceImageDndTargetData>(
     () =>
@@ -46,6 +48,21 @@ export const RegionalGuidanceIPAdapterSettingsEmptyState = memo(({ referenceImag
     [entityIdentifier, referenceImageId]
   );
+  const components = useMemo(
+    () => ({
+      UploadButton: (
+        <Button isDisabled={isBusy} size="sm" variant="link" color="base.300" {...uploadApi.getUploadButtonProps()} />
+      ),
+      GalleryButton: (
+        <Button onClick={onClickGalleryButton} isDisabled={isBusy} size="sm" variant="link" color="base.300" />
+      ),
+      PullBboxButton: (
+        <Button onClick={pullBboxIntoIPAdapter} isDisabled={isBusy} size="sm" variant="link" color="base.300" />
+      ),
+    }),
+    [isBusy, onClickGalleryButton, pullBboxIntoIPAdapter, uploadApi]
+  );
+
   return (
     <Flex flexDir="column" gap={2} position="relative" w="full">
       <Flex alignItems="center" gap={2}>
@@ -66,23 +83,7 @@ export const RegionalGuidanceIPAdapterSettingsEmptyState = memo(({ referenceImag
       </Flex>
       <Flex alignItems="center" gap={2} p={4}>
         <Text textAlign="center" color="base.300">
-          <Trans
-            i18nKey="controlLayers.referenceImageEmptyState"
-            components={{
-              UploadButton: (
-                <Button
-                  isDisabled={isBusy}
-                  size="sm"
-                  variant="link"
-                  color="base.300"
-                  {...uploadApi.getUploadButtonProps()}
-                />
-              ),
-              GalleryButton: (
-                <Button onClick={onClickGalleryButton} isDisabled={isBusy} size="sm" variant="link" color="base.300" />
-              ),
-            }}
-          />
+          <Trans i18nKey="controlLayers.referenceImageEmptyState" components={components} />
         </Text>
       </Flex>
       <input {...uploadApi.getUploadInputProps()} />

View File

@@ -1 +1 @@
-__version__ = "5.7.2rc2"
+__version__ = "5.7.2"

View File

@@ -101,8 +101,7 @@ dependencies = [
 "xformers" = [
   # Core generation dependencies, pinned for reproducible builds.
   "xformers>=0.0.28.post1; sys_platform!='darwin'",
-  # Auxiliary dependencies, pinned only if necessary.
-  "triton; sys_platform=='linux'",
+  # torch 2.4+cu carries its own triton dependency
 ]
 "onnx" = ["onnxruntime"]
 "onnx-cuda" = ["onnxruntime-gpu"]

View File

@@ -1,13 +1,127 @@
 import pytest
 import torch
-from invokeai.app.util.torch_cuda_allocator import configure_torch_cuda_allocator
+
+from tests.dangerously_run_function_in_subprocess import dangerously_run_function_in_subprocess
+
+# These tests are a bit fiddly, because they depend on the import behaviour of torch. They use subprocesses to isolate
+# the import behaviour of torch, and then check that the function behaves as expected. We have to hack in some logging
+# to check that the tested function is behaving as expected.


 @pytest.mark.skipif(not torch.cuda.is_available(), reason="Requires CUDA device.")
-def test_configure_torch_cuda_allocator_raises_if_torch_is_already_imported():
-    """Test that configure_torch_cuda_allocator() raises a RuntimeError if torch is already imported."""
-    import torch  # noqa: F401
-
-    with pytest.raises(RuntimeError, match="Failed to configure the PyTorch CUDA memory allocator."):
-        configure_torch_cuda_allocator("backend:cudaMallocAsync")
+def test_configure_torch_cuda_allocator_configures_backend():
+    """Test that configure_torch_cuda_allocator() raises a RuntimeError if the configured backend does not match the
+    expected backend."""
+
+    def test_func():
+        import os
+
+        # Unset the environment variable if it is set so that we can test setting it
+        try:
+            del os.environ["PYTORCH_CUDA_ALLOC_CONF"]
+        except KeyError:
+            pass
+
+        from unittest.mock import MagicMock
+
+        from invokeai.app.util.torch_cuda_allocator import configure_torch_cuda_allocator
+
+        mock_logger = MagicMock()
+
+        # Set the PyTorch CUDA memory allocator to cudaMallocAsync
+        configure_torch_cuda_allocator("backend:cudaMallocAsync", logger=mock_logger)
+
+        # Verify that the PyTorch CUDA memory allocator was configured correctly
+        import torch
+
+        assert torch.cuda.get_allocator_backend() == "cudaMallocAsync"
+
+        # Verify that the logger was called with the correct message
+        mock_logger.info.assert_called_once()
+        args, _kwargs = mock_logger.info.call_args
+        logged_message = args[0]
+        print(logged_message)
+
+    stdout, _stderr, returncode = dangerously_run_function_in_subprocess(test_func)
+
+    assert returncode == 0
+    assert "PyTorch CUDA memory allocator: cudaMallocAsync" in stdout
+
+
+@pytest.mark.skipif(not torch.cuda.is_available(), reason="Requires CUDA device.")
+def test_configure_torch_cuda_allocator_raises_if_torch_already_imported():
+    """Test that configure_torch_cuda_allocator() raises a RuntimeError if torch was already imported."""
+
+    def test_func():
+        from unittest.mock import MagicMock
+
+        # Import torch before calling configure_torch_cuda_allocator()
+        import torch  # noqa: F401
+
+        from invokeai.app.util.torch_cuda_allocator import configure_torch_cuda_allocator
+
+        try:
+            configure_torch_cuda_allocator("backend:cudaMallocAsync", logger=MagicMock())
+        except RuntimeError as e:
+            print(e)
+
+    stdout, _stderr, returncode = dangerously_run_function_in_subprocess(test_func)
+
+    assert returncode == 0
+    assert "configure_torch_cuda_allocator() must be called before importing torch." in stdout
+
+
+@pytest.mark.skipif(not torch.cuda.is_available(), reason="Requires CUDA device.")
+def test_configure_torch_cuda_allocator_warns_if_env_var_is_set_differently():
+    """Test that configure_torch_cuda_allocator() logs at WARNING level if PYTORCH_CUDA_ALLOC_CONF is set and doesn't
+    match the requested configuration."""
+
+    def test_func():
+        import os
+
+        # Explicitly set the environment variable
+        os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "backend:native"
+
+        from unittest.mock import MagicMock
+
+        from invokeai.app.util.torch_cuda_allocator import configure_torch_cuda_allocator
+
+        mock_logger = MagicMock()
+
+        # Set the PyTorch CUDA memory allocator a different configuration
+        configure_torch_cuda_allocator("backend:cudaMallocAsync", logger=mock_logger)
+
+        # Verify that the logger was called with the correct message
+        mock_logger.warning.assert_called_once()
+        args, _kwargs = mock_logger.warning.call_args
+        logged_message = args[0]
+        print(logged_message)
+
+    stdout, _stderr, returncode = dangerously_run_function_in_subprocess(test_func)
+
+    assert returncode == 0
+    assert "Attempted to configure the PyTorch CUDA memory allocator with 'backend:cudaMallocAsync'" in stdout
+
+
+@pytest.mark.skipif(not torch.cuda.is_available(), reason="Requires CUDA device.")
+def test_configure_torch_cuda_allocator_logs_if_env_var_is_already_set_correctly():
+    """Test that configure_torch_cuda_allocator() logs at INFO level if PYTORCH_CUDA_ALLOC_CONF is set and matches the
+    requested configuration."""
+
+    def test_func():
+        import os
+
+        os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "backend:native"
+
+        from unittest.mock import MagicMock
+
+        from invokeai.app.util.torch_cuda_allocator import configure_torch_cuda_allocator
+
+        mock_logger = MagicMock()
+
+        configure_torch_cuda_allocator("backend:native", logger=mock_logger)
+
+        mock_logger.info.assert_called_once()
+        args, _kwargs = mock_logger.info.call_args
+        logged_message = args[0]
+        print(logged_message)
+
+    stdout, _stderr, returncode = dangerously_run_function_in_subprocess(test_func)
+
+    assert returncode == 0
+    assert "PYTORCH_CUDA_ALLOC_CONF is already set to 'backend:native'" in stdout

View File

@@ -0,0 +1,46 @@
+import inspect
+import subprocess
+import sys
+import textwrap
+from typing import Any, Callable
+
+
+def dangerously_run_function_in_subprocess(func: Callable[[], Any]) -> tuple[str, str, int]:
+    """**Use with caution! This should _only_ be used with trusted code!**
+
+    Extracts a function's source and runs it in a separate subprocess. Returns stdout, stderr, and return code
+    from the subprocess.
+
+    This is useful for tests where an isolated environment is required.
+
+    The function to be called must not have any arguments and must not have any closures over the scope in which it
+    was defined.
+
+    Any modules that the function depends on must be imported inside the function.
+    """
+    source_code = inspect.getsource(func)
+
+    # Must dedent the source code to avoid indentation errors
+    dedented_source_code = textwrap.dedent(source_code)
+
+    # Get the function name so we can call it in the subprocess
+    func_name = func.__name__
+
+    # Create a script that calls the function
+    script = f"""
+import sys
+
+{dedented_source_code}
+
+if __name__ == "__main__":
+    {func_name}()
+"""
+
+    result = subprocess.run(
+        [sys.executable, "-c", textwrap.dedent(script)],  # Run the script in a subprocess
+        capture_output=True,  # Capture stdout and stderr
+        text=True,
+    )
+
+    return result.stdout, result.stderr, result.returncode

View File

@@ -0,0 +1,57 @@
+from tests.dangerously_run_function_in_subprocess import dangerously_run_function_in_subprocess
+
+
+def test_simple_function():
+    def test_func():
+        print("Hello, Test!")
+
+    stdout, stderr, returncode = dangerously_run_function_in_subprocess(test_func)
+
+    assert returncode == 0
+    assert stdout.strip() == "Hello, Test!"
+    assert stderr == ""
+
+
+def test_function_with_error():
+    def test_func():
+        raise ValueError("This is an error")
+
+    _stdout, stderr, returncode = dangerously_run_function_in_subprocess(test_func)
+
+    assert returncode != 0  # Should fail
+    assert "ValueError: This is an error" in stderr
+
+
+def test_function_with_imports():
+    def test_func():
+        import math
+
+        print(math.sqrt(4))
+
+    stdout, stderr, returncode = dangerously_run_function_in_subprocess(test_func)
+
+    assert returncode == 0
+    assert stdout.strip() == "2.0"
+    assert stderr == ""
+
+
+def test_function_with_sys_exit():
+    def test_func():
+        import sys
+
+        sys.exit(42)
+
+    _stdout, _stderr, returncode = dangerously_run_function_in_subprocess(test_func)
+
+    assert returncode == 42  # Should return the custom exit code
+
+
+def test_function_with_closure():
+    foo = "bar"
+
+    def test_func():
+        print(foo)
+
+    _stdout, _stderr, returncode = dangerously_run_function_in_subprocess(test_func)
+
+    assert returncode == 1  # Should fail because of closure