Mirror of https://github.com/invoke-ai/InvokeAI.git (synced 2026-01-20 00:37:57 -05:00)

Comparing 9 commits:

- e315fb9e7b
- 827ac4b841
- 73e5b08c1f
- e8eb9fd533
- 250def76de
- b2fb108414
- 383f8908be
- ec233e30fb
- 018121330a

.github/workflows/build-installer.yml (vendored, 2 changed lines)

@@ -41,5 +41,5 @@ jobs:
       - name: upload installer artifact
         uses: actions/upload-artifact@v4
         with:
-          name: ${{ steps.create_installer.outputs.INSTALLER_FILENAME }}
+          name: installer
           path: ${{ steps.create_installer.outputs.INSTALLER_PATH }}

@@ -61,11 +61,33 @@ This sets up both python and frontend dependencies and builds the python package
 
 #### Sanity Check & Smoke Test
 
-At this point, the release workflow pauses as the remaining publish jobs require approval.
+At this point, the release workflow pauses as the remaining publish jobs require approval. Time to test the installer.
 
-A maintainer should go to the **Summary** tab of the workflow, download the installer and test it. Ensure the app loads and generates.
+Because the installer pulls from PyPI, and we haven't published to PyPI yet, you will need to install from the wheel:
 
-> The same wheel file is bundled in the installer and in the `dist` artifact, which is uploaded to PyPI. You should end up with the exactly the same installation of the `invokeai` package from any of these methods.
+- Download and unzip `dist.zip` and the installer from the **Summary** tab of the workflow
+- Run the installer script using the `--wheel` CLI arg, pointing at the wheel:
+
+  ```sh
+  ./install.sh --wheel ../InvokeAI-4.0.0rc6-py3-none-any.whl
+  ```
+
+- Install to a temporary directory so you get the new user experience
+- Download a model and generate
+
+> The same wheel file is bundled in the installer and in the `dist` artifact, which is uploaded to PyPI. You should end up with exactly the same installation as if the installer got the wheel from PyPI.
+
+##### Something isn't right
+
+If testing reveals any issues, no worries. Cancel the workflow, which will cancel the pending publish jobs (you didn't approve them prematurely, right?).
+
+Now you can start from the top:
+
+- Fix the issues and PR the fixes per usual
+- Get the PR approved and merged per usual
+- Switch to `main` and pull in the fixes
+- Run `make tag-release` to move the tag to `HEAD` (which has the fixes) and kick off the release workflow again
+- Re-do the sanity check
 
 #### PyPI Publish Jobs
@@ -81,6 +103,12 @@ Both jobs require a maintainer to approve them from the workflow's **Summary** tab
 
 > **If the version already exists on PyPI, the publish jobs will fail.** PyPI only allows a given version to be published once - you cannot change it. If the version published on PyPI has a problem, you'll need to "fail forward" by bumping the app version and publishing a followup release.
 
+##### Failing PyPI Publish
+
+Check the [python infrastructure status page] for incidents.
+
+If there are no incidents, contact @hipsterusername or @lstein, who have owner access to GH and PyPI, to see if access has expired or something like that.
+
 #### `publish-testpypi` Job
 
 Publishes the distribution on the [Test PyPI] index, using the `testpypi` GitHub environment.
@@ -110,11 +138,13 @@ Publishes the distribution on the production PyPI index, using the `pypi` GitHub environment
 
 Once the release is published to PyPI, it's time to publish the GitHub release.
 
 1. [Draft a new release] on GitHub, choosing the tag that triggered the release.
-2. Write the release notes, describing important changes. The **Generate release notes** button automatically inserts the changelog and new contributors, and you can copy/paste the intro from previous releases.
-3. Upload the zip file created in **`build`** job into the Assets section of the release notes. You can also upload the zip into the body of the release notes, since it can be hard for users to find the Assets section.
-4. Check the **Set as a pre-release** and **Create a discussion for this release** checkboxes at the bottom of the release page.
-5. Publish the pre-release.
-6. Announce the pre-release in Discord.
+1. Write the release notes, describing important changes. The **Generate release notes** button automatically inserts the changelog and new contributors, and you can copy/paste the intro from previous releases.
+1. Use `scripts/get_external_contributions.py` to get a list of external contributions to shout out in the release notes.
+1. Upload the zip file created in the **`build`** job into the Assets section of the release notes.
+1. Check **Set as a pre-release** if it's a pre-release.
+1. Check **Create a discussion for this release**.
+1. Publish the release.
+1. Announce the release in Discord.
 
 > **TODO** Workflows can create a GitHub release from a template and upload release assets. One popular action to handle this is [ncipollo/release-action]. A future enhancement to the release process could set this up.
@@ -140,3 +170,4 @@ This functionality is available as a fallback in case something goes wonky.
 [trusted publishers]: https://docs.pypi.org/trusted-publishers/
 [samuelcolvin/check-python-version]: https://github.com/samuelcolvin/check-python-version
 [manually]: #manual-release
+[python infrastructure status page]: https://status.python.org/

@@ -9,7 +9,8 @@ from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField
 from invokeai.app.invocations.primitives import ConditioningOutput
 from invokeai.app.services.shared.invocation_context import InvocationContext
 from invokeai.app.util.ti_utils import generate_ti_list
-from invokeai.backend.lora import LoRAModelRaw
+from invokeai.backend.lora_model_patcher import LoraModelPatcher
+from invokeai.backend.lora_model_raw import LoRAModelRaw
 from invokeai.backend.model_patcher import ModelPatcher
 from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
     BasicConditioningInfo,

@@ -80,7 +81,8 @@ class CompelInvocation(BaseInvocation):
             ),
             text_encoder_info as text_encoder,
             # Apply the LoRA after text_encoder has been moved to its target device for faster patching.
-            ModelPatcher.apply_lora_text_encoder(text_encoder, _lora_loader()),
+            # ModelPatcher.apply_lora_text_encoder(text_encoder, _lora_loader()),
+            LoraModelPatcher.apply_lora_to_text_encoder(text_encoder, _lora_loader(), "text_encoder"),
             # Apply CLIP Skip after LoRA to prevent LoRA application from failing on skipped layers.
             ModelPatcher.apply_clip_skip(text_encoder_model, self.clip.skipped_layers),
         ):

@@ -181,7 +183,8 @@ class SDXLPromptInvocationBase:
             ),
             text_encoder_info as text_encoder,
             # Apply the LoRA after text_encoder has been moved to its target device for faster patching.
-            ModelPatcher.apply_lora(text_encoder, _lora_loader(), lora_prefix),
+            # ModelPatcher.apply_lora(text_encoder, _lora_loader(), lora_prefix),
+            LoraModelPatcher.apply_lora_to_text_encoder(text_encoder, _lora_loader(), lora_prefix),
             # Apply CLIP Skip after LoRA to prevent LoRA application from failing on skipped layers.
             ModelPatcher.apply_clip_skip(text_encoder_model, clip_field.skipped_layers),
         ):

@@ -259,15 +262,15 @@ class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
     @torch.no_grad()
     def invoke(self, context: InvocationContext) -> ConditioningOutput:
         c1, c1_pooled, ec1 = self.run_clip_compel(
-            context, self.clip, self.prompt, False, "lora_te1_", zero_on_empty=True
+            context, self.clip, self.prompt, False, "text_encoder", zero_on_empty=True
         )
         if self.style.strip() == "":
             c2, c2_pooled, ec2 = self.run_clip_compel(
-                context, self.clip2, self.prompt, True, "lora_te2_", zero_on_empty=True
+                context, self.clip2, self.prompt, True, "text_encoder_2", zero_on_empty=True
             )
         else:
             c2, c2_pooled, ec2 = self.run_clip_compel(
-                context, self.clip2, self.style, True, "lora_te2_", zero_on_empty=True
+                context, self.clip2, self.style, True, "text_encoder_2", zero_on_empty=True
             )
 
         original_size = (self.original_height, self.original_width)
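
Note that the prefix strings passed to `run_clip_compel` change from `lora_te1_` / `lora_te2_` to `text_encoder` / `text_encoder_2`, which is how the new `LoraModelPatcher` (added below) selects text-encoder keys from a diffusers-format LoRA state dict. A minimal, illustrative sketch of that filtering, using hypothetical key names:

```py
# Illustrative sketch only: the keys below are hypothetical stand-ins for
# diffusers-format LoRA keys. LoraModelPatcher.apply_lora_to_text_encoder
# filters on `prefix + "."` in the same way.
state_dict = {
    "text_encoder.layers.0.q_proj.lora_down.weight": 0,
    "text_encoder_2.layers.0.q_proj.lora_down.weight": 0,
    "unet.down_blocks.0.attn.proj_in.lora_down.weight": 0,
}

for prefix in ("text_encoder", "text_encoder_2"):
    # The trailing "." keeps "text_encoder" from also matching "text_encoder_2" keys.
    filtered = {k: v for k, v in state_dict.items() if k.startswith(prefix + ".")}
    print(prefix, "->", sorted(filtered))
```
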
@@ -52,7 +52,8 @@ from invokeai.app.invocations.t2i_adapter import T2IAdapterField
 from invokeai.app.services.shared.invocation_context import InvocationContext
 from invokeai.app.util.controlnet_utils import prepare_control_image
 from invokeai.backend.ip_adapter.ip_adapter import IPAdapter, IPAdapterPlus
-from invokeai.backend.lora import LoRAModelRaw
+from invokeai.backend.lora_model_patcher import LoraModelPatcher
+from invokeai.backend.lora_model_raw import LoRAModelRaw
 from invokeai.backend.model_manager import BaseModelType, LoadedModel
 from invokeai.backend.model_patcher import ModelPatcher
 from invokeai.backend.stable_diffusion import PipelineIntermediateState, set_seamless

@@ -739,7 +740,8 @@ class DenoiseLatentsInvocation(BaseInvocation):
             set_seamless(unet_info.model, self.unet.seamless_axes),  # FIXME
             unet_info as unet,
             # Apply the LoRA after unet has been moved to its target device for faster patching.
-            ModelPatcher.apply_lora_unet(unet, _lora_loader()),
+            # ModelPatcher.apply_lora_unet(unet, _lora_loader()),
+            LoraModelPatcher.apply_lora_to_unet(unet, _lora_loader()),
         ):
             assert isinstance(unet, UNet2DConditionModel)
             latents = latents.to(device=unet.device, dtype=unet.dtype)

@@ -9,7 +9,6 @@ from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection
 
 from invokeai.backend.ip_adapter.ip_attention_weights import IPAttentionWeights
 
-from ..raw_model import RawModel
 from .resampler import Resampler
 
 

@@ -92,7 +91,7 @@
         return clip_extra_context_tokens
 
 
-class IPAdapter(RawModel):
+class IPAdapter(torch.nn.Module):
     """IP-Adapter: https://arxiv.org/pdf/2308.06721.pdf"""
 
     def __init__(

invokeai/backend/lora_model_patcher.py (new file, 65 lines)

@@ -0,0 +1,65 @@
from contextlib import contextmanager
from typing import Iterator, Tuple, Union

from diffusers.loaders.lora import LoraLoaderMixin
from diffusers.models.unets.unet_2d_condition import UNet2DConditionModel
from diffusers.utils.peft_utils import recurse_remove_peft_layers
from transformers import CLIPTextModel

from invokeai.backend.lora_model_raw import LoRAModelRaw


class LoraModelPatcher:
    @classmethod
    def unload_lora_from_model(cls, m: Union[UNet2DConditionModel, CLIPTextModel]):
        """Unload all LoRA models from a UNet or Text Encoder.

        This implementation is based on LoraLoaderMixin.unload_lora_weights().
        """
        recurse_remove_peft_layers(m)
        if hasattr(m, "peft_config"):
            del m.peft_config  # type: ignore
        if hasattr(m, "_hf_peft_config_loaded"):
            m._hf_peft_config_loaded = None  # type: ignore

    @classmethod
    @contextmanager
    def apply_lora_to_unet(cls, unet: UNet2DConditionModel, loras: Iterator[Tuple[LoRAModelRaw, float]]):
        try:
            # TODO(ryand): Test speed of low_cpu_mem_usage=True.
            for lora, lora_weight in loras:
                LoraLoaderMixin.load_lora_into_unet(
                    state_dict=lora.state_dict,
                    network_alphas=lora.network_alphas,
                    unet=unet,
                    low_cpu_mem_usage=True,
                    adapter_name=lora.name,
                    _pipeline=None,
                )
            yield
        finally:
            cls.unload_lora_from_model(unet)

    @classmethod
    @contextmanager
    def apply_lora_to_text_encoder(
        cls, text_encoder: CLIPTextModel, loras: Iterator[Tuple[LoRAModelRaw, float]], prefix: str
    ):
        assert prefix in ["text_encoder", "text_encoder_2"]
        try:
            for lora, lora_weight in loras:
                # Filter the state_dict to only include the keys that start with the prefix.
                text_encoder_state_dict = {
                    key: value for key, value in lora.state_dict.items() if key.startswith(prefix + ".")
                }
                if len(text_encoder_state_dict) > 0:
                    LoraLoaderMixin.load_lora_into_text_encoder(
                        state_dict=text_encoder_state_dict,
                        network_alphas=lora.network_alphas,
                        text_encoder=text_encoder,
                        low_cpu_mem_usage=True,
                        adapter_name=lora.name,
                        _pipeline=None,
                    )
            yield
        finally:
            cls.unload_lora_from_model(text_encoder)
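
For reference, a minimal usage sketch of these context managers, mirroring the call sites in `CompelInvocation` and `DenoiseLatentsInvocation` above; the function name, the LoRA path, and the weight value are hypothetical:

```py
from diffusers.models.unets.unet_2d_condition import UNet2DConditionModel

from invokeai.backend.lora_model_patcher import LoraModelPatcher
from invokeai.backend.lora_model_raw import LoRAModelRaw


def denoise_with_lora(unet: UNet2DConditionModel, lora_path: str, weight: float = 0.75) -> None:
    """Sketch: patch `unet` with a single LoRA for the duration of the `with` block."""
    lora = LoRAModelRaw.from_checkpoint(lora_path)
    # The patcher takes an iterator of (LoRAModelRaw, weight) tuples, matching the
    # _lora_loader() generators used by the invocations.
    with LoraModelPatcher.apply_lora_to_unet(unet, iter([(lora, weight)])):
        ...  # run the denoising loop with the patched UNet
    # On exit, unload_lora_from_model() strips the injected PEFT layers again.
```
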
invokeai/backend/lora_model_raw.py (new file, 66 lines)

@@ -0,0 +1,66 @@
from pathlib import Path
from typing import Optional, Union

import torch
from diffusers.loaders.lora import LoraLoaderMixin
from typing_extensions import Self


class LoRAModelRaw:
    def __init__(
        self,
        name: str,
        state_dict: dict[str, torch.Tensor],
        network_alphas: Optional[dict[str, float]],
    ):
        self._name = name
        self.state_dict = state_dict
        self.network_alphas = network_alphas

    @property
    def name(self) -> str:
        return self._name

    def to(self, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None) -> None:
        for key, layer in self.state_dict.items():
            self.state_dict[key] = layer.to(device=device, dtype=dtype)

    def calc_size(self) -> int:
        """Calculate the size of the model in bytes."""
        model_size = 0
        for layer in self.state_dict.values():
            model_size += layer.numel() * layer.element_size()
        return model_size

    @classmethod
    def from_checkpoint(
        cls, file_path: Union[str, Path], device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None
    ) -> Self:
        """This function is based on diffusers LoraLoaderMixin.load_lora_weights()."""

        file_path = Path(file_path)
        if file_path.is_dir():
            raise NotImplementedError("LoRA models from directories are not yet supported.")

        dir_path = file_path.parent
        file_name = file_path.name

        state_dict, network_alphas = LoraLoaderMixin.lora_state_dict(
            pretrained_model_name_or_path_or_dict=str(file_path), local_files_only=True, weight_name=str(file_name)
        )

        is_correct_format = all("lora" in key for key in state_dict.keys())
        if not is_correct_format:
            raise ValueError("Invalid LoRA checkpoint.")

        model = cls(
            # TODO(ryand): Handle both files and directories here?
            name=Path(file_path).stem,
            state_dict=state_dict,
            network_alphas=network_alphas,
        )

        device = device or torch.device("cpu")
        dtype = dtype or torch.float32
        model.to(device=device, dtype=dtype)
        return model
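
A short sketch of exercising this class directly, mirroring how `LoRALoader` calls `LoRAModelRaw.from_checkpoint(...)` in the model-loader hunk further down (note that `base_model` is no longer passed); the path and dtype here are placeholders:

```py
import torch

from invokeai.backend.lora_model_raw import LoRAModelRaw


def inspect_lora(path: str) -> None:
    """Sketch: load a diffusers-format LoRA file and report its size in memory."""
    lora = LoRAModelRaw.from_checkpoint(file_path=path, dtype=torch.float16)
    n_tensors = len(lora.state_dict)
    size_mb = lora.calc_size() / 1e6
    print(f"{lora.name}: {n_tensors} tensors, ~{size_mb:.1f} MB")
```
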
@@ -11,8 +11,6 @@ from typing_extensions import Self
 
 from invokeai.backend.model_manager import BaseModelType
 
-from .raw_model import RawModel
-
 
 class LoRALayerBase:
     # rank: Optional[int]

@@ -368,7 +366,7 @@ class IA3Layer(LoRALayerBase):
 AnyLoRALayer = Union[LoRALayer, LoHALayer, LoKRLayer, FullLayer, IA3Layer]
 
 
-class LoRAModelRaw(RawModel):  # (torch.nn.Module):
+class LoRAModelRaw(torch.nn.Module):
     _name: str
     layers: Dict[str, AnyLoRALayer]
 
@@ -31,12 +31,13 @@ from typing_extensions import Annotated, Any, Dict
 
 from invokeai.app.invocations.constants import SCHEDULER_NAME_VALUES
 from invokeai.app.util.misc import uuid_string
-
-from ..raw_model import RawModel
+from invokeai.backend.ip_adapter.ip_adapter import IPAdapter
+from invokeai.backend.lora_model_raw import LoRAModelRaw
+from invokeai.backend.onnx.onnx_runtime import IAIOnnxRuntimeModel
+from invokeai.backend.textual_inversion import TextualInversionModelRaw
 
 # ModelMixin is the base class for all diffusers and transformers models
-# RawModel is the InvokeAI wrapper class for ip_adapters, loras, textual_inversion and onnx runtime
-AnyModel = Union[ModelMixin, RawModel, torch.nn.Module]
+AnyModel = Union[ModelMixin, torch.nn.Module, IPAdapter, LoRAModelRaw, TextualInversionModelRaw, IAIOnnxRuntimeModel]
 
 
 class InvalidModelConfigException(Exception):

@@ -6,7 +6,7 @@ from pathlib import Path
 from typing import Optional, Tuple
 
 from invokeai.app.services.config import InvokeAIAppConfig
-from invokeai.backend.lora import LoRAModelRaw
+from invokeai.backend.lora_model_raw import LoRAModelRaw
 from invokeai.backend.model_manager import (
     AnyModel,
     AnyModelConfig,

@@ -51,7 +51,6 @@ class LoRALoader(ModelLoader):
         model = LoRAModelRaw.from_checkpoint(
             file_path=model_path,
             dtype=self._torch_dtype,
-            base_model=self._model_base,
         )
         return model
 
@@ -17,7 +17,7 @@ from invokeai.backend.model_manager import AnyModel
 from invokeai.backend.model_manager.load.optimizations import skip_torch_weight_init
 from invokeai.backend.onnx.onnx_runtime import IAIOnnxRuntimeModel
 
-from .lora import LoRAModelRaw
+from .lora_model_raw import LoRAModelRaw
 from .textual_inversion import TextualInversionManager, TextualInversionModelRaw
 
 """

@@ -6,17 +6,16 @@ from typing import Any, List, Optional, Tuple, Union
 
 import numpy as np
 import onnx
 import torch
 from onnx import numpy_helper
 from onnxruntime import InferenceSession, SessionOptions, get_available_providers
 
-from ..raw_model import RawModel
-
 ONNX_WEIGHTS_NAME = "model.onnx"
 
 
 # NOTE FROM LS: This was copied from Stalker's original implementation.
 # I have not yet gone through and fixed all the type hints
-class IAIOnnxRuntimeModel(RawModel):
+class IAIOnnxRuntimeModel(torch.nn.Module):
     class _tensor_access:
         def __init__(self, model):  # type: ignore
             self.model = model

@@ -1,15 +0,0 @@
-"""Base class for 'Raw' models.
-
-The RawModel class is the base class of LoRAModelRaw and TextualInversionModelRaw,
-and is used for type checking of calls to the model patcher. Its main purpose
-is to avoid a circular import issues when lora.py tries to import BaseModelType
-from invokeai.backend.model_manager.config, and the latter tries to import LoRAModelRaw
-from lora.py.
-
-The term 'raw' was introduced to describe a wrapper around a torch.nn.Module
-that adds additional methods and attributes.
-"""
-
-
-class RawModel:
-    """Base class for 'Raw' model wrappers."""

@@ -9,10 +9,8 @@ from safetensors.torch import load_file
 from transformers import CLIPTokenizer
 from typing_extensions import Self
 
-from .raw_model import RawModel
-
 
-class TextualInversionModelRaw(RawModel):
+class TextualInversionModelRaw(torch.nn.Module):
     embedding: torch.Tensor  # [n, 768]|[n, 1280]
     embedding_2: Optional[torch.Tensor] = None  # [n, 768]|[n, 1280] - for SDXL models
 
@@ -44,6 +44,7 @@ dependencies = [
     "onnx==1.15.0",
     "onnxruntime==1.16.3",
     "opencv-python==4.9.0.80",
+    "peft==0.9.0",
     "pytorch-lightning==2.1.3",
     "safetensors==0.4.2",
     "timm==0.6.13",  # needed to override timm latest in controlnet_aux, see https://github.com/isl-org/ZoeDepth/issues/26

@@ -73,7 +74,7 @@ dependencies = [
     "easing-functions",
     "einops",
     "facexlib",
-    "matplotlib",  # needed for plotting of Penner easing functions
+    "matplotlib",  # needed for plotting of Penner easing functions
     "npyscreen",
     "omegaconf",
     "picklescan",

scripts/get_external_contributions.py (new file, 122 lines)

@@ -0,0 +1,122 @@
import re
from argparse import ArgumentParser, RawTextHelpFormatter
from typing import Any

import requests
from attr import dataclass
from tqdm import tqdm


def get_author(commit: dict[str, Any]) -> str:
    """Gets the author of a commit.

    If the author is not present, the committer is used instead and an asterisk appended to the name."""
    return commit["author"]["login"] if commit["author"] else f"{commit['commit']['author']['name']}*"


@dataclass
class CommitInfo:
    sha: str
    url: str
    author: str
    is_username: bool
    message: str
    data: dict[str, Any]

    def __str__(self) -> str:
        return f"{self.sha}: {self.author}{'*' if not self.is_username else ''} - {self.message} ({self.url})"

    @classmethod
    def from_data(cls, commit: dict[str, Any]) -> "CommitInfo":
        return CommitInfo(
            sha=commit["sha"],
            url=commit["url"],
            author=commit["author"]["login"] if commit["author"] else commit["commit"]["author"]["name"],
            is_username=bool(commit["author"]),
            message=commit["commit"]["message"].split("\n")[0],
            data=commit,
        )


def fetch_commits_between_tags(
    org_name: str, repo_name: str, from_ref: str, to_ref: str, token: str
) -> list[CommitInfo]:
    """Fetches all commits between two tags in a GitHub repository."""

    commit_info: list[CommitInfo] = []
    headers = {"Authorization": f"token {token}"} if token else None

    # Get the total number of pages w/ an initial request - a bit hacky but it works...
    response = requests.get(
        f"https://api.github.com/repos/{org_name}/{repo_name}/compare/{from_ref}...{to_ref}?page=1&per_page=100",
        headers=headers,
    )
    last_page_match = re.search(r'page=(\d+)&per_page=\d+>; rel="last"', response.headers["Link"])
    last_page = int(last_page_match.group(1)) if last_page_match else 1

    pbar = tqdm(range(1, last_page + 1), desc="Fetching commits", unit="page", leave=False)

    for page in pbar:
        compare_url = f"https://api.github.com/repos/{org_name}/{repo_name}/compare/{from_ref}...{to_ref}?page={page}&per_page=100"
        response = requests.get(compare_url, headers=headers)
        commits = response.json()["commits"]
        commit_info.extend([CommitInfo.from_data(c) for c in commits])

    return commit_info


def main():
    description = """Fetch external contributions between two tags in the InvokeAI GitHub repository. Useful for generating a list of contributors to include in release notes.

When the GitHub username for a commit is not available, the committer name is used instead and an asterisk appended to the name.

Example output (note the second commit has an asterisk appended to the name):
171f2aa20ddfefa23c5edbeb2849c4bd601fe104: rohinish404 - fix(ui): image not getting selected (https://api.github.com/repos/invoke-ai/InvokeAI/commits/171f2aa20ddfefa23c5edbeb2849c4bd601fe104)
0bb0e226dcec8a17e843444ad27c29b4821dad7c: Mark E. Shoulson* - Flip default ordering of workflow library; #5477 (https://api.github.com/repos/invoke-ai/InvokeAI/commits/0bb0e226dcec8a17e843444ad27c29b4821dad7c)
"""

    parser = ArgumentParser(description=description, formatter_class=RawTextHelpFormatter)
    parser.add_argument("--token", dest="token", type=str, default=None, help="The GitHub token to use")
    parser.add_argument("--from", dest="from_ref", type=str, help="The start reference (commit, tag, etc)")
    parser.add_argument("--to", dest="to_ref", type=str, help="The end reference (commit, tag, etc)")

    args = parser.parse_args()

    org_name = "invoke-ai"
    repo_name = "InvokeAI"

    # List of members of the organization, including usernames and known display names,
    # any of which may be used in the commit data. Used to filter out commits.
    org_members = [
        "blessedcoolant",
        "brandonrising",
        "chainchompa",
        "ebr",
        "Eugene Brodsky",
        "hipsterusername",
        "Kent Keirsey",
        "lstein",
        "Lincoln Stein",
        "maryhipp",
        "Mary Hipp Rogers",
        "Mary Hipp",
        "psychedelicious",
        "RyanJDick",
        "Ryan Dick",
    ]

    all_commits = fetch_commits_between_tags(
        org_name=org_name,
        repo_name=repo_name,
        from_ref=args.from_ref,
        to_ref=args.to_ref,
        token=args.token,
    )
    filtered_commits = filter(lambda x: x.author not in org_members, all_commits)

    for commit in filtered_commits:
        print(commit)


if __name__ == "__main__":
    main()

@@ -5,7 +5,7 @@
 import pytest
 import torch
 
-from invokeai.backend.lora import LoRALayer, LoRAModelRaw
+from invokeai.backend.lora_model_raw import LoRALayer, LoRAModelRaw
 from invokeai.backend.model_patcher import ModelPatcher
 
 