Update-docs (#1382)

* update IMG2IMG.md

* update INPAINTING.md

* update WEBUIHOTKEYS.md

* more doc updates (mostly fix formatting):
- OUTPAINTING.md
- POSTPROCESS.md
- PROMPTS.md
- VARIATIONS.md
- WEB.md
- WEBUIHOTKEYS.md
Matthias Wild · 2022-11-05 14:36:45 +01:00 · committed by GitHub
commit 7b9a4564b1 · parent fcdefa0620
8 changed files with 601 additions and 503 deletions


# Image-to-Image
## `img2img`
This script also provides an `img2img` feature that lets you seed your creations
with an initial drawing or photo. This is a really cool feature that tells
Stable Diffusion to build the prompt on top of the image you provide, preserving
the original's basic shape and layout. To use it, provide the `--init_img`
option as shown here:
```commandline
tree on a hill with a river, nature photograph, national geographic -I./test-pictures/tree-and-river-sketch.png -f 0.85
```
This will take the original image shown here:
<figure markdown>
![](https://user-images.githubusercontent.com/50542132/193946000-c42a96d8-5a74-4f8a-b4c3-5213e6cadcce.png)
</figure>
and generate a new image based on it as shown here:
<figure markdown>
![](https://user-images.githubusercontent.com/111189/194135515-53d4c060-e994-4016-8121-7c685e281ac9.png)
</figure>
The `--init_img` (`-I`) option gives the path to the seed picture. `--strength`
(`-f`) controls how much the original will be modified, ranging from `0.0` (keep
the original intact) to `1.0` (ignore the original completely). The default is
`0.75`, and values in the range `0.25`-`0.90` give interesting results. Other
relevant options include `-C` (classifier-free guidance scale) and `-s` (steps).
Unlike `txt2img`, adding steps will continuously change the resulting image and
it will not converge.
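
For example, a hypothetical invocation combining these options (the `-f`, `-C`,
and `-s` values here are purely illustrative, not recommendations) might look
like this:

```commandline
invoke> "tree on a hill with a river, nature photograph, national geographic" -I./test-pictures/tree-and-river-sketch.png -f 0.6 -C7.5 -s50
```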
You may also pass a `-v<variation_amount>` option to generate `-n<iterations>`
count variants on the original image. This is done by passing the first
generated image back into img2img the requested number of times. It generates
interesting variants.
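
For instance, this hypothetical command would produce four variants of the
sketch, each moderately different from the first generation:

```commandline
invoke> "tree on a hill with a river, nature photograph, national geographic" -I./test-pictures/tree-and-river-sketch.png -v0.2 -n4
```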
Note that the prompt makes a big difference. For example, this slight variation
on the prompt produces a very different image:
<figure markdown>
![](https://user-images.githubusercontent.com/111189/194135220-16b62181-b60c-4248-8989-4834a8fd7fbd.png)
<caption markdown>photograph of a tree on a hill with a river</caption>
</figure>
Bear in mind that most real photographs in the model's training data will probably not
be labeled "photograph" or "photorealistic." They will, however, be captioned with the publication, photographer, camera
model, or film settings.
If the initial image contains transparent regions, then Stable Diffusion will
only draw within the transparent regions, a process called
[`inpainting`](./INPAINTING.md#creating-transparent-regions-for-inpainting).
However, for this to work correctly, the color information underneath the
transparent regions needs to be preserved, not erased.
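
If you prepare your init images programmatically, a minimal Pillow/NumPy sketch
(not part of InvokeAI; the filename is hypothetical) for checking that a
transparent region still carries color information might look like this:

```python
from PIL import Image
import numpy as np

img = np.array(Image.open("fire-mask.png").convert("RGBA"))  # hypothetical file
transparent = img[..., 3] == 0            # fully transparent pixels
rgb_under_mask = img[..., :3][transparent]

# If the editor erased the color underneath the mask, these values will all be
# one flat color (often black), and inpainting will behave poorly.
if transparent.any() and np.all(rgb_under_mask == rgb_under_mask[0]):
    print("warning: color data under the transparent region looks erased")
```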
!!! warning
    **IMPORTANT ISSUE** `img2img` does not work properly on initial images
    smaller than 512x512. Please scale your image to at least 512x512 before
    using it. Larger images are not a problem, but they may run out of VRAM on
    your GPU card. To fix this, use the `--fit` option, which downscales the
    initial image to fit within the box specified by width x height:

    ```
    tree on a hill with a river, national geographic -I./test-pictures/big-sketch.png -H512 -W512 --fit
    ```
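
Fit-style downscaling just preserves the aspect ratio while fitting inside the
requested box; here is a sketch of the arithmetic (an illustration, not
InvokeAI's actual code):

```python
def fit_within(w, h, max_w=512, max_h=512):
    """Scale (w, h) down to fit inside max_w x max_h, keeping aspect ratio."""
    scale = min(max_w / w, max_h / h, 1.0)  # never upscale
    return int(w * scale), int(h * scale)

print(fit_within(1600, 1200))  # -> (512, 384)
```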
## How does it actually work, though?
The main difference between `img2img` and `prompt2img` is the starting point.
While `prompt2img` always starts with pure gaussian noise and progressively
refines it over the requested number of steps, `img2img` skips some of these
earlier steps (how many it skips is indirectly controlled by the `--strength`
parameter), and instead uses your initial image mixed with gaussian noise as the
starting image.
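
As a rough sketch (my reading of the behaviour described here, not the actual
scheduler code), the strength parameter decides how many of the steps are
skipped:

```python
def img2img_start_step(steps: int, strength: float) -> int:
    """Return the index of the first denoising step img2img actually runs."""
    steps_to_run = int(steps * strength)  # e.g. 10 steps at strength 0.7 -> 7
    return steps - steps_to_run           # the earlier steps are skipped

print(img2img_start_step(10, 0.7))  # -> 3: steps 0-2 are skipped, 7 are run
```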
**Let's start** by thinking about vanilla `prompt2img`, just generating an image
from a prompt. If the step count is 10, then the "latent space" (Stable
Diffusion's internal representation of the image) for the prompt "fire" with
seed `1592514025` develops something like this:
```commandline
invoke> "fire" -s10 -W384 -H384 -S1592514025
```

<figure markdown>
![latent steps](../assets/img2img/000019.steps.png)
</figure>
Put simply: starting from a frame of fuzz/static, SD finds details in each frame
that it thinks look like "fire" and brings them a little bit more into focus,
gradually scrubbing out the fuzz until a clear image remains.
**When you use `img2img`** some of the earlier steps are cut, and instead an
initial image of your choice is used. But because of how the maths behind Stable
Diffusion works, this image needs to be mixed with just the right amount of
noise (fuzz/static) for where it is being inserted. This is where the strength
parameter comes in. Depending on the set strength, your image will be inserted
into the sequence at the appropriate point, with just the right amount of noise.
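
Schematically, the mixing follows the standard diffusion forward process; this
sketch assumes a torch tensor `init_latent` and the model's `alpha_cumprod`
noise schedule, and is an illustration rather than InvokeAI's exact code:

```python
import torch

def noise_for_insertion(init_latent, t, alpha_cumprod):
    """Mix the init image's latent with just enough noise for timestep t."""
    noise = torch.randn_like(init_latent)
    a = alpha_cumprod[t]
    # Higher strength -> earlier (noisier) timestep t -> smaller a -> more noise.
    return a.sqrt() * init_latent + (1 - a).sqrt() * noise
```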
### A concrete example
I want SD to draw a fire based on this hand-drawn image:

<figure markdown>
![drawing of a fireplace](../assets/img2img/fire-drawing.png)
</figure>
Let's only do 10 steps, to make it easier to see what's happening. If strength
is `0.7`, this is what the internal steps the algorithm has to take will look
like:
<figure markdown>
![gravity32](../assets/img2img/000032.steps.gravity.png)
</figure>

With strength `0.4`, the steps look more like this:

<figure markdown>
![gravity30](../assets/img2img/000030.steps.gravity.png)
</figure>
Notice how much more fuzzy the starting image is for strength `0.7` compared to
`0.4`, and notice also how much longer the sequence is with `0.7`:
| | strength = 0.7 | strength = 0.4 |
| --------------------------- | ------------------------------------------------------------- | ------------------------------------------------------------- |
| initial image that SD sees | ![](../assets/img2img/000032.step-0.png) | ![](../assets/img2img/000030.step-0.png) |
| steps argument to `invoke>` | `-s10`                                                          | `-s10`                                                          |
| steps actually taken | 7 | 4 |
| latent space at each step | ![gravity32](../assets/img2img/000032.steps.gravity.png) | ![gravity30](../assets/img2img/000030.steps.gravity.png) |
| output | ![000032.1592514025](../assets/img2img/000032.1592514025.png) | ![000030.1592514025](../assets/img2img/000030.1592514025.png) |
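
The "steps actually taken" row matches the `steps × strength` rule of thumb
sketched earlier:

```python
for strength in (0.7, 0.4):
    print(strength, int(10 * strength))  # -> 0.7 7 and 0.4 4
```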
Both of the outputs look kind of like what I was thinking of. With the strength
higher, my input becomes more vague, _and_ Stable Diffusion has more steps to
refine its output. But it's not really making what I want, which is a picture of
a cheery open fire. With the strength lower, my input is clearer, _but_ Stable
Diffusion has less chance to refine itself, so the result ends up inheriting all
the problems of my bad drawing.
If you want to try this out yourself, all of these are using a seed of
`1592514025` with a width/height of `384`, step count `10`, the default sampler
(`k_lms`), and the single-word prompt `"fire"`:
```commandline
invoke> "fire" -s10 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png --strength 0.7
```
The code for rendering intermediates is on my (damian0815's) branch
[document-img2img](https://github.com/damian0815/InvokeAI/tree/document-img2img) -
run `invoke.py` and check your `outputs/img-samples/intermediates` folder while
generating an image.
### Compensating for the reduced step count
After putting this guide together I was curious to see how big the difference
would be if I increased the step count to compensate, so that SD could have the
same number of steps to develop the image regardless of the strength. So I ran
the generation again using the same seed, but this time adapting the step count
to give each generation 20 steps.
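
A tiny helper for the step count needed so that SD runs a fixed number of steps
from your image (assuming the `steps × strength` relationship sketched above):

```python
import math

def steps_argument(desired_steps: int, strength: float) -> int:
    """Step count to pass with -s so that about desired_steps are run."""
    return math.ceil(desired_steps / strength)

print(steps_argument(20, 0.4))  # -> 50
print(steps_argument(20, 0.7))  # -> 29 (rounded up to 30 below)
```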
Here's strength `0.4` (note step count `50`, which is `20 ÷ 0.4` to make sure SD
does `20` steps from my image):
```commandline
invoke> "fire" -s50 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.4
```

<figure markdown>
![000035.1592514025](../assets/img2img/000035.1592514025.png)
</figure>
and here is strength `0.7` (note step count `30`, which is roughly `20 ÷ 0.7` to
make sure SD does `20` steps from my image):
```commandline
invoke> "fire" -s30 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.7
```

<figure markdown>
![000046.1592514025](../assets/img2img/000046.1592514025.png)
</figure>
In both cases the image is nice and clean and "finished", but because at
strength `0.7` Stable Diffusion has been given so much more freedom to improve
on my badly-drawn flames, they've come out looking much better. You can really
see the difference when looking at the latent steps. There's more noise on the
first image with strength `0.7`:
<figure markdown>
![gravity46](../assets/img2img/000046.steps.gravity.png)
</figure>

than there is for strength `0.4`:

<figure markdown>
![gravity35](../assets/img2img/000035.steps.gravity.png)
</figure>
and that extra noise gives the algorithm more choices when it is evaluating how
to denoise any particular pixel in the image.
Unfortunately, it seems that `img2img` is very sensitive to the step count.
Here's strength `0.7` with a step count of `29` (SD did 19 steps from my image):
<figure markdown>
![gravity45](../assets/img2img/000045.1592514025.png)
</figure>
By comparing the latents we can sort of see that something got interpreted
differently enough on the third or fourth step to lead to a rather different
interpretation of the flames.
<figure markdown>
![gravity46](../assets/img2img/000046.steps.gravity.png)
</figure>

<figure markdown>
![gravity45](../assets/img2img/000045.steps.gravity.png)
</figure>
This is the result of a difference in the de-noising "schedule": basically, the
noise has to be cleaned up by a certain degree at each step or the model won't
"converge" on the image properly (see the
[stable diffusion blog](https://huggingface.co/blog/stable_diffusion) for more
about that). A different step count means a different schedule, which means
things get interpreted slightly differently at every step.
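
To make "a different step count means a different schedule" concrete, here is a
schematic of how the timesteps SD visits shift when the count changes
(illustrative only; real samplers use their own spacing, and `T` is a
hypothetical number of training timesteps):

```python
import numpy as np

T = 1000  # hypothetical number of training timesteps

for steps in (29, 30):
    ts = np.linspace(T - 1, 0, steps).round().astype(int)
    print(steps, ts[:4])  # the first few timesteps already differ

# 29 -> [999 963 928 892 ...]
# 30 -> [999 965 930 896 ...]
```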