Compare commits

...

28 Commits

Author SHA1 Message Date
Lincoln Stein
3c74dd41c4 Merge branch 'hwharrison-main' into main
This enables k_lms sampling (now the default)
2022-08-21 20:17:22 -04:00
Lincoln Stein
f5450bad61 k_lms sampling working; half precision working, can override with --full_precision 2022-08-21 20:16:31 -04:00
Lincoln Stein
2ace56313c Update README.md 2022-08-21 19:59:36 -04:00
Lincoln Stein
78aba5b770 preparing for merge into main 2022-08-21 19:57:48 -04:00
Lincoln Stein
49f0d31fac turned off debugging flag 2022-08-21 18:27:48 -04:00
Lincoln Stein
bb91ca0462 first attempt to fold k_lms changes proposed by hwharrison and bmaltais 2022-08-21 17:09:00 -04:00
Lincoln Stein
d340afc9e5 Merge branch 'main' of https://github.com/hwharrison/stable-diffusion into hwharrison-main 2022-08-21 16:32:31 -04:00
Lincoln Stein
7085d1910b set sys.path to include "." before loading simplet2i module 2022-08-21 11:03:22 -04:00
Lincoln Stein
a997e09c48 Merge pull request #13 from hwharrison/fix_windows_bug
Fix windows path error.
2022-08-21 10:46:27 -04:00
henry
503f962f68 ntpath doesn't have append, use join instead 2022-08-20 22:38:56 -05:00
henry
41f0afbcb6 add klms sampling 2022-08-20 22:28:29 -05:00
Lincoln Stein
6650b98e7c close #11 2022-08-20 19:49:12 -04:00
Lincoln Stein
1ca3dc553c added "." directory to sys path to prevent ModuleNotFound error on ldm.simplet2i that some Windows users have experienced 2022-08-20 19:46:54 -04:00
Lincoln Stein
09afcc321c Merge pull request #4 from xraxra/halfPrecision
use Half precision for reduced memory usage & faster speed
2022-08-20 09:42:17 -04:00
Lincoln Stein
7b2335068c Update README.md 2022-08-19 15:44:26 -04:00
Lincoln Stein
d3eff4d827 Update README.md 2022-08-19 15:42:50 -04:00
Lincoln Stein
0d23a0f899 Update README.md 2022-08-19 15:41:54 -04:00
Lincoln Stein
985948c8b9 Update README.md 2022-08-19 15:40:13 -04:00
Lincoln Stein
6ae09f6e46 Update README.md 2022-08-19 15:37:54 -04:00
Lincoln Stein
ae821ce0e6 Create README.md 2022-08-19 15:33:18 -04:00
Lincoln Stein
ce5b94bf40 Update README.md 2022-08-19 14:32:05 -04:00
Lincoln Stein
b5d9981125 Update README.md 2022-08-19 14:29:05 -04:00
Lincoln Stein
9a237015da Fixed an errant quotation mark in README 2022-08-19 07:55:20 -04:00
Lincoln Stein
5eff5d4cd2 Update README.md 2022-08-19 07:04:38 -04:00
Lincoln Stein
4527ef15f9 Update README.md 2022-08-19 06:58:25 -04:00
Lincoln Stein
0cea751476 remove shebang line from scripts; suspected culprit in Windows "module ldm.simplet2i not found" error 2022-08-19 06:33:42 -04:00
xra
a5fb8469ed use Half precision for reduced memory usage & faster speed
This allows users with 6 GB and 8 GB cards to run at 512x512, and enables even larger resolutions on bigger GPUs.
I compared the output in Beyond Compare; minor differences are detected at tolerance 3, but side by side the differences are not perceptible.
2022-08-19 17:23:43 +09:00
Lincoln Stein
9eaef0c5a8 Update README.md 2022-08-18 23:26:41 -04:00
7 changed files with 305 additions and 50 deletions

README.md

@@ -19,8 +19,9 @@ from the command-line interface is very fast.
The script uses the readline library to allow for in-line editing,
command history (up and down arrows), autocompletion, and more.
-Note that this has only been tested in the Linux environment. Testing
-and tweaking for Windows is in progress.
+The script is confirmed to work on Linux and Windows systems. It should
+work on MacOSX as well, but this is not confirmed. Note that this script
+runs from the command-line (CMD or Terminal window), and does not have a GUI.
~~~~
(ldm) ~/stable-diffusion$ python3 ./scripts/dream.py
@@ -45,9 +46,10 @@ dream> "there's a fly in my soup" -n6 -g
seeds for individual rows: [2685670268, 1216708065, 2335773498, 822223658, 714542046, 3395302430]
~~~~
-The dream> prompt's arguments are pretty-much
+The dream> prompt's arguments are pretty much
identical to those used in the Discord bot, except you don't need to
-type "!dream". A significant change is that creation of individual images is the default
+type "!dream" (it doesn't hurt if you do). A significant change is that creation of individual images
+is now the default
unless --grid (-g) is given. For backward compatibility, the -i switch is recognized.
For command-line help type -h (or --help) at the dream> prompt.
@@ -71,31 +73,150 @@ The --init_img (-I) option gives the path to the seed picture. --strength (-f) c
the original will be modified, ranging from 0.0 (keep the original intact), to 1.0 (ignore the original
completely). The default is 0.75, and ranges from 0.25-0.75 give interesting results.
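As an illustration, a hypothetical dream> invocation using these options (the file name is made up for the example):
~~~~
dream> "a sea monster attacking a ship" -I ./seed-picture.png -f 0.6
~~~~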
## Changes
- v1.01 (21 August 2022)
* added k_lms sampling **Please run "conda env update -f environment.yaml" to load the k_lms dependencies**
* use half precision arithmetic by default, resulting in faster execution and lower memory requirements.
Pass the --full_precision argument to dream.py for slower but more accurate image generation.
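For example, after updating to this version the new pieces can be exercised like this (a sketch; the paths assume the standard checkout):
~~~~
(ldm) ~/stable-diffusion$ conda env update -f environment.yaml
(ldm) ~/stable-diffusion$ python3 scripts/dream.py --full_precision
~~~~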
## Installation
For installation, follow the instructions from the original CompVis/stable-diffusion
README which is appended to this README for your convenience. A few things to be aware of:
### Linux/Mac
-1. You will need the stable-diffusion model weights, which have to be downloaded separately as described
-in the CompVis instructions. They are expected to be released in the latter half of August.
1. You will need to install the following prerequisites if they are not already available. Use your
operating system's preferred installer
* Python (version 3.8 or higher)
* git
-2. If you do not have the weights and want to play with low-quality image generation, then you can use
-the public LAION400m weights, which can be installed like this:
2. Install the Anaconda environment manager by downloading and running the installer
for your platform from https://docs.anaconda.com/anaconda/install/ .
After installing anaconda, you should log out of your system and log back in. If the installation
worked, your command prompt will be prefixed by the name of the current anaconda environment, "(base)".
-~~~~
-mkdir -p models/ldm/text2img-large/
-wget -O models/ldm/text2img-large/model.ckpt https://ommer-lab.com/files/latent-diffusion/nitro/txt2img-f8-large/model.ckpt
-~~~~
3. Copy the stable-diffusion source code from GitHub:
```
(base) ~$ git clone https://github.com/lstein/stable-diffusion.git
```
This will create a stable-diffusion folder where you will follow the rest of the steps.
-You will then have to invoke dream.py with the --laion400m (or -l for short) flag:
-~~~~
-(ldm) ~/stable-diffusion$ python3 ./scripts/dream.py -l
-~~~~
4. Enter the newly-created stable-diffusion folder. From this step forward make sure that you are working in the stable-diffusion directory!
```
(base) ~$ cd stable-diffusion
(base) ~/stable-diffusion$
```
5. Use anaconda to install the necessary python packages, create a new python environment named "ldm",
and activate the environment.
```
(base) ~/stable-diffusion$ conda env create -f environment.yaml
(base) ~/stable-diffusion$ conda activate ldm
(ldm) ~/stable-diffusion$
```
After these steps, your command prompt will be prefixed by "(ldm)" as shown above.
-3. To get around issues that arise when running the stable diffusion model on a machine without internet
-connectivity, I wrote a script that pre-downloads internet dependencies. Whether or not your GPU machine
-has connectivity, you will need to run this preloading script before the first run of dream.py. See
-"Workaround for machines with limited internet connectivity" below for the walkthrough.
6. Load a couple of small machine-learning models required by stable diffusion:
```
(ldm) ~/stable-diffusion$ python3 scripts/preload_models.py
```
7. Now you need to install the weights for the stable diffusion model.
For testing prior to the release of the real weights, you can use an older weight file that produces low-quality images. Create a directory within stable-diffusion named "models/ldm/text2img-large", and use the wget URL downloader tool to copy the weight file into it:
```
(ldm) ~/stable-diffusion$ mkdir -p models/ldm/text2img-large
(ldm) ~/stable-diffusion$ wget -O models/ldm/text2img-large/model.ckpt https://ommer-lab.com/files/latent-diffusion/nitro/txt2img-f8-large/model.ckpt
```
For testing with the released weights, you will do something similar, but with a directory named "models/ldm/stable-diffusion-v1":
```
(ldm) ~/stable-diffusion$ mkdir -p models/ldm/stable-diffusion-v1
(ldm) ~/stable-diffusion$ wget -O models/ldm/stable-diffusion-v1/model.ckpt <ENTER URL HERE>
```
These weight files are ~5 GB in size, so downloading may take a while.
8. Start generating images!
```
# for the pre-release weights use the -l or --laion400m switch
(ldm) ~/stable-diffusion$ python3 scripts/dream.py -l
# for the post-release weights do not use the switch
(ldm) ~/stable-diffusion$ python3 scripts/dream.py
# for additional configuration switches and arguments, use -h or --help
(ldm) ~/stable-diffusion$ python3 scripts/dream.py -h
```
9. Subsequently, to relaunch the script, be sure to run "conda activate ldm" (step 5, second command), enter the "stable-diffusion"
directory, and then launch the dream script (step 8). If you forget to activate the ldm environment, the script will fail with multiple ModuleNotFound errors.
### Updating to newer versions of the script
This distribution is changing rapidly. If you used the "git clone" method (step 3) to download the stable-diffusion directory, then to update to the latest and greatest version, enter the stable-diffusion directory and type:
```
(ldm) ~/stable-diffusion$ git pull
```
This will bring your local copy into sync with the remote one.
### Windows
1. Install the most recent Python from here: https://www.python.org/downloads/windows/
2. Install Anaconda3 (miniconda3 version) from here: https://docs.anaconda.com/anaconda/install/windows/
3. Install Git from here: https://git-scm.com/download/win
4. Launch Anaconda from the Windows Start menu. This will bring up a command window. Type all the remaining commands in this window.
5. Run the command:
```
git clone https://github.com/lstein/stable-diffusion.git
```
This will create a stable-diffusion folder where you will follow the rest of the steps.
6. Enter the newly-created stable-diffusion folder. From this step forward make sure that you are working in the stable-diffusion directory!
```
cd stable-diffusion
```
7. Run the following two commands:
```
conda env create -f environment.yaml (step 7a)
conda activate ldm (step 7b)
```
This will install all python requirements and activate the "ldm" environment which sets PATH and other environment variables properly.
8. Run the command:
```
python scripts\preload_models.py
```
This installs two machine learning models that stable diffusion requires.
9. Now you need to install the weights for the big stable diffusion model.
For testing prior to the release of the real weights, create a directory within stable-diffusion named "models\ldm\text2img-large".
For testing with the released weights, create a directory within stable-diffusion named "models\ldm\stable-diffusion-v1".
Then use a web browser to copy model.ckpt into the appropriate directory. For the text2img-large (pre-release) model, the weights are at https://ommer-lab.com/files/latent-diffusion/nitro/txt2img-f8-large/model.ckpt. Check back here later for the release URL.
10. Start generating images!
```
# for the pre-release weights
python scripts\dream.py -l
# for the post-release weights
python scripts\dream.py
```
11. Subsequently, to relaunch the script, first activate the Anaconda command window (step 4), run "conda activate ldm" (step 7b), and then launch the dream script (step 10).
### Updating to newer versions of the script
This distribution is changing rapidly. If you used the "git clone" method (step 5) to download the stable-diffusion directory, then to update to the latest and greatest version, launch the Anaconda window, enter the stable-diffusion directory, and type:
```
git pull
```
This will bring your local copy into sync with the remote one.
## Simplified API for text to image generation
@@ -181,6 +302,7 @@ See [this section](#stable-diffusion-v1) below and the [model card](https://hugg
## Requirements
A suitable [conda](https://conda.io/) environment named `ldm` can be created
and activated with:
@@ -195,8 +317,7 @@ You can also update an existing [latent diffusion](https://github.com/CompVis/la
conda install pytorch torchvision -c pytorch
pip install transformers==4.19.2
pip install -e .
```
## Stable Diffusion v1

environment.yaml

@@ -24,6 +24,8 @@ dependencies:
    - transformers==4.19.2
    - torchmetrics==0.6.0
    - kornia==0.6
-   - -e git+https://github.com/CompVis/taming-transformers.git@master#egg=taming-transformers
+   - accelerate==0.12.0
    - -e git+https://github.com/openai/CLIP.git@main#egg=clip
+   - -e git+https://github.com/CompVis/taming-transformers.git@master#egg=taming-transformers
+   - -e git+https://github.com/lstein/k-diffusion.git@master#egg=k-diffusion
    - -e .

ldm/models/diffusion/ksampler.py (new file)

@@ -0,0 +1,74 @@
'''wrapper around part of Katherine Crowson's k-diffusion library, making it call-compatible with other Samplers'''
import k_diffusion as K
import torch
import torch.nn as nn
import accelerate

class CFGDenoiser(nn.Module):
    '''Runs the conditioned and unconditioned predictions in one batch and blends them (classifier-free guidance).'''
    def __init__(self, model):
        super().__init__()
        self.inner_model = model

    def forward(self, x, sigma, uncond, cond, cond_scale):
        x_in = torch.cat([x] * 2)
        sigma_in = torch.cat([sigma] * 2)
        cond_in = torch.cat([uncond, cond])
        uncond, cond = self.inner_model(x_in, sigma_in, cond=cond_in).chunk(2)
        return uncond + (cond - uncond) * cond_scale

class KSampler(object):
    def __init__(self, model, schedule='lms', **kwargs):
        super().__init__()
        self.model = K.external.CompVisDenoiser(model)
        self.accelerator = accelerate.Accelerator()
        self.device = self.accelerator.device
        self.schedule = schedule

    # unused duplicate of CFGDenoiser.forward; note that KSampler itself defines no inner_model attribute
    def forward(self, x, sigma, uncond, cond, cond_scale):
        x_in = torch.cat([x] * 2)
        sigma_in = torch.cat([sigma] * 2)
        cond_in = torch.cat([uncond, cond])
        uncond, cond = self.inner_model(x_in, sigma_in, cond=cond_in).chunk(2)
        return uncond + (cond - uncond) * cond_scale

    # most of these arguments are ignored and are only present for compatibility with
    # other samplers
    @torch.no_grad()
    def sample(self,
               S,
               batch_size,
               shape,
               conditioning=None,
               callback=None,
               normals_sequence=None,
               img_callback=None,
               quantize_x0=False,
               eta=0.,
               mask=None,
               x0=None,
               temperature=1.,
               noise_dropout=0.,
               score_corrector=None,
               corrector_kwargs=None,
               verbose=True,
               x_T=None,
               log_every_t=100,
               unconditional_guidance_scale=1.,
               unconditional_conditioning=None,
               # this has to come in the same format as the conditioning, e.g. as encoded tokens, ...
               **kwargs):
        sigmas = self.model.get_sigmas(S)
        if x_T is not None:   # explicit None test; a bare tensor is ambiguous in a boolean context
            x = x_T
        else:
            x = torch.randn([batch_size, *shape], device=self.device) * sigmas[0]   # start from pure noise
        model_wrap_cfg = CFGDenoiser(self.model)
        extra_args = {'cond': conditioning, 'uncond': unconditional_conditioning, 'cond_scale': unconditional_guidance_scale}
        return (K.sampling.sample_lms(model_wrap_cfg, x, sigmas, extra_args=extra_args, disable=not self.accelerator.is_main_process),
                None)

    def gather(self, samples_ddim):
        return self.accelerator.gather(samples_ddim)
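As an aside, the guidance blend computed in CFGDenoiser.forward is a plain linear extrapolation from the unconditioned prediction toward the conditioned one. A self-contained toy illustration with dummy tensors (not part of the file above):

```python
import torch

uncond = torch.zeros(1, 4)   # stand-in for the prediction with empty conditioning
cond = torch.ones(1, 4)      # stand-in for the prediction with the prompt's conditioning
cond_scale = 7.5             # typical guidance scale

# same formula as CFGDenoiser.forward above
guided = uncond + (cond - uncond) * cond_scale
print(guided)                # tensor([[7.5000, 7.5000, 7.5000, 7.5000]])
```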

ldm/simplet2i.py

@@ -11,7 +11,7 @@ t2i = T2I(outdir = <path> // outputs/txt2img-samples
batch_size = <integer> // how many images to generate per sampling (1)
steps = <integer> // 50
seed = <integer> // current system time
-sampler = ['ddim','plms'] // ddim
+sampler = ['ddim','plms','klms'] // klms
grid = <boolean> // false
width = <integer> // image width, multiple of 64 (512)
height = <integer> // image height, multiple of 64 (512)
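To make the options above concrete, a minimal usage sketch (the txt2img method name and its return value are assumptions based on the module's description, not confirmed by this diff):

```python
from ldm.simplet2i import T2I

# hypothetical: construct with the defaults documented above, overriding a few
t2i = T2I(sampler='klms', steps=50, width=512, height=512)
results = t2i.txt2img('a surrealist painting of a lighthouse')  # assumed entry point
```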
@@ -62,8 +62,9 @@ import time
import math
from ldm.util import instantiate_from_config
from ldm.models.diffusion.ddim import DDIMSampler
from ldm.models.diffusion.plms import PLMSSampler
from ldm.models.diffusion.ksampler import KSampler
class T2I:
"""T2I class
@@ -101,12 +102,13 @@ class T2I:
cfg_scale=7.5,
weights="models/ldm/stable-diffusion-v1/model.ckpt",
config = "configs/latent-diffusion/txt2img-1p4B-eval.yaml",
sampler="plms",
sampler="klms",
latent_channels=4,
downsampling_factor=8,
ddim_eta=0.0, # deterministic
fixed_code=False,
precision='autocast',
full_precision=False,
strength=0.75 # default in scripts/img2img.py
):
self.outdir = outdir
@@ -125,6 +127,7 @@ class T2I:
self.downsampling_factor = downsampling_factor
self.ddim_eta = ddim_eta
self.precision = precision
self.full_precision = full_precision
self.strength = strength
self.model = None # empty for now
self.sampler = None
@@ -256,6 +259,8 @@ class T2I:
        model = self.load_model()  # will instantiate the model or return it from cache
+       precision_scope = autocast if self.precision=="autocast" else nullcontext
        # grid and individual are mutually exclusive, with individual taking priority.
        # not necessary, but needed for compatibility with dream bot
        if (grid is None):
@@ -279,7 +284,8 @@ class T2I:
            assert os.path.isfile(init_img)
            init_image = self._load_img(init_img).to(self.device)
            init_image = repeat(init_image, '1 ... -> b ...', b=batch_size)
-           init_latent = model.get_first_stage_encoding(model.encode_first_stage(init_image))  # move to latent space
+           with precision_scope("cuda"):
+               init_latent = model.get_first_stage_encoding(model.encode_first_stage(init_image))  # move to latent space

            sampler.make_schedule(ddim_num_steps=steps, ddim_eta=ddim_eta, verbose=False)
@@ -292,7 +298,6 @@ class T2I:
            t_enc = int(strength * steps)
            print(f"target t_enc is {t_enc} steps")
-           precision_scope = autocast if self.precision=="autocast" else nullcontext

            images = list()
            seeds = list()
@@ -385,6 +390,9 @@ class T2I:
        elif self.sampler_name == 'ddim':
            print("setting sampler to ddim")
            self.sampler = DDIMSampler(self.model)
+       elif self.sampler_name == 'klms':
+           print("setting sampler to klms")
+           self.sampler = KSampler(self.model, 'lms')
        else:
            print(f"unsupported sampler {self.sampler_name}, defaulting to plms")
            self.sampler = PLMSSampler(self.model)
@@ -401,6 +409,11 @@ class T2I:
        m, u = model.load_state_dict(sd, strict=False)
        model.cuda()
        model.eval()
+       if self.full_precision:
+           print('Using slower but more accurate full-precision math (--full_precision)')
+       else:
+           print('Using half precision math. Call with --full_precision to use full precision')
+           model.half()
        return model
def _load_img(self,path):
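The model.half() call above is where the default memory saving comes from: parameters are stored as float16 instead of float32. A standalone sketch of the effect (not part of the file above):

```python
import torch.nn as nn

layer = nn.Linear(1024, 1024)
fp32_bytes = sum(p.numel() * p.element_size() for p in layer.parameters())
layer.half()   # convert parameters to float16, as _load_model does for the whole model
fp16_bytes = sum(p.numel() * p.element_size() for p in layer.parameters())
print(fp32_bytes, fp16_bytes)   # 4198400 2099200 -- half the storage
```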

scripts/dream.py

@@ -1,8 +1,9 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
import argparse
import shlex
import atexit
import os
import sys
# readline unavailable on windows systems
try:
@@ -11,7 +12,7 @@ try:
except:
readline_available = False
-debugging = True
+debugging = False
def main():
''' Initialize command-line parsers and the diffusion model '''
@@ -35,6 +36,7 @@ def main():
setup_readline()
print("* Initializing, be patient...\n")
sys.path.append('.')
from pytorch_lightning import logging
from ldm.simplet2i import T2I
@@ -48,6 +50,7 @@ def main():
outdir=opt.outdir,
sampler=opt.sampler,
weights=weights,
full_precision=opt.full_precision,
config=config)
# make sure the output directory exists
@@ -164,14 +167,18 @@ def create_argv_parser():
                        type=int,
                        default=1,
                        help="number of images to generate")
    parser.add_argument('-F','--full_precision',
                        dest='full_precision',
                        action='store_true',
                        help="use slower full precision math for calculations")
    parser.add_argument('-b','--batch_size',
                        type=int,
                        default=1,
                        help="number of images to produce per iteration (currently not working properly - producing too many images)")
    parser.add_argument('--sampler',
-                       choices=['plms','ddim'],
-                       default='plms',
-                       help="which sampler to use")
+                       choices=['plms','ddim','klms'],
+                       default='klms',
+                       help="which sampler to use (klms)")
    parser.add_argument('-o',
                        '--outdir',
                        type=str,
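A hypothetical launch combining the switches added above, overriding the new klms default and forcing full-precision math:
```
(ldm) ~/stable-diffusion$ python3 scripts/dream.py --sampler plms --full_precision
```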

scripts/preload_models.py

@@ -1,5 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# Before running stable-diffusion on an internet-isolated machine,
# run this script from one with internet connectivity. The
# two machines must share a common .cache directory.
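As a sketch of the preloading idea (the exact models fetched by preload_models.py are not shown in this diff; the tokenizer below is an illustrative stand-in):
```python
# Run once on an internet-connected machine that shares its .cache directory
# with the GPU machine; from_pretrained() downloads into the local cache.
from transformers import BertTokenizerFast

BertTokenizerFast.from_pretrained('bert-base-uncased')
```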

scripts/txt2img.py

@@ -12,6 +12,10 @@ from pytorch_lightning import seed_everything
from torch import autocast
from contextlib import contextmanager, nullcontext
import accelerate
import k_diffusion as K
import torch.nn as nn
from ldm.util import instantiate_from_config
from ldm.models.diffusion.ddim import DDIMSampler
from ldm.models.diffusion.plms import PLMSSampler
@@ -80,6 +84,11 @@ def main():
action='store_true',
help="use plms sampling",
)
parser.add_argument(
"--klms",
action='store_true',
help="use klms sampling",
)
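A hypothetical invocation exercising the new flag (assuming the script's standard --prompt argument):
```
(ldm) ~/stable-diffusion$ python scripts/txt2img.py --klms --prompt "a photograph of an astronaut riding a horse"
```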
parser.add_argument(
"--laion400m",
action='store_true',
@@ -190,6 +199,22 @@ def main():
    device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
    model = model.to(device)

    # for klms
    model_wrap = K.external.CompVisDenoiser(model)
    accelerator = accelerate.Accelerator()
    device = accelerator.device

    class CFGDenoiser(nn.Module):
        def __init__(self, model):
            super().__init__()
            self.inner_model = model

        def forward(self, x, sigma, uncond, cond, cond_scale):
            x_in = torch.cat([x] * 2)
            sigma_in = torch.cat([sigma] * 2)
            cond_in = torch.cat([uncond, cond])
            uncond, cond = self.inner_model(x_in, sigma_in, cond=cond_in).chunk(2)
            return uncond + (cond - uncond) * cond_scale

    if opt.plms:
        sampler = PLMSSampler(model)
    else:
@@ -226,8 +251,8 @@ def main():
        with model.ema_scope():
            tic = time.time()
            all_samples = list()
-           for n in trange(opt.n_iter, desc="Sampling"):
-               for prompts in tqdm(data, desc="data"):
+           for n in trange(opt.n_iter, desc="Sampling", disable=not accelerator.is_main_process):
+               for prompts in tqdm(data, desc="data", disable=not accelerator.is_main_process):
                    uc = None
                    if opt.scale != 1.0:
                        uc = model.get_learned_conditioning(batch_size * [""])
@@ -235,18 +260,32 @@ def main():
                        prompts = list(prompts)
                    c = model.get_learned_conditioning(prompts)
                    shape = [opt.C, opt.H // opt.f, opt.W // opt.f]
-                   samples_ddim, _ = sampler.sample(S=opt.ddim_steps,
-                                                    conditioning=c,
-                                                    batch_size=opt.n_samples,
-                                                    shape=shape,
-                                                    verbose=False,
-                                                    unconditional_guidance_scale=opt.scale,
-                                                    unconditional_conditioning=uc,
-                                                    eta=opt.ddim_eta,
-                                                    x_T=start_code)
+                   if not opt.klms:
+                       samples_ddim, _ = sampler.sample(S=opt.ddim_steps,
+                                                        conditioning=c,
+                                                        batch_size=opt.n_samples,
+                                                        shape=shape,
+                                                        verbose=False,
+                                                        unconditional_guidance_scale=opt.scale,
+                                                        unconditional_conditioning=uc,
+                                                        eta=opt.ddim_eta,
+                                                        x_T=start_code)
+                   else:
+                       sigmas = model_wrap.get_sigmas(opt.ddim_steps)
+                       if start_code:
+                           x = start_code
+                       else:
+                           x = torch.randn([opt.n_samples, *shape], device=device) * sigmas[0] # for GPU draw
+                       model_wrap_cfg = CFGDenoiser(model_wrap)
+                       extra_args = {'cond': c, 'uncond': uc, 'cond_scale': opt.scale}
+                       samples_ddim = K.sampling.sample_lms(model_wrap_cfg, x, sigmas, extra_args=extra_args, disable=not accelerator.is_main_process)

                    x_samples_ddim = model.decode_first_stage(samples_ddim)
                    x_samples_ddim = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0)
+                   if opt.klms:
+                       x_sample = accelerator.gather(x_samples_ddim)

                    if not opt.skip_save:
                        for x_sample in x_samples_ddim: