Compare commits


18 Commits

Author SHA1 Message Date
Lincoln Stein
a6d6bafd13 updated README with info on weighted partial prompts 2022-08-23 01:58:47 -04:00
Lincoln Stein
9d1343dce3 resolved conflicts and tested 2022-08-23 01:44:43 -04:00
Lincoln Stein
11c0df07b7 prompt weighting not working 2022-08-23 01:23:14 -04:00
Lincoln Stein
ca8a799373 Merge pull request #24 from bakkot/patch-1
Fix usage of simplified API in readme
2022-08-23 01:02:13 -04:00
Lincoln Stein
710b908290 Keyboard interrupt retains seed and log information in files produced prior to interrupt. Closes #21 2022-08-23 00:51:38 -04:00
Kevin Gibbons
c80ce4fff5 fix default config to match docs / dream.py 2022-08-22 21:46:22 -07:00
Lincoln Stein
bc7b1fdd37 Added --from_file argument to load input from a file. Closes #23 2022-08-23 00:30:06 -04:00
Kevin Gibbons
1b7d414784 Fix usage of simplified API in readme 2022-08-22 21:01:15 -07:00
Lincoln Stein
6d1219deec fixed filenames 2022-08-22 23:56:36 -04:00
Lincoln Stein
e019de34ac can now change output directories in mid-session using cd and pwd commands 2022-08-22 21:14:31 -04:00
Lincoln Stein
88563fd27a added support for cd command in path completer 2022-08-22 21:01:06 -04:00
Lincoln Stein
18289dabcb better exception handling for out of memory errors and badly formatted prompts 2022-08-22 16:55:18 -04:00
Lincoln Stein
e70169257e better exception handling for out of memory errors and badly formatted prompts 2022-08-22 16:55:06 -04:00
Lincoln Stein
2afa87e911 Update README.md 2022-08-22 15:45:44 -04:00
Lincoln Stein
281e381cfc clarify use of preload_models.py 2022-08-22 15:42:06 -04:00
xra
e4eb775b63 added optional parameter to skip subprompt weight normalization
allows more control when fine-tuning
2022-08-23 00:03:32 +09:00
xra
a3632f5b4f improved comments & added warning if value couldn't be parsed correctly 2022-08-22 23:32:01 +09:00
xra
2736d7e15e optional weighting for creative blending of prompts
example: "an apple: a banana:0 a watermelon:0.5"
        the above example turns into 3 sub-prompts:
        "an apple" 1.0 (default if no value)
        "a banana" 0.0
        "a watermelon" 0.5
        The weights are added and normalized
        The resulting image will be: apple 66%, banana 0%, watermelon 33%
2022-08-22 22:59:06 +09:00
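The arithmetic above is easy to check; a minimal sketch (the dict literal is illustrative, not code from the commit):

```python
# Weights sum to 1.5; normalizing gives roughly the percentages quoted above.
weights = {"an apple": 1.0, "a banana": 0.0, "a watermelon": 0.5}
total = sum(weights.values())  # 1.5
for subprompt, w in weights.items():
    print(f"{subprompt}: {w / total:.0%}")  # an apple: 67%, a banana: 0%, a watermelon: 33%
```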
3 changed files with 368 additions and 143 deletions

README.md

@@ -57,16 +57,16 @@ dream> q
00011.png: "there's a fly in my soup" -n6 -g -S 2685670268
~~~~
The dream> prompt's arguments are pretty much identical to those used
in the Discord bot, except you don't need to type "!dream" (it doesn't
hurt if you do). A significant change is that creation of individual
images is now the default unless --grid (-g) is given. For backward
compatibility, the -i switch is recognized. For command-line help
type -h (or --help) at the dream> prompt.
The script itself also recognizes a series of command-line switches
that will change important global defaults, such as the directory for
image outputs and the location of the model weight files.
## Image-to-Image
@@ -84,8 +84,45 @@ The --init_img (-I) option gives the path to the seed picture. --strength (-f) controls how much
the original will be modified, ranging from 0.0 (keep the original intact), to 1.0 (ignore the original
completely). The default is 0.75, and ranges from 0.25-0.75 give interesting results.
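For instance, a mid-range strength at the dream> prompt might look like this (the prompt and image path here are hypothetical):
~~~~
dream> "waterlilies in the style of Monet" -I ./pond.png -f 0.6
~~~~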
## Weighted Prompts
You may weight different sections of the prompt to tell the sampler to attach different levels of
priority to them, by adding :(number) to the end of the section you wish to up- or downweight.
For example, consider this prompt:
~~~~
tabby cat:0.25 white duck:0.75 hybrid
~~~~
This will tell the sampler to invest 25% of its effort on the tabby
cat aspect of the image and 75% on the white duck aspect
(surprisingly, this example actually works). The prompt weights can
use any combination of integers and floating point numbers, and they
do not need to add up to 1. A practical example of using this type of
weighting is described here:
https://www.reddit.com/r/StableDiffusion/comments/wvb7q7/using_prompt_weights_to_tweak_an_image_with/
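The same weighting can be driven through the simplified Python API described further down. A minimal sketch, assuming the txt2img signature shown in the simplet2i.py diff in this compare, whose skip_normalize keyword mirrors the new -x/--skip_normalize switch:
~~~~
from ldm.simplet2i import T2I
model = T2I()
# weights need not sum to 1; they are normalized unless skip_normalize=True
outputs = model.txt2img("tabby cat:0.25 white duck:0.75 hybrid", skip_normalize=False)
~~~~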
## Changes
* v1.06 (23 August 2022)
* Added weighted prompt support contributed by [xraxra](https://github.com/xraxra)
* Example of using weighted prompts to tweak a demonic figure contributed by [bmaltais](https://github.com/bmaltais)
* v1.05 (22 August 2022 - after the drop)
* Filenames now use the following formats:
000010.95183149.png -- Two files produced by the same command (e.g. -n2),
000010.26742632.png -- distinguished by a different seed.
000011.455191342.01.png -- Two files produced by the same command using
000011.455191342.02.png -- a batch size>1 (e.g. -b2). They have the same seed.
000011.4160627868.grid#1-4.png -- a grid of four images (-g); the whole grid can
be regenerated with the indicated key
* It should no longer be possible for one image to overwrite another
* You can use the "cd" and "pwd" commands at the dream> prompt to set and retrieve
the path of the output directory.
* v1.04 (22 August 2022 - after the drop)
* Updated README to reflect installation of the released weights.
* Suppressed very noisy and inconsequential warning when loading the frozen CLIP
@@ -110,6 +147,8 @@ completely). The default is 0.75, and ranges from 0.25-0.75 give interesting results.
## Installation
There are separate installation walkthroughs for [Linux/Mac](#linuxmac) and [Windows](#windows).
### Linux/Mac
1. You will need to install the following prerequisites if they are not already available. Use your
@@ -149,24 +188,27 @@ After these steps, your command prompt will be prefixed by "(ldm)" as shown above.
(ldm) ~/stable-diffusion$ python3 scripts/preload_models.py
```
Note that this step is necessary because I modified the original
just-in-time model loading scheme to allow the script to work on GPU
machines that are not internet connected. See [Workaround for machines with limited internet connectivity](#workaround-for-machines-with-limited-internet-connectivity)
7. Now you need to install the weights for the stable diffusion model.
For running with the released weights, you will first need to set up an account with Hugging Face (https://huggingface.co).
Use your credentials to log in, and then point your browser at https://huggingface.co/CompVis/stable-diffusion-v-1-4-original.
You may be asked to sign a license agreement at this point.
Click on "Files and versions" near the top of the page, and then click on the file named "sd-v1-4.ckpt". You'll be taken
to a page that prompts you to click the "download" link. Save the file somewhere safe on your local machine.
Now run the following commands from within the stable-diffusion directory. This will create a symbolic
link from the stable-diffusion model.ckpt file to the true location of the sd-v1-4.ckpt file.
```
(ldm) ~/stable-diffusion$ mkdir -p models/ldm/stable-diffusion-v1
(ldm) ~/stable-diffusion$ ln -sf /path/to/sd-v1-4.ckpt models/ldm/stable-diffusion-v1/model.ckpt
```
The weight file is >4 GB in size, so downloading may take a while.
8. Start generating images!
```
# for the pre-release weights use the -l or --laion400m switch
@@ -181,7 +223,7 @@ The weight file is >4 GB in size, so downloading may take a while.
9. Subsequently, to relaunch the script, be sure to run "conda activate ldm" (step 5, second command), enter the "stable-diffusion"
directory, and then launch the dream script (step 8). If you forget to activate the ldm environment, the script will fail with multiple ModuleNotFound errors.
#### Updating to newer versions of the script
This distribution is changing rapidly. If you used the "git clone" method (step 5) to download the stable-diffusion directory, then to update to the latest and greatest version, launch the Anaconda window, enter "stable-diffusion", and type:
```
@@ -232,7 +274,7 @@ downloaded just-in-time)
For running with the released weights, you will first need to set up
an account with Hugging Face (https://huggingface.co). Use your
credentials to log in, and then point your browser at
https://huggingface.co/CompVis/stable-diffusion-v-1-4-original. You
may be asked to sign a license agreement at this point.
@@ -243,15 +285,16 @@ safe on your local machine. The weight file is >4 GB in size, so
downloading may take a while.
Now run the following commands from **within the stable-diffusion
directory** to copy the weights file to the right place:
```
mkdir -p models/ldm/stable-diffusion-v1
copy C:\path\to\sd-v1-4.ckpt models\ldm\stable-diffusion-v1\model.ckpt
```
Please replace "C:\path\to\sd-v1-4.ckpt" with the correct path to wherever
you stashed this file. If you prefer not to copy or move the .ckpt file,
you may instead create a shortcut to it from within
"models\ldm\stable-diffusion-v1\".
10. Start generating images!
```
@@ -263,7 +306,7 @@ python scripts\dream.py
```
11. Subsequently, to relaunch the script, first activate the Anaconda command window (step 4), enter the stable-diffusion directory (step 6, "cd \path\to\stable-diffusion"), run "conda activate ldm" (step 7b), and then launch the dream script (step 10).
#### Updating to newer versions of the script
This distribution is changing rapidly. If you used the "git clone" method (step 5) to download the stable-diffusion directory, then to update to the latest and greatest version, launch the Anaconda window, enter "stable-diffusion", and type:
```
@@ -280,7 +323,7 @@ lets you create images from a prompt in just three lines of code:
~~~~
from ldm.simplet2i import T2I
model = T2I()
outputs = model.txt2img("a unicorn in manhattan")
~~~~
Outputs is a list of lists in the format [[filename1,seed1],[filename2,seed2]...]
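A minimal sketch of consuming that return value:
~~~~
for filename, seed in outputs:
    print(f"{filename} was generated with seed {seed}")
~~~~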

ldm/simplet2i.py

@@ -60,6 +60,7 @@ from torch import autocast
from contextlib import contextmanager, nullcontext
import time
import math
import re
from ldm.util import instantiate_from_config
from ldm.models.diffusion.ddim import DDIMSampler
@@ -103,7 +104,7 @@ The vast majority of these arguments default to reasonable values.
                 seed=None,
                 cfg_scale=7.5,
                 weights="models/ldm/stable-diffusion-v1/model.ckpt",
                 config = "configs/latent-diffusion/txt2img-1p4B-eval.yaml",
                 config = "configs/stable-diffusion/v1-inference.yaml",
                 sampler_name="klms",
                 latent_channels=4,
                 downsampling_factor=8,
@@ -142,7 +143,7 @@ The vast majority of these arguments default to reasonable values.
    def txt2img(self,prompt,outdir=None,batch_size=None,iterations=None,
                steps=None,seed=None,grid=None,individual=None,width=None,height=None,
                cfg_scale=None,ddim_eta=None,strength=None,init_img=None):
                cfg_scale=None,ddim_eta=None,strength=None,init_img=None,skip_normalize=False):
        """
        Generate an image from the prompt, writing iteration images into the outdir
        The output is a list of lists in the format: [[filename1,seed1], [filename2,seed2],...]
@@ -171,7 +172,6 @@ The vast majority of these arguments default to reasonable values.
        # make directories and establish names for the output files
        os.makedirs(outdir, exist_ok=True)
        base_count = len(os.listdir(outdir))-1
        start_code = None
        if self.fixed_code:
@@ -185,65 +185,90 @@ The vast majority of these arguments default to reasonable values.
        sampler = self.sampler
        images = list()
        seeds = list()
        filename = None
        image_count = 0
        tic = time.time()
        with torch.no_grad():
            with precision_scope("cuda"):
                with model.ema_scope():
                    all_samples = list()
                    for n in trange(iterations, desc="Sampling"):
                        seed_everything(seed)
                        for prompts in tqdm(data, desc="data", dynamic_ncols=True):
                            uc = None
                            if cfg_scale != 1.0:
                                uc = model.get_learned_conditioning(batch_size * [""])
                            if isinstance(prompts, tuple):
                                prompts = list(prompts)
                            c = model.get_learned_conditioning(prompts)
                            shape = [self.latent_channels, height // self.downsampling_factor, width // self.downsampling_factor]
                            samples_ddim, _ = sampler.sample(S=steps,
                                conditioning=c,
                                batch_size=batch_size,
                                shape=shape,
                                verbose=False,
                                unconditional_guidance_scale=cfg_scale,
                                unconditional_conditioning=uc,
                                eta=ddim_eta,
                                x_T=start_code)
                            x_samples_ddim = model.decode_first_stage(samples_ddim)
                            x_samples_ddim = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0)
        # Gawd. Too many levels of indent here. Need to refactor into smaller routines!
        try:
            with torch.no_grad():
                with precision_scope("cuda"):
                    with model.ema_scope():
                        all_samples = list()
                        for n in trange(iterations, desc="Sampling"):
                            seed_everything(seed)
                            for prompts in tqdm(data, desc="data", dynamic_ncols=True):
                                uc = None
                                if cfg_scale != 1.0:
                                    uc = model.get_learned_conditioning(batch_size * [""])
                                if isinstance(prompts, tuple):
                                    prompts = list(prompts)
                            if not grid:
                                for x_sample in x_samples_ddim:
                                    x_sample = 255. * rearrange(x_sample.cpu().numpy(), 'c h w -> h w c')
                                    filename = os.path.join(outdir, f"{base_count:05}.png")
                                    Image.fromarray(x_sample.astype(np.uint8)).save(filename)
                                    images.append([filename,seed])
                                    base_count += 1
                            else:
                                all_samples.append(x_samples_ddim)
                                seeds.append(seed)
                                # weighted sub-prompts
                                subprompts,weights = T2I._split_weighted_subprompts(prompts[0])
                                if len(subprompts) > 1:
                                    # i dont know if this is correct.. but it works
                                    c = torch.zeros_like(uc)
                                    # get total weight for normalizing
                                    totalWeight = sum(weights)
                                    # normalize each "sub prompt" and add it
                                    for i in range(0,len(subprompts)):
                                        weight = weights[i]
                                        if not skip_normalize:
                                            weight = weight / totalWeight
                                        c = torch.add(c,model.get_learned_conditioning(subprompts[i]), alpha=weight)
                                else: # just standard 1 prompt
                                    c = model.get_learned_conditioning(prompts)
                        seed = self._new_seed()
                    if grid:
                        images = self._make_grid(samples=all_samples,
                            seeds=seeds,
                            batch_size=batch_size,
                            iterations=iterations,
                            outdir=outdir)
                                shape = [self.latent_channels, height // self.downsampling_factor, width // self.downsampling_factor]
                                samples_ddim, _ = sampler.sample(S=steps,
                                    conditioning=c,
                                    batch_size=batch_size,
                                    shape=shape,
                                    verbose=False,
                                    unconditional_guidance_scale=cfg_scale,
                                    unconditional_conditioning=uc,
                                    eta=ddim_eta,
                                    x_T=start_code)
                                x_samples_ddim = model.decode_first_stage(samples_ddim)
                                x_samples_ddim = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0)
                                if not grid:
                                    for x_sample in x_samples_ddim:
                                        x_sample = 255. * rearrange(x_sample.cpu().numpy(), 'c h w -> h w c')
                                        filename = self._unique_filename(outdir,previousname=filename,
                                            seed=seed,isbatch=(batch_size>1))
                                        assert not os.path.exists(filename)
                                        Image.fromarray(x_sample.astype(np.uint8)).save(filename)
                                        images.append([filename,seed])
                                else:
                                    all_samples.append(x_samples_ddim)
                                    seeds.append(seed)
                                image_count += 1
                            seed = self._new_seed()
                        if grid:
                            images = self._make_grid(samples=all_samples,
                                seeds=seeds,
                                batch_size=batch_size,
                                iterations=iterations,
                                outdir=outdir)
        except KeyboardInterrupt:
            print('*interrupted*')
            print('Partial results will be returned; if --grid was requested, nothing will be returned.')
        except RuntimeError as e:
            print(str(e))
        toc = time.time()
        print(f'{batch_size * iterations} images generated in',"%4.2fs"% (toc-tic))
        print(f'{image_count} images generated in',"%4.2fs"% (toc-tic))
        return images
    # There is lots of shared code between this and txt2img and should be refactored.
    def img2img(self,prompt,outdir=None,init_img=None,batch_size=None,iterations=None,
                steps=None,seed=None,grid=None,individual=None,width=None,height=None,
                cfg_scale=None,ddim_eta=None,strength=None):
                cfg_scale=None,ddim_eta=None,strength=None,skip_normalize=False):
        """
        Generate an image from the prompt and the initial image, writing iteration images into the outdir
        The output is a list of lists in the format: [[filename1,seed1], [filename2,seed2],...]
@@ -283,7 +308,6 @@ The vast majority of these arguments default to reasonable values.
        # make directories and establish names for the output files
        os.makedirs(outdir, exist_ok=True)
        base_count = len(os.listdir(outdir))-1
        assert os.path.isfile(init_img)
        init_image = self._load_img(init_img).to(self.device)
@@ -304,60 +328,83 @@ The vast majority of these arguments default to reasonable values.
        images = list()
        seeds = list()
        filename = None
        image_count = 0 # actual number of iterations performed
        tic = time.time()
        with torch.no_grad():
            with precision_scope("cuda"):
                with model.ema_scope():
                    all_samples = list()
                    for n in trange(iterations, desc="Sampling"):
                        seed_everything(seed)
                        for prompts in tqdm(data, desc="data", dynamic_ncols=True):
                            uc = None
                            if cfg_scale != 1.0:
                                uc = model.get_learned_conditioning(batch_size * [""])
                            if isinstance(prompts, tuple):
                                prompts = list(prompts)
                            c = model.get_learned_conditioning(prompts)
                            # encode (scaled latent)
                            z_enc = sampler.stochastic_encode(init_latent, torch.tensor([t_enc]*batch_size).to(self.device))
                            # decode it
                            samples = sampler.decode(z_enc, c, t_enc, unconditional_guidance_scale=cfg_scale,
                                unconditional_conditioning=uc,)
        # Gawd. Too many levels of indent here. Need to refactor into smaller routines!
        try:
            with torch.no_grad():
                with precision_scope("cuda"):
                    with model.ema_scope():
                        all_samples = list()
                        for n in trange(iterations, desc="Sampling"):
                            seed_everything(seed)
                            for prompts in tqdm(data, desc="data", dynamic_ncols=True):
                                uc = None
                                if cfg_scale != 1.0:
                                    uc = model.get_learned_conditioning(batch_size * [""])
                                if isinstance(prompts, tuple):
                                    prompts = list(prompts)
                            x_samples = model.decode_first_stage(samples)
                            x_samples = torch.clamp((x_samples + 1.0) / 2.0, min=0.0, max=1.0)
                                # weighted sub-prompts
                                subprompts,weights = T2I._split_weighted_subprompts(prompts[0])
                                if len(subprompts) > 1:
                                    # i dont know if this is correct.. but it works
                                    c = torch.zeros_like(uc)
                                    # get total weight for normalizing
                                    totalWeight = sum(weights)
                                    # normalize each "sub prompt" and add it
                                    for i in range(0,len(subprompts)):
                                        weight = weights[i]
                                        if not skip_normalize:
                                            weight = weight / totalWeight
                                        c = torch.add(c,model.get_learned_conditioning(subprompts[i]), alpha=weight)
                                else: # just standard 1 prompt
                                    c = model.get_learned_conditioning(prompts)
                            if not grid:
                                for x_sample in x_samples:
                                    x_sample = 255. * rearrange(x_sample.cpu().numpy(), 'c h w -> h w c')
                                    filename = os.path.join(outdir, f"{base_count:05}.png")
                                    Image.fromarray(x_sample.astype(np.uint8)).save(filename)
                                    images.append([filename,seed])
                                    base_count += 1
                            else:
                                all_samples.append(x_samples)
                                seeds.append(seed)
                                # encode (scaled latent)
                                z_enc = sampler.stochastic_encode(init_latent, torch.tensor([t_enc]*batch_size).to(self.device))
                                # decode it
                                samples = sampler.decode(z_enc, c, t_enc, unconditional_guidance_scale=cfg_scale,
                                    unconditional_conditioning=uc,)
                        seed = self._new_seed()
                                x_samples = model.decode_first_stage(samples)
                                x_samples = torch.clamp((x_samples + 1.0) / 2.0, min=0.0, max=1.0)
                    if grid:
                        images = self._make_grid(samples=all_samples,
                            seeds=seeds,
                            batch_size=batch_size,
                            iterations=iterations,
                            outdir=outdir)
                                if not grid:
                                    for x_sample in x_samples:
                                        x_sample = 255. * rearrange(x_sample.cpu().numpy(), 'c h w -> h w c')
                                        filename = self._unique_filename(outdir,previousname=filename,
                                            seed=seed,isbatch=(batch_size>1))
                                        assert not os.path.exists(filename)
                                        Image.fromarray(x_sample.astype(np.uint8)).save(filename)
                                        images.append([filename,seed])
                                else:
                                    all_samples.append(x_samples)
                                    seeds.append(seed)
                                image_count +=1
                            seed = self._new_seed()
                        if grid:
                            images = self._make_grid(samples=all_samples,
                                seeds=seeds,
                                batch_size=batch_size,
                                iterations=iterations,
                                outdir=outdir)
        except KeyboardInterrupt:
            print('*interrupted*')
            print('Partial results will be returned; if --grid was requested, nothing will be returned.')
        except RuntimeError as e:
            print(str(e))
        toc = time.time()
        print(f'{batch_size * iterations} images generated in',"%4.2fs"% (toc-tic))
        print(f'{image_count} images generated in',"%4.2fs"% (toc-tic))
        return images
    def _make_grid(self,samples,seeds,batch_size,iterations,outdir):
        images = list()
        base_count = len(os.listdir(outdir))-1
        n_rows = batch_size if batch_size>1 else int(math.sqrt(batch_size * iterations))
        # save as grid
        grid = torch.stack(samples, 0)
@@ -366,7 +413,7 @@ The vast majority of these arguments default to reasonable values.
        # to image
        grid = 255. * rearrange(grid, 'c h w -> h w c').cpu().numpy()
        filename = os.path.join(outdir, f"{base_count:05}.png")
        filename = self._unique_filename(outdir,seed=seeds[0],grid_count=batch_size*iterations)
        Image.fromarray(grid.astype(np.uint8)).save(filename)
        for s in seeds:
            images.append([filename,s])
@@ -430,3 +477,85 @@ The vast majority of these arguments default to reasonable values.
        image = image[None].transpose(0, 3, 1, 2)
        image = torch.from_numpy(image)
        return 2.*image - 1.
    def _unique_filename(self,outdir,previousname=None,seed=0,isbatch=False,grid_count=None):
        revision = 1
        if previousname is None:
            # count up until we find an unfilled slot
            dir_list = [a.split('.',1)[0] for a in os.listdir(outdir)]
            uniques = dict.fromkeys(dir_list,True)
            basecount = 1
            while f'{basecount:06}' in uniques:
                basecount += 1
            if grid_count is not None:
                grid_label = f'grid#1-{grid_count}'
                filename = f'{basecount:06}.{seed}.{grid_label}.png'
            elif isbatch:
                filename = f'{basecount:06}.{seed}.01.png'
            else:
                filename = f'{basecount:06}.{seed}.png'
            return os.path.join(outdir,filename)
        else:
            previousname = os.path.basename(previousname)
            x = re.match('^(\d+)\..*\.png',previousname)
            if not x:
                return self._unique_filename(outdir,previousname,seed)
            basecount = int(x.groups()[0])
            series = 0
            finished = False
            while not finished:
                series += 1
                filename = f'{basecount:06}.{seed}.png'
                if isbatch or os.path.exists(os.path.join(outdir,filename)):
                    filename = f'{basecount:06}.{seed}.{series:02}.png'
                finished = not os.path.exists(os.path.join(outdir,filename))
            return os.path.join(outdir,filename)
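A hedged sketch of the naming scheme this helper produces, assuming an existing outputs directory; the seeds and the counters shown in the comments are illustrative, not guaranteed results:

```python
t2i = T2I()
first = t2i._unique_filename('outputs', seed=95183149)     # e.g. outputs/000010.95183149.png
batch = t2i._unique_filename('outputs', previousname=first,
                             seed=95183149, isbatch=True)  # e.g. outputs/000010.95183149.01.png
grid  = t2i._unique_filename('outputs', seed=4160627868,
                             grid_count=4)                 # e.g. outputs/000011.4160627868.grid#1-4.png
```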
    def _split_weighted_subprompts(text):
        """
        grabs all text up to the first occurrence of ':'
        uses the grabbed text as a sub-prompt, and takes the value following ':' as weight
        if ':' has no value defined, defaults to 1.0
        repeats until no text remaining
        """
        remaining = len(text)
        prompts = []
        weights = []
        while remaining > 0:
            if ":" in text:
                idx = text.index(":") # first occurrence from start
                # grab up to index as sub-prompt
                prompt = text[:idx]
                remaining -= idx
                # remove from main text
                text = text[idx+1:]
                # find value for weight
                if " " in text:
                    idx = text.index(" ") # first occurrence
                else: # no space, read to end
                    idx = len(text)
                if idx != 0:
                    try:
                        weight = float(text[:idx])
                    except: # couldn't treat as float
                        print(f"Warning: '{text[:idx]}' is not a value, are you missing a space?")
                        weight = 1.0
                else: # no value found
                    weight = 1.0
                # remove from main text
                remaining -= idx
                text = text[idx+1:]
                # append the sub-prompt and its weight
                prompts.append(prompt)
                weights.append(weight)
            else: # no : found
                if len(text) > 0: # there is still text though
                    # take remainder as weight 1
                    prompts.append(text)
                    weights.append(1.0)
                remaining = 0
        return prompts, weights
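A usage sketch tracing this parser on the example from the commit message above; the expected values follow from the docstring:

```python
prompts, weights = T2I._split_weighted_subprompts("an apple: a banana:0 a watermelon:0.5")
print(prompts)  # ['an apple', 'a banana', 'a watermelon']
print(weights)  # [1.0, 0.0, 0.5]
```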

scripts/dream.py

@@ -67,36 +67,71 @@ def main():
    # gets rid of annoying messages about random seed
    logging.getLogger("pytorch_lightning").setLevel(logging.ERROR)
    infile = None
    try:
        if opt.infile is not None:
            infile = open(opt.infile,'r')
    except FileNotFoundError as e:
        print(e)
        exit(-1)
    # preload the model
    if not debugging:
        t2i.load_model()
    print("\n* Initialization done! Awaiting your command (-h for help, q to quit)...")
    print("\n* Initialization done! Awaiting your command (-h for help, 'q' to quit, 'cd' to change output dir, 'pwd' to print output dir)...")
    log_path = os.path.join(opt.outdir,"dream_log.txt")
    log_path = os.path.join(opt.outdir,'dream_log.txt')
    with open(log_path,'a') as log:
        cmd_parser = create_cmd_parser()
        main_loop(t2i,cmd_parser,log)
        main_loop(t2i,cmd_parser,log,infile)
        log.close()
    if infile:
        infile.close()
def main_loop(t2i,parser,log):
def main_loop(t2i,parser,log,infile):
    ''' prompt/read/execute loop '''
    done = False
    while not done:
        try:
            command = input("dream> ")
            command = infile.readline() if infile else input("dream> ")
        except EOFError:
            done = True
            break
        elements = shlex.split(command)
        if len(elements)==0:
            continue
        if elements[0]=='q': #
        if infile and len(command)==0:
            done = True
            break
        if command.startswith(('#','//')):
            continue
        try:
            elements = shlex.split(command)
        except ValueError as e:
            print(str(e))
            continue
        if len(elements)==0:
            continue
        if elements[0]=='q':
            done = True
            break
        if elements[0]=='cd' and len(elements)>1:
            if os.path.exists(elements[1]):
                print(f"setting image output directory to {elements[1]}")
                t2i.outdir=elements[1]
            else:
                print(f"directory {elements[1]} does not exist")
            continue
        if elements[0]=='pwd':
            print(f"current output directory is {t2i.outdir}")
            continue
        if elements[0].startswith('!dream'): # in case a stored prompt still contains the !dream command
            elements.pop(0)
@@ -123,16 +158,13 @@ def main_loop(t2i,parser,log):
print("Try again with a prompt!")
continue
try:
if opt.init_img is None:
results = t2i.txt2img(**vars(opt))
else:
results = t2i.img2img(**vars(opt))
print("Outputs:")
write_log_message(t2i,opt,results,log)
except KeyboardInterrupt:
print('*interrupted*')
continue
if opt.init_img is None:
results = t2i.txt2img(**vars(opt))
else:
results = t2i.img2img(**vars(opt))
print("Outputs:")
write_log_message(t2i,opt,results,log)
print("goodbye!")
@@ -147,7 +179,13 @@ def write_log_message(t2i,opt,results,logfile):
    img_num = 1
    batch_size = opt.batch_size or t2i.batch_size
    seenit = {}
    seeds = [a[1] for a in results]
    if batch_size > 1:
        seeds = f"(seeds for each batch row: {seeds})"
    else:
        seeds = f"(seeds for individual images: {seeds})"
    for r in results:
        seed = r[1]
        log_message = (f'{r[0]}: {prompt_str} -S{seed}')
@@ -166,7 +204,10 @@ def write_log_message(t2i,opt,results,logfile):
        if r[0] not in seenit:
            seenit[r[0]] = True
            try:
                _write_prompt_to_png(r[0],f'{prompt_str} -S{seed}')
                if opt.grid:
                    _write_prompt_to_png(r[0],f'{prompt_str} -g -S{seed} {seeds}')
                else:
                    _write_prompt_to_png(r[0],f'{prompt_str} -S{seed}')
            except FileNotFoundError:
                print(f"Could not open file '{r[0]}' for reading")
@@ -201,6 +242,10 @@ def create_argv_parser():
                        dest='laion400m',
                        action='store_true',
                        help="fallback to the latent diffusion (laion400m) weights and config")
    parser.add_argument("--from_file",
                        dest='infile',
                        type=str,
                        help="if specified, load prompts from this file")
    parser.add_argument('-n','--iterations',
                        type=int,
                        default=1,
@@ -240,11 +285,13 @@ def create_cmd_parser():
    parser.add_argument('-i','--individual',action='store_true',help="generate individual files (default)")
    parser.add_argument('-I','--init_img',type=str,help="path to input image (supersedes width and height)")
    parser.add_argument('-f','--strength',default=0.75,type=float,help="strength for noising/unnoising. 0.0 preserves image exactly, 1.0 replaces it completely")
    parser.add_argument('-x','--skip_normalize',action='store_true',help="skip subprompt weight normalization")
    return parser
if readline_available:
    def setup_readline():
        readline.set_completer(Completer(['--steps','-s','--seed','-S','--iterations','-n','--batch_size','-b',
        readline.set_completer(Completer(['cd','pwd',
                                          '--steps','-s','--seed','-S','--iterations','-n','--batch_size','-b',
                                          '--width','-W','--height','-H','--cfg_scale','-C','--grid','-g',
                                          '--individual','-i','--init_img','-I','--strength','-f']).complete)
        readline.set_completer_delims(" ")
@@ -266,8 +313,13 @@ if readline_available:
            return
        def complete(self,text,state):
            if text.startswith('-I') or text.startswith('--init_img'):
                return self._image_completions(text,state)
            buffer = readline.get_line_buffer()
            if text.startswith(('-I','--init_img')):
                return self._path_completions(text,state,('.png'))
            if buffer.strip().endswith('cd') or text.startswith(('.','/')):
                return self._path_completions(text,state,())
            response = None
            if state == 0:
@@ -287,12 +339,14 @@ if readline_available:
                response = None
            return response
        def _image_completions(self,text,state):
        def _path_completions(self,text,state,extensions):
            # get the path so far
            if text.startswith('-I'):
                path = text.replace('-I','',1).lstrip()
            elif text.startswith('--init_img='):
                path = text.replace('--init_img=','',1).lstrip()
            else:
                path = text
            matches = list()
@@ -309,7 +363,7 @@ if readline_available:
                if full_path.startswith(path):
                    if os.path.isdir(full_path):
                        matches.append(os.path.join(os.path.dirname(text),n)+'/')
                    elif n.endswith('.png'):
                    elif n.endswith(extensions):
                        matches.append(os.path.join(os.path.dirname(text),n))
            try:
@@ -317,7 +371,6 @@ if readline_available:
            except IndexError:
                response = None
            return response

if __name__ == "__main__":
    main()
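A hypothetical dream> session exercising the new commands and switches (the prompts and paths are made up):
~~~~
$ python3 scripts/dream.py --from_file my_prompts.txt   # run every prompt in the file, then exit
$ python3 scripts/dream.py
dream> pwd
current output directory is outputs/img-samples
dream> cd /tmp/renders
setting image output directory to /tmp/renders
dream> "an apple: a watermelon:0.5" -x -n2              # -x skips subprompt weight normalization
~~~~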