Compare commits
928 Commits
| Author | SHA1 | Date |
|---|---|---|
| *(928 commit rows, f5d1fbd896 through a3a50bb886; only abbreviated SHA1s were captured, the Author and Date cells are empty)* | | |
```diff
@@ -1,3 +1,23 @@
+# use this file as a whitelist
 *
-!environment*.yml
-!docker-build
+!invokeai
+!ldm
+!pyproject.toml
+!README.md
+
+# Guard against pulling in any models that might exist in the directory tree
+**/*.pt*
+**/*.ckpt
+
+# ignore frontend but whitelist dist
+invokeai/frontend/**
+!invokeai/frontend/dist
+
+# ignore invokeai/assets but whitelist invokeai/assets/web
+invokeai/assets
+!invokeai/assets/web
+
+# ignore python cache
+**/__pycache__
+**/*.py[cod]
+**/*.egg-info
```
.editorconfig (new file, 12 lines)

```diff
@@ -0,0 +1,12 @@
+# All files
+[*]
+charset = utf-8
+end_of_line = lf
+indent_size = 2
+indent_style = space
+insert_final_newline = true
+trim_trailing_whitespace = true
+
+# Python
+[*.py]
+indent_size = 4
```
.gitattributes (2 changes)

```diff
@@ -1,4 +1,4 @@
 # Auto normalizes line endings on commit so devs don't need to change local settings.
 # Only affects text files and ignores other file types.
 # For more info see: https://www.aleksandrhovhannisyan.com/blog/crlf-vs-lf-normalizing-line-endings-in-git/
 * text=auto
```
.github/CODEOWNERS (54 changes)

```diff
@@ -1,4 +1,50 @@
-ldm/invoke/pngwriter.py @CapableWeb
-ldm/invoke/server_legacy.py @CapableWeb
-scripts/legacy_api.py @CapableWeb
-tests/legacy_tests.sh @CapableWeb
+# continuous integration
+/.github/workflows/ @mauwii
+
+# documentation
+/docs/ @lstein @mauwii @tildebyte
+mkdocs.yml @lstein @mauwii
+
+# installation and configuration
+/pyproject.toml @mauwii @lstein @ebr
+/docker/ @mauwii
+/scripts/ @ebr @lstein
+/installer/ @ebr @lstein @tildebyte
+ldm/invoke/config @lstein @ebr
+invokeai/assets @lstein @ebr
+invokeai/configs @lstein @ebr
+/ldm/invoke/_version.py @lstein @blessedcoolant
+
+# web ui
+/invokeai/frontend @blessedcoolant @psychedelicious
+/invokeai/backend @blessedcoolant @psychedelicious
+
+# generation and model management
+/ldm/*.py @lstein
+/ldm/generate.py @lstein @keturn
+/ldm/invoke/args.py @lstein @blessedcoolant
+/ldm/invoke/ckpt* @lstein
+/ldm/invoke/ckpt_generator @lstein
+/ldm/invoke/CLI.py @lstein
+/ldm/invoke/config @lstein @ebr @mauwii
+/ldm/invoke/generator @keturn @damian0815
+/ldm/invoke/globals.py @lstein @blessedcoolant
+/ldm/invoke/merge_diffusers.py @lstein
+/ldm/invoke/model_manager.py @lstein @blessedcoolant
+/ldm/invoke/txt2mask.py @lstein
+/ldm/invoke/patchmatch.py @Kyle0654
+/ldm/invoke/restoration @lstein @blessedcoolant
+
+# attention, textual inversion, model configuration
+/ldm/models @damian0815 @keturn
+/ldm/modules @damian0815 @keturn
+
+# Nodes
+apps/ @Kyle0654
+
+# legacy REST API
+# is CapableWeb still engaged?
+/ldm/invoke/pngwriter.py @CapableWeb
+/ldm/invoke/server_legacy.py @CapableWeb
+/scripts/legacy_api.py @CapableWeb
+/tests/legacy_tests.sh @CapableWeb
```
.github/workflows/build-container.yml (94 changes)

```diff
@@ -1,42 +1,92 @@
-# Building the Image without pushing to confirm it is still buildable
-# confirum functionality would unfortunately need way more resources
 name: build container image
 on:
   push:
     branches:
       - 'main'
-      - 'development'
-  pull_request:
-    branches:
-      - 'main'
-      - 'development'
+      - 'update/ci/*'
+    tags:
+      - 'v*.*.*'
 
 jobs:
   docker:
+    if: github.event.pull_request.draft == false
+    strategy:
+      fail-fast: false
+      matrix:
+        flavor:
+          - amd
+          - cuda
+          - cpu
+        include:
+          - flavor: amd
+            pip-extra-index-url: 'https://download.pytorch.org/whl/rocm5.2'
+            dockerfile: docker/Dockerfile
+            platforms: linux/amd64,linux/arm64
+          - flavor: cuda
+            pip-extra-index-url: ''
+            dockerfile: docker/Dockerfile
+            platforms: linux/amd64,linux/arm64
+          - flavor: cpu
+            pip-extra-index-url: 'https://download.pytorch.org/whl/cpu'
+            dockerfile: docker/Dockerfile
+            platforms: linux/amd64,linux/arm64
     runs-on: ubuntu-latest
+    name: ${{ matrix.flavor }}
     steps:
-      - name: prepare docker-tag
-        env:
-          repository: ${{ github.repository }}
-        run: echo "dockertag=${repository,,}" >> $GITHUB_ENV
       - name: Checkout
         uses: actions/checkout@v3
 
+      - name: Docker meta
+        id: meta
+        uses: docker/metadata-action@v4
+        with:
+          github-token: ${{ secrets.GITHUB_TOKEN }}
+          images: ghcr.io/${{ github.repository }}
+          tags: |
+            type=ref,event=branch
+            type=ref,event=tag
+            type=semver,pattern={{version}}
+            type=semver,pattern={{major}}.{{minor}}
+            type=semver,pattern={{major}}
+            type=sha,enable=true,prefix=sha-,format=short
+          flavor: |
+            latest=${{ matrix.flavor == 'cuda' && github.ref == 'refs/heads/main' }}
+            suffix=-${{ matrix.flavor }},onlatest=false
       - name: Set up QEMU
         uses: docker/setup-qemu-action@v2
 
       - name: Set up Docker Buildx
         uses: docker/setup-buildx-action@v2
-      - name: Cache Docker layers
-        uses: actions/cache@v2
         with:
-          path: /tmp/.buildx-cache
-          key: buildx-${{ hashFiles('docker-build/Dockerfile') }}
+          platforms: ${{ matrix.platforms }}
+
+      - name: Login to GitHub Container Registry
+        if: github.event_name != 'pull_request'
+        uses: docker/login-action@v2
+        with:
+          registry: ghcr.io
+          username: ${{ github.repository_owner }}
+          password: ${{ secrets.GITHUB_TOKEN }}
 
       - name: Build container
-        uses: docker/build-push-action@v3
+        uses: docker/build-push-action@v4
         with:
           context: .
-          file: docker-build/Dockerfile
-          platforms: linux/amd64
-          push: false
-          tags: ${{ env.dockertag }}:latest
-          cache-from: type=local,src=/tmp/.buildx-cache
-          cache-to: type=local,dest=/tmp/.buildx-cache
+          file: ${{ matrix.dockerfile }}
+          platforms: ${{ matrix.platforms }}
+          push: ${{ github.event_name != 'pull_request' }}
+          tags: ${{ steps.meta.outputs.tags }}
+          labels: ${{ steps.meta.outputs.labels }}
+          build-args: PIP_EXTRA_INDEX_URL=${{ matrix.pip-extra-index-url }}
+          cache-from: type=gha
+          cache-to: type=gha,mode=max
+
+      - name: Output image, digest and metadata to summary
+        run: |
+          {
+            echo imageid: "${{ steps.docker_build.outputs.imageid }}"
+            echo digest: "${{ steps.docker_build.outputs.digest }}"
+            echo labels: "${{ steps.meta.outputs.labels }}"
+            echo tags: "${{ steps.meta.outputs.tags }}"
+            echo version: "${{ steps.meta.outputs.version }}"
+          } >> "$GITHUB_STEP_SUMMARY"
```
.github/workflows/clean-caches.yml (new file, 34 lines)

```diff
@@ -0,0 +1,34 @@
+name: cleanup caches by a branch
+on:
+  pull_request:
+    types:
+      - closed
+  workflow_dispatch:
+
+jobs:
+  cleanup:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Check out code
+        uses: actions/checkout@v3
+
+      - name: Cleanup
+        run: |
+          gh extension install actions/gh-actions-cache
+
+          REPO=${{ github.repository }}
+          BRANCH=${{ github.ref }}
+
+          echo "Fetching list of cache key"
+          cacheKeysForPR=$(gh actions-cache list -R $REPO -B $BRANCH | cut -f 1 )
+
+          ## Setting this to not fail the workflow while deleting cache keys.
+          set +e
+          echo "Deleting caches..."
+          for cacheKey in $cacheKeysForPR
+          do
+            gh actions-cache delete $cacheKey -R $REPO -B $BRANCH --confirm
+          done
+          echo "Done"
+        env:
+          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```
.github/workflows/create-caches.yml (deleted, 80 lines)

```diff
@@ -1,80 +0,0 @@
-name: Create Caches
-
-on: workflow_dispatch
-
-jobs:
-  os_matrix:
-    strategy:
-      matrix:
-        os: [ubuntu-latest, macos-latest]
-        include:
-          - os: ubuntu-latest
-            environment-file: environment.yml
-            default-shell: bash -l {0}
-          - os: macos-latest
-            environment-file: environment-mac.yml
-            default-shell: bash -l {0}
-    name: Test invoke.py on ${{ matrix.os }} with conda
-    runs-on: ${{ matrix.os }}
-    defaults:
-      run:
-        shell: ${{ matrix.default-shell }}
-    steps:
-      - name: Checkout sources
-        uses: actions/checkout@v3
-
-      - name: setup miniconda
-        uses: conda-incubator/setup-miniconda@v2
-        with:
-          auto-activate-base: false
-          auto-update-conda: false
-          miniconda-version: latest
-
-      - name: set environment
-        run: |
-          [[ "$GITHUB_REF" == 'refs/heads/main' ]] \
-            && echo "TEST_PROMPTS=tests/preflight_prompts.txt" >> $GITHUB_ENV \
-            || echo "TEST_PROMPTS=tests/dev_prompts.txt" >> $GITHUB_ENV
-          echo "CONDA_ROOT=$CONDA" >> $GITHUB_ENV
-          echo "CONDA_ENV_NAME=invokeai" >> $GITHUB_ENV
-
-      - name: Use Cached Stable Diffusion v1.4 Model
-        id: cache-sd-v1-4
-        uses: actions/cache@v3
-        env:
-          cache-name: cache-sd-v1-4
-        with:
-          path: models/ldm/stable-diffusion-v1/model.ckpt
-          key: ${{ env.cache-name }}
-          restore-keys: ${{ env.cache-name }}
-
-      - name: Download Stable Diffusion v1.4 Model
-        if: ${{ steps.cache-sd-v1-4.outputs.cache-hit != 'true' }}
-        run: |
-          [[ -d models/ldm/stable-diffusion-v1 ]] \
-            || mkdir -p models/ldm/stable-diffusion-v1
-          [[ -r models/ldm/stable-diffusion-v1/model.ckpt ]] \
-            || curl \
-              -H "Authorization: Bearer ${{ secrets.HUGGINGFACE_TOKEN }}" \
-              -o models/ldm/stable-diffusion-v1/model.ckpt \
-              -L https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt
-
-      - name: Activate Conda Env
-        uses: conda-incubator/setup-miniconda@v2
-        with:
-          activate-environment: ${{ env.CONDA_ENV_NAME }}
-          environment-file: ${{ matrix.environment-file }}
-
-      - name: Use Cached Huggingface and Torch models
-        id: cache-hugginface-torch
-        uses: actions/cache@v3
-        env:
-          cache-name: cache-hugginface-torch
-        with:
-          path: ~/.cache
-          key: ${{ env.cache-name }}
-          restore-keys: |
-            ${{ env.cache-name }}-${{ hashFiles('scripts/preload_models.py') }}
-
-      - name: run preload_models.py
-        run: python scripts/preload_models.py
```
.github/workflows/lint-frontend.yml (new file, 29 lines)

```diff
@@ -0,0 +1,29 @@
+name: Lint frontend
+
+on:
+  pull_request:
+    paths:
+      - 'invokeai/frontend/**'
+  push:
+    paths:
+      - 'invokeai/frontend/**'
+
+defaults:
+  run:
+    working-directory: invokeai/frontend
+
+jobs:
+  lint-frontend:
+    if: github.event.pull_request.draft == false
+    runs-on: ubuntu-22.04
+    steps:
+      - name: Setup Node 18
+        uses: actions/setup-node@v3
+        with:
+          node-version: '18'
+      - uses: actions/checkout@v3
+      - run: 'yarn install --frozen-lockfile'
+      - run: 'yarn tsc'
+      - run: 'yarn run madge'
+      - run: 'yarn run lint --max-warnings=0'
+      - run: 'yarn run prettier --check'
```
3
.github/workflows/mkdocs-material.yml
vendored
@@ -7,6 +7,7 @@ on:
 
 jobs:
   mkdocs-material:
+    if: github.event.pull_request.draft == false
     runs-on: ubuntu-latest
     steps:
       - name: checkout sources
@@ -22,7 +23,7 @@ jobs:
       - name: install requirements
         run: |
           python -m \
-            pip install -r requirements-mkdocs.txt
+            pip install -r docs/requirements-mkdocs.txt

       - name: confirm buildability
         run: |
20
.github/workflows/pyflakes.yml
vendored
Normal file
@@ -0,0 +1,20 @@
on:
  pull_request:
  push:
    branches:
      - main
      - development
      - 'release-candidate-*'

jobs:
  pyflakes:
    name: runner / pyflakes
    if: github.event.pull_request.draft == false
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: pyflakes
        uses: reviewdog/action-pyflakes@v1
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          reporter: github-pr-review
41
.github/workflows/pypi-release.yml
vendored
Normal file
@@ -0,0 +1,41 @@
name: PyPI Release

on:
  push:
    paths:
      - 'ldm/invoke/_version.py'
  workflow_dispatch:

jobs:
  release:
    if: github.repository == 'invoke-ai/InvokeAI'
    runs-on: ubuntu-22.04
    env:
      TWINE_USERNAME: __token__
      TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }}
      TWINE_NON_INTERACTIVE: 1
    steps:
      - name: checkout sources
        uses: actions/checkout@v3

      - name: install deps
        run: pip install --upgrade build twine

      - name: build package
        run: python3 -m build

      - name: check distribution
        run: twine check dist/*

      - name: check PyPI versions
        if: github.ref == 'refs/heads/main'
        run: |
          pip install --upgrade requests
          python -c "\
          import scripts.pypi_helper; \
          EXISTS=scripts.pypi_helper.local_on_pypi(); \
          print(f'PACKAGE_EXISTS={EXISTS}')" >> $GITHUB_ENV

      - name: upload package
        if: env.PACKAGE_EXISTS == 'False' && env.TWINE_PASSWORD != ''
        run: twine upload dist/*
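The `check PyPI versions` step calls `scripts.pypi_helper.local_on_pypi()`, whose implementation is not part of this diff. A hypothetical sketch of the comparison it presumably performs, checking whether the locally declared version is already among the released ones (the version numbers here are made up; the real helper would query the PyPI JSON API):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical stand-in for scripts.pypi_helper.local_on_pypi():
# report True when the local version is already released on PyPI.
LOCAL_VERSION="2.2.1"          # invented local version
RELEASED_VERSIONS="2.2.0 2.2.1"  # invented release list

EXISTS=False
for v in ${RELEASED_VERSIONS}; do
    [[ "${v}" == "${LOCAL_VERSION}" ]] && EXISTS=True
done

# The workflow appends this line to $GITHUB_ENV; print it here instead.
echo "PACKAGE_EXISTS=${EXISTS}"   # prints: PACKAGE_EXISTS=True
```

This is why the later `upload package` step is gated on `env.PACKAGE_EXISTS == 'False'`: the upload only runs when the version is not yet on PyPI.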
113
.github/workflows/test-invoke-conda.yml
vendored
@@ -1,113 +0,0 @@
name: Test invoke.py
on:
  push:
    branches:
      - 'main'
      - 'development'
      - 'fix-gh-actions-fork'
  pull_request:
    branches:
      - 'main'
      - 'development'

jobs:
  matrix:
    strategy:
      fail-fast: false
      matrix:
        stable-diffusion-model:
          # - 'https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt'
          - 'https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt'
        os:
          - ubuntu-latest
          - macOS-12
        include:
          - os: ubuntu-latest
            environment-file: environment.yml
            default-shell: bash -l {0}
          - os: macOS-12
            environment-file: environment-mac.yml
            default-shell: bash -l {0}
          # - stable-diffusion-model: https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt
          #   stable-diffusion-model-dl-path: models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
          #   stable-diffusion-model-switch: stable-diffusion-1.4
          - stable-diffusion-model: https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt
            stable-diffusion-model-dl-path: models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
            stable-diffusion-model-switch: stable-diffusion-1.5
    name: ${{ matrix.os }} with ${{ matrix.stable-diffusion-model-switch }}
    runs-on: ${{ matrix.os }}
    env:
      CONDA_ENV_NAME: invokeai
    defaults:
      run:
        shell: ${{ matrix.default-shell }}
    steps:
      - name: Checkout sources
        id: checkout-sources
        uses: actions/checkout@v3

      - name: create models.yaml from example
        run: cp configs/models.yaml.example configs/models.yaml

      - name: Use cached conda packages
        id: use-cached-conda-packages
        uses: actions/cache@v3
        with:
          path: ~/conda_pkgs_dir
          key: conda-pkgs-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles(matrix.environment-file) }}

      - name: Activate Conda Env
        id: activate-conda-env
        uses: conda-incubator/setup-miniconda@v2
        with:
          activate-environment: ${{ env.CONDA_ENV_NAME }}
          environment-file: ${{ matrix.environment-file }}
          miniconda-version: latest

      - name: set test prompt to main branch validation
        if: ${{ github.ref == 'refs/heads/main' }}
        run: echo "TEST_PROMPTS=tests/preflight_prompts.txt" >> $GITHUB_ENV

      - name: set test prompt to development branch validation
        if: ${{ github.ref == 'refs/heads/development' }}
        run: echo "TEST_PROMPTS=tests/dev_prompts.txt" >> $GITHUB_ENV

      - name: set test prompt to Pull Request validation
        if: ${{ github.ref != 'refs/heads/main' && github.ref != 'refs/heads/development' }}
        run: echo "TEST_PROMPTS=tests/validate_pr_prompt.txt" >> $GITHUB_ENV

      - name: Download ${{ matrix.stable-diffusion-model-switch }}
        id: download-stable-diffusion-model
        run: |
          [[ -d models/ldm/stable-diffusion-v1 ]] \
            || mkdir -p models/ldm/stable-diffusion-v1
          curl \
            -H "Authorization: Bearer ${{ secrets.HUGGINGFACE_TOKEN }}" \
            -o ${{ matrix.stable-diffusion-model-dl-path }} \
            -L ${{ matrix.stable-diffusion-model }}

      - name: run preload_models.py
        id: run-preload-models
        run: |
          python scripts/preload_models.py \
            --no-interactive

      - name: Run the tests
        id: run-tests
        run: |
          time python scripts/invoke.py \
            --model ${{ matrix.stable-diffusion-model-switch }} \
            --from_file ${{ env.TEST_PROMPTS }}

      - name: export conda env
        id: export-conda-env
        run: |
          mkdir -p outputs/img-samples
          conda env export --name ${{ env.CONDA_ENV_NAME }} > outputs/img-samples/environment-${{ runner.os }}-${{ runner.arch }}.yml

      - name: Archive results
        id: archive-results
        uses: actions/upload-artifact@v3
        with:
          name: results_${{ matrix.os }}_${{ matrix.stable-diffusion-model-switch }}
          path: outputs/img-samples
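The `set test prompt` steps in the workflow above pass state between steps by appending `KEY=VALUE` lines to the file named by `$GITHUB_ENV`; GitHub Actions reads that file back and exports the variables to subsequent steps. A minimal local simulation of the mechanism:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Locally, stand in for the file GitHub Actions provides via $GITHUB_ENV.
GITHUB_ENV="$(mktemp)"

# What the workflow step does: append a KEY=VALUE line for later steps.
echo "TEST_PROMPTS=tests/preflight_prompts.txt" >> "${GITHUB_ENV}"

# A later step would see the variable exported; read it back here.
grep '^TEST_PROMPTS=' "${GITHUB_ENV}"
```

Appending (`>>`) rather than overwriting matters: several steps may each contribute a variable to the same file within one job.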
135
.github/workflows/test-invoke-pip.yml
vendored
Normal file
@@ -0,0 +1,135 @@
name: Test invoke.py pip
on:
  push:
    branches:
      - 'main'
  pull_request:
    types:
      - 'ready_for_review'
      - 'opened'
      - 'synchronize'
  workflow_dispatch:

concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  matrix:
    if: github.event.pull_request.draft == false
    strategy:
      matrix:
        python-version:
          # - '3.9'
          - '3.10'
        pytorch:
          # - linux-cuda-11_6
          - linux-cuda-11_7
          - linux-rocm-5_2
          - linux-cpu
          - macos-default
          - windows-cpu
          # - windows-cuda-11_6
          # - windows-cuda-11_7
        include:
          # - pytorch: linux-cuda-11_6
          #   os: ubuntu-22.04
          #   extra-index-url: 'https://download.pytorch.org/whl/cu116'
          #   github-env: $GITHUB_ENV
          - pytorch: linux-cuda-11_7
            os: ubuntu-22.04
            github-env: $GITHUB_ENV
          - pytorch: linux-rocm-5_2
            os: ubuntu-22.04
            extra-index-url: 'https://download.pytorch.org/whl/rocm5.2'
            github-env: $GITHUB_ENV
          - pytorch: linux-cpu
            os: ubuntu-22.04
            extra-index-url: 'https://download.pytorch.org/whl/cpu'
            github-env: $GITHUB_ENV
          - pytorch: macos-default
            os: macOS-12
            github-env: $GITHUB_ENV
          - pytorch: windows-cpu
            os: windows-2022
            github-env: $env:GITHUB_ENV
          # - pytorch: windows-cuda-11_6
          #   os: windows-2022
          #   extra-index-url: 'https://download.pytorch.org/whl/cu116'
          #   github-env: $env:GITHUB_ENV
          # - pytorch: windows-cuda-11_7
          #   os: windows-2022
          #   extra-index-url: 'https://download.pytorch.org/whl/cu117'
          #   github-env: $env:GITHUB_ENV
    name: ${{ matrix.pytorch }} on ${{ matrix.python-version }}
    runs-on: ${{ matrix.os }}
    env:
      PIP_USE_PEP517: '1'
    steps:
      - name: Checkout sources
        id: checkout-sources
        uses: actions/checkout@v3

      - name: set test prompt to main branch validation
        if: ${{ github.ref == 'refs/heads/main' }}
        run: echo "TEST_PROMPTS=tests/preflight_prompts.txt" >> ${{ matrix.github-env }}

      - name: set test prompt to Pull Request validation
        if: ${{ github.ref != 'refs/heads/main' }}
        run: echo "TEST_PROMPTS=tests/validate_pr_prompt.txt" >> ${{ matrix.github-env }}

      - name: setup python
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
          cache: pip
          cache-dependency-path: pyproject.toml

      - name: install invokeai
        env:
          PIP_EXTRA_INDEX_URL: ${{ matrix.extra-index-url }}
        run: >
          pip3 install
          --editable=".[test]"

      - name: run pytest
        id: run-pytest
        run: pytest

      - name: set INVOKEAI_OUTDIR
        run: >
          python -c
          "import os;from ldm.invoke.globals import Globals;OUTDIR=os.path.join(Globals.root,str('outputs'));print(f'INVOKEAI_OUTDIR={OUTDIR}')"
          >> ${{ matrix.github-env }}

      - name: run invokeai-configure
        id: run-preload-models
        env:
          HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGINGFACE_TOKEN }}
        run: >
          invokeai-configure
          --yes
          --default_only
          --full-precision
          # can't use fp16 weights without a GPU

      - name: run invokeai
        id: run-invokeai
        env:
          # Set offline mode to make sure configure preloaded successfully.
          HF_HUB_OFFLINE: 1
          HF_DATASETS_OFFLINE: 1
          TRANSFORMERS_OFFLINE: 1
        run: >
          invokeai
          --no-patchmatch
          --no-nsfw_checker
          --from_file ${{ env.TEST_PROMPTS }}
          --outdir ${{ env.INVOKEAI_OUTDIR }}/${{ matrix.python-version }}/${{ matrix.pytorch }}

      - name: Archive results
        id: archive-results
        uses: actions/upload-artifact@v3
        with:
          name: results
          path: ${{ env.INVOKEAI_OUTDIR }}
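The `set INVOKEAI_OUTDIR` step above derives the output directory from the package's `Globals.root` and exports it via the matrix's `github-env` file. A standalone sketch of the same computation, with an invented root path in place of the real `ldm.invoke.globals.Globals`:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for the workflow's python -c one-liner; the real step imports
# Globals from ldm.invoke.globals, and the root path here is invented.
OUTLINE="$(python3 -c "import os; root='/home/runner/invokeai'; print('INVOKEAI_OUTDIR=' + os.path.join(root, 'outputs'))")"

# The workflow appends this line to the github-env file; print it instead.
echo "${OUTLINE}"   # prints: INVOKEAI_OUTDIR=/home/runner/invokeai/outputs
```

Computing the path in Python rather than hard-coding it keeps the step correct wherever the InvokeAI root happens to live on the runner.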
31
.gitignore
vendored
@@ -1,10 +1,13 @@
 # ignore default image save location and model symbolic link
+embeddings/
 outputs/
 models/ldm/stable-diffusion-v1/model.ckpt
-ldm/invoke/restoration/codeformer/weights
+**/restoration/codeformer/weights

 # ignore user models config
 configs/models.user.yaml
 config/models.user.yml
+invokeai.init

 # ignore the Anaconda/Miniconda installer used while building Docker image
 anaconda.sh
@@ -69,6 +72,7 @@ coverage.xml
 .hypothesis/
 .pytest_cache/
 cover/
+junit/

 # Translations
 *.mo
@@ -192,7 +196,7 @@ checkpoints
 .DS_Store

 # Let the frontend manage its own gitignore
-!frontend/*
+!invokeai/frontend/*

 # Scratch folder
 .scratch/
@@ -200,6 +204,7 @@ checkpoints
 gfpgan/
 models/ldm/stable-diffusion-v1/*.sha256
+

 # GFPGAN model files
 gfpgan/
@@ -207,4 +212,24 @@ gfpgan/
 configs/models.yaml

 # weights (will be created by installer)
 models/ldm/stable-diffusion-v1/*.ckpt
+models/clipseg
+models/gfpgan
+
+# ignore initfile
+.invokeai
+
+# ignore environment.yml and requirements.txt
+# these are links to the real files in environments-and-requirements
+environment.yml
+requirements.txt
+
+# source installer files
+installer/*zip
+installer/install.bat
+installer/install.sh
+installer/update.bat
+installer/update.sh
+
+# no longer stored in source directory
+models
128
CODE_OF_CONDUCT.md
Normal file
@@ -0,0 +1,128 @@
# Contributor Covenant Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our
community include:

* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
  and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
  overall community

Examples of unacceptable behavior include:

* The use of sexualized language or imagery, and sexual attention or
  advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
  address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
  professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.

Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.

## Scope

This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior
may be reported to the community leaders responsible for enforcement
at https://github.com/invoke-ai/InvokeAI/issues. All complaints will
be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the
reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series
of actions.

**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within
the community.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.

Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).

[homepage]: https://www.contributor-covenant.org

For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.
84
InvokeAI_Statement_of_Values.md
Normal file
@@ -0,0 +1,84 @@
<img src="docs/assets/invoke_ai_banner.png" align="center">

Invoke-AI is a community of software developers, researchers, and user
interface experts who have come together on a voluntary basis to build
software tools which support cutting edge AI text-to-image
applications. This community is open to anyone who wishes to
contribute to the effort and has the skill and time to do so.

# Our Values

The InvokeAI team is a diverse community which includes individuals
from various parts of the world and many walks of life. Despite our
differences, we share a number of core values which we ask prospective
contributors to understand and respect. We believe:

1. That Open Source Software is a positive force in the world. We
create software that can be used, reused, and redistributed, without
restrictions, under a straightforward Open Source license (MIT). We
believe that Open Source benefits society as a whole by increasing the
availability of high quality software to all.

2. That those who create software should receive proper attribution
for their creative work. While we support the exchange and reuse of
Open Source Software, we feel strongly that the original authors of a
piece of code should receive credit for their contribution, and we
endeavor to do so whenever possible.

3. That there is moral ambiguity surrounding AI-assisted art. We are
aware of the moral and ethical issues surrounding the release of the
Stable Diffusion model and similar products. We are aware that, due to
the composition of their training sets, current AI-generated image
models are biased against certain ethnic groups, cultural concepts of
beauty, ethnic stereotypes, and gender roles.

1. We recognize the potential for harm to these groups that these biases
represent and trust that future AI models will take steps towards
reducing or eliminating the biases noted above, respect and give due
credit to the artists whose work is sourced, and call on developers
and users to favor these models over the older ones as they become
available.

4. We are deeply committed to ensuring that this technology benefits
everyone, including artists. We see AI art not as a replacement for
the artist, but rather as a tool to empower them. With that
in mind, we are constantly debating how to build systems that put
artists’ needs first: tools which can be readily integrated into an
artist’s existing workflows and practices, enhancing their work and
helping them to push it further. Every decision we take as a team,
which includes several artists, aims to build towards that goal.

5. That artificial intelligence can be a force for good in the world,
but must be used responsibly. Artificial intelligence technologies
have the potential to improve society, in everything from cancer care,
to customer service, to creative writing.

1. While we do not believe that software should arbitrarily limit what
users can do with it, we recognize that when used irresponsibly, AI
has the potential to do much harm. Our Discord server is actively
moderated in order to minimize the potential of harm from
user-contributed images. In addition, we ask users of our software to
refrain from using it in any way that would cause mental, emotional or
physical harm to individuals and vulnerable populations including (but
not limited to) women; minors; ethnic minorities; religious groups;
members of LGBTQIA communities; and people with disabilities or
impairments.

2. Note that some of the image generation AI models which the Invoke-AI
toolkit supports carry licensing agreements which impose restrictions
on how the model is used. We ask that our users read and agree to
these terms if they wish to make use of these models. These agreements
are distinct from the MIT license which applies to the InvokeAI
software and source code.

6. That mutual respect is key to a healthy software development
community. Members of the InvokeAI community are expected to treat
each other with respect, beneficence, and empathy. Each of us has a
different background and a unique set of skills. We strive to help
each other grow and gain new skills, and we apportion expectations in
a way that balances the members' time, skillset, and interest
area. Disputes are resolved by open and honest communication.

## Signature

This document has been collectively crafted and approved by the current InvokeAI team members, as of 28 Nov 2022: **lstein** (Lincoln Stein), **blessedcoolant**, **hipsterusername** (Kent Keirsey), **Kyle0654** (Kyle Schouviller), **damian0815**, **mauwii** (Matthias Wild), **Netsvetaev** (Artur Netsvetaev), **psychedelicious**, **tildebyte**, **keturn**, and **ebr** (Eugene Brodsky). Although individuals within the group may hold differing views on particular details and/or their implications, we are all in agreement about its fundamental statements, as well as their significance and importance to this project moving forward.
214
README.md
@@ -1,21 +1,17 @@
|
|||||||
<div align="center">
|
<div align="center">
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
# InvokeAI: A Stable Diffusion Toolkit
|
# InvokeAI: A Stable Diffusion Toolkit
|
||||||
|
|
||||||
_Formerly known as lstein/stable-diffusion_
|
|
||||||
|
|
||||||

|
|
||||||
|
|
||||||
[![discord badge]][discord link]
|
[![discord badge]][discord link]
|
||||||
|
|
||||||
[![latest release badge]][latest release link] [![github stars badge]][github stars link] [![github forks badge]][github forks link]
|
[![latest release badge]][latest release link] [![github stars badge]][github stars link] [![github forks badge]][github forks link]
|
||||||
|
|
||||||
[![CI checks on main badge]][CI checks on main link] [![CI checks on dev badge]][CI checks on dev link] [![latest commit to dev badge]][latest commit to dev link]
|
[![CI checks on main badge]][CI checks on main link] [![latest commit to main badge]][latest commit to main link]
|
||||||
|
|
||||||
[![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link]
|
[![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link]
|
||||||
|
|
||||||
[CI checks on dev badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/development?label=CI%20status%20on%20dev&cache=900&icon=github
|
|
||||||
[CI checks on dev link]: https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Adevelopment
|
|
||||||
[CI checks on main badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/main?label=CI%20status%20on%20main&cache=900&icon=github
|
[CI checks on main badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/main?label=CI%20status%20on%20main&cache=900&icon=github
|
||||||
[CI checks on main link]: https://github.com/invoke-ai/InvokeAI/actions/workflows/test-invoke-conda.yml
|
[CI checks on main link]: https://github.com/invoke-ai/InvokeAI/actions/workflows/test-invoke-conda.yml
|
||||||
[discord badge]: https://flat.badgen.net/discord/members/ZmtBAhwWhy?icon=discord
|
[discord badge]: https://flat.badgen.net/discord/members/ZmtBAhwWhy?icon=discord
|
||||||
@@ -28,28 +24,41 @@ _Formerly known as lstein/stable-diffusion_
|
|||||||
[github open prs link]: https://github.com/invoke-ai/InvokeAI/pulls?q=is%3Apr+is%3Aopen
|
[github open prs link]: https://github.com/invoke-ai/InvokeAI/pulls?q=is%3Apr+is%3Aopen
|
||||||
[github stars badge]: https://flat.badgen.net/github/stars/invoke-ai/InvokeAI?icon=github
|
[github stars badge]: https://flat.badgen.net/github/stars/invoke-ai/InvokeAI?icon=github
|
||||||
[github stars link]: https://github.com/invoke-ai/InvokeAI/stargazers
|
[github stars link]: https://github.com/invoke-ai/InvokeAI/stargazers
|
||||||
[latest commit to dev badge]: https://flat.badgen.net/github/last-commit/invoke-ai/InvokeAI/development?icon=github&color=yellow&label=last%20dev%20commit&cache=900
|
[latest commit to main badge]: https://flat.badgen.net/github/last-commit/invoke-ai/InvokeAI/main?icon=github&color=yellow&label=last%20dev%20commit&cache=900
|
||||||
[latest commit to dev link]: https://github.com/invoke-ai/InvokeAI/commits/development
|
[latest commit to main link]: https://github.com/invoke-ai/InvokeAI/commits/main
|
||||||
[latest release badge]: https://flat.badgen.net/github/release/invoke-ai/InvokeAI/development?icon=github
|
[latest release badge]: https://flat.badgen.net/github/release/invoke-ai/InvokeAI/development?icon=github
|
||||||
[latest release link]: https://github.com/invoke-ai/InvokeAI/releases
|
[latest release link]: https://github.com/invoke-ai/InvokeAI/releases
|
||||||
|
|
||||||
</div>
|
</div>
|
||||||
|
|
||||||
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.

**Quick links**: [[How to Install](#installation)] [<a href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>] [<a href="https://invoke-ai.github.io/InvokeAI/">Documentation and Tutorials</a>] [<a href="https://github.com/invoke-ai/InvokeAI/">Code and Downloads</a>] [<a href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>] [<a href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion, Ideas & Q&A</a>]

<div align="center"><img src="docs/assets/invoke-web-server-1.png" width=640></div>

_Note: InvokeAI is rapidly evolving. Please use the
[Issues](https://github.com/invoke-ai/InvokeAI/issues) tab to report bugs and make feature
requests. Be sure to use the provided templates. They will help us diagnose issues faster._

<div align="center">

![canvas preview](https://user-images.githubusercontent.com/83316072/221966859-d14e8a8b-9018-471f-9bd7-69b5dee0ec70.png)

</div>

# Getting Started with InvokeAI

For full installation and upgrade instructions, please see:
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/)

1. Go to the bottom of the [Latest Release Page](https://github.com/invoke-ai/InvokeAI/releases/latest)
2. Download the .zip file for your OS (Windows/macOS/Linux).
3. Unzip the file.
4. If you are on Windows, double-click on the `install.bat` script. On macOS, open a Terminal window, drag the file `install.sh` from Finder into the Terminal, and press return. On Linux, run `install.sh`.
5. Wait until the installation completes.
6. The folder where you ran the installer from will now be filled with lots of files. If you are on Windows, double-click on the `invoke.bat` file. On macOS, open a Terminal window, drag `invoke.sh` from the folder into the Terminal, and press return. On Linux, run `invoke.sh`.
7. Press 2 to open the browser-based UI, press enter/return, wait a minute or two for Stable Diffusion to start up, then open your browser and go to http://localhost:9090.
8. Type `banana sushi` in the box on the top left and click `Invoke`.
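On Linux, the quick-start steps above reduce to a few shell commands. This is a hedged sketch, not an official script: the archive and folder names are illustrative and vary by release; only the default web UI address (step 7) is taken from the text.

```shell
# Hedged sketch of the Linux quick-start; archive/folder names vary by release.
INVOKEAI_URL="http://localhost:9090"        # default web UI address from step 7

# unzip InvokeAI-installer-*.zip            # step 3 (name is illustrative)
# cd InvokeAI-installer && ./install.sh     # step 4
# ./invoke.sh                               # step 6, then choose option 2

echo "Once the server is up, open ${INVOKEAI_URL} in your browser"
```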

## Table of Contents

8. [Support](#support)
9. [Further Reading](#further-reading)

## Installation

InvokeAI is supported across Linux, Windows and macOS. Linux
users can use either an Nvidia-based card (with CUDA support) or an
AMD card (using the ROCm driver). For full installation and upgrade
instructions, please see:
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/INSTALL_SOURCE/)

### Hardware Requirements

InvokeAI is supported across Linux, Windows and macOS. Linux
users can use either an Nvidia-based card (with CUDA support) or an
AMD card (using the ROCm driver).

#### System

You will need one of the following:

- An NVIDIA-based graphics card with 4 GB or more VRAM memory.
- An Apple computer with an M1 chip.
- An AMD-based graphics card with 4 GB or more VRAM memory. (Linux only)

We do not recommend the GTX 1650 or 1660 series video cards. They are
unable to run in half-precision mode and do not have sufficient VRAM
to render 512x512 images.

#### Memory

- At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.

## Features

Feature documentation can be reviewed by navigating to [the InvokeAI Documentation page](https://invoke-ai.github.io/InvokeAI/features/)

### *Web Server & UI*

InvokeAI offers a locally hosted Web Server & React Frontend, with an industry leading user experience. The Web-based UI allows for simple and intuitive workflows, and is responsive for use on mobile devices and tablets accessing the web server.

### *Unified Canvas*

The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/outpainting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.

### *Advanced Prompt Syntax*

InvokeAI's advanced prompt syntax allows for token weighting, cross-attention control, and prompt blending, allowing for fine-tuned tweaking of your invocations and exploration of the latent space.

### *Command Line Interface*

For users utilizing a terminal-based environment, or who want to take advantage of CLI features, InvokeAI offers an extensive and actively supported command-line interface that provides the full suite of generation functionality available in the tool.

### Other features

- *Support for both ckpt and diffusers models*
- *SD 2.0, 2.1 support*
- *Noise Control & Thresholding*
- *Popular Sampler Support*
- *Upscaling & Face Restoration Tools*
- *Embedding Manager & Support*
- *Model Manager & Support*

### Coming Soon

- *Node-Based Architecture & UI*
- And more...

### Latest Changes

For our latest changes, view our [Release
Notes](https://github.com/invoke-ai/InvokeAI/releases) and the
[CHANGELOG](docs/CHANGELOG.md).

## Troubleshooting

Please check out our **[Q&A](https://invoke-ai.github.io/InvokeAI/help/TROUBLESHOOT/#faq)** to get solutions for common installation
problems and other issues.
# Contributing

Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so.

To join, just raise your hand on the InvokeAI Discord server (#dev-chat) or the GitHub discussion board.

If you are unfamiliar with how
to contribute to GitHub projects, here is a
[Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github). A full set of contribution guidelines, along with templates, are in progress. You can **make your pull request against the "main" branch**.

We hope you enjoy using our software as much as we enjoy creating it,
and we hope that some of those of you who are reading this will elect
to become part of our community.

Welcome to InvokeAI!

### Contributors
|
### Contributors
|
||||||
|
|
||||||
@@ -205,13 +177,7 @@ their time, hard work and effort.
|
|||||||
|
|
||||||
### Support

For support, please use this repository's GitHub Issues tracking service, or join the Discord.

Original portions of the software are Copyright (c) 2023 by respective contributors.

### Further Reading

Please see the original README for more information on this software and underlying algorithm,
located in the file [README-CompViz.md](https://invoke-ai.github.io/InvokeAI/other/README-CompViz/).

This model card focuses on the model associated with the Stable Diffusion model,

# Uses

## Direct Use

The model is intended for research purposes only. Possible research areas and
tasks include
considerations.

### Bias

While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.

The model developers used the following dataset for training the model:

- LAION-2B (en) and subsets thereof (see next section)

**Training Procedure**

Stable Diffusion v1 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,

- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through a ViT-L/14 text-encoder.
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant

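The training-procedure figures above are easy to sanity-check numerically: with downsampling factor f = 8, a 512x512x3 image maps to a 64x64x4 latent, and the effective batch size is the product of the four listed factors. A quick check (not code from the model card):

```python
# Sanity-check the figures quoted above.
f = 8                                # autoencoder downsampling factor
H = W = 512                          # training image resolution
latent_shape = (H // f, W // f, 4)   # H/f x W/f x 4
print(latent_shape)                  # (64, 64, 4)

batch = 32 * 8 * 2 * 4               # the four factors from the **Batch** line
print(batch)                         # 2048
```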
## Evaluation Results

Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling
steps show the relative improvements of the checkpoints:

![pareto](assets/v1-variants-scores.jpg)

Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.

## Environmental Impact

BIN binary_installer/WinLongPathsEnabled.reg (new file)

binary_installer/install.bat.in (new file, 164 lines):
```batch
@echo off

@rem This script will install git (if not found on the PATH variable)
@rem using micromamba (an 8mb static-linked single-file binary, conda replacement).
@rem For users who already have git, this step will be skipped.

@rem Next, it'll download the project's source code.
@rem Then it will download a self-contained, standalone Python and unpack it.
@rem Finally, it'll create the Python virtual environment and preload the models.

@rem This enables a user to install this project without manually installing git or Python

@rem change to the script's directory
PUSHD "%~dp0"

set "no_cache_dir=--no-cache-dir"
if "%1" == "use-cache" (
    set "no_cache_dir="
)

echo ***** Installing InvokeAI... *****

@rem Config
set INSTALL_ENV_DIR=%cd%\installer_files\env
@rem https://mamba.readthedocs.io/en/latest/installation.html
set MICROMAMBA_DOWNLOAD_URL=https://github.com/cmdr2/stable-diffusion-ui/releases/download/v1.1/micromamba.exe
set RELEASE_URL=https://github.com/invoke-ai/InvokeAI
set RELEASE_SOURCEBALL=/archive/refs/heads/main.tar.gz
set PYTHON_BUILD_STANDALONE_URL=https://github.com/indygreg/python-build-standalone/releases/download
set PYTHON_BUILD_STANDALONE=20221002/cpython-3.10.7+20221002-x86_64-pc-windows-msvc-shared-install_only.tar.gz

set PACKAGES_TO_INSTALL=

call git --version >.tmp1 2>.tmp2
if "%ERRORLEVEL%" NEQ "0" set PACKAGES_TO_INSTALL=%PACKAGES_TO_INSTALL% git

@rem Cleanup
del /q .tmp1 .tmp2

@rem (if necessary) install git into a contained environment
if "%PACKAGES_TO_INSTALL%" NEQ "" (
    @rem download micromamba
    echo ***** Downloading micromamba from %MICROMAMBA_DOWNLOAD_URL% to micromamba.exe *****

    call curl -L "%MICROMAMBA_DOWNLOAD_URL%" > micromamba.exe

    @rem test the mamba binary
    echo ***** Micromamba version: *****
    call micromamba.exe --version

    @rem create the installer env
    if not exist "%INSTALL_ENV_DIR%" (
        call micromamba.exe create -y --prefix "%INSTALL_ENV_DIR%"
    )

    echo ***** Packages to install:%PACKAGES_TO_INSTALL% *****

    call micromamba.exe install -y --prefix "%INSTALL_ENV_DIR%" -c conda-forge %PACKAGES_TO_INSTALL%

    if not exist "%INSTALL_ENV_DIR%" (
        echo ----- There was a problem while installing "%PACKAGES_TO_INSTALL%" using micromamba. Cannot continue. -----
        pause
        exit /b
    )
)

del /q micromamba.exe

@rem For 'git' only
set PATH=%INSTALL_ENV_DIR%\Library\bin;%PATH%

@rem Download/unpack/clean up InvokeAI release sourceball
set err_msg=----- InvokeAI source download failed -----
echo Trying to download "%RELEASE_URL%%RELEASE_SOURCEBALL%"
curl -L %RELEASE_URL%%RELEASE_SOURCEBALL% --output InvokeAI.tgz
if %errorlevel% neq 0 goto err_exit

set err_msg=----- InvokeAI source unpack failed -----
tar -zxf InvokeAI.tgz
if %errorlevel% neq 0 goto err_exit

del /q InvokeAI.tgz

set err_msg=----- InvokeAI source copy failed -----
cd InvokeAI-*
xcopy . .. /e /h
if %errorlevel% neq 0 goto err_exit
cd ..

@rem cleanup
for /f %%i in ('dir /b InvokeAI-*') do rd /s /q %%i
rd /s /q .dev_scripts .github docker-build tests
del /q requirements.in requirements-mkdocs.txt shell.nix

echo ***** Unpacked InvokeAI source *****

@rem Download/unpack/clean up python-build-standalone
set err_msg=----- Python download failed -----
curl -L %PYTHON_BUILD_STANDALONE_URL%/%PYTHON_BUILD_STANDALONE% --output python.tgz
if %errorlevel% neq 0 goto err_exit

set err_msg=----- Python unpack failed -----
tar -zxf python.tgz
if %errorlevel% neq 0 goto err_exit

del /q python.tgz

echo ***** Unpacked python-build-standalone *****

@rem create venv
set err_msg=----- problem creating venv -----
.\python\python -E -s -m venv .venv
if %errorlevel% neq 0 goto err_exit
call .venv\Scripts\activate.bat

echo ***** Created Python virtual environment *****

@rem Print venv's Python version
set err_msg=----- problem calling venv's python -----
echo We're running under
.venv\Scripts\python --version
if %errorlevel% neq 0 goto err_exit

set err_msg=----- pip update failed -----
.venv\Scripts\python -m pip install %no_cache_dir% --no-warn-script-location --upgrade pip wheel
if %errorlevel% neq 0 goto err_exit

echo ***** Updated pip and wheel *****

set err_msg=----- requirements file copy failed -----
copy binary_installer\py3.10-windows-x86_64-cuda-reqs.txt requirements.txt
if %errorlevel% neq 0 goto err_exit

set err_msg=----- main pip install failed -----
.venv\Scripts\python -m pip install %no_cache_dir% --no-warn-script-location -r requirements.txt
if %errorlevel% neq 0 goto err_exit

echo ***** Installed Python dependencies *****

set err_msg=----- InvokeAI setup failed -----
.venv\Scripts\python -m pip install %no_cache_dir% --no-warn-script-location -e .
if %errorlevel% neq 0 goto err_exit

copy binary_installer\invoke.bat.in .\invoke.bat
echo ***** Installed invoke launcher script ******

@rem more cleanup
rd /s /q binary_installer installer_files

@rem preload the models
@rem set the error message before running the command so a failure reports correctly
set err_msg=----- model download clone failed -----
call .venv\Scripts\python scripts\configure_invokeai.py
if %errorlevel% neq 0 goto err_exit
deactivate

echo ***** Finished downloading models *****

echo All done! Execute the file invoke.bat in this directory to start InvokeAI
pause
exit

:err_exit
echo %err_msg%
pause
exit
```
binary_installer/install.sh.in (new file, 235 lines):
#!/usr/bin/env bash
|
||||||
|
|
||||||
|
# ensure we're in the correct folder in case user's CWD is somewhere else
|
||||||
|
scriptdir=$(dirname "$0")
|
||||||
|
cd "$scriptdir"
|
||||||
|
|
||||||
|
set -euo pipefail
|
||||||
|
IFS=$'\n\t'
|
||||||
|
|
||||||
|
function _err_exit {
    if test "$1" -ne 0
    then
        echo -e "Error code $1; Error caught was '$2'"
        read -p "Press any key to exit..."
        exit
    fi
}

# This script will install git (if not found on the PATH variable)
# using micromamba (an 8mb static-linked single-file binary, conda replacement).
# For users who already have git, this step will be skipped.

# Next, it'll download the project's source code.
# Then it will download a self-contained, standalone Python and unpack it.
# Finally, it'll create the Python virtual environment and preload the models.

# This enables a user to install this project without manually installing git or Python

echo -e "\n***** Installing InvokeAI into $(pwd)... *****\n"

export no_cache_dir="--no-cache-dir"
if [ $# -ge 1 ]; then
    if [ "$1" = "use-cache" ]; then
        export no_cache_dir=""
    fi
fi

OS_NAME=$(uname -s)
case "${OS_NAME}" in
    Linux*)  OS_NAME="linux";;
    Darwin*) OS_NAME="darwin";;
    *) echo -e "\n----- Unknown OS: $OS_NAME! This script runs only on Linux or macOS -----\n" && exit
esac

OS_ARCH=$(uname -m)
case "${OS_ARCH}" in
    x86_64*) ;;
    arm64*)  ;;
    *) echo -e "\n----- Unknown system architecture: $OS_ARCH! This script runs only on x86_64 or arm64 -----\n" && exit
esac

# https://mamba.readthedocs.io/en/latest/installation.html
MAMBA_OS_NAME=$OS_NAME
MAMBA_ARCH=$OS_ARCH
if [ "$OS_NAME" == "darwin" ]; then
    MAMBA_OS_NAME="osx"
fi

if [ "$OS_ARCH" == "arm64" ]; then
    MAMBA_ARCH="aarch64"
fi

if [ "$OS_ARCH" == "x86_64" ]; then
    MAMBA_ARCH="64"
fi

PY_ARCH=$OS_ARCH
if [ "$OS_ARCH" == "arm64" ]; then
    PY_ARCH="aarch64"
fi

# Compute device ('cd' segment of reqs files) detection goes here.
# This needs a ton of work.
# Suggestions:
# - lspci
# - check $PATH for nvidia-smi, get CUDA/GPU version from output
# - Surely there's a similar utility for AMD?
CD="cuda"
if [ "$OS_NAME" == "darwin" ] && [ "$OS_ARCH" == "arm64" ]; then
    CD="mps"
fi
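
# A hypothetical, untested sketch of the auto-detection suggested above:
# probe $PATH for vendor tools to pick the reqs-file segment.
#
#   if command -v nvidia-smi >/dev/null 2>&1; then
#       CD="cuda"
#   elif command -v rocminfo >/dev/null 2>&1; then
#       CD="rocm"   # no rocm reqs file ships yet; this segment name is illustrative
#   fi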

# config
INSTALL_ENV_DIR="$(pwd)/installer_files/env"
MICROMAMBA_DOWNLOAD_URL="https://micro.mamba.pm/api/micromamba/${MAMBA_OS_NAME}-${MAMBA_ARCH}/latest"
RELEASE_URL=https://github.com/invoke-ai/InvokeAI
RELEASE_SOURCEBALL=/archive/refs/heads/main.tar.gz
PYTHON_BUILD_STANDALONE_URL=https://github.com/indygreg/python-build-standalone/releases/download
if [ "$OS_NAME" == "darwin" ]; then
    PYTHON_BUILD_STANDALONE=20221002/cpython-3.10.7+20221002-${PY_ARCH}-apple-darwin-install_only.tar.gz
elif [ "$OS_NAME" == "linux" ]; then
    PYTHON_BUILD_STANDALONE=20221002/cpython-3.10.7+20221002-${PY_ARCH}-unknown-linux-gnu-install_only.tar.gz
fi
echo "INSTALLING $RELEASE_SOURCEBALL FROM $RELEASE_URL"

PACKAGES_TO_INSTALL=""

if ! hash "git" &>/dev/null; then PACKAGES_TO_INSTALL="$PACKAGES_TO_INSTALL git"; fi

# (if necessary) install git and conda into a contained environment
if [ "$PACKAGES_TO_INSTALL" != "" ]; then
    # download micromamba
    echo -e "\n***** Downloading micromamba from $MICROMAMBA_DOWNLOAD_URL to micromamba *****\n"

    curl -L "$MICROMAMBA_DOWNLOAD_URL" | tar -xvjO bin/micromamba > micromamba

    chmod u+x ./micromamba

    # test the mamba binary
    echo -e "\n***** Micromamba version: *****\n"
    ./micromamba --version

    # create the installer env
    if [ ! -e "$INSTALL_ENV_DIR" ]; then
        ./micromamba create -y --prefix "$INSTALL_ENV_DIR"
    fi

    echo -e "\n***** Packages to install:$PACKAGES_TO_INSTALL *****\n"

    # deliberately unquoted so each package becomes its own argument
    ./micromamba install -y --prefix "$INSTALL_ENV_DIR" -c conda-forge $PACKAGES_TO_INSTALL

    if [ ! -e "$INSTALL_ENV_DIR" ]; then
        echo -e "\n----- There was a problem while initializing micromamba. Cannot continue. -----\n"
        exit
    fi
fi

rm -f micromamba.exe

export PATH="$INSTALL_ENV_DIR/bin:$PATH"

# Download/unpack/clean up InvokeAI release sourceball
_err_msg="\n----- InvokeAI source download failed -----\n"
curl -L $RELEASE_URL/$RELEASE_SOURCEBALL --output InvokeAI.tgz
_err_exit $? "$_err_msg"
_err_msg="\n----- InvokeAI source unpack failed -----\n"
tar -zxf InvokeAI.tgz
_err_exit $? "$_err_msg"

rm -f InvokeAI.tgz

_err_msg="\n----- InvokeAI source copy failed -----\n"
cd InvokeAI-*
cp -r . ..
_err_exit $? "$_err_msg"
cd ..

# cleanup
rm -rf InvokeAI-*/
rm -rf .dev_scripts/ .github/ docker-build/ tests/ requirements.in requirements-mkdocs.txt shell.nix

echo -e "\n***** Unpacked InvokeAI source *****\n"

# Download/unpack/clean up python-build-standalone
_err_msg="\n----- Python download failed -----\n"
curl -L $PYTHON_BUILD_STANDALONE_URL/$PYTHON_BUILD_STANDALONE --output python.tgz
_err_exit $? "$_err_msg"
_err_msg="\n----- Python unpack failed -----\n"
tar -zxf python.tgz
_err_exit $? "$_err_msg"

rm -f python.tgz

echo -e "\n***** Unpacked python-build-standalone *****\n"

# create venv
_err_msg="\n----- problem creating venv -----\n"

if [ "$OS_NAME" == "darwin" ]; then
    # patch sysconfig so that extensions can build properly
    # adapted from https://github.com/cashapp/hermit-packages/commit/fcba384663892f4d9cfb35e8639ff7a28166ee43
    PYTHON_INSTALL_DIR="$(pwd)/python"
    SYSCONFIG="$(echo python/lib/python*/_sysconfigdata_*.py)"
    TMPFILE="$(mktemp)"
    chmod +w "${SYSCONFIG}"
    cp "${SYSCONFIG}" "${TMPFILE}"
    sed "s,'/install,'${PYTHON_INSTALL_DIR},g" "${TMPFILE}" > "${SYSCONFIG}"
    rm -f "${TMPFILE}"
fi

./python/bin/python3 -E -s -m venv .venv
_err_exit $? "$_err_msg"
source .venv/bin/activate

echo -e "\n***** Created Python virtual environment *****\n"

# Print venv's Python version
_err_msg="\n----- problem calling venv's python -----\n"
echo -e "We're running under"
.venv/bin/python3 --version
_err_exit $? "$_err_msg"

_err_msg="\n----- pip update failed -----\n"
.venv/bin/python3 -m pip install $no_cache_dir --no-warn-script-location --upgrade pip
_err_exit $? "$_err_msg"

echo -e "\n***** Updated pip *****\n"

_err_msg="\n----- requirements file copy failed -----\n"
cp binary_installer/py3.10-${OS_NAME}-"${OS_ARCH}"-${CD}-reqs.txt requirements.txt
_err_exit $? "$_err_msg"

_err_msg="\n----- main pip install failed -----\n"
.venv/bin/python3 -m pip install $no_cache_dir --no-warn-script-location -r requirements.txt
_err_exit $? "$_err_msg"

echo -e "\n***** Installed Python dependencies *****\n"

_err_msg="\n----- InvokeAI setup failed -----\n"
.venv/bin/python3 -m pip install $no_cache_dir --no-warn-script-location -e .
_err_exit $? "$_err_msg"

echo -e "\n***** Installed InvokeAI *****\n"

cp binary_installer/invoke.sh.in ./invoke.sh
chmod a+rx ./invoke.sh
echo -e "\n***** Installed invoke launcher script *****\n"

# more cleanup
rm -rf binary_installer/ installer_files/

# preload the models
_err_msg="\n----- model download failed -----\n"
.venv/bin/python3 scripts/configure_invokeai.py
_err_exit $? "$_err_msg"
deactivate

echo -e "\n***** Finished downloading models *****\n"

echo "All done! Run the command"
echo "    $scriptdir/invoke.sh"
echo "to start InvokeAI."
read -p "Press any key to exit..."
exit
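The `use-cache` argument handling at the top of the installer can be exercised in isolation. A minimal sketch (the function name is hypothetical; the variable name comes from the script above):

```shell
# Sketch of the installer's cache-flag handling: passing "use-cache" as the
# first argument clears the --no-cache-dir flag later handed to pip.
handle_cache_flag() {
    no_cache_dir="--no-cache-dir"
    if [ $# -ge 1 ] && [ "$1" = "use-cache" ]; then
        no_cache_dir=""
    fi
    echo "$no_cache_dir"
}

handle_cache_flag            # prints --no-cache-dir
handle_cache_flag use-cache  # prints an empty line
```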
binary_installer/invoke.bat.in (new file, 36 lines)
@@ -0,0 +1,36 @@
@echo off

PUSHD "%~dp0"
call .venv\Scripts\activate.bat

echo Do you want to generate images using the
echo 1. command-line
echo 2. browser-based UI
echo OR
echo 3. open the developer console
set /p choice="Please enter 1, 2 or 3: "

if /i "%choice%" == "1" (
    echo Starting the InvokeAI command-line.
    .venv\Scripts\python scripts\invoke.py %*
) else if /i "%choice%" == "2" (
    echo Starting the InvokeAI browser-based UI.
    .venv\Scripts\python scripts\invoke.py --web %*
) else if /i "%choice%" == "3" (
    echo Developer Console
    echo Python command is:
    where python
    echo Python version is:
    python --version
    echo *************************
    echo You are now in the system shell, with the local InvokeAI Python virtual environment activated,
    echo so that you can troubleshoot this InvokeAI installation as necessary.
    echo *************************
    echo *** Type `exit` to quit this shell and deactivate the Python virtual environment ***
    call cmd /k
) else (
    echo Invalid selection
    pause
    exit /b
)

deactivate
binary_installer/invoke.sh.in (new file, 46 lines)
@@ -0,0 +1,46 @@
#!/usr/bin/env sh

set -eu

. .venv/bin/activate

# set required env var for torch on mac MPS
if [ "$(uname -s)" = "Darwin" ]; then
    export PYTORCH_ENABLE_MPS_FALLBACK=1
fi

echo "Do you want to generate images using the"
echo "1. command-line"
echo "2. browser-based UI"
echo "OR"
echo "3. open the developer console"
echo "Please enter 1, 2, or 3:"
read choice

case $choice in
    1)
        printf "\nStarting the InvokeAI command-line..\n";
        .venv/bin/python scripts/invoke.py $*;
        ;;
    2)
        printf "\nStarting the InvokeAI browser-based UI..\n";
        .venv/bin/python scripts/invoke.py --web $*;
        ;;
    3)
        printf "\nDeveloper Console:\n";
        printf "Python command is:\n\t";
        which python;
        printf "Python version is:\n\t";
        python --version;
        echo "*************************"
        echo "You are now in your user shell ($SHELL) with the local InvokeAI Python virtual environment activated,";
        echo "so that you can troubleshoot this InvokeAI installation as necessary.";
        printf "*************************\n"
        echo "*** Type \`exit\` to quit this shell and deactivate the Python virtual environment *** ";
        /usr/bin/env "$SHELL";
        ;;
    *)
        echo "Invalid selection";
        exit
        ;;
esac
binary_installer/py3.10-darwin-arm64-mps-reqs.txt (new file, 2097 lines)
binary_installer/py3.10-darwin-x86_64-cpu-reqs.txt (new file, 2077 lines)
binary_installer/py3.10-linux-x86_64-cuda-reqs.txt (new file, 2103 lines)
binary_installer/py3.10-windows-x86_64-cuda-reqs.txt (new file, 2109 lines)

binary_installer/readme.txt (new file, 17 lines)
@@ -0,0 +1,17 @@
InvokeAI

Project homepage: https://github.com/invoke-ai/InvokeAI

Installation on Windows:
NOTE: You might need to enable Windows Long Paths. If you're not sure,
then you almost certainly need to. Simply double-click the 'WinLongPathsEnabled.reg'
file. Note that you will need to have admin privileges in order to do this.

Please double-click the 'install.bat' file (while keeping it inside the invokeAI folder).

Installation on Linux and Mac:
Please open the terminal, and run './install.sh' (while keeping it inside the invokeAI folder).

After installation, please run the 'invoke.bat' file (on Windows) or 'invoke.sh'
file (on Linux/Mac) to start InvokeAI.
binary_installer/requirements.in (new file, 33 lines)
@@ -0,0 +1,33 @@
--prefer-binary
--extra-index-url https://download.pytorch.org/whl/torch_stable.html
--extra-index-url https://download.pytorch.org/whl/cu116
--trusted-host download.pytorch.org
accelerate~=0.15
albumentations
diffusers[torch]~=0.11
einops
eventlet
flask_cors
flask_socketio
flaskwebgui==1.0.3
getpass_asterisk
imageio-ffmpeg
pyreadline3
realesrgan
send2trash
streamlit
taming-transformers-rom1504
test-tube
torch-fidelity
torch==1.12.1 ; platform_system == 'Darwin'
torch==1.12.0+cu116 ; platform_system == 'Linux' or platform_system == 'Windows'
torchvision==0.13.1 ; platform_system == 'Darwin'
torchvision==0.13.0+cu116 ; platform_system == 'Linux' or platform_system == 'Windows'
transformers
picklescan
https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip
https://github.com/invoke-ai/clipseg/archive/1f754751c85d7d4255fa681f4491ff5711c1c288.zip
https://github.com/invoke-ai/GFPGAN/archive/3f5d2397361199bc4a91c08bb7d80f04d7805615.zip ; platform_system=='Windows'
https://github.com/invoke-ai/GFPGAN/archive/c796277a1cf77954e5fc0b288d7062d162894248.zip ; platform_system=='Linux' or platform_system=='Darwin'
https://github.com/Birch-san/k-diffusion/archive/363386981fee88620709cf8f6f2eea167bd6cd74.zip
https://github.com/invoke-ai/PyPatchMatch/archive/129863937a8ab37f6bbcec327c994c0f932abdbc.zip
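The `; platform_system == ...` suffixes above are pip environment markers: at install time pip evaluates each marker and skips any line whose condition is false, so one requirements file can carry per-platform pins. For example, the effective torch pins on macOS reduce to (illustrative subset):

```text
torch==1.12.1
torchvision==0.13.1
```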
@@ -1,74 +0,0 @@
FROM ubuntu AS get_miniconda

SHELL ["/bin/bash", "-c"]

# install wget
RUN apt-get update \
    && apt-get install -y \
        wget \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# download and install miniconda
ARG conda_version=py39_4.12.0-Linux-x86_64
ARG conda_prefix=/opt/conda
RUN wget --progress=dot:giga -O /miniconda.sh \
    https://repo.anaconda.com/miniconda/Miniconda3-${conda_version}.sh \
    && bash /miniconda.sh -b -p ${conda_prefix} \
    && rm -f /miniconda.sh

FROM ubuntu AS invokeai

# use bash
SHELL [ "/bin/bash", "-c" ]

# clean bashrc
RUN echo "" > ~/.bashrc

# Install necessary packages
RUN apt-get update \
    && apt-get install -y \
        --no-install-recommends \
        gcc \
        git \
        libgl1-mesa-glx \
        libglib2.0-0 \
        pip \
        python3 \
        python3-dev \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# clone repository and create symlinks
ARG invokeai_git=https://github.com/invoke-ai/InvokeAI.git
ARG project_name=invokeai
RUN git clone ${invokeai_git} /${project_name} \
    && mkdir /${project_name}/models/ldm/stable-diffusion-v1 \
    && ln -s /data/models/sd-v1-4.ckpt /${project_name}/models/ldm/stable-diffusion-v1/model.ckpt \
    && ln -s /data/outputs/ /${project_name}/outputs

# set workdir
WORKDIR /${project_name}

# install conda env and preload models
ARG conda_prefix=/opt/conda
ARG conda_env_file=environment.yml
COPY --from=get_miniconda ${conda_prefix} ${conda_prefix}
RUN source ${conda_prefix}/etc/profile.d/conda.sh \
    && conda init bash \
    && source ~/.bashrc \
    && conda env create \
        --name ${project_name} \
        --file ${conda_env_file} \
    && rm -Rf ~/.cache \
    && conda clean -afy \
    && echo "conda activate ${project_name}" >> ~/.bashrc \
    && ln -s /data/models/GFPGANv1.4.pth ./src/gfpgan/experiments/pretrained_models/GFPGANv1.4.pth \
    && conda activate ${project_name} \
    && python scripts/preload_models.py

# Copy entrypoint and set env
ENV CONDA_PREFIX=${conda_prefix}
ENV PROJECT_NAME=${project_name}
COPY docker-build/entrypoint.sh /
ENTRYPOINT [ "/entrypoint.sh" ]
@@ -1,81 +0,0 @@
#!/usr/bin/env bash
set -e
# IMPORTANT: You need to have a token on huggingface.co to be able to download the checkpoint!!!
# configure values by using env when executing build.sh
# e.g. env ARCH=aarch64 GITHUB_INVOKE_AI=https://github.com/yourname/yourfork.git ./build.sh

source ./docker-build/env.sh || echo "please run from repository root" || exit 1

invokeai_conda_version=${INVOKEAI_CONDA_VERSION:-py39_4.12.0-${platform/\//-}}
invokeai_conda_prefix=${INVOKEAI_CONDA_PREFIX:-\/opt\/conda}
invokeai_conda_env_file=${INVOKEAI_CONDA_ENV_FILE:-environment.yml}
invokeai_git=${INVOKEAI_GIT:-https://github.com/invoke-ai/InvokeAI.git}
huggingface_token=${HUGGINGFACE_TOKEN?}

# print the settings
echo "You are using these values:"
echo -e "project_name:\t\t ${project_name}"
echo -e "volumename:\t\t ${volumename}"
echo -e "arch:\t\t\t ${arch}"
echo -e "platform:\t\t ${platform}"
echo -e "invokeai_conda_version:\t ${invokeai_conda_version}"
echo -e "invokeai_conda_prefix:\t ${invokeai_conda_prefix}"
echo -e "invokeai_conda_env_file: ${invokeai_conda_env_file}"
echo -e "invokeai_git:\t\t ${invokeai_git}"
echo -e "invokeai_tag:\t\t ${invokeai_tag}\n"

_runAlpine() {
    docker run \
        --rm \
        --interactive \
        --tty \
        --mount source="$volumename",target=/data \
        --workdir /data \
        alpine "$@"
}

_copyCheckpoints() {
    echo "creating subfolders for models and outputs"
    _runAlpine mkdir models
    _runAlpine mkdir outputs
    echo -n "downloading sd-v1-4.ckpt"
    _runAlpine wget --header="Authorization: Bearer ${huggingface_token}" -O models/sd-v1-4.ckpt https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt
    echo "done"
    echo "downloading GFPGANv1.4.pth"
    _runAlpine wget -O models/GFPGANv1.4.pth https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth
}

_checkVolumeContent() {
    _runAlpine ls -lhA /data/models
}

_getModelMd5s() {
    _runAlpine \
        alpine sh -c "md5sum /data/models/*"
}

if [[ -n "$(docker volume ls -f name="${volumename}" -q)" ]]; then
    echo "Volume already exists"
    if [[ -z "$(_checkVolumeContent)" ]]; then
        echo "looks empty, copying checkpoint"
        _copyCheckpoints
    fi
    echo "Models in ${volumename}:"
    _checkVolumeContent
else
    echo -n "creating docker volume "
    docker volume create "${volumename}"
    _copyCheckpoints
fi

# Build Container
docker build \
    --platform="${platform}" \
    --tag "${invokeai_tag}" \
    --build-arg project_name="${project_name}" \
    --build-arg conda_version="${invokeai_conda_version}" \
    --build-arg conda_prefix="${invokeai_conda_prefix}" \
    --build-arg conda_env_file="${invokeai_conda_env_file}" \
    --build-arg invokeai_git="${invokeai_git}" \
    --file ./docker-build/Dockerfile \
    .
@@ -1,8 +0,0 @@
#!/bin/bash
set -e

source "${CONDA_PREFIX}/etc/profile.d/conda.sh"
conda activate "${PROJECT_NAME}"

python scripts/invoke.py \
    ${@:---web --host=0.0.0.0}
@@ -1,13 +0,0 @@
#!/usr/bin/env bash

project_name=${PROJECT_NAME:-invokeai}
volumename=${VOLUMENAME:-${project_name}_data}
arch=${ARCH:-x86_64}
platform=${PLATFORM:-Linux/${arch}}
invokeai_tag=${INVOKEAI_TAG:-${project_name}-${arch}}

export project_name
export volumename
export arch
export platform
export invokeai_tag
@@ -1,15 +0,0 @@
#!/usr/bin/env bash
set -e

source ./docker-build/env.sh || echo "please run from repository root" || exit 1

docker run \
    --interactive \
    --tty \
    --rm \
    --platform "$platform" \
    --name "$project_name" \
    --hostname "$project_name" \
    --mount source="$volumename",target=/data \
    --publish 9090:9090 \
    "$invokeai_tag" ${1:+$@}
docker/Dockerfile (new file, 86 lines)
@@ -0,0 +1,86 @@
# syntax=docker/dockerfile:1
ARG PYTHON_VERSION=3.9
##################
##  base image  ##
##################
FROM python:${PYTHON_VERSION}-slim AS python-base

# prepare for buildkit cache
RUN rm -f /etc/apt/apt.conf.d/docker-clean

# Install necessary packages
RUN \
    --mount=type=cache,target=/var/cache/apt,sharing=locked \
    apt-get update \
    && apt-get install \
        -yqq \
        --no-install-recommends \
        libgl1-mesa-glx=20.3.* \
        libglib2.0-0=2.66.* \
        libopencv-dev=4.5.* \
    && rm -rf /var/lib/apt/lists/*

# set working directory and path
ARG APPDIR=/usr/src
ARG APPNAME=InvokeAI
WORKDIR ${APPDIR}
ENV PATH=${APPDIR}/${APPNAME}/bin:$PATH

#######################
##  build pyproject  ##
#######################
FROM python-base AS pyproject-builder
ENV PIP_USE_PEP517=1

# prepare for buildkit cache
ARG PIP_CACHE_DIR=/var/cache/buildkit/pip
ENV PIP_CACHE_DIR ${PIP_CACHE_DIR}
RUN mkdir -p ${PIP_CACHE_DIR}

# Install dependencies
RUN \
    --mount=type=cache,target=${PIP_CACHE_DIR} \
    --mount=type=cache,target=/var/cache/apt,sharing=locked \
    apt-get update \
    && apt-get install \
        -yqq \
        --no-install-recommends \
        build-essential=12.9 \
        gcc=4:10.2.* \
        python3-dev=3.9.* \
    && rm -rf /var/lib/apt/lists/*

# create virtual environment
RUN --mount=type=cache,target=${PIP_CACHE_DIR} \
    python3 -m venv "${APPNAME}" \
        --upgrade-deps

# copy sources
COPY --link . .

# install pyproject.toml
ARG PIP_EXTRA_INDEX_URL
ENV PIP_EXTRA_INDEX_URL ${PIP_EXTRA_INDEX_URL}
ARG PIP_PACKAGE=.
RUN --mount=type=cache,target=${PIP_CACHE_DIR} \
    "${APPDIR}/${APPNAME}/bin/pip" install ${PIP_PACKAGE}

# build patchmatch
RUN python3 -c "from patchmatch import patch_match"

#####################
##  runtime image  ##
#####################
FROM python-base AS runtime

# setup environment
COPY --from=pyproject-builder --link ${APPDIR}/${APPNAME} ${APPDIR}/${APPNAME}
ENV INVOKEAI_ROOT=/data
ENV INVOKE_MODEL_RECONFIGURE="--yes --default_only"

# set Entrypoint and default CMD
ENTRYPOINT [ "invokeai" ]
CMD [ "--web", "--host=0.0.0.0" ]
VOLUME [ "/data" ]

LABEL org.opencontainers.image.authors="mauwii@outlook.de"
docker/build.sh (executable file, 44 lines)
@@ -0,0 +1,44 @@
#!/usr/bin/env bash
set -e

# How to use: https://invoke-ai.github.io/InvokeAI/installation/INSTALL_DOCKER/#setup
# Some possible pip extra-index urls (cuda 11.7 is available without extra url):
#   CUDA 11.6: https://download.pytorch.org/whl/cu116
#   ROCm 5.2:  https://download.pytorch.org/whl/rocm5.2
#   CPU:       https://download.pytorch.org/whl/cpu
# as found on https://pytorch.org/get-started/locally/

SCRIPTDIR=$(dirname "$0")
cd "$SCRIPTDIR" || exit 1

source ./env.sh

DOCKERFILE=${INVOKE_DOCKERFILE:-Dockerfile}

# print the settings
echo -e "You are using these values:\n"
echo -e "Dockerfile:\t\t${DOCKERFILE}"
echo -e "index-url:\t\t${PIP_EXTRA_INDEX_URL:-none}"
echo -e "Volumename:\t\t${VOLUMENAME}"
echo -e "Platform:\t\t${PLATFORM}"
echo -e "Registry:\t\t${CONTAINER_REGISTRY}"
echo -e "Repository:\t\t${CONTAINER_REPOSITORY}"
echo -e "Container Tag:\t\t${CONTAINER_TAG}"
echo -e "Container Image:\t${CONTAINER_IMAGE}\n"

# Create docker volume
if [[ -n "$(docker volume ls -f name="${VOLUMENAME}" -q)" ]]; then
    echo -e "Volume already exists\n"
else
    echo -n "creating docker volume "
    docker volume create "${VOLUMENAME}"
fi

# Build Container
DOCKER_BUILDKIT=1 docker build \
    --platform="${PLATFORM}" \
    --tag="${CONTAINER_IMAGE}" \
    ${PIP_EXTRA_INDEX_URL:+--build-arg="PIP_EXTRA_INDEX_URL=${PIP_EXTRA_INDEX_URL}"} \
    ${PIP_PACKAGE:+--build-arg="PIP_PACKAGE=${PIP_PACKAGE}"} \
    --file="${DOCKERFILE}" \
    ..
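The `${VAR:+word}` expansions in the `docker build` call include a `--build-arg` only when the corresponding variable is set and non-empty; otherwise the whole word drops out of the command line. A small standalone illustration (the function name and values are hypothetical):

```shell
# ${VAR:+word} expands to word only when VAR is set and non-empty, so optional
# flags disappear from the command entirely when unconfigured.
emit_build_cmd() {
    echo docker build ${PIP_EXTRA_INDEX_URL:+--build-arg="PIP_EXTRA_INDEX_URL=${PIP_EXTRA_INDEX_URL}"} .
}

PIP_EXTRA_INDEX_URL=""
emit_build_cmd    # prints: docker build .

PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/cpu"
emit_build_cmd    # prints: docker build --build-arg=PIP_EXTRA_INDEX_URL=https://download.pytorch.org/whl/cpu .
```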
docker/env.sh (new file, 38 lines)
@@ -0,0 +1,38 @@
#!/usr/bin/env bash

if [[ -z "$PIP_EXTRA_INDEX_URL" ]]; then
    # Decide which container flavor to build if not specified
    if [[ -z "$CONTAINER_FLAVOR" ]] && python -c "import torch" &>/dev/null; then
        # Check for CUDA and ROCm
        CUDA_AVAILABLE=$(python -c "import torch;print(torch.cuda.is_available())")
        ROCM_AVAILABLE=$(python -c "import torch;print(torch.version.hip is not None)")
        if [[ "$(uname -s)" != "Darwin" && "${CUDA_AVAILABLE}" == "True" ]]; then
            CONTAINER_FLAVOR="cuda"
        elif [[ "$(uname -s)" != "Darwin" && "${ROCM_AVAILABLE}" == "True" ]]; then
            CONTAINER_FLAVOR="rocm"
        else
            CONTAINER_FLAVOR="cpu"
        fi
    fi
    # Set PIP_EXTRA_INDEX_URL based on container flavor
    if [[ "$CONTAINER_FLAVOR" == "rocm" ]]; then
        PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/rocm"
    elif [[ "$CONTAINER_FLAVOR" == "cpu" ]]; then
        PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/cpu"
    # elif [[ -z "$CONTAINER_FLAVOR" || "$CONTAINER_FLAVOR" == "cuda" ]]; then
    #     PIP_PACKAGE=${PIP_PACKAGE-".[xformers]"}
    fi
fi

# Variables shared by build.sh and run.sh
REPOSITORY_NAME="${REPOSITORY_NAME-$(basename "$(git rev-parse --show-toplevel)")}"
VOLUMENAME="${VOLUMENAME-"${REPOSITORY_NAME,,}_data"}"
ARCH="${ARCH-$(uname -m)}"
PLATFORM="${PLATFORM-Linux/${ARCH}}"
INVOKEAI_BRANCH="${INVOKEAI_BRANCH-$(git branch --show)}"
CONTAINER_REGISTRY="${CONTAINER_REGISTRY-"ghcr.io"}"
CONTAINER_REPOSITORY="${CONTAINER_REPOSITORY-"$(whoami)/${REPOSITORY_NAME}"}"
CONTAINER_FLAVOR="${CONTAINER_FLAVOR-cuda}"
CONTAINER_TAG="${CONTAINER_TAG-"${INVOKEAI_BRANCH##*/}-${CONTAINER_FLAVOR}"}"
CONTAINER_IMAGE="${CONTAINER_REGISTRY}/${CONTAINER_REPOSITORY}:${CONTAINER_TAG}"
CONTAINER_IMAGE="${CONTAINER_IMAGE,,}"
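The defaults above lean on two bash expansions: `${VAR-default}` substitutes only when `VAR` is unset (an empty-but-set value is kept), and `${VAR,,}` lowercases (bash 4+). A minimal sketch with a sample repository name:

```shell
#!/usr/bin/env bash
# ${VAR-default} keeps any existing value and only substitutes when VAR is
# unset; ${VAR,,} lowercases the result (requires bash 4 or newer).
unset VOLUMENAME
REPOSITORY_NAME="InvokeAI"
VOLUMENAME="${VOLUMENAME-"${REPOSITORY_NAME,,}_data"}"
echo "$VOLUMENAME"   # prints: invokeai_data
```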
docker/run.sh (executable file, 31 lines)
@@ -0,0 +1,31 @@
#!/usr/bin/env bash
set -e

# How to use: https://invoke-ai.github.io/InvokeAI/installation/INSTALL_DOCKER/#run-the-container
# IMPORTANT: You need to have a token on huggingface.co to be able to download the checkpoints!!!

SCRIPTDIR=$(dirname "$0")
cd "$SCRIPTDIR" || exit 1

source ./env.sh

echo -e "You are using these values:\n"
echo -e "Volumename:\t${VOLUMENAME}"
echo -e "Invokeai_tag:\t${CONTAINER_IMAGE}"
echo -e "local Models:\t${MODELSPATH:-unset}\n"

docker run \
    --interactive \
    --tty \
    --rm \
    --platform="${PLATFORM}" \
    --name="${REPOSITORY_NAME,,}" \
    --hostname="${REPOSITORY_NAME,,}" \
    --mount=source="${VOLUMENAME}",target=/data \
    ${MODELSPATH:+-u "$(id -u):$(id -g)"} \
    ${MODELSPATH:+--mount="type=bind,source=${MODELSPATH},target=/data/models"} \
    ${HUGGING_FACE_HUB_TOKEN:+--env="HUGGING_FACE_HUB_TOKEN=${HUGGING_FACE_HUB_TOKEN}"} \
    --publish=9090:9090 \
    --cap-add=sys_nice \
    ${GPU_FLAGS:+--gpus="${GPU_FLAGS}"} \
    "${CONTAINER_IMAGE}" ${1:+$@}
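The trailing `${1:+$@}` forwards every CLI argument to the container, and expands to nothing at all when no arguments were given, so the image's default CMD applies. A standalone illustration (the helper function is hypothetical):

```shell
# ${1:+$@} expands to all positional parameters when at least one is set,
# and to nothing otherwise.
count_args() { echo "argc=$#"; }

set -- --web --host=0.0.0.0
count_args ${1:+$@}   # prints: argc=2

set --
count_args ${1:+$@}   # prints: argc=0
```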
@@ -4,133 +4,425 @@ title: Changelog

# :octicons-log-16: **Changelog**

## v2.3.0 <small>(15 January 2023)</small>

- update mac instructions to use invokeai for env name by @willwillems in https://github.com/invoke-ai/InvokeAI/pull/1030
- Update .gitignore by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/1040
- reintroduce fix for m1 from https://github.com/invoke-ai/InvokeAI/pull/579 missing after merge by @skurovec in https://github.com/invoke-ai/InvokeAI/pull/1056
- Update Stable_Diffusion_AI_Notebook.ipynb (Take 2) by @ChloeL19 in https://github.com/invoke-ai/InvokeAI/pull/1060
- Print out the device type which is used by @manzke in https://github.com/invoke-ai/InvokeAI/pull/1073
- Hires Addition by @hipsterusername in https://github.com/invoke-ai/InvokeAI/pull/1063
- fix for "1 leaked semaphore objects to clean up at shutdown" on M1 by @skurovec in https://github.com/invoke-ai/InvokeAI/pull/1081
- Forward dream.py to invoke.py using the same interpreter, add deprecation warning by @db3000 in https://github.com/invoke-ai/InvokeAI/pull/1077
- fix noisy images at high step counts by @lstein in https://github.com/invoke-ai/InvokeAI/pull/1086
- Generalize facetool strength argument by @db3000 in https://github.com/invoke-ai/InvokeAI/pull/1078
- Enable fast switching among models at the invoke> command line by @lstein in https://github.com/invoke-ai/InvokeAI/pull/1066
- Fix Typo, committed changing ldm environment to invokeai by @jdries3 in https://github.com/invoke-ai/InvokeAI/pull/1095
- Update generate.py by @unreleased in https://github.com/invoke-ai/InvokeAI/pull/1109
- Update 'ldm' env to 'invokeai' in troubleshooting steps by @19wolf in https://github.com/invoke-ai/InvokeAI/pull/1125
- Fixed documentation typos and resolved merge conflicts by @rupeshs in https://github.com/invoke-ai/InvokeAI/pull/1123
- Fix broken doc links, fix malaprop in the project subtitle by @majick in https://github.com/invoke-ai/InvokeAI/pull/1131
- Only output facetool parameters if enhancing faces by @db3000 in https://github.com/invoke-ai/InvokeAI/pull/1119
- Update gitignore to ignore codeformer weights at new location by @spezialspezial in https://github.com/invoke-ai/InvokeAI/pull/1136
- fix links to point to invoke-ai.github.io #1117 by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/1143
- Rework-mkdocs by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/1144
- add option to CLI and pngwriter that allows user to set PNG compression level by @lstein in https://github.com/invoke-ai/InvokeAI/pull/1127
- Fix img2img DDIM index out of bound by @wfng92 in https://github.com/invoke-ai/InvokeAI/pull/1137
- Fix gh actions by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/1128
|
|
||||||
- Only output facetool parameters if enhancing faces by @db3000 in https://github.com/invoke-ai/InvokeAI/pull/1119
|
|
||||||
- add option to CLI and pngwriter that allows user to set PNG compression level by @lstein in https://github.com/invoke-ai/InvokeAI/pull/1127
|
|
||||||
- Fix img2img DDIM index out of bound by @wfng92 in https://github.com/invoke-ai/InvokeAI/pull/1137
|
|
||||||
- Add text prompt to inpaint mask support by @lstein in https://github.com/invoke-ai/InvokeAI/pull/1133
|
|
||||||
- Respect http[s] protocol when making socket.io middleware by @damian0815 in https://github.com/invoke-ai/InvokeAI/pull/976
|
|
||||||
- WebUI: Adds Codeformer support by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/1151
|
|
||||||
- Skips normalizing prompts for web UI metadata by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/1165
|
|
||||||
- Add Asymmetric Tiling by @carson-katri in https://github.com/invoke-ai/InvokeAI/pull/1132
|
|
||||||
- Web UI: Increases max CFG Scale to 200 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/1172
|
|
||||||
- Corrects color channels in face restoration; Fixes #1167 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/1175
|
|
||||||
- Flips channels using array slicing instead of using OpenCV by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/1178
|
|
||||||
- Fix typo in docs: s/Formally/Formerly by @noodlebox in https://github.com/invoke-ai/InvokeAI/pull/1176
|
|
||||||
- fix clipseg loading problems by @lstein in https://github.com/invoke-ai/InvokeAI/pull/1177
|
|
||||||
- Correct color channels in upscale using array slicing by @wfng92 in https://github.com/invoke-ai/InvokeAI/pull/1181
|
|
||||||
- Web UI: Filters existing images when adding new images; Fixes #1085 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/1171
|
|
||||||
- fix a number of bugs in textual inversion by @lstein in https://github.com/invoke-ai/InvokeAI/pull/1190
|
|
||||||
- Improve !fetch, add !replay command by @ArDiouscuros in https://github.com/invoke-ai/InvokeAI/pull/882
|
|
||||||
- Fix generation of image with s>1000 by @holstvoogd in https://github.com/invoke-ai/InvokeAI/pull/951
|
|
||||||
- Web UI: Gallery improvements by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/1198
|
|
||||||
- Update CLI.md by @krummrey in https://github.com/invoke-ai/InvokeAI/pull/1211
|
|
||||||
- outcropping improvements by @lstein in https://github.com/invoke-ai/InvokeAI/pull/1207
|
|
||||||
- add support for loading VAE autoencoders by @lstein in https://github.com/invoke-ai/InvokeAI/pull/1216
|
|
||||||
- remove duplicate fix_func for MPS by @wfng92 in https://github.com/invoke-ai/InvokeAI/pull/1210
|
|
||||||
- Metadata storage and retrieval fixes by @lstein in https://github.com/invoke-ai/InvokeAI/pull/1204
|
|
||||||
- nix: add shell.nix file by @Cloudef in https://github.com/invoke-ai/InvokeAI/pull/1170
|
|
||||||
- Web UI: Changes vite dist asset paths to relative by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/1185
|
|
||||||
- Web UI: Removes isDisabled from PromptInput by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/1187
|
|
||||||
- Allow user to generate images with initial noise as on M1 / mps system by @ArDiouscuros in https://github.com/invoke-ai/InvokeAI/pull/981
|
|
||||||
- feat: adding filename format template by @plucked in https://github.com/invoke-ai/InvokeAI/pull/968
|
|
||||||
- Web UI: Fixes broken bundle by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/1242
|
|
||||||
- Support runwayML custom inpainting model by @lstein in https://github.com/invoke-ai/InvokeAI/pull/1243
|
|
||||||
- Update IMG2IMG.md by @talitore in https://github.com/invoke-ai/InvokeAI/pull/1262
|
|
||||||
- New dockerfile - including a build- and a run- script as well as a GH-Action by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/1233
|
|
||||||
- cut over from karras to model noise schedule for higher steps by @lstein in https://github.com/invoke-ai/InvokeAI/pull/1222
|
|
||||||
- Prompt tweaks by @lstein in https://github.com/invoke-ai/InvokeAI/pull/1268
|
|
||||||
- Outpainting implementation by @Kyle0654 in https://github.com/invoke-ai/InvokeAI/pull/1251
|
|
||||||
- fixing aspect ratio on hires by @tjennings in https://github.com/invoke-ai/InvokeAI/pull/1249
|
|
||||||
- Fix-build-container-action by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/1274
|
|
||||||
- handle all unicode characters by @damian0815 in https://github.com/invoke-ai/InvokeAI/pull/1276
|
|
||||||
- adds models.user.yml to .gitignore by @JakeHL in https://github.com/invoke-ai/InvokeAI/pull/1281
|
|
||||||
- remove debug branch, set fail-fast to false by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/1284
|
|
||||||
- Protect-secrets-on-pr by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/1285
|
|
||||||
- Web UI: Adds initial inpainting implementation by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/1225
|
|
||||||
- fix environment-mac.yml - tested on x64 and arm64 by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/1289
|
|
||||||
- Use proper authentication to download model by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/1287
|
|
||||||
- Prevent indexing error for mode RGB by @spezialspezial in https://github.com/invoke-ai/InvokeAI/pull/1294
|
|
||||||
- Integrate sd-v1-5 model into test matrix (easily expandable), remove unecesarry caches by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/1293
|
|
||||||
- add --no-interactive to preload_models step by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/1302
|
|
||||||
- 1-click installer and updater. Uses micromamba to install git and conda into a contained environment (if necessary) before running the normal installation script by @cmdr2 in https://github.com/invoke-ai/InvokeAI/pull/1253
|
|
||||||
- preload_models.py script downloads the weight files by @lstein in https://github.com/invoke-ai/InvokeAI/pull/1290
|
|
||||||
|
|
||||||
## v2.3.0 <small>(15 January 2023)</small>

**Transition to diffusers**

Version 2.3 provides support for both the traditional `.ckpt` weight
checkpoint files as well as the HuggingFace `diffusers` format. This
introduces several changes you should know about.

1. The models.yaml format has been updated. There are now two
   different types of configuration stanza. The traditional ckpt
   one will look like this, with a `format` of `ckpt` and a
   `weights` field that points to the absolute or ROOTDIR-relative
   location of the ckpt file.

   ```
   inpainting-1.5:
     description: RunwayML SD 1.5 model optimized for inpainting (4.27 GB)
     repo_id: runwayml/stable-diffusion-inpainting
     format: ckpt
     width: 512
     height: 512
     weights: models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt
     config: configs/stable-diffusion/v1-inpainting-inference.yaml
     vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
   ```

   A configuration stanza for a diffusers model hosted at HuggingFace
   will look like this, with a `format` of `diffusers` and a `repo_id`
   that points to the repository ID of the model on HuggingFace:

   ```
   stable-diffusion-2.1:
     description: Stable Diffusion version 2.1 diffusers model (5.21 GB)
     repo_id: stabilityai/stable-diffusion-2-1
     format: diffusers
   ```

   A configuration stanza for a diffusers model stored locally should
   look like this, with a `format` of `diffusers`, but a `path` field
   that points at the directory that contains `model_index.json`:

   ```
   waifu-diffusion:
     description: Latest waifu diffusion 1.4
     format: diffusers
     path: models/diffusers/hakurei-waifu-diffusion-1.4
   ```

2. In order of precedence, InvokeAI will now use HF_HOME, then
   XDG_CACHE_HOME, then finally default to `ROOTDIR/models` to
   store HuggingFace diffusers models.

   Consequently, the format of the models directory has changed to
   mimic the HuggingFace cache directory. When HF_HOME and XDG_CACHE_HOME
   are not set, diffusers models are now automatically downloaded
   and retrieved from the directory `ROOTDIR/models/diffusers`,
   while other models are stored in the directory
   `ROOTDIR/models/hub`. This organization is the same as that used
   by HuggingFace for its cache management.

   This allows you to share diffusers and ckpt model files easily with
   other machine learning applications that use the HuggingFace
   libraries. To do this, set the environment variable HF_HOME
   before starting up InvokeAI to tell it what directory to
   cache models in. To tell InvokeAI to use the standard HuggingFace
   cache directory, you would set HF_HOME like this (Linux/Mac):

   `export HF_HOME=~/.cache/huggingface`

   Both HuggingFace and InvokeAI will fall back to the XDG_CACHE_HOME
   environment variable if HF_HOME is not set; this path
   takes precedence over `ROOTDIR/models` to allow for the same sharing
   with other machine learning applications that use HuggingFace
   libraries.

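The precedence rule above can be expressed compactly. This is an illustrative sketch, not InvokeAI's actual implementation; the function name and the `huggingface` subdirectory convention under XDG_CACHE_HOME are assumptions of this example:

```python
import os
from pathlib import Path

def diffusers_cache_root(rootdir: Path, env=None) -> Path:
    """Resolve where diffusers models are cached, following the stated
    precedence: HF_HOME, then XDG_CACHE_HOME, then ROOTDIR/models."""
    env = os.environ if env is None else env
    if env.get("HF_HOME"):
        return Path(env["HF_HOME"])
    if env.get("XDG_CACHE_HOME"):
        # HuggingFace keeps its cache in a `huggingface` subdirectory here.
        return Path(env["XDG_CACHE_HOME"]) / "huggingface"
    return rootdir / "models"

print(diffusers_cache_root(Path("/opt/invokeai"), env={}))
```
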
3. If you upgrade to InvokeAI 2.3.* from an earlier version, there
   will be a one-time migration from the old models directory format
   to the new one. You will see a message about this the first time
   you start `invoke.py`.

4. Both the front and back ends of the model manager have been
   rewritten to accommodate diffusers. You can import models using
   their local file path, using their URLs, or their HuggingFace
   repo_ids. On the command line, all these syntaxes work:

   ```
   !import_model stabilityai/stable-diffusion-2-1-base
   !import_model /opt/sd-models/sd-1.4.ckpt
   !import_model https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/blob/main/PaperCut_v1.ckpt
   ```

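A rough way to see how the three `!import_model` syntaxes above could be distinguished (a hypothetical heuristic written for illustration, not the actual CLI code):

```python
def classify_import_arg(arg: str) -> str:
    """Guess which of the three !import_model syntaxes `arg` uses."""
    # Check URLs first: a URL may also end in .ckpt.
    if arg.startswith(("http://", "https://")):
        return "url"
    # Local checkpoint files: path-like arguments or .ckpt filenames.
    if arg.startswith(("/", "./", "../")) or arg.endswith(".ckpt"):
        return "local file"
    # Anything else is treated as a HuggingFace repo_id like "owner/model".
    return "repo_id"

print(classify_import_arg("stabilityai/stable-diffusion-2-1-base"))  # repo_id
print(classify_import_arg("/opt/sd-models/sd-1.4.ckpt"))             # local file
```
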
**KNOWN BUGS (15 January 2023)**

1. On CUDA systems, the 768 pixel stable-diffusion-2.0 and
   stable-diffusion-2.1 models can only be run as `diffusers` models
   when the `xformers` library is installed and configured. Without
   `xformers`, InvokeAI returns black images.

2. Inpainting and outpainting have regressed in quality.

Both these issues are being actively worked on.

## v2.2.4 <small>(11 December 2022)</small>

**the `invokeai` directory**

Previously there were two directories to worry about: the directory that
contained the InvokeAI source code and the launcher scripts, and the `invokeai`
directory that contained the models files, embeddings, configuration and
outputs. With the 2.2.4 release, this dual system is done away with, and
everything, including the `invoke.bat` and `invoke.sh` launcher scripts, now
lives in a directory named `invokeai`. By default this directory is located in
your home directory (e.g. `\Users\yourname` on Windows), but you can select
where it goes at install time.

After installation, you can delete the install directory (the one that the zip
file creates when it unpacks). Do **not** delete or move the `invokeai`
directory!

**Initialization file `invokeai/invokeai.init`**

You can place frequently-used startup options in this file, such as the default
number of steps or your preferred sampler. To keep everything in one place, this
file has now been moved into the `invokeai` directory and is named
`invokeai.init`.

**To update from Version 2.2.3**

The easiest route is to download and unpack one of the 2.2.4 installer files.
When it asks you for the location of the `invokeai` runtime directory, respond
with the path to the directory that contains your 2.2.3 `invokeai`. That is, if
`invokeai` lives at `C:\Users\fred\invokeai`, then answer with `C:\Users\fred`
and answer "Y" when asked if you want to reuse the directory.

The `update.sh` (`update.bat`) script that came with the 2.2.3 source installer
does not know about the new directory layout and won't be fully functional.

**To update to 2.2.5 (and beyond) there's now an update path**

As they become available, you can update to more recent versions of InvokeAI
using an `update.sh` (`update.bat`) script located in the `invokeai` directory.
Running it without any arguments will install the most recent version of
InvokeAI. Alternatively, you can install a specific release by running the
`update.sh` script with an argument in the command shell. The argument is the
path to the desired release's zip file, which you can find by clicking on the
green "Code" button on this repository's home page.

**Other 2.2.4 Improvements**

- Fix InvokeAI GUI initialization by @addianto in #1687
- fix link in documentation by @lstein in #1728
- Fix broken link by @ShawnZhong in #1736
- Remove reference to binary installer by @lstein in #1731
- documentation fixes for 2.2.3 by @lstein in #1740
- Modify installer links to point closer to the source installer by @ebr in
  #1745
- add documentation warning about 1650/60 cards by @lstein in #1753
- Fix Linux source URL in installation docs by @andybearman in #1756
- Make install instructions discoverable in readme by @damian0815 in #1752
- typo fix by @ofirkris in #1755
- Non-interactive model download (support HUGGINGFACE_TOKEN) by @ebr in #1578
- fix(srcinstall): shell installer - cp scripts instead of linking by @tildebyte
  in #1765
- stability and usage improvements to binary & source installers by @lstein in
  #1760
- fix off-by-one bug in cross-attention-control by @damian0815 in #1774
- Eventually update APP_VERSION to 2.2.3 by @spezialspezial in #1768
- invoke script cds to its location before running by @lstein in #1805
- Make PaperCut and VoxelArt models load again by @lstein in #1730
- Fix --embedding_directory / --embedding_path not working by @blessedcoolant in
  #1817
- Clean up readme by @hipsterusername in #1820
- Optimized Docker build with support for external working directory by @ebr in
  #1544
- disable pushing the cloud container by @mauwii in #1831
- Fix docker push github action and expand with additional metadata by @ebr in
  #1837
- Fix Broken Link To Notebook by @VedantMadane in #1821
- Account for flat models by @spezialspezial in #1766
- Update invoke.bat.in isolate environment variables by @lynnewu in #1833
- Arch Linux Specific PatchMatch Instructions & fixing conda install on linux by
  @SammCheese in #1848
- Make force free GPU memory work in img2img by @addianto in #1844
- New installer by @lstein

## v2.2.3 <small>(2 December 2022)</small>

!!! Note

    This point release removes references to the binary installer from the
    installation guide. The binary installer is not stable at the current
    time. First time users are encouraged to use the "source" installer as
    described in [Installing InvokeAI with the Source Installer](installation/deprecated_documentation/INSTALL_SOURCE.md)

With InvokeAI 2.2, this project now provides enthusiasts and professionals a
robust workflow solution for creating AI-generated and human facilitated
compositions. Additional enhancements have been made as well, improving safety,
ease of use, and installation.

Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a
512x768 image (and less for smaller images), and is compatible with
Windows/Linux/Mac (M1 & M2).

You can see the [release video](https://youtu.be/hIYBfDtKaus) here, which
introduces the main WebUI enhancement for version 2.2 -
[The Unified Canvas](features/UNIFIED_CANVAS.md). This new workflow is the
biggest enhancement added to the WebUI to date, and unlocks a stunning amount of
potential for users to create and iterate on their creations. The following
sections describe what's new for InvokeAI.

## v2.2.2 <small>(30 November 2022)</small>

!!! note

    The binary installer is not ready for prime time. First time users are
    recommended to install via the "source" installer accessible through the
    links at the bottom of this page.

With InvokeAI 2.2, this project now provides enthusiasts and professionals a
robust workflow solution for creating AI-generated and human facilitated
compositions. Additional enhancements have been made as well, improving safety,
ease of use, and installation.

Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a
512x768 image (and less for smaller images), and is compatible with
Windows/Linux/Mac (M1 & M2).

You can see the [release video](https://youtu.be/hIYBfDtKaus) here, which
introduces the main WebUI enhancement for version 2.2 -
[The Unified Canvas](https://invoke-ai.github.io/InvokeAI/features/UNIFIED_CANVAS/).
This new workflow is the biggest enhancement added to the WebUI to date, and
unlocks a stunning amount of potential for users to create and iterate on their
creations. The following sections describe what's new for InvokeAI.

## v2.2.0 <small>(2 December 2022)</small>

With InvokeAI 2.2, this project now provides enthusiasts and professionals a
robust workflow solution for creating AI-generated and human facilitated
compositions. Additional enhancements have been made as well, improving safety,
ease of use, and installation.

Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a
512x768 image (and less for smaller images), and is compatible with
Windows/Linux/Mac (M1 & M2).

You can see the [release video](https://youtu.be/hIYBfDtKaus) here, which
introduces the main WebUI enhancement for version 2.2 -
[The Unified Canvas](features/UNIFIED_CANVAS.md). This new workflow is the
biggest enhancement added to the WebUI to date, and unlocks a stunning amount of
potential for users to create and iterate on their creations. The following
sections describe what's new for InvokeAI.

## v2.1.3 <small>(13 November 2022)</small>

- A choice of installer scripts that automate installation and configuration.
  See [Installation](installation/index.md).
- A streamlined manual installation process that works for both Conda and
  PIP-only installs. See
  [Manual Installation](installation/020_INSTALL_MANUAL.md).
- The ability to save frequently-used startup options (model to load, steps,
  sampler, etc) in a `.invokeai` file. See [Client](features/CLI.md)
- Support for AMD GPU cards (non-CUDA) on Linux machines.
- Multiple bugs and edge cases squashed.

## v2.1.0 <small>(2 November 2022)</small>

- update mac instructions to use invokeai for env name by @willwillems in #1030
- Update .gitignore by @blessedcoolant in #1040
- reintroduce fix for m1 from #579 missing after merge by @skurovec in #1056
- Update Stable_Diffusion_AI_Notebook.ipynb (Take 2) by @ChloeL19 in #1060
- Print out the device type which is used by @manzke in #1073
- Hires Addition by @hipsterusername in #1063
- fix for "1 leaked semaphore objects to clean up at shutdown" on M1 by
  @skurovec in #1081
- Forward dream.py to invoke.py using the same interpreter, add deprecation
  warning by @db3000 in #1077
- fix noisy images at high step counts by @lstein in #1086
- Generalize facetool strength argument by @db3000 in #1078
- Enable fast switching among models at the invoke> command line by @lstein in
  #1066
- Fix Typo, committed changing ldm environment to invokeai by @jdries3 in #1095
- Update generate.py by @unreleased in #1109
- Update 'ldm' env to 'invokeai' in troubleshooting steps by @19wolf in #1125
- Fixed documentation typos and resolved merge conflicts by @rupeshs in #1123
- Fix broken doc links, fix malaprop in the project subtitle by @majick in #1131
- Only output facetool parameters if enhancing faces by @db3000 in #1119
- Update gitignore to ignore codeformer weights at new location by
  @spezialspezial in #1136
- fix links to point to invoke-ai.github.io #1117 by @mauwii in #1143
- Rework-mkdocs by @mauwii in #1144
- add option to CLI and pngwriter that allows user to set PNG compression level
  by @lstein in #1127
- Fix img2img DDIM index out of bound by @wfng92 in #1137
- Fix gh actions by @mauwii in #1128
- update mac instructions to use invokeai for env name by @willwillems in #1030
- Update .gitignore by @blessedcoolant in #1040
- reintroduce fix for m1 from #579 missing after merge by @skurovec in #1056
- Update Stable_Diffusion_AI_Notebook.ipynb (Take 2) by @ChloeL19 in #1060
- Print out the device type which is used by @manzke in #1073
- Hires Addition by @hipsterusername in #1063
- fix for "1 leaked semaphore objects to clean up at shutdown" on M1 by
  @skurovec in #1081
- Forward dream.py to invoke.py using the same interpreter, add deprecation
  warning by @db3000 in #1077
- fix noisy images at high step counts by @lstein in #1086
- Generalize facetool strength argument by @db3000 in #1078
- Enable fast switching among models at the invoke> command line by @lstein in
  #1066
- Fix Typo, committed changing ldm environment to invokeai by @jdries3 in #1095
- Fixed documentation typos and resolved merge conflicts by @rupeshs in #1123
- Only output facetool parameters if enhancing faces by @db3000 in #1119
- add option to CLI and pngwriter that allows user to set PNG compression level
  by @lstein in #1127
- Fix img2img DDIM index out of bound by @wfng92 in #1137
- Add text prompt to inpaint mask support by @lstein in #1133
- Respect http[s] protocol when making socket.io middleware by @damian0815 in
  #976
- WebUI: Adds Codeformer support by @psychedelicious in #1151
- Skips normalizing prompts for web UI metadata by @psychedelicious in #1165
- Add Asymmetric Tiling by @carson-katri in #1132
- Web UI: Increases max CFG Scale to 200 by @psychedelicious in #1172
- Corrects color channels in face restoration; Fixes #1167 by @psychedelicious
  in #1175
- Flips channels using array slicing instead of using OpenCV by @psychedelicious
  in #1178
- Fix typo in docs: s/Formally/Formerly by @noodlebox in #1176
- fix clipseg loading problems by @lstein in #1177
- Correct color channels in upscale using array slicing by @wfng92 in #1181
- Web UI: Filters existing images when adding new images; Fixes #1085 by
  @psychedelicious in #1171
- fix a number of bugs in textual inversion by @lstein in #1190
- Improve !fetch, add !replay command by @ArDiouscuros in #882
- Fix generation of image with s>1000 by @holstvoogd in #951
- Web UI: Gallery improvements by @psychedelicious in #1198
- Update CLI.md by @krummrey in #1211
- outcropping improvements by @lstein in #1207
- add support for loading VAE autoencoders by @lstein in #1216
- remove duplicate fix_func for MPS by @wfng92 in #1210
- Metadata storage and retrieval fixes by @lstein in #1204
- nix: add shell.nix file by @Cloudef in #1170
- Web UI: Changes vite dist asset paths to relative by @psychedelicious in #1185
- Web UI: Removes isDisabled from PromptInput by @psychedelicious in #1187
- Allow user to generate images with initial noise as on M1 / mps system by
  @ArDiouscuros in #981
- feat: adding filename format template by @plucked in #968
- Web UI: Fixes broken bundle by @psychedelicious in #1242
- Support runwayML custom inpainting model by @lstein in #1243
- Update IMG2IMG.md by @talitore in #1262
- New dockerfile - including a build- and a run- script as well as a GH-Action
  by @mauwii in #1233
- cut over from karras to model noise schedule for higher steps by @lstein in
  #1222
- Prompt tweaks by @lstein in #1268
- Outpainting implementation by @Kyle0654 in #1251
- fixing aspect ratio on hires by @tjennings in #1249
- Fix-build-container-action by @mauwii in #1274
- handle all unicode characters by @damian0815 in #1276
- adds models.user.yml to .gitignore by @JakeHL in #1281
- remove debug branch, set fail-fast to false by @mauwii in #1284
- Protect-secrets-on-pr by @mauwii in #1285
- Web UI: Adds initial inpainting implementation by @psychedelicious in #1225
- fix environment-mac.yml - tested on x64 and arm64 by @mauwii in #1289
- Use proper authentication to download model by @mauwii in #1287
- Prevent indexing error for mode RGB by @spezialspezial in #1294
- Integrate sd-v1-5 model into test matrix (easily expandable), remove
  unnecessary caches by @mauwii in #1293
- add --no-interactive to configure_invokeai step by @mauwii in #1302
- 1-click installer and updater. Uses micromamba to install git and conda into a
  contained environment (if necessary) before running the normal installation
  script by @cmdr2 in #1253
- configure_invokeai.py script downloads the weight files by @lstein in #1290

## v2.0.1 <small>(13 October 2022)</small>

- fix noisy images at high step count when using k* samplers
- dream.py script now calls invoke.py module directly rather than via a new
  python process (which could break the environment)

## v2.0.0 <small>(9 October 2022)</small>

- `dream.py` script renamed `invoke.py`. A `dream.py` script wrapper remains for
  backward compatibility.
- Completely new WebGUI - launch with `python3 scripts/invoke.py --web`
- Support for [inpainting](features/INPAINTING.md) and
  [outpainting](features/OUTPAINTING.md)
- img2img runs on all k\* samplers
- Support for
  [negative prompts](features/PROMPTS.md#negative-and-unconditioned-prompts)
- Support for CodeFormer face reconstruction
- Support for Textual Inversion on Macintoshes
- Support in both WebGUI and CLI for
  [post-processing of previously-generated images](features/POSTPROCESS.md)
  using facial reconstruction, ESRGAN upscaling, outcropping (similar to DALL-E
  infinite canvas), and "embiggen" upscaling. See the `!fix` command.
- New `--hires` option on the `invoke>` line allows
  [larger images to be created without duplicating elements](features/CLI.md#this-is-an-example-of-txt2img),
  at the cost of some performance.
- New `--perlin` and `--threshold` options allow you to add and control
  variation during image generation (see
  [Thresholding and Perlin Noise Initialization](features/OTHER.md#thresholding-and-perlin-noise-initialization-options))
- Extensive metadata now written into PNG files, allowing reliable regeneration
  of images and tweaking of previous settings.
- Command-line completion in `invoke.py` now works on Windows, Linux and Mac
  platforms.
- Improved [command-line completion behavior](features/CLI.md). New commands
  added:
    - List command-line history with `!history`
    - Search command-line history with `!search`
    - Clear history with `!clear`
- Deprecated `--full_precision` / `-F`. Simply omit it and `invoke.py` will
  auto-configure. To switch away from auto, use the new flag like
  `--precision=float32`.

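The auto-precision behavior described in the last bullet can be pictured with a small sketch. This is a hypothetical illustration of an "auto" policy (half precision on CUDA GPUs to save VRAM, full precision elsewhere), not InvokeAI's actual implementation; `choose_precision` is an invented name:

```python
def choose_precision(device: str, requested: str = "auto") -> str:
    """Pick a precision string for generation.

    Hypothetical sketch: an explicit flag (e.g. --precision=float32)
    always wins; otherwise "auto" picks float16 on CUDA and float32
    on CPU/other devices, where half precision is unsupported or slow.
    """
    if requested != "auto":
        return requested  # user passed an explicit --precision flag
    return "float16" if device.startswith("cuda") else "float32"


print(choose_precision("cpu"))                # float32
print(choose_precision("cuda:0"))             # float16
print(choose_precision("cuda:0", "float32"))  # float32
```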
## v1.14 <small>(11 September 2022)</small>

- Memory optimizations for small-RAM cards. 512x512 now possible on 4 GB GPUs.
- Full support for Apple hardware with M1 or M2 chips.
- Add "seamless mode" for circular tiling of image. Generates beautiful effects
  ([prixt](https://github.com/prixt)).
- Inpainting support.
- Improved web server GUI.
- Lots of code and documentation cleanups.

## v1.13 <small>(3 September 2022)</small>

- Support image variations (see [VARIATIONS](features/VARIATIONS.md))
  ([Kevin Gibbons](https://github.com/bakkot) and many contributors and
  reviewers)
- Supports a Google Colab notebook for a standalone server running on Google
  hardware [Arturo Mendivil](https://github.com/artmen1516)
- WebUI supports GFPGAN/ESRGAN facial reconstruction and upscaling
  [Kevin Gibbons](https://github.com/bakkot)
- WebUI supports incremental display of in-progress images during generation
  [Kevin Gibbons](https://github.com/bakkot)
- A new configuration file scheme that allows new models (including upcoming
  stable-diffusion-v1.5) to be added without altering the code.
  ([David Wager](https://github.com/maddavid12))
- Can specify --grid on the invoke.py command line as the default.
- Miscellaneous internal bug and stability fixes.
- Works on M1 Apple hardware.

---

- Improved file handling, including ability to read prompts from standard input.
  (kudos to [Yunsaki](https://github.com/yunsaki))
- The web server is now integrated with the invoke.py script. Invoke by adding
  --web to the invoke.py command arguments.
- Face restoration and upscaling via GFPGAN and Real-ESRGAN are now
  automatically enabled if the GFPGAN directory is located as a sibling to
  Stable Diffusion. VRAM requirements are modestly reduced. Thanks to both
  [Blessedcoolant](https://github.com/blessedcoolant) and
  [Oceanswave](https://github.com/oceanswave) for their work on this.
- You can now swap samplers on the invoke> command line.
  [Blessedcoolant](https://github.com/blessedcoolant)

---

## v1.11 <small>(26 August 2022)</small>

- NEW FEATURE: Support upscaling and face enhancement using the GFPGAN module.
  (kudos to [Oceanswave](https://github.com/Oceanswave))
- You can now specify a seed of -1 to use the previous image's seed, -2 to use
  the seed for the image generated before that, etc. Seed memory only extends
  back to the previous command, but will work on all images generated with the
  -n# switch.
- Variant generation support temporarily disabled pending a more general
  solution.
- Created a feature branch named **yunsaki-morphing-invoke** which adds
  experimental support for iteratively modifying the prompt and its parameters.
  Please see
  [Pull Request #86](https://github.com/lstein/stable-diffusion/pull/86) for a
  synopsis of how this works. Note that when this feature is eventually added
  to the main branch, it may be modified significantly.

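The relative-seed rule above amounts to a negative-index lookup into the seeds produced by the previous command. The following is a hypothetical illustration of that rule, not code from the project; `resolve_seed` is an invented name:

```python
def resolve_seed(seed, previous_seeds):
    """Resolve a seed against the seeds of the previous command.

    Hypothetical sketch: -1 means the last image's seed, -2 the one
    before that, and so on; non-negative seeds are used as given.
    """
    if seed >= 0:
        return seed
    return previous_seeds[seed]  # Python negative indexing walks backwards


# Seeds produced by a previous command run with -n3:
history = [1863159593, 1151955949, 2736230502]
print(resolve_seed(-1, history))  # 2736230502 (the most recent image)
print(resolve_seed(-3, history))  # 1863159593
```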
---

## v1.10 <small>(25 August 2022)</small>

- A barebones but fully functional interactive web server for online generation
  of txt2img and img2img.

---

## v1.09 <small>(24 August 2022)</small>

- A new -v option allows you to generate multiple variants of an initial image
  in img2img mode. (kudos to [Oceanswave](https://github.com/Oceanswave).
  [See this discussion in the PR for examples and details on use](https://github.com/lstein/stable-diffusion/pull/71#issuecomment-1226700810))
- Added ability to personalize text to image generation (kudos to
  [Oceanswave](https://github.com/Oceanswave) and
  [nicolai256](https://github.com/nicolai256))
- Enabled all of the samplers from k_diffusion

---

## v1.08 <small>(24 August 2022)</small>

- Escape single quotes on the invoke> command before trying to parse. This
  avoids parse errors.
- Removed instruction to get Python3.8 as first step in Windows install.
  Anaconda3 does it for you.
- Added bounds checks for numeric arguments that could cause crashes.

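The quote-escaping fix can be illustrated with Python's `shlex` tokenizer: an unpaired apostrophe (as in "sailor's") makes `shlex.split` raise `ValueError("No closing quotation")`, so single quotes are escaped before parsing. This is a sketch of the idea only; `parse_invoke_line` is a hypothetical helper, not the project's actual parser:

```python
import shlex


def parse_invoke_line(line: str) -> list:
    """Tokenize an invoke> command line.

    Hypothetical sketch: escape every single quote so a stray
    apostrophe cannot trip shlex's quote matching.
    """
    return shlex.split(line.replace("'", r"\'"))


# Without the escaping, shlex.split would raise on the apostrophe:
print(parse_invoke_line("a sailor's parrot -s 50 -S 42"))
# ['a', "sailor's", 'parrot', '-s', '50', '-S', '42']
```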
## v1.07 <small>(23 August 2022)</small>

- Image filenames will now never fill gaps in the sequence, but will be assigned
  the next higher name in the chosen directory. This ensures that the alphabetic
  and chronological sort orders are the same.
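The no-gap numbering rule amounts to "highest existing prefix plus one". A hypothetical sketch (`next_image_prefix` is an invented helper, not the actual implementation):

```python
import re


def next_image_prefix(existing) -> str:
    """Return the next 6-digit filename prefix for an output directory.

    Hypothetical sketch of the "never fill gaps" rule: take the highest
    numeric prefix present and add one, so alphabetic and chronological
    order agree even if earlier files were deleted.
    """
    numbers = [int(m.group(1)) for name in existing
               if (m := re.match(r"(\d{6})\.", name))]
    return f"{max(numbers, default=0) + 1:06d}"


# 000002 was deleted; the gap is not reused:
print(next_image_prefix(["000001.1863159593.png", "000003.2736230502.png"]))
# 000004
```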

---

## v1.06 <small>(23 August 2022)</small>

- Added weighted prompt support contributed by
  [xraxra](https://github.com/xraxra)
- Example of using weighted prompts to tweak a demonic figure contributed by
  [bmaltais](https://github.com/bmaltais)
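Assuming the `subprompt:weight` syntax visible in later examples (e.g. `dog:1,cat:2`), a weighted-prompt parser might look like the sketch below. This is purely illustrative and not the project's actual parsing code; the exact syntax and defaults are assumptions:

```python
def parse_weighted_prompt(prompt: str):
    """Split a prompt like "dog:1,cat:2" into (text, weight) pairs.

    Hypothetical sketch: comma-separated subprompts, each with an
    optional ":weight" suffix that defaults to 1.0.
    """
    parts = []
    for chunk in prompt.split(","):
        text, sep, weight = chunk.rpartition(":")
        if sep and weight.replace(".", "", 1).isdigit():
            parts.append((text.strip(), float(weight)))
        else:
            parts.append((chunk.strip(), 1.0))
    return parts


print(parse_weighted_prompt("dog:1,cat:2"))
# [('dog', 1.0), ('cat', 2.0)]
```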

---

## v1.05 <small>(22 August 2022 - after the drop)</small>

- Filenames now use the following formats:

        000010.95183149.png -- Two files produced by the same command (e.g. -n2),
        000010.26742632.png -- distinguished by a different seed.

        000011.455191342.01.png -- Two files produced by the same command using
        000011.455191342.02.png -- a batch size>1 (e.g. -b2). They have the same seed.

        000011.4160627868.grid#1-4.png -- a grid of four images (-g); the whole
        grid can be regenerated with the indicated key

- It should no longer be possible for one image to overwrite another
- You can use the "cd" and "pwd" commands at the invoke> prompt to set and
  retrieve the path of the output directory.
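The filename formats above can be reproduced by a small formatting helper. This is a hypothetical sketch for illustration; `image_filename` is an invented name, not the actual function:

```python
from typing import Optional, Tuple


def image_filename(counter: int, seed: int,
                   batch_index: Optional[int] = None,
                   grid_range: Optional[Tuple[int, int]] = None) -> str:
    """Build an output filename in the formats listed above.

    Hypothetical sketch: a 6-digit counter, the seed, then an optional
    two-digit batch index (-b) or a grid#first-last suffix (-g).
    """
    name = f"{counter:06d}.{seed}"
    if batch_index is not None:
        name += f".{batch_index:02d}"
    if grid_range is not None:
        name += f".grid#{grid_range[0]}-{grid_range[1]}"
    return name + ".png"


print(image_filename(10, 95183149))                       # 000010.95183149.png
print(image_filename(11, 455191342, batch_index=2))       # 000011.455191342.02.png
print(image_filename(11, 4160627868, grid_range=(1, 4)))  # 000011.4160627868.grid#1-4.png
```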

---

## v1.03 <small>(22 August 2022)</small>

- The original txt2img and img2img scripts from the CompViz repository have been
  moved into a subfolder named "orig_scripts", to reduce confusion.

---

## v1.02 <small>(21 August 2022)</small>

- A copy of the prompt and all of its switches and options is now stored in the
  corresponding image in a tEXt metadata field named "Dream". You can read the
  prompt using scripts/images2prompt.py, or an image editor that allows you to
  explore the full metadata. **Please run "conda env update" to load the k_lms
  dependencies!!**
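A "Dream" tEXt field can be read with nothing but the PNG chunk layout and the standard library. The sketch below fabricates a minimal 1x1 PNG carrying such a field, then walks the chunks to read it back; it is illustrative only (real files are written by invoke.py, and scripts/images2prompt.py is the supported reader):

```python
import struct
import zlib


def png_chunk(ctype: bytes, data: bytes) -> bytes:
    # PNG chunk: 4-byte big-endian length, type, data, CRC over type+data.
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))


def minimal_png_with_dream(prompt: str) -> bytes:
    """Build a 1x1 grayscale PNG carrying a "Dream" tEXt chunk (for demo)."""
    sig = b"\x89PNG\r\n\x1a\n"
    ihdr = png_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
    text = png_chunk(b"tEXt", b"Dream\x00" + prompt.encode("latin-1"))
    idat = png_chunk(b"IDAT", zlib.compress(b"\x00\x00"))  # filter byte + pixel
    iend = png_chunk(b"IEND", b"")
    return sig + ihdr + text + idat + iend


def read_dream(png: bytes):
    """Walk the PNG chunks and return the "Dream" tEXt payload, if any."""
    pos = 8  # skip the 8-byte PNG signature
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        data = png[pos + 8:pos + 8 + length]
        if ctype == b"tEXt" and data.startswith(b"Dream\x00"):
            return data[6:].decode("latin-1")  # skip keyword + null separator
        pos += 12 + length  # length field + type + data + CRC
    return None


png = minimal_png_with_dream("banana sushi -s 50 -S 42 -W 512 -H 512")
print(read_dream(png))  # banana sushi -s 50 -S 42 -W 512 -H 512
```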

---

## v1.01 <small>(21 August 2022)</small>

- added k_lms sampling. **Please run "conda env update" to load the k_lms
  dependencies!!**
- use half precision arithmetic by default, resulting in faster execution and
  lower memory requirements. Pass argument --full_precision to invoke.py to get
  slower but more accurate image generation.

---

## 000001.1863159593.png



banana sushi -s 50 -S 1863159593 -W 512 -H 512 -C 7.5 -A k_lms

## 000002.1151955949.png



banana sushi -s 50 -S 1151955949 -W 512 -H 512 -C 7.5 -A plms

## 000003.2736230502.png



banana sushi -s 50 -S 2736230502 -W 512 -H 512 -C 7.5 -A ddim

## 000004.42.png



banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms

## 000005.42.png



banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms

## 000006.478163327.png



banana sushi -s 50 -S 478163327 -W 640 -H 448 -C 7.5 -A k_lms

## 000007.2407640369.png



banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms -V 2407640369:0.1

## 000008.2772421987.png



banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms -V 2772421987:0.1

## 000009.3532317557.png



banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms -V 3532317557:0.1

## 000010.2028635318.png



banana sushi -s 50 -S 2028635318 -W 512 -H 512 -C 7.5 -A k_lms

## 000011.1111168647.png



pond with waterlillies -s 50 -S 1111168647 -W 512 -H 512 -C 7.5 -A k_lms

## 000012.1476370516.png



pond with waterlillies -s 50 -S 1476370516 -W 512 -H 512 -C 7.5 -A k_lms

## 000013.4281108706.png



banana sushi -s 50 -S 4281108706 -W 960 -H 960 -C 7.5 -A k_lms

## 000014.2396987386.png



old sea captain with crow on shoulder -s 50 -S 2396987386 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/Lincoln-and-Parrot-512.png -A k_lms -f 0.75

## 000015.1252923272.png



old sea captain with crow on shoulder -s 50 -S 1252923272 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/Lincoln-and-Parrot-512-transparent.png -A k_lms -f 0.75

## 000016.2633891320.png



old sea captain with crow on shoulder -s 50 -S 2633891320 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/Lincoln-and-Parrot-512.png -A plms -f 0.75

## 000017.1134411920.png



old sea captain with crow on shoulder -s 50 -S 1134411920 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/Lincoln-and-Parrot-512.png -A k_euler_a -f 0.75

## 000018.47.png



big red dog playing with cat -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms

## 000019.47.png



big red++++ dog playing with cat -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms

## 000020.47.png



big red dog playing with cat+++ -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms

## 000021.47.png



big (red dog).swap(tiger) playing with cat -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms

## 000022.47.png



dog:1,cat:2 -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms

## 000023.47.png



dog:2,cat:1 -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms

## 000024.1029061431.png



medusa with cobras -s 50 -S 1029061431 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/curly.png -A k_lms -f 0.75 -tm hair

## 000025.1284519352.png



bearded man -s 50 -S 1284519352 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/curly.png -A k_lms -f 0.75 -tm face

## curly.942491079.gfpgan.png



!fix ./docs/assets/preflight-checks/inputs/curly.png -s 50 -S 942491079 -W 512 -H 512 -C 7.5 -A k_lms -G 0.8 -ft gfpgan -U 2.0 0.75

## curly.942491079.outcrop.png



!fix ./docs/assets/preflight-checks/inputs/curly.png -s 50 -S 942491079 -W 512 -H 512 -C 7.5 -A k_lms -c top 64

## curly.942491079.outpaint.png



!fix ./docs/assets/preflight-checks/inputs/curly.png -s 50 -S 942491079 -W 512 -H 512 -C 7.5 -A k_lms -D top 64

## curly.942491079.outcrop-01.png



!fix ./docs/assets/preflight-checks/inputs/curly.png -s 50 -S 942491079 -W 512 -H 512 -C 7.5 -A k_lms -c top 64
outputs/preflight/000001.1863159593.png: banana sushi -s 50 -S 1863159593 -W 512 -H 512 -C 7.5 -A k_lms
outputs/preflight/000002.1151955949.png: banana sushi -s 50 -S 1151955949 -W 512 -H 512 -C 7.5 -A plms
outputs/preflight/000003.2736230502.png: banana sushi -s 50 -S 2736230502 -W 512 -H 512 -C 7.5 -A ddim
outputs/preflight/000004.42.png: banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms
outputs/preflight/000005.42.png: banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms
outputs/preflight/000006.478163327.png: banana sushi -s 50 -S 478163327 -W 640 -H 448 -C 7.5 -A k_lms
outputs/preflight/000007.2407640369.png: banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms -V 2407640369:0.1
outputs/preflight/000008.2772421987.png: banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms -V 2772421987:0.1
outputs/preflight/000009.3532317557.png: banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms -V 3532317557:0.1
outputs/preflight/000010.2028635318.png: banana sushi -s 50 -S 2028635318 -W 512 -H 512 -C 7.5 -A k_lms
outputs/preflight/000011.1111168647.png: pond with waterlillies -s 50 -S 1111168647 -W 512 -H 512 -C 7.5 -A k_lms
outputs/preflight/000012.1476370516.png: pond with waterlillies -s 50 -S 1476370516 -W 512 -H 512 -C 7.5 -A k_lms
outputs/preflight/000013.4281108706.png: banana sushi -s 50 -S 4281108706 -W 960 -H 960 -C 7.5 -A k_lms
outputs/preflight/000014.2396987386.png: old sea captain with crow on shoulder -s 50 -S 2396987386 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/Lincoln-and-Parrot-512.png -A k_lms -f 0.75
outputs/preflight/000015.1252923272.png: old sea captain with crow on shoulder -s 50 -S 1252923272 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/Lincoln-and-Parrot-512-transparent.png -A k_lms -f 0.75
outputs/preflight/000016.2633891320.png: old sea captain with crow on shoulder -s 50 -S 2633891320 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/Lincoln-and-Parrot-512.png -A plms -f 0.75
outputs/preflight/000017.1134411920.png: old sea captain with crow on shoulder -s 50 -S 1134411920 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/Lincoln-and-Parrot-512.png -A k_euler_a -f 0.75
outputs/preflight/000018.47.png: big red dog playing with cat -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms
outputs/preflight/000019.47.png: big red++++ dog playing with cat -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms
outputs/preflight/000020.47.png: big red dog playing with cat+++ -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms
outputs/preflight/000021.47.png: big (red dog).swap(tiger) playing with cat -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms
outputs/preflight/000022.47.png: dog:1,cat:2 -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms
outputs/preflight/000023.47.png: dog:2,cat:1 -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms
outputs/preflight/000024.1029061431.png: medusa with cobras -s 50 -S 1029061431 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/curly.png -A k_lms -f 0.75 -tm hair
outputs/preflight/000025.1284519352.png: bearded man -s 50 -S 1284519352 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/curly.png -A k_lms -f 0.75 -tm face
outputs/preflight/curly.942491079.gfpgan.png: !fix ./docs/assets/preflight-checks/inputs/curly.png -s 50 -S 942491079 -W 512 -H 512 -C 7.5 -A k_lms -G 0.8 -ft gfpgan -U 2.0 0.75
outputs/preflight/curly.942491079.outcrop.png: !fix ./docs/assets/preflight-checks/inputs/curly.png -s 50 -S 942491079 -W 512 -H 512 -C 7.5 -A k_lms -c top 64
outputs/preflight/curly.942491079.outpaint.png: !fix ./docs/assets/preflight-checks/inputs/curly.png -s 50 -S 942491079 -W 512 -H 512 -C 7.5 -A k_lms -D top 64
outputs/preflight/curly.942491079.outcrop-01.png: !fix ./docs/assets/preflight-checks/inputs/curly.png -s 50 -S 942491079 -W 512 -H 512 -C 7.5 -A k_lms -c top 64
# outputs/preflight/000001.1863159593.png

banana sushi -s 50 -S 1863159593 -W 512 -H 512 -C 7.5 -A k_lms

# outputs/preflight/000002.1151955949.png

banana sushi -s 50 -S 1151955949 -W 512 -H 512 -C 7.5 -A plms

# outputs/preflight/000003.2736230502.png

banana sushi -s 50 -S 2736230502 -W 512 -H 512 -C 7.5 -A ddim

# outputs/preflight/000004.42.png

banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms

# outputs/preflight/000005.42.png

banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms

# outputs/preflight/000006.478163327.png

banana sushi -s 50 -S 478163327 -W 640 -H 448 -C 7.5 -A k_lms

# outputs/preflight/000007.2407640369.png

banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms -V 2407640369:0.1

# outputs/preflight/000007.2772421987.png

banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms -V 2772421987:0.1

# outputs/preflight/000007.3532317557.png

banana sushi -s 50 -S 42 -W 512 -H 512 -C 7.5 -A k_lms -V 3532317557:0.1

# outputs/preflight/000008.2028635318.png

banana sushi -s 50 -S 2028635318 -W 512 -H 512 -C 7.5 -A k_lms

# outputs/preflight/000009.1111168647.png

pond with waterlillies -s 50 -S 1111168647 -W 512 -H 512 -C 7.5 -A k_lms

# outputs/preflight/000010.1476370516.png

pond with waterlillies -s 50 -S 1476370516 -W 512 -H 512 -C 7.5 -A k_lms --seamless

# outputs/preflight/000011.4281108706.png

banana sushi -s 50 -S 4281108706 -W 960 -H 960 -C 7.5 -A k_lms

# outputs/preflight/000012.2396987386.png

old sea captain with crow on shoulder -s 50 -S 2396987386 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/Lincoln-and-Parrot-512.png -A k_lms -f 0.75

# outputs/preflight/000013.1252923272.png

old sea captain with crow on shoulder -s 50 -S 1252923272 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/Lincoln-and-Parrot-512-transparent.png -A k_lms -f 0.75

# outputs/preflight/000014.2633891320.png

old sea captain with crow on shoulder -s 50 -S 2633891320 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/Lincoln-and-Parrot-512.png -A plms -f 0.75

# outputs/preflight/000015.1134411920.png

old sea captain with crow on shoulder -s 50 -S 1134411920 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/Lincoln-and-Parrot-512.png -A k_euler_a -f 0.75

# outputs/preflight/000016.42.png

big red dog playing with cat -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms

# outputs/preflight/000017.42.png

big red++++ dog playing with cat -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms

# outputs/preflight/000018.42.png

big red dog playing with cat+++ -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms

# outputs/preflight/000019.42.png

big (red dog).swap(tiger) playing with cat -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms

# outputs/preflight/000020.42.png

dog:1,cat:2 -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms

# outputs/preflight/000021.42.png

dog:2,cat:1 -s 50 -S 47 -W 512 -H 512 -C 7.5 -A k_lms

# outputs/preflight/000022.1029061431.png

medusa with cobras -s 50 -S 1029061431 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/curly.png -A k_lms -f 0.75 -tm hair

# outputs/preflight/000023.1284519352.png

bearded man -s 50 -S 1284519352 -W 512 -H 512 -C 7.5 -I docs/assets/preflight-checks/inputs/curly.png -A k_lms -f 0.75 -tm face

# outputs/preflight/000024.curly.hair.deselected.png

!mask -I docs/assets/preflight-checks/inputs/curly.png -tm hair

# outputs/preflight/curly.942491079.gfpgan.png

!fix ./docs/assets/preflight-checks/inputs/curly.png -U2 -G0.8

# outputs/preflight/curly.942491079.outcrop.png

!fix ./docs/assets/preflight-checks/inputs/curly.png -c top 64

# outputs/preflight/curly.942491079.outpaint.png

!fix ./docs/assets/preflight-checks/inputs/curly.png -D top 64

# outputs/preflight/curly.942491079.outcrop-01.png

!switch inpainting-1.5
!fix ./docs/assets/preflight-checks/inputs/curly.png -c top 64
BIN docs/assets/textual-inversion/ti-frontend.png (new binary file, 124 KiB)
@@ -1,45 +1,56 @@
|
|||||||
---
|
---
|
||||||
title: CLI
|
title: Command-Line Interface
|
||||||
hide:
|
|
||||||
- toc
|
|
||||||
---
|
---
|
||||||
|
|
||||||
# :material-bash: CLI
|
# :material-bash: CLI
|
||||||
|
|
||||||
## **Interactive Command Line Interface**
|
## **Interactive Command Line Interface**
|
||||||
|
|
||||||
The `invoke.py` script, located in `scripts/`, provides an interactive
|
The InvokeAI command line interface (CLI) provides scriptable access
|
||||||
interface to image generation similar to the "invoke mothership" bot that Stable
|
to InvokeAI's features. Some advanced features are only available
|
||||||
AI provided on its Discord server.
|
through the CLI, though they eventually find their way into the WebUI.
|
||||||
|
|
||||||
Unlike the `txt2img.py` and `img2img.py` scripts provided in the original
|
The CLI is accessible from the `invoke.sh`/`invoke.bat` launcher by
|
||||||
[CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion) source
|
selecting option (1). Alternatively, it can be launched directly from
|
||||||
code repository, the time-consuming initialization of the AI model
|
the command line by activating the InvokeAI environment and giving the
|
||||||
initialization only happens once. After that image generation from the
|
command:
|
||||||
command-line interface is very fast.
|
|
||||||
|
```bash
|
||||||
|
invokeai
|
||||||
|
```
|
||||||
|
|
||||||
|
After some startup messages, you will be presented with the `invoke> `
|
||||||
|
prompt. Here you can type prompts to generate images and issue other
|
||||||
|
commands to load and manipulate generative models. The CLI has a large
|
||||||
|
number of command-line options that control its behavior. To get a
|
||||||
|
concise summary of the options, call `invokeai` with the `--help` argument:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
invokeai --help
|
||||||
|
```
|
||||||
|
|
||||||
The script uses the readline library to allow for in-line editing, command
|
The script uses the readline library to allow for in-line editing, command
|
||||||
history (++up++ and ++down++), autocompletion, and more. To help keep track of
|
history (++up++ and ++down++), autocompletion, and more. To help keep track of
|
||||||
which prompts generated which images, the script writes a log file of image
|
which prompts generated which images, the script writes a log file of image
|
||||||
names and prompts to the selected output directory.
|
names and prompts to the selected output directory.
|
||||||
|
|
||||||
In addition, as of version 1.02, it also writes the prompt into the PNG file's
|
Here is a typical session:
|
||||||
metadata where it can be retrieved using `scripts/images2prompt.py`
|
|
||||||
|
|
||||||
The script is confirmed to work on Linux, Windows and Mac systems.
|
|
||||||
|
|
||||||
!!! note
|
|
||||||
|
|
||||||
This script runs from the command-line or can be used as a Web application. The Web GUI is
|
|
||||||
currently rudimentary, but a much better replacement is on its way.
|
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
(invokeai) ~/stable-diffusion$ python3 ./scripts/invoke.py
|
PS C:\Users\fred> invokeai
|
||||||
* Initializing, be patient...
|
* Initializing, be patient...
|
||||||
Loading model from models/ldm/text2img-large/model.ckpt
|
* Initializing, be patient...
|
||||||
(...more initialization messages...)
|
>> Initialization file /home/lstein/invokeai/invokeai.init found. Loading...
|
||||||
|
>> Internet connectivity is True
|
||||||
* Initialization done! Awaiting your command...
|
>> InvokeAI, version 2.3.0-rc5
|
||||||
|
>> InvokeAI runtime directory is "/home/lstein/invokeai"
|
||||||
|
>> GFPGAN Initialized
|
||||||
|
>> CodeFormer Initialized
|
||||||
|
>> ESRGAN Initialized
|
||||||
|
>> Using device_type cuda
|
||||||
|
>> xformers memory-efficient attention is available and enabled
|
||||||
|
(...more initialization messages...)
|
||||||
|
* Initialization done! Awaiting your command (-h for help, 'q' to quit)
|
||||||
invoke> ashley judd riding a camel -n2 -s150
|
invoke> ashley judd riding a camel -n2 -s150
|
||||||
Outputs:
|
Outputs:
|
||||||
outputs/img-samples/00009.png: "ashley judd riding a camel" -n2 -s150 -S 416354203
|
outputs/img-samples/00009.png: "ashley judd riding a camel" -n2 -s150 -S 416354203
|
||||||
@@ -49,33 +60,22 @@ invoke> "there's a fly in my soup" -n6 -g
|
|||||||
outputs/img-samples/00011.png: "there's a fly in my soup" -n6 -g -S 2685670268
|
outputs/img-samples/00011.png: "there's a fly in my soup" -n6 -g -S 2685670268
|
||||||
seeds for individual rows: [2685670268, 1216708065, 2335773498, 822223658, 714542046, 3395302430]
|
seeds for individual rows: [2685670268, 1216708065, 2335773498, 822223658, 714542046, 3395302430]
|
||||||
invoke> q
|
invoke> q
|
||||||
|
|
||||||
# this shows how to retrieve the prompt stored in the saved image's metadata
|
|
||||||
(invokeai) ~/stable-diffusion$ python ./scripts/images2prompt.py outputs/img_samples/*.png
|
|
||||||
00009.png: "ashley judd riding a camel" -s150 -S 416354203
|
|
||||||
00010.png: "ashley judd riding a camel" -s150 -S 1362479620
|
|
||||||
00011.png: "there's a fly in my soup" -n6 -g -S 2685670268
|
|
||||||
```
|
```
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
The `invoke>` prompt's arguments are pretty much identical to those used in the
|
|
||||||
Discord bot, except you don't need to type `!invoke` (it doesn't hurt if you do).
|
|
||||||
A significant change is that creation of individual images is now the default
|
|
||||||
unless `--grid` (`-g`) is given. A full list is given in
|
|
||||||
[List of prompt arguments](#list-of-prompt-arguments).
|
|
||||||
|
|
||||||
## Arguments
|
## Arguments
|
||||||
|
|
||||||
The script itself also recognizes a series of command-line switches that will
|
The script recognizes a series of command-line switches that will
|
||||||
change important global defaults, such as the directory for image outputs and
|
change important global defaults, such as the directory for image
|
||||||
the location of the model weight files.
|
outputs and the location of the model weight files.
|
||||||
|
|
||||||
### List of arguments recognized at the command line
|
### List of arguments recognized at the command line
|
||||||
|
|
||||||
These command-line arguments can be passed to `invoke.py` when you first run it
|
These command-line arguments can be passed to `invoke.py` when you first run it
|
||||||
from the Windows, Mac or Linux command line. Some set defaults that can be
|
from the Windows, Mac or Linux command line. Some set defaults that can be
|
||||||
overridden on a per-prompt basis (see [List of prompt arguments](#list-of-prompt-arguments). Others
|
overridden on a per-prompt basis (see
|
||||||
|
[List of prompt arguments](#list-of-prompt-arguments). Others
|
||||||
|
|
||||||
| Argument <img width="240" align="right"/> | Shortcut <img width="100" align="right"/> | Default <img width="320" align="right"/> | Description |
|
| Argument <img width="240" align="right"/> | Shortcut <img width="100" align="right"/> | Default <img width="320" align="right"/> | Description |
|
||||||
| ----------------------------------------- | ----------------------------------------- | ---------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
|
| ----------------------------------------- | ----------------------------------------- | ---------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
|
||||||
@@ -83,21 +83,28 @@ overridden on a per-prompt basis (see [List of prompt arguments](#list-of-prompt
|
|||||||
| `--outdir <path>` | `-o<path>` | `outputs/img_samples` | Location for generated images. |
|
| `--outdir <path>` | `-o<path>` | `outputs/img_samples` | Location for generated images. |
|
||||||
| `--prompt_as_dir` | `-p` | `False` | Name output directories using the prompt text. |
|
| `--prompt_as_dir` | `-p` | `False` | Name output directories using the prompt text. |
|
||||||
| `--from_file <path>` | | `None` | Read list of prompts from a file. Use `-` to read from standard input |
|
| `--from_file <path>` | | `None` | Read list of prompts from a file. Use `-` to read from standard input |
|
||||||
| `--model <modelname>` | | `stable-diffusion-1.4` | Loads model specified in configs/models.yaml. Currently one of "stable-diffusion-1.4" or "laion400m" |
|
| `--model <modelname>` | | `stable-diffusion-1.5` | Loads the initial model specified in configs/models.yaml. |
|
||||||
| `--full_precision` | `-F` | `False` | Run in slower full-precision mode. Needed for Macintosh M1/M2 hardware and some older video cards. |
|
| `--ckpt_convert ` | | `False` | If provided both .ckpt and .safetensors files will be auto-converted into diffusers format in memory |
|
||||||
| `--png_compression <0-9>` | `-z<0-9>` | 6 | Select level of compression for output files, from 0 (no compression) to 9 (max compression) |
|
| `--autoconvert <path>` | | `None` | On startup, scan the indicated directory for new .ckpt/.safetensor files and automatically convert and import them |
|
||||||
| `--safety-checker` | | False | Activate safety checker for NSFW and other potentially disturbing imagery |
|
| `--precision` | | `fp16` | Provide `fp32` for full precision mode, `fp16` for half-precision. `fp32` needed for Macintoshes and some NVidia cards. |
|
||||||
|
| `--png_compression <0-9>` | `-z<0-9>` | `6` | Select level of compression for output files, from 0 (no compression) to 9 (max compression) |
|
||||||
|
| `--safety-checker` | | `False` | Activate safety checker for NSFW and other potentially disturbing imagery |
|
||||||
|
| `--patchmatch`, `--no-patchmatch` | | `--patchmatch` | Load/Don't load the PatchMatch inpainting extension |
|
||||||
|
| `--xformers`, `--no-xformers` | | `--xformers` | Load/Don't load the Xformers memory-efficient attention module (CUDA only) |
|
||||||
| `--web` | | `False` | Start in web server mode |
|
| `--web` | | `False` | Start in web server mode |
|
||||||
| `--host <ip addr>` | | `localhost` | Which network interface web server should listen on. Set to 0.0.0.0 to listen on any. |
|
| `--host <ip addr>` | | `localhost` | Which network interface web server should listen on. Set to 0.0.0.0 to listen on any. |
|
||||||
| `--port <port>` | | `9090` | Which port web server should listen for requests on. |
|
| `--port <port>` | | `9090` | Which port web server should listen for requests on. |
|
||||||
| `--config <path>` | | `configs/models.yaml` | Configuration file for models and their weights. |
|
| `--config <path>` | | `configs/models.yaml` | Configuration file for models and their weights. |
|
||||||
| `--iterations <int>` | `-n<int>` | `1` | How many images to generate per prompt. |
|
| `--iterations <int>` | `-n<int>` | `1` | How many images to generate per prompt. |
|
||||||
|
| `--width <int>` | `-W<int>` | `512` | Width of generated image |
|
||||||
|
| `--height <int>` | `-H<int>` | `512` | Height of generated image |
| `--steps <int>` | `-s<int>` | `50` | How many steps of refinement to apply |
|
||||||
|
| `--strength <float>` | `-s<float>` | `0.75` | For img2img: how hard to try to match the prompt to the initial image. Ranges from 0.0-0.99, with higher values replacing the initial image completely. |
|
||||||
|
| `--fit` | `-F` | `False` | For img2img: scale the init image to fit into the specified -H and -W dimensions |
|
||||||
| `--grid` | `-g` | `False` | Save all image series as a grid rather than individually. |
|
| `--grid` | `-g` | `False` | Save all image series as a grid rather than individually. |
|
||||||
| `--sampler <sampler>` | `-A<sampler>` | `k_lms` | Sampler to use. Use `-h` to get list of available samplers. |
|
| `--sampler <sampler>` | `-A<sampler>` | `k_lms` | Sampler to use. Use `-h` to get list of available samplers. |
|
||||||
| `--seamless` | | `False` | Create interesting effects by tiling elements of the image. |
|
| `--seamless` | | `False` | Create interesting effects by tiling elements of the image. |
|
||||||
| `--embedding_path <path>` | | `None` | Path to pre-trained embedding manager checkpoints, for custom models |
|
| `--embedding_path <path>` | | `None` | Path to pre-trained embedding manager checkpoints, for custom models |
|
||||||
| `--gfpgan_dir` | | `src/gfpgan` | Path to where GFPGAN is installed. |
|
| `--gfpgan_model_path` | | `experiments/pretrained_models/GFPGANv1.4.pth` | Path to GFPGAN model file. |
|
||||||
| `--gfpgan_model_path` | | `experiments/pretrained_models/GFPGANv1.4.pth` | Path to GFPGAN model file, relative to `--gfpgan_dir`. |
|
|
||||||
| `--free_gpu_mem` | | `False` | Free GPU memory after sampling, to allow image decoding and saving in low VRAM conditions |
|
| `--free_gpu_mem` | | `False` | Free GPU memory after sampling, to allow image decoding and saving in low VRAM conditions |
|
||||||
| `--precision` | | `auto` | Set model precision, default is selected by device. Options: auto, float32, float16, autocast |
|
| `--precision` | | `auto` | Set model precision, default is selected by device. Options: auto, float32, float16, autocast |
|
||||||
|
|
||||||
@@ -107,7 +114,8 @@ overridden on a per-prompt basis (see [List of prompt arguments](#list-of-prompt
|
|||||||
|
|
||||||
| Argument | Shortcut | Default | Description |
|
| Argument | Shortcut | Default | Description |
|
||||||
|--------------------|------------|---------------------|--------------|
|
|--------------------|------------|---------------------|--------------|
|
||||||
| `--weights <path>` | | `None` | Pth to weights file; use `--model stable-diffusion-1.4` instead |
|
| `--full_precision` | | `False` | Same as `--precision=fp32`|
|
||||||
|
| `--weights <path>` | | `None` | Path to weights file; use `--model stable-diffusion-1.4` instead |
|
||||||
| `--laion400m` | `-l` | `False` | Use older LAION400m weights; use `--model=laion400m` instead |
|
| `--laion400m` | `-l` | `False` | Use older LAION400m weights; use `--model=laion400m` instead |
|
||||||
|
|
||||||
</div>
|
</div>
|
||||||
@@ -120,11 +128,48 @@ overridden on a per-prompt basis (see [List of prompt arguments](#list-of-prompt
|
|||||||
You can either double your slashes (ick): `C:\\path\\to\\my\\file`, or
|
You can either double your slashes (ick): `C:\\path\\to\\my\\file`, or
|
||||||
use Linux/Mac style forward slashes (better): `C:/path/to/my/file`.
|
use Linux/Mac style forward slashes (better): `C:/path/to/my/file`.
|
||||||
|
|
||||||
|
## The .invokeai initialization file
|
||||||
|
|
||||||
|
To start up invoke.py with your preferred settings, place your desired
|
||||||
|
startup options in a file in your home directory named `.invokeai`. The
|
||||||
|
file should contain the startup options as you would type them on the
|
||||||
|
command line (`--steps=10 --grid`), one argument per line, or a
|
||||||
|
mixture of both using any of the accepted command switch formats:
|
||||||
|
|
||||||
|
!!! example "my unmodified initialization file"
|
||||||
|
|
||||||
|
```bash title="~/.invokeai" linenums="1"
|
||||||
|
# InvokeAI initialization file
|
||||||
|
# This is the InvokeAI initialization file, which contains command-line default values.
|
||||||
|
# Feel free to edit. If anything goes wrong, you can re-initialize this file by deleting
|
||||||
|
# or renaming it and then running invokeai-configure again.
|
||||||
|
|
||||||
|
# The --root option below points to the folder in which InvokeAI stores its models, configs and outputs.
|
||||||
|
--root="/Users/mauwii/invokeai"
|
||||||
|
|
||||||
|
# the --outdir option controls the default location of image files.
|
||||||
|
--outdir="/Users/mauwii/invokeai/outputs"
|
||||||
|
|
||||||
|
# You may place other frequently-used startup commands here, one or more per line.
|
||||||
|
# Examples:
|
||||||
|
# --web --host=0.0.0.0
|
||||||
|
# --steps=20
|
||||||
|
# -Ak_euler_a -C10.0
|
||||||
|
```
|
||||||
|
|
||||||
|
!!! note
|
||||||
|
|
||||||
|
The initialization file accepts only command-line startup arguments.
|
||||||
|
There are additional arguments that you can provide on the `invoke>` command
|
||||||
|
line (such as `-n` or `--iterations`) that cannot be entered into this file.
|
||||||
|
Also watch for blank lines at the end of the file, which will cause
|
||||||
|
an arguments error at startup time.
|
||||||
|
|
||||||
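The rules above (one argument per line or several per line, `#` comments, no trailing blank lines) can be sketched as a small parser. This is an illustrative sketch of how such a file could be folded into an argument list, not InvokeAI's actual loader; the `read_init_file` helper and the sample text are hypothetical.

```python
import shlex

def read_init_file(text: str) -> list[str]:
    """Collect startup switches from a .invokeai-style file.

    Comment lines (starting with '#') and blank lines are skipped;
    every other line is split into shell-style words, so both the
    one-argument-per-line and the mixed forms work.
    """
    args: list[str] = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        args.extend(shlex.split(line))
    return args

sample = """
# InvokeAI initialization file
--outdir="/home/me/invokeai/outputs"
--steps=20 --grid
"""
print(read_init_file(sample))
# ['--outdir=/home/me/invokeai/outputs', '--steps=20', '--grid']
```

Because `shlex.split` strips shell-style quoting, quoted paths in the file come through as single arguments, just as they would on a real command line.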
## List of prompt arguments
|
## List of prompt arguments
|
||||||
|
|
||||||
After the invoke.py script initializes, it will present you with a
|
After the invoke.py script initializes, it will present you with an `invoke>`
|
||||||
`invoke>` prompt. Here you can enter information to generate images
|
prompt. Here you can enter information to generate images from text
|
||||||
from text ([txt2img](#txt2img)), to embellish an existing image or sketch
|
([txt2img](#txt2img)), to embellish an existing image or sketch
|
||||||
([img2img](#img2img)), or to selectively alter chosen regions of the image
|
([img2img](#img2img)), or to selectively alter chosen regions of the image
|
||||||
([inpainting](#inpainting)).
|
([inpainting](#inpainting)).
|
||||||
|
|
||||||
@@ -141,60 +186,61 @@ from text ([txt2img](#txt2img)), to embellish an existing image or sketch
|
|||||||
|
|
||||||
Here are the `invoke>` commands that apply to txt2img:
|
Here are the `invoke>` commands that apply to txt2img:
|
||||||
|
|
||||||
| Argument <img width="680" align="right"/> | Shortcut <img width="420" align="right"/> | Default <img width="480" align="right"/> | Description |
|
| Argument <img width="680" align="right"/> | Shortcut <img width="420" align="right"/> | Default <img width="480" align="right"/> | Description |
|
||||||
|--------------------|------------|---------------------|--------------|
|
| ----------------------------------------- | ----------------------------------------- | ---------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
|
||||||
| "my prompt" | | | Text prompt to use. The quotation marks are optional. |
|
| "my prompt" | | | Text prompt to use. The quotation marks are optional. |
|
||||||
| --width <int> | -W<int> | 512 | Width of generated image |
|
| `--width <int>` | `-W<int>` | `512` | Width of generated image |
|
||||||
| --height <int> | -H<int> | 512 | Height of generated image |
|
| `--height <int>` | `-H<int>` | `512` | Height of generated image |
|
||||||
| --iterations <int> | -n<int> | 1 | How many images to generate from this prompt |
|
| `--iterations <int>` | `-n<int>` | `1` | How many images to generate from this prompt |
|
||||||
| --steps <int> | -s<int> | 50 | How many steps of refinement to apply |
|
| `--steps <int>` | `-s<int>` | `50` | How many steps of refinement to apply |
|
||||||
| --cfg_scale <float>| -C<float> | 7.5 | How hard to try to match the prompt to the generated image; any number greater than 1.0 works, but the useful range is roughly 5.0 to 20.0 |
|
| `--cfg_scale <float>` | `-C<float>` | `7.5` | How hard to try to match the prompt to the generated image; any number greater than 1.0 works, but the useful range is roughly 5.0 to 20.0 |
|
||||||
| --seed <int> | -S<int> | None | Set the random seed for the next series of images. This can be used to recreate an image generated previously.|
|
| `--seed <int>` | `-S<int>` | `None` | Set the random seed for the next series of images. This can be used to recreate an image generated previously. |
|
||||||
| --sampler <sampler>| -A<sampler>| k_lms | Sampler to use. Use -h to get list of available samplers. |
|
| `--sampler <sampler>` | `-A<sampler>` | `k_lms` | Sampler to use. Use -h to get list of available samplers. |
|
||||||
| --karras_max <int> | | 29 | When using k_* samplers, set the maximum number of steps before shifting from using the Karras noise schedule (good for low step counts) to the LatentDiffusion noise schedule (good for high step counts) This value is sticky. [29] |
|
| `--karras_max <int>` | | `29` | When using k\_\* samplers, set the maximum number of steps before shifting from using the Karras noise schedule (good for low step counts) to the LatentDiffusion noise schedule (good for high step counts) This value is sticky. [29] |
|
||||||
| --hires_fix | | | Larger images often have duplication artefacts. This option suppresses duplicates by generating the image at low res, and then using img2img to increase the resolution |
|
| `--hires_fix` | | | Larger images often have duplication artefacts. This option suppresses duplicates by generating the image at low res, and then using img2img to increase the resolution |
|
||||||
| --png_compression <0-9> | -z<0-9> | 6 | Select level of compression for output files, from 0 (no compression) to 9 (max compression) |
|
| `--png_compression <0-9>` | `-z<0-9>` | `6` | Select level of compression for output files, from 0 (no compression) to 9 (max compression) |
|
||||||
| --grid | -g | False | Turn on grid mode to return a single image combining all the images generated by this prompt |
|
| `--grid` | `-g` | `False` | Turn on grid mode to return a single image combining all the images generated by this prompt |
|
||||||
| --individual | -i | True | Turn off grid mode (deprecated; leave off --grid instead) |
|
| `--individual` | `-i` | `True` | Turn off grid mode (deprecated; leave off --grid instead) |
|
||||||
| --outdir <path> | -o<path> | outputs/img_samples | Temporarily change the location of these images |
|
| `--outdir <path>` | `-o<path>` | `outputs/img_samples` | Temporarily change the location of these images |
|
||||||
| --seamless | | False | Activate seamless tiling for interesting effects |
|
| `--seamless` | | `False` | Activate seamless tiling for interesting effects |
|
||||||
| --seamless_axes | | x,y | Specify which axes to use circular convolution on. |
|
| `--seamless_axes` | | `x,y` | Specify which axes to use circular convolution on. |
|
||||||
| --log_tokenization | -t | False | Display a color-coded list of the parsed tokens derived from the prompt |
|
| `--log_tokenization` | `-t` | `False` | Display a color-coded list of the parsed tokens derived from the prompt |
|
||||||
| --skip_normalization| -x | False | Weighted subprompts will not be normalized. See [Weighted Prompts](./OTHER.md#weighted-prompts) |
|
| `--skip_normalization` | `-x` | `False` | Weighted subprompts will not be normalized. See [Weighted Prompts](./OTHER.md#weighted-prompts) |
|
||||||
| --upscale <int> <float> | -U <int> <float> | -U 1 0.75| Upscale image by magnification factor (2, 4), and set strength of upscaling (0.0-1.0). If strength not set, will default to 0.75. |
|
| `--upscale <int> <float>` | `-U <int> <float>` | `-U 1 0.75` | Upscale image by magnification factor (2, 4), and set strength of upscaling (0.0-1.0). If strength not set, will default to 0.75. |
|
||||||
| --facetool_strength <float> | -G <float> | -G0 | Fix faces (defaults to using the GFPGAN algorithm); argument indicates how hard the algorithm should try (0.0-1.0) |
|
| `--facetool_strength <float>` | `-G <float> ` | `-G0` | Fix faces (defaults to using the GFPGAN algorithm); argument indicates how hard the algorithm should try (0.0-1.0) |
|
||||||
| --facetool <name> | -ft <name> | -ft gfpgan | Select face restoration algorithm to use: gfpgan, codeformer |
|
| `--facetool <name>` | `-ft <name>` | `-ft gfpgan` | Select face restoration algorithm to use: gfpgan, codeformer |
|
||||||
| --codeformer_fidelity | -cf <float> | 0.75 | Used along with CodeFormer. Takes values between 0 and 1. 0 produces high quality but low accuracy. 1 produces high accuracy but low quality |
|
| `--codeformer_fidelity` | `-cf <float>` | `0.75` | Used along with CodeFormer. Takes values between 0 and 1. 0 produces high quality but low accuracy. 1 produces high accuracy but low quality |
|
||||||
| --save_original | -save_orig| False | When upscaling or fixing faces, this will cause the original image to be saved rather than replaced. |
|
| `--save_original` | `-save_orig` | `False` | When upscaling or fixing faces, this will cause the original image to be saved rather than replaced. |
|
||||||
| --variation <float> |-v<float>| 0.0 | Add a bit of noise (0.0=none, 1.0=high) to the image in order to generate a series of variations. Usually used in combination with -S<seed> and -n<int> to generate a series of riffs on a starting image. See [Variations](./VARIATIONS.md). |
|
| `--variation <float>` | `-v<float>` | `0.0` | Add a bit of noise (0.0=none, 1.0=high) to the image in order to generate a series of variations. Usually used in combination with `-S<seed>` and `-n<int>` to generate a series of riffs on a starting image. See [Variations](./VARIATIONS.md). |
|
||||||
| --with_variations <pattern> | | None | Combine two or more variations. See [Variations](./VARIATIONS.md) for how to use this. |
|
| `--with_variations <pattern>` | | `None` | Combine two or more variations. See [Variations](./VARIATIONS.md) for how to use this. |
|
||||||
| --save_intermediates <n> | | None | Save the image from every nth step into an "intermediates" folder inside the output directory |
|
| `--save_intermediates <n>` | | `None` | Save the image from every nth step into an "intermediates" folder inside the output directory |
|
||||||
|
|
||||||
Note that the width and height of the image must be multiples of
|
!!! note
|
||||||
64. You can provide different values, but they will be rounded down to
|
|
||||||
the nearest multiple of 64.
|
|
||||||
|
|
||||||
|
the width and height of the image must be multiples of 64. You can
|
||||||
|
provide different values, but they will be rounded down to the nearest multiple
|
||||||
|
of 64.
|
||||||
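The rounding-down rule in the note above amounts to simple integer arithmetic; this sketch (not the actual InvokeAI code) shows the effect on a requested dimension:

```python
def round_down_to_64(pixels: int) -> int:
    """Round a requested width or height down to the nearest multiple of 64."""
    return (pixels // 64) * 64

# A requested width of 600 is generated at 576; 512 is already a multiple.
print(round_down_to_64(600))  # 576
print(round_down_to_64(512))  # 512
```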
|
|
||||||
### This is an example of img2img:
|
!!! example "This is an example of img2img"
|
||||||
|
|
||||||
~~~~
|
```bash
|
||||||
invoke> waterfall and rainbow -I./vacation-photo.png -W640 -H480 --fit
|
invoke> waterfall and rainbow -I./vacation-photo.png -W640 -H480 --fit
|
||||||
~~~~
|
```
|
||||||
|
|
||||||
This will modify the indicated vacation photograph by making it more
|
This will modify the indicated vacation photograph by making it more like the
|
||||||
like the prompt. Results will vary greatly depending on what is in the
|
prompt. Results will vary greatly depending on what is in the image. We also ask
|
||||||
image. We also ask to --fit the image into a box no bigger than
|
to --fit the image into a box no bigger than 640x480. Otherwise the image size
|
||||||
640x480. Otherwise the image size will be identical to the provided
|
will be identical to the provided photo and you may run out of memory if it is
|
||||||
photo and you may run out of memory if it is large.
|
large.
|
||||||
|
|
||||||
In addition to the command-line options recognized by txt2img, img2img
|
In addition to the command-line options recognized by txt2img, img2img accepts
|
||||||
accepts additional options:
|
additional options:
|
||||||
|
|
||||||
| Argument <img width="160" align="right"/> | Shortcut | Default | Description |
|
| Argument <img width="160" align="right"/> | Shortcut | Default | Description |
|
||||||
|----------------------|-------------|-----------------|--------------|
|
| ----------------------------------------- | ----------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
|
||||||
| `--init_img <path>` | `-I<path>` | `None` | Path to the initialization image |
|
| `--init_img <path>` | `-I<path>` | `None` | Path to the initialization image |
|
||||||
| `--fit` | `-F` | `False` | Scale the image to fit into the specified -H and -W dimensions |
|
| `--fit` | `-F` | `False` | Scale the image to fit into the specified -H and -W dimensions |
|
||||||
| `--strength <float>` | `-s<float>` | `0.75` | How hard to try to match the prompt to the initial image. Ranges from 0.0-0.99, with higher values replacing the initial image completely.|
|
| `--strength <float>` | `-s<float>` | `0.75` | How hard to try to match the prompt to the initial image. Ranges from 0.0-0.99, with higher values replacing the initial image completely. |
|
||||||
|
|
||||||
### inpainting
|
### inpainting
|
||||||
|
|
||||||
@@ -211,73 +257,72 @@ accepts additional options:
|
|||||||
the pixels underneath when you create the transparent areas. See
|
the pixels underneath when you create the transparent areas. See
|
||||||
[Inpainting](./INPAINTING.md) for details.
|
[Inpainting](./INPAINTING.md) for details.
|
||||||
|
|
||||||
inpainting accepts all the arguments used for txt2img and img2img, as
|
inpainting accepts all the arguments used for txt2img and img2img, as well as
|
||||||
well as the --mask (-M) and --text_mask (-tm) arguments:
|
the --mask (-M) and --text_mask (-tm) arguments:
|
||||||
|
|
||||||
| Argument <img width="100" align="right"/> | Shortcut | Default | Description |
|
| Argument <img width="100" align="right"/> | Shortcut | Default | Description |
|
||||||
|--------------------|------------|---------------------|--------------|
|
| ----------------------------------------- | ------------------------ | ------- | ------------------------------------------------------------------------------------------------ |
|
||||||
| `--init_mask <path>` | `-M<path>` | `None` |Path to an image the same size as the initial_image, with areas for inpainting made transparent.|
|
| `--init_mask <path>` | `-M<path>` | `None` | Path to an image the same size as the initial_image, with areas for inpainting made transparent. |
|
||||||
| `--invert_mask ` | | False |If true, invert the mask so that transparent areas are opaque and vice versa.|
|
| `--invert_mask ` | | False | If true, invert the mask so that transparent areas are opaque and vice versa. |
|
||||||
| `--text_mask <prompt> [<float>]` | `-tm <prompt> [<float>]` | <none> | Create a mask from a text prompt describing part of the image|
|
| `--text_mask <prompt> [<float>]` | `-tm <prompt> [<float>]` | <none> | Create a mask from a text prompt describing part of the image |
|
||||||
|
|
||||||
The mask may either be an image with transparent areas, in which case the
inpainting will occur in the transparent areas only, or a black and white image,
in which case all black areas will be painted into.

`--text_mask` (short form `-tm`) is a way to generate a mask using a text
description of the part of the image to replace. For example, if you have an
image of a breakfast plate with a bagel, toast and scrambled eggs, you can
selectively mask the bagel and replace it with a piece of cake this way:
```bash
invoke> a piece of cake -I /path/to/breakfast.png -tm bagel
```
The algorithm uses [clipseg](https://github.com/timojl/clipseg) to classify
different regions of the image. The classifier puts out a confidence score for
each region it identifies. Generally regions that score above 0.5 are reliable,
but if you are getting too much or too little masking you can adjust the
threshold down (to get more mask) or up (to get less). In this example, by
passing `-tm` a higher value, we are insisting on a more stringent
classification.
```bash
invoke> a piece of cake -I /path/to/breakfast.png -tm bagel 0.6
```
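To build intuition for how the threshold behaves, here is a minimal Python sketch (not InvokeAI's actual code; the confidence values are made up) of how a per-pixel confidence map becomes a binary mask. Raising the threshold masks fewer pixels, which is why a higher `-tm` value gives a more stringent classification:

```python
def mask_from_scores(scores, threshold=0.5):
    """Binarize a 2D grid of per-pixel confidence scores into a mask."""
    return [[score >= threshold for score in row] for row in scores]

# A toy 2x3 "confidence map" for the region matching the prompt.
scores = [
    [0.9, 0.55, 0.2],
    [0.7, 0.45, 0.1],
]

loose = mask_from_scores(scores, threshold=0.5)   # more pixels masked
strict = mask_from_scores(scores, threshold=0.6)  # fewer pixels masked
print(sum(map(sum, loose)), sum(map(sum, strict)))  # 3 2
```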
### Custom Styles and Subjects

You can load and use hundreds of community-contributed Textual Inversion models
just by typing the appropriate trigger phrase. Please see the
[Concepts Library](CONCEPTS.md) for more details.

## Other Commands

The CLI offers a number of commands that begin with `!`.

### Postprocessing images
To postprocess a file using face restoration or upscaling, use the `!fix`
command.

#### `!fix`

This command runs a post-processor on a previously-generated image. It takes a
PNG filename or path and applies your choice of the `-U`, `-G`, or `--embiggen`
switches in order to fix faces or upscale. If you provide a filename, the script
will look for it in the current output directory. Otherwise you can provide a
full or partial path to the desired file.

Some examples:
!!! example "Upscale to 4X its original size and fix faces using codeformer"

    ```bash
    invoke> !fix 0000045.4829112.png -G1 -U4 -ft codeformer
    ```

!!! example "Use the GFPGAN algorithm to fix faces, then upscale to 3X using --embiggen"

    ```bash
    invoke> !fix 0000045.4829112.png -G0.8 -ft gfpgan
    >> GFPGAN - Restoring Faces for image seed:4829112
    Outputs:
    [1] outputs/img-samples/000017.4829112.gfpgan-00.png: !fix "outputs/img-samples/0000045.4829112.png" -s 50 -S -W 512 -H 512 -C 7.5 -A k_lms -G 0.8
    ```
#### `!mask`

This command takes an image and a text prompt, and uses the `clipseg` algorithm
to automatically generate a mask of the area that matches the text prompt. It is
useful for debugging the text masking process prior to inpainting with the
`--text_mask` argument. See [INPAINTING.md](INPAINTING.md) for details.
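A hypothetical `!mask` invocation might look like this (the image path and threshold are illustrative, following the same `-tm <prompt> [<float>]` syntax used by `--text_mask`):

```bash
invoke> !mask /path/to/breakfast.png -tm bagel 0.5
```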
### Model selection and importation

The CLI allows you to add new models on the fly, as well as to switch among them
rapidly without leaving the script. There are several different model formats,
each described in the
[Model Installation Guide](../installation/050_INSTALLING_MODELS.md).
#### `!models`

This prints out a list of the models defined in `config/models.yaml`. The active
model is bold-faced.

Example:
<pre>
inpainting-1.5               not loaded  Stable Diffusion inpainting model
<b>stable-diffusion-1.5          active  Stable Diffusion v1.5</b>
waifu-diffusion              not loaded  Waifu Diffusion v1.4
</pre>

#### `!switch <model>`

This quickly switches from one model to another without leaving the CLI script.
`invoke.py` uses a memory caching system; once a model has been loaded,
switching back and forth is quick. The following example shows this in action.
Note how the second column of the `!models` table changes to `cached` after a
model is first loaded, and that the long initialization step is not needed when
loading a cached model.

<pre>
invoke> !models
laion400m                    not loaded  <no description>
<b>stable-diffusion-1.4             active  Stable Diffusion v1.4</b>
waifu-diffusion              not loaded  Waifu Diffusion v1.3

invoke> !switch waifu-diffusion
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
   | LatentDiffusion: Running in eps-prediction mode
   | DiffusionWrapper has 859.52 M params.
   | Making attention of type 'vanilla' with 512 in_channels
   | Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
   | Making attention of type 'vanilla' with 512 in_channels
   | Using faster float16 precision
>> Model loaded in 18.24s
>> Max VRAM used to load the model: 2.17G
>> Current VRAM usage: 2.17G
>> Setting Sampler to k_lms

invoke> !models
laion400m                    not loaded  <no description>
stable-diffusion-1.4             cached  Stable Diffusion v1.4
<b>waifu-diffusion                  active  Waifu Diffusion v1.3</b>

invoke> !switch stable-diffusion-1.4
>> Caching model waifu-diffusion in system RAM
>> Retrieving model stable-diffusion-1.4 from system RAM cache
>> Setting Sampler to k_lms

invoke> !models
laion400m                    not loaded  <no description>
<b>stable-diffusion-1.4             active  Stable Diffusion v1.4</b>
waifu-diffusion                  cached  Waifu Diffusion v1.3
</pre>
#### `!import_model <hugging_face_repo_ID>`

This imports and installs a `diffusers`-style model that is stored on the
[HuggingFace Web Site](https://huggingface.co). You can look up any
[Stable Diffusion diffusers model](https://huggingface.co/models?library=diffusers)
and install it with a command like the following:

```bash
!import_model prompthero/openjourney
```

#### `!import_model <path/to/diffusers/directory>`

If you have a copy of a `diffusers`-style model saved to disk, you can import it
by passing the path to the model's top-level directory.

#### `!import_model <url>`

For a `.ckpt` or `.safetensors` file, if you have a direct download URL for the
file, you can provide it to `!import_model` and the file will be downloaded and
installed for you.

#### `!import_model <path/to/model/weights.ckpt>`

This command imports a new model weights file into InvokeAI, makes it available
for image generation within the script, and writes out the configuration for the
model into `config/models.yaml` for use in subsequent sessions.

Provide `!import_model` with the path to a weights file ending in `.ckpt`. If
you type a partial path and press tab, the CLI will autocomplete. Although it
will also autocomplete to `.vae` files, these are not currently supported (but
will be soon).

When you hit return, the CLI will prompt you to fill in additional information
about the model, including the short name you wish to use for it with the
`!switch` command, a brief description of the model, the default image width and
height to use with this model, and the model's configuration file. The latter
three fields are automatically filled with reasonable defaults. In the example
below, the bold-faced text shows what the user typed in with the exception of
the width, height and configuration file paths, which were filled in
automatically.

Example:

<pre>
invoke> <b>!import_model models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt</b>
>> Model import in process. Please enter the values needed to configure this model:

Name for this model: <b>waifu-diffusion</b>
Description of this model: <b>Waifu Diffusion v1.3</b>
Configuration file for this model: <b>configs/stable-diffusion/v1-inference.yaml</b>
Default image width: <b>512</b>
Default image height: <b>512</b>
>> New configuration:
waifu-diffusion:
  config: configs/stable-diffusion/v1-inference.yaml
  description: Waifu Diffusion v1.3
  height: 512
  weights: models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
  width: 512
OK to import [n]? <b>y</b>
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
   | LatentDiffusion: Running in eps-prediction mode
   | DiffusionWrapper has 859.52 M params.
   | Making attention of type 'vanilla' with 512 in_channels
   | Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
   | Making attention of type 'vanilla' with 512 in_channels
   | Using faster float16 precision
invoke>
</pre>

#### `!import_model <path/to/directory_of_models>`

If you provide the path of a directory that contains one or more `.ckpt` or
`.safetensors` files, the CLI will scan the directory and interactively offer to
import the models it finds there. Also see the `--autoconvert` command-line
option.

#### `!edit_model <name_of_model>`

The `!edit_model` command can be used to modify a model that is already defined
in `config/models.yaml`. Call it with the short name of the model you wish to
modify, and it will allow you to modify the model's `description`, `weights` and
other fields.

Example:

<pre>
invoke> <b>!edit_model waifu-diffusion</b>
>> Editing model waifu-diffusion from configuration file ./configs/models.yaml
...
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
...
</pre>
### History processing

The CLI provides a series of convenient commands for reviewing previous actions,
retrieving them, modifying them, and re-running them.

#### `!history`

The invoke script keeps track of all the commands you issue during a session,
allowing you to re-run them. On Mac and Linux systems, it also writes the
command-line history out to disk, giving you access to the most recent 1000
commands issued.

The `!history` command will return a numbered list of all the commands issued
during the session (Windows), or the most recent 1000 commands (Mac|Linux). You
can then repeat a command by using the command `!NNN`, where "NNN" is the
history line number. For example:

!!! example ""

    ```bash
    invoke> !history
    ...
    [14] happy woman sitting under tree wearing broad hat and flowing garment
    [15] beautiful woman sitting under tree wearing broad hat and flowing garment
    [18] beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6
    [20] watercolor of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
    [21] surrealist painting of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
    ...
    invoke> !20
    invoke> watercolor of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
    ```
#### `!fetch`

This command retrieves the generation parameters from a previously generated
image and either loads them into the command line (Linux|Mac), or prints them
out in a comment for copy-and-paste (Windows). You may provide either the name
of a file in the current output directory, or a full file path. You can also
specify the path to a folder of PNG images along with the wildcard `*.png`, to
retrieve the dream command used to generate each image and save the commands to
a text file (such as `commands.txt`) for further processing.

!!! example "load the generation command for a single png file"

    ```bash
    invoke> !fetch 0000015.8929913.png
    # the script returns the next line, ready for editing and running:
    invoke> a fantastic alien landscape -W 576 -H 512 -s 60 -A plms -C 7.5
    ```

!!! example "fetch the generation commands from a batch of files and store them into `selected.txt`"

    ```bash
    invoke> !fetch outputs\selected-imgs\*.png selected.txt
    ```
#### `!replay`

This command replays a text file generated by `!fetch`, or one created manually.

!!! example

    ```bash
    invoke> !replay outputs\selected-imgs\selected.txt
    ```
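The file passed to `!replay` is simply one dream command per line, in the same form that `!fetch` writes out. A hypothetical `selected.txt` (the contents here are illustrative) might contain:

```bash
a fantastic alien landscape -W 576 -H 512 -s 60 -A plms -C 7.5
watercolor of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
```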
!!! note

    These commands may behave unexpectedly if given a PNG file that was
    not generated by InvokeAI.
#### `!search <search string>`

This is similar to `!history` but it only returns lines that contain
`search string`. For example:

```bash
invoke> !search surreal
[21] surrealist painting of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
```
#### `!clear`

This clears the search history from memory and disk. Be advised that this
operation is irreversible and does not issue any warnings!

## Command-line editing and completion

The command-line offers convenient history tracking, editing, and command
completion.

- To scroll through previous commands and potentially edit/reuse them, use the
  ++up++ and ++down++ keys.
- To edit the current command, use the ++left++ and ++right++ keys to position
  the cursor, and then ++backspace++, ++delete++ or insert characters.
- To move to the very beginning of the command, type ++ctrl+a++ (or
  ++command+a++ on the Mac).
- To move to the end of the command, type ++ctrl+e++.
- To cut a section of the command, position the cursor where you want to start
  cutting and type ++ctrl+k++.
- To paste a cut section back in, position the cursor where you want to paste,
  and type ++ctrl+y++.

Windows users can get similar, but more limited, functionality if they launch
`invoke.py` with the `winpty` program and have the `pyreadline3` library
installed:
```batch
> winpty python scripts\invoke.py
```
On the Mac and Linux platforms, when you exit `invoke.py`, the last 1000 lines
of your command-line history will be saved. When you restart `invoke.py`, you
can access the saved history using the ++up++ key.

In addition, limited command-line completion is installed. In various contexts,
you can start typing your command and press ++tab++. A list of potential
completions will be presented to you. You can then type a little more, hit
++tab++ again, and eventually autocomplete what you want.

When specifying file paths using the one-letter shortcuts, the CLI will attempt
to complete pathnames for you. This is most handy for the `-I` (init image) and
`-M` (init mask) paths. To initiate completion, start the path with a slash
(`/`) or `./`. For example:

```bash
invoke> zebra with a mustache -I./test-pictures<TAB>
```
---
title: Concepts Library
---

# :material-library-shelves: The Hugging Face Concepts Library and Importing Textual Inversion files

## Using Textual Inversion Files

Textual inversion (TI) files are small models that customize the output of
Stable Diffusion image generation. They can augment SD with specialized subjects
and artistic styles. They are also known as "embeds" in the machine learning
world.

Each TI file introduces one or more vocabulary terms to the SD model. These are
known in InvokeAI as "triggers." Triggers are often, but not always, denoted
using angle brackets as in "<trigger-phrase>". The two most common types of
TI files that you'll encounter are `.pt` and `.bin` files, which are produced by
different TI training packages. InvokeAI supports both formats, but its
[built-in TI training system](TEXTUAL_INVERSION.md) produces `.pt`.
The [Hugging Face company](https://huggingface.co/sd-concepts-library) has
amassed a large library of >800 community-contributed TI files covering a
broad range of subjects and styles. InvokeAI has built-in support for this
library, which downloads and merges TI files automatically upon request. You can
also install your own or others' TI files by placing them in a designated
directory.

### An Example
Here are a few examples to illustrate how it works. All these images were
generated using the command-line client and the Stable Diffusion 1.5 model:

| Japanese gardener | Japanese gardener <ghibli-face> | Japanese gardener <hoi4-leaders> | Japanese gardener <cartoona-animals> |
| :--------------------------------: | :-----------------------------------: | :------------------------------------: | :----------------------------------------: |
|  |  |  |  |

You can also combine styles and concepts:

<figure markdown>
| A portrait of <alf> in <cartoona-animal> style |
| :--------------------------------------------------------: |
|  |
</figure>
## Using a Hugging Face Concept

!!! warning "Authenticating to HuggingFace"

    Some concepts require valid authentication to HuggingFace. Without it, they
    will not be downloaded and will be silently ignored.

    If you used an installer to install InvokeAI, you may have already set a
    HuggingFace token. If you skipped this step, you can:

    - run the InvokeAI configuration script again (if you used a manual installer): `invokeai-configure`
    - set one of the `HUGGINGFACE_TOKEN` or `HUGGING_FACE_HUB_TOKEN` environment variables to contain your token

    Finally, if you already used any HuggingFace library on your computer, you
    might already have a token in your local cache. Check for a hidden
    `.huggingface` directory in your home folder. If it contains a `token` file,
    then you are all set.
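For example, on Mac or Linux you could set the variable in your shell profile. This is a sketch only: the token value below is a placeholder, not a real token.

```shell
# Add to ~/.bashrc or ~/.zshrc; replace the placeholder with your own token.
export HUGGINGFACE_TOKEN="hf_your_token_here"
```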
Hugging Face TI concepts are downloaded and installed automatically as you
require them. This requires your machine to be connected to the Internet. To
find out what each concept is for, you can browse the
[Hugging Face concepts library](https://huggingface.co/sd-concepts-library) and
look at examples of what each concept produces.

When you have an idea of a concept you wish to try, go to the command-line
client (CLI) and type a `<` character and the beginning of the Hugging Face
concept name you wish to load. Press ++tab++, and the CLI will show you all
matching concepts. You can also type `<` and hit ++tab++ to get a listing of all
~800 concepts, but be prepared to scroll up to see them all! If there is more
than one match you can continue to type and ++tab++ until the concept is
completed.
!!! example

    If you type in `<x` and hit ++tab++, you'll be prompted with the completions:

    ```py
    <xatu2> <xatu> <xbh> <xi> <xidiversity> <xioboma> <xuna> <xyz>
    ```

    Now type `id` and press ++tab++. It will be autocompleted to `<xidiversity>`
    because this is a unique match.
Finish your prompt and generate as usual. You may include multiple concept terms
|
||||||
|
in the prompt.
|
||||||
|
|
||||||
|
If you have never used this concept before, you will see a message that the TI
|
||||||
|
model is being downloaded and installed. After this, the concept will be saved
|
||||||
|
locally (in the `models/sd-concepts-library` directory) for future use.
|
||||||
|
|
||||||
|
Several steps happen during downloading and installation, including a scan of
|
||||||
|
the file for malicious code. Should any errors occur, you will be warned and the
|
||||||
|
concept will fail to load. Generation will then continue treating the trigger
|
||||||
|
term as a normal string of characters (e.g. as literal `<ghibli-face>`).
|
||||||
|
|
||||||
|
You can also use `<concept-names>` in the WebGUI's prompt textbox. There is no
|
||||||
|
autocompletion at this time.
|
||||||
|
|
||||||
|
## Installing your Own TI Files
|
||||||
|
|
||||||
|
You may install any number of `.pt` and `.bin` files simply by copying them into
|
||||||
|
the `embeddings` directory of the InvokeAI runtime directory (usually `invokeai`
|
||||||
|
in your home directory). You may create subdirectories in order to organize the
|
||||||
|
files in any way you wish. Be careful not to overwrite one file with another.
|
||||||
|
For example, TI files generated by the Hugging Face toolkit share the named
|
||||||
|
`learned_embedding.bin`. You can use subdirectories to keep them distinct.
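Before copying files in, you can scan the tree for base names that would
collide. This helper is illustrative only (it is not an InvokeAI utility); it
simply groups `.pt`/`.bin` files by file name so duplicates stand out:

```python
import os
from collections import defaultdict

def find_name_collisions(embeddings_dir):
    """Group TI files under embeddings_dir by base name; return duplicates."""
    by_name = defaultdict(list)
    for root, _dirs, files in os.walk(embeddings_dir):
        for name in files:
            if name.endswith((".pt", ".bin")):
                by_name[name].append(os.path.join(root, name))
    # Only names that appear more than once are potential overwrites.
    return {name: paths for name, paths in by_name.items() if len(paths) > 1}
```

Files sharing a name in *different* subdirectories are fine; the report just
tells you which names you could not safely flatten into one directory.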

At startup time, InvokeAI will scan the `embeddings` directory and load any TI
files it finds there. At startup you will see a message similar to this one:

```bash
>> Current embedding manager terms: *, <HOI4-Leader>, <princess-knight>
```

Note the `*` trigger term. This is a placeholder term that many early TI
tutorials taught people to use rather than a more descriptive term.
Unfortunately, if you have multiple TI files that all use this term, only the
first one loaded will be triggered by use of the term.

To avoid this problem, you can use the `merge_embeddings.py` script to merge two
or more TI files together. If it encounters a collision of terms, the script
will prompt you to select new terms that do not collide. See
[Textual Inversion](TEXTUAL_INVERSION.md) for details.

## Further Reading

Please see [the repository](https://github.com/rinongal/textual_inversion) and
associated paper for details and limitations.
@@ -85,7 +85,7 @@ increasing size, every tile after the first in a row or column
effectively only covers an extra `1 - overlap_ratio` on each axis. If
the input/`--init_img` is the same size as a tile, the ideal (for time)
scaling factors with the default overlap (0.25) are 1.75, 2.5, 3.25,
4.0, etc.
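The arithmetic behind those ideal factors can be checked directly: the first
tile covers one full tile width, and each additional tile adds only
`1 - overlap_ratio` of a tile on each axis (the function name here is
illustrative, not part of the CLI):

```python
def ideal_scaling_factors(overlap_ratio=0.25, extra_tiles=4):
    """Scaling factors where tiles line up exactly with no wasted work.

    With k extra tiles beyond the first, coverage is 1 + k * (1 - overlap).
    """
    return [1 + k * (1 - overlap_ratio) for k in range(1, extra_tiles + 1)]

print(ideal_scaling_factors())  # [1.75, 2.5, 3.25, 4.0]
```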

`-embiggen_tiles <spaced list of tiles>`

@@ -100,6 +100,15 @@ Tiles are numbered starting with one, and left-to-right,
top-to-bottom. So, if you are generating a 3x3 tiled image, the
middle row would be `4 5 6`.

`-embiggen_strength <strength>`

Another advanced option if you want to experiment with the strength parameter
that embiggen uses when it calls Img2Img. Values range from 0.0 to 1.0, and
lower values preserve more of the character of the initial image. Values that
are too high will result in a completely different end image, while values that
are too low will result in an image not dissimilar to one you would get with
ESRGAN upscaling alone. The default value is 0.4.

### Examples

!!! example ""
@@ -4,135 +4,183 @@ title: Image-to-Image

# :material-image-multiple: Image-to-Image

Both the Web and command-line interfaces provide an "img2img" feature
that lets you seed your creations with an initial drawing or
photo. This is a really cool feature that tells stable diffusion to
build the prompt on top of the image you provide, preserving the
original's basic shape and layout.

See the [WebUI Guide](WEB.md) for a walkthrough of the img2img feature
in the InvokeAI web server. This document describes how to use img2img
in the command-line tool.

## Basic Usage

Launch the command-line client by launching `invoke.sh`/`invoke.bat`
and choosing option (1). Alternatively, activate the InvokeAI
environment and issue the command `invokeai`.

Once the `invoke> ` prompt appears, you can start an img2img render by
pointing to a seed file with the `-I` option as shown here:

!!! example ""

    ```commandline
    tree on a hill with a river, nature photograph, national geographic -I./test-pictures/tree-and-river-sketch.png -f 0.85
    ```

    <figure markdown>

    | original image | generated image |
    | :------------: | :-------------: |
    | ![original sketch](https://user-images.githubusercontent.com/50542132/193946000-c42a96d8-5a74-4f8a-b4c3-5213e6cadcce.png){ width=320 } | ![generated image](https://user-images.githubusercontent.com/111189/194135515-53d4c060-e994-4016-8121-7c685e281ac9.png){ width=320 } |

    </figure>

The `--init_img` (`-I`) option gives the path to the seed picture. `--strength`
(`-f`) controls how much the original will be modified, ranging from `0.0` (keep
the original intact), to `1.0` (ignore the original completely). The default is
`0.75`, and ranges from `0.25-0.90` give interesting results. Other relevant
options include `-C` (classifier-free guidance scale), and `-s` (steps). Unlike
`txt2img`, adding steps will continuously change the resulting image and it will
not converge.

You may also pass a `-v<variation_amount>` option to generate `-n<iterations>`
count variants on the original image. This is done by passing the first
generated image back into img2img the requested number of times. It generates
interesting variants.

Note that the prompt makes a big difference. For example, this slight variation
on the prompt produces a very different image:

<figure markdown>
![photograph variant](https://user-images.githubusercontent.com/111189/194135220-16b62181-b60c-4248-8989-4834a8fd7fbd.png){ width=320 }
<caption markdown>photograph of a tree on a hill with a river</caption>
</figure>

!!! tip

    When designing prompts, think about how the images scraped from the internet were
    captioned. Very few photographs will be labeled "photograph" or "photorealistic."
    They will, however, be captioned with the publication, photographer, camera model,
    or film settings.

If the initial image contains transparent regions, then Stable Diffusion will
only draw within the transparent regions, a process called
[`inpainting`](./INPAINTING.md#creating-transparent-regions-for-inpainting).
However, for this to work correctly, the color information underneath the
transparent areas needs to be preserved, not erased.

!!! warning "**IMPORTANT ISSUE** "

    `img2img` does not work properly on initial images smaller
    than 512x512. Please scale your image to at least 512x512 before using it.
    Larger images are not a problem, but may run out of VRAM on your GPU card. To
    fix this, use the `--fit` option, which downscales the initial image to fit within
    the box specified by width x height:

    ```
    tree on a hill with a river, national geographic -I./test-pictures/big-sketch.png -H512 -W512 --fit
    ```

## How does it actually work, though?

The main difference between `img2img` and `prompt2img` is the starting point.
While `prompt2img` always starts with pure gaussian noise and progressively
refines it over the requested number of steps, `img2img` skips some of these
earlier steps (how many it skips is indirectly controlled by the `--strength`
parameter), and instead uses your initial image mixed with gaussian noise as the
starting image.

**Let's start** by thinking about vanilla `prompt2img`, just generating an image
from a prompt. If the step count is 10, then the "latent space" (Stable
Diffusion's internal representation of the image) for the prompt "fire" with
seed `1592514025` develops something like this:

!!! example ""

    ```bash
    invoke> "fire" -s10 -W384 -H384 -S1592514025
    ```

    <figure markdown>
    ![latent steps](https://user-images.githubusercontent.com/111189/197580594-e7a4a4d2-c1af-4d00-8faf-afd4e6510b0e.png){ width=720 }
    </figure>

Put simply: starting from a frame of fuzz/static, SD finds details in each frame
that it thinks look like "fire" and brings them a little bit more into focus,
gradually scrubbing out the fuzz until a clear image remains.

**When you use `img2img`** some of the earlier steps are cut, and instead an
initial image of your choice is used. But because of how the maths behind Stable
Diffusion works, this image needs to be mixed with just the right amount of
noise (fuzz/static) for where it is being inserted. This is where the strength
parameter comes in. Depending on the set strength, your image will be inserted
into the sequence at the appropriate point, with just the right amount of noise.
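That mixing step can be sketched with the usual diffusion forward-noising
formula. This is a deliberately simplified toy, not Stable Diffusion's actual
schedule: the only point it illustrates is that higher strength means a noisier
starting latent, and strength `0.0` leaves the image untouched:

```python
import math
import random

def noise_init_image(latent, strength):
    """Toy sketch of noising an init latent before img2img starts.

    Treats the signal fraction as simply 1 - strength: strength 0 keeps
    the image intact, strength 1 replaces it with pure gaussian noise.
    The real sampler uses its own cumulative noise schedule.
    """
    alpha_bar = 1.0 - strength  # toy "how much signal survives" value
    noise = [random.gauss(0.0, 1.0) for _ in latent]
    return [
        math.sqrt(alpha_bar) * x + math.sqrt(1.0 - alpha_bar) * n
        for x, n in zip(latent, noise)
    ]
```

At strength `0.7` the image keeps only ~30% of its signal weight, which is why
the strength-`0.7` starting frames below look so much fuzzier than the
strength-`0.4` ones.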

### A concrete example

!!! example "I want SD to draw a fire based on this hand-drawn image"

    ![drawing of a fireplace](https://user-images.githubusercontent.com/111189/197580073-ee07266b-0adc-4d38-8bad-09c38b39c87b.png){ align=left }

Let's only do 10 steps, to make it easier to see what's happening. If strength
is `0.7`, this is what the internal steps the algorithm has to take will look
like:

<figure markdown>
![gravity32]()
</figure>

With strength `0.4`, the steps look more like this:

<figure markdown>
![gravity30]()
</figure>

Notice how much more fuzzy the starting image is for strength `0.7` compared to
`0.4`, and notice also how much longer the sequence is with `0.7`:

|                             | strength = 0.7 | strength = 0.4 |
| --------------------------- | -------------- | -------------- |
| initial image that SD sees  | ![]() | ![]() |
| steps argument to `invoke>` | `-s10` | `-s10` |
| steps actually taken        | `7` | `4` |
| latent space at each step   | ![gravity32]() | ![gravity30]() |
| output                      | ![gravity33]() | ![gravity31]() |

Both of the outputs look kind of like what I was thinking of. With the strength
higher, my input becomes more vague, _and_ Stable Diffusion has more steps to
refine its output. But it's not really making what I want, which is a picture of
a cheery open fire. With the strength lower, my input is more clear, _but_
Stable Diffusion has less chance to refine itself, so the result ends up
inheriting all the problems of my bad drawing.

If you want to try this out yourself, all of these are using a seed of
`1592514025` with a width/height of `384`, step count `10`, the default sampler
(`k_lms`), and the single-word prompt `"fire"`:

```bash
invoke> "fire" -s10 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png --strength 0.7
```

The code for rendering intermediates is on my (damian0815's) branch
[document-img2img](https://github.com/damian0815/InvokeAI/tree/document-img2img) -
run `invoke.py` and check your `outputs/img-samples/intermediates` folder while
generating an image.

### Compensating for the reduced step count

After putting this guide together I was curious to see what the difference would
be if I increased the step count to compensate, so that SD could have the same
amount of steps to develop the image regardless of the strength. So I ran the
generation again using the same seed, but this time adapting the step count to
give each generation 20 steps.
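The step-count adjustments used below follow from the relationship described
earlier: img2img only runs roughly `strength × steps` of the schedule, so to get
a fixed number of actual image steps you divide by the strength (the helper name
is illustrative, and the real sampler may round slightly differently):

```python
import math

def total_steps_needed(desired_img_steps, strength):
    """Approximate -s value so img2img performs ~desired_img_steps steps.

    img2img skips roughly (1 - strength) * steps of the schedule, so it
    actually runs about strength * steps denoising steps.
    """
    return math.ceil(desired_img_steps / strength)

print(total_steps_needed(20, 0.4))  # 50
print(total_steps_needed(20, 0.7))  # 29 (the guide rounds this to 30)
```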

Here's strength `0.4` (note step count `50`, which is `20 ÷ 0.4` to make sure SD
does `20` steps from my image):

```bash
invoke> "fire" -s50 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.4
```

@@ -140,7 +188,8 @@ invoke> "fire" -s50 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.4
![gravity35]()
</figure>

and here is strength `0.7` (note step count `30`, which is roughly `20 ÷ 0.7` to
make sure SD does `20` steps from my image):

```bash
invoke> "fire" -s30 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.7
```

@@ -150,7 +199,11 @@ invoke> "fire" -s30 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.7
![gravity37]()
</figure>

In both cases the image is nice and clean and "finished", but because at
strength `0.7` Stable Diffusion has been given so much more freedom to improve
on my badly-drawn flames, they've come out looking much better. You can really
see the difference when looking at the latent steps. There's more noise on the
first image with strength `0.7`:

<figure markdown>
![gravity34]()
@@ -162,15 +215,19 @@ than there is for strength `0.4`:
![gravity36]()
</figure>

and that extra noise gives the algorithm more choices when it is evaluating how
to denoise any particular pixel in the image.

Unfortunately, it seems that `img2img` is very sensitive to the step count.
Here's strength `0.7` with a step count of `29` (SD did 19 steps from my image):

<figure markdown>
![gravity38]()
</figure>

By comparing the latents we can sort of see that something got interpreted
differently enough on the third or fourth step to lead to a rather different
interpretation of the flames.

<figure markdown>
![gravity44]()
@@ -180,4 +237,9 @@ By comparing the latents we can sort of see that something got interpreted diffe
![gravity45](latents/fire-latents-0.7-vs-0.7-canny.png)
</figure>

This is the result of a difference in the de-noising "schedule" - basically the
noise has to be cleaned by a certain degree each step or the model won't
"converge" on the image properly (see the
[stable diffusion blog](https://huggingface.co/blog/stable_diffusion) for more
about that). A different step count means a different schedule, which means
things get interpreted slightly differently at every step.
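One way to see why the schedule shifts: samplers typically spread their steps
evenly across the model's (roughly 1000) training timesteps, so changing the
step count moves every intermediate timestep. The spacing rule below is a common
convention for illustration, not necessarily the exact one `k_lms` uses:

```python
def sampler_timesteps(num_steps, num_train_timesteps=1000):
    """Evenly spaced denoising timesteps, noisiest (highest) first."""
    return [
        round(i * (num_train_timesteps - 1) / (num_steps - 1))
        for i in range(num_steps - 1, -1, -1)
    ]

# A 29-step and a 30-step run visit almost entirely different timesteps,
# so every intermediate latent sits at a slightly different noise level.
print(sampler_timesteps(5))  # [999, 749, 500, 250, 0]
```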
@@ -6,29 +6,27 @@ title: Inpainting

## **Creating Transparent Regions for Inpainting**

Inpainting is really cool. To do it, you start with an initial image and use a
photoeditor to make one or more regions transparent (i.e. they have a "hole" in
them). You then provide the path to this image at the `invoke>` command line
using the `-I` switch. Stable Diffusion will only paint within the transparent
region.

There's a catch. In the current implementation, you have to prepare the initial
image correctly so that the underlying colors are preserved under the
transparent area. Many image editing applications will by default erase the
color information under the transparent pixels and replace them with white or
black, which will lead to suboptimal inpainting. It often helps to apply
incomplete transparency, such as any value between 1% and 99%.

You also must take care to export the PNG file in such a way that the color
information is preserved. There is often an option in the export dialog that
lets you specify this.
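Whether an editor kept the color data can be checked directly on the RGBA
pixels: fully transparent pixels that were all flattened to pure white or pure
black are a bad sign. This is a simplified, illustrative heuristic operating on
raw pixel tuples (for a real PNG you would first extract the pixels with an
image library):

```python
def transparent_pixels_keep_color(rgba_pixels):
    """Heuristic: did the editor preserve color under transparency?

    rgba_pixels is an iterable of (r, g, b, a) tuples. If every fully
    transparent pixel was flattened to pure white or black, the color
    information was probably erased.
    """
    transparent = [(r, g, b) for r, g, b, a in rgba_pixels if a == 0]
    if not transparent:
        return True  # nothing transparent, nothing to lose
    erased = {(0, 0, 0), (255, 255, 255)}
    return any(rgb not in erased for rgb in transparent)
```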

If your photoeditor is erasing the underlying color information, `invoke.py`
will give you a big fat warning. If you can't find a way to coax your
photoeditor to retain color values under transparent areas, then you can combine
the `-I` and `-M` switches to provide both the original unedited image and the
masked (partially transparent) image:

```bash
invoke> "man with cat on shoulder" -I./images/man.png -M./images/man-transparent.png
```

@@ -36,47 +34,47 @@ invoke> "man with cat on shoulder" -I./images/man.png -M./images/man-transparent

## **Masking using Text**

You can also create a mask using a text prompt to select the part of the image
you want to alter, using the [clipseg](https://github.com/timojl/clipseg)
algorithm. This works on any image, not just ones generated by InvokeAI.

The `--text_mask` (short form `-tm`) option takes two arguments. The first
argument is a text description of the part of the image you wish to mask (paint
over). If the text description contains a space, you must surround it with
quotation marks. The optional second argument is the minimum threshold for the
mask classifier's confidence score, described in more detail below.

To see how this works in practice, here's an image of a still life painting that
I got off the web.

<figure markdown>
![](../assets/still-life-scaled.jpg)
</figure>

You can selectively mask out the orange and replace it with a baseball in this
way:

```bash
invoke> a baseball -I /path/to/still_life.png -tm orange
```

<figure markdown>
![](../assets/still-life-inpainted.png)
</figure>

The clipseg classifier produces a confidence score for each region it
identifies. Generally regions that score above 0.5 are reliable, but if you are
getting too much or too little masking you can adjust the threshold down (to get
more mask), or up (to get less). In this example, by passing `-tm` a higher
value, we are insisting on a tighter mask. However, if you make it too high, the
orange may not be picked up at all!

```bash
invoke> a baseball -I /path/to/breakfast.png -tm orange 0.6
```
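The threshold's effect can be sketched numerically: the mask keeps only pixels
whose confidence clears the threshold, so raising it shrinks the mask. The
confidence values below are toy numbers, not real clipseg output:

```python
def mask_from_confidence(confidence, threshold=0.5):
    """Turn per-pixel confidence scores into a binary mask.

    Raising the threshold yields a tighter (smaller) mask; lowering it
    yields more mask, matching the -tm threshold behavior described above.
    """
    return [score >= threshold for score in confidence]

scores = [0.2, 0.55, 0.62, 0.9, 0.45]
loose = mask_from_confidence(scores, 0.5)
tight = mask_from_confidence(scores, 0.6)
print(sum(loose), sum(tight))  # 3 2
```

Push the threshold past the region's best score and the mask empties entirely,
which is the "orange may not be picked up at all" failure mode.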
|
||||||
|
|
||||||
The `!mask` command may be useful for debugging problems with the
|
The `!mask` command may be useful for debugging problems with the text2mask
|
||||||
text2mask feature. The syntax is `!mask /path/to/image.png -tm <text>
|
feature. The syntax is `!mask /path/to/image.png -tm <text> <threshold>`
|
||||||
<threshold>`
|
|
||||||
|
|
||||||
It will generate three files:
|
It will generate three files:
|
||||||
|
|
||||||
@@ -84,19 +82,18 @@ It will generate three files:
|
|||||||
- it will be named XXXXX.<imagename>.<prompt>.selected.png
- The image with the un-selected area highlighted.
- it will be named XXXXX.<imagename>.<prompt>.deselected.png
- The image with the selected area converted into a black and white image
  according to the threshold level
- it will be named XXXXX.<imagename>.<prompt>.masked.png

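One plausible way to picture what each of the three files contains, sketched
with NumPy arrays (this is an illustration, not InvokeAI's actual
implementation):

```python
import numpy as np

def mask_debug_outputs(rgb, mask):
    """Sketch of the three !mask outputs for an HxWx3 uint8 image and an
    HxW boolean mask (True = region selected by the text prompt)."""
    mask = np.asarray(mask, dtype=bool)
    alpha_sel = np.where(mask, 255, 64).astype(np.uint8)   # selected area opaque
    alpha_des = np.where(mask, 64, 255).astype(np.uint8)   # un-selected area opaque
    selected = np.dstack([rgb, alpha_sel])     # *.selected.png (RGBA)
    deselected = np.dstack([rgb, alpha_des])   # *.deselected.png (RGBA)
    masked = mask.astype(np.uint8) * 255       # *.masked.png (black and white)
    return selected, deselected, masked

rgb = np.zeros((2, 2, 3), dtype=np.uint8)
mask = np.array([[True, False], [False, True]])
sel, des, bw = mask_debug_outputs(rgb, mask)
```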
The `.masked.png` file can then be directly passed to the `invoke>` prompt in
the CLI via the `-M` argument. Do not attempt this with the `selected.png` or
`deselected.png` files, as they contain some transparency throughout the image
and will not produce the desired results.

Here is an example of how `!mask` works:

```bash
invoke> !mask ./test-pictures/curly.png -tm hair 0.5
>> generating masks from ./test-pictures/curly.png
>> Initializing clipseg model for text to mask inference
Outputs:
[941.3] outputs/img-samples/000019.curly.hair.masked.png: !mask ./test-pictures/curly.png -tm hair 0.5
```

<figure markdown>

<figcaption>Original image "curly.png"</figcaption>
</figure>

<figure markdown>

<figcaption>000019.curly.hair.selected.png</figcaption>
</figure>

<figure markdown>

<figcaption>000019.curly.hair.deselected.png</figcaption>
</figure>

<figure markdown>

<figcaption>000019.curly.hair.masked.png</figcaption>
</figure>

It looks like we selected the hair pretty well at the 0.5 threshold (which is
the default, so we didn't actually have to specify it), so let's have some fun:

```bash
invoke> medusa with cobras -I ./test-pictures/curly.png -M 000019.curly.hair.masked.png -C20
>> loaded input image of size 512x512 from ./test-pictures/curly.png
...
Outputs:
[946] outputs/img-samples/000024.801380492.png: "medusa with cobras" -s 50 -S 801380492 -W 512 -H 512 -C 20.0 -I ./test-pictures/curly.png -A k_lms -f 0.75
```

<figure markdown>

</figure>

You can also skip the `!mask` creation step and just select the masked
region directly:

```bash
invoke> medusa with cobras -I ./test-pictures/curly.png -tm hair -C20
```

## Using the RunwayML inpainting model

The
[RunwayML Inpainting Model v1.5](https://huggingface.co/runwayml/stable-diffusion-inpainting)
is a specialized version of
[Stable Diffusion v1.5](https://huggingface.co/spaces/runwayml/stable-diffusion-v1-5)
that contains extra channels specifically designed to enhance inpainting and
outpainting. While it can do regular `txt2img` and `img2img`, it really shines
when filling in missing regions. It has an almost uncanny ability to blend the
new regions with existing ones in a semantically coherent way.

To install the inpainting model, follow the
[instructions](../installation/050_INSTALLING_MODELS.md) for installing a new model.
You may use either the CLI (`invoke.py` script) or directly edit the
`configs/models.yaml` configuration file to do this. The main thing to watch out
for is that the model `config` option must be set up to use
`v1-inpainting-inference.yaml` rather than the `v1-inference.yaml` file that is
used by Stable Diffusion 1.4 and 1.5.

After installation, your `models.yaml` should contain an entry that looks like
this one:

```yaml
inpainting-1.5:
  weights: models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt
  description: SD inpainting v1.5
  config: configs/stable-diffusion/v1-inpainting-inference.yaml
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  width: 512
  height: 512
```

As shown in the example, you may include a VAE fine-tuning weights file as well.
This is strongly recommended.

To use the custom inpainting model, launch `invoke.py` with the argument
`--model inpainting-1.5` or alternatively from within the script use the
`!switch inpainting-1.5` command to load and switch to the inpainting model.

You can now do inpainting and outpainting exactly as described above, but there
will (likely) be a noticeable improvement in coherence. Txt2img and Img2img will
work as well.

There are a few caveats to be aware of:

1. The inpainting model is larger than the standard model, and will use nearly 4
   GB of GPU VRAM. This makes it unlikely to run on a 4 GB graphics card.

2. When operating in Img2img mode, the inpainting model is much less steerable
   than the standard model. It is great for making small changes, such as
   changing the pattern of a fabric, or slightly changing a subject's expression
   or hair, but the model will resist making the dramatic alterations that the
   standard model lets you do.

3. While the `--hires` option works fine with the inpainting model, some special
   features, such as `--embiggen`, are disabled.

4. Prompt weighting (`banana++ sushi`) and merging work well with the inpainting
   model, but prompt swapping (`a ("fluffy cat").swap("smiling dog") eating a hotdog`)
   will not have any effect due to the way the model is set up. You may use text
   masking (with `-tm thing-to-mask`) as an effective replacement.

5. The model tends to oversharpen images if you use high step or CFG values. If
   you need to do large steps, use the standard model.

6. The `--strength` (`-f`) option has no effect on the inpainting model due to
   its fundamental differences with the standard model. It will always take the
   full number of steps you specify.

## Troubleshooting

Here are some troubleshooting tips for inpainting and outpainting.

## Inpainting is not changing the masked region enough!

One of the things to understand about how inpainting works is that it is
equivalent to running img2img on just the masked (transparent) area. img2img
builds on top of the existing image data, and therefore will attempt to preserve
colors, shapes and textures to the best of its ability. Unfortunately this means
that if you want to make a dramatic change in the inpainted region, for example
replacing a red wall with a blue one, the algorithm will fight you.

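The "img2img on just the masked area" idea can be sketched as a final
composite: generated pixels are used only inside the mask, and everything else
is preserved from the original. This is a conceptual illustration, not the
actual sampler code:

```python
import numpy as np

def composite_inpaint(original, generated, mask):
    """Keep the original image everywhere except the masked (transparent)
    region, which is taken from the generated image. Because generation
    starts from the surrounding data, the new region inherits its colors
    and textures."""
    mask = np.asarray(mask, dtype=bool)
    return np.where(mask[..., None], generated, original)

original = np.full((2, 2, 3), 10, dtype=np.uint8)
generated = np.full((2, 2, 3), 200, dtype=np.uint8)
mask = np.array([[True, False], [False, False]])
out = composite_inpaint(original, generated, mask)
```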
You have a couple of options. The first is to increase the values of the
requested steps (`-sXXX`), strength (`-f0.XX`), and/or condition-free guidance
(`-CXX.X`). If this is not working for you, a more extreme step is to provide
the `--inpaint_replace 0.X` (`-r0.X`) option. This value ranges from 0.0 to 1.0.
The higher it is the less attention the algorithm will pay to the data
underneath the masked region. At high values this will enable you to replace
colored regions entirely, but beware that the masked region may not blend in
with the surrounding unmasked regions as well.

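Conceptually, `inpaint_replace` controls how much of the data under the mask is
replaced with fresh noise before sampling. A toy sketch of that assumed
behavior (the function and variable names here are illustrative, not the real
sampler internals):

```python
import numpy as np

def init_masked_region(latents, noise, mask, r):
    """Toy sketch of inpaint_replace: with r=0.0 the original data under
    the mask is kept (img2img-like behavior), while with r=1.0 it is
    replaced entirely with noise, so the sampler ignores what was there.
    Unmasked data is always preserved."""
    mask = np.asarray(mask, dtype=bool)
    blended = (1.0 - r) * latents + r * noise
    return np.where(mask, blended, latents)

latents = np.ones((2, 2))
noise = np.zeros((2, 2))
mask = np.array([[True, False], [False, False]])
```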
---

5. Open the Layers toolbar (^L) and select "Floating Selection"
6. Set opacity to a value between 0% and 99%
7. Export as PNG
8. In the export dialogue, make sure the "Save colour values from transparent
   pixels" checkbox is selected.

---

<figure markdown>

</figure>

2. Use any of the selection tools (Marquee, Lasso, or Wand) to select the area
   you desire to inpaint.

<figure markdown>

</figure>

3. Because we'll be applying a mask over the area we want to preserve, you
   should now select the inverse by using the ++shift+ctrl+i++ shortcut, or
   right clicking and using the "Select Inverse" option.

4. You'll now create a mask by selecting the image layer, and Masking the
   selection. Make sure that you don't delete any of the underlying image, or
   your inpainting results will be dramatically impacted.

<figure markdown>

</figure>

5. Make sure to hide any background layers that are present. You should see the
   mask applied to your image layer, and the image on your canvas should display
   the checkered background.

<figure markdown>

</figure>

6. Save the image as a transparent PNG by using `File`-->`Save a Copy` from the
   menu bar, or by using the keyboard shortcut ++alt+ctrl+s++

<figure markdown>

</figure>

7. After following the inpainting instructions above (either through the CLI or
   the Web UI), marvel at your newfound ability to selectively invoke. Lookin'
   good!

<figure markdown>

</figure>

8. In the export dialogue, make sure the "Save colour values from transparent
   pixels" checkbox is selected.

---
title: Model Merging
---

# :material-image-off: Model Merging

## How to Merge Models

As of version 2.3, InvokeAI comes with a script that allows you to
merge two or three diffusers-type models into a new merged model. The
resulting model will combine characteristics of the originals, and can
be used to teach an old model new tricks.

You may run the merge script by starting the invoke launcher
(`invoke.sh` or `invoke.bat`) and choosing the option for _merge
models_. This will launch a text-based interactive user interface that
prompts you to select the models to merge, how to merge them, and the
merged model name.

Alternatively you may activate InvokeAI's virtual environment from the
command line, and call the script via `merge_models --gui` to open up
a version that has a nice graphical front end. To get the
commandline-only version, omit `--gui`.

The user interface for the text-based interactive script is
straightforward. It shows you a series of setting fields. Use control-N (^N)
to move to the next field, and control-P (^P) to move to the previous
one. You can also use TAB and shift-TAB to move forward and
backward. Once you are in a multiple choice field, use the up and down
cursor arrows to move to your desired selection, and press ++space++ or
++enter++ to select it. Change text fields by typing in them, and adjust
scrollbars using the left and right arrow keys.

Once you are happy with your settings, press the OK button. Note that
there may be two pages of settings, depending on the height of your
screen, and the OK button may be on the second page. Advance past the
last field of the first page to get to the second page, and reverse
this to get back.

If the merge runs successfully, it will create a new diffusers model
under the selected name and register it with InvokeAI.

## The Settings

* Model Selection -- there are three multiple choice fields that
  display all the diffusers-style models that InvokeAI knows about.
  If you do not see the model you are looking for, then it is probably
  a legacy checkpoint model and needs to be converted using the
  `invoke` command-line client and its `!optimize` command. You
  must select at least two models to merge. The third can be left at
  "None" if you desire.

* Alpha -- This is the ratio to use when combining models. It ranges
  from 0 to 1. The higher the value, the more weight is given to the
  second and (optionally) third models. So if you have two models named
  "A" and "B", an alpha value of 0.25 will give you a merged model that
  is 75% A and 25% B.

* Interpolation Method -- This is the method used to combine
  weights. The options are "weighted_sum" (the default), "sigmoid",
  "inv_sigmoid" and "add_difference". Each produces slightly different
  results. When three models are in use, only "add_difference" is
  available. (TODO: cite a reference that describes what these
  interpolation methods actually do and how to decide among them).

* Force -- Not all models are compatible with each other. The merge
  script will check for compatibility and refuse to merge ones that
  are incompatible. Set this checkbox to try merging anyway.

* Name for merged model - This is the name for the new model. Please
  use InvokeAI conventions - only alphanumeric letters and the
  characters `.+-`.

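The two interpolation methods whose formulas are standard can be summarized
per weight. This is a hedged sketch of the usual definitions (the real script
applies them across entire model tensors, and the sigmoid variants are
omitted here):

```python
def weighted_sum(a, b, alpha):
    """Two-model merge. alpha is the weight given to the second model:
    alpha=0.25 keeps 75% of A and takes 25% of B."""
    return (1.0 - alpha) * a + alpha * b

def add_difference(a, b, c, alpha):
    """Three-model merge: the (B - C) difference is added onto A,
    scaled by alpha."""
    return a + alpha * (b - c)
```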
## Caveats

This is a new script and may contain bugs.