Compare commits (2480 commits)
release-1. ... development
*(Commit list: the compare view showed abbreviated SHA1 hashes only; the Author and Date columns were not captured.)*
.dev_scripts/diff_images.py (new file, 32 lines)
@@ -0,0 +1,32 @@
import argparse

import numpy as np
from PIL import Image


def read_image_int16(image_path):
    image = Image.open(image_path)
    return np.array(image).astype(np.int16)


def calc_images_mean_L1(image1_path, image2_path):
    image1 = read_image_int16(image1_path)
    image2 = read_image_int16(image2_path)
    assert image1.shape == image2.shape

    mean_L1 = np.abs(image1 - image2).mean()
    return mean_L1


def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument('image1_path')
    parser.add_argument('image2_path')
    args = parser.parse_args()
    return args


if __name__ == '__main__':
    args = parse_args()
    mean_L1 = calc_images_mean_L1(args.image1_path, args.image2_path)
    print(mean_L1)
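The script above reports the mean absolute per-channel pixel difference between two equally sized images: 0.0 means the renders are pixel-identical, and larger values mean larger average drift. A tiny synthetic illustration of what that number means (the arrays below are made up for the example, not project renders):

```python
# Illustration only: small synthetic arrays standing in for two renders.
import numpy as np

reference = np.zeros((2, 2, 3), dtype=np.int16)  # 12 values in total
candidate = reference.copy()
candidate[0, 0, 0] = 12                          # one channel differs by 12

# Same computation as calc_images_mean_L1 above.
mean_L1 = np.abs(reference - candidate).mean()
print(mean_L1)  # 1.0  (= 12 / 12)
```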
*(Binary image file added, 416 KiB; presumably the reference render used by the regression scripts below.)*
.dev_scripts/sample_command.txt (new file, 1 line)
@@ -0,0 +1 @@
"a photograph of an astronaut riding a horse" -s50 -S42
.dev_scripts/test_regression_txt2img_dream_v1_4.sh (new file, 19 lines)
@@ -0,0 +1,19 @@
# generate an image
PROMPT_FILE=".dev_scripts/sample_command.txt"
OUT_DIR="outputs/img-samples/test_regression_txt2img_v1_4"
SAMPLES_DIR=${OUT_DIR}
python scripts/dream.py \
    --from_file ${PROMPT_FILE} \
    --outdir ${OUT_DIR} \
    --sampler plms

# original output by CompVis/stable-diffusion
IMAGE1=".dev_scripts/images/v1_4_astronaut_rides_horse_plms_step50_seed42.png"
# new output
IMAGE2=`ls -A ${SAMPLES_DIR}/*.png | sort | tail -n 1`

echo ""
echo "comparing the following two images"
echo "IMAGE1: ${IMAGE1}"
echo "IMAGE2: ${IMAGE2}"
python .dev_scripts/diff_images.py ${IMAGE1} ${IMAGE2}
.dev_scripts/test_regression_txt2img_v1_4.sh (new file, 23 lines)
@@ -0,0 +1,23 @@
# generate an image
PROMPT="a photograph of an astronaut riding a horse"
OUT_DIR="outputs/txt2img-samples/test_regression_txt2img_v1_4"
SAMPLES_DIR="outputs/txt2img-samples/test_regression_txt2img_v1_4/samples"
python scripts/orig_scripts/txt2img.py \
    --prompt "${PROMPT}" \
    --outdir ${OUT_DIR} \
    --plms \
    --ddim_steps 50 \
    --n_samples 1 \
    --n_iter 1 \
    --seed 42

# original output by CompVis/stable-diffusion
IMAGE1=".dev_scripts/images/v1_4_astronaut_rides_horse_plms_step50_seed42.png"
# new output
IMAGE2=`ls -A ${SAMPLES_DIR}/*.png | sort | tail -n 1`

echo ""
echo "comparing the following two images"
echo "IMAGE1: ${IMAGE1}"
echo "IMAGE2: ${IMAGE2}"
python .dev_scripts/diff_images.py ${IMAGE1} ${IMAGE2}
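Both regression scripts only print the mean-L1 value; judging whether the drift is acceptable is left to whoever reads the output. A sketch of how the same check could be turned into a hard assertion follows; only the .dev_scripts/diff_images.py path comes from this diff, while the tolerance value and the wrapper itself are illustrative assumptions.

```python
# Hypothetical wrapper around .dev_scripts/diff_images.py; the tolerance
# below is an example value, not a threshold defined anywhere in this diff.
import subprocess
import sys


def assert_images_close(image1_path, image2_path, tolerance=1.0):
    # Run the comparison script and parse the printed mean-L1 value.
    result = subprocess.run(
        [sys.executable, ".dev_scripts/diff_images.py", image1_path, image2_path],
        capture_output=True, text=True, check=True,
    )
    mean_l1 = float(result.stdout.strip())
    if mean_l1 > tolerance:
        raise AssertionError(f"mean L1 {mean_l1} exceeds tolerance {tolerance}")


if __name__ == "__main__":
    assert_images_close(sys.argv[1], sys.argv[2])
```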
.dockerignore (new file, 12 lines)
@@ -0,0 +1,12 @@
*
!backend
!configs
!environments-and-requirements
!frontend
!installer
!ldm
!main.py
!scripts
!server
!static
!setup.py
.editorconfig (new file, 12 lines)
@@ -0,0 +1,12 @@
# All files
[*]
charset = utf-8
end_of_line = lf
indent_size = 2
indent_style = space
insert_final_newline = true
trim_trailing_whitespace = true

# Python
[*.py]
indent_size = 4
.gitattributes (new vendored file, 4 lines)
@@ -0,0 +1,4 @@
# Auto normalizes line endings on commit so devs don't need to change local settings.
# Only affects text files and ignores other file types.
# For more info see: https://www.aleksandrhovhannisyan.com/blog/crlf-vs-lf-normalizing-line-endings-in-git/
* text=auto
.github/CODEOWNERS (new vendored file, 7 lines)
@@ -0,0 +1,7 @@
ldm/invoke/pngwriter.py @CapableWeb
ldm/invoke/server_legacy.py @CapableWeb
scripts/legacy_api.py @CapableWeb
tests/legacy_tests.sh @CapableWeb
installer/ @tildebyte
.github/workflows/ @mauwii
docker_build/ @mauwii
.github/ISSUE_TEMPLATE/BUG_REPORT.yml (new vendored file, 102 lines)
@@ -0,0 +1,102 @@
name: 🐞 Bug Report

description: File a bug report

title: '[bug]: '

labels: ['bug']

# assignees:
#   - moderator_bot
#   - lstein

body:
  - type: markdown
    attributes:
      value: |
        Thanks for taking the time to fill out this Bug Report!

  - type: checkboxes
    attributes:
      label: Is there an existing issue for this?
      description: |
        Please use the [search function](https://github.com/invoke-ai/InvokeAI/issues?q=is%3Aissue+is%3Aopen+label%3Abug)
        first to see if an issue already exists for the bug you encountered.
      options:
        - label: I have searched the existing issues
          required: true

  - type: markdown
    attributes:
      value: __Describe your environment__

  - type: dropdown
    id: os_dropdown
    attributes:
      label: OS
      description: Which operating system did you use when the bug occurred
      multiple: false
      options:
        - 'Linux'
        - 'Windows'
        - 'macOS'
    validations:
      required: true

  - type: dropdown
    id: gpu_dropdown
    attributes:
      label: GPU
      description: Which kind of graphics adapter is your system using
      multiple: false
      options:
        - 'cuda'
        - 'amd'
        - 'mps'
        - 'cpu'
    validations:
      required: true

  - type: input
    id: vram
    attributes:
      label: VRAM
      description: Size of the VRAM if known
      placeholder: 8GB
    validations:
      required: false

  - type: textarea
    id: what-happened
    attributes:
      label: What happened?
      description: |
        Briefly describe what happened, what you expected to happen and how to reproduce this bug.
      placeholder: When using the webinterface and right-clicking on button X instead of the popup-menu, the error Y appears
    validations:
      required: true

  - type: textarea
    attributes:
      label: Screenshots
      description: If applicable, add screenshots to help explain your problem
      placeholder: this is what the result looked like <screenshot>
    validations:
      required: false

  - type: textarea
    attributes:
      label: Additional context
      description: Add any other context about the problem here
      placeholder: Only happens when there is full moon and Friday the 13th on Christmas Eve 🎅🏻
    validations:
      required: false

  - type: input
    id: contact
    attributes:
      label: Contact Details
      description: __OPTIONAL__ How can we get in touch with you if we need more info (besides this issue)?
      placeholder: ex. email@example.com, discordname, twitter, ...
    validations:
      required: false
56
.github/ISSUE_TEMPLATE/FEATURE_REQUEST.yml
vendored
Normal file
@@ -0,0 +1,56 @@
|
||||
name: Feature Request
|
||||
description: Submit an idea or request a new feature
|
||||
title: '[enhancement]: '
|
||||
labels: ['enhancement']
|
||||
# assignees:
|
||||
# - lstein
|
||||
# - tildebyte
|
||||
body:
|
||||
- type: markdown
|
||||
attributes:
|
||||
value: |
|
||||
Thanks for taking the time to fill out this Feature request!
|
||||
|
||||
- type: checkboxes
|
||||
attributes:
|
||||
label: Is there an existing issue for this?
|
||||
description: |
|
||||
Please make use of the [search function](https://github.com/invoke-ai/InvokeAI/labels/enhancement)
|
||||
to see if a similar issue already exists for the feature you want to request
|
||||
options:
|
||||
- label: I have searched the existing issues
|
||||
required: true
|
||||
|
||||
- type: input
|
||||
id: contact
|
||||
attributes:
|
||||
label: Contact Details
|
||||
description: __OPTIONAL__ How can we get in touch with you if we need more info (besides this issue)?
|
||||
placeholder: ex. email@example.com, discordname, twitter, ...
|
||||
validations:
|
||||
required: false
|
||||
|
||||
- type: textarea
|
||||
id: whatisexpected
|
||||
attributes:
|
||||
label: What should this feature add?
|
||||
description: Please try to explain the functionality this feature should add
|
||||
placeholder: |
|
||||
Instead of one huge text field, it would be nice to have forms for bug reports, feature requests, ...
|
||||
Great benefits with automatic labeling, assigning and other functionality not available in that form
|
||||
via old-fashioned markdown-templates. I would also love to see the use of a moderator bot 🤖 like
|
||||
https://github.com/marketplace/actions/issue-moderator-with-commands to auto close old issues and other things
|
||||
validations:
|
||||
required: true
|
||||
|
||||
- type: textarea
|
||||
attributes:
|
||||
label: Alternatives
|
||||
description: Describe alternatives you've considered
|
||||
placeholder: A clear and concise description of any alternative solutions or features you've considered.
|
||||
|
||||
- type: textarea
|
||||
attributes:
|
||||
label: Additional Content
|
||||
description: Add any other context or screenshots about the feature request here.
|
||||
placeholder: This is a Mockup of the design how I imagine it <screenshot>
|
||||
14
.github/ISSUE_TEMPLATE/config.yml
vendored
Normal file
@@ -0,0 +1,14 @@
blank_issues_enabled: false
contact_links:
- name: Project-Documentation
url: https://invoke-ai.github.io/InvokeAI/
about: This should be your first place to go when looking for manuals/FAQs regarding our InvokeAI Toolkit
- name: Discord
url: https://discord.gg/ZmtBAhwWhy
about: Our Discord community may be able to help you out via live chat
- name: GitHub Community Support
url: https://github.com/orgs/community/discussions
about: Please ask and answer questions regarding the GitHub Platform here.
- name: GitHub Security Bug Bounty
url: https://bounty.github.com/
about: Please report security vulnerabilities of the GitHub Platform here.
43
.github/workflows/build-container.yml
vendored
Normal file
@@ -0,0 +1,43 @@
|
||||
# Building the image without pushing, to confirm it is still buildable
|
||||
# confirming functionality would unfortunately need far more resources
|
||||
name: build container image
|
||||
on:
|
||||
push:
|
||||
branches:
|
||||
- 'main'
|
||||
- 'development'
|
||||
- 'update-dockerfile'
|
||||
|
||||
jobs:
|
||||
docker:
|
||||
strategy:
|
||||
fail-fast: false
|
||||
matrix:
|
||||
arch:
|
||||
- x86_64
|
||||
- aarch64
|
||||
pip-requirements:
|
||||
- requirements-lin-amd.txt
|
||||
- requirements-lin-cuda.txt
|
||||
runs-on: ubuntu-latest
|
||||
name: ${{ matrix.pip-requirements }} ${{ matrix.arch }}
|
||||
steps:
|
||||
- name: prepare docker-tag
|
||||
env:
|
||||
repository: ${{ github.repository }}
|
||||
run: echo "dockertag=${repository,,}" >> $GITHUB_ENV
|
||||
- name: Checkout
|
||||
uses: actions/checkout@v3
|
||||
- name: Set up QEMU
|
||||
uses: docker/setup-qemu-action@v2
|
||||
- name: Set up Docker Buildx
|
||||
uses: docker/setup-buildx-action@v2
|
||||
- name: Build container
|
||||
uses: docker/build-push-action@v3
|
||||
with:
|
||||
context: .
|
||||
file: docker-build/Dockerfile
|
||||
platforms: Linux/${{ matrix.arch }}
|
||||
push: false
|
||||
tags: ${{ env.dockertag }}:${{ matrix.pip-requirements }}-${{ matrix.arch }}
|
||||
build-args: pip_requirements=${{ matrix.pip-requirements }}
|
||||
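A rough local equivalent of one matrix cell above, shown as a sketch (assuming Docker with Buildx and QEMU emulation installed; the requirements file and architecture can be swapped to match any other cell):

# build the linux/amd64 image (the workflow's x86_64) with the CUDA requirements, without pushing
docker buildx build \
  --platform linux/amd64 \
  --build-arg pip_requirements=requirements-lin-cuda.txt \
  --file docker-build/Dockerfile \
  --tag invokeai:requirements-lin-cuda.txt-x86_64 \
  .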
40
.github/workflows/mkdocs-material.yml
vendored
Normal file
@@ -0,0 +1,40 @@
|
||||
name: mkdocs-material
|
||||
on:
|
||||
push:
|
||||
branches:
|
||||
- 'main'
|
||||
- 'development'
|
||||
|
||||
jobs:
|
||||
mkdocs-material:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: checkout sources
|
||||
uses: actions/checkout@v3
|
||||
with:
|
||||
fetch-depth: 0
|
||||
|
||||
- name: setup python
|
||||
uses: actions/setup-python@v4
|
||||
with:
|
||||
python-version: '3.10'
|
||||
|
||||
- name: install requirements
|
||||
run: |
|
||||
python -m \
|
||||
pip install -r docs/requirements-mkdocs.txt
|
||||
|
||||
- name: confirm buildability
|
||||
run: |
|
||||
python -m \
|
||||
mkdocs build \
|
||||
--clean \
|
||||
--verbose
|
||||
|
||||
- name: deploy to gh-pages
|
||||
if: ${{ github.ref == 'refs/heads/main' }}
|
||||
run: |
|
||||
python -m \
|
||||
mkdocs gh-deploy \
|
||||
--clean \
|
||||
--force
|
||||
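The documentation build can be reproduced locally with the same commands the workflow runs (a sketch, assuming Python 3.10 and a checkout of the repository):

# install the documentation toolchain and confirm the site builds
python -m pip install -r docs/requirements-mkdocs.txt
python -m mkdocs build --clean --verbose
# gh-pages deployment (mkdocs gh-deploy) is left to the workflow on the main branch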
135
.github/workflows/test-invoke-conda.yml
vendored
Normal file
@@ -0,0 +1,135 @@
|
||||
name: Test invoke.py
|
||||
on:
|
||||
push:
|
||||
branches:
|
||||
- 'main'
|
||||
- 'development'
|
||||
- 'fix-gh-actions-fork'
|
||||
pull_request:
|
||||
branches:
|
||||
- 'main'
|
||||
- 'development'
|
||||
|
||||
jobs:
|
||||
matrix:
|
||||
strategy:
|
||||
matrix:
|
||||
stable-diffusion-model:
|
||||
- 'stable-diffusion-1.5'
|
||||
environment-yaml:
|
||||
- environment-lin-amd.yml
|
||||
- environment-lin-cuda.yml
|
||||
- environment-mac.yml
|
||||
include:
|
||||
- environment-yaml: environment-lin-amd.yml
|
||||
os: ubuntu-latest
|
||||
default-shell: bash -l {0}
|
||||
- environment-yaml: environment-lin-cuda.yml
|
||||
os: ubuntu-latest
|
||||
default-shell: bash -l {0}
|
||||
- environment-yaml: environment-mac.yml
|
||||
os: macos-12
|
||||
default-shell: bash -l {0}
|
||||
- stable-diffusion-model: stable-diffusion-1.5
|
||||
stable-diffusion-model-url: https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt
|
||||
stable-diffusion-model-dl-path: models/ldm/stable-diffusion-v1
|
||||
stable-diffusion-model-dl-name: v1-5-pruned-emaonly.ckpt
|
||||
name: ${{ matrix.environment-yaml }} on ${{ matrix.os }}
|
||||
runs-on: ${{ matrix.os }}
|
||||
env:
|
||||
CONDA_ENV_NAME: invokeai
|
||||
INVOKEAI_ROOT: '${{ github.workspace }}/invokeai'
|
||||
defaults:
|
||||
run:
|
||||
shell: ${{ matrix.default-shell }}
|
||||
steps:
|
||||
- name: Checkout sources
|
||||
id: checkout-sources
|
||||
uses: actions/checkout@v3
|
||||
|
||||
- name: create models.yaml from example
|
||||
run: |
|
||||
mkdir -p ${{ env.INVOKEAI_ROOT }}/configs
|
||||
cp configs/models.yaml.example ${{ env.INVOKEAI_ROOT }}/configs/models.yaml
|
||||
|
||||
- name: create environment.yml
|
||||
run: cp "environments-and-requirements/${{ matrix.environment-yaml }}" environment.yml
|
||||
|
||||
- name: Use cached conda packages
|
||||
id: use-cached-conda-packages
|
||||
uses: actions/cache@v3
|
||||
with:
|
||||
path: ~/conda_pkgs_dir
|
||||
key: conda-pkgs-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles(matrix.environment-yaml) }}
|
||||
|
||||
- name: Activate Conda Env
|
||||
id: activate-conda-env
|
||||
uses: conda-incubator/setup-miniconda@v2
|
||||
with:
|
||||
activate-environment: ${{ env.CONDA_ENV_NAME }}
|
||||
environment-file: environment.yml
|
||||
miniconda-version: latest
|
||||
|
||||
- name: set test prompt to main branch validation
|
||||
if: ${{ github.ref == 'refs/heads/main' }}
|
||||
run: echo "TEST_PROMPTS=tests/preflight_prompts.txt" >> $GITHUB_ENV
|
||||
|
||||
- name: set test prompt to development branch validation
|
||||
if: ${{ github.ref == 'refs/heads/development' }}
|
||||
run: echo "TEST_PROMPTS=tests/dev_prompts.txt" >> $GITHUB_ENV
|
||||
|
||||
- name: set test prompt to Pull Request validation
|
||||
if: ${{ github.ref != 'refs/heads/main' && github.ref != 'refs/heads/development' }}
|
||||
run: echo "TEST_PROMPTS=tests/validate_pr_prompt.txt" >> $GITHUB_ENV
|
||||
|
||||
- name: Use Cached Stable Diffusion Model
|
||||
id: cache-sd-model
|
||||
uses: actions/cache@v3
|
||||
env:
|
||||
cache-name: cache-${{ matrix.stable-diffusion-model }}
|
||||
with:
|
||||
path: ${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}
|
||||
key: ${{ env.cache-name }}
|
||||
|
||||
- name: Download ${{ matrix.stable-diffusion-model }}
|
||||
id: download-stable-diffusion-model
|
||||
if: ${{ steps.cache-sd-model.outputs.cache-hit != 'true' }}
|
||||
run: |
|
||||
mkdir -p "${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}"
|
||||
curl \
|
||||
-H "Authorization: Bearer ${{ secrets.HUGGINGFACE_TOKEN }}" \
|
||||
-o "${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}/${{ matrix.stable-diffusion-model-dl-name }}" \
|
||||
-L ${{ matrix.stable-diffusion-model-url }}
|
||||
|
||||
- name: run configure_invokeai.py
|
||||
id: run-preload-models
|
||||
run: |
|
||||
python scripts/configure_invokeai.py --no-interactive --yes
|
||||
|
||||
- name: cat ~/.invokeai
|
||||
id: cat-invokeai
|
||||
run: cat ~/.invokeai
|
||||
|
||||
- name: Run the tests
|
||||
id: run-tests
|
||||
run: |
|
||||
time python scripts/invoke.py \
|
||||
--no-patchmatch \
|
||||
--no-nsfw_checker \
|
||||
--model ${{ matrix.stable-diffusion-model }} \
|
||||
--from_file ${{ env.TEST_PROMPTS }} \
|
||||
--root="${{ env.INVOKEAI_ROOT }}" \
|
||||
--outdir="${{ env.INVOKEAI_ROOT }}/outputs"
|
||||
|
||||
- name: export conda env
|
||||
id: export-conda-env
|
||||
run: |
|
||||
mkdir -p outputs/img-samples
|
||||
conda env export --name ${{ env.CONDA_ENV_NAME }} > outputs/img-samples/environment-${{ runner.os }}-${{ runner.arch }}.yml
|
||||
|
||||
- name: Archive results
|
||||
id: archive-results
|
||||
uses: actions/upload-artifact@v3
|
||||
with:
|
||||
name: results_${{ matrix.environment-yaml }}
|
||||
path: ${{ env.INVOKEAI_ROOT }}/outputs
|
||||
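Condensed to its essentials, the smoke test above amounts to the following commands (a sketch, assuming the chosen conda environment is active and a Stable Diffusion checkpoint has been placed under models/ldm/stable-diffusion-v1):

# one-time, non-interactive configuration
python scripts/configure_invokeai.py --no-interactive --yes
# render the validation prompts with the optional extras disabled
time python scripts/invoke.py \
  --no-patchmatch \
  --no-nsfw_checker \
  --model stable-diffusion-1.5 \
  --from_file tests/preflight_prompts.txt \
  --outdir outputs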
128
.github/workflows/test-invoke-pip.yml
vendored
Normal file
@@ -0,0 +1,128 @@
|
||||
name: Test invoke.py pip
|
||||
on:
|
||||
push:
|
||||
branches:
|
||||
- 'main'
|
||||
- 'development'
|
||||
pull_request:
|
||||
branches:
|
||||
- 'main'
|
||||
- 'development'
|
||||
|
||||
jobs:
|
||||
matrix:
|
||||
strategy:
|
||||
matrix:
|
||||
stable-diffusion-model:
|
||||
- stable-diffusion-1.5
|
||||
requirements-file:
|
||||
- requirements-lin-cuda.txt
|
||||
- requirements-lin-amd.txt
|
||||
- requirements-mac-mps-cpu.txt
|
||||
python-version:
|
||||
# - '3.9'
|
||||
- '3.10'
|
||||
include:
|
||||
- requirements-file: requirements-lin-cuda.txt
|
||||
os: ubuntu-latest
|
||||
default-shell: bash -l {0}
|
||||
- requirements-file: requirements-lin-amd.txt
|
||||
os: ubuntu-latest
|
||||
default-shell: bash -l {0}
|
||||
- requirements-file: requirements-mac-mps-cpu.txt
|
||||
os: macOS-12
|
||||
default-shell: bash -l {0}
|
||||
- stable-diffusion-model: stable-diffusion-1.5
|
||||
stable-diffusion-model-url: https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt
|
||||
stable-diffusion-model-dl-path: models/ldm/stable-diffusion-v1
|
||||
stable-diffusion-model-dl-name: v1-5-pruned-emaonly.ckpt
|
||||
name: ${{ matrix.requirements-file }} on ${{ matrix.python-version }}
|
||||
runs-on: ${{ matrix.os }}
|
||||
defaults:
|
||||
run:
|
||||
shell: ${{ matrix.default-shell }}
|
||||
env:
|
||||
INVOKEAI_ROOT: '${{ github.workspace }}/invokeai'
|
||||
steps:
|
||||
- name: Checkout sources
|
||||
id: checkout-sources
|
||||
uses: actions/checkout@v3
|
||||
|
||||
- name: create models.yaml from example
|
||||
run: |
|
||||
mkdir -p ${{ env.INVOKEAI_ROOT }}/configs
|
||||
cp configs/models.yaml.example ${{ env.INVOKEAI_ROOT }}/configs/models.yaml
|
||||
|
||||
- name: set test prompt to main branch validation
|
||||
if: ${{ github.ref == 'refs/heads/main' }}
|
||||
run: echo "TEST_PROMPTS=tests/preflight_prompts.txt" >> $GITHUB_ENV
|
||||
|
||||
- name: set test prompt to development branch validation
|
||||
if: ${{ github.ref == 'refs/heads/development' }}
|
||||
run: echo "TEST_PROMPTS=tests/dev_prompts.txt" >> $GITHUB_ENV
|
||||
|
||||
- name: set test prompt to Pull Request validation
|
||||
if: ${{ github.ref != 'refs/heads/main' && github.ref != 'refs/heads/development' }}
|
||||
run: echo "TEST_PROMPTS=tests/validate_pr_prompt.txt" >> $GITHUB_ENV
|
||||
|
||||
- name: create requirements.txt
|
||||
run: cp 'environments-and-requirements/${{ matrix.requirements-file }}' '${{ matrix.requirements-file }}'
|
||||
|
||||
- name: setup python
|
||||
uses: actions/setup-python@v4
|
||||
with:
|
||||
python-version: ${{ matrix.python-version }}
|
||||
cache: 'pip'
|
||||
cache-dependency-path: ${{ matrix.requirements-file }}
|
||||
|
||||
# - name: install dependencies
|
||||
# run: ${{ env.pythonLocation }}/bin/pip install --upgrade pip setuptools wheel
|
||||
|
||||
- name: install requirements
|
||||
run: ${{ env.pythonLocation }}/bin/pip install -r '${{ matrix.requirements-file }}'
|
||||
|
||||
- name: Use Cached Stable Diffusion Model
|
||||
id: cache-sd-model
|
||||
uses: actions/cache@v3
|
||||
env:
|
||||
cache-name: cache-${{ matrix.stable-diffusion-model }}
|
||||
with:
|
||||
path: ${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}
|
||||
key: ${{ env.cache-name }}
|
||||
|
||||
- name: Download ${{ matrix.stable-diffusion-model }}
|
||||
id: download-stable-diffusion-model
|
||||
if: ${{ steps.cache-sd-model.outputs.cache-hit != 'true' }}
|
||||
run: |
|
||||
mkdir -p "${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}"
|
||||
curl \
|
||||
-H "Authorization: Bearer ${{ secrets.HUGGINGFACE_TOKEN }}" \
|
||||
-o "${{ env.INVOKEAI_ROOT }}/${{ matrix.stable-diffusion-model-dl-path }}/${{ matrix.stable-diffusion-model-dl-name }}" \
|
||||
-L ${{ matrix.stable-diffusion-model-url }}
|
||||
|
||||
- name: run configure_invokeai.py
|
||||
id: run-preload-models
|
||||
run: |
|
||||
${{ env.pythonLocation }}/bin/python scripts/configure_invokeai.py --no-interactive --yes
|
||||
|
||||
- name: cat ~/.invokeai
|
||||
id: cat-invokeai
|
||||
run: cat ~/.invokeai
|
||||
|
||||
- name: Run the tests
|
||||
id: run-tests
|
||||
run: |
|
||||
time ${{ env.pythonLocation }}/bin/python scripts/invoke.py \
|
||||
--no-patchmatch \
|
||||
--no-nsfw_checker \
|
||||
--model ${{ matrix.stable-diffusion-model }} \
|
||||
--from_file ${{ env.TEST_PROMPTS }} \
|
||||
--root="${{ env.INVOKEAI_ROOT }}" \
|
||||
--outdir="${{ env.INVOKEAI_ROOT }}/outputs"
|
||||
|
||||
- name: Archive results
|
||||
id: archive-results
|
||||
uses: actions/upload-artifact@v3
|
||||
with:
|
||||
name: results_${{ matrix.requirements-file }}_${{ matrix.python-version }}
|
||||
path: ${{ env.INVOKEAI_ROOT }}/outputs
|
||||
236
.gitignore
vendored
Normal file
@@ -0,0 +1,236 @@
|
||||
# ignore default image save location and model symbolic link
|
||||
outputs/
|
||||
models/ldm/stable-diffusion-v1/model.ckpt
|
||||
**/restoration/codeformer/weights
|
||||
|
||||
# ignore user models config
|
||||
configs/models.user.yaml
|
||||
config/models.user.yml
|
||||
|
||||
# ignore the Anaconda/Miniconda installer used while building Docker image
|
||||
anaconda.sh
|
||||
|
||||
# ignore a directory which serves as a place for initial images
|
||||
inputs/
|
||||
|
||||
# Byte-compiled / optimized / DLL files
|
||||
__pycache__/
|
||||
*.py[cod]
|
||||
*$py.class
|
||||
|
||||
# C extensions
|
||||
*.so
|
||||
|
||||
# emacs autosave and recovery files
|
||||
*~
|
||||
.#*
|
||||
|
||||
# Distribution / packaging
|
||||
.Python
|
||||
build/
|
||||
develop-eggs/
|
||||
dist/
|
||||
downloads/
|
||||
eggs/
|
||||
.eggs/
|
||||
lib/
|
||||
lib64/
|
||||
parts/
|
||||
sdist/
|
||||
var/
|
||||
wheels/
|
||||
pip-wheel-metadata/
|
||||
share/python-wheels/
|
||||
*.egg-info/
|
||||
.installed.cfg
|
||||
*.egg
|
||||
MANIFEST
|
||||
|
||||
# PyInstaller
|
||||
# Usually these files are written by a python script from a template
|
||||
# before PyInstaller builds the exe, so as to inject date/other infos into it.
|
||||
*.manifest
|
||||
*.spec
|
||||
|
||||
# Installer logs
|
||||
pip-log.txt
|
||||
pip-delete-this-directory.txt
|
||||
|
||||
# Unit test / coverage reports
|
||||
htmlcov/
|
||||
.tox/
|
||||
.nox/
|
||||
.coverage
|
||||
.coverage.*
|
||||
.cache
|
||||
nosetests.xml
|
||||
coverage.xml
|
||||
*.cover
|
||||
*.py,cover
|
||||
.hypothesis/
|
||||
.pytest_cache/
|
||||
cover/
|
||||
|
||||
# Translations
|
||||
*.mo
|
||||
*.pot
|
||||
|
||||
# Django stuff:
|
||||
*.log
|
||||
local_settings.py
|
||||
db.sqlite3
|
||||
db.sqlite3-journal
|
||||
|
||||
# Flask stuff:
|
||||
instance/
|
||||
.webassets-cache
|
||||
|
||||
# Scrapy stuff:
|
||||
.scrapy
|
||||
|
||||
# Sphinx documentation
|
||||
docs/_build/
|
||||
|
||||
# PyBuilder
|
||||
.pybuilder/
|
||||
target/
|
||||
|
||||
# Jupyter Notebook
|
||||
.ipynb_checkpoints
|
||||
|
||||
# IPython
|
||||
profile_default/
|
||||
ipython_config.py
|
||||
|
||||
# pyenv
|
||||
# For a library or package, you might want to ignore these files since the code is
|
||||
# intended to run in multiple environments; otherwise, check them in:
|
||||
# .python-version
|
||||
.python-version
|
||||
|
||||
# pipenv
|
||||
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
|
||||
# However, in case of collaboration, if having platform-specific dependencies or dependencies
|
||||
# having no cross-platform support, pipenv may install dependencies that don't work, or not
|
||||
# install all needed dependencies.
|
||||
#Pipfile.lock
|
||||
|
||||
# poetry
|
||||
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
|
||||
# This is especially recommended for binary packages to ensure reproducibility, and is more
|
||||
# commonly ignored for libraries.
|
||||
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
|
||||
#poetry.lock
|
||||
|
||||
# pdm
|
||||
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
|
||||
#pdm.lock
|
||||
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
|
||||
# in version control.
|
||||
# https://pdm.fming.dev/#use-with-ide
|
||||
.pdm.toml
|
||||
|
||||
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
|
||||
__pypackages__/
|
||||
|
||||
# Celery stuff
|
||||
celerybeat-schedule
|
||||
celerybeat.pid
|
||||
|
||||
# SageMath parsed files
|
||||
*.sage.py
|
||||
|
||||
# Environments
|
||||
.env
|
||||
.venv
|
||||
env/
|
||||
venv/
|
||||
ENV/
|
||||
env.bak/
|
||||
venv.bak/
|
||||
|
||||
# Spyder project settings
|
||||
.spyderproject
|
||||
.spyproject
|
||||
|
||||
# Rope project settings
|
||||
.ropeproject
|
||||
|
||||
# mkdocs documentation
|
||||
/site
|
||||
|
||||
# mypy
|
||||
.mypy_cache/
|
||||
.dmypy.json
|
||||
dmypy.json
|
||||
|
||||
# Pyre type checker
|
||||
.pyre/
|
||||
|
||||
# pytype static type analyzer
|
||||
.pytype/
|
||||
|
||||
# Cython debug symbols
|
||||
cython_debug/
|
||||
|
||||
# PyCharm
|
||||
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
|
||||
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
|
||||
# and can be added to the global gitignore or merged into this file. For a more nuclear
|
||||
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
|
||||
#.idea/
|
||||
|
||||
src
|
||||
**/__pycache__/
|
||||
outputs
|
||||
|
||||
# Logs and associated folders
|
||||
# created from generated embeddings.
|
||||
logs
|
||||
testtube
|
||||
checkpoints
|
||||
# If it's a Mac
|
||||
.DS_Store
|
||||
|
||||
# Let the frontend manage its own gitignore
|
||||
!frontend/*
|
||||
|
||||
# Scratch folder
|
||||
.scratch/
|
||||
.vscode/
|
||||
gfpgan/
|
||||
models/ldm/stable-diffusion-v1/*.sha256
|
||||
|
||||
|
||||
# GFPGAN model files
|
||||
gfpgan/
|
||||
|
||||
# config file (will be created by installer)
|
||||
configs/models.yaml
|
||||
|
||||
# weights (will be created by installer)
|
||||
models/ldm/stable-diffusion-v1/*.ckpt
|
||||
models/clipseg
|
||||
models/gfpgan
|
||||
|
||||
# ignore initfile
|
||||
.invokeai
|
||||
|
||||
# ignore environment.yml and requirements.txt
|
||||
# these are links to the real files in environments-and-requirements
|
||||
environment.yml
|
||||
requirements.txt
|
||||
|
||||
# source installer files
|
||||
source_installer/*zip
|
||||
source_installer/invokeAI
|
||||
install.bat
|
||||
install.sh
|
||||
update.bat
|
||||
update.sh
|
||||
|
||||
# this may be present if the user created a venv
|
||||
invokeai
|
||||
|
||||
# no longer stored in source directory
|
||||
models
|
||||
0
.gitmodules
vendored
Normal file
13
.prettierrc.yaml
Normal file
@@ -0,0 +1,13 @@
endOfLine: lf
tabWidth: 2
useTabs: false
singleQuote: true
quoteProps: as-needed
embeddedLanguageFormatting: auto
overrides:
- files: '*.md'
options:
proseWrap: always
printWidth: 80
parser: markdown
cursorOffset: -1
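With this configuration in place, Markdown files can be checked or rewritten locally (a sketch, assuming Node.js and Prettier are available via npx):

# report files that deviate from the configured style
npx prettier --check "**/*.md"
# rewrite them in place
npx prettier --write "**/*.md"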
128
CODE_OF_CONDUCT.md
Normal file
@@ -0,0 +1,128 @@
|
||||
# Contributor Covenant Code of Conduct
|
||||
|
||||
## Our Pledge
|
||||
|
||||
We as members, contributors, and leaders pledge to make participation in our
|
||||
community a harassment-free experience for everyone, regardless of age, body
|
||||
size, visible or invisible disability, ethnicity, sex characteristics, gender
|
||||
identity and expression, level of experience, education, socio-economic status,
|
||||
nationality, personal appearance, race, religion, or sexual identity
|
||||
and orientation.
|
||||
|
||||
We pledge to act and interact in ways that contribute to an open, welcoming,
|
||||
diverse, inclusive, and healthy community.
|
||||
|
||||
## Our Standards
|
||||
|
||||
Examples of behavior that contributes to a positive environment for our
|
||||
community include:
|
||||
|
||||
* Demonstrating empathy and kindness toward other people
|
||||
* Being respectful of differing opinions, viewpoints, and experiences
|
||||
* Giving and gracefully accepting constructive feedback
|
||||
* Accepting responsibility and apologizing to those affected by our mistakes,
|
||||
and learning from the experience
|
||||
* Focusing on what is best not just for us as individuals, but for the
|
||||
overall community
|
||||
|
||||
Examples of unacceptable behavior include:
|
||||
|
||||
* The use of sexualized language or imagery, and sexual attention or
|
||||
advances of any kind
|
||||
* Trolling, insulting or derogatory comments, and personal or political attacks
|
||||
* Public or private harassment
|
||||
* Publishing others' private information, such as a physical or email
|
||||
address, without their explicit permission
|
||||
* Other conduct which could reasonably be considered inappropriate in a
|
||||
professional setting
|
||||
|
||||
## Enforcement Responsibilities
|
||||
|
||||
Community leaders are responsible for clarifying and enforcing our standards of
|
||||
acceptable behavior and will take appropriate and fair corrective action in
|
||||
response to any behavior that they deem inappropriate, threatening, offensive,
|
||||
or harmful.
|
||||
|
||||
Community leaders have the right and responsibility to remove, edit, or reject
|
||||
comments, commits, code, wiki edits, issues, and other contributions that are
|
||||
not aligned to this Code of Conduct, and will communicate reasons for moderation
|
||||
decisions when appropriate.
|
||||
|
||||
## Scope
|
||||
|
||||
This Code of Conduct applies within all community spaces, and also applies when
|
||||
an individual is officially representing the community in public spaces.
|
||||
Examples of representing our community include using an official e-mail address,
|
||||
posting via an official social media account, or acting as an appointed
|
||||
representative at an online or offline event.
|
||||
|
||||
## Enforcement
|
||||
|
||||
Instances of abusive, harassing, or otherwise unacceptable behavior
|
||||
may be reported to the community leaders responsible for enforcement
|
||||
at https://github.com/invoke-ai/InvokeAI/issues. All complaints will
|
||||
be reviewed and investigated promptly and fairly.
|
||||
|
||||
All community leaders are obligated to respect the privacy and security of the
|
||||
reporter of any incident.
|
||||
|
||||
## Enforcement Guidelines
|
||||
|
||||
Community leaders will follow these Community Impact Guidelines in determining
|
||||
the consequences for any action they deem in violation of this Code of Conduct:
|
||||
|
||||
### 1. Correction
|
||||
|
||||
**Community Impact**: Use of inappropriate language or other behavior deemed
|
||||
unprofessional or unwelcome in the community.
|
||||
|
||||
**Consequence**: A private, written warning from community leaders, providing
|
||||
clarity around the nature of the violation and an explanation of why the
|
||||
behavior was inappropriate. A public apology may be requested.
|
||||
|
||||
### 2. Warning
|
||||
|
||||
**Community Impact**: A violation through a single incident or series
|
||||
of actions.
|
||||
|
||||
**Consequence**: A warning with consequences for continued behavior. No
|
||||
interaction with the people involved, including unsolicited interaction with
|
||||
those enforcing the Code of Conduct, for a specified period of time. This
|
||||
includes avoiding interactions in community spaces as well as external channels
|
||||
like social media. Violating these terms may lead to a temporary or
|
||||
permanent ban.
|
||||
|
||||
### 3. Temporary Ban
|
||||
|
||||
**Community Impact**: A serious violation of community standards, including
|
||||
sustained inappropriate behavior.
|
||||
|
||||
**Consequence**: A temporary ban from any sort of interaction or public
|
||||
communication with the community for a specified period of time. No public or
|
||||
private interaction with the people involved, including unsolicited interaction
|
||||
with those enforcing the Code of Conduct, is allowed during this period.
|
||||
Violating these terms may lead to a permanent ban.
|
||||
|
||||
### 4. Permanent Ban
|
||||
|
||||
**Community Impact**: Demonstrating a pattern of violation of community
|
||||
standards, including sustained inappropriate behavior, harassment of an
|
||||
individual, or aggression toward or disparagement of classes of individuals.
|
||||
|
||||
**Consequence**: A permanent ban from any sort of public interaction within
|
||||
the community.
|
||||
|
||||
## Attribution
|
||||
|
||||
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
|
||||
version 2.0, available at
|
||||
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
|
||||
|
||||
Community Impact Guidelines were inspired by [Mozilla's code of conduct
|
||||
enforcement ladder](https://github.com/mozilla/diversity).
|
||||
|
||||
[homepage]: https://www.contributor-covenant.org
|
||||
|
||||
For answers to common questions about this code of conduct, see the FAQ at
|
||||
https://www.contributor-covenant.org/faq. Translations are available at
|
||||
https://www.contributor-covenant.org/translations.
|
||||
85
InvokeAI_Statement_of_Values.md
Normal file
@@ -0,0 +1,85 @@
|
||||
<img src="docs/assets/invoke_ai_banner.png" align="center">
|
||||
|
||||
Invoke-AI is a community of software developers, researchers, and user
|
||||
interface experts who have come together on a voluntary basis to build
|
||||
software tools which support cutting edge AI text-to-image
|
||||
applications. This community is open to anyone who wishes to
|
||||
contribute to the effort and has the skill and time to do so.
|
||||
|
||||
# Our Values
|
||||
|
||||
The InvokeAI team is a diverse community which includes individuals
|
||||
from various parts of the world and many walks of life. Despite our
|
||||
differences, we share a number of core values which we ask prospective
|
||||
contributors to understand and respect. We believe:
|
||||
|
||||
1. That Open Source Software is a positive force in the world. We
|
||||
create software that can be used, reused, and redistributed, without
|
||||
restrictions, under a straightforward Open Source license (MIT). We
|
||||
believe that Open Source benefits society as a whole by increasing the
|
||||
availability of high quality software to all.
|
||||
|
||||
2. That those who create software should receive proper attribution
|
||||
for their creative work. While we support the exchange and reuse of
|
||||
Open Source Software, we feel strongly that the original authors of a
|
||||
piece of code should receive credit for their contribution, and we
|
||||
endeavor to do so whenever possible.
|
||||
|
||||
3. That there is moral ambiguity surrounding AI-assisted art. We are
|
||||
aware of the moral and ethical issues surrounding the release of the
|
||||
Stable Diffusion model and similar products. We are aware that, due to
|
||||
the composition of their training sets, current AI-generated image
|
||||
models are biased against certain ethnic groups, cultural concepts of
|
||||
beauty, ethnic stereotypes, and gender roles.
|
||||
|
||||
1. We recognize the potential for harm to these groups that these biases
|
||||
represent and trust that future AI models will take steps towards
|
||||
reducing or eliminating the biases noted above, respect and give due
|
||||
credit to the artists whose work is sourced, and call on developers
|
||||
and users to favor these models over the older ones as they become
|
||||
available.
|
||||
|
||||
4. We are deeply committed to ensuring that this technology benefits
|
||||
everyone, including artists. We see AI art not as a replacement for
|
||||
the artist, but rather as a tool to empower them. With that
|
||||
in mind, we are constantly debating how to build systems that put
|
||||
artists’ needs first: tools which can be readily integrated into an
|
||||
artist’s existing workflows and practices, enhancing their work and
|
||||
helping them to push it further. Every decision we take as a team,
|
||||
which includes several artists, aims to build towards that goal.
|
||||
|
||||
5. That artificial intelligence can be a force for good in the world,
|
||||
but must be used responsibly. Artificial intelligence technologies
|
||||
have the potential to improve society, in everything from cancer care,
|
||||
to customer service, to creative writing.
|
||||
|
||||
1. While we do not believe that software should arbitrarily limit what
|
||||
users can do with it, we recognize that when used irresponsibly, AI
|
||||
has the potential to do much harm. Our Discord server is actively
|
||||
moderated in order to minimize the potential of harm from
|
||||
user-contributed images. In addition, we ask users of our software to
|
||||
refrain from using it in any way that would cause mental, emotional or
|
||||
physical harm to individuals and vulnerable populations including (but
|
||||
not limited to) women; minors; ethnic minorities; religious groups;
|
||||
members of LGBTQIA communities; and people with disabilities or
|
||||
impairments.
|
||||
|
||||
2. Note that some of the image generation AI models which the Invoke-AI
|
||||
toolkit supports carry licensing agreements which impose restrictions
|
||||
on how the model is used. We ask that our users read and agree to
|
||||
these terms if they wish to make use of these models. These agreements
|
||||
are distinct from the MIT license which applies to the InvokeAI
|
||||
software and source code.
|
||||
|
||||
6. That mutual respect is key to a healthy software development
|
||||
community. Members of the InvokeAI community are expected to treat
|
||||
each other with respect, beneficence, and empathy. Each of us has a
|
||||
different background and a unique set of skills. We strive to help
|
||||
each other grow and gain new skills, and we apportion expectations in
|
||||
a way that balances the members' time, skillset, and interest
|
||||
area. Disputes are resolved by open and honest communication.
|
||||
|
||||
## Signature
|
||||
|
||||
This document has been collectively crafted and approved by the current InvokeAI team members, as of 28 Nov 2022: **lstein** (Lincoln Stein), **blessedcoolant**, **hipsterusername** (Kent Keirsey), **Kyle0654** (Kyle Schouviller), **damian0815**, **mauwii** (Matthias Wild), **Netsvetaev** (Artur Netsvetaev), **psychedelicious**, **tildebyte**, and **keturn**. Although individuals within the group may hold differing views on particular details and/or their implications, we are all in agreement about its fundamental statements, as well as their significance and importance to this project moving forward.
|
||||
|
||||
19
LICENSE
@@ -1,9 +1,16 @@
|
||||
All rights reserved by the authors.
|
||||
You must not distribute the weights provided to you directly or indirectly without explicit consent of the authors.
|
||||
You must not distribute harmful, offensive, dehumanizing content or otherwise harmful representations of people or their environments, cultures, religions, etc. produced with the model weights
|
||||
or other generated content described in the "Misuse and Malicious Use" section in the model card.
|
||||
The model weights are provided for research purposes only.
|
||||
MIT License
|
||||
|
||||
Copyright (c) 2022 InvokeAI Team
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all
|
||||
copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
@@ -11,4 +18,4 @@ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
SOFTWARE.
|
||||
SOFTWARE.
|
||||
|
||||
294
LICENSE-ModelWeights.txt
Normal file
@@ -0,0 +1,294 @@
|
||||
Copyright (c) 2022 Robin Rombach and Patrick Esser and contributors
|
||||
|
||||
CreativeML Open RAIL-M
|
||||
dated August 22, 2022
|
||||
|
||||
Section I: PREAMBLE
|
||||
|
||||
Multimodal generative models are being widely adopted and used, and
|
||||
have the potential to transform the way artists, among other
|
||||
individuals, conceive and benefit from AI or ML technologies as a tool
|
||||
for content creation.
|
||||
|
||||
Notwithstanding the current and potential benefits that these
|
||||
artifacts can bring to society at large, there are also concerns about
|
||||
potential misuses of them, either due to their technical limitations
|
||||
or ethical considerations.
|
||||
|
||||
In short, this license strives for both the open and responsible
|
||||
downstream use of the accompanying model. When it comes to the open
|
||||
character, we took inspiration from open source permissive licenses
|
||||
regarding the grant of IP rights. Referring to the downstream
|
||||
responsible use, we added use-based restrictions not permitting the
|
||||
use of the Model in very specific scenarios, in order for the licensor
|
||||
to be able to enforce the license in case potential misuses of the
|
||||
Model may occur. At the same time, we strive to promote open and
|
||||
responsible research on generative models for art and content
|
||||
generation.
|
||||
|
||||
Even though downstream derivative versions of the model could be
|
||||
released under different licensing terms, the latter will always have
|
||||
to include - at minimum - the same use-based restrictions as the ones
|
||||
in the original license (this license). We believe in the intersection
|
||||
between open and responsible AI development; thus, this License aims
|
||||
to strike a balance between both in order to enable responsible
|
||||
open-science in the field of AI.
|
||||
|
||||
This License governs the use of the model (and its derivatives) and is
|
||||
informed by the model card associated with the model.
|
||||
|
||||
NOW THEREFORE, You and Licensor agree as follows:
|
||||
|
||||
1. Definitions
|
||||
|
||||
- "License" means the terms and conditions for use, reproduction, and
|
||||
Distribution as defined in this document.
|
||||
|
||||
- "Data" means a collection of information and/or content extracted
|
||||
from the dataset used with the Model, including to train, pretrain,
|
||||
or otherwise evaluate the Model. The Data is not licensed under this
|
||||
License.
|
||||
|
||||
- "Output" means the results of operating a Model as embodied in
|
||||
informational content resulting therefrom.
|
||||
|
||||
- "Model" means any accompanying machine-learning based assemblies
|
||||
(including checkpoints), consisting of learnt weights, parameters
|
||||
(including optimizer states), corresponding to the model
|
||||
architecture as embodied in the Complementary Material, that have
|
||||
been trained or tuned, in whole or in part on the Data, using the
|
||||
Complementary Material.
|
||||
|
||||
- "Derivatives of the Model" means all modifications to the Model,
|
||||
works based on the Model, or any other model which is created or
|
||||
initialized by transfer of patterns of the weights, parameters,
|
||||
activations or output of the Model, to the other model, in order to
|
||||
cause the other model to perform similarly to the Model, including -
|
||||
but not limited to - distillation methods entailing the use of
|
||||
intermediate data representations or methods based on the generation
|
||||
of synthetic data by the Model for training the other model.
|
||||
|
||||
- "Complementary Material" means the accompanying source code and
|
||||
scripts used to define, run, load, benchmark or evaluate the Model,
|
||||
and used to prepare data for training or evaluation, if any. This
|
||||
includes any accompanying documentation, tutorials, examples, etc,
|
||||
if any.
|
||||
|
||||
- "Distribution" means any transmission, reproduction, publication or
|
||||
other sharing of the Model or Derivatives of the Model to a third
|
||||
party, including providing the Model as a hosted service made
|
||||
available by electronic or other remote means - e.g. API-based or
|
||||
web access.
|
||||
|
||||
- "Licensor" means the copyright owner or entity authorized by the
|
||||
copyright owner that is granting the License, including the persons
|
||||
or entities that may have rights in the Model and/or distributing
|
||||
the Model.
|
||||
|
||||
- "You" (or "Your") means an individual or Legal Entity exercising
|
||||
permissions granted by this License and/or making use of the Model
|
||||
for whichever purpose and in any field of use, including usage of
|
||||
the Model in an end-use application - e.g. chatbot, translator,
|
||||
image generator.
|
||||
|
||||
- "Third Parties" means individuals or legal entities that are not
|
||||
under common control with Licensor or You.
|
||||
|
||||
- "Contribution" means any work of authorship, including the original
|
||||
version of the Model and any modifications or additions to that
|
||||
Model or Derivatives of the Model thereof, that is intentionally
|
||||
submitted to Licensor for inclusion in the Model by the copyright
|
||||
owner or by an individual or Legal Entity authorized to submit on
|
||||
behalf of the copyright owner. For the purposes of this definition,
|
||||
"submitted" means any form of electronic, verbal, or written
|
||||
communication sent to the Licensor or its representatives, including
|
||||
but not limited to communication on electronic mailing lists, source
|
||||
code control systems, and issue tracking systems that are managed
|
||||
by, or on behalf of, the Licensor for the purpose of discussing and
|
||||
improving the Model, but excluding communication that is
|
||||
conspicuously marked or otherwise designated in writing by the
|
||||
copyright owner as "Not a Contribution."
|
||||
|
||||
- "Contributor" means Licensor and any individual or Legal Entity on
|
||||
behalf of whom a Contribution has been received by Licensor and
|
||||
subsequently incorporated within the Model.
|
||||
|
||||
Section II: INTELLECTUAL PROPERTY RIGHTS
|
||||
|
||||
Both copyright and patent grants apply to the Model, Derivatives of
|
||||
the Model and Complementary Material. The Model and Derivatives of the
|
||||
Model are subject to additional terms as described in Section III.
|
||||
|
||||
2. Grant of Copyright License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
copyright license to reproduce, prepare, publicly display, publicly
|
||||
perform, sublicense, and distribute the Complementary Material, the
|
||||
Model, and Derivatives of the Model.
|
||||
|
||||
3. Grant of Patent License. Subject to the terms and conditions of
|
||||
this License and where and as applicable, each Contributor hereby
|
||||
grants to You a perpetual, worldwide, non-exclusive, no-charge,
|
||||
royalty-free, irrevocable (except as stated in this paragraph) patent
|
||||
license to make, have made, use, offer to sell, sell, import, and
|
||||
otherwise transfer the Model and the Complementary Material, where
|
||||
such license applies only to those patent claims licensable by such
|
||||
Contributor that are necessarily infringed by their Contribution(s)
|
||||
alone or by combination of their Contribution(s) with the Model to
|
||||
which such Contribution(s) was submitted. If You institute patent
|
||||
litigation against any entity (including a cross-claim or counterclaim
|
||||
in a lawsuit) alleging that the Model and/or Complementary Material or
|
||||
a Contribution incorporated within the Model and/or Complementary
|
||||
Material constitutes direct or contributory patent infringement, then
|
||||
any patent licenses granted to You under this License for the Model
|
||||
and/or Work shall terminate as of the date such litigation is asserted
|
||||
or filed.
|
||||
|
||||
Section III: CONDITIONS OF USAGE, DISTRIBUTION AND REDISTRIBUTION
|
||||
|
||||
4. Distribution and Redistribution. You may host for Third Party
|
||||
remote access purposes (e.g. software-as-a-service), reproduce and
|
||||
distribute copies of the Model or Derivatives of the Model thereof in
|
||||
any medium, with or without modifications, provided that You meet the
|
||||
following conditions: Use-based restrictions as referenced in
|
||||
paragraph 5 MUST be included as an enforceable provision by You in any
|
||||
type of legal agreement (e.g. a license) governing the use and/or
|
||||
distribution of the Model or Derivatives of the Model, and You shall
|
||||
give notice to subsequent users You Distribute to, that the Model or
|
||||
Derivatives of the Model are subject to paragraph 5. This provision
|
||||
does not apply to the use of Complementary Material. You must give
|
||||
any Third Party recipients of the Model or Derivatives of the Model a
|
||||
copy of this License; You must cause any modified files to carry
|
||||
prominent notices stating that You changed the files; You must retain
|
||||
all copyright, patent, trademark, and attribution notices excluding
|
||||
those notices that do not pertain to any part of the Model,
|
||||
Derivatives of the Model. You may add Your own copyright statement to
|
||||
Your modifications and may provide additional or different license
|
||||
terms and conditions - respecting paragraph 4.a. - for use,
|
||||
reproduction, or Distribution of Your modifications, or for any such
|
||||
Derivatives of the Model as a whole, provided Your use, reproduction,
|
||||
and Distribution of the Model otherwise complies with the conditions
|
||||
stated in this License.
|
||||
|
||||
5. Use-based restrictions. The restrictions set forth in Attachment A
|
||||
are considered Use-based restrictions. Therefore You cannot use the
|
||||
Model and the Derivatives of the Model for the specified restricted
|
||||
uses. You may use the Model subject to this License, including only
|
||||
for lawful purposes and in accordance with the License. Use may
|
||||
include creating any content with, finetuning, updating, running,
|
||||
training, evaluating and/or reparametrizing the Model. You shall
|
||||
require all of Your users who use the Model or a Derivative of the
|
||||
Model to comply with the terms of this paragraph (paragraph 5).
|
||||
|
||||
6. The Output You Generate. Except as set forth herein, Licensor
|
||||
claims no rights in the Output You generate using the Model. You are
|
||||
accountable for the Output you generate and its subsequent uses. No
|
||||
use of the output can contravene any provision as stated in the
|
||||
License.
|
||||
|
||||
Section IV: OTHER PROVISIONS
|
||||
|
||||
7. Updates and Runtime Restrictions. To the maximum extent permitted
|
||||
by law, Licensor reserves the right to restrict (remotely or
|
||||
otherwise) usage of the Model in violation of this License, update the
|
||||
Model through electronic means, or modify the Output of the Model
|
||||
based on updates. You shall undertake reasonable efforts to use the
|
||||
latest version of the Model.
|
||||
|
||||
8. Trademarks and related. Nothing in this License permits You to make
|
||||
use of Licensors’ trademarks, trade names, logos or to otherwise
|
||||
suggest endorsement or misrepresent the relationship between the
|
||||
parties; and any rights not expressly granted herein are reserved by
|
||||
the Licensors.
|
||||
|
||||
9. Disclaimer of Warranty. Unless required by applicable law or agreed
|
||||
to in writing, Licensor provides the Model and the Complementary
|
||||
Material (and each Contributor provides its Contributions) on an "AS
|
||||
IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
|
||||
express or implied, including, without limitation, any warranties or
|
||||
conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR
|
||||
A PARTICULAR PURPOSE. You are solely responsible for determining the
|
||||
appropriateness of using or redistributing the Model, Derivatives of
|
||||
the Model, and the Complementary Material and assume any risks
|
||||
associated with Your exercise of permissions under this License.
|
||||
|
||||
10. Limitation of Liability. In no event and under no legal theory,
|
||||
whether in tort (including negligence), contract, or otherwise, unless
|
||||
required by applicable law (such as deliberate and grossly negligent
|
||||
acts) or agreed to in writing, shall any Contributor be liable to You
|
||||
for damages, including any direct, indirect, special, incidental, or
|
||||
consequential damages of any character arising as a result of this
|
||||
License or out of the use or inability to use the Model and the
|
||||
Complementary Material (including but not limited to damages for loss
|
||||
of goodwill, work stoppage, computer failure or malfunction, or any
|
||||
and all other commercial damages or losses), even if such Contributor
|
||||
has been advised of the possibility of such damages.
|
||||
|
||||
11. Accepting Warranty or Additional Liability. While redistributing
|
||||
the Model, Derivatives of the Model and the Complementary Material
|
||||
thereof, You may choose to offer, and charge a fee for, acceptance of
|
||||
support, warranty, indemnity, or other liability obligations and/or
|
||||
rights consistent with this License. However, in accepting such
|
||||
obligations, You may act only on Your own behalf and on Your sole
|
||||
responsibility, not on behalf of any other Contributor, and only if
|
||||
You agree to indemnify, defend, and hold each Contributor harmless for
|
||||
any liability incurred by, or claims asserted against, such
|
||||
Contributor by reason of your accepting any such warranty or
|
||||
additional liability.
|
||||
|
||||
12. If any provision of this License is held to be invalid, illegal or
|
||||
unenforceable, the remaining provisions shall be unaffected thereby
|
||||
and remain valid as if such provision had not been set forth herein.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
|
||||
|
||||
|
||||
Attachment A
|
||||
|
||||
Use Restrictions
|
||||
|
||||
You agree not to use the Model or Derivatives of the Model:
|
||||
|
||||
- In any way that violates any applicable national, federal, state,
|
||||
local or international law or regulation;
|
||||
|
||||
- For the purpose of exploiting, harming or attempting to exploit or
|
||||
harm minors in any way;
|
||||
|
||||
- To generate or disseminate verifiably false information and/or
|
||||
content with the purpose of harming others;
|
||||
|
||||
- To generate or disseminate personal identifiable information that
|
||||
can be used to harm an individual;
|
||||
|
||||
- To defame, disparage or otherwise harass others;
|
||||
|
||||
- For fully automated decision making that adversely impacts an
|
||||
individual’s legal rights or otherwise creates or modifies a
|
||||
binding, enforceable obligation;
|
||||
|
||||
- For any use intended to or which has the effect of discriminating
|
||||
against or harming individuals or groups based on online or offline
|
||||
social behavior or known or predicted personal or personality
|
||||
characteristics;
|
||||
|
||||
- To exploit any of the vulnerabilities of a specific group of persons
|
||||
based on their age, social, physical or mental characteristics, in
|
||||
order to materially distort the behavior of a person pertaining to
|
||||
that group in a manner that causes or is likely to cause that person
|
||||
or another person physical or psychological harm;
|
||||
|
||||
- For any use intended to or which has the effect of discriminating
|
||||
against individuals or groups based on legally protected
|
||||
characteristics or categories;
|
||||
|
||||
- To provide medical advice and medical results interpretation;
|
||||
|
||||
- To generate or disseminate information for the purpose to be used
|
||||
for administration of justice, law enforcement, immigration or
|
||||
asylum processes, such as predicting an individual will commit
|
||||
fraud/crime commitment (e.g. by text profiling, drawing causal
|
||||
relationships between assertions made in documents, indiscriminate
|
||||
and arbitrarily-targeted use).
|
||||
679
README.md
@@ -1,513 +1,208 @@
|
||||
# Stable Diffusion Dream Script
|
||||
<div align="center">
|
||||
|
||||
This is a fork of CompVis/stable-diffusion, the wonderful open source
|
||||
text-to-image generator.
|
||||
# InvokeAI: A Stable Diffusion Toolkit
|
||||
|
||||
The original has been modified in several ways:
|
||||
_Formerly known as lstein/stable-diffusion_
|
||||
|
||||
## Interactive command-line interface similar to the Discord bot
|
||||

|
||||
|
||||
The *dream.py* script, located in scripts/dream.py,
|
||||
provides an interactive interface to image generation similar to
|
||||
the "dream mothership" bot that Stable AI provided on its Discord
|
||||
server. Unlike the txt2img.py and img2img.py scripts provided in the
|
||||
original CompViz/stable-diffusion source code repository, the
|
||||
time-consuming initialization of the AI model
|
||||
only happens once. After that, image generation
|
||||
from the command-line interface is very fast.
|
||||
[![discord badge]][discord link]
|
||||
|
||||
The script uses the readline library to allow for in-line editing,
|
||||
command history (up and down arrows), autocompletion, and more. To help
|
||||
keep track of which prompts generated which images, the script writes a
|
||||
log file of image names and prompts to the selected output directory.
|
||||
In addition, as of version 1.02, it also writes the prompt into the PNG
|
||||
file's metadata where it can be retrieved using scripts/images2prompt.py
|
||||
[![latest release badge]][latest release link] [![github stars badge]][github stars link] [![github forks badge]][github forks link]
|
||||
|
||||
The script is confirmed to work on Linux and Windows systems. It should
|
||||
work on MacOSX as well, but this is not confirmed. Note that this script
|
||||
runs from the command-line (CMD or Terminal window), and does not have a GUI.
|
||||
[![CI checks on main badge]][CI checks on main link] [![CI checks on dev badge]][CI checks on dev link] [![latest commit to dev badge]][latest commit to dev link]
|
||||
|
||||
~~~~
|
||||
(ldm) ~/stable-diffusion$ python3 ./scripts/dream.py
|
||||
* Initializing, be patient...
|
||||
Loading model from models/ldm/text2img-large/model.ckpt
|
||||
LatentDiffusion: Running in eps-prediction mode
|
||||
DiffusionWrapper has 872.30 M params.
|
||||
making attention of type 'vanilla' with 512 in_channels
|
||||
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
|
||||
making attention of type 'vanilla' with 512 in_channels
|
||||
Loading Bert tokenizer from "models/bert"
|
||||
setting sampler to plms
|
||||
[![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link]
|
||||
|
||||
* Initialization done! Awaiting your command...
|
||||
dream> ashley judd riding a camel -n2 -s150
|
||||
Outputs:
|
||||
outputs/img-samples/00009.png: "ashley judd riding a camel" -n2 -s150 -S 416354203
|
||||
outputs/img-samples/00010.png: "ashley judd riding a camel" -n2 -s150 -S 1362479620
|
||||
[CI checks on dev badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/development?label=CI%20status%20on%20dev&cache=900&icon=github
|
||||
[CI checks on dev link]: https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Adevelopment
|
||||
[CI checks on main badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/main?label=CI%20status%20on%20main&cache=900&icon=github
|
||||
[CI checks on main link]: https://github.com/invoke-ai/InvokeAI/actions/workflows/test-invoke-conda.yml
|
||||
[discord badge]: https://flat.badgen.net/discord/members/ZmtBAhwWhy?icon=discord
|
||||
[discord link]: https://discord.gg/ZmtBAhwWhy
|
||||
[github forks badge]: https://flat.badgen.net/github/forks/invoke-ai/InvokeAI?icon=github
|
||||
[github forks link]: https://useful-forks.github.io/?repo=invoke-ai%2FInvokeAI
|
||||
[github open issues badge]: https://flat.badgen.net/github/open-issues/invoke-ai/InvokeAI?icon=github
|
||||
[github open issues link]: https://github.com/invoke-ai/InvokeAI/issues?q=is%3Aissue+is%3Aopen
|
||||
[github open prs badge]: https://flat.badgen.net/github/open-prs/invoke-ai/InvokeAI?icon=github
|
||||
[github open prs link]: https://github.com/invoke-ai/InvokeAI/pulls?q=is%3Apr+is%3Aopen
|
||||
[github stars badge]: https://flat.badgen.net/github/stars/invoke-ai/InvokeAI?icon=github
|
||||
[github stars link]: https://github.com/invoke-ai/InvokeAI/stargazers
|
||||
[latest commit to dev badge]: https://flat.badgen.net/github/last-commit/invoke-ai/InvokeAI/development?icon=github&color=yellow&label=last%20dev%20commit&cache=900
|
||||
[latest commit to dev link]: https://github.com/invoke-ai/InvokeAI/commits/development
|
||||
[latest release badge]: https://flat.badgen.net/github/release/invoke-ai/InvokeAI/development?icon=github
|
||||
[latest release link]: https://github.com/invoke-ai/InvokeAI/releases
|
||||
</div>
|
||||
|
||||
dream> "there's a fly in my soup" -n6 -g
|
||||
outputs/img-samples/00011.png: "there's a fly in my soup" -n6 -g -S 2685670268
|
||||
seeds for individual rows: [2685670268, 1216708065, 2335773498, 822223658, 714542046, 3395302430]
|
||||
dream> q
|
||||
This is a fork of
|
||||
[CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion),
|
||||
the open source text-to-image generator. It provides a streamlined
|
||||
process with various new features and options to aid the image
|
||||
generation process. It runs on Windows, Mac and Linux machines, with
|
||||
GPU cards with as little as 4 GB of RAM. It provides both a polished
|
||||
Web interface (see below), and an easy-to-use command-line interface.
|
||||
|
||||
# this shows how to retrieve the prompt stored in the saved image's metadata
|
||||
(ldm) ~/stable-diffusion$ python3 ./scripts/images2prompt.py outputs/img_samples/*.png
|
||||
00009.png: "ashley judd riding a camel" -s150 -S 416354203
|
||||
00010.png: "ashley judd riding a camel" -s150 -S 1362479620
|
||||
00011.png: "there's a fly in my soup" -n6 -g -S 2685670268
|
||||
~~~~
|
||||
**Quick links**: [<a href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>] [<a href="https://invoke-ai.github.io/InvokeAI/">Documentation and Tutorials</a>] [<a href="https://github.com/invoke-ai/InvokeAI/">Code and Downloads</a>] [<a href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>] [<a href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion, Ideas & Q&A</a>]
|
||||
|
||||
The dream> prompt's arguments are pretty much
|
||||
identical to those used in the Discord bot, except you don't need to
|
||||
type "!dream" (it doesn't hurt if you do). A significant change is that creation of individual images
|
||||
is now the default
|
||||
unless --grid (-g) is given. For backward compatibility, the -i switch is recognized.
|
||||
For command-line help type -h (or --help) at the dream> prompt.
|
||||
|
||||
The script itself also recognizes a series of command-line switches that will change
|
||||
important global defaults, such as the directory for image outputs and the location
|
||||
of the model weight files.
|
||||
|
||||
## Image-to-Image
|
||||
|
||||
This script also provides an img2img feature that lets you seed your
|
||||
creations with a drawing or photo. This is a really cool feature that tells
|
||||
stable diffusion to build the prompt on top of the image you provide, preserving
|
||||
the original's basic shape and layout. To use it, provide the --init_img
|
||||
option as shown here:
|
||||
|
||||
~~~~
|
||||
dream> "waterfall and rainbow" --init_img=./init-images/crude_drawing.png --strength=0.5 -s100 -n4
|
||||
~~~~
|
||||
|
||||
The --init_img (-I) option gives the path to the seed picture. --strength (-f) controls how much
|
||||
the original will be modified, ranging from 0.0 (keep the original intact), to 1.0 (ignore the original
|
||||
completely). The default is 0.75, and ranges from 0.25-0.75 give interesting results.
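Under the hood, strength effectively decides how many of the denoising steps are applied to the noised version of your init image. The sketch below is illustrative only, an assumption about how latent-diffusion img2img pipelines typically use this value, not a quote of this script's code:

```py
# Illustrative sketch (not this repository's actual code): img2img preserves
# the original layout by skipping the earliest denoising steps. A strength of
# f on an s-step run noises the init image partway and then denoises it for
# roughly f * s steps, so lower strength keeps more of the original.
steps = 100        # -s100
strength = 0.5     # --strength=0.5
t_enc = int(strength * steps)
print(f"denoising the init image for {t_enc} of {steps} steps")  # 50 of 100
```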
|
||||
|
||||
## Changes
|
||||
|
||||
* v1.03 (22 August 2022)
|
||||
* The original txt2img and img2img scripts from the CompVis repository have been moved into
|
||||
a subfolder named "orig_scripts", to reduce confusion.
|
||||
|
||||
* v1.02 (21 August 2022)
|
||||
* A copy of the prompt and all of its switches and options is now stored in the corresponding
|
||||
image in a tEXt metadata field named "Dream". You can read the prompt using scripts/images2prompt.py,
|
||||
or an image editor that allows you to explore the full metadata.
|
||||
**Please run "conda env update -f environment.yaml" to load the k_lms dependencies!!**
|
||||
|
||||
* v1.01 (21 August 2022)
|
||||
* added k_lms sampling.
|
||||
**Please run "conda env update -f environment.yaml" to load the k_lms dependencies!!**
|
||||
* use half precision arithmetic by default, resulting in faster execution and lower memory requirements
|
||||
Pass argument --full_precision to dream.py to get slower but more accurate image generation
|
||||
<div align="center"><img src="docs/assets/invoke-web-server-1.png" width=640></div>
|
||||
|
||||
|
||||
## Installation
|
||||
_Note: This fork is rapidly evolving. Please use the
|
||||
[Issues](https://github.com/invoke-ai/InvokeAI/issues) tab to report bugs and make feature
|
||||
requests. Be sure to use the provided templates. They will help us diagnose issues faster._
|
||||
|
||||
### Linux/Mac
|
||||
## Table of Contents
|
||||
|
||||
1. You will need to install the following prerequisites if they are not already available. Use your
|
||||
operating system's preferred installer
|
||||
* Python (version 3.8.5 recommended; higher may work)
|
||||
* git
|
||||
1. [Installation](#installation)
|
||||
2. [Hardware Requirements](#hardware-requirements)
|
||||
3. [Features](#features)
|
||||
4. [Latest Changes](#latest-changes)
|
||||
5. [Troubleshooting](#troubleshooting)
|
||||
6. [Contributing](#contributing)
|
||||
7. [Contributors](#contributors)
|
||||
8. [Support](#support)
|
||||
9. [Further Reading](#further-reading)
|
||||
|
||||
2. Install the Python Anaconda environment manager using pip3.
|
||||
```
|
||||
~$ pip3 install anaconda
|
||||
```
|
||||
After installing anaconda, you should log out of your system and log back in. If the installation
|
||||
worked, your command prompt will be prefixed by the name of the current anaconda environment, "(base)".
|
||||
### Installation
|
||||
|
||||
3. Copy the stable-diffusion source code from GitHub:
|
||||
```
|
||||
(base) ~$ git clone https://github.com/lstein/stable-diffusion.git
|
||||
```
|
||||
This will create a stable-diffusion folder where you will follow the rest of the steps.
|
||||
This fork is supported across Linux, Windows and Macintosh. Linux
|
||||
users can use either an Nvidia-based card (with CUDA support) or an
|
||||
AMD card (using the ROCm driver). For full installation and upgrade
|
||||
instructions, please see:
|
||||
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/)
|
||||
|
||||
4. Enter the newly-created stable-diffusion folder. From this step forward make sure that you are working in the stable-diffusion directory!
|
||||
```
|
||||
(base) ~$ cd stable-diffusion
|
||||
(base) ~/stable-diffusion$
|
||||
```
|
||||
5. Use anaconda to copy necessary python packages, create a new python environment named "ldm",
|
||||
and activate the environment.
|
||||
```
|
||||
(base) ~/stable-diffusion$ conda env create -f environment.yaml
|
||||
(base) ~/stable-diffusion$ conda activate ldm
|
||||
(ldm) ~/stable-diffusion$
|
||||
```
|
||||
After these steps, your command prompt will be prefixed by "(ldm)" as shown above.
|
||||
### Hardware Requirements
|
||||
|
||||
6. Load a couple of small machine-learning models required by stable diffusion:
|
||||
```
|
||||
(ldm) ~/stable-diffusion$ python3 scripts/preload_models.py
|
||||
#### System
|
||||
|
||||
You will need one of the following:
|
||||
|
||||
- An NVIDIA-based graphics card with 4 GB or more of VRAM.
|
||||
- An Apple computer with an M1 chip.
|
||||
|
||||
#### Memory
|
||||
|
||||
- At least 12 GB of main memory (RAM).
|
||||
|
||||
#### Disk
|
||||
|
||||
- At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.
|
||||
|
||||
**Note**
|
||||
|
||||
If you have an Nvidia 10xx series card (e.g. the 1080 Ti), please
|
||||
run the dream script in full-precision mode as shown below.
|
||||
|
||||
Similarly, specify full-precision mode on Apple M1 hardware.
|
||||
|
||||
Precision is auto-configured based on the device. If, however, you encounter
|
||||
errors like 'expected type Float but found Half' or 'not implemented for Half'
|
||||
you can try starting `invoke.py` with the `--precision=float32` flag:
|
||||
|
||||
```bash
|
||||
(invokeai) ~/InvokeAI$ python scripts/invoke.py --precision=float32
|
||||
```
|
||||
|
||||
7. Now you need to install the weights for the stable diffusion model.
|
||||
|
||||
For testing prior to the release of the real weights, you can use an older weight file that produces low-quality images. Create a directory within stable-diffusion named "models/ldm/text2img-large", and use the wget URL downloader tool to copy the weight file into it:
|
||||
```
|
||||
(ldm) ~/stable-diffusion$ mkdir -p models/ldm/text2img-large
|
||||
(ldm) ~/stable-diffusion$ wget -O models/ldm/text2img-large/model.ckpt https://ommer-lab.com/files/latent-diffusion/nitro/txt2img-f8-large/model.ckpt
|
||||
```
|
||||
For testing with the released weights, you will do something similar, but with a directory named "models/ldm/stable-diffusion-v1":
|
||||
```
|
||||
(ldm) ~/stable-diffusion$ mkdir -p models/ldm/stable-diffusion-v1
|
||||
(ldm) ~/stable-diffusion$ wget -O models/ldm/stable-diffusion-v1/model.ckpt <ENTER URL HERE>
|
||||
```
|
||||
These weight files are ~5 GB in size, so downloading may take a while.
|
||||
|
||||
8. Start generating images!
|
||||
```
|
||||
# for the pre-release weights use the -l or --laion400m switch
|
||||
(ldm) ~/stable-diffusion$ python3 scripts/dream.py -l
|
||||
|
||||
# for the post-release weights do not use the switch
|
||||
(ldm) ~/stable-diffusion$ python3 scripts/dream.py
|
||||
|
||||
# for additional configuration switches and arguments, use -h or --help
|
||||
(ldm) ~/stable-diffusion$ python3 scripts/dream.py -h
|
||||
```
|
||||
9. Subsequently, to relaunch the script, be sure to run "conda activate ldm" (step 5, second command), enter the "stable-diffusion"
|
||||
directory, and then launch the dream script (step 8). If you forget to activate the ldm environment, the script will fail with multiple ModuleNotFound errors.
|
||||
|
||||
### Updating to newer versions of the script
|
||||
|
||||
This distribution is changing rapidly. If you used the "git clone" method (step 3) to download the stable-diffusion directory, then to update to the latest and greatest version, launch the Anaconda window, enter the "stable-diffusion" directory, and type:
|
||||
```
|
||||
(ldm) ~/stable-diffusion$ git pull
|
||||
```
|
||||
This will bring your local copy into sync with the remote one.
|
||||
|
||||
### Windows
|
||||
|
||||
1. Install Python version 3.8.5 from here: https://www.python.org/downloads/windows/
|
||||
(note that several users have reported that later versions do not work properly)
|
||||
|
||||
2. Install Anaconda3 (miniconda3 version) from here: https://docs.anaconda.com/anaconda/install/windows/
|
||||
|
||||
3. Install Git from here: https://git-scm.com/download/win
|
||||
|
||||
4. Launch Anaconda from the Windows Start menu. This will bring up a command window. Type all the remaining commands in this window.
|
||||
|
||||
5. Run the command:
|
||||
```
|
||||
git clone https://github.com/lstein/stable-diffusion.git
|
||||
```
|
||||
This will create a stable-diffusion folder where you will follow the rest of the steps.
|
||||
|
||||
6. Enter the newly-created stable-diffusion folder. From this step forward make sure that you are working in the stable-diffusion directory!
|
||||
```
|
||||
cd stable-diffusion
|
||||
```
|
||||
|
||||
7. Run the following two commands:
|
||||
```
|
||||
conda env create -f environment.yaml (step 7a)
|
||||
conda activate ldm (step 7b)
|
||||
```
|
||||
This will install all python requirements and activate the "ldm" environment which sets PATH and other environment variables properly.
|
||||
|
||||
8. Run the command:
|
||||
```
|
||||
python scripts\preload_models.py
|
||||
```
|
||||
This installs two machine learning models that stable diffusion requires.
|
||||
|
||||
9. Now you need to install the weights for the big stable diffusion model.
|
||||
|
||||
For testing prior to the release of the real weights, create a directory within stable-diffusion named "models\ldm\text2img-large".
|
||||
|
||||
For testing with the released weights, create a directory within stable-diffusion named "models\ldm\stable-diffusion-v1".
|
||||
|
||||
Then use a web browser to download model.ckpt into the appropriate directory. For the text2img-large (pre-release) model, the weights are at https://ommer-lab.com/files/latent-diffusion/nitro/txt2img-f8-large/model.ckpt. Check back here later for the release URL.
|
||||
|
||||
10. Start generating images!
|
||||
```
|
||||
# for the pre-release weights
|
||||
python scripts\dream.py -l
|
||||
|
||||
# for the post-release weights
|
||||
python scripts\dream.py
|
||||
```
|
||||
11. Subsequently, to relaunch the script, first activate the Anaconda command window (step 4), enter the stable-diffusion directory (step 6, "cd \path\to\stable-diffusion"), run "conda activate ldm" (step 7b), and then launch the dream script (step 10).
|
||||
|
||||
### Updating to newer versions of the script
|
||||
|
||||
This distribution is changing rapidly. If you used the "git clone" method (step 5) to download the stable-diffusion directory, then to update to the latest and greatest version, launch the Anaconda window, enter "stable-diffusion", and type:
|
||||
```
|
||||
git pull
|
||||
```
|
||||
This will bring your local copy into sync with the remote one.
|
||||
|
||||
## Simplified API for text to image generation
|
||||
|
||||
For programmers who wish to incorporate stable-diffusion into other
|
||||
products, this repository includes a simplified API for text to image generation, which
|
||||
lets you create images from a prompt in just three lines of code:
|
||||
|
||||
~~~~
|
||||
from ldm.simplet2i import T2I
|
||||
model = T2I()
|
||||
outputs = model.text2image("a unicorn in manhattan")
|
||||
~~~~
|
||||
|
||||
Outputs is a list of lists in the format [[filename1,seed1],[filename2,seed2]...]
|
||||
Please see ldm/simplet2i.py for more information.
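For example, a minimal sketch of consuming that return value (assuming the repository root is on your PYTHONPATH and the weights are installed as described above):

```py
# Minimal sketch: generate an image and walk the [[filename, seed], ...] list.
from ldm.simplet2i import T2I

model = T2I()
outputs = model.text2image("a unicorn in manhattan")

for filename, seed in outputs:
    print(f"{filename} was generated with seed {seed}")
```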
|
||||
|
||||
|
||||
## Workaround for machines with limited internet connectivity
|
||||
|
||||
My development machine is a GPU node in a high-performance compute
|
||||
cluster which has no connection to the internet. During model
|
||||
initialization, stable-diffusion tries to download the Bert tokenizer
|
||||
and a file needed by the kornia library. This obviously didn't work
|
||||
for me.
|
||||
|
||||
To work around this, I have modified ldm/modules/encoders/modules.py
|
||||
to look for locally cached Bert files rather than attempting to
|
||||
download them. For this to work, you must run
|
||||
"scripts/preload_models.py" once from an internet-connected machine
|
||||
prior to running the code on an isolated one. This assumes that both
|
||||
machines share a common network-mounted filesystem with a common
|
||||
.cache directory.
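If the two machines cannot share a .cache directory directly, one possible workaround (an assumption about the caching libraries involved, not something this repository sets up for you) is to point both machines at the same network path through the standard cache environment variables of the transformers and torch libraries:

```py
# Hypothetical sketch: direct the Hugging Face and torch.hub caches to a
# shared network path before any models are loaded. TRANSFORMERS_CACHE and
# TORCH_HOME are standard environment variables of transformers and torch;
# the shared path below is an example, not a repository default.
import os

shared_cache = "/mnt/shared/.cache"
os.environ["TRANSFORMERS_CACHE"] = os.path.join(shared_cache, "huggingface")
os.environ["TORCH_HOME"] = os.path.join(shared_cache, "torch")
```

Set these in the shell profile of both machines (or at the very top of the script) so they take effect before the downloads or cache lookups happen.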
|
||||
|
||||
~~~~
|
||||
(ldm) ~/stable-diffusion$ python3 ./scripts/preload_models.py
|
||||
preloading bert tokenizer...
|
||||
Downloading: 100%|██████████████████████████████████| 28.0/28.0 [00:00<00:00, 49.3kB/s]
|
||||
Downloading: 100%|██████████████████████████████████| 226k/226k [00:00<00:00, 2.79MB/s]
|
||||
Downloading: 100%|██████████████████████████████████| 455k/455k [00:00<00:00, 4.36MB/s]
|
||||
Downloading: 100%|██████████████████████████████████| 570/570 [00:00<00:00, 477kB/s]
|
||||
...success
|
||||
preloading kornia requirements...
|
||||
Downloading: "https://github.com/DagnyT/hardnet/raw/master/pretrained/train_liberty_with_aug/checkpoint_liberty_with_aug.pth" to /u/lstein/.cache/torch/hub/checkpoints/checkpoint_liberty_with_aug.pth
|
||||
100%|███████████████████████████████████████████████| 5.10M/5.10M [00:00<00:00, 101MB/s]
|
||||
...success
|
||||
~~~~
|
||||
|
||||
If you don't need this change and want to download the files just in
|
||||
time, copy over the file ldm/modules/encoders/modules.py from the
|
||||
CompVis/stable-diffusion repository. Or you can run preload_models.py
|
||||
on the target machine.
|
||||
|
||||
## Support
|
||||
|
||||
For support,
|
||||
please use this repository's GitHub Issues tracking service. Feel free
|
||||
to send me an email if you use and like the script.
|
||||
|
||||
*Original Author:* Lincoln D. Stein <lincoln.stein@gmail.com>
|
||||
|
||||
*Contributions by:* [Peter Kowalczyk](https://github.com/slix), [Henry Harrison](https://github.com/hwharrison), [xraxra](https://github.com/xraxra), and [bmaltais](https://github.com/bmaltais)
|
||||
|
||||
# Original README from CompVis/stable-diffusion
|
||||
*Stable Diffusion was made possible thanks to a collaboration with [Stability AI](https://stability.ai/) and [Runway](https://runwayml.com/) and builds upon our previous work:*
|
||||
|
||||
[**High-Resolution Image Synthesis with Latent Diffusion Models**](https://ommer-lab.com/research/latent-diffusion-models/)<br/>
|
||||
[Robin Rombach](https://github.com/rromb)\*,
|
||||
[Andreas Blattmann](https://github.com/ablattmann)\*,
|
||||
[Dominik Lorenz](https://github.com/qp-qp),
|
||||
[Patrick Esser](https://github.com/pesser),
|
||||
[Björn Ommer](https://hci.iwr.uni-heidelberg.de/Staff/bommer)<br/>
|
||||
|
||||
**CVPR '22 Oral**
|
||||
|
||||
which is available on [GitHub](https://github.com/CompVis/latent-diffusion). PDF at [arXiv](https://arxiv.org/abs/2112.10752). Please also visit our [Project page](https://ommer-lab.com/research/latent-diffusion-models/).
|
||||
|
||||

|
||||
[Stable Diffusion](#stable-diffusion-v1) is a latent text-to-image diffusion
|
||||
model.
|
||||
Thanks to a generous compute donation from [Stability AI](https://stability.ai/) and support from [LAION](https://laion.ai/), we were able to train a Latent Diffusion Model on 512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/) database.
|
||||
Similar to Google's [Imagen](https://arxiv.org/abs/2205.11487),
|
||||
this model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts.
|
||||
With its 860M UNet and 123M text encoder, the model is relatively lightweight and runs on a GPU with at least 10GB VRAM.
|
||||
See [this section](#stable-diffusion-v1) below and the [model card](https://huggingface.co/CompVis/stable-diffusion).
|
||||
|
||||
|
||||
## Requirements
|
||||
|
||||
A suitable [conda](https://conda.io/) environment named `ldm` can be created
|
||||
and activated with:
|
||||
|
||||
```
|
||||
conda env create -f environment.yaml
|
||||
conda activate ldm
|
||||
```
|
||||
|
||||
You can also update an existing [latent diffusion](https://github.com/CompVis/latent-diffusion) environment by running
|
||||
|
||||
```
|
||||
conda install pytorch torchvision -c pytorch
|
||||
pip install transformers==4.19.2
|
||||
pip install -e .
|
||||
```
|
||||
|
||||
## Stable Diffusion v1
|
||||
|
||||
Stable Diffusion v1 refers to a specific configuration of the model
|
||||
architecture that uses a downsampling-factor 8 autoencoder with an 860M UNet
|
||||
and CLIP ViT-L/14 text encoder for the diffusion model. The model was pretrained on 256x256 images and
|
||||
then finetuned on 512x512 images.
|
||||
|
||||
*Note: Stable Diffusion v1 is a general text-to-image diffusion model and therefore mirrors biases and (mis-)conceptions that are present
|
||||
in its training data.
|
||||
Details on the training procedure and data, as well as the intended use of the model can be found in the corresponding [model card](https://huggingface.co/CompVis/stable-diffusion).
|
||||
Research into the safe deployment of general text-to-image models is an ongoing effort. To prevent misuse and harm, we currently provide access to the checkpoints only for [academic research purposes upon request](https://stability.ai/academia-access-form).
|
||||
**This is an experiment in safe and community-driven publication of a capable and general text-to-image model. We are working on a public release with a more permissive license that also incorporates ethical considerations.***
|
||||
|
||||
[Request access to Stable Diffusion v1 checkpoints for academic research](https://stability.ai/academia-access-form)
|
||||
|
||||
### Weights
|
||||
|
||||
We currently provide three checkpoints, `sd-v1-1.ckpt`, `sd-v1-2.ckpt` and `sd-v1-3.ckpt`,
|
||||
which were trained as follows,
|
||||
|
||||
- `sd-v1-1.ckpt`: 237k steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
|
||||
194k steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
|
||||
- `sd-v1-2.ckpt`: Resumed from `sd-v1-1.ckpt`.
|
||||
515k steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
|
||||
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
|
||||
- `sd-v1-3.ckpt`: Resumed from `sd-v1-2.ckpt`. 195k steps at resolution `512x512` on "laion-improved-aesthetics" and 10\% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
|
||||
|
||||
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
|
||||
5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling
|
||||
steps show the relative improvements of the checkpoints:
|
||||

|
||||
|
||||
|
||||
|
||||
### Text-to-Image with Stable Diffusion
|
||||

|
||||

|
||||
|
||||
Stable Diffusion is a latent diffusion model conditioned on the (non-pooled) text embeddings of a CLIP ViT-L/14 text encoder.
|
||||
|
||||
|
||||
#### Sampling Script
|
||||
|
||||
After [obtaining the weights](#weights), link them
|
||||
```
|
||||
mkdir -p models/ldm/stable-diffusion-v1/
|
||||
ln -s <path/to/model.ckpt> models/ldm/stable-diffusion-v1/model.ckpt
|
||||
```
|
||||
and sample with
|
||||
```
|
||||
python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms
|
||||
```
|
||||
|
||||
By default, this uses a guidance scale of `--scale 7.5`, [Katherine Crowson's implementation](https://github.com/CompVis/latent-diffusion/pull/51) of the [PLMS](https://arxiv.org/abs/2202.09778) sampler,
|
||||
and renders images of size 512x512 (which it was trained on) in 50 steps. All supported arguments are listed below (type `python scripts/txt2img.py --help`).
|
||||
|
||||
```commandline
|
||||
usage: txt2img.py [-h] [--prompt [PROMPT]] [--outdir [OUTDIR]] [--skip_grid] [--skip_save] [--ddim_steps DDIM_STEPS] [--plms] [--laion400m] [--fixed_code] [--ddim_eta DDIM_ETA] [--n_iter N_ITER] [--H H] [--W W] [--C C] [--f F] [--n_samples N_SAMPLES] [--n_rows N_ROWS]
|
||||
[--scale SCALE] [--from-file FROM_FILE] [--config CONFIG] [--ckpt CKPT] [--seed SEED] [--precision {full,autocast}]
|
||||
|
||||
optional arguments:
|
||||
-h, --help show this help message and exit
|
||||
--prompt [PROMPT] the prompt to render
|
||||
--outdir [OUTDIR] dir to write results to
|
||||
--skip_grid do not save a grid, only individual samples. Helpful when evaluating lots of samples
|
||||
--skip_save do not save individual samples. For speed measurements.
|
||||
--ddim_steps DDIM_STEPS
|
||||
number of ddim sampling steps
|
||||
--plms use plms sampling
|
||||
--laion400m uses the LAION400M model
|
||||
--fixed_code if enabled, uses the same starting code across samples
|
||||
  --ddim_eta DDIM_ETA   ddim eta (eta=0.0 corresponds to deterministic sampling)
|
||||
--n_iter N_ITER sample this often
|
||||
--H H image height, in pixel space
|
||||
--W W image width, in pixel space
|
||||
--C C latent channels
|
||||
--f F downsampling factor
|
||||
--n_samples N_SAMPLES
|
||||
how many samples to produce for each given prompt. A.k.a. batch size
|
||||
--n_rows N_ROWS rows in the grid (default: n_samples)
|
||||
--scale SCALE unconditional guidance scale: eps = eps(x, empty) + scale * (eps(x, cond) - eps(x, empty))
|
||||
--from-file FROM_FILE
|
||||
if specified, load prompts from this file
|
||||
--config CONFIG path to config which constructs model
|
||||
--ckpt CKPT path to checkpoint of model
|
||||
--seed SEED the seed (for reproducible sampling)
|
||||
--precision {full,autocast}
|
||||
evaluate at this precision
|
||||
|
||||
```
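The `--scale` option is the classifier-free guidance weight described in the help text above. A minimal sketch of the combination step it refers to (illustrative only; the tensors stand in for the model's noise predictions, and this is not the repository's sampler code):

```py
# Illustrative classifier-free guidance step (not the repository's code).
# eps_uncond and eps_cond stand in for the model's noise predictions for the
# empty prompt and the conditioned prompt, respectively.
import torch

def guided_eps(eps_uncond: torch.Tensor, eps_cond: torch.Tensor, scale: float = 7.5) -> torch.Tensor:
    # eps = eps(x, empty) + scale * (eps(x, cond) - eps(x, empty))
    return eps_uncond + scale * (eps_cond - eps_uncond)
```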
|
||||
Note: The inference config for all v1 versions is designed to be used with EMA-only checkpoints.
|
||||
For this reason `use_ema=False` is set in the configuration, otherwise the code will try to switch from
|
||||
non-EMA to EMA weights. If you want to examine the effect of EMA vs no EMA, we provide "full" checkpoints
|
||||
which contain both types of weights. For these, `use_ema=False` will load and use the non-EMA weights.
|
||||
|
||||
|
||||
#### Diffusers Integration
|
||||
|
||||
Another way to download and sample Stable Diffusion is by using the [diffusers library](https://github.com/huggingface/diffusers/tree/main#new--stable-diffusion-is-now-fully-compatible-with-diffusers)
|
||||
```py
|
||||
# make sure you're logged in with `huggingface-cli login`
|
||||
from torch import autocast
|
||||
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler
|
||||
|
||||
pipe = StableDiffusionPipeline.from_pretrained(
|
||||
"CompVis/stable-diffusion-v1-3-diffusers",
|
||||
use_auth_token=True
|
||||
)
|
||||
|
||||
prompt = "a photo of an astronaut riding a horse on mars"
|
||||
with autocast("cuda"):
|
||||
image = pipe(prompt)["sample"][0]
|
||||
|
||||
image.save("astronaut_rides_horse.png")
|
||||
```
|
||||
|
||||
|
||||
|
||||
### Image Modification with Stable Diffusion
|
||||
|
||||
By using a diffusion-denoising mechanism as first proposed by [SDEdit](https://arxiv.org/abs/2108.01073), the model can be used for different
|
||||
tasks such as text-guided image-to-image translation and upscaling. Similar to the txt2img sampling script,
|
||||
we provide a script to perform image modification with Stable Diffusion.
|
||||
|
||||
The following describes an example where a rough sketch made in [Pinta](https://www.pinta-project.com/) is converted into a detailed artwork.
|
||||
```
|
||||
python scripts/img2img.py --prompt "A fantasy landscape, trending on artstation" --init-img <path-to-img.jpg> --strength 0.8
|
||||
```
|
||||
Here, strength is a value between 0.0 and 1.0 that controls the amount of noise that is added to the input image.
|
||||
Values that approach 1.0 allow for lots of variations but will also produce images that are not semantically consistent with the input. See the following example.
|
||||
|
||||
**Input**
|
||||
|
||||

|
||||
|
||||
**Outputs**
|
||||
|
||||

|
||||

|
||||
|
||||
This procedure can, for example, also be used to upscale samples from the base model.
|
||||
|
||||
|
||||
## Comments
|
||||
|
||||
- Our codebase for the diffusion models builds heavily on [OpenAI's ADM codebase](https://github.com/openai/guided-diffusion)
|
||||
and [https://github.com/lucidrains/denoising-diffusion-pytorch](https://github.com/lucidrains/denoising-diffusion-pytorch).
|
||||
Thanks for open-sourcing!
|
||||
|
||||
- The implementation of the transformer encoder is from [x-transformers](https://github.com/lucidrains/x-transformers) by [lucidrains](https://github.com/lucidrains?tab=repositories).
|
||||
|
||||
|
||||
## BibTeX
|
||||
|
||||
```
|
||||
@misc{rombach2021highresolution,
|
||||
title={High-Resolution Image Synthesis with Latent Diffusion Models},
|
||||
author={Robin Rombach and Andreas Blattmann and Dominik Lorenz and Patrick Esser and Björn Ommer},
|
||||
year={2021},
|
||||
eprint={2112.10752},
|
||||
archivePrefix={arXiv},
|
||||
primaryClass={cs.CV}
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
|
||||
### Features
|
||||
|
||||
#### Major Features
|
||||
|
||||
- [Web Server](https://invoke-ai.github.io/InvokeAI/features/WEB/)
|
||||
- [Interactive Command Line Interface](https://invoke-ai.github.io/InvokeAI/features/CLI/)
|
||||
- [Image To Image](https://invoke-ai.github.io/InvokeAI/features/IMG2IMG/)
|
||||
- [Inpainting Support](https://invoke-ai.github.io/InvokeAI/features/INPAINTING/)
|
||||
- [Outpainting Support](https://invoke-ai.github.io/InvokeAI/features/OUTPAINTING/)
|
||||
- [Upscaling, face-restoration and outpainting](https://invoke-ai.github.io/InvokeAI/features/POSTPROCESS/)
|
||||
- [Reading Prompts From File](https://invoke-ai.github.io/InvokeAI/features/PROMPTS/#reading-prompts-from-a-file)
|
||||
- [Prompt Blending](https://invoke-ai.github.io/InvokeAI/features/PROMPTS/#prompt-blending)
|
||||
- [Thresholding and Perlin Noise Initialization Options](https://invoke-ai.github.io/InvokeAI/features/OTHER/#thresholding-and-perlin-noise-initialization-options)
|
||||
- [Negative/Unconditioned Prompts](https://invoke-ai.github.io/InvokeAI/features/PROMPTS/#negative-and-unconditioned-prompts)
|
||||
- [Variations](https://invoke-ai.github.io/InvokeAI/features/VARIATIONS/)
|
||||
- [Personalizing Text-to-Image Generation](https://invoke-ai.github.io/InvokeAI/features/TEXTUAL_INVERSION/)
|
||||
- [Simplified API for text to image generation](https://invoke-ai.github.io/InvokeAI/features/OTHER/#simplified-api)
|
||||
|
||||
#### Other Features
|
||||
|
||||
- [Google Colab](https://invoke-ai.github.io/InvokeAI/features/OTHER/#google-colab)
|
||||
- [Seamless Tiling](https://invoke-ai.github.io/InvokeAI/features/OTHER/#seamless-tiling)
|
||||
- [Shortcut: Reusing Seeds](https://invoke-ai.github.io/InvokeAI/features/OTHER/#shortcuts-reusing-seeds)
|
||||
- [Preload Models](https://invoke-ai.github.io/InvokeAI/features/OTHER/#preload-models)
|
||||
|
||||
### Latest Changes
|
||||
|
||||
- v2.0.1 (13 October 2022)
|
||||
- fix noisy images at high step count when using k* samplers
|
||||
- dream.py script now calls invoke.py module directly rather than
|
||||
via a new python process (which could break the environment)
|
||||
|
||||
- v2.0.0 (9 October 2022)
|
||||
|
||||
- `dream.py` script renamed `invoke.py`. A `dream.py` script wrapper remains
|
||||
for backward compatibility.
|
||||
- Completely new WebGUI - launch with `python3 scripts/invoke.py --web`
|
||||
- Support for <a href="https://invoke-ai.github.io/InvokeAI/features/INPAINTING/">inpainting</a> and <a href="https://invoke-ai.github.io/InvokeAI/features/OUTPAINTING/">outpainting</a>
|
||||
- img2img runs on all k* samplers
|
||||
- Support for <a href="https://invoke-ai.github.io/InvokeAI/features/PROMPTS/#negative-and-unconditioned-prompts">negative prompts</a>
|
||||
- Support for CodeFormer face reconstruction
|
||||
- Support for Textual Inversion on Macintoshes
|
||||
- Support in both WebGUI and CLI for <a href="https://invoke-ai.github.io/InvokeAI/features/POSTPROCESS/">post-processing of previously-generated images</a>
|
||||
using facial reconstruction, ESRGAN upscaling, outcropping (similar to DALL-E infinite canvas),
|
||||
and "embiggen" upscaling. See the `!fix` command.
|
||||
- New `--hires` option on `invoke>` line allows <a href="https://invoke-ai.github.io/InvokeAI/features/CLI/#txt2img">larger images to be created without duplicating elements</a>, at the cost of some performance.
|
||||
- New `--perlin` and `--threshold` options allow you to add and control variation
|
||||
during image generation (see <a href="https://github.com/invoke-ai/InvokeAI/blob/main/docs/features/OTHER.md#thresholding-and-perlin-noise-initialization-options">Thresholding and Perlin Noise Initialization</a>)
|
||||
- Extensive metadata now written into PNG files, allowing reliable regeneration of images
|
||||
and tweaking of previous settings.
|
||||
- Command-line completion in `invoke.py` now works on Windows, Linux and Mac platforms.
|
||||
- Improved <a href="https://invoke-ai.github.io/InvokeAI/features/CLI/">command-line completion behavior</a>.
|
||||
New commands added:
|
||||
- List command-line history with `!history`
|
||||
- Search command-line history with `!search`
|
||||
- Clear history with `!clear`
|
||||
- Deprecated `--full_precision` / `-F`. Simply omit it and `invoke.py` will auto
|
||||
configure. To switch away from auto use the new flag like `--precision=float32`.
|
||||
|
||||
For older changelogs, please visit the **[CHANGELOG](https://invoke-ai.github.io/InvokeAI/CHANGELOG#v114-11-september-2022)**.
|
||||
|
||||
### Troubleshooting
|
||||
|
||||
Please check out our **[Q&A](https://invoke-ai.github.io/InvokeAI/help/TROUBLESHOOT/#faq)** to get solutions for common installation
|
||||
problems and other issues.
|
||||
|
||||
# Contributing
|
||||
|
||||
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
|
||||
cleanup, testing, or code reviews, is very much encouraged to do so. To join, just raise your hand on the InvokeAI
|
||||
Discord server or discussion board.
|
||||
|
||||
If you are unfamiliar with how
|
||||
to contribute to GitHub projects, here is a
|
||||
[Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github). A full set of contribution guidelines, along with templates, is in progress, but for now the most
|
||||
important thing is to **make your pull request against the "development" branch**, and not against
|
||||
"main". This will help keep public breakage to a minimum and will allow you to propose more radical
|
||||
changes.
|
||||
|
||||
We hope you enjoy using our software as much as we enjoy creating it,
|
||||
and we hope that some of those of you who are reading this will elect
|
||||
to become part of our community.
|
||||
|
||||
Welcome to InvokeAI!
|
||||
|
||||
### Contributors
|
||||
|
||||
This fork is a combined effort of various people from across the world.
|
||||
[Check out the list of all these amazing people](https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/). We thank them for
|
||||
their time, hard work and effort.
|
||||
|
||||
### Support
|
||||
|
||||
For support, please use this repository's GitHub Issues tracking service. Feel free to send me an
|
||||
email if you use and like the script.
|
||||
|
||||
Original portions of the software are Copyright (c) 2020
|
||||
[Lincoln D. Stein](https://github.com/lstein)
|
||||
|
||||
### Further Reading
|
||||
|
||||
Please see the original README for more information on this software and underlying algorithm,
|
||||
located in the file [README-CompViz.md](https://invoke-ai.github.io/InvokeAI/other/README-CompViz/).
|
||||
|
||||
BIN
assets/caution.png
Normal file
|
After Width: | Height: | Size: 33 KiB |
|
0
backend/__init__.py
Normal file
1626
backend/invoke_ai_web_server.py
Normal file
0
backend/modules/__init__.py
Normal file
55
backend/modules/create_cmd_parser.py
Normal file
@@ -0,0 +1,55 @@
|
||||
import argparse
|
||||
import os
|
||||
from ldm.invoke.args import PRECISION_CHOICES
|
||||
|
||||
|
||||
def create_cmd_parser():
|
||||
parser = argparse.ArgumentParser(description="InvokeAI web UI")
|
||||
parser.add_argument(
|
||||
"--host",
|
||||
type=str,
|
||||
help="The host to serve on",
|
||||
default="localhost",
|
||||
)
|
||||
parser.add_argument("--port", type=int, help="The port to serve on", default=9090)
|
||||
parser.add_argument(
|
||||
"--cors",
|
||||
nargs="*",
|
||||
type=str,
|
||||
help="Additional allowed origins, comma-separated",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--embedding_path",
|
||||
type=str,
|
||||
help="Path to a pre-trained embedding manager checkpoint - can only be set on command line",
|
||||
)
|
||||
# TODO: Can't get flask to serve images from any dir (saving to the dir does work when specified)
|
||||
# parser.add_argument(
|
||||
# "--output_dir",
|
||||
# default="outputs/",
|
||||
# type=str,
|
||||
# help="Directory for output images",
|
||||
# )
|
||||
parser.add_argument(
|
||||
"-v",
|
||||
"--verbose",
|
||||
action="store_true",
|
||||
help="Enables verbose logging",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--precision",
|
||||
dest="precision",
|
||||
type=str,
|
||||
choices=PRECISION_CHOICES,
|
||||
metavar="PRECISION",
|
||||
help=f'Set model precision. Defaults to auto selected based on device. Options: {", ".join(PRECISION_CHOICES)}',
|
||||
default="auto",
|
||||
)
|
||||
parser.add_argument(
|
||||
'--free_gpu_mem',
|
||||
dest='free_gpu_mem',
|
||||
action='store_true',
|
||||
help='Force free gpu memory before final decoding',
|
||||
)
|
||||
|
||||
return parser
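A quick sketch of using the parser above (it assumes the repository root, including the `ldm` package, is importable):

```py
# Sketch: parse a typical web-UI command line with the parser defined above.
from backend.modules.create_cmd_parser import create_cmd_parser

parser = create_cmd_parser()
opt = parser.parse_args(["--host", "0.0.0.0", "--port", "9090", "--verbose"])
print(opt.host, opt.port, opt.verbose)  # 0.0.0.0 9090 True
```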
|
||||
117
backend/modules/get_canvas_generation_mode.py
Normal file
@@ -0,0 +1,117 @@
|
||||
from PIL import Image, ImageChops
|
||||
from PIL.Image import Image as ImageType
|
||||
from typing import Union, Literal
|
||||
|
||||
# https://stackoverflow.com/questions/43864101/python-pil-check-if-image-is-transparent
|
||||
def check_for_any_transparency(img: Union[ImageType, str]) -> bool:
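"""
Return True if the image (or the image at the given path) has any
transparency: an explicit "transparency" entry in img.info, a transparent
palette index (mode "P"), or an RGBA alpha channel that is not fully opaque.
"""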
|
||||
if type(img) is str:
|
||||
img = Image.open(img)
|
||||
|
||||
if img.info.get("transparency", None) is not None:
|
||||
return True
|
||||
if img.mode == "P":
|
||||
transparent = img.info.get("transparency", -1)
|
||||
for _, index in img.getcolors():
|
||||
if index == transparent:
|
||||
return True
|
||||
elif img.mode == "RGBA":
|
||||
extrema = img.getextrema()
|
||||
if extrema[3][0] < 255:
|
||||
return True
|
||||
return False
|
||||
|
||||
|
||||
def get_canvas_generation_mode(
|
||||
init_img: Union[ImageType, str], init_mask: Union[ImageType, str]
|
||||
) -> Literal["txt2img", "outpainting", "inpainting", "img2img",]:
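"""
Decide which generation mode a canvas submission corresponds to:
- fully transparent init image -> "txt2img"
- partially transparent init image -> "outpainting"
- opaque init image with a mask -> "inpainting"
- opaque init image without a mask -> "img2img"
"""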
|
||||
if type(init_img) is str:
|
||||
init_img = Image.open(init_img)
|
||||
|
||||
if type(init_mask) is str:
|
||||
init_mask = Image.open(init_mask)
|
||||
|
||||
init_img = init_img.convert("RGBA")
|
||||
|
||||
# Get alpha from init_img
|
||||
init_img_alpha = init_img.split()[-1]
|
||||
init_img_alpha_mask = init_img_alpha.convert("L")
|
||||
init_img_has_transparency = check_for_any_transparency(init_img)
|
||||
|
||||
if init_img_has_transparency:
|
||||
init_img_is_fully_transparent = (
|
||||
True if init_img_alpha_mask.getbbox() is None else False
|
||||
)
|
||||
|
||||
"""
|
||||
Mask images are white in areas where no change should be made, black where changes
|
||||
should be made.
|
||||
"""
|
||||
|
||||
# Fit the mask to init_img's size and convert it to greyscale
|
||||
init_mask = init_mask.resize(init_img.size).convert("L")
|
||||
|
||||
"""
|
||||
PIL.Image.getbbox() returns the bounding box of non-zero areas of the image, so we first
|
||||
invert the mask image so that masked areas are white and other areas black == zero.
|
||||
getbbox() now tells us if there are any masked areas.
|
||||
"""
|
||||
init_mask_bbox = ImageChops.invert(init_mask).getbbox()
|
||||
init_mask_exists = False if init_mask_bbox is None else True
|
||||
|
||||
if init_img_has_transparency:
|
||||
if init_img_is_fully_transparent:
|
||||
return "txt2img"
|
||||
else:
|
||||
return "outpainting"
|
||||
else:
|
||||
if init_mask_exists:
|
||||
return "inpainting"
|
||||
else:
|
||||
return "img2img"
|
||||
|
||||
|
||||
def main():
|
||||
# Testing
|
||||
init_img_opaque = "test_images/init-img_opaque.png"
|
||||
init_img_partial_transparency = "test_images/init-img_partial_transparency.png"
|
||||
init_img_full_transparency = "test_images/init-img_full_transparency.png"
|
||||
init_mask_no_mask = "test_images/init-mask_no_mask.png"
|
||||
init_mask_has_mask = "test_images/init-mask_has_mask.png"
|
||||
|
||||
print(
|
||||
"OPAQUE IMAGE, NO MASK, expect img2img, got ",
|
||||
get_canvas_generation_mode(init_img_opaque, init_mask_no_mask),
|
||||
)
|
||||
|
||||
print(
|
||||
"IMAGE WITH TRANSPARENCY, NO MASK, expect outpainting, got ",
|
||||
get_canvas_generation_mode(
|
||||
init_img_partial_transparency, init_mask_no_mask
|
||||
),
|
||||
)
|
||||
|
||||
print(
|
||||
"FULLY TRANSPARENT IMAGE NO MASK, expect txt2img, got ",
|
||||
get_canvas_generation_mode(init_img_full_transparency, init_mask_no_mask),
|
||||
)
|
||||
|
||||
print(
|
||||
"OPAQUE IMAGE, WITH MASK, expect inpainting, got ",
|
||||
get_canvas_generation_mode(init_img_opaque, init_mask_has_mask),
|
||||
)
|
||||
|
||||
print(
|
||||
"IMAGE WITH TRANSPARENCY, WITH MASK, expect outpainting, got ",
|
||||
get_canvas_generation_mode(
|
||||
init_img_partial_transparency, init_mask_has_mask
|
||||
),
|
||||
)
|
||||
|
||||
print(
|
||||
"FULLY TRANSPARENT IMAGE WITH MASK, expect txt2img, got ",
|
||||
get_canvas_generation_mode(init_img_full_transparency, init_mask_has_mask),
|
||||
)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
71
backend/modules/parameters.py
Normal file
@@ -0,0 +1,71 @@
|
||||
from backend.modules.parse_seed_weights import parse_seed_weights
|
||||
import argparse
|
||||
|
||||
SAMPLER_CHOICES = [
|
||||
"ddim",
|
||||
"k_dpm_2_a",
|
||||
"k_dpm_2",
|
||||
"k_dpmpp_2_a",
|
||||
"k_dpmpp_2",
|
||||
"k_euler_a",
|
||||
"k_euler",
|
||||
"k_heun",
|
||||
"k_lms",
|
||||
"plms",
|
||||
]
|
||||
|
||||
|
||||
def parameters_to_command(params):
|
||||
"""
|
||||
Converts a dict of parameters into an `invoke.py` REPL command.
|
||||
"""
|
||||
|
||||
switches = list()
|
||||
|
||||
if "prompt" in params:
|
||||
switches.append(f'"{params["prompt"]}"')
|
||||
if "steps" in params:
|
||||
switches.append(f'-s {params["steps"]}')
|
||||
if "seed" in params:
|
||||
switches.append(f'-S {params["seed"]}')
|
||||
if "width" in params:
|
||||
switches.append(f'-W {params["width"]}')
|
||||
if "height" in params:
|
||||
switches.append(f'-H {params["height"]}')
|
||||
if "cfg_scale" in params:
|
||||
switches.append(f'-C {params["cfg_scale"]}')
|
||||
if "sampler_name" in params:
|
||||
switches.append(f'-A {params["sampler_name"]}')
|
||||
if "seamless" in params and params["seamless"] == True:
|
||||
switches.append(f"--seamless")
|
||||
if "hires_fix" in params and params["hires_fix"] == True:
|
||||
switches.append(f"--hires")
|
||||
if "init_img" in params and len(params["init_img"]) > 0:
|
||||
switches.append(f'-I {params["init_img"]}')
|
||||
if "init_mask" in params and len(params["init_mask"]) > 0:
|
||||
switches.append(f'-M {params["init_mask"]}')
|
||||
if "init_color" in params and len(params["init_color"]) > 0:
|
||||
switches.append(f'--init_color {params["init_color"]}')
|
||||
if "strength" in params and "init_img" in params:
|
||||
switches.append(f'-f {params["strength"]}')
|
||||
if "fit" in params and params["fit"] == True:
|
||||
switches.append(f"--fit")
|
||||
if "facetool" in params:
|
||||
switches.append(f'-ft {params["facetool"]}')
|
||||
if "facetool_strength" in params and params["facetool_strength"]:
|
||||
switches.append(f'-G {params["facetool_strength"]}')
|
||||
elif "gfpgan_strength" in params and params["gfpgan_strength"]:
|
||||
switches.append(f'-G {params["gfpgan_strength"]}')
|
||||
if "codeformer_fidelity" in params:
|
||||
switches.append(f'-cf {params["codeformer_fidelity"]}')
|
||||
if "upscale" in params and params["upscale"]:
|
||||
switches.append(f'-U {params["upscale"][0]} {params["upscale"][1]}')
|
||||
if "variation_amount" in params and params["variation_amount"] > 0:
|
||||
switches.append(f'-v {params["variation_amount"]}')
|
||||
if "with_variations" in params:
|
||||
seed_weight_pairs = ",".join(
|
||||
f"{seed}:{weight}" for seed, weight in params["with_variations"]
|
||||
)
|
||||
switches.append(f"-V {seed_weight_pairs}")
|
||||
|
||||
return " ".join(switches)
|
||||
47
backend/modules/parse_seed_weights.py
Normal file
@@ -0,0 +1,47 @@
|
||||
def parse_seed_weights(seed_weights):
|
||||
"""
|
||||
Accepts seed weights as string in "12345:0.1,23456:0.2,3456:0.3" format
|
||||
Validates them
|
||||
If valid: returns as [[12345, 0.1], [23456, 0.2], [3456, 0.3]]
|
||||
If invalid: returns False
|
||||
"""
|
||||
|
||||
# Must be a string
|
||||
if not isinstance(seed_weights, str):
|
||||
return False
|
||||
# String must not be empty
|
||||
if len(seed_weights) == 0:
|
||||
return False
|
||||
|
||||
pairs = []
|
||||
|
||||
for pair in seed_weights.split(","):
|
||||
split_values = pair.split(":")
|
||||
|
||||
# Seed and weight are required
|
||||
if len(split_values) != 2:
|
||||
return False
|
||||
|
||||
if len(split_values[0]) == 0 or len(split_values[1]) == 0:
|
||||
return False
|
||||
|
||||
# Try casting the seed to int and weight to float
|
||||
try:
|
||||
seed = int(split_values[0])
|
||||
weight = float(split_values[1])
|
||||
except ValueError:
|
||||
return False
|
||||
|
||||
# Seed must be 0 or above
|
||||
if not seed >= 0:
|
||||
return False
|
||||
|
||||
# Weight must be between 0 and 1
|
||||
if not (weight >= 0 and weight <= 1):
|
||||
return False
|
||||
|
||||
# This pair is valid
|
||||
pairs.append([seed, weight])
|
||||
|
||||
# All pairs are valid
|
||||
return pairs
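A quick sketch of the validator above in action (assuming the repository root is on PYTHONPATH):

```py
# Sketch: valid and invalid seed-weight strings.
from backend.modules.parse_seed_weights import parse_seed_weights

print(parse_seed_weights("12345:0.1,23456:0.2,3456:0.3"))
# [[12345, 0.1], [23456, 0.2], [3456, 0.3]]

print(parse_seed_weights("12345:1.5"))  # weight out of range -> False
print(parse_seed_weights(""))           # empty string -> False
```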
|
||||
BIN
backend/modules/test_images/init-img_full_transparency.png
Normal file
|
After Width: | Height: | Size: 2.7 KiB |
BIN
backend/modules/test_images/init-img_opaque.png
Normal file
|
After Width: | Height: | Size: 292 KiB |
BIN
backend/modules/test_images/init-img_partial_transparency.png
Normal file
|
After Width: | Height: | Size: 164 KiB |
BIN
backend/modules/test_images/init-mask_has_mask.png
Normal file
|
After Width: | Height: | Size: 9.5 KiB |
BIN
backend/modules/test_images/init-mask_no_mask.png
Normal file
|
After Width: | Height: | Size: 3.4 KiB |
80
configs/INITIAL_MODELS.yaml
Normal file
@@ -0,0 +1,80 @@
|
||||
stable-diffusion-1.5:
|
||||
description: The newest Stable Diffusion version 1.5 weight file (4.27 GB)
|
||||
repo_id: runwayml/stable-diffusion-v1-5
|
||||
config: v1-inference.yaml
|
||||
file: v1-5-pruned-emaonly.ckpt
|
||||
recommended: true
|
||||
width: 512
|
||||
height: 512
|
||||
inpainting-1.5:
|
||||
description: RunwayML SD 1.5 model optimized for inpainting (4.27 GB)
|
||||
repo_id: runwayml/stable-diffusion-inpainting
|
||||
config: v1-inpainting-inference.yaml
|
||||
file: sd-v1-5-inpainting.ckpt
|
||||
recommended: True
|
||||
width: 512
|
||||
height: 512
|
||||
ft-mse-improved-autoencoder-840000:
|
||||
description: StabilityAI improved autoencoder fine-tuned for human faces (recommended; 335 MB)
|
||||
repo_id: stabilityai/sd-vae-ft-mse-original
|
||||
config: VAE/default
|
||||
file: vae-ft-mse-840000-ema-pruned.ckpt
|
||||
recommended: True
|
||||
width: 512
|
||||
height: 512
|
||||
stable-diffusion-1.4:
|
||||
description: The original Stable Diffusion version 1.4 weight file (4.27 GB)
|
||||
repo_id: CompVis/stable-diffusion-v-1-4-original
|
||||
config: v1-inference.yaml
|
||||
file: sd-v1-4.ckpt
|
||||
recommended: False
|
||||
width: 512
|
||||
height: 512
|
||||
waifu-diffusion-1.3:
|
||||
description: Stable Diffusion 1.4 fine-tuned on anime-styled images (4.27 GB)
|
||||
repo_id: hakurei/waifu-diffusion-v1-3
|
||||
config: v1-inference.yaml
|
||||
file: model-epoch09-float32.ckpt
|
||||
recommended: False
|
||||
width: 512
|
||||
height: 512
|
||||
trinart-2.0:
|
||||
description: An SD model finetuned with ~40,000 assorted high resolution manga/anime-style pictures (2.13 GB)
|
||||
repo_id: naclbit/trinart_stable_diffusion_v2
|
||||
config: v1-inference.yaml
|
||||
file: trinart2_step95000.ckpt
|
||||
recommended: False
|
||||
width: 512
|
||||
height: 512
|
||||
trinart_characters-1.0:
|
||||
description: An SD model finetuned with 19.2M anime/manga style images (2.13 GB)
|
||||
repo_id: naclbit/trinart_characters_19.2m_stable_diffusion_v1
|
||||
config: v1-inference.yaml
|
||||
file: trinart_characters_it4_v1.ckpt
|
||||
recommended: False
|
||||
width: 512
|
||||
height: 512
|
||||
trinart_vae:
|
||||
description: Custom autoencoder for trinart_characters
|
||||
repo_id: naclbit/trinart_characters_19.2m_stable_diffusion_v1
|
||||
config: VAE/trinart
|
||||
file: autoencoder_fix_kl-f8-trinart_characters.ckpt
|
||||
recommended: False
|
||||
width: 512
|
||||
height: 512
|
||||
papercut-1.0:
|
||||
description: SD 1.5 fine-tuned for papercut art (use "PaperCut" in your prompts) (2.13 GB)
|
||||
repo_id: Fictiverse/Stable_Diffusion_PaperCut_Model
|
||||
config: v1-inference.yaml
|
||||
file: PaperCut_v1.ckpt
|
||||
recommended: False
|
||||
width: 512
|
||||
height: 512
|
||||
voxel_art-1.0:
|
||||
description: Stable Diffusion trained on voxel art (use "VoxelArt" in your prompts) (4.27 GB)
|
||||
repo_id: Fictiverse/Stable_Diffusion_VoxelArt_Model
|
||||
config: v1-inference.yaml
|
||||
file: VoxelArt_v1.ckpt
|
||||
recommended: False
|
||||
width: 512
|
||||
height: 512
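As an illustration of how a registry in this shape can be consumed (this is not the repository's model-install code; it assumes PyYAML is installed and that you run it from the repository root):

```py
# Illustrative reader for the model registry above.
import yaml

with open("configs/INITIAL_MODELS.yaml") as f:
    models = yaml.safe_load(f)

# List the entries flagged as recommended, with their Hugging Face repo ids.
for name, info in models.items():
    if info.get("recommended"):
        print(f"{name}: {info['description']}  [{info['repo_id']}]")
```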
|
||||
@@ -1,54 +0,0 @@
|
||||
model:
|
||||
base_learning_rate: 4.5e-6
|
||||
target: ldm.models.autoencoder.AutoencoderKL
|
||||
params:
|
||||
monitor: "val/rec_loss"
|
||||
embed_dim: 16
|
||||
lossconfig:
|
||||
target: ldm.modules.losses.LPIPSWithDiscriminator
|
||||
params:
|
||||
disc_start: 50001
|
||||
kl_weight: 0.000001
|
||||
disc_weight: 0.5
|
||||
|
||||
ddconfig:
|
||||
double_z: True
|
||||
z_channels: 16
|
||||
resolution: 256
|
||||
in_channels: 3
|
||||
out_ch: 3
|
||||
ch: 128
|
||||
ch_mult: [ 1,1,2,2,4] # num_down = len(ch_mult)-1
|
||||
num_res_blocks: 2
|
||||
attn_resolutions: [16]
|
||||
dropout: 0.0
|
||||
|
||||
|
||||
data:
|
||||
target: main.DataModuleFromConfig
|
||||
params:
|
||||
batch_size: 12
|
||||
wrap: True
|
||||
train:
|
||||
target: ldm.data.imagenet.ImageNetSRTrain
|
||||
params:
|
||||
size: 256
|
||||
degradation: pil_nearest
|
||||
validation:
|
||||
target: ldm.data.imagenet.ImageNetSRValidation
|
||||
params:
|
||||
size: 256
|
||||
degradation: pil_nearest
|
||||
|
||||
lightning:
|
||||
callbacks:
|
||||
image_logger:
|
||||
target: main.ImageLogger
|
||||
params:
|
||||
batch_frequency: 1000
|
||||
max_images: 8
|
||||
increase_log_steps: True
|
||||
|
||||
trainer:
|
||||
benchmark: True
|
||||
accumulate_grad_batches: 2
|
||||
@@ -1,53 +0,0 @@
|
||||
model:
|
||||
base_learning_rate: 4.5e-6
|
||||
target: ldm.models.autoencoder.AutoencoderKL
|
||||
params:
|
||||
monitor: "val/rec_loss"
|
||||
embed_dim: 4
|
||||
lossconfig:
|
||||
target: ldm.modules.losses.LPIPSWithDiscriminator
|
||||
params:
|
||||
disc_start: 50001
|
||||
kl_weight: 0.000001
|
||||
disc_weight: 0.5
|
||||
|
||||
ddconfig:
|
||||
double_z: True
|
||||
z_channels: 4
|
||||
resolution: 256
|
||||
in_channels: 3
|
||||
out_ch: 3
|
||||
ch: 128
|
||||
ch_mult: [ 1,2,4,4 ] # num_down = len(ch_mult)-1
|
||||
num_res_blocks: 2
|
||||
attn_resolutions: [ ]
|
||||
dropout: 0.0
|
||||
|
||||
data:
|
||||
target: main.DataModuleFromConfig
|
||||
params:
|
||||
batch_size: 12
|
||||
wrap: True
|
||||
train:
|
||||
target: ldm.data.imagenet.ImageNetSRTrain
|
||||
params:
|
||||
size: 256
|
||||
degradation: pil_nearest
|
||||
validation:
|
||||
target: ldm.data.imagenet.ImageNetSRValidation
|
||||
params:
|
||||
size: 256
|
||||
degradation: pil_nearest
|
||||
|
||||
lightning:
|
||||
callbacks:
|
||||
image_logger:
|
||||
target: main.ImageLogger
|
||||
params:
|
||||
batch_frequency: 1000
|
||||
max_images: 8
|
||||
increase_log_steps: True
|
||||
|
||||
trainer:
|
||||
benchmark: True
|
||||
accumulate_grad_batches: 2
|
||||
@@ -1,54 +0,0 @@
|
||||
model:
|
||||
base_learning_rate: 4.5e-6
|
||||
target: ldm.models.autoencoder.AutoencoderKL
|
||||
params:
|
||||
monitor: "val/rec_loss"
|
||||
embed_dim: 3
|
||||
lossconfig:
|
||||
target: ldm.modules.losses.LPIPSWithDiscriminator
|
||||
params:
|
||||
disc_start: 50001
|
||||
kl_weight: 0.000001
|
||||
disc_weight: 0.5
|
||||
|
||||
ddconfig:
|
||||
double_z: True
|
||||
z_channels: 3
|
||||
resolution: 256
|
||||
in_channels: 3
|
||||
out_ch: 3
|
||||
ch: 128
|
||||
ch_mult: [ 1,2,4 ] # num_down = len(ch_mult)-1
|
||||
num_res_blocks: 2
|
||||
attn_resolutions: [ ]
|
||||
dropout: 0.0
|
||||
|
||||
|
||||
data:
|
||||
target: main.DataModuleFromConfig
|
||||
params:
|
||||
batch_size: 12
|
||||
wrap: True
|
||||
train:
|
||||
target: ldm.data.imagenet.ImageNetSRTrain
|
||||
params:
|
||||
size: 256
|
||||
degradation: pil_nearest
|
||||
validation:
|
||||
target: ldm.data.imagenet.ImageNetSRValidation
|
||||
params:
|
||||
size: 256
|
||||
degradation: pil_nearest
|
||||
|
||||
lightning:
|
||||
callbacks:
|
||||
image_logger:
|
||||
target: main.ImageLogger
|
||||
params:
|
||||
batch_frequency: 1000
|
||||
max_images: 8
|
||||
increase_log_steps: True
|
||||
|
||||
trainer:
|
||||
benchmark: True
|
||||
accumulate_grad_batches: 2
|
||||
@@ -1,53 +0,0 @@
|
||||
model:
|
||||
base_learning_rate: 4.5e-6
|
||||
target: ldm.models.autoencoder.AutoencoderKL
|
||||
params:
|
||||
monitor: "val/rec_loss"
|
||||
embed_dim: 64
|
||||
lossconfig:
|
||||
target: ldm.modules.losses.LPIPSWithDiscriminator
|
||||
params:
|
||||
disc_start: 50001
|
||||
kl_weight: 0.000001
|
||||
disc_weight: 0.5
|
||||
|
||||
ddconfig:
|
||||
double_z: True
|
||||
z_channels: 64
|
||||
resolution: 256
|
||||
in_channels: 3
|
||||
out_ch: 3
|
||||
ch: 128
|
||||
ch_mult: [ 1,1,2,2,4,4] # num_down = len(ch_mult)-1
|
||||
num_res_blocks: 2
|
||||
attn_resolutions: [16,8]
|
||||
dropout: 0.0
|
||||
|
||||
data:
|
||||
target: main.DataModuleFromConfig
|
||||
params:
|
||||
batch_size: 12
|
||||
wrap: True
|
||||
train:
|
||||
target: ldm.data.imagenet.ImageNetSRTrain
|
||||
params:
|
||||
size: 256
|
||||
degradation: pil_nearest
|
||||
validation:
|
||||
target: ldm.data.imagenet.ImageNetSRValidation
|
||||
params:
|
||||
size: 256
|
||||
degradation: pil_nearest
|
||||
|
||||
lightning:
|
||||
callbacks:
|
||||
image_logger:
|
||||
target: main.ImageLogger
|
||||
params:
|
||||
batch_frequency: 1000
|
||||
max_images: 8
|
||||
increase_log_steps: True
|
||||
|
||||
trainer:
|
||||
benchmark: True
|
||||
accumulate_grad_batches: 2
|
||||
@@ -1,86 +0,0 @@
|
||||
model:
|
||||
base_learning_rate: 2.0e-06
|
||||
target: ldm.models.diffusion.ddpm.LatentDiffusion
|
||||
params:
|
||||
linear_start: 0.0015
|
||||
linear_end: 0.0195
|
||||
num_timesteps_cond: 1
|
||||
log_every_t: 200
|
||||
timesteps: 1000
|
||||
first_stage_key: image
|
||||
image_size: 64
|
||||
channels: 3
|
||||
monitor: val/loss_simple_ema
|
||||
|
||||
unet_config:
|
||||
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
|
||||
params:
|
||||
image_size: 64
|
||||
in_channels: 3
|
||||
out_channels: 3
|
||||
model_channels: 224
|
||||
attention_resolutions:
|
||||
# note: this isn't actually the resolution but
|
||||
# the downsampling factor, i.e. this corresponds to
|
||||
# attention on spatial resolution 8,16,32, as the
|
||||
# spatial resolution of the latents is 64 for f4
|
||||
- 8
|
||||
- 4
|
||||
- 2
|
||||
num_res_blocks: 2
|
||||
channel_mult:
|
||||
- 1
|
||||
- 2
|
||||
- 3
|
||||
- 4
|
||||
num_head_channels: 32
|
||||
first_stage_config:
|
||||
target: ldm.models.autoencoder.VQModelInterface
|
||||
params:
|
||||
embed_dim: 3
|
||||
n_embed: 8192
|
||||
ckpt_path: models/first_stage_models/vq-f4/model.ckpt
|
||||
ddconfig:
|
||||
double_z: false
|
||||
z_channels: 3
|
||||
resolution: 256
|
||||
in_channels: 3
|
||||
out_ch: 3
|
||||
ch: 128
|
||||
ch_mult:
|
||||
- 1
|
||||
- 2
|
||||
- 4
|
||||
num_res_blocks: 2
|
||||
attn_resolutions: []
|
||||
dropout: 0.0
|
||||
lossconfig:
|
||||
target: torch.nn.Identity
|
||||
cond_stage_config: __is_unconditional__
|
||||
data:
|
||||
target: main.DataModuleFromConfig
|
||||
params:
|
||||
batch_size: 48
|
||||
num_workers: 5
|
||||
wrap: false
|
||||
train:
|
||||
target: taming.data.faceshq.CelebAHQTrain
|
||||
params:
|
||||
size: 256
|
||||
validation:
|
||||
target: taming.data.faceshq.CelebAHQValidation
|
||||
params:
|
||||
size: 256
|
||||
|
||||
|
||||
lightning:
|
||||
callbacks:
|
||||
image_logger:
|
||||
target: main.ImageLogger
|
||||
params:
|
||||
batch_frequency: 5000
|
||||
max_images: 8
|
||||
increase_log_steps: False
|
||||
|
||||
trainer:
|
||||
benchmark: True
|
||||
@@ -1,98 +0,0 @@
|
||||
model:
|
||||
base_learning_rate: 1.0e-06
|
||||
target: ldm.models.diffusion.ddpm.LatentDiffusion
|
||||
params:
|
||||
linear_start: 0.0015
|
||||
linear_end: 0.0195
|
||||
num_timesteps_cond: 1
|
||||
log_every_t: 200
|
||||
timesteps: 1000
|
||||
first_stage_key: image
|
||||
cond_stage_key: class_label
|
||||
image_size: 32
|
||||
channels: 4
|
||||
cond_stage_trainable: true
|
||||
conditioning_key: crossattn
|
||||
monitor: val/loss_simple_ema
|
||||
unet_config:
|
||||
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
|
||||
params:
|
||||
image_size: 32
|
||||
in_channels: 4
|
||||
out_channels: 4
|
||||
model_channels: 256
|
||||
attention_resolutions:
|
||||
# note: this isn't actually the resolution but
|
||||
# the downsampling factor, i.e. this corresponds to
|
||||
# attention on spatial resolution 8,16,32, as the
|
||||
# spatial resolution of the latents is 32 for f8
|
||||
- 4
|
||||
- 2
|
||||
- 1
|
||||
num_res_blocks: 2
|
||||
channel_mult:
|
||||
- 1
|
||||
- 2
|
||||
- 4
|
||||
num_head_channels: 32
|
||||
use_spatial_transformer: true
|
||||
transformer_depth: 1
|
||||
context_dim: 512
|
||||
first_stage_config:
|
||||
target: ldm.models.autoencoder.VQModelInterface
|
||||
params:
|
||||
embed_dim: 4
|
||||
n_embed: 16384
|
||||
ckpt_path: configs/first_stage_models/vq-f8/model.yaml
|
||||
ddconfig:
|
||||
double_z: false
|
||||
z_channels: 4
|
||||
resolution: 256
|
||||
in_channels: 3
|
||||
out_ch: 3
|
||||
ch: 128
|
||||
ch_mult:
|
||||
- 1
|
||||
- 2
|
||||
- 2
|
||||
- 4
|
||||
num_res_blocks: 2
|
||||
attn_resolutions:
|
||||
- 32
|
||||
dropout: 0.0
|
||||
lossconfig:
|
||||
target: torch.nn.Identity
|
||||
cond_stage_config:
|
||||
target: ldm.modules.encoders.modules.ClassEmbedder
|
||||
params:
|
||||
embed_dim: 512
|
||||
key: class_label
|
||||
data:
|
||||
target: main.DataModuleFromConfig
|
||||
params:
|
||||
batch_size: 64
|
||||
num_workers: 12
|
||||
wrap: false
|
||||
train:
|
||||
target: ldm.data.imagenet.ImageNetTrain
|
||||
params:
|
||||
config:
|
||||
size: 256
|
||||
validation:
|
||||
target: ldm.data.imagenet.ImageNetValidation
|
||||
params:
|
||||
config:
|
||||
size: 256
|
||||
|
||||
|
||||
lightning:
|
||||
callbacks:
|
||||
image_logger:
|
||||
target: main.ImageLogger
|
||||
params:
|
||||
batch_frequency: 5000
|
||||
max_images: 8
|
||||
increase_log_steps: False
|
||||
|
||||
trainer:
|
||||
benchmark: True
|
||||
@@ -1,68 +0,0 @@
|
||||
model:
|
||||
base_learning_rate: 0.0001
|
||||
target: ldm.models.diffusion.ddpm.LatentDiffusion
|
||||
params:
|
||||
linear_start: 0.0015
|
||||
linear_end: 0.0195
|
||||
num_timesteps_cond: 1
|
||||
log_every_t: 200
|
||||
timesteps: 1000
|
||||
first_stage_key: image
|
||||
cond_stage_key: class_label
|
||||
image_size: 64
|
||||
channels: 3
|
||||
cond_stage_trainable: true
|
||||
conditioning_key: crossattn
|
||||
monitor: val/loss
|
||||
use_ema: False
|
||||
|
||||
unet_config:
|
||||
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
|
||||
params:
|
||||
image_size: 64
|
||||
in_channels: 3
|
||||
out_channels: 3
|
||||
model_channels: 192
|
||||
attention_resolutions:
|
||||
- 8
|
||||
- 4
|
||||
- 2
|
||||
num_res_blocks: 2
|
||||
channel_mult:
|
||||
- 1
|
||||
- 2
|
||||
- 3
|
||||
- 5
|
||||
num_heads: 1
|
||||
use_spatial_transformer: true
|
||||
transformer_depth: 1
|
||||
context_dim: 512
|
||||
|
||||
first_stage_config:
|
||||
target: ldm.models.autoencoder.VQModelInterface
|
||||
params:
|
||||
embed_dim: 3
|
||||
n_embed: 8192
|
||||
ddconfig:
|
||||
double_z: false
|
||||
z_channels: 3
|
||||
resolution: 256
|
||||
in_channels: 3
|
||||
out_ch: 3
|
||||
ch: 128
|
||||
ch_mult:
|
||||
- 1
|
||||
- 2
|
||||
- 4
|
||||
num_res_blocks: 2
|
||||
attn_resolutions: []
|
||||
dropout: 0.0
|
||||
lossconfig:
|
||||
target: torch.nn.Identity
|
||||
|
||||
cond_stage_config:
|
||||
target: ldm.modules.encoders.modules.ClassEmbedder
|
||||
params:
|
||||
n_classes: 1001
|
||||
embed_dim: 512
|
||||
key: class_label
|
||||
@@ -1,85 +0,0 @@
|
||||
model:
|
||||
base_learning_rate: 2.0e-06
|
||||
target: ldm.models.diffusion.ddpm.LatentDiffusion
|
||||
params:
|
||||
linear_start: 0.0015
|
||||
linear_end: 0.0195
|
||||
num_timesteps_cond: 1
|
||||
log_every_t: 200
|
||||
timesteps: 1000
|
||||
first_stage_key: image
|
||||
image_size: 64
|
||||
channels: 3
|
||||
monitor: val/loss_simple_ema
|
||||
unet_config:
|
||||
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
|
||||
params:
|
||||
image_size: 64
|
||||
in_channels: 3
|
||||
out_channels: 3
|
||||
model_channels: 224
|
||||
attention_resolutions:
|
||||
# note: this isn't actually the resolution but
|
||||
# the downsampling factor, i.e. this corresponds to
|
||||
# attention on spatial resolution 8,16,32, as the
|
||||
# spatial resolution of the latents is 64 for f4
|
||||
- 8
|
||||
- 4
|
||||
- 2
|
||||
num_res_blocks: 2
|
||||
channel_mult:
|
||||
- 1
|
||||
- 2
|
||||
- 3
|
||||
- 4
|
||||
num_head_channels: 32
|
||||
first_stage_config:
|
||||
target: ldm.models.autoencoder.VQModelInterface
|
||||
params:
|
||||
embed_dim: 3
|
||||
n_embed: 8192
|
||||
ckpt_path: configs/first_stage_models/vq-f4/model.yaml
|
||||
ddconfig:
|
||||
double_z: false
|
||||
z_channels: 3
|
||||
resolution: 256
|
||||
in_channels: 3
|
||||
out_ch: 3
|
||||
ch: 128
|
||||
ch_mult:
|
||||
- 1
|
||||
- 2
|
||||
- 4
|
||||
num_res_blocks: 2
|
||||
attn_resolutions: []
|
||||
dropout: 0.0
|
||||
lossconfig:
|
||||
target: torch.nn.Identity
|
||||
cond_stage_config: __is_unconditional__
|
||||
data:
|
||||
target: main.DataModuleFromConfig
|
||||
params:
|
||||
batch_size: 42
|
||||
num_workers: 5
|
||||
wrap: false
|
||||
train:
|
||||
target: taming.data.faceshq.FFHQTrain
|
||||
params:
|
||||
size: 256
|
||||
validation:
|
||||
target: taming.data.faceshq.FFHQValidation
|
||||
params:
|
||||
size: 256
|
||||
|
||||
|
||||
lightning:
|
||||
callbacks:
|
||||
image_logger:
|
||||
target: main.ImageLogger
|
||||
params:
|
||||
batch_frequency: 5000
|
||||
max_images: 8
|
||||
increase_log_steps: False
|
||||
|
||||
trainer:
|
||||
benchmark: True
|
||||
@@ -1,85 +0,0 @@
|
||||
model:
|
||||
base_learning_rate: 2.0e-06
|
||||
target: ldm.models.diffusion.ddpm.LatentDiffusion
|
||||
params:
|
||||
linear_start: 0.0015
|
||||
linear_end: 0.0195
|
||||
num_timesteps_cond: 1
|
||||
log_every_t: 200
|
||||
timesteps: 1000
|
||||
first_stage_key: image
|
||||
image_size: 64
|
||||
channels: 3
|
||||
monitor: val/loss_simple_ema
|
||||
unet_config:
|
||||
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
|
||||
params:
|
||||
image_size: 64
|
||||
in_channels: 3
|
||||
out_channels: 3
|
||||
model_channels: 224
|
||||
attention_resolutions:
|
||||
# note: this isn't actually the resolution but
|
||||
# the downsampling factor, i.e. this corresponds to
|
||||
# attention on spatial resolution 8,16,32, as the
|
||||
# spatial resolution of the latents is 64 for f4
|
||||
- 8
|
||||
- 4
|
||||
- 2
|
||||
num_res_blocks: 2
|
||||
channel_mult:
|
||||
- 1
|
||||
- 2
|
||||
- 3
|
||||
- 4
|
||||
num_head_channels: 32
|
||||
first_stage_config:
|
||||
target: ldm.models.autoencoder.VQModelInterface
|
||||
params:
|
||||
ckpt_path: configs/first_stage_models/vq-f4/model.yaml
|
||||
embed_dim: 3
|
||||
n_embed: 8192
|
||||
ddconfig:
|
||||
double_z: false
|
||||
z_channels: 3
|
||||
resolution: 256
|
||||
in_channels: 3
|
||||
out_ch: 3
|
||||
ch: 128
|
||||
ch_mult:
|
||||
- 1
|
||||
- 2
|
||||
- 4
|
||||
num_res_blocks: 2
|
||||
attn_resolutions: []
|
||||
dropout: 0.0
|
||||
lossconfig:
|
||||
target: torch.nn.Identity
|
||||
cond_stage_config: __is_unconditional__
|
||||
data:
|
||||
target: main.DataModuleFromConfig
|
||||
params:
|
||||
batch_size: 48
|
||||
num_workers: 5
|
||||
wrap: false
|
||||
train:
|
||||
target: ldm.data.lsun.LSUNBedroomsTrain
|
||||
params:
|
||||
size: 256
|
||||
validation:
|
||||
target: ldm.data.lsun.LSUNBedroomsValidation
|
||||
params:
|
||||
size: 256
|
||||
|
||||
|
||||
lightning:
|
||||
callbacks:
|
||||
image_logger:
|
||||
target: main.ImageLogger
|
||||
params:
|
||||
batch_frequency: 5000
|
||||
max_images: 8
|
||||
increase_log_steps: False
|
||||
|
||||
trainer:
|
||||
benchmark: True
|
||||
@@ -1,91 +0,0 @@
|
||||
model:
|
||||
base_learning_rate: 5.0e-5 # set to target_lr by starting main.py with '--scale_lr False'
|
||||
target: ldm.models.diffusion.ddpm.LatentDiffusion
|
||||
params:
|
||||
linear_start: 0.0015
|
||||
linear_end: 0.0155
|
||||
num_timesteps_cond: 1
|
||||
log_every_t: 200
|
||||
timesteps: 1000
|
||||
loss_type: l1
|
||||
first_stage_key: "image"
|
||||
cond_stage_key: "image"
|
||||
image_size: 32
|
||||
channels: 4
|
||||
cond_stage_trainable: False
|
||||
concat_mode: False
|
||||
scale_by_std: True
|
||||
monitor: 'val/loss_simple_ema'
|
||||
|
||||
scheduler_config: # 10000 warmup steps
|
||||
target: ldm.lr_scheduler.LambdaLinearScheduler
|
||||
params:
|
||||
warm_up_steps: [10000]
|
||||
cycle_lengths: [10000000000000]
|
||||
f_start: [1.e-6]
|
||||
f_max: [1.]
|
||||
f_min: [ 1.]
|
||||
|
||||
unet_config:
|
||||
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
|
||||
params:
|
||||
image_size: 32
|
||||
in_channels: 4
|
||||
out_channels: 4
|
||||
model_channels: 192
|
||||
attention_resolutions: [ 1, 2, 4, 8 ] # 32, 16, 8, 4
|
||||
num_res_blocks: 2
|
||||
channel_mult: [ 1,2,2,4,4 ] # 32, 16, 8, 4, 2
|
||||
num_heads: 8
|
||||
use_scale_shift_norm: True
|
||||
resblock_updown: True
|
||||
|
||||
first_stage_config:
|
||||
target: ldm.models.autoencoder.AutoencoderKL
|
||||
params:
|
||||
embed_dim: 4
|
||||
monitor: "val/rec_loss"
|
||||
ckpt_path: "models/first_stage_models/kl-f8/model.ckpt"
|
||||
ddconfig:
|
||||
double_z: True
|
||||
z_channels: 4
|
||||
resolution: 256
|
||||
in_channels: 3
|
||||
out_ch: 3
|
||||
ch: 128
|
||||
ch_mult: [ 1,2,4,4 ] # num_down = len(ch_mult)-1
|
||||
num_res_blocks: 2
|
||||
attn_resolutions: [ ]
|
||||
dropout: 0.0
|
||||
lossconfig:
|
||||
target: torch.nn.Identity
|
||||
|
||||
cond_stage_config: "__is_unconditional__"
|
||||
|
||||
data:
|
||||
target: main.DataModuleFromConfig
|
||||
params:
|
||||
batch_size: 96
|
||||
num_workers: 5
|
||||
wrap: False
|
||||
train:
|
||||
target: ldm.data.lsun.LSUNChurchesTrain
|
||||
params:
|
||||
size: 256
|
||||
validation:
|
||||
target: ldm.data.lsun.LSUNChurchesValidation
|
||||
params:
|
||||
size: 256
|
||||
|
||||
lightning:
|
||||
callbacks:
|
||||
image_logger:
|
||||
target: main.ImageLogger
|
||||
params:
|
||||
batch_frequency: 5000
|
||||
max_images: 8
|
||||
increase_log_steps: False
|
||||
|
||||
|
||||
trainer:
|
||||
benchmark: True
|
||||
@@ -1,71 +0,0 @@
|
||||
model:
|
||||
base_learning_rate: 5.0e-05
|
||||
target: ldm.models.diffusion.ddpm.LatentDiffusion
|
||||
params:
|
||||
linear_start: 0.00085
|
||||
linear_end: 0.012
|
||||
num_timesteps_cond: 1
|
||||
log_every_t: 200
|
||||
timesteps: 1000
|
||||
first_stage_key: image
|
||||
cond_stage_key: caption
|
||||
image_size: 32
|
||||
channels: 4
|
||||
cond_stage_trainable: true
|
||||
conditioning_key: crossattn
|
||||
monitor: val/loss_simple_ema
|
||||
scale_factor: 0.18215
|
||||
use_ema: False
|
||||
|
||||
unet_config:
|
||||
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
|
||||
params:
|
||||
image_size: 32
|
||||
in_channels: 4
|
||||
out_channels: 4
|
||||
model_channels: 320
|
||||
attention_resolutions:
|
||||
- 4
|
||||
- 2
|
||||
- 1
|
||||
num_res_blocks: 2
|
||||
channel_mult:
|
||||
- 1
|
||||
- 2
|
||||
- 4
|
||||
- 4
|
||||
num_heads: 8
|
||||
use_spatial_transformer: true
|
||||
transformer_depth: 1
|
||||
context_dim: 1280
|
||||
use_checkpoint: true
|
||||
legacy: False
|
||||
|
||||
first_stage_config:
|
||||
target: ldm.models.autoencoder.AutoencoderKL
|
||||
params:
|
||||
embed_dim: 4
|
||||
monitor: val/rec_loss
|
||||
ddconfig:
|
||||
double_z: true
|
||||
z_channels: 4
|
||||
resolution: 256
|
||||
in_channels: 3
|
||||
out_ch: 3
|
||||
ch: 128
|
||||
ch_mult:
|
||||
- 1
|
||||
- 2
|
||||
- 4
|
||||
- 4
|
||||
num_res_blocks: 2
|
||||
attn_resolutions: []
|
||||
dropout: 0.0
|
||||
lossconfig:
|
||||
target: torch.nn.Identity
|
||||
|
||||
cond_stage_config:
|
||||
target: ldm.modules.encoders.modules.BERTEmbedder
|
||||
params:
|
||||
n_embed: 1280
|
||||
n_layer: 32
|
||||
27
configs/models.yaml.example
Normal file
@@ -0,0 +1,27 @@
|
# This file describes the alternative machine learning models
# available to the InvokeAI script.
#
# To add a new model, follow the examples below. Each
# model requires a model config file, a weights file,
# and the width and height of the images it
# was trained on.
stable-diffusion-1.5:
  description: The newest Stable Diffusion version 1.5 weight file (4.27 GB)
  weights: models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
  config: configs/stable-diffusion/v1-inference.yaml
  width: 512
  height: 512
  vae: ./models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  default: true
stable-diffusion-1.4:
  description: Stable Diffusion inference model version 1.4
  config: configs/stable-diffusion/v1-inference.yaml
  weights: models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  width: 512
  height: 512
inpainting-1.5:
  weights: models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt
  config: configs/stable-diffusion/v1-inpainting-inference.yaml
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  description: RunwayML SD 1.5 model optimized for inpainting
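As the header comments note, each entry needs a description, a model config file, a weights file, and the trained image size. A minimal sketch of an additional stanza, with a purely hypothetical model name and weight path (nothing below ships with the repository), might look like:

```yaml
# Hypothetical example entry; the name and the weights path are placeholders.
my-finetuned-model:
  description: Example custom checkpoint (placeholder)
  weights: models/ldm/stable-diffusion-v1/my-finetuned-model.ckpt
  config: configs/stable-diffusion/v1-inference.yaml
  width: 512
  height: 512
```

Presumably only one entry at a time should carry `default: true`.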
||||
@@ -1,68 +0,0 @@
|
||||
model:
|
||||
base_learning_rate: 0.0001
|
||||
target: ldm.models.diffusion.ddpm.LatentDiffusion
|
||||
params:
|
||||
linear_start: 0.0015
|
||||
linear_end: 0.015
|
||||
num_timesteps_cond: 1
|
||||
log_every_t: 200
|
||||
timesteps: 1000
|
||||
first_stage_key: jpg
|
||||
cond_stage_key: nix
|
||||
image_size: 48
|
||||
channels: 16
|
||||
cond_stage_trainable: false
|
||||
conditioning_key: crossattn
|
||||
monitor: val/loss_simple_ema
|
||||
scale_by_std: false
|
||||
scale_factor: 0.22765929
|
||||
unet_config:
|
||||
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
|
||||
params:
|
||||
image_size: 48
|
||||
in_channels: 16
|
||||
out_channels: 16
|
||||
model_channels: 448
|
||||
attention_resolutions:
|
||||
- 4
|
||||
- 2
|
||||
- 1
|
||||
num_res_blocks: 2
|
||||
channel_mult:
|
||||
- 1
|
||||
- 2
|
||||
- 3
|
||||
- 4
|
||||
use_scale_shift_norm: false
|
||||
resblock_updown: false
|
||||
num_head_channels: 32
|
||||
use_spatial_transformer: true
|
||||
transformer_depth: 1
|
||||
context_dim: 768
|
||||
use_checkpoint: true
|
||||
first_stage_config:
|
||||
target: ldm.models.autoencoder.AutoencoderKL
|
||||
params:
|
||||
monitor: val/rec_loss
|
||||
embed_dim: 16
|
||||
ddconfig:
|
||||
double_z: true
|
||||
z_channels: 16
|
||||
resolution: 256
|
||||
in_channels: 3
|
||||
out_ch: 3
|
||||
ch: 128
|
||||
ch_mult:
|
||||
- 1
|
||||
- 1
|
||||
- 2
|
||||
- 2
|
||||
- 4
|
||||
num_res_blocks: 2
|
||||
attn_resolutions:
|
||||
- 16
|
||||
dropout: 0.0
|
||||
lossconfig:
|
||||
target: torch.nn.Identity
|
||||
cond_stage_config:
|
||||
target: torch.nn.Identity
|
||||
803
configs/sd-concepts.txt
Normal file
@@ -0,0 +1,803 @@
|
||||
sd-concepts-library/001glitch-core
|
||||
sd-concepts-library/2814-roth
|
||||
sd-concepts-library/3d-female-cyborgs
|
||||
sd-concepts-library/4tnght
|
||||
sd-concepts-library/80s-anime-ai
|
||||
sd-concepts-library/80s-anime-ai-being
|
||||
sd-concepts-library/852style-girl
|
||||
sd-concepts-library/8bit
|
||||
sd-concepts-library/8sconception
|
||||
sd-concepts-library/Aflac-duck
|
||||
sd-concepts-library/Akitsuki
|
||||
sd-concepts-library/Atako
|
||||
sd-concepts-library/Exodus-Styling
|
||||
sd-concepts-library/RINGAO
|
||||
sd-concepts-library/a-female-hero-from-the-legend-of-mir
|
||||
sd-concepts-library/a-hat-kid
|
||||
sd-concepts-library/a-tale-of-two-empires
|
||||
sd-concepts-library/aadhav-face
|
||||
sd-concepts-library/aavegotchi
|
||||
sd-concepts-library/abby-face
|
||||
sd-concepts-library/abstract-concepts
|
||||
sd-concepts-library/accurate-angel
|
||||
sd-concepts-library/agm-style-nao
|
||||
sd-concepts-library/aj-fosik
|
||||
sd-concepts-library/alberto-mielgo
|
||||
sd-concepts-library/alex-portugal
|
||||
sd-concepts-library/alex-thumbnail-object-2000-steps
|
||||
sd-concepts-library/aleyna-tilki
|
||||
sd-concepts-library/alf
|
||||
sd-concepts-library/alicebeta
|
||||
sd-concepts-library/alien-avatar
|
||||
sd-concepts-library/alisa
|
||||
sd-concepts-library/all-rings-albuns
|
||||
sd-concepts-library/altvent
|
||||
sd-concepts-library/altyn-helmet
|
||||
sd-concepts-library/amine
|
||||
sd-concepts-library/amogus
|
||||
sd-concepts-library/anders-zorn
|
||||
sd-concepts-library/angus-mcbride-style
|
||||
sd-concepts-library/animalve3-1500seq
|
||||
sd-concepts-library/anime-background-style
|
||||
sd-concepts-library/anime-background-style-v2
|
||||
sd-concepts-library/anime-boy
|
||||
sd-concepts-library/anime-girl
|
||||
sd-concepts-library/anyXtronXredshift
|
||||
sd-concepts-library/anya-forger
|
||||
sd-concepts-library/apex-wingman
|
||||
sd-concepts-library/apulian-rooster-v0-1
|
||||
sd-concepts-library/arcane-face
|
||||
sd-concepts-library/arcane-style-jv
|
||||
sd-concepts-library/arcimboldo-style
|
||||
sd-concepts-library/armando-reveron-style
|
||||
sd-concepts-library/armor-concept
|
||||
sd-concepts-library/arq-render
|
||||
sd-concepts-library/art-brut
|
||||
sd-concepts-library/arthur1
|
||||
sd-concepts-library/artist-yukiko-kanagai
|
||||
sd-concepts-library/arwijn
|
||||
sd-concepts-library/ashiok
|
||||
sd-concepts-library/at-wolf-boy-object
|
||||
sd-concepts-library/atm-ant
|
||||
sd-concepts-library/atm-ant-2
|
||||
sd-concepts-library/axe-tattoo
|
||||
sd-concepts-library/ayush-spider-spr
|
||||
sd-concepts-library/azura-from-vibrant-venture
|
||||
sd-concepts-library/ba-shiroko
|
||||
sd-concepts-library/babau
|
||||
sd-concepts-library/babs-bunny
|
||||
sd-concepts-library/babushork
|
||||
sd-concepts-library/backrooms
|
||||
sd-concepts-library/bad_Hub_Hugh
|
||||
sd-concepts-library/bada-club
|
||||
sd-concepts-library/baldi
|
||||
sd-concepts-library/baluchitherian
|
||||
sd-concepts-library/bamse
|
||||
sd-concepts-library/bamse-og-kylling
|
||||
sd-concepts-library/bee
|
||||
sd-concepts-library/beholder
|
||||
sd-concepts-library/beldam
|
||||
sd-concepts-library/belen
|
||||
sd-concepts-library/bella-goth
|
||||
sd-concepts-library/belle-delphine
|
||||
sd-concepts-library/bert-muppet
|
||||
sd-concepts-library/better-collage3
|
||||
sd-concepts-library/between2-mt-fade
|
||||
sd-concepts-library/birb-style
|
||||
sd-concepts-library/black-and-white-design
|
||||
sd-concepts-library/black-waifu
|
||||
sd-concepts-library/bloo
|
||||
sd-concepts-library/blue-haired-boy
|
||||
sd-concepts-library/blue-zombie
|
||||
sd-concepts-library/blue-zombiee
|
||||
sd-concepts-library/bluebey
|
||||
sd-concepts-library/bluebey-2
|
||||
sd-concepts-library/bobs-burgers
|
||||
sd-concepts-library/boissonnard
|
||||
sd-concepts-library/bonzi-monkey
|
||||
sd-concepts-library/borderlands
|
||||
sd-concepts-library/bored-ape-textual-inversion
|
||||
sd-concepts-library/boris-anderson
|
||||
sd-concepts-library/bozo-22
|
||||
sd-concepts-library/breakcore
|
||||
sd-concepts-library/brittney-williams-art
|
||||
sd-concepts-library/bruma
|
||||
sd-concepts-library/brunnya
|
||||
sd-concepts-library/buddha-statue
|
||||
sd-concepts-library/bullvbear
|
||||
sd-concepts-library/button-eyes
|
||||
sd-concepts-library/canadian-goose
|
||||
sd-concepts-library/canary-cap
|
||||
sd-concepts-library/cancer_style
|
||||
sd-concepts-library/captain-haddock
|
||||
sd-concepts-library/captainkirb
|
||||
sd-concepts-library/car-toy-rk
|
||||
sd-concepts-library/carasibana
|
||||
sd-concepts-library/carlitos-el-mago
|
||||
sd-concepts-library/carrascharacter
|
||||
sd-concepts-library/cartoona-animals
|
||||
sd-concepts-library/cat-toy
|
||||
sd-concepts-library/centaur
|
||||
sd-concepts-library/cgdonny1
|
||||
sd-concepts-library/cham
|
||||
sd-concepts-library/chandra-nalaar
|
||||
sd-concepts-library/char-con
|
||||
sd-concepts-library/character-pingu
|
||||
sd-concepts-library/cheburashka
|
||||
sd-concepts-library/chen-1
|
||||
sd-concepts-library/child-zombie
|
||||
sd-concepts-library/chillpill
|
||||
sd-concepts-library/chonkfrog
|
||||
sd-concepts-library/chop
|
||||
sd-concepts-library/christo-person
|
||||
sd-concepts-library/chuck-walton
|
||||
sd-concepts-library/chucky
|
||||
sd-concepts-library/chungus-poodl-pet
|
||||
sd-concepts-library/cindlop
|
||||
sd-concepts-library/collage-cutouts
|
||||
sd-concepts-library/collage14
|
||||
sd-concepts-library/collage3
|
||||
sd-concepts-library/collage3-hubcity
|
||||
sd-concepts-library/cologne
|
||||
sd-concepts-library/color-page
|
||||
sd-concepts-library/colossus
|
||||
sd-concepts-library/command-and-conquer-remastered-cameos
|
||||
sd-concepts-library/concept-art
|
||||
sd-concepts-library/conner-fawcett-style
|
||||
sd-concepts-library/conway-pirate
|
||||
sd-concepts-library/coop-himmelblau
|
||||
sd-concepts-library/coraline
|
||||
sd-concepts-library/cornell-box
|
||||
sd-concepts-library/cortana
|
||||
sd-concepts-library/covid-19-rapid-test
|
||||
sd-concepts-library/cow-uwu
|
||||
sd-concepts-library/cowboy
|
||||
sd-concepts-library/crazy-1
|
||||
sd-concepts-library/crazy-2
|
||||
sd-concepts-library/crb-portraits
|
||||
sd-concepts-library/crb-surrealz
|
||||
sd-concepts-library/crbart
|
||||
sd-concepts-library/crested-gecko
|
||||
sd-concepts-library/crinos-form-garou
|
||||
sd-concepts-library/cry-baby-style
|
||||
sd-concepts-library/crybaby-style-2-0
|
||||
sd-concepts-library/csgo-awp-object
|
||||
sd-concepts-library/csgo-awp-texture-map
|
||||
sd-concepts-library/cubex
|
||||
sd-concepts-library/cumbia-peruana
|
||||
sd-concepts-library/cute-bear
|
||||
sd-concepts-library/cute-cat
|
||||
sd-concepts-library/cute-game-style
|
||||
sd-concepts-library/cyberpunk-lucy
|
||||
sd-concepts-library/dabotap
|
||||
sd-concepts-library/dan-mumford
|
||||
sd-concepts-library/dan-seagrave-art-style
|
||||
sd-concepts-library/dark-penguin-pinguinanimations
|
||||
sd-concepts-library/darkpenguinanimatronic
|
||||
sd-concepts-library/darkplane
|
||||
sd-concepts-library/david-firth-artstyle
|
||||
sd-concepts-library/david-martinez-cyberpunk
|
||||
sd-concepts-library/david-martinez-edgerunners
|
||||
sd-concepts-library/david-moreno-architecture
|
||||
sd-concepts-library/daycare-attendant-sun-fnaf
|
||||
sd-concepts-library/ddattender
|
||||
sd-concepts-library/degods
|
||||
sd-concepts-library/degodsheavy
|
||||
sd-concepts-library/depthmap
|
||||
sd-concepts-library/depthmap-style
|
||||
sd-concepts-library/design
|
||||
sd-concepts-library/detectivedinosaur1
|
||||
sd-concepts-library/diaosu-toy
|
||||
sd-concepts-library/dicoo
|
||||
sd-concepts-library/dicoo2
|
||||
sd-concepts-library/dishonored-portrait-styles
|
||||
sd-concepts-library/disquieting-muses
|
||||
sd-concepts-library/ditko
|
||||
sd-concepts-library/dlooak
|
||||
sd-concepts-library/doc
|
||||
sd-concepts-library/doener-red-line-art
|
||||
sd-concepts-library/dog
|
||||
sd-concepts-library/dog-django
|
||||
sd-concepts-library/doge-pound
|
||||
sd-concepts-library/dong-ho
|
||||
sd-concepts-library/dong-ho2
|
||||
sd-concepts-library/doose-s-realistic-art-style
|
||||
sd-concepts-library/dq10-anrushia
|
||||
sd-concepts-library/dr-livesey
|
||||
sd-concepts-library/dr-strange
|
||||
sd-concepts-library/dragonborn
|
||||
sd-concepts-library/dreamcore
|
||||
sd-concepts-library/dreamy-painting
|
||||
sd-concepts-library/drive-scorpion-jacket
|
||||
sd-concepts-library/dsmuses
|
||||
sd-concepts-library/dtv-pkmn
|
||||
sd-concepts-library/dullboy-caricature
|
||||
sd-concepts-library/duranduran
|
||||
sd-concepts-library/durer-style
|
||||
sd-concepts-library/dyoudim-style
|
||||
sd-concepts-library/early-mishima-kurone
|
||||
sd-concepts-library/eastward
|
||||
sd-concepts-library/eddie
|
||||
sd-concepts-library/edgerunners-style
|
||||
sd-concepts-library/edgerunners-style-v2
|
||||
sd-concepts-library/el-salvador-style-style
|
||||
sd-concepts-library/elegant-flower
|
||||
sd-concepts-library/elspeth-tirel
|
||||
sd-concepts-library/eru-chitanda-casual
|
||||
sd-concepts-library/erwin-olaf-style
|
||||
sd-concepts-library/ettblackteapot
|
||||
sd-concepts-library/explosions-cat
|
||||
sd-concepts-library/eye-of-agamotto
|
||||
sd-concepts-library/f-22
|
||||
sd-concepts-library/facadeplace
|
||||
sd-concepts-library/fairy-tale-painting-style
|
||||
sd-concepts-library/fairytale
|
||||
sd-concepts-library/fang-yuan-001
|
||||
sd-concepts-library/faraon-love-shady
|
||||
sd-concepts-library/fasina
|
||||
sd-concepts-library/felps
|
||||
sd-concepts-library/female-kpop-singer
|
||||
sd-concepts-library/fergal-cat
|
||||
sd-concepts-library/filename-2
|
||||
sd-concepts-library/fileteado-porteno
|
||||
sd-concepts-library/final-fantasy-logo
|
||||
sd-concepts-library/fireworks-over-water
|
||||
sd-concepts-library/fish
|
||||
sd-concepts-library/flag-ussr
|
||||
sd-concepts-library/flatic
|
||||
sd-concepts-library/floral
|
||||
sd-concepts-library/fluid-acrylic-jellyfish-creatures-style-of-carl-ingram-art
|
||||
sd-concepts-library/fnf-boyfriend
|
||||
sd-concepts-library/fold-structure
|
||||
sd-concepts-library/fox-purple
|
||||
sd-concepts-library/fractal
|
||||
sd-concepts-library/fractal-flame
|
||||
sd-concepts-library/fractal-temple-style
|
||||
sd-concepts-library/frank-frazetta
|
||||
sd-concepts-library/franz-unterberger
|
||||
sd-concepts-library/freddy-fazbear
|
||||
sd-concepts-library/freefonix-style
|
||||
sd-concepts-library/furrpopasthetic
|
||||
sd-concepts-library/fursona
|
||||
sd-concepts-library/fzk
|
||||
sd-concepts-library/galaxy-explorer
|
||||
sd-concepts-library/ganyu-genshin-impact
|
||||
sd-concepts-library/garcon-the-cat
|
||||
sd-concepts-library/garfield-pizza-plush
|
||||
sd-concepts-library/garfield-pizza-plush-v2
|
||||
sd-concepts-library/gba-fe-class-cards
|
||||
sd-concepts-library/gba-pokemon-sprites
|
||||
sd-concepts-library/geggin
|
||||
sd-concepts-library/ggplot2
|
||||
sd-concepts-library/ghost-style
|
||||
sd-concepts-library/ghostproject-men
|
||||
sd-concepts-library/gibasachan-v0
|
||||
sd-concepts-library/gim
|
||||
sd-concepts-library/gio
|
||||
sd-concepts-library/giygas
|
||||
sd-concepts-library/glass-pipe
|
||||
sd-concepts-library/glass-prism-cube
|
||||
sd-concepts-library/glow-forest
|
||||
sd-concepts-library/goku
|
||||
sd-concepts-library/gram-tops
|
||||
sd-concepts-library/green-blue-shanshui
|
||||
sd-concepts-library/green-tent
|
||||
sd-concepts-library/grifter
|
||||
sd-concepts-library/grisstyle
|
||||
sd-concepts-library/grit-toy
|
||||
sd-concepts-library/gt-color-paint-2
|
||||
sd-concepts-library/gta5-artwork
|
||||
sd-concepts-library/guttestreker
|
||||
sd-concepts-library/gymnastics-leotard-v2
|
||||
sd-concepts-library/half-life-2-dog
|
||||
sd-concepts-library/handstand
|
||||
sd-concepts-library/hanfu-anime-style
|
||||
sd-concepts-library/happy-chaos
|
||||
sd-concepts-library/happy-person12345
|
||||
sd-concepts-library/happy-person12345-assets
|
||||
sd-concepts-library/harley-quinn
|
||||
sd-concepts-library/harmless-ai-1
|
||||
sd-concepts-library/harmless-ai-house-style-1
|
||||
sd-concepts-library/hd-emoji
|
||||
sd-concepts-library/heather
|
||||
sd-concepts-library/henjo-techno-show
|
||||
sd-concepts-library/herge-style
|
||||
sd-concepts-library/hiten-style-nao
|
||||
sd-concepts-library/hitokomoru-style-nao
|
||||
sd-concepts-library/hiyuki-chan
|
||||
sd-concepts-library/hk-bamboo
|
||||
sd-concepts-library/hk-betweenislands
|
||||
sd-concepts-library/hk-bicycle
|
||||
sd-concepts-library/hk-blackandwhite
|
||||
sd-concepts-library/hk-breakfast
|
||||
sd-concepts-library/hk-buses
|
||||
sd-concepts-library/hk-clouds
|
||||
sd-concepts-library/hk-goldbuddha
|
||||
sd-concepts-library/hk-goldenlantern
|
||||
sd-concepts-library/hk-hkisland
|
||||
sd-concepts-library/hk-leaves
|
||||
sd-concepts-library/hk-market
|
||||
sd-concepts-library/hk-oldcamera
|
||||
sd-concepts-library/hk-opencamera
|
||||
sd-concepts-library/hk-peach
|
||||
sd-concepts-library/hk-phonevax
|
||||
sd-concepts-library/hk-streetpeople
|
||||
sd-concepts-library/hk-vintage
|
||||
sd-concepts-library/hoi4
|
||||
sd-concepts-library/hoi4-leaders
|
||||
sd-concepts-library/homestuck-sprite
|
||||
sd-concepts-library/homestuck-troll
|
||||
sd-concepts-library/hours-sentry-fade
|
||||
sd-concepts-library/hours-style
|
||||
sd-concepts-library/hrgiger-drmacabre
|
||||
sd-concepts-library/huang-guang-jian
|
||||
sd-concepts-library/huatli
|
||||
sd-concepts-library/huayecai820-greyscale
|
||||
sd-concepts-library/hub-city
|
||||
sd-concepts-library/hubris-oshri
|
||||
sd-concepts-library/huckleberry
|
||||
sd-concepts-library/hydrasuit
|
||||
sd-concepts-library/i-love-chaos
|
||||
sd-concepts-library/ibere-thenorio
|
||||
sd-concepts-library/ic0n
|
||||
sd-concepts-library/ie-gravestone
|
||||
sd-concepts-library/ikea-fabler
|
||||
sd-concepts-library/illustration-style
|
||||
sd-concepts-library/ilo-kunst
|
||||
sd-concepts-library/ilya-shkipin
|
||||
sd-concepts-library/im-poppy
|
||||
sd-concepts-library/ina-art
|
||||
sd-concepts-library/indian-watercolor-portraits
|
||||
sd-concepts-library/indiana
|
||||
sd-concepts-library/ingmar-bergman
|
||||
sd-concepts-library/insidewhale
|
||||
sd-concepts-library/interchanges
|
||||
sd-concepts-library/inuyama-muneto-style-nao
|
||||
sd-concepts-library/irasutoya
|
||||
sd-concepts-library/iridescent-illustration-style
|
||||
sd-concepts-library/iridescent-photo-style
|
||||
sd-concepts-library/isabell-schulte-pv-pvii-3000steps
|
||||
sd-concepts-library/isabell-schulte-pviii-1-image-style
|
||||
sd-concepts-library/isabell-schulte-pviii-1024px-1500-steps-style
|
||||
sd-concepts-library/isabell-schulte-pviii-12tiles-3000steps-style
|
||||
sd-concepts-library/isabell-schulte-pviii-4-tiles-1-lr-3000-steps-style
|
||||
sd-concepts-library/isabell-schulte-pviii-4-tiles-3-lr-5000-steps-style
|
||||
sd-concepts-library/isabell-schulte-pviii-4tiles-500steps
|
||||
sd-concepts-library/isabell-schulte-pviii-4tiles-6000steps
|
||||
sd-concepts-library/isabell-schulte-pviii-style
|
||||
sd-concepts-library/isometric-tile-test
|
||||
sd-concepts-library/jacqueline-the-unicorn
|
||||
sd-concepts-library/james-web-space-telescope
|
||||
sd-concepts-library/jamie-hewlett-style
|
||||
sd-concepts-library/jamiels
|
||||
sd-concepts-library/jang-sung-rak-style
|
||||
sd-concepts-library/jetsetdreamcastcovers
|
||||
sd-concepts-library/jin-kisaragi
|
||||
sd-concepts-library/jinjoon-lee-they
|
||||
sd-concepts-library/jm-bergling-monogram
|
||||
sd-concepts-library/joe-mad
|
||||
sd-concepts-library/joe-whiteford-art-style
|
||||
sd-concepts-library/joemad
|
||||
sd-concepts-library/john-blanche
|
||||
sd-concepts-library/johnny-silverhand
|
||||
sd-concepts-library/jojo-bizzare-adventure-manga-lineart
|
||||
sd-concepts-library/jos-de-kat
|
||||
sd-concepts-library/junji-ito-artstyle
|
||||
sd-concepts-library/kaleido
|
||||
sd-concepts-library/kaneoya-sachiko
|
||||
sd-concepts-library/kanovt
|
||||
sd-concepts-library/kanv1
|
||||
sd-concepts-library/karan-gloomy
|
||||
sd-concepts-library/karl-s-lzx-1
|
||||
sd-concepts-library/kasumin
|
||||
sd-concepts-library/kawaii-colors
|
||||
sd-concepts-library/kawaii-girl-plus-object
|
||||
sd-concepts-library/kawaii-girl-plus-style
|
||||
sd-concepts-library/kawaii-girl-plus-style-v1-1
|
||||
sd-concepts-library/kay
|
||||
sd-concepts-library/kaya-ghost-assasin
|
||||
sd-concepts-library/ki
|
||||
sd-concepts-library/kinda-sus
|
||||
sd-concepts-library/kings-quest-agd
|
||||
sd-concepts-library/kiora
|
||||
sd-concepts-library/kira-sensei
|
||||
sd-concepts-library/kirby
|
||||
sd-concepts-library/klance
|
||||
sd-concepts-library/kodakvision500t
|
||||
sd-concepts-library/kogatan-shiny
|
||||
sd-concepts-library/kogecha
|
||||
sd-concepts-library/kojima-ayami
|
||||
sd-concepts-library/koko-dog
|
||||
sd-concepts-library/kuvshinov
|
||||
sd-concepts-library/kysa-v-style
|
||||
sd-concepts-library/laala-character
|
||||
sd-concepts-library/larrette
|
||||
sd-concepts-library/lavko
|
||||
sd-concepts-library/lazytown-stephanie
|
||||
sd-concepts-library/ldr
|
||||
sd-concepts-library/ldrs
|
||||
sd-concepts-library/led-toy
|
||||
sd-concepts-library/lego-astronaut
|
||||
sd-concepts-library/leica
|
||||
sd-concepts-library/leif-jones
|
||||
sd-concepts-library/lex
|
||||
sd-concepts-library/liliana
|
||||
sd-concepts-library/liliana-vess
|
||||
sd-concepts-library/liminal-spaces-2-0
|
||||
sd-concepts-library/liminalspaces
|
||||
sd-concepts-library/line-art
|
||||
sd-concepts-library/line-style
|
||||
sd-concepts-library/linnopoke
|
||||
sd-concepts-library/liquid-light
|
||||
sd-concepts-library/liqwid-aquafarmer
|
||||
sd-concepts-library/lizardman
|
||||
sd-concepts-library/loab-character
|
||||
sd-concepts-library/loab-style
|
||||
sd-concepts-library/lofa
|
||||
sd-concepts-library/logo-with-face-on-shield
|
||||
sd-concepts-library/lolo
|
||||
sd-concepts-library/looney-anime
|
||||
sd-concepts-library/lost-rapper
|
||||
sd-concepts-library/lphr-style
|
||||
sd-concepts-library/lucario
|
||||
sd-concepts-library/lucky-luke
|
||||
sd-concepts-library/lugal-ki-en
|
||||
sd-concepts-library/luinv2
|
||||
sd-concepts-library/lula-13
|
||||
sd-concepts-library/lumio
|
||||
sd-concepts-library/lxj-o4
|
||||
sd-concepts-library/m-geo
|
||||
sd-concepts-library/m-geoo
|
||||
sd-concepts-library/madhubani-art
|
||||
sd-concepts-library/mafalda-character
|
||||
sd-concepts-library/magic-pengel
|
||||
sd-concepts-library/malika-favre-art-style
|
||||
sd-concepts-library/manga-style
|
||||
sd-concepts-library/marbling-art
|
||||
sd-concepts-library/margo
|
||||
sd-concepts-library/marty
|
||||
sd-concepts-library/marty6
|
||||
sd-concepts-library/mass
|
||||
sd-concepts-library/masyanya
|
||||
sd-concepts-library/masyunya
|
||||
sd-concepts-library/mate
|
||||
sd-concepts-library/matthew-stone
|
||||
sd-concepts-library/mattvidpro
|
||||
sd-concepts-library/maurice-quentin-de-la-tour-style
|
||||
sd-concepts-library/maus
|
||||
sd-concepts-library/max-foley
|
||||
sd-concepts-library/mayor-richard-irvin
|
||||
sd-concepts-library/mechasoulall
|
||||
sd-concepts-library/medazzaland
|
||||
sd-concepts-library/memnarch-mtg
|
||||
sd-concepts-library/metagabe
|
||||
sd-concepts-library/meyoco
|
||||
sd-concepts-library/meze-audio-elite-headphones
|
||||
sd-concepts-library/midjourney-style
|
||||
sd-concepts-library/mikako-method
|
||||
sd-concepts-library/mikako-methodi2i
|
||||
sd-concepts-library/miko-3-robot
|
||||
sd-concepts-library/milady
|
||||
sd-concepts-library/mildemelwe-style
|
||||
sd-concepts-library/million-live-akane-15k
|
||||
sd-concepts-library/million-live-akane-3k
|
||||
sd-concepts-library/million-live-akane-shifuku-3k
|
||||
sd-concepts-library/million-live-spade-q-object-3k
|
||||
sd-concepts-library/million-live-spade-q-style-3k
|
||||
sd-concepts-library/minecraft-concept-art
|
||||
sd-concepts-library/mishima-kurone
|
||||
sd-concepts-library/mizkif
|
||||
sd-concepts-library/moeb-style
|
||||
sd-concepts-library/moebius
|
||||
sd-concepts-library/mokoko
|
||||
sd-concepts-library/mokoko-seed
|
||||
sd-concepts-library/monster-girl
|
||||
sd-concepts-library/monster-toy
|
||||
sd-concepts-library/monte-novo
|
||||
sd-concepts-library/moo-moo
|
||||
sd-concepts-library/morino-hon-style
|
||||
sd-concepts-library/moxxi
|
||||
sd-concepts-library/msg
|
||||
sd-concepts-library/mtg-card
|
||||
sd-concepts-library/mtl-longsky
|
||||
sd-concepts-library/mu-sadr
|
||||
sd-concepts-library/munch-leaks-style
|
||||
sd-concepts-library/museum-by-coop-himmelblau
|
||||
sd-concepts-library/muxoyara
|
||||
sd-concepts-library/my-hero-academia-style
|
||||
sd-concepts-library/my-mug
|
||||
sd-concepts-library/mycat
|
||||
sd-concepts-library/mystical-nature
|
||||
sd-concepts-library/naf
|
||||
sd-concepts-library/nahiri
|
||||
sd-concepts-library/namine-ritsu
|
||||
sd-concepts-library/naoki-saito
|
||||
sd-concepts-library/nard-style
|
||||
sd-concepts-library/naruto
|
||||
sd-concepts-library/natasha-johnston
|
||||
sd-concepts-library/nathan-wyatt
|
||||
sd-concepts-library/naval-portrait
|
||||
sd-concepts-library/nazuna
|
||||
sd-concepts-library/nebula
|
||||
sd-concepts-library/ned-flanders
|
||||
sd-concepts-library/neon-pastel
|
||||
sd-concepts-library/new-priests
|
||||
sd-concepts-library/nic-papercuts
|
||||
sd-concepts-library/nikodim
|
||||
sd-concepts-library/nissa-revane
|
||||
sd-concepts-library/nixeu
|
||||
sd-concepts-library/noggles
|
||||
sd-concepts-library/nomad
|
||||
sd-concepts-library/nouns-glasses
|
||||
sd-concepts-library/obama-based-on-xi
|
||||
sd-concepts-library/obama-self-2
|
||||
sd-concepts-library/og-mox-style
|
||||
sd-concepts-library/ohisashiburi-style
|
||||
sd-concepts-library/oleg-kuvaev
|
||||
sd-concepts-library/olli-olli
|
||||
sd-concepts-library/on-kawara
|
||||
sd-concepts-library/one-line-drawing
|
||||
sd-concepts-library/onepunchman
|
||||
sd-concepts-library/onzpo
|
||||
sd-concepts-library/orangejacket
|
||||
sd-concepts-library/ori
|
||||
sd-concepts-library/ori-toor
|
||||
sd-concepts-library/orientalist-art
|
||||
sd-concepts-library/osaka-jyo
|
||||
sd-concepts-library/osaka-jyo2
|
||||
sd-concepts-library/osrsmini2
|
||||
sd-concepts-library/osrstiny
|
||||
sd-concepts-library/other-mother
|
||||
sd-concepts-library/ouroboros
|
||||
sd-concepts-library/outfit-items
|
||||
sd-concepts-library/overprettified
|
||||
sd-concepts-library/owl-house
|
||||
sd-concepts-library/painted-by-silver-of-999
|
||||
sd-concepts-library/painted-by-silver-of-999-2
|
||||
sd-concepts-library/painted-student
|
||||
sd-concepts-library/painting
|
||||
sd-concepts-library/pantone-milk
|
||||
sd-concepts-library/paolo-bonolis
|
||||
sd-concepts-library/party-girl
|
||||
sd-concepts-library/pascalsibertin
|
||||
sd-concepts-library/pastelartstyle
|
||||
sd-concepts-library/paul-noir
|
||||
sd-concepts-library/pen-ink-portraits-bennorthen
|
||||
sd-concepts-library/phan
|
||||
sd-concepts-library/phan-s-collage
|
||||
sd-concepts-library/phc
|
||||
sd-concepts-library/phoenix-01
|
||||
sd-concepts-library/pineda-david
|
||||
sd-concepts-library/pink-beast-pastelae-style
|
||||
sd-concepts-library/pintu
|
||||
sd-concepts-library/pion-by-august-semionov
|
||||
sd-concepts-library/piotr-jablonski
|
||||
sd-concepts-library/pixel-mania
|
||||
sd-concepts-library/pixel-toy
|
||||
sd-concepts-library/pjablonski-style
|
||||
sd-concepts-library/plant-style
|
||||
sd-concepts-library/plen-ki-mun
|
||||
sd-concepts-library/pokemon-conquest-sprites
|
||||
sd-concepts-library/pool-test
|
||||
sd-concepts-library/poolrooms
|
||||
sd-concepts-library/poring-ragnarok-online
|
||||
sd-concepts-library/poutine-dish
|
||||
sd-concepts-library/princess-knight-art
|
||||
sd-concepts-library/progress-chip
|
||||
sd-concepts-library/puerquis-toy
|
||||
sd-concepts-library/purplefishli
|
||||
sd-concepts-library/pyramidheadcosplay
|
||||
sd-concepts-library/qpt-atrium
|
||||
sd-concepts-library/quiesel
|
||||
sd-concepts-library/r-crumb-style
|
||||
sd-concepts-library/rahkshi-bionicle
|
||||
sd-concepts-library/raichu
|
||||
sd-concepts-library/rail-scene
|
||||
sd-concepts-library/rail-scene-style
|
||||
sd-concepts-library/ralph-mcquarrie
|
||||
sd-concepts-library/ransom
|
||||
sd-concepts-library/rayne-weynolds
|
||||
sd-concepts-library/rcrumb-portraits-style
|
||||
sd-concepts-library/rd-chaos
|
||||
sd-concepts-library/rd-paintings
|
||||
sd-concepts-library/red-glasses
|
||||
sd-concepts-library/reeducation-camp
|
||||
sd-concepts-library/reksio-dog
|
||||
sd-concepts-library/rektguy
|
||||
sd-concepts-library/remert
|
||||
sd-concepts-library/renalla
|
||||
sd-concepts-library/repeat
|
||||
sd-concepts-library/retro-girl
|
||||
sd-concepts-library/retro-mecha-rangers
|
||||
sd-concepts-library/retropixelart-pinguin
|
||||
sd-concepts-library/rex-deno
|
||||
sd-concepts-library/rhizomuse-machine-bionic-sculpture
|
||||
sd-concepts-library/ricar
|
||||
sd-concepts-library/rickyart
|
||||
sd-concepts-library/rico-face
|
||||
sd-concepts-library/riker-doll
|
||||
sd-concepts-library/rikiart
|
||||
sd-concepts-library/rikiboy-art
|
||||
sd-concepts-library/rilakkuma
|
||||
sd-concepts-library/rishusei-style
|
||||
sd-concepts-library/rj-palmer
|
||||
sd-concepts-library/rl-pkmn-test
|
||||
sd-concepts-library/road-to-ruin
|
||||
sd-concepts-library/robertnava
|
||||
sd-concepts-library/roblox-avatar
|
||||
sd-concepts-library/roy-lichtenstein
|
||||
sd-concepts-library/ruan-jia
|
||||
sd-concepts-library/russian
|
||||
sd-concepts-library/s1m-naoto-ohshima
|
||||
sd-concepts-library/saheeli-rai
|
||||
sd-concepts-library/sakimi-style
|
||||
sd-concepts-library/salmonid
|
||||
sd-concepts-library/sam-yang
|
||||
sd-concepts-library/sanguo-guanyu
|
||||
sd-concepts-library/sas-style
|
||||
sd-concepts-library/scarlet-witch
|
||||
sd-concepts-library/schloss-mosigkau
|
||||
sd-concepts-library/scrap-style
|
||||
sd-concepts-library/scratch-project
|
||||
sd-concepts-library/sculptural-style
|
||||
sd-concepts-library/sd-concepts-library-uma-meme
|
||||
sd-concepts-library/seamless-ground
|
||||
sd-concepts-library/selezneva-alisa
|
||||
sd-concepts-library/sem-mac2n
|
||||
sd-concepts-library/senneca
|
||||
sd-concepts-library/seraphimmoonshadow-art
|
||||
sd-concepts-library/sewerslvt
|
||||
sd-concepts-library/she-hulk-law-art
|
||||
sd-concepts-library/she-mask
|
||||
sd-concepts-library/sherhook-painting
|
||||
sd-concepts-library/sherhook-painting-v2
|
||||
sd-concepts-library/shev-linocut
|
||||
sd-concepts-library/shigure-ui-style
|
||||
sd-concepts-library/shiny-polyman
|
||||
sd-concepts-library/shrunken-head
|
||||
sd-concepts-library/shu-doll
|
||||
sd-concepts-library/shvoren-style
|
||||
sd-concepts-library/sims-2-portrait
|
||||
sd-concepts-library/singsing
|
||||
sd-concepts-library/singsing-doll
|
||||
sd-concepts-library/sintez-ico
|
||||
sd-concepts-library/skyfalls
|
||||
sd-concepts-library/slm
|
||||
sd-concepts-library/smarties
|
||||
sd-concepts-library/smiling-friend-style
|
||||
sd-concepts-library/smooth-pencils
|
||||
sd-concepts-library/smurf-style
|
||||
sd-concepts-library/smw-map
|
||||
sd-concepts-library/society-finch
|
||||
sd-concepts-library/sorami-style
|
||||
sd-concepts-library/spider-gwen
|
||||
sd-concepts-library/spritual-monsters
|
||||
sd-concepts-library/stable-diffusion-conceptualizer
|
||||
sd-concepts-library/star-tours-posters
|
||||
sd-concepts-library/stardew-valley-pixel-art
|
||||
sd-concepts-library/starhavenmachinegods
|
||||
sd-concepts-library/sterling-archer
|
||||
sd-concepts-library/stretch-re1-robot
|
||||
sd-concepts-library/stuffed-penguin-toy
|
||||
sd-concepts-library/style-of-marc-allante
|
||||
sd-concepts-library/summie-style
|
||||
sd-concepts-library/sunfish
|
||||
sd-concepts-library/super-nintendo-cartridge
|
||||
sd-concepts-library/supitcha-mask
|
||||
sd-concepts-library/sushi-pixel
|
||||
sd-concepts-library/swamp-choe-2
|
||||
sd-concepts-library/t-skrang
|
||||
sd-concepts-library/takuji-kawano
|
||||
sd-concepts-library/tamiyo
|
||||
sd-concepts-library/tangles
|
||||
sd-concepts-library/tb303
|
||||
sd-concepts-library/tcirle
|
||||
sd-concepts-library/teelip-ir-landscape
|
||||
sd-concepts-library/teferi
|
||||
sd-concepts-library/tela-lenca
|
||||
sd-concepts-library/tela-lenca2
|
||||
sd-concepts-library/terraria-style
|
||||
sd-concepts-library/tesla-bot
|
||||
sd-concepts-library/test
|
||||
sd-concepts-library/test-epson
|
||||
sd-concepts-library/test2
|
||||
sd-concepts-library/testing
|
||||
sd-concepts-library/thalasin
|
||||
sd-concepts-library/thegeneral
|
||||
sd-concepts-library/thorneworks
|
||||
sd-concepts-library/threestooges
|
||||
sd-concepts-library/thunderdome-cover
|
||||
sd-concepts-library/thunderdome-covers
|
||||
sd-concepts-library/ti-junglepunk-v0
|
||||
sd-concepts-library/tili-concept
|
||||
sd-concepts-library/titan-robot
|
||||
sd-concepts-library/tnj
|
||||
sd-concepts-library/toho-pixel
|
||||
sd-concepts-library/tomcat
|
||||
sd-concepts-library/tonal1
|
||||
sd-concepts-library/tony-diterlizzi-s-planescape-art
|
||||
sd-concepts-library/towerplace
|
||||
sd-concepts-library/toy
|
||||
sd-concepts-library/toy-bonnie-plush
|
||||
sd-concepts-library/toyota-sera
|
||||
sd-concepts-library/transmutation-circles
|
||||
sd-concepts-library/trash-polka-artstyle
|
||||
sd-concepts-library/travis-bedel
|
||||
sd-concepts-library/trigger-studio
|
||||
sd-concepts-library/trust-support
|
||||
sd-concepts-library/trypophobia
|
||||
sd-concepts-library/ttte
|
||||
sd-concepts-library/tubby
|
||||
sd-concepts-library/tubby-cats
|
||||
sd-concepts-library/tudisco
|
||||
sd-concepts-library/turtlepics
|
||||
sd-concepts-library/type
|
||||
sd-concepts-library/ugly-sonic
|
||||
sd-concepts-library/uliana-kudinova
|
||||
sd-concepts-library/uma
|
||||
sd-concepts-library/uma-clean-object
|
||||
sd-concepts-library/uma-meme
|
||||
sd-concepts-library/uma-meme-style
|
||||
sd-concepts-library/uma-style-classic
|
||||
sd-concepts-library/unfinished-building
|
||||
sd-concepts-library/urivoldemort
|
||||
sd-concepts-library/uzumaki
|
||||
sd-concepts-library/valorantstyle
|
||||
sd-concepts-library/vb-mox
|
||||
sd-concepts-library/vcr-classique
|
||||
sd-concepts-library/venice
|
||||
sd-concepts-library/vespertine
|
||||
sd-concepts-library/victor-narm
|
||||
sd-concepts-library/vietstoneking
|
||||
sd-concepts-library/vivien-reid
|
||||
sd-concepts-library/vkuoo1
|
||||
sd-concepts-library/vraska
|
||||
sd-concepts-library/w3u
|
||||
sd-concepts-library/walter-wick-photography
|
||||
sd-concepts-library/warhammer-40k-drawing-style
|
||||
sd-concepts-library/waterfallshadow
|
||||
sd-concepts-library/wayne-reynolds-character
|
||||
sd-concepts-library/wedding
|
||||
sd-concepts-library/wedding-HandPainted
|
||||
sd-concepts-library/werebloops
|
||||
sd-concepts-library/wheatland
|
||||
sd-concepts-library/wheatland-arknight
|
||||
sd-concepts-library/wheelchair
|
||||
sd-concepts-library/wildkat
|
||||
sd-concepts-library/willy-hd
|
||||
sd-concepts-library/wire-angels
|
||||
sd-concepts-library/wish-artist-stile
|
||||
sd-concepts-library/wlop-style
|
||||
sd-concepts-library/wojak
|
||||
sd-concepts-library/wojaks-now
|
||||
sd-concepts-library/wojaks-now-now-now
|
||||
sd-concepts-library/xatu
|
||||
sd-concepts-library/xatu2
|
||||
sd-concepts-library/xbh
|
||||
sd-concepts-library/xi
|
||||
sd-concepts-library/xidiversity
|
||||
sd-concepts-library/xioboma
|
||||
sd-concepts-library/xuna
|
||||
sd-concepts-library/xyz
|
||||
sd-concepts-library/yb-anime
|
||||
sd-concepts-library/yerba-mate
|
||||
sd-concepts-library/yesdelete
|
||||
sd-concepts-library/yf21
|
||||
sd-concepts-library/yilanov2
|
||||
sd-concepts-library/yinit
|
||||
sd-concepts-library/yoji-shinkawa-style
|
||||
sd-concepts-library/yolandi-visser
|
||||
sd-concepts-library/yoshi
|
||||
sd-concepts-library/youpi2
|
||||
sd-concepts-library/youtooz-candy
|
||||
sd-concepts-library/yuji-himukai-style
|
||||
sd-concepts-library/zaney
|
||||
sd-concepts-library/zaneypixelz
|
||||
sd-concepts-library/zdenek-art
|
||||
sd-concepts-library/zero
|
||||
sd-concepts-library/zero-bottle
|
||||
sd-concepts-library/zero-suit-samus
|
||||
sd-concepts-library/zillertal-can
|
||||
sd-concepts-library/zizigooloo
|
||||
sd-concepts-library/zk
|
||||
sd-concepts-library/zoroark
|
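The 803 entries above are repository identifiers of the form sd-concepts-library/&lt;name&gt;; they appear to be the community-contributed concepts hosted under the Hugging Face sd-concepts-library organization, presumably listed here so they can be referenced by name.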
||||
110
configs/stable-diffusion/v1-finetune.yaml
Normal file
@@ -0,0 +1,110 @@
|
||||
model:
|
||||
base_learning_rate: 5.0e-03
|
||||
target: ldm.models.diffusion.ddpm.LatentDiffusion
|
||||
params:
|
||||
linear_start: 0.00085
|
||||
linear_end: 0.0120
|
||||
num_timesteps_cond: 1
|
||||
log_every_t: 200
|
||||
timesteps: 1000
|
||||
first_stage_key: image
|
||||
cond_stage_key: caption
|
||||
image_size: 64
|
||||
channels: 4
|
||||
cond_stage_trainable: true # Note: different from the one we trained before
|
||||
conditioning_key: crossattn
|
||||
monitor: val/loss_simple_ema
|
||||
scale_factor: 0.18215
|
||||
use_ema: False
|
||||
embedding_reg_weight: 0.0
|
||||
|
||||
personalization_config:
|
||||
target: ldm.modules.embedding_manager.EmbeddingManager
|
||||
params:
|
||||
placeholder_strings: ["*"]
|
||||
initializer_words: ["sculpture"]
|
||||
per_image_tokens: false
|
||||
num_vectors_per_token: 1
|
||||
progressive_words: False
|
||||
|
||||
unet_config:
|
||||
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
|
||||
params:
|
||||
image_size: 32 # unused
|
||||
in_channels: 4
|
||||
out_channels: 4
|
||||
model_channels: 320
|
||||
attention_resolutions: [ 4, 2, 1 ]
|
||||
num_res_blocks: 2
|
||||
channel_mult: [ 1, 2, 4, 4 ]
|
||||
num_heads: 8
|
||||
use_spatial_transformer: True
|
||||
transformer_depth: 1
|
||||
context_dim: 768
|
||||
use_checkpoint: True
|
||||
legacy: False
|
||||
|
||||
first_stage_config:
|
||||
target: ldm.models.autoencoder.AutoencoderKL
|
||||
params:
|
||||
embed_dim: 4
|
||||
monitor: val/rec_loss
|
||||
ddconfig:
|
||||
double_z: true
|
||||
z_channels: 4
|
||||
resolution: 256
|
||||
in_channels: 3
|
||||
out_ch: 3
|
||||
ch: 128
|
||||
ch_mult:
|
||||
- 1
|
||||
- 2
|
||||
- 4
|
||||
- 4
|
||||
num_res_blocks: 2
|
||||
attn_resolutions: []
|
||||
dropout: 0.0
|
||||
lossconfig:
|
||||
target: torch.nn.Identity
|
||||
|
||||
cond_stage_config:
|
||||
target: ldm.modules.encoders.modules.FrozenCLIPEmbedder
|
||||
|
||||
data:
|
||||
target: main.DataModuleFromConfig
|
||||
params:
|
||||
batch_size: 1
|
||||
num_workers: 2
|
||||
wrap: false
|
||||
train:
|
||||
target: ldm.data.personalized.PersonalizedBase
|
||||
params:
|
||||
size: 512
|
||||
set: train
|
||||
per_image_tokens: false
|
||||
repeats: 100
|
||||
validation:
|
||||
target: ldm.data.personalized.PersonalizedBase
|
||||
params:
|
||||
size: 512
|
||||
set: val
|
||||
per_image_tokens: false
|
||||
repeats: 10
|
||||
|
||||
lightning:
|
||||
modelcheckpoint:
|
||||
params:
|
||||
every_n_train_steps: 500
|
||||
callbacks:
|
||||
image_logger:
|
||||
target: main.ImageLogger
|
||||
params:
|
||||
batch_frequency: 500
|
||||
max_images: 8
|
||||
increase_log_steps: False
|
||||
|
||||
trainer:
|
||||
benchmark: True
|
||||
max_steps: 4000000
|
||||
# max_steps: 4000
|
||||
|
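In v1-finetune.yaml above, the personalization_config block is the part a textual-inversion run would normally customize: as the field names suggest, placeholder_strings is the prompt token that stands in for the new concept, initializer_words seeds its embedding, and num_vectors_per_token sets how many embedding vectors that token expands to. A minimal sketch of how the block might be adapted for a different concept (the placeholder and initializer values below are illustrative, not taken from this repository):

```yaml
# Hypothetical variation of the personalization_config shown above.
personalization_config:
  target: ldm.modules.embedding_manager.EmbeddingManager
  params:
    placeholder_strings: ["<my-object>"]  # example token to use in training prompts
    initializer_words: ["toy"]            # example word whose embedding initializes the token
    per_image_tokens: false
    num_vectors_per_token: 1
    progressive_words: False
```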
||||
103
configs/stable-diffusion/v1-finetune_style.yaml
Normal file
@@ -0,0 +1,103 @@
|
||||
model:
|
||||
base_learning_rate: 5.0e-03
|
||||
target: ldm.models.diffusion.ddpm.LatentDiffusion
|
||||
params:
|
||||
linear_start: 0.00085
|
||||
linear_end: 0.0120
|
||||
num_timesteps_cond: 1
|
||||
log_every_t: 200
|
||||
timesteps: 1000
|
||||
first_stage_key: image
|
||||
cond_stage_key: caption
|
||||
image_size: 64
|
||||
channels: 4
|
||||
cond_stage_trainable: true # Note: different from the one we trained before
|
||||
conditioning_key: crossattn
|
||||
monitor: val/loss_simple_ema
|
||||
scale_factor: 0.18215
|
||||
use_ema: False
|
||||
embedding_reg_weight: 0.0
|
||||
|
||||
personalization_config:
|
||||
target: ldm.modules.embedding_manager.EmbeddingManager
|
||||
params:
|
||||
placeholder_strings: ["*"]
|
||||
initializer_words: ["painting"]
|
||||
per_image_tokens: false
|
||||
num_vectors_per_token: 1
|
||||
|
||||
unet_config:
|
||||
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
|
||||
params:
|
||||
image_size: 32 # unused
|
||||
in_channels: 4
|
||||
out_channels: 4
|
||||
model_channels: 320
|
||||
attention_resolutions: [ 4, 2, 1 ]
|
||||
num_res_blocks: 2
|
||||
channel_mult: [ 1, 2, 4, 4 ]
|
||||
num_heads: 8
|
||||
use_spatial_transformer: True
|
||||
transformer_depth: 1
|
||||
context_dim: 768
|
||||
use_checkpoint: True
|
||||
legacy: False
|
||||
|
||||
first_stage_config:
|
||||
target: ldm.models.autoencoder.AutoencoderKL
|
||||
params:
|
||||
embed_dim: 4
|
||||
monitor: val/rec_loss
|
||||
ddconfig:
|
||||
double_z: true
|
||||
z_channels: 4
|
||||
resolution: 256
|
||||
in_channels: 3
|
||||
out_ch: 3
|
||||
ch: 128
|
||||
ch_mult:
|
||||
- 1
|
||||
- 2
|
||||
- 4
|
||||
- 4
|
||||
num_res_blocks: 2
|
||||
attn_resolutions: []
|
||||
dropout: 0.0
|
||||
lossconfig:
|
||||
target: torch.nn.Identity
|
||||
|
||||
cond_stage_config:
|
||||
target: ldm.modules.encoders.modules.FrozenCLIPEmbedder
|
||||
|
||||
data:
|
||||
target: main.DataModuleFromConfig
|
||||
params:
|
||||
batch_size: 2
|
||||
num_workers: 16
|
||||
wrap: false
|
||||
train:
|
||||
target: ldm.data.personalized_style.PersonalizedBase
|
||||
params:
|
||||
size: 512
|
||||
set: train
|
||||
per_image_tokens: false
|
||||
repeats: 100
|
||||
validation:
|
||||
target: ldm.data.personalized_style.PersonalizedBase
|
||||
params:
|
||||
size: 512
|
||||
set: val
|
||||
per_image_tokens: false
|
||||
repeats: 10
|
||||
|
||||
lightning:
|
||||
callbacks:
|
||||
image_logger:
|
||||
target: main.ImageLogger
|
||||
params:
|
||||
batch_frequency: 500
|
||||
max_images: 8
|
||||
increase_log_steps: False
|
||||
|
||||
trainer:
|
||||
benchmark: True
|
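v1-finetune_style.yaml is nearly identical to v1-finetune.yaml; the visible differences are chiefly the initializer word ("painting" instead of "sculpture"), the data module (ldm.data.personalized_style.PersonalizedBase with batch_size 2 and num_workers 16), and the absence of the modelcheckpoint and max_steps settings.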
||||
@@ -26,6 +26,15 @@ model:
|
||||
f_max: [ 1. ]
|
||||
f_min: [ 1. ]
|
||||
|
||||
personalization_config:
|
||||
target: ldm.modules.embedding_manager.EmbeddingManager
|
||||
params:
|
||||
placeholder_strings: ["*"]
|
||||
initializer_words: ['sculpture']
|
||||
per_image_tokens: false
|
||||
num_vectors_per_token: 8
|
||||
progressive_words: False
|
||||
|
||||
unet_config:
|
||||
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
|
||||
params:
|
||||
@@ -67,4 +76,4 @@ model:
|
||||
target: torch.nn.Identity
|
||||
|
||||
cond_stage_config:
|
||||
target: ldm.modules.encoders.modules.FrozenCLIPEmbedder
|
||||
target: ldm.modules.encoders.modules.WeightedFrozenCLIPEmbedder
|
||||
|
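The two hunks above modify an existing inference config whose filename is not shown in this excerpt: the first inserts a personalization_config block after the scheduler settings, and the second appears to change the cond_stage_config target from FrozenCLIPEmbedder to WeightedFrozenCLIPEmbedder.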
||||
79
configs/stable-diffusion/v1-inpainting-inference.yaml
Normal file
@@ -0,0 +1,79 @@
|
||||
model:
|
||||
base_learning_rate: 7.5e-05
|
||||
target: ldm.models.diffusion.ddpm.LatentInpaintDiffusion
|
||||
params:
|
||||
linear_start: 0.00085
|
||||
linear_end: 0.0120
|
||||
num_timesteps_cond: 1
|
||||
log_every_t: 200
|
||||
timesteps: 1000
|
||||
first_stage_key: "jpg"
|
||||
cond_stage_key: "txt"
|
||||
image_size: 64
|
||||
channels: 4
|
||||
cond_stage_trainable: false # Note: different from the one we trained before
|
||||
conditioning_key: hybrid # important
|
||||
monitor: val/loss_simple_ema
|
||||
scale_factor: 0.18215
|
||||
finetune_keys: null
|
||||
|
||||
scheduler_config: # 10000 warmup steps
|
||||
target: ldm.lr_scheduler.LambdaLinearScheduler
|
||||
params:
|
||||
warm_up_steps: [ 2500 ] # NOTE for resuming. use 10000 if starting from scratch
|
||||
cycle_lengths: [ 10000000000000 ] # incredibly large number to prevent corner cases
|
||||
f_start: [ 1.e-6 ]
|
||||
f_max: [ 1. ]
|
||||
f_min: [ 1. ]
|
||||
|
||||
personalization_config:
|
||||
target: ldm.modules.embedding_manager.EmbeddingManager
|
||||
params:
|
||||
placeholder_strings: ["*"]
|
||||
initializer_words: ['sculpture']
|
||||
per_image_tokens: false
|
||||
num_vectors_per_token: 8
|
||||
progressive_words: False
|
||||
|
||||
unet_config:
|
||||
target: ldm.modules.diffusionmodules.openaimodel.UNetModel
|
||||
params:
|
||||
image_size: 32 # unused
|
||||
in_channels: 9 # 4 data + 4 downscaled image + 1 mask
|
||||
out_channels: 4
|
||||
model_channels: 320
|
||||
attention_resolutions: [ 4, 2, 1 ]
|
||||
num_res_blocks: 2
|
||||
channel_mult: [ 1, 2, 4, 4 ]
|
||||
num_heads: 8
|
||||
use_spatial_transformer: True
|
||||
transformer_depth: 1
|
||||
context_dim: 768
|
||||
use_checkpoint: True
|
||||
legacy: False
|
||||
|
||||
first_stage_config:
|
||||
target: ldm.models.autoencoder.AutoencoderKL
|
||||
params:
|
||||
embed_dim: 4
|
||||
monitor: val/rec_loss
|
||||
ddconfig:
|
||||
double_z: true
|
||||
z_channels: 4
|
||||
resolution: 256
|
||||
in_channels: 3
|
||||
out_ch: 3
|
||||
ch: 128
|
||||
ch_mult:
|
||||
- 1
|
||||
- 2
|
||||
- 4
|
||||
- 4
|
||||
num_res_blocks: 2
|
||||
attn_resolutions: []
|
||||
dropout: 0.0
|
||||
lossconfig:
|
||||
target: torch.nn.Identity
|
||||
|
||||
cond_stage_config:
|
||||
target: ldm.modules.encoders.modules.WeightedFrozenCLIPEmbedder
|
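v1-inpainting-inference.yaml targets LatentInpaintDiffusion with conditioning_key: hybrid, so the UNet takes 9 input channels (per the inline comment, the 4 latent channels plus a 4-channel downscaled image and a 1-channel mask), while text conditioning continues to enter through cross-attention via the WeightedFrozenCLIPEmbedder.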
||||
110
configs/stable-diffusion/v1-m1-finetune.yaml
Normal file
@@ -0,0 +1,110 @@
model:
  base_learning_rate: 5.0e-03
  target: ldm.models.diffusion.ddpm.LatentDiffusion
  params:
    linear_start: 0.00085
    linear_end: 0.0120
    num_timesteps_cond: 1
    log_every_t: 200
    timesteps: 1000
    first_stage_key: image
    cond_stage_key: caption
    image_size: 64
    channels: 4
    cond_stage_trainable: true   # Note: different from the one we trained before
    conditioning_key: crossattn
    monitor: val/loss_simple_ema
    scale_factor: 0.18215
    use_ema: False
    embedding_reg_weight: 0.0

    personalization_config:
      target: ldm.modules.embedding_manager.EmbeddingManager
      params:
        placeholder_strings: ["*"]
        initializer_words: ['sculpture']
        per_image_tokens: false
        num_vectors_per_token: 6
        progressive_words: False

    unet_config:
      target: ldm.modules.diffusionmodules.openaimodel.UNetModel
      params:
        image_size: 32 # unused
        in_channels: 4
        out_channels: 4
        model_channels: 320
        attention_resolutions: [ 4, 2, 1 ]
        num_res_blocks: 2
        channel_mult: [ 1, 2, 4, 4 ]
        num_heads: 8
        use_spatial_transformer: True
        transformer_depth: 1
        context_dim: 768
        use_checkpoint: True
        legacy: False

    first_stage_config:
      target: ldm.models.autoencoder.AutoencoderKL
      params:
        embed_dim: 4
        monitor: val/rec_loss
        ddconfig:
          double_z: true
          z_channels: 4
          resolution: 256
          in_channels: 3
          out_ch: 3
          ch: 128
          ch_mult:
          - 1
          - 2
          - 4
          - 4
          num_res_blocks: 2
          attn_resolutions: []
          dropout: 0.0
        lossconfig:
          target: torch.nn.Identity

    cond_stage_config:
      target: ldm.modules.encoders.modules.FrozenCLIPEmbedder

data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 1
    num_workers: 2
    wrap: false
    train:
      target: ldm.data.personalized.PersonalizedBase
      params:
        size: 512
        set: train
        per_image_tokens: false
        repeats: 100
    validation:
      target: ldm.data.personalized.PersonalizedBase
      params:
        size: 512
        set: val
        per_image_tokens: false
        repeats: 10

lightning:
  modelcheckpoint:
    params:
      every_n_train_steps: 500
  callbacks:
    image_logger:
      target: main.ImageLogger
      params:
        batch_frequency: 500
        max_images: 5
        increase_log_steps: False

  trainer:
    benchmark: False
    max_steps: 6200
    # max_steps: 4000
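A finetune config like this is normally fed to the training entry point rather than to invoke.py. A sketch of a textual-inversion style run, assuming the repository's main.py trainer is used and with placeholder values for the base checkpoint path, run name and training-image folder:

```bash
# resume from a base Stable Diffusion checkpoint and fine-tune the embedding
python3 ./main.py \
  --base ./configs/stable-diffusion/v1-m1-finetune.yaml \
  -t \
  --actual_resume ./models/ldm/stable-diffusion-v1/model.ckpt \
  -n my_finetune_run \
  --gpus 0, \
  --data_root ./path/to/training-images
```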
34
docker-build/Dockerfile
Normal file
@@ -0,0 +1,34 @@
FROM ubuntu:22.10

# use bash
SHELL [ "/bin/bash", "-c" ]

# Install necessary packages
RUN apt-get update \
  && apt-get install -y \
    --no-install-recommends \
    build-essential \
    gcc \
    git \
    libgl1-mesa-glx \
    libglib2.0-0 \
    pip \
    python3 \
    python3-dev \
  && apt-get clean \
  && rm -rf /var/lib/apt/lists/*

# set workdir and copy sources
WORKDIR /invokeai
ARG PIP_REQUIREMENTS=requirements-lin-cuda.txt
COPY . ./environments-and-requirements/${PIP_REQUIREMENTS} ./

# install requirements and link outputs folder
RUN pip install \
  --no-cache-dir \
  -r ${PIP_REQUIREMENTS}

# set Environment, Entrypoint and default CMD
ENV INVOKEAI_ROOT /data
ENTRYPOINT [ "python3", "scripts/invoke.py", "--outdir=/data/outputs" ]
CMD [ "--web", "--host=0.0.0.0" ]
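The build.sh helper shown next wraps this build, but the image can also be built by hand; a sketch, assuming it is run from the repository root and keeps the default CUDA requirements file (the tag mirrors the defaults in docker-build/env.sh):

```bash
docker build \
  --build-arg PIP_REQUIREMENTS=requirements-lin-cuda.txt \
  --tag invokeai:x86_64 \
  --file docker-build/Dockerfile \
  .
```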
49
docker-build/build.sh
Executable file
@@ -0,0 +1,49 @@
#!/usr/bin/env bash
set -e

# IMPORTANT: You need to have a token on huggingface.co to be able to download the checkpoints!!!
# configure values by using env when executing build.sh f.e. `env ARCH=aarch64 ./build.sh`

source ./docker-build/env.sh \
  || echo "please execute docker-build/build.sh from repository root" \
  || exit 1

pip_requirements=${PIP_REQUIREMENTS:-requirements-lin-cuda.txt}
dockerfile=${INVOKE_DOCKERFILE:-docker-build/Dockerfile}

# print the settings
echo "You are using these values:"
echo -e "Dockerfile:\t\t ${dockerfile}"
echo -e "requirements:\t\t ${pip_requirements}"
echo -e "volumename:\t\t ${volumename}"
echo -e "arch:\t\t\t ${arch}"
echo -e "platform:\t\t ${platform}"
echo -e "invokeai_tag:\t\t ${invokeai_tag}\n"

if [[ -n "$(docker volume ls -f name="${volumename}" -q)" ]]; then
  echo "Volume already exists"
  echo
else
  echo -n "creating docker volume "
  docker volume create "${volumename}"
fi

# Build Container
docker build \
  --platform="${platform}" \
  --tag="${invokeai_tag}" \
  --build-arg="PIP_REQUIREMENTS=${pip_requirements}" \
  --file="${dockerfile}" \
  .

docker run \
  --rm \
  --platform="$platform" \
  --name="$project_name" \
  --hostname="$project_name" \
  --mount="source=$volumename,target=/data" \
  --mount="type=bind,source=$HOME/.huggingface,target=/root/.huggingface" \
  --env="HUGGINGFACE_TOKEN=${HUGGINGFACE_TOKEN}" \
  --entrypoint="python3" \
  "${invokeai_tag}" \
  scripts/configure_invokeai.py --yes
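Typical usage mirrors the comment at the top of the script: settings are overridden through environment variables. A sketch, with a placeholder Hugging Face token:

```bash
# run from the repository root; any of the env.sh defaults can be overridden the same way
env ARCH=aarch64 HUGGINGFACE_TOKEN=hf_xxxxxxxxxxxxxxxx ./docker-build/build.sh
```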
13
docker-build/env.sh
Normal file
@@ -0,0 +1,13 @@
#!/usr/bin/env bash

project_name=${PROJECT_NAME:-invokeai}
volumename=${VOLUMENAME:-${project_name}_data}
arch=${ARCH:-x86_64}
platform=${PLATFORM:-Linux/${arch}}
invokeai_tag=${INVOKEAI_TAG:-${project_name}:${arch}}

export project_name
export volumename
export arch
export platform
export invokeai_tag
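A quick sketch of how these defaults resolve and how an override propagates to the other scripts (myfork is just an example name):

```bash
# with no overrides: invokeai, invokeai_data, x86_64, Linux/x86_64, invokeai:x86_64
export PROJECT_NAME=myfork ARCH=aarch64
source ./docker-build/env.sh
echo "${volumename} ${platform} ${invokeai_tag}"
# -> myfork_data Linux/aarch64 myfork:aarch64
```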
15
docker-build/run.sh
Executable file
@@ -0,0 +1,15 @@
#!/usr/bin/env bash
set -e

source ./docker-build/env.sh || echo "please run from repository root" || exit 1

docker run \
  --interactive \
  --tty \
  --rm \
  --platform="$platform" \
  --name="$project_name" \
  --hostname="$project_name" \
  --mount="source=$volumename,target=/data" \
  --publish=9090:9090 \
  "$invokeai_tag" ${1:+$@}
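Any arguments given to run.sh are passed through to the container and replace the Dockerfile's default `--web --host=0.0.0.0` command, so they become invoke.py arguments. A sketch of both modes (the --precision flag is the one documented in the changelog below):

```bash
# start the web UI on the published port 9090 (default CMD from the Dockerfile)
./docker-build/run.sh

# or drop into the interactive invoke> CLI by overriding the default web flags
./docker-build/run.sh --precision=float32
```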
390
docs/CHANGELOG.md
Normal file
@@ -0,0 +1,390 @@
---
title: Changelog
---

# :octicons-log-16: **Changelog**

## v2.1.0 <small>(2 November 2022)</small>

- update mac instructions to use invokeai for env name by @willwillems in https://github.com/invoke-ai/InvokeAI/pull/1030
- Update .gitignore by @blessedcoolant in https://github.com/invoke-ai/InvokeAI/pull/1040
- reintroduce fix for m1 from https://github.com/invoke-ai/InvokeAI/pull/579 missing after merge by @skurovec in https://github.com/invoke-ai/InvokeAI/pull/1056
- Update Stable_Diffusion_AI_Notebook.ipynb (Take 2) by @ChloeL19 in https://github.com/invoke-ai/InvokeAI/pull/1060
- Print out the device type which is used by @manzke in https://github.com/invoke-ai/InvokeAI/pull/1073
- Hires Addition by @hipsterusername in https://github.com/invoke-ai/InvokeAI/pull/1063
- fix for "1 leaked semaphore objects to clean up at shutdown" on M1 by @skurovec in https://github.com/invoke-ai/InvokeAI/pull/1081
- Forward dream.py to invoke.py using the same interpreter, add deprecation warning by @db3000 in https://github.com/invoke-ai/InvokeAI/pull/1077
- fix noisy images at high step counts by @lstein in https://github.com/invoke-ai/InvokeAI/pull/1086
- Generalize facetool strength argument by @db3000 in https://github.com/invoke-ai/InvokeAI/pull/1078
- Enable fast switching among models at the invoke> command line by @lstein in https://github.com/invoke-ai/InvokeAI/pull/1066
- Fix Typo, committed changing ldm environment to invokeai by @jdries3 in https://github.com/invoke-ai/InvokeAI/pull/1095
- Update generate.py by @unreleased in https://github.com/invoke-ai/InvokeAI/pull/1109
- Update 'ldm' env to 'invokeai' in troubleshooting steps by @19wolf in https://github.com/invoke-ai/InvokeAI/pull/1125
- Fixed documentation typos and resolved merge conflicts by @rupeshs in https://github.com/invoke-ai/InvokeAI/pull/1123
- Fix broken doc links, fix malaprop in the project subtitle by @majick in https://github.com/invoke-ai/InvokeAI/pull/1131
- Only output facetool parameters if enhancing faces by @db3000 in https://github.com/invoke-ai/InvokeAI/pull/1119
- Update gitignore to ignore codeformer weights at new location by @spezialspezial in https://github.com/invoke-ai/InvokeAI/pull/1136
- fix links to point to invoke-ai.github.io #1117 by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/1143
- Rework-mkdocs by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/1144
- add option to CLI and pngwriter that allows user to set PNG compression level by @lstein in https://github.com/invoke-ai/InvokeAI/pull/1127
- Fix img2img DDIM index out of bound by @wfng92 in https://github.com/invoke-ai/InvokeAI/pull/1137
- Fix gh actions by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/1128
- Add text prompt to inpaint mask support by @lstein in https://github.com/invoke-ai/InvokeAI/pull/1133
- Respect http[s] protocol when making socket.io middleware by @damian0815 in https://github.com/invoke-ai/InvokeAI/pull/976
- WebUI: Adds Codeformer support by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/1151
- Skips normalizing prompts for web UI metadata by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/1165
- Add Asymmetric Tiling by @carson-katri in https://github.com/invoke-ai/InvokeAI/pull/1132
- Web UI: Increases max CFG Scale to 200 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/1172
- Corrects color channels in face restoration; Fixes #1167 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/1175
- Flips channels using array slicing instead of using OpenCV by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/1178
- Fix typo in docs: s/Formally/Formerly by @noodlebox in https://github.com/invoke-ai/InvokeAI/pull/1176
- fix clipseg loading problems by @lstein in https://github.com/invoke-ai/InvokeAI/pull/1177
- Correct color channels in upscale using array slicing by @wfng92 in https://github.com/invoke-ai/InvokeAI/pull/1181
- Web UI: Filters existing images when adding new images; Fixes #1085 by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/1171
- fix a number of bugs in textual inversion by @lstein in https://github.com/invoke-ai/InvokeAI/pull/1190
- Improve !fetch, add !replay command by @ArDiouscuros in https://github.com/invoke-ai/InvokeAI/pull/882
- Fix generation of image with s>1000 by @holstvoogd in https://github.com/invoke-ai/InvokeAI/pull/951
- Web UI: Gallery improvements by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/1198
- Update CLI.md by @krummrey in https://github.com/invoke-ai/InvokeAI/pull/1211
- outcropping improvements by @lstein in https://github.com/invoke-ai/InvokeAI/pull/1207
- add support for loading VAE autoencoders by @lstein in https://github.com/invoke-ai/InvokeAI/pull/1216
- remove duplicate fix_func for MPS by @wfng92 in https://github.com/invoke-ai/InvokeAI/pull/1210
- Metadata storage and retrieval fixes by @lstein in https://github.com/invoke-ai/InvokeAI/pull/1204
- nix: add shell.nix file by @Cloudef in https://github.com/invoke-ai/InvokeAI/pull/1170
- Web UI: Changes vite dist asset paths to relative by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/1185
- Web UI: Removes isDisabled from PromptInput by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/1187
- Allow user to generate images with initial noise as on M1 / mps system by @ArDiouscuros in https://github.com/invoke-ai/InvokeAI/pull/981
- feat: adding filename format template by @plucked in https://github.com/invoke-ai/InvokeAI/pull/968
- Web UI: Fixes broken bundle by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/1242
- Support runwayML custom inpainting model by @lstein in https://github.com/invoke-ai/InvokeAI/pull/1243
- Update IMG2IMG.md by @talitore in https://github.com/invoke-ai/InvokeAI/pull/1262
- New dockerfile - including a build- and a run- script as well as a GH-Action by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/1233
- cut over from karras to model noise schedule for higher steps by @lstein in https://github.com/invoke-ai/InvokeAI/pull/1222
- Prompt tweaks by @lstein in https://github.com/invoke-ai/InvokeAI/pull/1268
- Outpainting implementation by @Kyle0654 in https://github.com/invoke-ai/InvokeAI/pull/1251
- fixing aspect ratio on hires by @tjennings in https://github.com/invoke-ai/InvokeAI/pull/1249
- Fix-build-container-action by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/1274
- handle all unicode characters by @damian0815 in https://github.com/invoke-ai/InvokeAI/pull/1276
- adds models.user.yml to .gitignore by @JakeHL in https://github.com/invoke-ai/InvokeAI/pull/1281
- remove debug branch, set fail-fast to false by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/1284
- Protect-secrets-on-pr by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/1285
- Web UI: Adds initial inpainting implementation by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/1225
- fix environment-mac.yml - tested on x64 and arm64 by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/1289
- Use proper authentication to download model by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/1287
- Prevent indexing error for mode RGB by @spezialspezial in https://github.com/invoke-ai/InvokeAI/pull/1294
- Integrate sd-v1-5 model into test matrix (easily expandable), remove unnecessary caches by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/1293
- add --no-interactive to preload_models step by @mauwii in https://github.com/invoke-ai/InvokeAI/pull/1302
- 1-click installer and updater. Uses micromamba to install git and conda into a contained environment (if necessary) before running the normal installation script by @cmdr2 in https://github.com/invoke-ai/InvokeAI/pull/1253
- preload_models.py script downloads the weight files by @lstein in https://github.com/invoke-ai/InvokeAI/pull/1290

## v2.0.1 <small>(13 October 2022)</small>

- fix noisy images at high step count when using k\* samplers
- dream.py script now calls invoke.py module directly rather than via a new python process (which could break the environment)

## v2.0.0 <small>(9 October 2022)</small>

- `dream.py` script renamed `invoke.py`. A `dream.py` script wrapper remains for backward compatibility.
- Completely new WebGUI - launch with `python3 scripts/invoke.py --web`
- Support for [inpainting](features/INPAINTING.md) and [outpainting](features/OUTPAINTING.md)
- img2img runs on all k\* samplers
- Support for [negative prompts](features/PROMPTS.md#negative-and-unconditioned-prompts)
- Support for CodeFormer face reconstruction
- Support for Textual Inversion on Macintoshes
- Support in both WebGUI and CLI for [post-processing of previously-generated images](features/POSTPROCESS.md) using facial reconstruction, ESRGAN upscaling, outcropping (similar to DALL-E infinite canvas), and "embiggen" upscaling. See the `!fix` command.
- New `--hires` option on `invoke>` line allows [larger images to be created without duplicating elements](features/CLI.md#this-is-an-example-of-txt2img), at the cost of some performance.
- New `--perlin` and `--threshold` options allow you to add and control variation during image generation (see [Thresholding and Perlin Noise Initialization](features/OTHER.md#thresholding-and-perlin-noise-initialization-options))
- Extensive metadata now written into PNG files, allowing reliable regeneration of images and tweaking of previous settings.
- Command-line completion in `invoke.py` now works on Windows, Linux and Mac platforms.
- Improved [command-line completion behavior](features/CLI.md). New commands added:
  - List command-line history with `!history`
  - Search command-line history with `!search`
  - Clear history with `!clear`
- Deprecated `--full_precision` / `-F`. Simply omit it and `invoke.py` will auto configure. To switch away from auto use the new flag like `--precision=float32`.

## v1.14 <small>(11 September 2022)</small>

- Memory optimizations for small-RAM cards. 512x512 now possible on 4 GB GPUs.
- Full support for Apple hardware with M1 or M2 chips.
- Add "seamless mode" for circular tiling of image. Generates beautiful effects. ([prixt](https://github.com/prixt)).
- Inpainting support.
- Improved web server GUI.
- Lots of code and documentation cleanups.

## v1.13 <small>(3 September 2022)</small>

- Support image variations (see [VARIATIONS](features/VARIATIONS.md)) ([Kevin Gibbons](https://github.com/bakkot) and many contributors and reviewers)
- Supports a Google Colab notebook for a standalone server running on Google hardware [Arturo Mendivil](https://github.com/artmen1516)
- WebUI supports GFPGAN/ESRGAN facial reconstruction and upscaling [Kevin Gibbons](https://github.com/bakkot)
- WebUI supports incremental display of in-progress images during generation [Kevin Gibbons](https://github.com/bakkot)
- A new configuration file scheme that allows new models (including upcoming stable-diffusion-v1.5) to be added without altering the code. ([David Wager](https://github.com/maddavid12))
- Can specify --grid on invoke.py command line as the default.
- Miscellaneous internal bug and stability fixes.
- Works on M1 Apple hardware.
- Multiple bug fixes.

---

## v1.12 <small>(28 August 2022)</small>

- Improved file handling, including ability to read prompts from standard input. (kudos to [Yunsaki](https://github.com/yunsaki))
- The web server is now integrated with the invoke.py script. Invoke by adding --web to the invoke.py command arguments.
- Face restoration and upscaling via GFPGAN and Real-ESGAN are now automatically enabled if the GFPGAN directory is located as a sibling to Stable Diffusion. VRAM requirements are modestly reduced. Thanks to both [Blessedcoolant](https://github.com/blessedcoolant) and [Oceanswave](https://github.com/oceanswave) for their work on this.
- You can now swap samplers on the invoke> command line. [Blessedcoolant](https://github.com/blessedcoolant)

---

## v1.11 <small>(26 August 2022)</small>

- NEW FEATURE: Support upscaling and face enhancement using the GFPGAN module. (kudos to [Oceanswave](https://github.com/Oceanswave))
- You now can specify a seed of -1 to use the previous image's seed, -2 to use the seed for the image generated before that, etc. Seed memory only extends back to the previous command, but will work on all images generated with the -n# switch.
- Variant generation support temporarily disabled pending more general solution.
- Created a feature branch named **yunsaki-morphing-invoke** which adds experimental support for iteratively modifying the prompt and its parameters. Please see [Pull Request #86](https://github.com/lstein/stable-diffusion/pull/86) for a synopsis of how this works. Note that when this feature is eventually added to the main branch, it may be modified significantly.

---

## v1.10 <small>(25 August 2022)</small>

- A barebones but fully functional interactive web server for online generation of txt2img and img2img.

---

## v1.09 <small>(24 August 2022)</small>

- A new -v option allows you to generate multiple variants of an initial image in img2img mode. (kudos to [Oceanswave](https://github.com/Oceanswave). [See this discussion in the PR for examples and details on use](https://github.com/lstein/stable-diffusion/pull/71#issuecomment-1226700810))
- Added ability to personalize text to image generation (kudos to [Oceanswave](https://github.com/Oceanswave) and [nicolai256](https://github.com/nicolai256))
- Enabled all of the samplers from k_diffusion

---

## v1.08 <small>(24 August 2022)</small>

- Escape single quotes on the invoke> command before trying to parse. This avoids parse errors.
- Removed instruction to get Python3.8 as first step in Windows install. Anaconda3 does it for you.
- Added bounds checks for numeric arguments that could cause crashes.
- Cleaned up the copyright and license agreement files.

---

## v1.07 <small>(23 August 2022)</small>

- Image filenames will now never fill gaps in the sequence, but will be assigned the next higher name in the chosen directory. This ensures that the alphabetic and chronological sort orders are the same.

---

## v1.06 <small>(23 August 2022)</small>

- Added weighted prompt support contributed by [xraxra](https://github.com/xraxra)
- Example of using weighted prompts to tweak a demonic figure contributed by [bmaltais](https://github.com/bmaltais)

---

## v1.05 <small>(22 August 2022 - after the drop)</small>

- Filenames now use the following formats:
  - 000010.95183149.png and 000010.26742632.png -- two files produced by the same command (e.g. -n2), distinguished by a different seed.
  - 000011.455191342.01.png and 000011.455191342.02.png -- two files produced by the same command using a batch size>1 (e.g. -b2); they have the same seed.
  - 000011.4160627868.grid#1-4.png -- a grid of four images (-g); the whole grid can be regenerated with the indicated key.
- It should no longer be possible for one image to overwrite another
- You can use the "cd" and "pwd" commands at the invoke> prompt to set and retrieve the path of the output directory.

---

## v1.04 <small>(22 August 2022 - after the drop)</small>

- Updated README to reflect installation of the released weights.
- Suppressed very noisy and inconsequential warning when loading the frozen CLIP tokenizer.

---

## v1.03 <small>(22 August 2022)</small>

- The original txt2img and img2img scripts from the CompViz repository have been moved into a subfolder named "orig_scripts", to reduce confusion.

---

## v1.02 <small>(21 August 2022)</small>

- A copy of the prompt and all of its switches and options is now stored in the corresponding image in a tEXt metadata field named "Dream". You can read the prompt using scripts/images2prompt.py, or an image editor that allows you to explore the full metadata. **Please run "conda env update" to load the k_lms dependencies!!**

---

## v1.01 <small>(21 August 2022)</small>

- added k_lms sampling. **Please run "conda env update" to load the k_lms dependencies!!**
- use half precision arithmetic by default, resulting in faster execution and lower memory requirements. Pass argument --full_precision to invoke.py to get slower but more accurate image generation.

---

## Links

- **[Read Me](index.md)**
BIN  docs/assets/Lincoln-and-Parrot-512-transparent.png  (Executable file, 284 KiB)
BIN  docs/assets/Lincoln-and-Parrot-512.png  (Normal file, 252 KiB)
BIN  docs/assets/canvas/biker_granny.png  (Normal file, 359 KiB)
BIN  docs/assets/canvas/biker_jacket_granny.png  (Normal file, 528 KiB)
BIN  docs/assets/canvas/mask_granny.png  (Normal file, 601 KiB)
BIN  docs/assets/canvas/staging_area.png  (Normal file, 59 KiB)
BIN  docs/assets/colab_notebook.png  (Normal file, 799 KiB)
BIN  docs/assets/concepts/image1.png  (Normal file, 122 KiB)
BIN  docs/assets/concepts/image2.png  (Normal file, 128 KiB)
BIN  docs/assets/concepts/image3.png  (Normal file, 99 KiB)
BIN  docs/assets/concepts/image4.png  (Normal file, 112 KiB)
BIN  docs/assets/concepts/image5.png  (Normal file, 107 KiB)
BIN  docs/assets/dream-py-demo.png  (Normal file, 499 KiB)
BIN  docs/assets/dream_web_server.png  (Normal file, 536 KiB)
BIN  docs/assets/img2img/000019.1592514025.png  (Normal file, 270 KiB)