lvmin
2023-02-20 16:19:34 -08:00
parent 73a55baba3
commit 21df0d2b8f


@@ -267,4 +267,4 @@ Because we use zero convolutions, the SD should always be able to predict meanin
You will always find that at some point in training, the model "suddenly" becomes able to fit some training conditions. This means that you will get a basically usable model at about 3k to 7k steps (further training will improve it, but the model after the first "sudden converge" should be basically functional).
- Note that 3k to 7k steps is not very large, and you should consider a larger batch size rather than more training steps. If you can observe the fitting at 3k steps, then rather than training for 300k steps, a better idea is to use 100× gradient accumulation to train those 3k steps with a 100× batch size. Perhaps we should not take this *too* far, but since the "sudden converge" will always happen at some point, getting a better convergence there is more important.
+ Note that 3k to 7k steps is not very large, and you should consider a larger batch size rather than more training steps. If you can observe the "sudden converge" at 3k steps, then rather than training for 300k steps, a better idea is to use 100× gradient accumulation to train those 3k steps with a 100× batch size. Perhaps we should not take this *too* far, but since the "sudden converge" will always happen at some point, getting a better convergence there is more important.
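
For readers who have not used gradient accumulation before, here is a minimal PyTorch sketch of the idea; the `model`, `loss_fn`, and `loader` arguments are hypothetical placeholders for your own training setup, not part of this repo:

```python
import torch

def train_with_accumulation(model, loss_fn, loader, accum_steps=100, lr=1e-5):
    """Accumulate gradients over `accum_steps` micro-batches so each optimizer
    update behaves like one step on a batch `accum_steps` times larger."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    optimizer.zero_grad()
    for step, (cond, target) in enumerate(loader):
        loss = loss_fn(model(cond), target)
        # Scale the loss so the accumulated gradient is the average over the window.
        (loss / accum_steps).backward()
        if (step + 1) % accum_steps == 0:
            optimizer.step()       # one "big batch" update per 100 micro-batches
            optimizer.zero_grad()
```

The memory cost stays that of a single micro-batch, which is why accumulation is a cheap way to trade extra wall-clock time per update for the larger effective batch size recommended above.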