diff --git a/docs/train.md b/docs/train.md
index 6a28214..a58cedb 100644
--- a/docs/train.md
+++ b/docs/train.md
@@ -180,7 +180,7 @@ Thanks to our organized dataset pytorch object and the power of pytorch_lightnin
 Now, you may take a look at [Pytorch Lightning Official DOC](https://pytorch-lightning.readthedocs.io/en/latest/api/pytorch_lightning.trainer.trainer.Trainer.html?highlight=trainer) to find out how to enable many useful features like gradient accumulation, multiple GPU training, accelerated dataset loading, flexible checkpoint saving, etc. All these only need about one line of code. Great!
 
-Note that if you find OOM, perhaps you need to enable [Low VRAM mode](docs/low_vram.md), and perhaps you also need to use smaller batch size and gradient accumulation. Or you may also want to use some “advanced” tricks like sliced attention or xformers. For example:
+Note that if you find OOM, perhaps you need to enable [Low VRAM mode](low_vram.md), and perhaps you also need to use smaller batch size and gradient accumulation. Or you may also want to use some “advanced” tricks like sliced attention or xformers. For example:
 
 ```python
 # Configs
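The doc being patched mentions sliced attention as an OOM workaround without showing the idea. Below is a minimal NumPy sketch of the trick, purely illustrative and not the repository's actual implementation: instead of materializing the full `(n_q, n_k)` attention-score matrix, the queries are processed in small slices so peak memory scales with the slice size. The function names and shapes here are assumptions for the example.

```python
import numpy as np

def attention(q, k, v):
    # Plain attention: materializes the entire (n_q, n_k) score
    # matrix at once, which is what causes OOM at high resolution.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def sliced_attention(q, k, v, slice_size=2):
    # Sliced attention: process query rows in chunks so only a
    # (slice_size, n_k) score matrix is live at any one time.
    # The result is numerically identical to plain attention.
    out = np.empty((q.shape[0], v.shape[-1]))
    for start in range(0, q.shape[0], slice_size):
        out[start:start + slice_size] = attention(q[start:start + slice_size], k, v)
    return out
```

Smaller `slice_size` trades speed for memory; xformers achieves a similar effect with fused memory-efficient attention kernels instead of Python-level slicing.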