From a2929d84e66ab3597787de88dabb2374c6c389ba Mon Sep 17 00:00:00 2001
From: Seth <60856766+sethupavan12@users.noreply.github.com>
Date: Wed, 12 Apr 2023 18:15:07 +0100
Subject: [PATCH] Fix link to Trainer pytorch lightning

---
 docs/train.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/train.md b/docs/train.md
index 1c94ea1..fa77392 100644
--- a/docs/train.md
+++ b/docs/train.md
@@ -187,7 +187,7 @@ trainer.fit(model, dataloader)
 
 Thanks to our organized dataset pytorch object and the power of pytorch_lightning, the entire code is just super short.
 
-Now, you may take a look at [Pytorch Lightning Official DOC](https://pytorch-lightning.readthedocs.io/en/latest/api/pytorch_lightning.trainer.trainer.Trainer.html?highlight=trainer) to find out how to enable many useful features like gradient accumulation, multiple GPU training, accelerated dataset loading, flexible checkpoint saving, etc. All these only need about one line of code. Great!
+Now, you may take a look at [Pytorch Lightning Official DOC](https://lightning.ai/docs/pytorch/stable/api/lightning.pytorch.trainer.trainer.Trainer.html#trainer) to find out how to enable many useful features like gradient accumulation, multiple GPU training, accelerated dataset loading, flexible checkpoint saving, etc. All these only need about one line of code. Great!
 
 Note that if you find OOM, perhaps you need to enable [Low VRAM mode](low_vram.md), and perhaps you also need to use smaller batch size and gradient accumulation. Or you may also want to use some “advanced” tricks like sliced attention or xformers. For example:
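
For context on the Trainer features the updated link documents (gradient accumulation, multi-GPU training, checkpoint saving), below is a minimal sketch, not part of the patch, of how they are enabled with `pytorch_lightning.Trainer`. The specific option values are illustrative, and `model` and `dataloader` stand in for the objects built earlier in docs/train.md:

```python
# Hypothetical sketch of the one-line Trainer options the linked doc describes;
# `model` and `dataloader` are the objects defined earlier in docs/train.md.
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint

trainer = pl.Trainer(
    accelerator="gpu",
    devices=2,                     # multiple-GPU training
    precision=16,                  # mixed precision, reduces VRAM use
    accumulate_grad_batches=4,     # gradient accumulation
    callbacks=[ModelCheckpoint(every_n_train_steps=1000)],  # periodic checkpoint saving
)
trainer.fit(model, dataloader)
```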