From 26aa5f3eadd5df5a9e0b0fdb4736e7b565e9f9f1 Mon Sep 17 00:00:00 2001
From: Jonathan Rahn
Date: Wed, 4 Jan 2023 10:28:13 +0100
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 002d9cb..e66434e 100644
--- a/README.md
+++ b/README.md
@@ -46,7 +46,7 @@ Training on 1 A100 40GB GPU overnight currently gets loss ~3.74, training on 4 g
 
 ## finetuning
 
-For an example of how to finetune a GPT on new text go to `data/shakespeare` and look at `prepare.py` to download the tiny shakespeare dataset and render it into a `train.bin` and `val.bin`. Unlike OpenWebText this will run in seconds. Finetuning takes very little time, e.g. on a single GPT just a few minutes. Run an example finetuning like:
+For an example of how to finetune a GPT on new text go to `data/shakespeare` and look at `prepare.py` to download the tiny shakespeare dataset and render it into a `train.bin` and `val.bin`. Unlike OpenWebText this will run in seconds. Finetuning takes very little time, e.g. on a single GPU just a few minutes. Run an example finetuning like:
 
 ```
 $ python train.py finetune_shakespeare