
Merge pull request #16 from jorahn/patch-1

Update README.md
Author: Andrej
Date: 2023-01-04 09:08:50 -08:00
Committed by: GitHub


@@ -46,7 +46,7 @@ Training on 1 A100 40GB GPU overnight currently gets loss ~3.74, training on 4 g
 ## finetuning
-For an example of how to finetune a GPT on new text go to `data/shakespeare` and look at `prepare.py` to download the tiny shakespeare dataset and render it into a `train.bin` and `val.bin`. Unlike OpenWebText this will run in seconds. Finetuning takes very little time, e.g. on a single GPT just a few minutes. Run an example finetuning like:
+For an example of how to finetune a GPT on new text go to `data/shakespeare` and look at `prepare.py` to download the tiny shakespeare dataset and render it into a `train.bin` and `val.bin`. Unlike OpenWebText this will run in seconds. Finetuning takes very little time, e.g. on a single GPU just a few minutes. Run an example finetuning like:
 ```
 $ python train.py finetune_shakespeare
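
For context, here is a minimal sketch of what a `prepare.py` along the lines described above might do: download the tiny shakespeare text, tokenize it, and write the token ids out as flat binary `train.bin` and `val.bin` files. The GPT-2 BPE tokenizer via `tiktoken` and the 90/10 train/val split are assumptions for illustration, not details confirmed by this diff.

```
# Sketch of a prepare.py for the tiny shakespeare dataset (assumptions:
# tiktoken GPT-2 BPE tokenizer, 90/10 split, uint16 token ids).
import os
import requests
import numpy as np
import tiktoken

url = "https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt"
if not os.path.exists("input.txt"):
    with open("input.txt", "w") as f:
        f.write(requests.get(url).text)

with open("input.txt") as f:
    data = f.read()

# Split the raw text into train and validation portions.
n = len(data)
train_data = data[: int(n * 0.9)]
val_data = data[int(n * 0.9):]

# Encode to token ids and dump as flat binary arrays.
enc = tiktoken.get_encoding("gpt2")
train_ids = np.array(enc.encode_ordinary(train_data), dtype=np.uint16)
val_ids = np.array(enc.encode_ordinary(val_data), dtype=np.uint16)
train_ids.tofile("train.bin")
val_ids.tofile("val.bin")
```

A training script can then read these `.bin` files cheaply (e.g. via `np.memmap`) and sample random contiguous blocks as training batches, which is why the dataset is rendered to flat token arrays rather than kept as text.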