From c58fc4605cdcdc68b26a330a8a621c02e681e7d4 Mon Sep 17 00:00:00 2001
From: Snehal Raj
Date: Sat, 25 Mar 2023 20:36:46 +0100
Subject: [PATCH] fix small typo

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 8ddef7c..243da36 100644
--- a/README.md
+++ b/README.md
@@ -37,7 +37,7 @@ This creates a `train.bin` and `val.bin` in that data directory. Now it is time
 $ python train.py config/train_shakespeare_char.py
 ```
 
-If you peak inside it, you'll see that we're training a GPT with a context size of up to 256 characters, 384 feature channels, and it is a 6-layer Transformer with 6 heads in each layer. On one A100 GPU this training run takes about 3 minutes and the best validation loss is 1.4697. Based on the configuration, the model checkpoints are being written into the `--out_dir` directory `out-shakespeare-char`. So once the training finishes we can sample from the best model by pointing the sampling script at this directory:
+If you peek inside it, you'll see that we're training a GPT with a context size of up to 256 characters, 384 feature channels, and it is a 6-layer Transformer with 6 heads in each layer. On one A100 GPU this training run takes about 3 minutes and the best validation loss is 1.4697. Based on the configuration, the model checkpoints are being written into the `--out_dir` directory `out-shakespeare-char`. So once the training finishes we can sample from the best model by pointing the sampling script at this directory:
 
 ```
 $ python sample.py --out_dir=out-shakespeare-char