mirror of https://github.com/osmarks/nanogpt-experiments.git synced 2024-12-18 06:00:29 +00:00
Testing various LLM-related things.

nanoGPT

The cleanest, smallest, fastest repository for training/finetuning medium-sized GPTs. Still under active development, currently working to reproduce GPT-2 on OpenWebText dataset. The code itself aims by design to be plain and readable: train.py is a ~300-line boilerplate training loop and model.py a ~300-line GPT model definition, which can optionally load the GPT-2 weights from OpenAI. That's it.

install

Dependencies:

  • pytorch <3
  • numpy <3
  • pip install datasets for huggingface datasets <3
  • pip install tiktoken for OpenAI's fast bpe code <3
  • pip install wandb for optional logging <3
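
The pip-installable pieces above can be grabbed in one line (versions unpinned; adjust the torch install command to your CUDA setup as needed):

```shell
pip install torch numpy datasets tiktoken wandb
```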

usage

To prepare a dataset we first tokenize some documents into one long 1D array of token indices. E.g. for OpenWebText see:

$ cd data/openwebtext
$ python prepare.py
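
The idea behind prepare.py can be sketched in miniature with the stdlib alone (character-level tokenization standing in for the GPT-2 BPE; the corpus and filenames here are toy placeholders, not the actual code):

```python
from array import array

# Toy corpus standing in for the OpenWebText documents.
docs = ["hello world", "goodbye world"]

# Build a toy vocabulary and tokenize every document into one long
# 1D sequence of integer token ids (prepare.py uses the GPT-2 BPE instead).
vocab = {ch: i for i, ch in enumerate(sorted(set("".join(docs))))}
ids = array("H", (vocab[ch] for doc in docs for ch in doc))  # "H" = uint16

# Write the ids as raw uint16 bytes, the same on-disk idea as train.bin.
with open("toy_train.bin", "wb") as f:
    ids.tofile(f)

# Reading the bytes back recovers the exact token sequence.
back = array("H")
with open("toy_train.bin", "rb") as f:
    back.fromfile(f, len(ids))
assert back == ids
```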

These commands download and tokenize the OpenWebText dataset, creating a train.bin and val.bin that hold the GPT-2 BPE token ids in one long sequence, stored as raw uint16 bytes. Then we're ready to kick off training. By default the training script currently tries to reproduce the smallest GPT-2 released by OpenAI, i.e. the 124M-parameter version. We can demo train as follows on a single device, though I encourage you to read the code and see all of the settings and paths up top in the file:

$ python train.py

To train using PyTorch Distributed Data Parallel (DDP), run the script with torchrun. For example, to train on a node with 4 GPUs:

$ torchrun --standalone --nproc_per_node=4 train.py
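
torchrun launches one process per GPU and sets RANK, LOCAL_RANK, and WORLD_SIZE in each process's environment; a sketch of how a training script can detect whether it is running under DDP (the helper name is hypothetical, not the actual code in train.py):

```python
import os

def ddp_info():
    """Return (is_ddp, rank, local_rank, world_size) from torchrun's env vars."""
    rank = int(os.environ.get("RANK", -1))
    if rank == -1:
        # plain `python train.py` run: single process, no DDP
        return False, 0, 0, 1
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    world_size = int(os.environ.get("WORLD_SIZE", 1))
    return True, rank, local_rank, world_size

is_ddp, rank, local_rank, world_size = ddp_info()
# Under `torchrun --standalone --nproc_per_node=4 train.py`, each of the 4
# processes would see is_ddp=True and a distinct rank/local_rank in 0..3.
```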

Once some checkpoints are written to the output directory (./out by default), we can sample from the model:

$ python sample.py

Training on 1 GPU overnight currently gets loss down to ~3.74. Random chance at initialization is -ln(1/50257) ≈ 10.82, which brings us to baselines.
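
The random-chance figure is just the cross-entropy of a uniform prediction over the GPT-2 vocabulary of 50257 tokens, which is easy to check:

```python
import math

vocab_size = 50257  # GPT-2 BPE vocabulary size
# A uniform prediction assigns probability 1/vocab_size to the correct token,
# so the expected cross-entropy loss is -ln(1/vocab_size) = ln(vocab_size).
chance_loss = -math.log(1 / vocab_size)
print(f"{chance_loss:.2f}")  # 10.82
```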

baselines

The OpenAI GPT-2 checkpoints allow us to get some baselines in place for OpenWebText. We can get the numbers as follows:

$ python train.py eval_gpt2
$ python train.py eval_gpt2_medium
$ python train.py eval_gpt2_large
$ python train.py eval_gpt2_xl

and observe the following losses on train and val:

model        params  train loss  val loss
gpt2         124M    3.11        3.12
gpt2-medium  350M    2.85        2.84
gpt2-large   774M    2.66        2.67
gpt2-xl      1558M   2.56        2.54

I briefly tried finetuning gpt2 a bit more on our OWT and didn't notice dramatic improvements, suggesting that OWT is not all that different from WebText in terms of the data distribution, but this deserves a more thorough attempt once the code is in a better place.

benchmarking

For model benchmarking, bench.py may be useful. It is identical to what happens in the meat of the training loop of train.py, but omits much of the other complexity.
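
As a rough illustration of what such a benchmarking loop measures, here is a minimal pure-Python stand-in (the real bench.py times the forward/backward of the GPT model on a GPU; the workload and names here are hypothetical):

```python
import time

def step(n=10_000):
    """Dummy workload standing in for one forward/backward pass."""
    s = 0
    for i in range(n):
        s += i * i
    return s

# Warm up once (for a GPU model this would let kernels compile/cache),
# then time a fixed number of iterations and report the mean per-step time.
step()
iters = 20
t0 = time.perf_counter()
for _ in range(iters):
    step()
dt = time.perf_counter() - t0
print(f"{dt / iters * 1000:.3f} ms/iter")
```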