## openwebtext dataset

after running `prepare.py` (preprocess) we get:

- train.bin is ~17GB, val.bin ~8.5MB
- train has ~9B tokens (9,035,582,198)
- val has ~4M tokens (4,434,897)

this came from 8,013,769 documents in total.
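
to sanity-check the output, the files can be memory-mapped back with numpy. a minimal sketch, assuming `prepare.py` wrote the token ids as raw uint16 GPT-2 BPE ids with no header; the `block_size` and `batch_size` values here are just illustrative:

```python
import numpy as np

# memory-map so the ~17GB train.bin is not loaded into RAM at once
train_data = np.memmap('train.bin', dtype=np.uint16, mode='r')
val_data = np.memmap('val.bin', dtype=np.uint16, mode='r')

print(f"train: {len(train_data):,} tokens")  # ~9,035,582,198
print(f"val:   {len(val_data):,} tokens")    # ~4,434,897

# grab a random batch of (x, y) pairs for next-token prediction
block_size, batch_size = 1024, 8  # hypothetical sizes for illustration
ix = np.random.randint(len(train_data) - block_size, size=batch_size)
x = np.stack([train_data[i:i+block_size].astype(np.int64) for i in ix])
y = np.stack([train_data[i+1:i+1+block_size].astype(np.int64) for i in ix])
```

memory-mapping keeps the full dataset on disk and pages in only the slices actually touched, which is why everything can live in one flat binary file.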

references:

- OpenAI's WebText dataset is discussed in [GPT-2 paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
- [OpenWebText](https://skylion007.github.io/OpenWebTextCorpus/) dataset