mirror of
https://github.com/osmarks/nanogpt-experiments.git
synced 2024-12-24 17:10:29 +00:00
## openwebtext dataset
after running `prepare.py` (preprocessing) we get:
- train.bin is ~17GB, val.bin ~8.5MB
- train has ~9B tokens (9,035,582,198)
- val has ~4M tokens (4,434,897)
this came from 8,013,769 documents in total.
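The resulting `.bin` files can be inspected directly with `numpy`; below is a minimal sketch, assuming (as in nanoGPT's `prepare.py`) the files are flat arrays of `uint16` GPT-2 BPE token ids. The `toy.bin` file and its token values are made up for demonstration:

```python
import numpy as np

def load_tokens(path):
    # memmap avoids loading the full ~17GB train.bin into RAM;
    # tokens are assumed to be raw little-endian uint16 ids
    return np.memmap(path, dtype=np.uint16, mode="r")

# toy demonstration: write a few fake token ids, then read them back
toy = np.array([50256, 11, 290], dtype=np.uint16)
toy.tofile("toy.bin")

tokens = load_tokens("toy.bin")
print(len(tokens))  # token count of the file
```

Replacing `"toy.bin"` with `train.bin` should report roughly the 9,035,582,198 tokens quoted above.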
references:
- OpenAI's WebText dataset is discussed in [GPT-2 paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
- [OpenWebText](https://skylion007.github.io/OpenWebTextCorpus/) dataset