mirror of https://github.com/osmarks/nanogpt-experiments.git
synced 2024-11-10 20:09:58 +00:00

also add a sampling/inference section

This commit is contained in:
parent 23a8e701d2
commit f83dd034e1

13 README.md
@@ -173,6 +173,19 @@ Thou hast no right, no right, but to be sold.
Whoa there, GPT, entering some dark place over there. I didn't really tune the hyperparameters in the config too much, feel free to try!
## sampling / inference
Use the script `sample.py` to sample either from pre-trained GPT-2 models released by OpenAI, or from a model you trained yourself. For example, here is a way to sample from the largest available `gpt2-xl` model:
```
$ python sample.py \
    --init_from=gpt2-xl \
    --start="What is the answer to life, the universe, and everything?" \
    --num_samples=5 --max_new_tokens=100
```
If you'd like to sample from a model you trained, use the `--out_dir` flag to point the code at the appropriate checkpoint directory. You can also prompt the model with some text from a file, e.g. `$ python sample.py --start=FILE:prompt.txt`.
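Under the hood, sampling is an autoregressive loop: feed the context to the model, scale the next-token logits by a temperature, optionally keep only the top-k candidates, and draw from the resulting distribution. Here is a minimal sketch of that loop with a stand-in `model` callable; the names and signatures below are illustrative, not nanoGPT's actual API:

```python
import math
import random

def sample_next(logits, temperature=1.0, top_k=None, rng=random):
    # Scale logits by temperature, optionally truncate to the top-k,
    # then sample an index from the softmax distribution.
    logits = [l / temperature for l in logits]
    if top_k is not None:
        kth = sorted(logits, reverse=True)[top_k - 1]
        logits = [l if l >= kth else float("-inf") for l in logits]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]  # subtract max for stability
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

def generate(model, prompt_tokens, max_new_tokens, **kw):
    # model(tokens) is assumed to return next-token logits for the sequence.
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        tokens.append(sample_next(model(tokens), **kw))
    return tokens
```

With `top_k=1` this reduces to greedy (argmax) decoding; raising `temperature` flattens the distribution and makes samples more diverse.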
## efficiency notes
For simple model benchmarking and profiling, `bench.py` might be useful. It is identical to the meat of the training loop in `train.py`, but omits many of the other complexities.
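The essence of such a benchmark is timing repeated steps after a short untimed warmup, since the first iterations often pay one-off costs (kernel compilation, cache warming). A generic sketch of that pattern, where `step_fn` is a hypothetical placeholder for one forward/backward step rather than `bench.py`'s actual structure:

```python
import time

def benchmark(step_fn, warmup=3, iters=10):
    # Run untimed warmup iterations first, then report the
    # steady-state average latency per step in seconds.
    for _ in range(warmup):
        step_fn()
    t0 = time.perf_counter()
    for _ in range(iters):
        step_fn()
    return (time.perf_counter() - t0) / iters
```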