Mirror of https://github.com/osmarks/nanogpt-experiments.git, synced 2024-12-18 14:10:28 +00:00

oops missed one # have to fix

commit e7bac659f5
parent 97e2ab1b8d
Author: Andrej Karpathy
Date: 2022-12-29 05:24:14 +00:00


@@ -68,7 +68,7 @@ I briefly tried finetuning gpt2 a bit more on our OWT and didn't notice dramatic
 For model benchmarking `bench.py` might be useful. It's identical what happens in the meat of the training loop of `train.py`, but omits much of the other complexities.
-# todos
+## todos
 A few that I'm aware of, other than the ones mentioned in code: