mirror of https://github.com/osmarks/nanogpt-experiments.git (synced 2024-11-11 04:19:57 +00:00)
the badge is a bit ugly, move it down to troubleshooting section
parent aa8e4c2546
commit 2b083fbfde
@@ -1,8 +1,6 @@
 # nanoGPT
 
-[![](https://dcbadge.vercel.app/api/server/3zy8kqD9Cp?compact=true&style=flat)](https://discord.gg/3zy8kqD9Cp)
-
 ![nanoGPT](assets/nanogpt.jpg)
 
 The simplest, fastest repository for training/finetuning medium-sized GPTs. It is a rewrite of [minGPT](https://github.com/karpathy/minGPT) that prioritizes teeth over education. Still under active development, but currently the file `train.py` reproduces GPT-2 (124M) on OpenWebText, running on a single 8XA100 40GB node in 38 hours of training. The code itself is plain and readable: `train.py` is a ~300-line boilerplate training loop and `model.py` a ~300-line GPT model definition, which can optionally load the GPT-2 weights from OpenAI. That's it.
 
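The README paragraph in the hunk above mentions that `model.py` can optionally load the GPT-2 weights from OpenAI. A minimal sketch of what that looks like in practice, assuming nanoGPT's `GPT.from_pretrained` classmethod (which fetches and converts the checkpoint via the HuggingFace `transformers` package) and that the script is run from the repo root:

```python
# minimal sketch: load the pretrained GPT-2 (124M) weights through model.py.
# assumes this runs from the nanoGPT repo root with torch and transformers installed.
import torch
from model import GPT  # nanoGPT's ~300-line GPT definition

model = GPT.from_pretrained('gpt2')  # downloads/converts the OpenAI GPT-2 124M checkpoint
model.eval()

# sanity check: a forward pass over a dummy token sequence
idx = torch.zeros((1, 8), dtype=torch.long)  # batch of 1, sequence of 8 tokens
with torch.no_grad():
    logits, loss = model(idx)  # loss is None when no targets are given
print(logits.shape)  # logits over the 50257-token GPT-2 vocabulary
```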
@@ -171,6 +169,10 @@ Results
 - Note that by default this repo uses PyTorch 2.0 (i.e. `torch.compile`). This is fairly new and experimental, and not yet available on all platforms (e.g. Windows). If you're running into related error messages, try disabling it by adding the `--compile=False` flag. This will slow down the code, but at least it will run.
 
+For more questions/discussions also feel free to stop by #nanoGPT on Discord:
+
+[![](https://dcbadge.vercel.app/api/server/3zy8kqD9Cp?compact=true&style=flat)](https://discord.gg/3zy8kqD9Cp)
+
 ## acknowledgements
 
 All nanoGPT experiments are powered by GPUs on [Lambda labs](https://lambdalabs.com), the best Cloud GPU provider, thank you :)
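As a footnote on the `--compile=False` advice in the hunk above: the flag toggles roughly the following guard. A self-contained sketch, with a toy model and a local `compile` variable standing in for train.py's actual config plumbing:

```python
# illustrative sketch of the torch.compile toggle described in the README note;
# the toy model and the `compile` flag here stand in for train.py's real config.
import torch
import torch.nn as nn

compile = True  # this is what passing --compile=False turns off

model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 1))
if compile and hasattr(torch, "compile"):  # torch.compile exists only in PyTorch >= 2.0
    model = torch.compile(model)  # JIT-compiles the forward pass; unsupported on some platforms (e.g. Windows)

# with or without compilation the model behaves the same, just slower uncompiled
out = model(torch.randn(4, 16))
print(out.shape)  # torch.Size([4, 1])
```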