mirror of https://github.com/osmarks/website synced 2025-09-11 23:05:59 +00:00

emphasis blocks

This commit is contained in:
osmarks
2025-01-24 15:17:41 +00:00
parent d44443289d
commit 9d9a78a950
4 changed files with 21 additions and 6 deletions


@@ -5,9 +5,7 @@ created: 25/02/2024
 updated: 14/04/2024
 slug: mlrig
 ---
-::: epigraph attribution=@jckarter link=https://twitter.com/jckarter/status/1441441401439358988
-Programmers love talking about the “bare metal”, when in fact the logic board is composed primarily of plastics and silicon oxides.
-:::
+::: emphasis
 ## Summary
@@ -16,6 +14,12 @@ Programmers love talking about the “bare metal”, when in fact the logic boar
 - Older or used parts are good to cut costs (not overly old GPUs).
 - Buy a sufficiently capable PSU.
+:::
+::: epigraph attribution=@jckarter link=https://twitter.com/jckarter/status/1441441401439358988
+Programmers love talking about the “bare metal”, when in fact the logic board is composed primarily of plastics and silicon oxides.
+:::
 ## Long version
 Thanks to the osmarks.net crawlers scouring the web for bloggable information[^1], I've found out that many people are interested in having local hardware to run machine learning workloads (by which I refer to GPU-accelerated inference or training of large neural nets: anything else is [not real](http://www.incompleteideas.net/IncIdeas/BitterLesson.html)), but are doing it wrong, or not at all. There are superficially good part choices which are, in actuality, extremely bad for almost anything, and shiny [prebuilt options](https://lambdalabs.com/gpu-workstations/vector-one) which are far more expensive than necessary. In this article, I will outline what to do to get a useful system at somewhat less expense[^2].
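The diff wraps the summary in a `::: emphasis` container and moves the `::: epigraph attribution=… link=…` block below it. The site's actual build pipeline is not part of this commit, but such `:::`-fenced container blocks could be lowered to HTML roughly like the following sketch (the function name, the `<div>` output shape, and the `data-*` attribute convention are all assumptions for illustration):

```python
import re

def render_blocks(markdown: str) -> str:
    """Hypothetical sketch: turn `::: name key=value ...` container blocks
    into <div class="name" data-key="value"> wrappers. Not the real
    osmarks.net pipeline, which is not shown in this diff."""
    out = []
    stack = []  # names of currently open containers
    for line in markdown.splitlines():
        m = re.match(r"^:::\s*(\S+)?\s*(.*)$", line)
        if m and m.group(1):
            # Opening marker, e.g. `::: emphasis` or `::: epigraph attribution=@jckarter`
            name, rest = m.group(1), m.group(2)
            attrs = dict(kv.split("=", 1) for kv in rest.split() if "=" in kv)
            attr_str = "".join(f' data-{k}="{v}"' for k, v in attrs.items())
            out.append(f'<div class="{name}"{attr_str}>')
            stack.append(name)
        elif line.strip() == ":::" and stack:
            # Bare `:::` closes the innermost open container
            stack.pop()
            out.append("</div>")
        else:
            out.append(line)
    return "\n".join(out)
```

On the summary block from this diff, `render_blocks("::: emphasis\n## Summary\n:::")` would yield `<div class="emphasis">\n## Summary\n</div>`; the epigraph's `attribution` and `link` key-value pairs become data attributes a stylesheet or script could pick up.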