Edit ‘the_seventy_maxims_of_maximally_effective_machine_learning_engineers’
@@ -22,7 +22,7 @@ Based on [[https://schlockmercenary.fandom.com/wiki/The_Seventy_Maxims_of_Maxima
*. If you’re not willing to prune your own layers, you’re not willing to deploy.
*. Give a model a labeled dataset, and it trains for a day. Take its labels away and call it “self-supervised,” and it’ll generate new ones for you to validate tomorrow.
*. If you’re manually labeling data, somebody’s done something wrong.
*. Training loss and validation loss should be easier to tell apart.
*. Memory-bound and compute-bound should be easier to tell apart.
*. Any sufficiently advanced algorithm is indistinguishable from a matrix multiplication.
*. If your model’s failure is covered by the SLA, you didn’t test enough edge cases.
*. “Fire-and-forget training” is fine, provided you never actually forget to monitor the run.