diff --git a/the_seventy_maxims_of_maximally_effective_machine_learning_engineers.myco b/the_seventy_maxims_of_maximally_effective_machine_learning_engineers.myco
index 1ee7b5e..02b7166 100644
--- a/the_seventy_maxims_of_maximally_effective_machine_learning_engineers.myco
+++ b/the_seventy_maxims_of_maximally_effective_machine_learning_engineers.myco
@@ -7,10 +7,10 @@ Based on [[https://schlockmercenary.fandom.com/wiki/The_Seventy_Maxims_of_Maxima
 *. Feature importance and data leakage should be easier to tell apart.
 *. If increasing model complexity wasn’t your last resort, you failed to add enough layers.
 *. If the accuracy is high enough, stakeholders will stop complaining about the compute costs.
-*. Harsh critiques have their place—usually in the rejected pull requests.
+*. Harsh critiques have their place – usually in the rejected pull requests.
 *. Never turn your back on a reinforcement learner.
 *. Sometimes the only way out is through… through another epoch.
-*. Every dataset is trainable—at least once.
+*. Every dataset is trainable at least once.
 *. A gentle learning rate turneth away divergence. Once the loss stabilizes, crank it up.
 *. Do unto others’ hyperparameters as you would have them do unto yours.
 *. “Innovative architecture” means never asking “did we implement a proper baseline?”
@@ -69,4 +69,4 @@ Based on [[https://schlockmercenary.fandom.com/wiki/The_Seventy_Maxims_of_Maxima
 *. If you can’t explain it, cite the arXiv paper.
 *. Deploying with confidence intervals doesn’t mean you shouldn’t also deploy with a kill switch.
 *. Sometimes SOTA is a function of who had the biggest TPU pod.
-*. Bugs are not an option—they are mandatory. The option is whether or not to catch them before releasing the paper.
\ No newline at end of file
+*. Bugs are not an option – they are mandatory. The option is whether or not to catch them before releasing the paper.
\ No newline at end of file