diff --git a/the_seventy_maxims_of_maximally_effective_machine_learning_engineers.myco b/the_seventy_maxims_of_maximally_effective_machine_learning_engineers.myco
index a08093e..0629fb0 100644
--- a/the_seventy_maxims_of_maximally_effective_machine_learning_engineers.myco
+++ b/the_seventy_maxims_of_maximally_effective_machine_learning_engineers.myco
@@ -25,7 +25,7 @@ Based on [[https://schlockmercenary.fandom.com/wiki/The_Seventy_Maxims_of_Maxima
 *. Training loss and validation loss should be easier to tell apart.
 *. Any sufficiently advanced algorithm is indistinguishable from a matrix multiplication.
 *. If your model’s failure is covered by the SLA, you didn’t test enough edge cases.
-*. “Fire-and-forget training” is fine, provided you never actually forget to monitor drift.
+*. “Fire-and-forget training” is fine, provided you never actually forget to monitor the run.
 *. Don’t be afraid to be the first to try a random seed.
 *. If the cost of cloud compute is high enough, you might get promoted for shutting down idle instances.
 *. The enemy of my bias is my variance. No more. No less.