diff --git a/the_seventy_maxims_of_maximally_effective_machine_learning_engineers.myco b/the_seventy_maxims_of_maximally_effective_machine_learning_engineers.myco
index 0ae4774..5905947 100644
--- a/the_seventy_maxims_of_maximally_effective_machine_learning_engineers.myco
+++ b/the_seventy_maxims_of_maximally_effective_machine_learning_engineers.myco
@@ -8,7 +8,7 @@ Based on [[https://schlockmercenary.fandom.com/wiki/The_Seventy_Maxims_of_Maxima
 *. If increasing model complexity wasn’t your last resort, you failed to add enough layers.
 *. If the accuracy is high enough, stakeholders will stop complaining about the compute costs.
 *. Harsh critiques have their place—usually in the rejected pull requests.
-*. Never turn your back on a Claude Code.
+*. Never turn your back on a reinforcement learner.
 *. Sometimes the only way out is through… through another epoch.
 *. Every dataset is trainable—at least once.
 *. A gentle learning rate turneth away divergence. Once the loss stabilizes, crank it up.
@@ -16,7 +16,7 @@ Based on [[https://schlockmercenary.fandom.com/wiki/The_Seventy_Maxims_of_Maxima
 *. “Innovative architecture” means never asking “did we implement a proper baseline?”
 *. Only you can prevent reward hacking.
 *. Your model is in the leaderboards: be sure it has dropout.
-*. The longer training goes without overfitting, the bigger the validation-set disaster.
+*. The longer your Claude Code runs without input, the bigger the impending disaster.
 *. If the optimizer is leading from the front, watch for exploding gradients in the rear.
 *. The field advances when you turn competitors into collaborators, but that’s not the same as your h-index advancing.
 *. If you’re not willing to prune your own layers, you’re not willing to deploy.