diff --git a/autogollark.myco b/autogollark.myco
index 83cca3b..89808f9 100644
--- a/autogollark.myco
+++ b/autogollark.myco
@@ -25,7 +25,7 @@ Autogollark currently comprises the dataset, the search API server and the [[htt
 * {RL (also include reasoning, of course). Probably hard though (sparse rewards). https://arxiv.org/abs/2403.09629. [[https://arxiv.org/abs/2503.22828]] would probably work. [[https://arxiv.org/abs/2505.15778]] [[https://arxiv.org/abs/2505.24864]] [[https://arxiv.org/abs/2509.06160]]
 * Unclear whether model could feasibly learn tool use "from scratch", so still need SFT pipeline. }
-* https://arxiv.org/abs/2310.04363 can improve sampling (roughly) //and// train for tool use.
+* https://arxiv.org/abs/2310.04363 can improve sampling (roughly) //and// train for tool use. However, it seems really annoying. }
 * {Local finetune only? Would be more tonally consistent but dumber, I think.
 * Temporary bursts of hypercompetence enabled by powerful base model are a key feature. Small model is really repetitive.