From 1fee9e3a8ac43e4dcafa2daf2ff33afe25cbdfd3 Mon Sep 17 00:00:00 2001
From: osmarks
Date: Wed, 10 Sep 2025 08:46:58 +0000
Subject: [PATCH] =?UTF-8?q?Edit=20=E2=80=98autogollark=E2=80=99?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 autogollark.myco | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/autogollark.myco b/autogollark.myco
index 08a183f..8625871 100644
--- a/autogollark.myco
+++ b/autogollark.myco
@@ -22,7 +22,7 @@ Autogollark currently comprises the dataset, the search API server and the [[htt
 }
 * {Tool capabilities (how to get the data? Examples in context only?!).
 * Synthetic via instruct model.
-* {RL (also include reasoning, of course). Probably hard though (sparse rewards). https://arxiv.org/abs/2403.09629. [[https://arxiv.org/abs/2503.22828]] would probably work. [[https://arxiv.org/abs/2505.15778]] [[https://arxiv.org/abs/2505.24864]]
+* {RL (also include reasoning, of course). Probably hard though (sparse rewards). https://arxiv.org/abs/2403.09629. [[https://arxiv.org/abs/2503.22828]] would probably work. [[https://arxiv.org/abs/2505.15778]] [[https://arxiv.org/abs/2505.24864]] [[https://arxiv.org/abs/2509.06160]]
 * Unclear whether model could feasibly learn tool use "from scratch", so still need SFT pipeline.
 }
 * https://arxiv.org/abs/2310.04363 can improve sampling (roughly) //and// train for tool use.