From def2617c80aa9c097b6c51f3ec84cd8960f21575 Mon Sep 17 00:00:00 2001
From: osmarks
Date: Sat, 10 May 2025 10:29:02 +0000
Subject: [PATCH] =?UTF-8?q?Edit=20=E2=80=98autogollark=E2=80=99?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 autogollark.myco | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/autogollark.myco b/autogollark.myco
index c9f9247..9583c84 100644
--- a/autogollark.myco
+++ b/autogollark.myco
@@ -22,7 +22,7 @@ Autogollark currently comprises the dataset, the search API server and the [[htt
 }
 * {Tool capabilities (how to get the data? Examples in context only?!).
 * Synthetic via instruct model.
-* RL (also include reasoning, of course). Probably hard though (sparse rewards). https://arxiv.org/abs/2403.09629. https://arxiv.org/abs/2503.22828 would probably work.
+* RL (also include reasoning, of course). Probably hard though (sparse rewards). https://arxiv.org/abs/2403.09629. [[https://arxiv.org/abs/2503.22828]] would probably work.
 }
 * {Local finetune only? Would be more tonally consistent but dumber, I think.
 * Temporary bursts of hypercompetence enabled by powerful base model are a key feature. Small model is really repetitive.