From cfea49d0472310b5799916f01b9a44ed7fd3c369 Mon Sep 17 00:00:00 2001
From: osmarks
Date: Wed, 24 Dec 2025 18:25:22 +0000
Subject: [PATCH] =?UTF-8?q?Edit=20=E2=80=98autogollark=E2=80=99?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 autogollark.myco | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/autogollark.myco b/autogollark.myco
index dff3dd9..df71be8 100644
--- a/autogollark.myco
+++ b/autogollark.myco
@@ -22,22 +22,23 @@ Autogollark currently comprises the dataset, the search API server and the [[htt
 }
 * {Tool capabilities (how to get the data? Examples in context only?!).
 * Synthetic via instruct model.
-* {RL (also include reasoning, of course). Probably hard though (sparse rewards). https://arxiv.org/abs/2403.09629. [[https://arxiv.org/abs/2503.22828]] would probably work. [[https://arxiv.org/abs/2505.15778]] [[https://arxiv.org/abs/2505.24864]] [[https://arxiv.org/abs/2509.06160]]
+* {RL (also include reasoning, of course). Probably hard though (sparse rewards). [[https://arxiv.org/abs/2403.09629]] (bad?). [[https://arxiv.org/abs/2503.22828]] would probably work. [[https://arxiv.org/abs/2505.15778]] [[https://arxiv.org/abs/2505.24864]] [[https://arxiv.org/abs/2509.06160]]
 * Unclear whether model could feasibly learn tool use "from scratch", so still need SFT pipeline.
 }
 * https://arxiv.org/abs/2310.04363 can improve sampling (roughly) //and// train for tool use. However, it seems really annoying.
 }
 * {Local finetune only? Would be more tonally consistent but dumber, I think.
 * Temporary bursts of hypercompetence enabled by powerful base model are a key feature. Small model is really repetitive.
-* Can additionally finetune on "interesting" blog posts etc (ref https://x.com/QiaochuYuan/status/1913382597381767471).
+* Can additionally finetune on "interesting" blog posts etc (ref https://x.com/QiaochuYuan/status/1913382597381767471). Maghammer archival data, books, transcripts.
 * Decision theory training data (synthetic, probably) (ref https://arxiv.org/abs/2411.10588).
-* ~~Pending:~~ Resource now available: XEROGRAPHIC BIFROST 3.
+* ~~Pending:~~ Resource now available: [[XEROGRAPHIC BIFROST]] phase 3.
 * https://arxiv.org/abs/2507.07101
 * https://arxiv.org/abs/2507.01335
+* https://github.com/d0rc/egg.c and https://eshyperscale.github.io/. Does this actually work? Why?
 }
 * MCTS over conversations with non-gollark simulacra? Should find //something// to use spare parallelism on local inference. Best-of-n? https://arxiv.org/abs/2505.10475
 * {Longer context, mux several channels.
-* {No obvious reason Autogollark can't train (and run inference!) on every channel simultaneously, with messages sorted by time and other non-Discord things (tool calls?) inline. Not good use of parallelism but does neatly solve the when-to-respond thing.
+* {No obvious reason Autogollark can't train (and run inference!) on every channel simultaneously, with messages sorted by time and other non-Discord things (tool calls?) inline. Not good use of parallelism but does neatly solve the when-to-respond thing. Maybe we can process channels in parallel and fudge the K/V caches.
 * Context length issues, and subquadratic models are sort of bad, though maybe we can "upcycle" a midsized model to RWKV. This exists somewhere. Not sure of efficiency. Inference code will be awful.
 }
 }
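
The "Best-of-n?" bullet in the hunk above suggests spending spare local-inference parallelism on candidate reranking. Below is a minimal sketch of generic best-of-n sampling (not the method of the linked arxiv 2505.10475), assuming a Hugging Face transformers model; the checkpoint name and the length-normalised log-probability scorer are placeholders, not Autogollark's actual setup.

```python
# Minimal best-of-n sketch: sample several candidate replies in one batched
# generate() call (using the spare parallelism of local inference), then keep
# the candidate a scoring function prefers. The checkpoint name is hypothetical
# and the scorer (mean token log-probability under the same model) is only a
# stand-in for a real reranker.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "placeholder/local-base-model"  # hypothetical checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16, device_map="auto"
)

def best_of_n(prompt: str, n: int = 8, max_new_tokens: int = 128) -> str:
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(
        **inputs,
        do_sample=True,
        temperature=0.9,
        num_return_sequences=n,
        max_new_tokens=max_new_tokens,
        return_dict_in_generate=True,
        output_scores=True,
    )
    # Strip the prompt tokens and decode each sampled continuation.
    continuations = out.sequences[:, inputs["input_ids"].shape[1]:]
    candidates = tokenizer.batch_decode(continuations, skip_special_tokens=True)
    # Stand-in scorer: length-normalised log-probability of each continuation.
    scores = model.compute_transition_scores(out.sequences, out.scores, normalize_logits=True)
    finite = torch.isfinite(scores)
    mean_logprob = scores.masked_fill(~finite, 0.0).sum(-1) / finite.sum(-1).clamp(min=1)
    return candidates[int(mean_logprob.argmax())]
```

A real deployment would swap the scorer for something less degenerate (self-scored likelihood favours repetitive text); the structure of the loop is the point here.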
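
The channel-muxing bullet added in the hunk above proposes running on every channel simultaneously, with messages sorted by time and tool calls inline. Below is a minimal sketch of that interleaving step only; the Event record and the "[channel] author: content" rendering are invented for illustration and are not Autogollark's actual serialisation format.

```python
# Minimal sketch of channel muxing: merge already-time-sorted per-channel
# message streams (Discord channels, tool-call results, ...) into one global
# timeline and flatten it into a single model context.
from dataclasses import dataclass
from heapq import merge
from typing import Iterable, List

@dataclass(frozen=True)
class Event:
    timestamp: float  # Unix time
    channel: str      # e.g. a Discord channel name, or "tool" for tool-call results
    author: str
    content: str

def mux(streams: Iterable[Iterable[Event]]) -> List[Event]:
    """Merge per-channel streams (each already sorted by time) into one timeline."""
    return list(merge(*streams, key=lambda e: e.timestamp))

def render(events: List[Event]) -> str:
    """Flatten the merged timeline into one context for a single forward pass."""
    return "\n".join(f"[{e.channel}] {e.author}: {e.content}" for e in events)

if __name__ == "__main__":
    general = [Event(1.0, "general", "alice", "hi"), Event(3.0, "general", "gollark", "Greetings.")]
    tools = [Event(2.0, "tool", "search", "result: ...")]
    print(render(mux([general, tools])))
    # [general] alice: hi
    # [tool] search: result: ...
    # [general] gollark: Greetings.
```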