From 482fd81078baa17394651ec34d6332d7dcd3e60c Mon Sep 17 00:00:00 2001
From: osmarks
Date: Wed, 24 Dec 2025 18:30:01 +0000
Subject: [PATCH] =?UTF-8?q?Edit=20=E2=80=98autogollark=E2=80=99?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 autogollark.myco | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/autogollark.myco b/autogollark.myco
index df71be8..b5c6912 100644
--- a/autogollark.myco
+++ b/autogollark.myco
@@ -13,7 +13,7 @@ Autogollark currently comprises the dataset, the search API server and the [[htt
 * Writeable memory?
 * {Fix lowercasing issue.
 * Due to general personality stability. Need finetune or similar.
-* One proposal: use internal finetune to steer big model somehow. Possibly: use its likelihood (prefill-only) to evaluate goodness of big model output wrt. gollark personality, and if it is too bad then use finetune directly.
+* One proposal: use internal finetune to steer big model somehow. Possibly: use its likelihood (prefill-only) to evaluate goodness of big model output wrt. gollark personality, and if it is too bad then use finetune directly. But issues if we go for a custom tokenizer.
 * Is GCG code salvageable? NanoGCG, maybe.
 }
 * {Increased autonomy (wrt. responses).
@@ -34,9 +34,10 @@ Autogollark currently comprises the dataset, the search API server and the [[htt
 * ~~Pending:~~ Resource now available: [[XEROGRAPHIC BIFROST]] phase 3.
 * https://arxiv.org/abs/2507.07101
 * https://arxiv.org/abs/2507.01335
-* https://github.com/d0rc/egg.c and https://eshyperscale.github.io/. Does this actually work? Why?
+* https://github.com/d0rc/egg.c and https://eshyperscale.github.io/. Does this actually work (at scale)? Why? Would be really nice for using AMX units.
+* Maybe compute grants are available for training.
 }
-* MCTS over conversations with non-gollark simulacra? Should find //something// to use spare parallelism on local inference. Best-of-n? https://arxiv.org/abs/2505.10475
+* Search over conversations with non-gollark simulacra? Should find //something// to use spare parallelism on local inference. Best-of-n? https://arxiv.org/abs/2505.10475
 * {Longer context, mux several channels.
 * {No obvious reason Autogollark can't train (and run inference!) on every channel simultaneously, with messages sorted by time and other non-Discord things (tool calls?) inline. Not good use of parallelism but does neatly solve the when-to-respond thing. Maybe we can process channels in parallel and fudge the K/V caches.
 * Context length issues, and subquadratic models are sort of bad, though maybe we can "upcycle" a midsized model to RWKV. This exists somewhere. Not sure of efficiency. Inference code will be awful.
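
Note (not part of the patch): a minimal Python sketch of the likelihood-gating proposal edited in the first hunk, combined with the best-of-n idea from the second hunk. The model name, threshold, prompt handling and function names are placeholder assumptions, not Autogollark's actual code; it assumes big-model candidates arrive as plain strings and the gollark finetune is a local HF transformers checkpoint.

```python
# Hypothetical sketch: score big-model candidates under the small gollark
# finetune with a prefill-only pass; if even the best candidate looks
# insufficiently gollark-like, fall back to generating from the finetune.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

FINETUNE = "example/gollark-finetune"  # placeholder model id
THRESHOLD = -2.5                       # mean log-prob per token; needs tuning

tok = AutoTokenizer.from_pretrained(FINETUNE)
lm = AutoModelForCausalLM.from_pretrained(FINETUNE, torch_dtype=torch.bfloat16)
lm.eval()

@torch.no_grad()
def mean_logprob(context: str, candidate: str) -> float:
    """Score `candidate` given `context` with a single prefill pass (no sampling)."""
    ctx_ids = tok(context, return_tensors="pt").input_ids
    cand_ids = tok(candidate, add_special_tokens=False, return_tensors="pt").input_ids
    ids = torch.cat([ctx_ids, cand_ids], dim=1)
    logits = lm(ids).logits
    # Logits at position i predict token i+1; take the slice covering the candidate.
    cand_logits = logits[0, ctx_ids.shape[1] - 1 : -1]
    logprobs = torch.log_softmax(cand_logits.float(), dim=-1)
    token_lp = logprobs.gather(1, cand_ids[0].unsqueeze(1)).squeeze(1)
    return token_lp.mean().item()

@torch.no_grad()
def finetune_reply(context: str, max_new_tokens: int = 200) -> str:
    """Fallback path: sample directly from the finetune."""
    ids = tok(context, return_tensors="pt").input_ids
    out = lm.generate(ids, max_new_tokens=max_new_tokens, do_sample=True, temperature=0.9)
    return tok.decode(out[0, ids.shape[1]:], skip_special_tokens=True)

def gated_reply(context: str, big_model_candidates: list[str]) -> str:
    """Best-of-n by finetune likelihood, with a direct-finetune fallback."""
    scored = sorted(((mean_logprob(context, c), c) for c in big_model_candidates),
                    reverse=True)
    best_score, best = scored[0]
    return best if best_score >= THRESHOLD else finetune_reply(context)
```

Using the mean log-prob per token (rather than the sum) keeps the score roughly length-independent; this says nothing about the custom-tokenizer issue flagged in the patch, which would require rescoring across mismatched vocabularies.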