From bf8062da357878d45e3c1cfe4141f57ed74a5a04 Mon Sep 17 00:00:00 2001
From: osmarks
Date: Wed, 24 Dec 2025 18:22:49 +0000
Subject: [PATCH] =?UTF-8?q?Edit=20=E2=80=98autogollark=E2=80=99?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 autogollark.myco | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/autogollark.myco b/autogollark.myco
index 22a460d..dff3dd9 100644
--- a/autogollark.myco
+++ b/autogollark.myco
@@ -38,7 +38,7 @@ Autogollark currently comprises the dataset, the search API server and the [[htt
 * MCTS over conversations with non-gollark simulacra? Should find //something// to use spare parallelism on local inference. Best-of-n? https://arxiv.org/abs/2505.10475
 * {Longer context, mux several channels.
 * {No obvious reason Autogollark can't train (and run inference!) on every channel simultaneously, with messages sorted by time and other non-Discord things (tool calls?) inline. Not good use of parallelism but does neatly solve the when-to-respond thing.
-* Context length issues, and subquadratic models are sort of bad, though maybe we can "upcycle" a midsized model to RWKV. This exists somewhere. Not sure of efficiency.
+* Context length issues, and subquadratic models are sort of bad, though maybe we can "upcycle" a midsized model to RWKV. This exists somewhere. Not sure of efficiency. Inference code will be awful.
 }
 }
 * Train on e.g. Discord Unveiled (local copy available).