Edit ‘autogollark’

This commit is contained in:
osmarks
2025-12-25 23:10:52 +00:00
committed by wikimind
parent f3ae12e66a
commit 14b3727eb3


@@ -41,7 +41,9 @@ Autogollark currently comprises the dataset, the search API server and the [[htt
 }
 * Search over conversations with non-gollark simulacra? Should find //something// to use spare parallelism on local inference. Best-of-n? https://arxiv.org/abs/2505.10475
 * {Longer context, mux several channels.
-* {No obvious reason Autogollark can't train (and run inference!) on every channel simultaneously, with messages sorted by time and other non-Discord things (tool calls?) inline. Not good use of parallelism but does neatly solve the when-to-respond thing. Maybe we can process channels in parallel and fudge the K/V caches.
+* {{No obvious reason Autogollark can't train (and run inference!) on every channel simultaneously, with messages sorted by time and other non-Discord things (tool calls?) inline. Not good use of parallelism but does neatly solve the when-to-respond thing. Maybe we can process channels in parallel and fudge the K/V caches.
+* Maybe this is //not// a good way to decide when to respond: significant power draw implications unless I do something clever (polling, batching, deprioritize when recently idle).
+}
 * Context length issues, and subquadratic models are sort of bad, though maybe we can "upcycle" a midsized model to RWKV. This exists somewhere. Not sure of efficiency. Inference code will be awful.
 }
 }