Edit ‘autogollark’

osmarks authored on 2025-05-23 21:39:06 +00:00, committed by wikimind
parent b1205a8cfe
commit 3bf4b6ef3b


@@ -34,7 +34,9 @@ Autogollark currently comprises the dataset, the search API server and the [[htt
 }
 * MCTS over conversations with non-gollark simulacra? Should find //something// to use spare parallelism on local inference. Best-of-n?
 * {Longer context, mux several channels.
-* No obvious reason Autogollark can't train (and run inference!) on every channel simultaneously, with messages sorted by time and other non-Discord things (tool calls?) inline. Not good use of parallelism but does neatly solve the when-to-respond thing.
+* {No obvious reason Autogollark can't train (and run inference!) on every channel simultaneously, with messages sorted by time and other non-Discord things (tool calls?) inline. Not good use of parallelism but does neatly solve the when-to-respond thing.
+* Context length issues, and subquadratic models are sort of bad, though maybe we can "upcycle" a midsized model to RWKV. This exists somewhere. Not sure of efficiency.
+}
 }
 == Versions
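
The best-of-n item in the hunk above amounts to something like the following; a minimal sketch, assuming a local inference backend exposed as a `generate(prompt, n)` callable and some scoring function `score(prompt, text)` (reward model, judge, or heuristic) — both names are placeholders, not anything Autogollark actually ships:

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Candidate:
    text: str
    score: float

def best_of_n(
    prompt: str,
    generate: Callable[[str, int], Sequence[str]],  # hypothetical: returns n sampled completions
    score: Callable[[str, str], float],             # hypothetical: reward model / judge / heuristic
    n: int = 8,
) -> Candidate:
    # Sample n candidate replies (cheap if the local server batches them),
    # score each, keep the best: spare parallelism spent on quality, not latency.
    candidates = [Candidate(text=t, score=score(prompt, t)) for t in generate(prompt, n)]
    return max(candidates, key=lambda c: c.score)
```

MCTS over conversations would extend this by scoring multi-turn rollouts against non-gollark simulacra rather than single completions, but the selection step is the same shape.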
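The channel-muxing bullet is similarly easy to prototype: merge per-channel event streams (each already sorted by time) into one global time-ordered stream, with non-Discord events such as tool calls inline, then flatten to text. A sketch only — the `Event` fields, `kind` names, and rendering template are invented for illustration:

```python
import heapq
from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass
class Event:
    timestamp: float       # unix time; the merge key
    channel: str           # Discord channel, or a synthetic "channel" for tool output
    author: str
    content: str
    kind: str = "message"  # "message", "tool_call", ... (illustrative)

def mux(streams: Iterable[Iterable[Event]]) -> Iterator[Event]:
    # Each per-channel stream is already time-sorted, so heapq.merge yields a
    # single globally time-ordered sequence without loading everything at once.
    return heapq.merge(*streams, key=lambda e: e.timestamp)

def render(events: Iterable[Event]) -> str:
    # Flatten to the plain-text form a model would train on / generate against.
    # The template is made up for the sketch, not Autogollark's actual format.
    lines = []
    for e in events:
        tag = e.channel if e.kind == "message" else f"{e.channel}/{e.kind}"
        lines.append(f"[{tag}] {e.author}: {e.content}")
    return "\n".join(lines)
```

With everything in one context like this, when-to-respond stops being a separate classifier problem: the model either emits a message event for some channel or emits nothing, as the bullet notes.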