Edit ‘autogollark’
@@ -34,7 +34,9 @@ Autogollark currently comprises the dataset, the search API server and the [[htt
 }
 * MCTS over conversations with non-gollark simulacra? Should find //something// to use spare parallelism on local inference. Best-of-n?
-* No obvious reason Autogollark can't train (and run inference!) on every channel simultaneously, with messages sorted by time and other non-Discord things (tool calls?) inline. Not good use of parallelism but does neatly solve the when-to-respond thing.
+* {Longer context, mux several channels.
+* {No obvious reason Autogollark can't train (and run inference!) on every channel simultaneously, with messages sorted by time and other non-Discord things (tool calls?) inline. Not good use of parallelism but does neatly solve the when-to-respond thing.
+* Context length issues, and subquadratic models are sort of bad, though maybe we can "upcycle" a midsized model to RWKV. This exists somewhere. Not sure of efficiency.
+}
 }

 == Versions

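The "best-of-n" idea mentioned above can be sketched roughly as follows: sample n candidate replies, score each, keep the best. This is a minimal illustration only — `generate` and `score` are hypothetical placeholders, not Autogollark's actual APIs; a real scorer might be a reward model or a judge model's log-likelihood.

```python
# Hedged sketch of best-of-n sampling over candidate replies.
# `generate` and `score` are hypothetical stand-ins for illustration.

def generate(context: str) -> str:
    # Placeholder sampler; a real version would call the local LLM.
    return f"reply to: {context[-20:]}"

def score(context: str, reply: str) -> float:
    # Placeholder scorer; here, trivially prefer longer replies.
    return float(len(reply))

def best_of_n(context: str, n: int = 8) -> str:
    # Spare parallelism on local inference could sample these n
    # candidates in one batch; sequential here for clarity.
    candidates = [generate(context) for _ in range(n)]
    return max(candidates, key=lambda r: score(context, r))
```
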
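The "mux several channels" bullet — one time-sorted stream with every channel inline — can be sketched as a k-way merge over per-channel logs. The message schema and channel tags below are illustrative assumptions, not Autogollark's actual data format.

```python
import heapq
from dataclasses import dataclass

# Hedged sketch of muxing several channels into one time-sorted stream,
# tagging each message with its channel so the model sees its origin.

@dataclass(order=True)
class Message:
    timestamp: float  # compared first, so ordering is by time
    channel: str
    author: str
    text: str

def mux_channels(channels: dict[str, list[Message]]) -> list[str]:
    # Each channel's log is assumed already sorted by time, so a k-way
    # heap merge yields one globally time-ordered stream.
    merged = heapq.merge(*channels.values())
    return [f"[{m.channel}] {m.author}: {m.text}" for m in merged]
```

Non-Discord events (e.g. tool calls) could be interleaved the same way by giving them a timestamp and a reserved channel tag.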