Edit ‘autogollark’
@@ -13,7 +13,7 @@ Autogollark currently comprises the dataset, the search API server and the [[htt
* Writeable memory?
* {Fix lowercasing issue.
* Due to general personality stability. Need finetune or similar.
-* One proposal: use internal finetune to steer big model somehow. Possibly: use its likelihood (prefill-only) to evaluate goodness of big model output wrt. gollark personality, and if it is too bad then use finetune directly.
+* One proposal: use internal finetune to steer big model somehow. Possibly: use its likelihood (prefill-only) to evaluate goodness of big model output wrt. gollark personality, and if it is too bad then use finetune directly. But issues if we go for a custom tokenizer. (See the sketch after this block.)
* Is GCG code salvageable? NanoGCG, maybe.
}
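A minimal sketch of the likelihood-gating proposal above, assuming HF-transformers-style models that share a tokenizer; the model names, the threshold and the 256-token cap are placeholders, not anything Autogollark currently has:

```python
# Hedged sketch: score the big model's draft under the gollark finetune
# (prefill only) and fall back to the finetune if it is too off-personality.
# Model names and threshold are placeholders; assumes a shared tokenizer
# (see the custom-tokenizer caveat above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("big-model")                    # hypothetical
big = AutoModelForCausalLM.from_pretrained("big-model")             # hypothetical
gollark = AutoModelForCausalLM.from_pretrained("gollark-finetune")  # hypothetical

def generate(model, prompt: str) -> str:
    inputs = tok(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=256)
    return tok.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)

def mean_logprob(model, prompt: str, completion: str) -> float:
    """Average per-token log-likelihood of `completion` given `prompt`, prefill only."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    comp_ids = tok(completion, add_special_tokens=False, return_tensors="pt").input_ids
    ids = torch.cat([prompt_ids, comp_ids], dim=1)
    with torch.no_grad():
        logits = model(ids).logits
    start = prompt_ids.shape[1]
    # logits at position i predict token i+1, so slice [start-1:-1] scores the completion
    logprobs = torch.log_softmax(logits[:, start - 1:-1], dim=-1)
    return logprobs.gather(-1, comp_ids.unsqueeze(-1)).mean().item()

def reply(prompt: str, threshold: float = -2.5) -> str:
    draft = generate(big, prompt)
    if mean_logprob(gollark, prompt, draft) >= threshold:
        return draft                      # in-character enough per the finetune
    return generate(gollark, prompt)      # otherwise use the finetune directly
```

Best-of-n over several big-model drafts, scored the same way, would be a natural extension (cf. the search/best-of-n item below).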
* {Increased autonomy (wrt. responses).
@@ -34,9 +34,10 @@ Autogollark currently comprises the dataset, the search API server and the [[htt
* ~~Pending:~~ Resource now available: [[XEROGRAPHIC BIFROST]] phase 3.
* https://arxiv.org/abs/2507.07101
* https://arxiv.org/abs/2507.01335
-* https://github.com/d0rc/egg.c and https://eshyperscale.github.io/. Does this actually work? Why?
+* https://github.com/d0rc/egg.c and https://eshyperscale.github.io/. Does this actually work (at scale)? Why? Would be really nice for using AMX units. (See the toy sketch after this block.)
* Maybe compute grants are available for training.
}
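If egg.c / eshyperscale are doing roughly OpenAI-style evolution strategies (an assumption, not verified), each step is just a batch of forward evaluations plus a weighted sum of the noise, which is why matmul-only hardware like AMX units is plausibly a fit. Toy sketch of the core update:

```python
# Toy vanilla evolution strategies step, on the unverified assumption that this
# is roughly what egg.c / eshyperscale implement. Forward passes only: perturb,
# evaluate, and take a noise-weighted step; no backprop anywhere.
import numpy as np

rng = np.random.default_rng(0)

def es_step(theta, loss_fn, sigma=0.02, lr=0.01, pop=64):
    eps = rng.standard_normal((pop, theta.size))
    losses = np.array([loss_fn(theta + sigma * e) for e in eps])
    advantages = (losses - losses.mean()) / (losses.std() + 1e-8)
    grad_est = eps.T @ advantages / (pop * sigma)   # estimate of dE[loss]/dtheta
    return theta - lr * grad_est

# Toy objective: a quadratic bowl; theta should converge towards zero.
theta = np.ones(10)
for _ in range(300):
    theta = es_step(theta, lambda t: float(np.sum(t ** 2)))
print(np.abs(theta).max())
```

Whether the variance of this estimator stays manageable at LLM parameter counts is exactly the "does it work at scale" question.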
-* MCTS over conversations with non-gollark simulacra? Should find //something// to use spare parallelism on local inference. Best-of-n? https://arxiv.org/abs/2505.10475
+* Search over conversations with non-gollark simulacra? Should find //something// to use spare parallelism on local inference. Best-of-n? https://arxiv.org/abs/2505.10475
* {Longer context, mux several channels.
* {No obvious reason Autogollark can't train (and run inference!) on every channel simultaneously, with messages sorted by time and other non-Discord things (tool calls?) inline. Not a good use of parallelism, but it does neatly solve the when-to-respond thing. Maybe we can process channels in parallel and fudge the K/V caches. (See the muxing sketch below.)
* Context length issues, and subquadratic models are sort of bad, though maybe we can "upcycle" a midsized model to RWKV. This exists somewhere. Not sure of efficiency. Inference code will be awful.
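A minimal sketch of the channel-muxing idea: merge per-channel message streams into one time-ordered context with the channel tagged inline. The message schema and tag format are invented for illustration, not whatever Autogollark actually stores.

```python
# Merge several already-time-sorted channel histories into one stream.
# Tool calls or other non-Discord events could be injected as ordinary
# messages in a reserved pseudo-channel.
import heapq
from dataclasses import dataclass

@dataclass(order=True)
class Msg:
    ts: float        # unix timestamp; primary ordering key
    channel: str
    author: str
    text: str

def mux(channels: dict[str, list[Msg]]) -> str:
    """K-way merge of per-channel message lists (each assumed time-sorted)."""
    merged = heapq.merge(*channels.values())
    return "\n".join(f"[{m.channel}] {m.author}: {m.text}" for m in merged)
```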