Edit ‘autogollark’
@@ -22,7 +22,7 @@ Autogollark currently comprises the dataset, the search API server and the [[htt
 }
 * {Tool capabilities (how to get the data? Examples in context only?!).
 * Synthetic via instruct model.
-* {RL (also include reasoning, of course). Probably hard though (sparse rewards). https://arxiv.org/abs/2403.09629. [[https://arxiv.org/abs/2503.22828]] would probably work.
+* {RL (also include reasoning, of course). Probably hard though (sparse rewards). https://arxiv.org/abs/2403.09629. [[https://arxiv.org/abs/2503.22828]] would probably work. [[https://arxiv.org/abs/2505.15778]]
 * Unclear whether model could feasibly learn tool use "from scratch", so still need SFT pipeline.
 }
 * https://arxiv.org/abs/2310.04363 can improve sampling (roughly) //and// train for tool use.
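The "synthetic via instruct model" SFT idea above could produce training examples shaped roughly like the following sketch. The message/tool-call schema here is an assumption for illustration, not Autogollark's actual format; a real pipeline would have an instruct model fill in the fields.

```python
import json

def make_tool_use_example(question, tool_name, tool_args, tool_result, answer):
    """Build one synthetic SFT example with an inline tool call.

    Schema is hypothetical: user turn, assistant tool call, tool result,
    then the assistant's final answer conditioned on that result.
    """
    return [
        {"role": "user", "content": question},
        {"role": "assistant", "content": None,
         "tool_call": {"name": tool_name, "arguments": tool_args}},
        {"role": "tool", "content": json.dumps(tool_result)},
        {"role": "assistant", "content": answer},
    ]
```

The point of the four-turn shape is that the model sees tool output in context before answering, so SFT teaches both when to call and how to use the result.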
@@ -33,7 +33,7 @@ Autogollark currently comprises the dataset, the search API server and the [[htt
 * Decision theory training data (synthetic, probably) (ref https://arxiv.org/abs/2411.10588).
 * Pending: XEROGRAPHIC BIFROST 3.
 }
-* MCTS over conversations with non-gollark simulacra? Should find //something// to use spare parallelism on local inference. Best-of-n?
+* MCTS over conversations with non-gollark simulacra? Should find //something// to use spare parallelism on local inference. Best-of-n? https://arxiv.org/abs/2505.10475
 * {Longer context, mux several channels.
 * {No obvious reason Autogollark can't train (and run inference!) on every channel simultaneously, with messages sorted by time and other non-Discord things (tool calls?) inline. Not good use of parallelism but does neatly solve the when-to-respond thing.
 * Context length issues, and subquadratic models are sort of bad, though maybe we can "upcycle" a midsized model to RWKV. This exists somewhere. Not sure of efficiency.
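The best-of-n option above is the simplest way to spend spare parallelism: sample several candidate replies and keep the best under some scorer. A minimal sketch, where `generate` and `score` are placeholder interfaces (e.g. a sampler over the local model and a reward heuristic), not Autogollark's actual API:

```python
def best_of_n(generate, score, prompt, n=8):
    """Sample n candidate replies and keep the highest-scoring one.

    generate(prompt) -> str, score(prompt, reply) -> float are assumed
    interfaces. The n generations are independent, so they parallelize
    trivially across spare local-inference capacity.
    """
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda reply: score(prompt, reply))
```

Unlike MCTS it needs no tree or rollout machinery, which is why it is usually the first thing to try before full search over conversations.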