Edit ‘autogollark’

osmarks
2025-12-25 23:14:22 +00:00
committed by wikimind
parent 83e24b9c9a
commit d7869b09b9

@@ -38,7 +38,7 @@ Autogollark currently comprises the dataset, the search API server and the [[htt
* https://arxiv.org/abs/2507.01335
* https://github.com/d0rc/egg.c and https://eshyperscale.github.io/. Does this actually work (at scale)? Why? Would be really nice for using AMX units.
* Maybe compute grants are available for training.
- * Substantial bandwidth bottleneck on CPU. Specdec/MTP would be useful.
+ * Substantial bandwidth bottleneck on CPU (230GB/s nominal; 200GB/s benchmarked; 100GB/s per NUMA node, which llama.cpp handles awfully). Specdec/MTP would be useful.
}
* Search over conversations with non-gollark simulacra? Should find //something// to use spare parallelism on local inference. Best-of-n? https://arxiv.org/abs/2505.10475
* {Longer context, mux several channels.