diff --git a/autogollark.myco b/autogollark.myco
index 70ba389..808127b 100644
--- a/autogollark.myco
+++ b/autogollark.myco
@@ -39,7 +39,7 @@ Autogollark currently comprises the dataset, the search API server and the [[htt
 * https://arxiv.org/abs/2510.14901
 * https://github.com/d0rc/egg.c and https://eshyperscale.github.io/. Does this actually work (at scale)? Why? Would be really nice for using AMX units.
 * Maybe compute grants are available for training.
-* Substantial bandwidth bottleneck on CPU (230GB/s nominal; 200GB/s benchmarked; 100GB/s per NUMA node, which llama.cpp handles awfully). Specdec/MTP would be useful.
+* Substantial bandwidth bottleneck on CPU (230GB/s nominal; 200GB/s benchmarked; 100GB/s per NUMA node, which llama.cpp handles awfully). Specdec/MTP would be useful. Can anything use AMX well though?
 }
 * Search over conversations with non-gollark simulacra? Should find //something// to use spare parallelism on local inference. Best-of-n? https://arxiv.org/abs/2505.10475
 * {Longer context, mux several channels.
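
For context on the bandwidth bullet: dense-model decode on CPU is memory-bandwidth-bound, since every generated token streams the full weight set once, which is why speculative decoding/MTP (verifying several drafted tokens per pass over the weights) helps. A back-of-envelope sketch using the 200GB/s and 100GB/s-per-NUMA-node figures from the diff; the model size, draft length, and acceptance rate are illustrative assumptions, and the expected-acceptance formula is a rough linear approximation:

```
# Back-of-envelope: memory-bandwidth-bound decode throughput on CPU.
# Bandwidth figures come from the note above; model size, quantisation,
# and speculative-decoding acceptance rate are illustrative assumptions.

def decode_tokens_per_s(bandwidth_gb_s: float, weight_gb: float) -> float:
    """Each decoded token streams every weight once, so throughput is
    bounded by bandwidth / model size."""
    return bandwidth_gb_s / weight_gb

def specdec_tokens_per_s(bandwidth_gb_s: float, weight_gb: float,
                         draft_len: int, accept_rate: float) -> float:
    """Speculative decoding verifies draft_len drafted tokens in one pass
    over the weights; expected accepted tokens per pass is approximated
    here (crudely) as 1 + draft_len * accept_rate."""
    passes_per_s = bandwidth_gb_s / weight_gb
    return passes_per_s * (1 + draft_len * accept_rate)

if __name__ == "__main__":
    bw = 200.0      # GB/s, the benchmarked figure from the note
    model_gb = 8.0  # e.g. an 8B-parameter model at 8-bit quantisation (assumption)
    print(f"plain decode:             {decode_tokens_per_s(bw, model_gb):.0f} tok/s")
    print(f"specdec (k=4, 70% accept): "
          f"{specdec_tokens_per_s(bw, model_gb, 4, 0.7):.0f} tok/s")
    # A single NUMA node (100 GB/s) halves the bound, which is why
    # llama.cpp's poor NUMA handling hurts.
    print(f"one NUMA node:            {decode_tokens_per_s(100.0, model_gb):.0f} tok/s")
```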