From d7869b09b9db15bc609e8806ee23099a59d13131 Mon Sep 17 00:00:00 2001
From: osmarks
Date: Thu, 25 Dec 2025 23:14:22 +0000
Subject: [PATCH] =?UTF-8?q?Edit=20=E2=80=98autogollark=E2=80=99?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 autogollark.myco | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/autogollark.myco b/autogollark.myco
index 18308e8..5a1a921 100644
--- a/autogollark.myco
+++ b/autogollark.myco
@@ -38,7 +38,7 @@ Autogollark currently comprises the dataset, the search API server and the [[htt
 * https://arxiv.org/abs/2507.01335
 * https://github.com/d0rc/egg.c and https://eshyperscale.github.io/. Does this actually work (at scale)? Why? Would be really nice for using AMX units.
 * Maybe compute grants are available for training.
-* Substantial bandwidth bottleneck on CPU. Specdec/MTP would be useful.
+* Substantial bandwidth bottleneck on CPU (230GB/s nominal; 200GB/s benchmarked; 100GB/s per NUMA node, which llama.cpp handles awfully). Specdec/MTP would be useful.
 }
 * Search over conversations with non-gollark simulacra? Should find //something// to use spare parallelism on local inference. Best-of-n? https://arxiv.org/abs/2505.10475
 * {Longer context, mux several channels.
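
For context on the bandwidth figures in the changed line, here is a minimal back-of-the-envelope sketch of why CPU decoding is bandwidth-bound and what specdec buys: plain autoregressive decode reads roughly the full weight set per token, so tokens/s ≈ bandwidth / model size, while speculative decoding amortises each weight pass over several tokens. This is illustrative only; the model size, draft length, and acceptance probability below are assumed values, and the helper names are hypothetical rather than anything from llama.cpp or the autogollark setup.

```python
# Back-of-the-envelope model of bandwidth-bound CPU decoding and the gain
# from speculative decoding. All concrete numbers (model size, draft length,
# acceptance probability) are illustrative assumptions, not measurements of
# the actual autogollark hardware or models.

def dense_tokens_per_second(bandwidth_gb_s: float, model_gb: float) -> float:
    """Plain autoregressive decode reads ~all weights once per token, so
    throughput is roughly bandwidth / model size when bandwidth-bound."""
    return bandwidth_gb_s / model_gb

def expected_accepted_per_pass(draft_len: int, accept_prob: float) -> float:
    """Expected tokens emitted per target-model verification pass, assuming
    i.i.d. per-token acceptance with probability p and draft length k:
    1 + p + p^2 + ... + p^k."""
    return sum(accept_prob ** i for i in range(draft_len + 1))

def speculative_tokens_per_second(bandwidth_gb_s: float, model_gb: float,
                                  draft_len: int, accept_prob: float) -> float:
    """Each verification pass still reads the target weights once, but now
    yields several tokens on average; draft-model cost is ignored here."""
    passes_per_s = bandwidth_gb_s / model_gb
    return passes_per_s * expected_accepted_per_pass(draft_len, accept_prob)

if __name__ == "__main__":
    model_gb = 20.0  # hypothetical quantized model size in GB
    for bw in (200.0, 100.0):  # benchmarked whole-machine vs. one NUMA node
        dense = dense_tokens_per_second(bw, model_gb)
        spec = speculative_tokens_per_second(bw, model_gb, draft_len=4, accept_prob=0.7)
        print(f"{bw:3.0f} GB/s: ~{dense:.1f} tok/s dense, ~{spec:.1f} tok/s with specdec")
```

Under these assumed numbers, a 20GB model tops out around 10 tok/s dense at the benchmarked 200GB/s (half that when pinned to one NUMA node), and roughly 2.8x more with speculative decoding, which is the gap specdec/MTP would partially close.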