Edit ‘autogollark’
@@ -36,6 +36,7 @@ Autogollark currently comprises the dataset, the search API server and the [[htt
* ~~Pending:~~ Resource now available: [[XEROGRAPHIC BIFROST]] phase 3.
* https://arxiv.org/abs/2507.07101
* https://arxiv.org/abs/2507.01335
* https://arxiv.org/abs/2510.14901
* https://github.com/d0rc/egg.c and https://eshyperscale.github.io/. Does this actually work (at scale)? Why? Would be really nice for using the AMX units; see the AMX sketch at the end of this list.
* Maybe compute grants are available for training.
* Substantial bandwidth bottleneck on CPU (230 GB/s nominal; 200 GB/s benchmarked; 100 GB/s per NUMA node, which llama.cpp handles awfully). Speculative decoding/multi-token prediction (specdec/MTP) would be useful; see the sketch below.
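
A minimal sketch of the standard draft-and-verify speculative decoding loop, with toy stand-in distributions in place of real draft and target models; the vocabulary size, `K`, and both model functions are placeholder assumptions, not anything from autogollark. The relevance to the bandwidth bottleneck: verification scores up to `K` drafted tokens against the big model per pass, so its weights are streamed from memory roughly once per pass instead of once per emitted token.

```c
#include <stdio.h>
#include <stdlib.h>

#define VOCAB 8
#define K 4 /* tokens drafted per verification pass */

/* Toy stand-ins: fill probs[VOCAB] with a next-token distribution.
 * Real code would run a small draft model and a large target model. */
static void draft_model(const int *ctx, int len, double *probs) {
    int last = len ? ctx[len - 1] : 0;
    double z = 0.0;
    for (int t = 0; t < VOCAB; t++)
        z += probs[t] = 1.0 + (t == (last + 1) % VOCAB ? 2.0 : 0.0);
    for (int t = 0; t < VOCAB; t++) probs[t] /= z;
}

static void target_model(const int *ctx, int len, double *probs) {
    int last = len ? ctx[len - 1] : 0;
    double z = 0.0;
    for (int t = 0; t < VOCAB; t++)
        z += probs[t] = 1.0 + (t == (last + 2) % VOCAB ? 2.0 : 0.0);
    for (int t = 0; t < VOCAB; t++) probs[t] /= z;
}

static int sample(const double *probs) {
    double r = (double)rand() / RAND_MAX, acc = 0.0;
    for (int t = 0; t < VOCAB; t++) { acc += probs[t]; if (r <= acc) return t; }
    return VOCAB - 1;
}

/* One speculative step: draft K tokens cheaply, then verify against the
 * target. A real implementation scores all K positions in one batched
 * forward pass, which is where the memory-bandwidth saving comes from. */
static int speculative_step(int *ctx, int len) {
    int drafted[K];
    double q[K][VOCAB], p[VOCAB];
    int n = len;
    for (int i = 0; i < K; i++) {          /* 1. draft */
        draft_model(ctx, n, q[i]);
        drafted[i] = sample(q[i]);
        ctx[n++] = drafted[i];
    }
    n = len;
    for (int i = 0; i < K; i++) {          /* 2. verify */
        target_model(ctx, n, p);
        int t = drafted[i];
        double u = (double)rand() / RAND_MAX;
        if (u < (p[t] < q[i][t] ? p[t] / q[i][t] : 1.0)) {
            ctx[n++] = t;                   /* accept: keep the draft token */
        } else {
            /* reject: resample from the residual max(p - q, 0), then stop */
            double res[VOCAB], z = 0.0;
            for (int v = 0; v < VOCAB; v++)
                z += res[v] = p[v] > q[i][v] ? p[v] - q[i][v] : 0.0;
            for (int v = 0; v < VOCAB; v++)
                res[v] = z > 0 ? res[v] / z : 1.0 / VOCAB;
            ctx[n++] = sample(res);
            break;
        }
    }
    /* (A full implementation also samples one bonus token from the target
     * when all K drafts are accepted; omitted here for brevity.) */
    return n; /* new length; n - len tokens were emitted this pass */
}

int main(void) {
    int ctx[256] = {0}, len = 1;
    while (len < 32) len = speculative_step(ctx, len);
    for (int i = 0; i < len; i++) printf("%d ", ctx[i]);
    printf("\n");
    return 0;
}
```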
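On the AMX question above: a minimal sketch of a single int8 tile multiply via the AMX intrinsics in immintrin.h, assuming a Sapphire Rapids or newer CPU, Linux (which requires the arch_prctl opt-in shown), and gcc/clang with -mamx-tile -mamx-int8. The shapes and the constant test matrices are illustrative only; whether the linked projects can actually be expressed as tile multiplies like this is exactly the open question.

```c
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

#define ARCH_REQ_XCOMP_PERM 0x1023
#define XFEATURE_XTILEDATA  18

/* 64-byte tile configuration block, per the Intel SDM layout. */
struct tilecfg {
    uint8_t  palette_id;
    uint8_t  start_row;
    uint8_t  reserved[14];
    uint16_t colsb[16]; /* bytes per row of each tile */
    uint8_t  rows[16];  /* rows of each tile */
};

int main(void) {
    /* Linux requires opting in to the tile-data XSAVE feature. */
    if (syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_PERM, XFEATURE_XTILEDATA)) {
        perror("arch_prctl");
        return 1;
    }

    /* Tile 0: 16x16 int32 accumulator; tiles 1, 2: 16x64 int8 operands. */
    struct tilecfg cfg = {0};
    cfg.palette_id = 1;
    for (int t = 0; t < 3; t++) { cfg.rows[t] = 16; cfg.colsb[t] = 64; }
    _tile_loadconfig(&cfg);

    int8_t  a[16][64], b[16][64];
    int32_t c[16][16];
    memset(a, 1, sizeof a); /* A = all ones */
    memset(b, 2, sizeof b); /* B = all twos (VNNI-packed layout assumed) */
    memset(c, 0, sizeof c);

    _tile_loadd(1, a, 64);  /* load A, 64-byte row stride */
    _tile_loadd(2, b, 64);  /* load B */
    _tile_loadd(0, c, 64);  /* load accumulator */
    _tile_dpbssd(0, 1, 2);  /* C += A . B over signed int8 quadruples */
    _tile_stored(0, c, 64);
    _tile_release();

    /* Each output accumulates K=64 products of 1*2, so expect 128. */
    printf("c[0][0] = %d\n", c[0][0]);
    return c[0][0] != 128;
}
```

Note that the throughput only materializes if the B operand is pre-packed into the VNNI layout the tile instructions expect, so weight layout conversion would have to happen at load time.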