From fb4c6847723871aa01535a7b9f54374f2042a9e3 Mon Sep 17 00:00:00 2001
From: osmarks
Date: Tue, 9 Sep 2025 22:07:03 +0100
Subject: [PATCH] the word 'comprise' is too powerful for me

---
 blog/ai-accelerator.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/blog/ai-accelerator.md b/blog/ai-accelerator.md
index 78d94ad..f4fad59 100644
--- a/blog/ai-accelerator.md
+++ b/blog/ai-accelerator.md
@@ -26,7 +26,7 @@ The worse utilization on real training runs is partly because of individual weig
 
 ### DLRMs
 
-While "generative AI" now comprises the majority of interest in AI, a large fraction of total compute is still spent on boring but critical work like the [Deep Learning Recommender Models](https://ai.meta.com/blog/dlrm-an-advanced-open-source-deep-learning-recommendation-model/) which now control modern culture by determining what social media users see. These use extremely large lookup tables for sparse features and very little arithmetic, making them much more memory bandwidth- and capacity-bound. I won't talk about them further because there are already solutions for this implemented by Google and Meta in TPUs and [MTIA](https://ai.meta.com/blog/meta-training-inference-accelerator-AI-MTIA/), and no startups seem particularly interested.
+While "generative AI" now constitutes the majority of interest in AI, a large fraction of total compute is still spent on boring but critical work like the [Deep Learning Recommender Models](https://ai.meta.com/blog/dlrm-an-advanced-open-source-deep-learning-recommendation-model/) which now control modern culture by determining what social media users see. These use extremely large lookup tables for sparse features and very little arithmetic, making them much more memory bandwidth- and capacity-bound. I won't talk about them further because there are already solutions for this implemented by Google and Meta in TPUs and [MTIA](https://ai.meta.com/blog/meta-training-inference-accelerator-AI-MTIA/), and no startups seem particularly interested.
 
 ## Hardware design
 
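The memory-bound behavior the edited paragraph describes comes from embedding-table lookups over sparse categorical features. A minimal sketch of that access pattern follows; all sizes, ids, and names are illustrative assumptions, not taken from the post (production DLRM tables are far larger):

```python
import numpy as np

# Illustrative sizes; production DLRM tables can reach billions of rows,
# which is why the workload is capacity-bound as well as bandwidth-bound.
NUM_ROWS = 1_000_000  # one row per categorical value (user id, item id, ...)
EMBED_DIM = 64

# ~256 MB for this one table; a real model holds many such tables.
table = np.random.default_rng(0).standard_normal(
    (NUM_ROWS, EMBED_DIM), dtype=np.float32
)

def pooled_embedding(sparse_ids: np.ndarray) -> np.ndarray:
    # Each id triggers a random-access read of EMBED_DIM floats, followed by
    # a single sum: almost no arithmetic per byte moved, so throughput is
    # limited by memory bandwidth rather than FLOPs.
    return table[sparse_ids].sum(axis=0)

print(pooled_embedding(np.array([3, 141_592, 999_999])).shape)  # (64,)
```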