From e92f3e626e870078c45e747be87b952817251329 Mon Sep 17 00:00:00 2001
From: osmarks
Date: Mon, 23 Feb 2026 17:08:21 +0000
Subject: [PATCH] copyedits

---
 blog/graphcore.md | 2 +-
 links_cache.json  | 1 -
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/blog/graphcore.md b/blog/graphcore.md
index 2b07af2..bb0dea3 100644
--- a/blog/graphcore.md
+++ b/blog/graphcore.md
@@ -77,4 +77,4 @@ Even without this, there are some possible applications which do work quite well
 
 [^19]: Not data latency, which is [250ns](https://www.graphcore.ai/posts/accelerating-resnet50-training-on-the-ipu-behind-our-mlperf-benchmark). There are separate cables for sync.
 
-[^20]: [Tenstorrent](https://tenstorrent.com/) had it earlier, but it has been bugged for generations: the functional model is [slightly defective](https://github.com/tenstorrent/tt-isa-documentation/blob/main/WormholeB0/TensixTile/TensixCoprocessor/SFPSTOCHRND_FloatFloat.md).
+[^20]: [Tenstorrent](https://tenstorrent.com/) had it earlier than Nvidia, but it has been bugged for generations: the functional model is [slightly defective](https://github.com/tenstorrent/tt-isa-documentation/blob/main/WormholeB0/TensixTile/TensixCoprocessor/SFPSTOCHRND_FloatFloat.md).
diff --git a/links_cache.json b/links_cache.json
index a66d241..7f2f590 100644
--- a/links_cache.json
+++ b/links_cache.json
@@ -5244,7 +5244,6 @@
         "auto": true
     },
     "https://github.com/tenstorrent/tt-isa-documentation/blob/main/WormholeB0/TensixTile/TensixCoprocessor/SFPSTOCHRND_FloatFloat.md": {
-        "inline": true,
         "excerpt": "Contribute to tenstorrent/tt-isa-documentation development by creating an account on GitHub.",
         "title": "tt-isa-documentation/WormholeB0/TensixTile/TensixCoprocessor/SFPSTOCHRND_FloatFloat.md at main · tenstorrent/tt-isa-documentation",
        "author": "tenstorrent",