Discussion about this post

Neural Foundry:

Excellent breakdown of the AI infrastructure stack! However, I noticed one critical layer that's conspicuously absent: the Storage/Memory vertical. Between Compute and the application layer, you need massive amounts of high-speed storage and memory to feed those GPUs and to hold the training data, model weights, and inference results. Companies like Micron (DRAM/HBM), Western Digital (NAND flash/enterprise SSDs), and SK Hynix are absolutely crucial to this chain.

The storage demand from AI is staggering - a single large language model training run can require petabytes of high-performance storage, and the inference layer needs ultra-fast SSDs to serve results at scale. Western Digital in particular has been positioning itself heavily in the AI data center market with its high-capacity enterprise SSDs and NVMe solutions.

Without sufficient storage infrastructure, even the best Nvidia GPUs would sit idle. In practice, the storage bottleneck is often what limits AI workload performance, making this vertical just as strategic as networking or compute. Perhaps worth considering WDC or MU as additions to your portfolio framework?
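The "GPUs sitting idle" point can be made concrete with a back-of-envelope calculation. All figures below are illustrative assumptions (a hypothetical 70B-parameter fp16 model, nominal NVMe and HDD bandwidths), not vendor specs:

```python
# Back-of-envelope: how long does it take to checkpoint model weights?
# Every number here is an assumption for illustration only.

params = 70e9           # assumed 70B-parameter model
bytes_per_param = 2     # fp16 weights
ckpt_bytes = params * bytes_per_param  # checkpoint size in bytes

ssd_bw = 7e9            # ~7 GB/s sequential write, a typical PCIe 4.0 NVMe SSD
hdd_bw = 0.25e9         # ~250 MB/s, a typical spinning disk

print(f"checkpoint size: {ckpt_bytes / 1e9:.0f} GB")          # 140 GB
print(f"NVMe write time: {ckpt_bytes / ssd_bw:.0f} s")        # 20 s
print(f"HDD write time:  {ckpt_bytes / hdd_bw / 60:.0f} min")  # ~9 min
```

Under these assumed numbers, the GPUs stall for seconds versus minutes per checkpoint depending on the storage tier - which is the sense in which storage, not compute, can set the pace of a training run.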

Mitul:

Really good write-up on the supply chain of AI infrastructure! It would be really cool to have a picture that includes the top 3 companies in each domain.
