June 8, 2025

HBM Buyers, Google in Second Place


The landscape of high-bandwidth memory (HBM) is evolving at an accelerating pace, fueled by the demands of artificial intelligence (AI), machine learning, and data-centric applications. As key players such as Nvidia, Google, Amazon, and AMD jostle for dominance, HBM has moved from being a niche technology to a cornerstone of next-generation computing. The rising competition in this space reflects a broader shift within the tech industry, one that sees HBM as not just a performance enhancer, but a critical enabler of the AI revolution.

Nvidia has long been recognized as the dominant force in AI processing, and its growing demand for HBM only reinforces its lead. The latest data from Samsung's internal roadmap underscores this trend, showing Nvidia's projected acquisition of nearly 9.18 million HBM chips in 2025, an eye-popping increase from 5.48 million in 2024. This surge reflects the company's expanding role in AI and machine learning, where high-speed memory is increasingly vital for the complex computations involved in training and deploying models. Nvidia's ability to integrate advanced memory technologies like HBM into its systems will be key to maintaining its competitive advantage, and its track record with HBM-equipped accelerators such as the A100 and H100 GPUs reflects an ongoing effort to stay at the cutting edge of AI performance.
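A quick back-of-envelope check on those roadmap figures (the unit counts are as cited above and should be treated as approximate):

```python
# Nvidia's year-over-year growth in HBM unit demand, using the
# Samsung roadmap figures cited above (approximate).
units_2024 = 5.48e6   # reported 2024 HBM units
units_2025 = 9.18e6   # projected 2025 HBM units

growth = (units_2025 - units_2024) / units_2024
print(f"Projected YoY growth: {growth:.1%}")   # ~67.5%
```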

The demand for HBM is not limited to Nvidia, however. The presence of other tech titans like Google, Amazon, and AMD in the market signals that competition for HBM resources is intensifying. Google, which is rapidly scaling up its AI hardware capabilities, has emerged as a significant player in the HBM market. Its Tensor Processing Units (TPUs), which leverage HBM for high-speed data transfer, are central to its cloud-based AI offerings. As the company continues to ramp up its data center operations, its reliance on HBM will likely grow, solidifying its place as a formidable competitor in the AI space.

Amazon, another key player, has also been aggressively expanding its capabilities in AI and cloud computing. The company designs a range of chips, from its Graviton server CPUs to specialized application-specific integrated circuits (ASICs) such as its Trainium and Inferentia AI accelerators, and it is those accelerators in particular that rely heavily on high-bandwidth memory for processing efficiency. Given Amazon's massive scale and ongoing investments in AI and cloud infrastructure, its need for HBM will only increase. The company's recent announcement of a $100 billion investment in cloud infrastructure further highlights the centrality of HBM in its strategic plans, ensuring that it remains competitive in the race for AI dominance.

In contrast, AMD, while making steady progress, holds a more modest share of the AI data center market. Projections suggest the company will utilize around 720,000 HBM chips this year, only about 7% of the industry's total HBM allocation. Despite CEO Lisa Su's optimistic statements about the company's growth trajectory, AMD still has much ground to cover in winning market share from its more established competitors. However, AMD's strategy of enhancing its chip offerings, particularly with its MI300-series AI accelerators, could position the company to capture a larger share of the HBM market in the coming years. As AMD continues to refine its products and push for greater adoption in data centers, its ability to integrate advanced memory technologies like HBM will be crucial to its success.
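Those two AMD figures also imply an industry-wide total. This is a sketch, not a reported number, and it assumes both figures count the same unit:

```python
# If ~720,000 units is ~7% of the total allocation, the implied
# industry-wide total follows directly (assumes both figures use
# the same counting basis, e.g., HBM packages).
amd_units = 720_000
amd_share = 0.07

implied_total = amd_units / amd_share
print(f"Implied total HBM allocation: {implied_total:,.0f} units")  # ~10.3 million
```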

While these companies are all vying for a larger slice of the HBM pie, one notable player, Intel, appears to be facing challenges in this rapidly evolving landscape. According to the Samsung roadmap, Intel’s demand for HBM is expected to decrease this year, primarily due to the phasing out of its Gaudi 3 products and the slow rollout of its Falcon Shores architecture. This shift raises concerns about Intel’s ability to keep pace with its rivals, particularly as Nvidia and AMD continue to build out their AI capabilities with the help of HBM. Intel’s new Jaguar Shores products, which are expected to leverage HBM4+, may offer a path forward, but the company’s struggle to maintain relevance in the AI chip market highlights the pressure it faces to adapt quickly or risk being left behind.

The competition in the HBM space underscores a broader trend in the tech industry. As AI becomes increasingly central to business operations, data processing capabilities are coming under greater scrutiny. HBM's ability to move massive amounts of data at high speed makes it indispensable for AI applications that require vast computational power. Demand for HBM has skyrocketed as companies realize that conventional DRAM interfaces, such as DDR5, cannot supply the memory bandwidth modern AI models need; HBM is itself DRAM, but stacked and connected through a far wider interface. As the race for AI supremacy heats up, the role of memory technology will only grow more critical.
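To make that bandwidth gap concrete, here is a rough, memory-bound estimate of token generation speed for a large model served from conventional DDR5 versus HBM. The model size, byte width, and bandwidth figures are illustrative assumptions, and real systems add batching, caching, and compute effects this ignores:

```python
# Rough roofline-style estimate: at batch size 1, each generated token
# requires streaming all model weights through memory once, so
# tokens/sec ~= memory bandwidth / model size in bytes.
# All numbers below are illustrative assumptions.

model_params = 70e9         # 70B-parameter model (assumed)
bytes_per_param = 2         # FP16/BF16 weights
model_bytes = model_params * bytes_per_param   # 140 GB

bandwidth = {
    "8-channel DDR5-6400 server": 8 * 51.2e9,  # ~410 GB/s aggregate
    "single HBM3E stack":         1.2e12,      # ~1.2 TB/s
    "HBM-based accelerator":      3.35e12,     # H100-class, ~3.35 TB/s
}

for name, bw in bandwidth.items():
    print(f"{name}: ~{bw / model_bytes:.1f} tokens/sec")
```

Even this crude estimate shows roughly an order-of-magnitude gap between a well-provisioned DDR5 server and an HBM-equipped accelerator, which is the gap the article's buyers are paying for.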

The financial stakes are immense. For companies like Google and Amazon, the ability to leverage HBM effectively could make the difference between maintaining leadership in AI and losing ground to competitors. Both companies have signaled their intention to invest heavily in in-house chips and cloud infrastructure, with Amazon projected to spend over $100 billion on cloud initiatives and Google earmarking $75 billion for its infrastructure expansion. These investments reflect a broader strategic shift in the industry, one in which custom-built hardware and specialized memory technologies are seen as key differentiators.

Meanwhile, the role of HBM extends beyond AI, with broader implications for industries such as healthcare, finance, and autonomous vehicles. As AI continues to transform areas like drug discovery, personalized medicine, and predictive analytics, the demand for high-performance computing resources will only increase. In these fields, the ability to process vast amounts of data in real time is paramount, and HBM will be a central enabler of that capability. In healthcare, for example, AI models that predict patient outcomes or assist in diagnostic imaging rely on vast datasets that demand the kind of memory bandwidth HBM provides.

The trajectory of HBM technology is equally fascinating. The next generation of HBM, HBM4+, is expected to offer even greater memory bandwidth and efficiency, in part by doubling the per-stack interface width from 1,024 to 2,048 bits, enabling more powerful AI applications. This technology promises to push the boundaries of what is possible in fields ranging from cloud computing to edge AI. However, the future of HBM is not without challenges. The technology remains expensive, and scaling its production to meet the needs of the growing AI market will require significant investments in manufacturing capacity. Companies like Samsung, SK hynix, and Micron will need to ramp up their HBM production to meet rising demand, while also innovating to lower costs and improve efficiency.
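Per-stack bandwidth across HBM generations follows from interface width times per-pin data rate. A sketch using published figures; the HBM4 numbers reflect the JEDEC baseline, and shipping parts may run faster:

```python
# Per-stack HBM bandwidth = interface width (bits) * per-pin rate (Gb/s) / 8.
# Pin speeds are peak figures from public specs; shipping parts vary.
generations = {
    # name: (interface width in bits, per-pin data rate in Gb/s)
    "HBM3":  (1024, 6.4),
    "HBM3E": (1024, 9.6),
    "HBM4":  (2048, 8.0),   # JEDEC baseline; vendors have announced faster bins
}

for name, (width_bits, gbps) in generations.items():
    gb_per_s = width_bits * gbps / 8
    print(f"{name}: ~{gb_per_s:,.0f} GB/s per stack")  # ~819 / ~1,229 / ~2,048
```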

As the HBM landscape continues to shift, the competition between Nvidia, Google, Amazon, AMD, and Intel will only intensify. Each company’s ability to secure a reliable supply of high-bandwidth memory will play a critical role in determining who emerges as the dominant player in the AI race. For investors, understanding the dynamics of this competition will be essential in navigating the rapidly changing tech landscape. Ultimately, HBM’s role in AI development will not just shape the fortunes of these companies, but will also have broader implications for the future of computing itself. The next few years will be pivotal in determining how quickly AI technologies can advance, and how effectively they can be integrated into industries that will shape the future of society.