Microsoft is reportedly working with AMD to support the chipmaker’s expansion into artificial intelligence processors. According to a report by Bloomberg, Microsoft is providing technical resources to support AMD’s advancements as the two companies join forces to compete with Nvidia, which holds an estimated 80 percent share of the AI processor market.
In turn, Bloomberg’s sources also claim that AMD is helping Microsoft develop its own in-house AI chips, codenamed Athena. Several hundred employees of Microsoft’s silicon division are reportedly working on the project, and the company has apparently already poured about $2 billion into its development. However, Microsoft spokesperson Frank Shaw has denied that AMD is involved with Athena.
We’ve reached out to AMD and Microsoft for confirmation and will update this story if we hear back.
Nvidia’s impressive market share for GPUs has helped it dominate the AI chip industry
The explosive popularity of AI services like OpenAI’s ChatGPT is driving demand for processors that can handle the massive computing power required to run them. Nvidia’s dominant market share for graphics processing units (GPUs) – the specialized chips that provide that computing power – allows it to dominate this space. There is currently no suitable alternative, which is a problem for companies like Microsoft, which need Nvidia’s expensive processors to power the various AI services in its Azure cloud.
Nvidia’s CUDA libraries have driven most advancements in AI over the last decade. Despite AMD being a major rival in the gaming hardware industry, the company still lacks a suitable alternative to the CUDA ecosystem for large-scale machine learning deployments. With the AI industry heating up, AMD is trying to put itself in a better position to capitalize. “We are very excited about our opportunity in AI — this is our number one strategic priority,” Chief Executive Officer Lisa Su said during the chipmaker’s earnings call on Tuesday. “We are in the very early stages of the AI computing era and the adoption and growth rate is faster than any other technology in recent history.”
Su claims that AMD is well positioned to make partially custom chips for its largest customers to use in their AI data centers. “I think we have a very complete IP portfolio of CPUs, GPUs, FPGAs, adaptive SoCs, DPUs and a very capable semi-custom team,” Su said, adding that the company sees opportunities beyond game consoles.
AMD is also confident that the upcoming Instinct MI300 data center chip can be adapted for generative AI workloads. “MI300 is actually very well positioned for HPC or supercomputing workloads as well as AI workloads,” said Su. “And with the recent interest in generative AI, I’d say the pipeline for MI300 has expanded significantly here over the past few months, and we’re excited about that. We are investing a lot more resources.”
In the meantime, Microsoft plans to continue working closely with Nvidia to secure more of the company’s processors. The AI boom has led to a growing shortage of specialized GPUs, further limited by Nvidia’s near-monopoly on the supply of such hardware. Microsoft and AMD aren’t the only players trying to develop in-house AI chips – Google has its own TPU (Tensor Processing Unit) for training its AI models, and Amazon has similarly created Trainium chips to train machine learning models.