The AI chip market is no longer a one-horse race. Nvidia still dominates with over 90% market share in discrete AI GPUs, but the landscape is shifting. AMD is gaining real traction with hyperscalers, and Broadcom is building a massive custom silicon business that could reshape the entire market.

Three companies. Three completely different strategies. Here’s how each is playing the game.

Nvidia: The Incumbent

Nvidia’s position is almost absurd in its dominance. Over 90% of AI training and inference runs on Nvidia GPUs. The CUDA ecosystem - nearly two decades of software, libraries, and developer tooling - is the real moat. You can build a faster chip, but you can’t replicate an ecosystem overnight.

The numbers:

  • Revenue: $65.7B expected for fiscal Q4 2026 alone (earnings report Feb 25)
  • Growth: 47% revenue CAGR projected through fiscal 2028
  • Roadmap: Blackwell (shipping now) → Rubin (H2 2026, ~5x inference performance) → Rubin Ultra (2027) → Feynman (2028)
  • Order book: $500B in combined Blackwell + Rubin visibility through end of 2026

The Rubin platform is Nvidia’s biggest leap yet - 6 new chips, 50 petaflops of FP4 inference, and a 10x reduction in token cost compared to Blackwell. Meta just signed a multi-year deal for millions of Nvidia chips. Microsoft, Google, and Amazon are all building Nvidia-powered data centers at unprecedented scale.

The risk: When you’re 90% of the market, every customer you lose makes headlines. And at $700B in collective AI capex from the Mag 7, the ROI question is getting louder.

AMD: The Challenger

AMD’s strategy is straightforward - be the credible alternative. Not necessarily better than Nvidia, but good enough and cheaper. It’s the same playbook Lisa Su ran against Intel in CPUs, and it’s starting to work.

The numbers:

  • AI GPU revenue: Expected to hit $10-12B in 2026
  • Growth: 32% revenue growth projected for 2026
  • Adoption: 8 of the top 10 AI companies now use AMD Instinct for production workloads
  • Software: ROCm downloads increased 10x year-over-year in late 2025

The GPU lineup:

  • MI300X (shipping): 192GB of HBM3 memory - 2.4x the capacity of Nvidia’s H100 (80GB). This matters for large-model inference, where memory capacity is the bottleneck.
  • MI400X (2026): Next-gen CDNA architecture, targeting on-premises AI workloads
  • MI500 (announced at CES 2026): The big Nvidia competitor for next-gen training
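To make the memory-capacity argument concrete, here is a back-of-the-envelope sketch. The model size, precision, and overhead factor are illustrative assumptions, not vendor specifications:

```python
# Rough estimate of GPU memory needed to serve a large model at 16-bit
# precision. The 1.2x overhead factor (KV cache, activations) and the
# 70B-parameter model are hypothetical illustration values.

def inference_memory_gb(params_billions, bytes_per_param=2, overhead=1.2):
    """Weights in GB, times a fudge factor for KV cache and activations."""
    return params_billions * bytes_per_param * overhead

need = inference_memory_gb(70)               # hypothetical 70B model at FP16
print(f"~{need:.0f} GB needed")              # ~168 GB
print("Fits on one 192GB MI300X:", need <= 192)   # True
print("Fits on one 80GB H100:", need <= 80)       # False
```

Under these assumptions, the same model that fits on a single MI300X needs multiple H100s stitched together, which is exactly the cost and complexity AMD is selling against.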

The real progress is in software. ROCm, AMD’s answer to CUDA, has been the historical weakness. But a 10x increase in downloads signals that developers are actually trying it. Microsoft Azure and Meta running production workloads on MI300X is the strongest validation AMD has ever had.

The risk: AMD has 5-10% market share. Even with rapid growth, it’s playing catch-up on the software ecosystem, developer mindshare, and the networking/interconnect layer, where Nvidia’s NVLink is years ahead.

Broadcom: The Kingmaker

This is the one most people miss. While Nvidia and AMD fight over the GPU market, Broadcom is quietly building the custom ASIC business - designing chips specifically for individual customers.

The numbers:

  • AI revenue: $46B projected for 2026 - a 134% year-over-year increase
  • Growth: 38% revenue CAGR through fiscal 2028
  • Market position: On track for 60% share of the custom AI chip market by 2027
  • Custom ASIC shipment growth: 44% increase in 2026 (vs 16% for GPU shipments)
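The projections above can be sanity-checked with simple compounding, treating the article’s fiscal and calendar years loosely:

```python
# Sanity-check the Broadcom figures cited above with simple compounding.
ai_rev_2026 = 46.0        # $B, projected for 2026
yoy_increase = 1.34       # a 134% year-over-year increase

# A 134% increase means 2026 revenue is 2.34x the 2025 base.
implied_2025 = ai_rev_2026 / (1 + yoy_increase)
print(f"Implied 2025 AI revenue: ~${implied_2025:.1f}B")   # ~$19.7B

cagr = 0.38               # projected CAGR through fiscal 2028
implied_2028 = ai_rev_2026 * (1 + cagr) ** 2
print(f"Implied FY2028 AI revenue: ~${implied_2028:.1f}B") # ~$87.6B
```

In other words, the projections imply Broadcom’s AI revenue more than doubling in 2026 and nearly doubling again by fiscal 2028.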

The customers:

  • Google - Broadcom designs the TPU (Tensor Processing Unit). Google’s TPU revenue alone drives ~58% of Broadcom’s ASIC shipments and ~78% of ASIC revenue ($22.1B). Each TPU chip carries a $13,000 price tag.
  • Meta - Custom AI silicon for their data centers
  • OpenAI - A $10B partnership for custom AI chips, with deployments starting H2 2026
  • Anthropic - Joined the custom silicon train as well

Why custom ASICs matter: A custom chip designed for a specific workload can be 2-3x more power efficient and cheaper per operation than a general-purpose GPU. When you’re spending $100B+ on AI infrastructure (like Google or Meta), even a 20% efficiency gain saves billions annually.
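The savings claim is easy to make concrete. A rough sketch, assuming the efficiency gain translates directly into dollars of compute spend (the capex figure is from the text; the direct translation is a simplifying assumption):

```python
# What an efficiency gain is worth at hyperscaler budgets.
annual_ai_capex = 100e9     # $100B+, per the article
efficiency_gain = 0.20      # 20% fewer dollars per unit of work

savings = annual_ai_capex * efficiency_gain
print(f"Annual savings from a 20% gain: ${savings / 1e9:.0f}B")   # $20B

# At the midpoint of the 2-3x efficiency claim, the same workload costs:
asic_efficiency = 2.5
same_work_cost = annual_ai_capex / asic_efficiency
print(f"Same workload on custom ASICs: ${same_work_cost / 1e9:.0f}B")  # $40B
```

Even under much more conservative assumptions, the absolute dollar savings at this scale easily justify a multi-year custom silicon program.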

Here’s the disruptive prediction: by late 2026, the majority of frontier model training may shift to custom ASICs rather than general-purpose GPUs. That’s Broadcom’s bet.

The risk: Broadcom depends on a handful of massive customers. If Google decides to bring TPU design in-house, or if OpenAI’s partnership doesn’t scale, the revenue concentration is dangerous.

Head-to-Head Comparison

| Metric | Nvidia | AMD | Broadcom |
| --- | --- | --- | --- |
| Strategy | Own the full stack (GPU + CPU + networking) | Be the cheaper, credible GPU alternative | Design custom chips for hyperscalers |
| 2026 AI revenue | ~$130B+ (full year) | $10-12B | $46B |
| Revenue CAGR | 47% (FY25-28) | 32% (2026) | 38% (FY25-28) |
| Key product | Rubin (50 PFLOPS) | MI300X / MI400X | Custom ASICs (Google TPU, etc.) |
| Moat | CUDA ecosystem + NVLink | Price/performance + memory capacity | Deep customer relationships + design expertise |
| Biggest customers | Meta, Microsoft, Google | Microsoft Azure, Meta | Google ($22.1B TPU revenue) |
| Market share | ~90% (GPUs) | ~5-10% (GPUs) | ~60% (custom ASICs by 2027) |

The Bigger Picture

The AI chip market isn’t zero-sum. Total AI semiconductor revenue is expected to exceed 50% of all chip sales by the end of the decade. There’s room for all three to grow.

But the dynamics are shifting:

  1. GPUs aren’t the only game anymore. Custom ASICs are growing at 44% vs 16% for GPUs. Broadcom’s custom silicon business is the fastest-growing segment in AI hardware.

  2. Software is the real moat. Nvidia’s CUDA has nearly two decades of head start. AMD’s ROCm is gaining traction but isn’t there yet. Broadcom sidesteps this entirely - its customers write their own software (Google’s JAX, Meta’s PyTorch optimizations).

  3. The customer is becoming the competitor. Google designs its own TPUs. Amazon has Trainium. Microsoft is reportedly working on custom AI silicon. The hyperscalers don’t want to be dependent on any single chip vendor. This helps AMD and Broadcom at Nvidia’s expense.

  4. China is a wildcard. Export restrictions mean Nvidia can’t sell its best chips in China. Huawei’s Ascend chips are filling the gap. This is lost revenue for Nvidia and creates a parallel AI hardware ecosystem.
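The growth gap in point 1 compounds quickly. A rough sketch of how fast those rates close the volume gap, assuming they hold constant (a simplification) and picking a hypothetical starting ratio:

```python
import math

# Custom ASIC shipments growing 44%/yr vs 16%/yr for GPUs (the article's
# 2026 rates, held constant). The starting ratio is a hypothetical
# illustration value, not a figure from the article.
asic_growth, gpu_growth = 1.44, 1.16
start_ratio = 0.30   # hypothetical: ASIC units at 30% of GPU units today

# Solve start_ratio * (asic_growth / gpu_growth)**n = 1 for n.
years_to_parity = math.log(1 / start_ratio) / math.log(asic_growth / gpu_growth)
print(f"Years to unit parity: {years_to_parity:.1f}")   # ~5.6 years
```

The point is not the exact number but the shape: a ~28-point growth differential, if sustained, erases even a large volume gap within a handful of years.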

Who Wins?

Short term (2026): Nvidia. The Rubin platform is a generational leap, the $500B order book is real, and no one else has the full-stack integration. Earnings on Feb 25 will likely confirm this.

Medium term (2027-2028): Broadcom becomes the dark horse. As custom ASICs mature and more hyperscalers design their own chips, the GPU-only model faces pressure. Broadcom’s 60% custom ASIC market share by 2027 is a powerful position.

Long term: The market fragments. Nvidia keeps the general-purpose AI market. Broadcom owns custom silicon for the top 5-10 hyperscalers. AMD captures the price-sensitive and on-premises segment. And increasingly, the biggest customers build their own chips entirely.

The AI chip war has three fronts now. Investors betting on just one company are missing two-thirds of the story.