Microsoft is building its own AI model

Quartz · Sven Hoppe/picture alliance via Getty Images

Shannon Carroll

Fri, February 13, 2026 at 12:33 AM GMT+9

Microsoft wants out of the “powered by someone else” business. Its AI chief, Mustafa Suleyman, told the Financial Times that the company is pushing toward AI “self-sufficiency”: developing its own advanced foundation models and continuing to reduce its reliance on OpenAI, even as the two companies keep their relationship intact.

Microsoft’s October 2025 reset with OpenAI preserved the core perks: Microsoft says OpenAI remains its “frontier model partner,” and Microsoft’s IP rights and Azure API exclusivity run “through 2032,” including models “post-AGI.” So this is Microsoft buying itself even more room to negotiate, route, and replace.

Because when your flagship AI product sits inside Microsoft 365, “single supplier” starts sounding like a vulnerability you have to try to explain on earnings calls. Microsoft can keep selling “Copilot everywhere,” but the real prize is making sure the underlying compute, security, and billing stay Microsoft-shaped, no matter which model is hot this quarter.

Microsoft is also trying to prove it’s not just talking. In August 2025, Microsoft AI previewed MAI-1-preview, calling it “an in-house mixture-of-experts model” that was “pre-trained and post-trained on ~15,000 NVIDIA H100 GPUs,” with plans to roll it into certain Copilot text use cases.

That’s a clear marker of intent: Microsoft is building models, and it’s doing it at a meaningful scale, on the same hardware reality as everyone else.

Microsoft’s new Maia 200 chip is positioned as an inference accelerator “engineered to dramatically improve the economics of AI token generation” — or, essentially, to take aim at Nvidia’s software, pairing custom silicon with a software package meant to loosen CUDA’s grip. Inference is where the bills stack up — and where hyperscalers most want leverage.

Meanwhile, Microsoft is widening its menu on purpose, hosting models from xAI, Meta, Mistral, and Black Forest Labs in its data centers. It has also been willing to use Anthropic models in Microsoft 365 Copilot experiences after internal testing found them better for certain Office tasks, a shift that even involved paying AWS for access.

So yes, Microsoft wants to be the place where every winning model runs — and it wants at least one winner to have a Microsoft badge on it.
