Epic AI Computing Power Orders! Meta Drops $100 Billion to Secure AMD Chips with a Five-Year Deal to Challenge Nvidia

Meta has reached a groundbreaking five-year strategic agreement with AMD (NASDAQ:AMD), valued at up to $100 billion, to procure 6 gigawatts of computing power chips and deeply customize the MI450 processor. As part of the deal, Meta will receive warrants to purchase up to 160 million shares of AMD, potentially owning up to 10% of the company.

This move aims to secure computing power supply through equity bundling and reduce reliance on Nvidia, marking a new phase in the AI arms race in which tech giants accelerate global competition through deep customization and supply-chain reshaping.

Meta Platforms (NASDAQ:META) and AMD (NASDAQ:AMD) have finalized a massive, unprecedented five-year deal to purchase AI chips and data center equipment, valued at up to $100 billion. This collaboration signals an escalation in the AI infrastructure investment race among global tech giants, while also providing AMD with a key strategic foothold in the computing power market traditionally dominated by Nvidia.

According to the latest disclosed agreement, Meta Platforms (NASDAQ:META) will purchase up to 6 gigawatts of AMD (NASDAQ:AMD) processors and data center equipment over the next five years. Reports from Reuters and The Wall Street Journal peg the total value of the deal at $60 billion to more than $100 billion. The first batch of equipment, featuring AMD’s new MI450 graphics processing unit (GPU), is set to begin deployment in the second half of this year.

As part of an innovative financial arrangement, Meta Platforms (NASDAQ:META) will receive warrants to buy up to 160 million shares of AMD (NASDAQ:AMD) at a price of $0.01 per share. If specific technical and business milestones are met and AMD’s stock price reaches $600 in the future, Meta could potentially acquire around 10% of AMD, becoming one of its core shareholders.
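The warrant terms above can be sanity-checked with quick arithmetic. Note that AMD’s approximate share count (about 1.6 billion shares outstanding) is an outside assumption for illustration, not a figure from the deal disclosure:

```python
# Back-of-the-envelope math for the AMD warrant terms reported above.
# AMD's total share count is an assumed approximation, not from the article.
WARRANT_SHARES = 160_000_000       # shares Meta may purchase via warrants
EXERCISE_PRICE = 0.01              # dollars per share
MILESTONE_PRICE = 600.00           # AMD stock-price milestone cited in the deal
AMD_SHARES_OUTSTANDING = 1.6e9     # assumed approximate share count

exercise_cost = WARRANT_SHARES * EXERCISE_PRICE
market_value_at_milestone = WARRANT_SHARES * MILESTONE_PRICE
# If the warrants are settled with newly issued shares, ownership is diluted:
ownership = WARRANT_SHARES / (AMD_SHARES_OUTSTANDING + WARRANT_SHARES)

print(f"Cost to exercise:  ${exercise_cost:,.0f}")                   # $1,600,000
print(f"Value at $600:     ${market_value_at_milestone / 1e9:.0f}B") # $96B
print(f"Implied ownership: {ownership:.1%}")                         # 9.1%
```

Under these assumptions, the warrants would cost Meta only about $1.6 million to exercise but be worth roughly $96 billion at the $600 milestone price, with an implied stake of about 9%, consistent with the “around 10%” figure reported.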

The announcement quickly triggered a strong market reaction, with AMD (NASDAQ:AMD) shares rising nearly 7% in early trading, reaching $209.80. This deal not only boosts AMD’s revenue prospects but also highlights how large tech companies are reshaping industry supply chains through equity-linked partnerships to secure AI computing power.

Deep Customization and Accelerated Computing Power Deployment

At the heart of this partnership is a highly customized hardware solution. AMD (NASDAQ:AMD) will provide Meta Platforms (NASDAQ:META) with a range of products, including customized central processing units (CPUs), optimized for Meta’s low-power, high-performance needs.

According to Bloomberg, AMD CEO Lisa Su stated that the company is providing “high-performance, energy-efficient infrastructure optimized for Meta’s workloads,” and noted that Meta assisted in the design of the MI450 chip. This chip is primarily optimized for the “inference” phase of AI (the process by which AI models respond to user queries). The Wall Street Journal pointed out that the MI450 uses a “chiplet” architecture design, which makes it easier to customize compared to traditional monolithic silicon chips.

“We have very ambitious goals,” said Santosh Janardhan, Meta’s global infrastructure head, in an interview. “Being able to define the required technical specifications more tightly was one of the key reasons why Meta and AMD formed this deep partnership.”

Challenging Nvidia’s Market Dominance

This deal holds significant strategic importance for AMD (NASDAQ:AMD). Currently, Nvidia (NASDAQ:NVDA) controls approximately 90% of the global AI chip market, with a market value of $4.66 trillion, while AMD’s market cap is around $320 billion.

Just last week, Meta pledged to purchase millions of Nvidia processors to fuel its AI expansion. However, to reduce supply chain risks and enhance bargaining power, tech giants are actively seeking reliable “second suppliers.” Ben Bajarin, a chip analyst at Creative Strategies, pointed out:

“Meta is in a unique position to control the entire tech stack—they can use anyone’s computing power. This deal also underscores the current limitations in the computing power industry.”

Santosh Janardhan added that given Meta’s massive need for data centers and infrastructure, multiple chip suppliers and technological paths are required. He emphasized that Meta will continue to procure chips from Nvidia while also advancing its in-house AI chip development projects.

Equity Bundling and High Capital Expenditures

The structure of Meta’s acquisition of AMD (NASDAQ:AMD) stock warrants has drawn attention to financing models in the AI industry. Last October, OpenAI also reached a very similar agreement with AMD (NASDAQ:AMD). This model, known as “circular financing,” involves customers securing equity or investment commitments from suppliers through large procurement orders. It is increasingly becoming a common method for AI giants to lock in key technologies.

This partnership also reflects the massive capital expenditure pressures facing tech giants in the AI era. Meta Platforms (NASDAQ:META) CEO Mark Zuckerberg has previously identified AI as the company’s top priority, announcing ambitious plans to build “tens of gigawatts” or even “hundreds of gigawatts” of computing power. According to Meta’s earnings report released last month, the company’s capital expenditures for 2026 could reach up to $135 billion, with plans to build around 30 data centers in the U.S. and globally to keep up with the intense global AI race and compete with companies like OpenAI.

Lisa Su remarked that the Meta-AMD partnership is “moving to the next level.” For AMD, which achieved $34.6 billion in revenue last year, even an additional $10 billion in annual sales would significantly accelerate its race to catch up with Nvidia (NASDAQ:NVDA) in the AI chip market.

Nvidia Earnings Report Coming Soon, Citi Provides Optimistic Outlook, AI Inference Roadmap Could Be a New Catalyst

Nvidia (NASDAQ:NVDA) is set to release its latest quarterly earnings and guidance on February 25. Citigroup has given an optimistic outlook for the chip giant, led by Jensen Huang, and expects the company to issue strong performance guidance.

In a report to clients, Citi analyst Atif Malik stated that his model predicts Nvidia’s revenue for the fiscal quarter ending in January will be approximately $67 billion, exceeding Wall Street’s consensus estimate of $65.6 billion. Furthermore, he expects the company’s guidance for the fiscal quarter ending in April to be around $73 billion, notably higher than the market’s expectation of $71.6 billion.
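For scale, Citi’s two estimates each sit roughly 2% above consensus, a quick calculation from the figures quoted above:

```python
# How far Citi's Nvidia estimates sit above Wall Street consensus,
# using the figures cited in the report (in billions of dollars).
citi_vs_consensus = {
    "Jan quarter revenue":  (67.0, 65.6),  # (Citi estimate, consensus)
    "Apr quarter guidance": (73.0, 71.6),
}

for label, (estimate, consensus) in citi_vs_consensus.items():
    beat = (estimate - consensus) / consensus
    print(f"{label}: ${estimate:.1f}B vs ${consensus:.1f}B consensus (+{beat:.1%})")
```

Both gaps work out to a beat of about 2.0–2.1 percentage points over consensus.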

Malik pointed out that the continued ramp-up of B300 products, coupled with the launch of the Rubin architecture, will drive Nvidia’s sales growth to accelerate to 34% year-over-year in the second half of 2026, significantly outpacing the 27% growth expected for the first half of 2026. He believes that investors’ focus has shifted away from the current earnings report and toward the annual GTC conference scheduled for mid-March, where Nvidia is expected to focus on its inference roadmap. This will include details on how the company plans to utilize Groq’s low-latency SRAM intellectual property and provide its first preliminary outlook on AI-related sales from 2026 to 2027.

Based on this outlook, Malik maintains a “Buy” rating on Nvidia and sets a target price of $270.

Looking at the company from a longer-term perspective, Malik further stated that Nvidia’s current valuation “appears attractive.” As market visibility on its 2026 performance improves, the stock is expected to outperform the broader market in the second half of 2026.

He also mentioned that the inference market is evolving toward greater diversity, offering more choices in model scale and application-specific customization. This also means AI accelerators will be deployed in more varied forms. From a system-level perspective, however, he expects Nvidia to continue leading in workloads spanning training, inference, and reasoning, and he believes MLPerf remains the most valuable benchmark for comparing AI accelerators.

Google’s Gemini 3 Deep Think Model Gets Major Upgrade, Aiming at Research and Engineering Applications

Without any tool assistance, the model achieved a 48.4% accuracy rate on the “Humanity’s Last Exam” (HLE) benchmark and scored 84.6% on the ARC-AGI-2 test. It also reached gold medal level in the written portions of the 2025 International Physics and Chemistry Olympiads. Google stated that the new model is designed to help researchers tackle “unsolvable” problems—ranging from identifying flaws in research papers to optimizing semiconductor crystal growth.

Google’s (NASDAQ:GOOGL) Gemini 3 Deep Think model has undergone a significant upgrade, taking its reasoning capabilities from abstract theory to practical applications. This upgrade focuses on solving complex challenges in modern scientific research and engineering, marking Google’s strategic investment in the enterprise AI market.

On Thursday, February 12, Google officially announced the Gemini 3 Deep Think upgrade, stating that the updated model achieved breakthrough results across several industry benchmarks, including 84.6% in the ARC-AGI-2 test (verified by the ARC Prize Foundation) and an Elo score of 3455 on the competitive programming platform Codeforces.

The upgraded deep thinking model is now available to Google AI Ultra subscribers and is accessible through the Gemini API to selected researchers, engineers, and enterprise users as an early-access program. Google reported that the model has already shown practical value in real-world research, from detecting logical flaws in research papers to optimizing semiconductor material growth processes.

This release positions Google to directly compete with OpenAI’s o1 series and Anthropic’s Claude in the AI reasoning model race. As general AI capabilities become increasingly commoditized, specialized reasoning abilities have become the new battleground in the enterprise market. The launch of the deep thinking model signals that Google is unwilling to concede in this high-value sector.

From Benchmark Results to Gold Medal Performance
Google highlighted the deep thinking model’s performance on rigorous academic benchmarks. In addition to the previously mentioned results, Gemini 3 Deep Think achieved gold medal levels in the written portions of the 2025 International Physics and Chemistry Olympiads and scored 50.5% in the CMT-Benchmark advanced theoretical physics test.

Comparative results from Google show that Gemini 3 Deep Think surpassed the strongest models from Anthropic and OpenAI in several tests, including outperforming the Gemini 3 Pro preview version. For instance, in the ARC-AGI-2 test, Gemini 3 Deep Think scored 84.6%, while Anthropic’s Claude Opus 4.6 Thinking Max achieved 68.8%, and OpenAI’s GPT-5.2 Thinking xhigh scored 52.9%.

Google’s team stated that this upgrade was developed in close collaboration with scientists and researchers to address research challenges that lack clear boundaries or single correct answers, often involving messy or incomplete data. The model combines deep scientific knowledge with practical engineering capabilities, bridging the gap from abstract theory to practical applications.

Beyond breakthroughs in mathematics and programming, the deep thinking model has extended its performance to multiple scientific fields, including chemistry and physics, notably theoretical physics. This broad applicability means the model is no longer limited to specific disciplines, but rather serves as a cross-disciplinary research tool.

Real-World Application Cases Validate Its Value
Early test users have demonstrated the model’s real-world potential. Lisa Carbone, a mathematician at Rutgers University, used the deep thinking model to review a highly specialized mathematical paper while researching structures required for high-energy physics. The model identified a subtle logical flaw that had gone undetected despite peer review.

At Duke University, the Wang Lab used the deep thinking model to optimize the manufacturing method for complex crystal growth, aiming at the discovery of potential semiconductor materials. The model designed a formula that grew thin films over 100 microns thick, achieving precision unattainable with prior methods.

Anupam Pathak, head of research and development at Google’s Platforms & Devices Division and former CEO of Liftware, tested the upgraded deep thinking model to accelerate the design of physical components.

Another use case showcased by Google demonstrated how the upgraded Gemini 3 Deep Think could convert sketches into 3D printable physical models. The model can analyze blueprints, model complex shapes, and generate the necessary files for 3D printing.

Strategic Positioning in the Enterprise Market
This upgrade reflects a broader shift in the AI industry, from general chatbots to specialized reasoning engines that can tackle professional-grade problems. For enterprise clients, evaluation criteria are changing: the question is no longer only which AI writes code or summarizes documents fastest, but whether the model can handle complex financial models, analyze experimental data, identify methodological flaws, or assist in patent research and drug discovery.

Google’s advantage lies in its integration capabilities. The deep thinking model is not an isolated tool but part of the broader Gemini ecosystem, meaning it can leverage Google’s vast knowledge graph, scientific datasets, and research partnerships. Researchers using deep thinking through Google Cloud theoretically have access to computational power and data sources that standalone AI services cannot match.

On Thursday, the company posted on X, saying, “The upgraded deep thinking model is driving discoveries and helping researchers solve ‘unsolvable’ problems—from finding flaws in research papers to optimizing semiconductor (crystal) growth.” This statement underscores the model’s transition from benchmark tests to real-world applications.

From a product strategy perspective, Google is targeting both consumer and enterprise users. Google AI Ultra subscribers can immediately access the model via the Gemini app, while scientists, engineers, and enterprise users can apply for early access through the Gemini API. This layered strategy reflects Google’s dual goal of maintaining a presence in the consumer market while vying for high-value enterprise clients.

AI Reasoning Model Competition Heats Up
The launch of the deep thinking model puts Google in direct competition with OpenAI and Anthropic in the AI reasoning race. OpenAI’s o1 model reportedly spends more time “thinking” before generating responses, using reinforcement learning to improve reasoning chains. Anthropic’s Claude 3 has also carved out a niche in research and analytical tasks. Now, Google has staked its claim in the same field, backed by the infrastructure and distribution advantages of being integrated into Workspace and Cloud Platform.

For professional users, this means choosing between fast general responses and slower, deeper reasoning, which introduces a new architectural decision: applications may route simple queries to a standard model while escalating complex ones to the reasoning model, creating a tiered approach to AI reasoning.
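Such tiered routing can be sketched as follows. The model names and the complexity heuristic here are hypothetical placeholders for illustration, not any vendor’s real API:

```python
# Illustrative sketch of tiered model routing: a fast model for simple
# queries, a slower reasoning model for complex ones. Model names and the
# complexity heuristic are hypothetical, not a real API.
from dataclasses import dataclass

FAST_MODEL = "standard-model"         # placeholder name
REASONING_MODEL = "deep-think-model"  # placeholder name

@dataclass
class Route:
    model: str
    reason: str

def route_query(query: str, attachments: int = 0) -> Route:
    """Pick a model tier from crude complexity signals."""
    complex_markers = ("prove", "derive", "optimize", "analyze", "debug")
    is_complex = (
        len(query.split()) > 100                 # long prompts
        or attachments > 0                       # data/files to analyze
        or any(m in query.lower() for m in complex_markers)
    )
    if is_complex:
        return Route(REASONING_MODEL, "complex query: escalate to reasoning tier")
    return Route(FAST_MODEL, "simple query: fast tier")

print(route_query("What time is it in Tokyo?").model)       # standard-model
print(route_query("Derive the growth rate formula").model)  # deep-think-model
```

In practice, production routers often replace such keyword heuristics with a small classifier model, but the escalation structure is the same.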

Google posted on X on Thursday: “Gemini 3 Deep Think performed exceptionally well in pushing the frontiers of intelligence in benchmark tests. Specific data: 48.4% in ‘Humanity’s Last Exam’ (without tools), 84.6% in ARC-AGI-2 (verified by the ARC Prize Foundation), and an Elo rating of 3455 on Codeforces.”

Google also pointed out that the model now excels in fields like chemistry and physics.

The true test of this competition, however, will not be the press releases, but real-world adoption. If research institutions and engineering firms begin using deep thinking models to tackle complex tasks, it will validate Google’s judgment—that the future of enterprise AI lies in depth, not speed. The company has made it clear: it is competing for the high-end sector of the AI market, where reasoning matters more than conversation.