NVIDIA earns about 2.2 billion RMB daily, with an annual net profit exceeding four times that of Tencent
Summary:
In a new coordinate system, tokens are the new crude oil, and inference is the new engine.
After the market close on Wednesday, NVIDIA delivered a performance that once again silenced Wall Street: fourth-quarter revenue reached a record $68.127 billion, up 73% from $39.331 billion a year earlier, while net profit was $42.96 billion, up 94% from $22.091 billion. For the full fiscal year, NVIDIA posted revenue of $215.938 billion and net profit of $120.067 billion, or about $328 million per day (roughly 2.2 billion RMB). Based on that full-year net profit, NVIDIA earned approximately 4.22 times Tencent's 2024 net profit (194 billion RMB) and about 6.34 times Alibaba's fiscal 2025 net profit (129.4 billion RMB).
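The multiples above can be checked with back-of-the-envelope arithmetic. The sketch below takes the profit figures from this paragraph and assumes an exchange rate of roughly 6.8 RMB per USD, which the article does not state but which reproduces its ratios to within rounding.

```python
# Sanity-check the article's arithmetic on NVIDIA's full-year net profit.
# Assumption (not stated in the article): exchange rate of ~6.8 RMB/USD.

NET_PROFIT_USD_BN = 120.067   # NVIDIA full-year net profit, $bn (from the article)
TENCENT_RMB_BN = 194.0        # Tencent 2024 net profit, RMB bn (from the article)
ALIBABA_RMB_BN = 129.4        # Alibaba FY2025 net profit, RMB bn (from the article)
RMB_PER_USD = 6.8             # assumed exchange rate

profit_rmb_bn = NET_PROFIT_USD_BN * RMB_PER_USD

daily_usd_mn = NET_PROFIT_USD_BN / 365 * 1000   # ~$329M per day
daily_rmb_bn = profit_rmb_bn / 365              # ~2.2bn RMB per day
vs_tencent = profit_rmb_bn / TENCENT_RMB_BN     # ~4.2x Tencent
vs_alibaba = profit_rmb_bn / ALIBABA_RMB_BN     # ~6.3x Alibaba

print(f"daily profit: ${daily_usd_mn:.0f}M (~{daily_rmb_bn:.1f}bn RMB)")
print(f"vs Tencent: {vs_tencent:.2f}x, vs Alibaba: {vs_alibaba:.2f}x")
```

At this assumed rate the numbers line up with the article's: roughly $329 million (about 2.2 billion RMB) per day, ~4.2x Tencent and ~6.3x Alibaba.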
While the entire tech industry discusses cost reduction and efficiency gains, and debates when the AI bubble will burst, this chip giant used its full-year revenue of $215.9 billion to announce a somewhat brutal truth to the market: for true top players, there is no winter, only a return to dominance.
But NVIDIA’s performance has never been just the story of a king; it is a barometer of overall AI demand, and its rapid growth signals that the entire market’s computing needs are skyrocketing.
Amid Wall Street analysts’ concerns about the AI bubble and cloud data center CAPEX, Jensen Huang remains very confident. He said he is very optimistic about clients’ cash flow growth, “The reason is very simple. We are now seeing a turning point in agentic AI, and everyone is realizing the actual value of AI agents across the globe and in various enterprises.”
He clearly sees that in this new AI era, a company’s computing power equals its revenue. Without computing power, tokens cannot be generated; without tokens, revenue growth is impossible.
Breaking records is not enough; NVIDIA’s goal is to “take everything”
If you try to find any evidence of “AI spending slowdown” in NVIDIA’s financial report, you will likely be disappointed. The $68.1 billion quarterly revenue is not just a celebration of numbers but a precise reflection of the current industry state: AI remains an extremely expensive game, and the cost barriers are rising higher for top-tier companies.
A closer look at the report shows that the data center business contributed $62.3 billion in a single quarter, up 75% year-over-year, lifting its share of revenue to over 91%. This growth is driven by the four major cloud providers, Google, Amazon, Meta, and Microsoft, whose capital spending has become almost frantic. Wall Street estimates suggest these giants’ combined CAPEX could approach $700 billion by 2026.
Furthermore, in fiscal year 2026, NVIDIA’s revenue hit a record $215.938 billion, surpassing $200 billion for the first time.
From the outside, NVIDIA has secured a core position in the AI era through its computing power dominance.
But NVIDIA’s ambition is to bring everything onto its platform: “As we continue to build a complete NVIDIA AI ecosystem, covering AI, physical AI, life sciences, biology, robotics, and manufacturing, we hope these ecosystems will be built on NVIDIA’s platform,” Huang said.
To this end, NVIDIA is strengthening its moat through capital investments. Huang mentioned during the conference that NVIDIA is close to reaching an agreement with OpenAI. This collaboration was initially outlined last year as a potential $100 billion AI infrastructure project. Huang described OpenAI as “a once-in-a-generation company.” Additionally, NVIDIA acquired the AI inference chip startup Groq’s technology license for about $20 billion last year and brought its core team onboard to complete the inference computing puzzle.
Moreover, NVIDIA recognizes the shifting global competitive landscape. After receiving approval in August last year, NVIDIA sold about $60 million worth of H20 chips in China; since that license was granted, however, it has yet to generate any revenue in fiscal 2026 from the H200 licensing project.
NVIDIA’s CFO, Colette Kress, openly admitted that NVIDIA’s competitors in China are “making progress,” mentioning companies that have strengthened through recent IPOs. She pointed out that these companies have the potential to disrupt the existing global AI landscape and could impact NVIDIA’s competitive position worldwide.
For example, the China-listed chipmaker Cambricon issued a performance forecast in early 2026: full-year 2025 revenue is expected to come in between 6 billion and 7 billion RMB, a more than fourfold increase year-over-year. More importantly, net profit attributable to shareholders is expected to be between 1.85 billion and 2.15 billion RMB, its first annual profit since listing.
Welcome to the Agentic AI era!
Another aspect of NVIDIA’s financial report is the soaring global computing demand.
A recent McKinsey survey shows that over 70% of CIOs at large enterprises plan to double their technology spending between 2026 and 2027, with 70% of their budgets being redirected toward AI-related areas.
Despite Microsoft’s and Meta’s large GPU purchases, a McKinsey senior partner admits that the return on investment (ROI) for AI remains “elusive.” Clients often demand a 20% to 40% productivity boost in exchange for large orders, essentially using efficiency gains to offset the high cost of computing power. NVIDIA’s impressive financials are built on this “painful yet joyful” contradiction.
However, Huang sees a fundamental shift in this business logic. He repeatedly emphasizes a core point during the conference call: “Inference now equals our clients’ revenue.”
He detailed the underlying logic: “Because intelligent agents are generating so many tokens, and the results are so effective. When an AI agent writes code, it generates thousands, tens of thousands, or hundreds of thousands of tokens, because they run for minutes to hours. So these systems, these agent systems, are emerging as a team of different intelligent agents.”
In his view, “the number of tokens being generated is truly exponential. Therefore, we need to perform inference at higher speeds. When you do inference faster, each token is monetized, which directly translates into revenue.”
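Huang’s claim that faster inference “directly translates into revenue” is, at bottom, linear arithmetic on token throughput. The sketch below illustrates it with hypothetical throughput and pricing numbers; they are not NVIDIA’s or any provider’s actual figures.

```python
# Illustrative only: throughput and price figures below are hypothetical,
# not actual numbers from NVIDIA or any inference provider.

def inference_revenue_per_hour(tokens_per_second: float,
                               price_per_million_tokens_usd: float) -> float:
    """Revenue from a fully utilized inference system over one hour."""
    tokens_per_hour = tokens_per_second * 3600
    return tokens_per_hour / 1_000_000 * price_per_million_tokens_usd

# Doubling token throughput at a fixed per-token price doubles revenue,
# which is Huang's point: faster inference monetizes directly.
slow = inference_revenue_per_hour(10_000, 2.0)   # $72 per hour
fast = inference_revenue_per_hour(20_000, 2.0)   # $144 per hour
print(slow, fast)
```

The same linearity is why exponential growth in tokens generated, as Huang describes, implies exponential growth in the revenue those tokens can carry.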
The popularity of agentic AI resonates with the recent rapid rise of OpenClaw.
OpenClaw’s creator, Peter Steinberger, recently said in an interview with OpenAI that over the past year he alone made more than 90,000 code commits on GitHub across 120+ projects, an unprecedented pace in software engineering. The emergence of agentic AI has greatly reduced R&D costs: where developing an MVP (minimum viable product) once required a full team of architects, front-end and back-end engineers, and testers, one person can now complete it within hours. This further accelerates the arrival of the agentic AI era, in which hiring a silicon-based employee will soon be standard.
Peter is a firm supporter of this idea: “I think NVIDIA’s CEO said it — in the short term, you won’t be replaced by AI; you will be replaced by those who use AI.”
Riding this wave, Chinese large-model companies continue to gain overseas market share on the strength of cost-effectiveness. After launching K2.5 on January 27, 2026, Kimi quickly became the first main model that OpenClaw supports for free. Within 20 days of K2.5’s launch, Kimi’s revenue reportedly exceeded its total for all of 2025.
In any case, computing demand is “exploding.”
Currently, NVIDIA’s orders are scheduled out to 2027, and product iteration is transitioning seamlessly from old to new: the Blackwell architecture is gaining momentum, and the next-generation Vera Rubin platform has already taken a significant step forward. Colette Kress stated, “We shipped the first batch of Vera Rubin samples to customers earlier this week, and we are on track to start mass production and shipments in the second half of this year. We expect every cloud model builder to deploy Vera Rubin.”
In this new coordinate system, tokens are the new oil, inference is the new engine, and NVIDIA’s ambition is to gradually become the infrastructure contractor of this new world.