OpenAI Eyes Alternative Chips as NVIDIA Hardware Faces Performance Challenges
As ChatGPT continues to scale globally, OpenAI is grappling with hardware performance issues that have prompted the company to seek alternatives to NVIDIA’s current processor lineup. The organization has grown increasingly concerned about latency when NVIDIA chips handle the complex computational demands of high-volume user queries.
Performance Bottlenecks Drive the Chip Search
The core challenge stems from response-time limitations in NVIDIA’s existing AI infrastructure. When ChatGPT processes intricate requests from millions of concurrent users, the hardware struggles to deliver the speed users expect. This performance gap isn’t merely a technical concern: it directly affects user experience and operational efficiency. OpenAI’s exploration of competing silicon reflects a broader industry trend of major AI labs diversifying their hardware dependencies.
Moving Beyond Single Vendor Reliance
Sources indicate OpenAI began investigating alternative chips in 2025, exploring semiconductor options from multiple manufacturers. Rather than remaining locked into NVIDIA’s ecosystem, the company is actively evaluating how different processor architectures might better handle its workload patterns. This strategic pivot suggests OpenAI recognizes that relying on a single chip vendor may constrain its ability to optimize performance and manage costs as AI models become increasingly resource-intensive.
The shift marks a significant moment in the AI infrastructure landscape, where even major AI providers are reassessing their chip strategies to balance performance, scalability, and vendor independence.