A Sora Competitor Has Learned to Generate Videos with Complex Editing (ForkLog)
Chinese developer Kuaishou has introduced the third version of the Kling AI video generation model.
The model combines several tasks: transforming text, images, and reference material into videos; adding or removing content; and modifying and transforming clips.
Maximum video length has increased to 15 seconds. Other improvements include more flexible shot control and more precise prompt adherence. Overall realism has also improved: character movements are more expressive and dynamic.
The model supports a range of editing techniques, from classic shot-reverse-shot dialogue to parallel storytelling and scenes with voice-over.
In addition to standard image-based video generation, Kling 3.0 supports multiple reference images and video sources as scene elements.
The model captures characteristics of characters, objects, and episodes. Regardless of camera movement and plot development, key objects remain stable and consistent throughout the video.
The developers have also improved native audio: the system synchronizes speech with facial expressions more accurately, and in dialogue scenes users can manually designate which character is speaking.
The list of supported languages has been expanded to include Chinese, English, Japanese, Korean, and Spanish. Dialects and accents are also better conveyed.
Additionally, the team has upgraded its multimodal O1 model to Video 3.0 Omni.
Sora's competitors are advancing
OpenAI introduced the Sora video generation model in February 2024. The tool caused a stir on social media, but the public release came only in December.
Almost a year later, users gained access to text-to-video generation, image animation, and scene completion.
The Sora iOS app launched in September and immediately drew attention, with over 100,000 installs on the first day. Despite being invite-only, the service surpassed 1 million downloads faster than ChatGPT did.
However, the trend soon reversed. In December, downloads fell 32% from the previous month, and in January the decline continued: the app was downloaded 1.2 million times.
Sora also competes with Meta AI and its Vibes feature. In December, market pressure grew from the startup Runway, whose Gen 4.5 model outperformed rivals in independent tests.
In addition, OpenAI's product has faced copyright infringement problems. Users created videos featuring popular characters such as SpongeBob and Pikachu, prompting the company to tighten restrictions.
In December, the situation stabilized after an agreement with Disney that allowed users to generate videos with the studio's characters. This, however, did not translate into more downloads.
As a reminder, in October Sora was flooded with deepfakes featuring Sam Altman.