🚀 #GateNewbieVillageEpisode5 ✖️ @Surrealist5N1K
💬 Stay clear-headed in a bull market, calm in a bear market.
Share your trading journey | Discuss strategies | Grow with the Gate Family
⏰ Event Time: Nov 5 10:00 – Nov 12 26:00 UTC
How to Join:
1️⃣ Follow Gate_Square + @Surrealist5N1K
2️⃣ Post on Gate Square with the hashtag #GateNewbieVillageEpisode5
3️⃣ Share your trading experiences, insights, or growth stories
— The more genuine and insightful your post, the higher your chance of winning!
🎁 Rewards
3 lucky participants → Gate X RedBull Cap + $20 Position Voucher
If delivery is unavailable, th
OpenAI and Anthropic are testing each other's models for issues such as hallucinations and safety failures.
According to Jin10 Data on August 28, OpenAI and Anthropic recently evaluated each other's models to identify potential issues that each company's own testing may have overlooked. In blog posts published Wednesday, the two companies said that this summer they ran safety tests on each other's publicly available AI models, examining whether the models showed hallucination tendencies as well as so-called "misalignment," in which a model does not operate as its developers intended. The evaluations were completed before OpenAI launched GPT-5 and Anthropic released Opus 4.1 in early August. Anthropic was founded by former OpenAI employees.