Exposed at the 315 Gala: The AI Poisoning Black Market Is Rampant, and the AI Recommendations You Trust May Have Been "Poisoned" by Others
Introduction: Your digital life is being quietly manipulated. Watch out for pitfalls!
Editor | Jing Cheng
Author | Jiang Jing
At the 2026 CCTV 315 Evening Gala, an inconspicuous demonstration left many viewers in a cold sweat.
A piece of software called the "Liqing GEO Optimization System" can mass-produce promotional "soft articles" and feed them into large AI models. Within just three days, a completely fictional smart wristband was being recommended by AI as a "good product."
This seemingly absurd operation is not an isolated case but a microcosm of a complete industry chain covering software development, content generation, mass publishing, and commercial monetization.
Such AI-mediated misinformation may be at work in every AI query you and I make.
The question is, how is this AI poisoning different from usual false advertising and online rumors? Can ordinary people operate it? What impact does it have on us and the AI industry?
1. What exactly is AI poisoning?
As AI becomes more widely integrated into daily life, the concept of “AI poisoning” is gradually entering the public eye. Many people confuse it with false advertising and online rumors, but there are fundamental differences.
False advertising is direct deception of consumers by a business, such as exaggerating an ordinary water cup into a product with medicinal effects. Online rumors fabricate false information to mislead public opinion or harm others.
In contrast, AI poisoning does not target humans directly. Instead, it pollutes or misleads AI systems—by injecting false content into training data or inputting malicious commands during operation—causing AI to learn incorrect knowledge from the source and indirectly pass on false information to us under the guise of “smart recommendations.”
Renowned financial writer and Impact Research Institute director Gao Chengyuan pointed out that in a business context, GEO (Generative Engine Optimization) is a trust-building system designed for the AI search era. It uses structured, authoritative content to make brands or individuals’ professional images visible, recognized, and trusted by AI systems, ultimately becoming the “default answer” in AI conversations. Its essence is the compliant accumulation of trust assets, not malicious dissemination of false information.
However, as Wu Zewei, a special researcher at the Su Commercial Bank, put it, false advertising and online rumors are direct deceptions aimed at humans, which consumers can often verify through multiple channels. AI poisoning, by contrast, is an indirect, technical attack that contaminates the "water source" the AI relies on for thinking and judgment.
He pointed out that once a model is successfully poisoned, it can continuously and covertly output manipulated “standard answers” to all questions without users knowing, packaging false information as objective algorithmic judgments. The scope of harm can exponentially expand and is much harder to detect.
For example, the GEO software exposed at the 315 Gala is a core tool for AI poisoning.
According to Global Network, industry insiders bought this software on e-commerce platforms, fabricated a smart wristband called Apollo9, and made outrageous claims like “quantum entanglement sensing” and “black hole-level battery life.” After inputting data into the software, it generated more than ten soft articles within minutes, which could be automatically published without human intervention.
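The mechanism behind such bulk-published soft articles can be sketched in a few lines. The toy recommender below simply counts how often each candidate product is mentioned in a text corpus, a crude stand-in for how retrieval-fed AI answers can lean on raw mention frequency. Apart from "Apollo9," which comes from the report, the product name and review texts are invented for illustration.

```python
from collections import Counter

def recommend(corpus, candidates):
    """Pick the candidate product mentioned most often in the corpus.

    A deliberately crude stand-in for how retrieval-based AI answers
    can be swayed by sheer mention frequency in the text they ingest.
    """
    counts = Counter()
    for doc in corpus:
        for product in candidates:
            counts[product] += doc.lower().count(product.lower())
    return counts.most_common(1)[0][0]

# "BrandA Band" is a hypothetical real product; "Apollo9" is the
# fictional wristband from the 315 report.
candidates = ["BrandA Band", "Apollo9"]

# A small "clean" corpus: genuine reviews mention the real product.
clean_corpus = [
    "The BrandA Band has solid battery life and accurate tracking.",
    "I compared several wristbands and settled on the BrandA Band.",
]
assert recommend(clean_corpus, candidates) == "BrandA Band"

# Bulk-published soft articles flood the corpus with the fictional product.
soft_articles = ["The Apollo9 wristband is a must-buy this year."] * 10
poisoned_corpus = clean_corpus + soft_articles
assert recommend(poisoned_corpus, candidates) == "Apollo9"
```

The point of the sketch is that no individual soft article needs to be convincing; volume alone flips the "recommendation," which is why automated mass publishing is central to the scheme.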
2. Is AI poisoning really operable by anyone with zero experience?
Currently, there is online buzz claiming that “AI poisoning can be done with just a hundred yuan and no experience.” What is the real situation? How is GEO software circulating online, and why can it be easily purchased?
Wang Peng, deputy researcher at Beijing Academy of Social Sciences, said that “zero-experience poisoning” often refers to using automation tools to deploy large-scale AI-generated biased content on social media and Q&A platforms. This operation is very low-cost and essentially an “AI version of SEO.”
He pointed out that GEO software circulates in black and gray markets and some e-commerce platforms under the guise of “traffic-driving tools.” It operates in a legal and technical gray area and is initially hard to distinguish from normal content creation, making it very accessible to purchase.
The main reasons it is so easy to buy are strong demand and weak regulation. Companies want to lock in AI recommendation slots; even after spending hundreds of millions on advertising, some are willing to spend a few million more on data poisoning, just to get AI to recommend their products more often.
According to China National Radio, some GEO service providers openly admit, “Doing GEO is poisoning,” serving over 200 clients annually. They profit from the price difference on publishing platforms, with each fake soft article costing only a few dozen yuan, published in bulk, gradually forming a complete black industry loop.
Are techniques like tag flipping, backdoor attacks, and prompt injection technically difficult? Can ordinary people operate them easily? Are there quick methods to identify such poisoning behaviors?
Gao Heng, an expert at the Science and Technology News Society’s Sci-Fi Communication and Future Industry Committee, believes that prompt injection and similar inference-stage attacks are more likely to occur because they happen during model use, essentially inducing the model to produce biased responses through specific inputs.
He said this method usually affects only individual interactions rather than changing the model’s core capabilities. Therefore, technically, truly “polluting” the model’s ability is not easy and cannot be easily scaled with simple tools.
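The training-stage attack mentioned above, label flipping, can be made concrete with a deliberately tiny classifier. The sketch below is an illustrative toy, not any production attack: a one-dimensional nearest-centroid "spam filter" whose verdict flips once an attacker relabels part of its training set. All data points and labels are invented.

```python
def centroid_classifier(samples):
    """Build a 1-D nearest-centroid predictor from (feature, label) pairs."""
    sums, counts = {}, {}
    for x, label in samples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    centroids = {label: sums[label] / counts[label] for label in sums}

    def predict(x):
        # Assign x to the class whose centroid is nearest.
        return min(centroids, key=lambda label: abs(x - centroids[label]))

    return predict

# Clean training data: low scores are "spam", high scores are "ham".
clean = [(1.0, "spam"), (2.0, "spam"), (8.0, "ham"), (9.0, "ham")]
predict_clean = centroid_classifier(clean)
assert predict_clean(8.5) == "ham"

# Label flipping: the attacker relabels the high-score examples and
# plants a single low-score "ham" sample, dragging the centroids around.
poisoned = [(1.0, "spam"), (2.0, "spam"),
            (8.0, "spam"), (9.0, "spam"),  # flipped labels
            (3.0, "ham")]                   # planted example
predict_poisoned = centroid_classifier(poisoned)
assert predict_poisoned(8.5) == "spam"
```

Note that the poisoned model now misclassifies every similar future input, whereas a prompt injection would leave the trained centroids untouched and only manipulate one query at a time, which is why Gao Heng describes inference-stage attacks as affecting individual interactions rather than the model's core capabilities.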
3. AI poisoning threatens the AI industry
As large AI models become more deeply embedded in our lives, work, and creation, a more covert and deadly threat is quietly spreading—AI poisoning.
Unlike hardware failures or algorithm vulnerabilities, it uses false data, malicious samples, and incorrect information as weapons to quietly contaminate the training environment that AI depends on for growth.
Many see it as just a minor issue of unclean data, but they overlook that AI’s foundation is data. Poisoned data can shake the future of AI.
After uncovering the truth about AI poisoning, the most urgent question is: how much impact will this invisible contamination have on the entire AI industry? Will false data poisoning training models cause years of technological accumulation to collapse overnight, leading to a vicious cycle of “bad money driving out good”?
The answer is far more severe than we imagine.
For ordinary people, the most direct harm is being misled by AI: asking which home appliance is best might yield a recommendation for a fictitious product; consulting AI on legal issues could lead to signing an invalid contract; relying on AI for medical advice might produce incorrect medication suggestions. These harms touch our daily lives and are hard to detect, because most people assume that "AI won't make mistakes."
After understanding the dangers of AI poisoning, do you still dare to blindly trust AI recommendations?
Veteran enterprise-management expert and senior consultant Dong Peng believes that, in the short term, it will inevitably produce a "bad money drives out good" effect: contaminated data trains low-quality models, market order is disrupted, compliance costs rise for legitimate companies, and the room for high-quality AI to survive is squeezed.
Mao Huina, founder of Wanshi Technology, believes that AI poisoning will not be effective long-term nor create a cycle of “bad money driving out good.” Ultimately, AI serves users, and although low-quality content may temporarily confuse AI engines, it is easy for users to recognize. User feedback will help large language models learn to distinguish information quality and continue to improve. As a technological advancement, large language models will promote business segmentation and diversification, making it difficult for bad data to persist.
Dong Peng believes that, in the long run, this adversarial pressure will forge a shared evolutionary drive: the proliferation of poisoning attacks will push the industry away from an extensive pursuit of "scale expansion" and toward a more refined focus on "data quality" and "model robustness."
He admits this process is like a powerful vaccine—painful but necessary—awakening the industry’s collective awareness of data security and model trustworthiness, ultimately driving AI technology to spiral upward, achieving a dialectical progression from quantitative to qualitative change.
4. How to steer GEO technology back onto the right path
As AI poisoning shifts from a hidden technical risk to an overt threat to industry security, people are not only wary of risks but also asking a more critical question: do these maliciously used technologies carry an inherent “original sin”?
Many tools abused by black markets were not created for destruction. GEO technology is a typical example. It was originally designed to help optimize AI models and improve data quality and reliability but has been distorted into a weapon for data poisoning driven by profit. Technology itself is innocent; misuse is the real problem.
In response to this chaos, the industry must not only defend passively but also actively correct course. How can the industry guide such technologies back to positive applications? Can industry self-discipline uphold bottom lines? What systems, regulations, and ecosystems are needed to truly promote beneficial development and avoid harm?
The answer requires collective effort from the entire industry.
After the 315 exposure, many GEO-related companies declared their firm opposition to AI poisoning. For example, iFlytek's partners Henan Henghui Heguan Network Technology Co., Ltd. and AB Ke explicitly stated that they do not engage in violations such as "manipulating AI recommendations" or "ranking boosting."
This marks the beginning of industry self-discipline, but self-regulation alone is far from enough.
Yuan Shuai, deputy secretary-general of the Zhongguancun IoT Industry Alliance, said that guiding GEO and similar technologies toward positive applications requires establishing clear technical usage standards industry-wide, defining legitimate scenarios such as model optimization and data calibration, and setting up a registration system for service providers to regulate the development and sale of related tools.
Angel investor and senior AI expert Guo Tao pointed out that the government should strengthen regulation, introduce relevant laws and regulations to severely punish malicious behaviors like poisoning, and establish authoritative data review and supervision agencies to audit AI training data and models. Additionally, technological R&D should be enhanced to improve AI models’ resistance to poisoning, employing technical measures to prevent such attacks. A multi-pronged approach is necessary to guide technology toward positive development.
After the 315 Gala's exposure of this AI black industry, safeguarding the integrity of AI recommendations, and with it the security of your digital life and data, has become crucial.
Next time you encounter unfamiliar products recommended by AI, will you trust them blindly?
Have you ever fallen for false information recommended by AI? Share your experience in the comments and help others avoid pitfalls!
Partially compiled from Global Network, China National Radio, and others.