ChatGPT-Style Products Keep Emerging, While Global Regulators Cross the River by Feeling for the Stones
Source: Power Plant, Author: Zhang Yongyi, Editor: Gao Yulei
On July 31, ChatGPT-related products were collectively removed from Apple's App Store in China. According to Apple, such products must obtain an operating license.
It is not only China: the United States and Europe, both determined to compete in this field, are also actively advancing legislation and regulation even as they encourage innovation.
In April 2021, the European Commission first published a proposal for a regulatory framework governing the operation and oversight of machine-learning applications, the first formal proposal to regulate AI. At the time, the most popular view in the AI industry was still Andrew Ng's famous quip that "worrying about artificial intelligence today is like worrying about overpopulation on Mars."
By 2023, however, that view could no longer hold as mainstream: in less than six months, generative AI had shown the world its vast potential to replace human labor and remake the existing order, much like the nuclear weapons that emerged from World War II.
Physicist J. Robert Oppenheimer led the development of the world's first atomic bomb. By the time it was ready, World War II was drawing to a close, yet the bomb, the most terrifying weapon in human history, went on to shape the course of history to a large extent: the US government disregarded the warnings of Oppenheimer and other experts that "nuclear proliferation will lead to an unprecedented arms race," which ultimately led to a US-Soviet nuclear arms race epitomized by the hydrogen bomb, the "super bomb." During the Cuban Missile Crisis, that race very nearly dragged all of human civilization into an irredeemable abyss.
The crisis Oppenheimer faced has much in common with the "AI crisis" humanity faces today: before humans let AI drag them into an ever larger and less controllable future, perhaps the best approach is to use regulation to guide it onto a safe track.
The EU's precautionary move in 2021 eventually became the first regulatory framework for the artificial intelligence industry in human history, and the predecessor of the EU's Artificial Intelligence Act (AI Act).
When lawmakers designed that bill, however, they did not anticipate the arrival of generative AI and large models at the end of 2022. The explosive rise of generative AI in 2023 therefore prompted the addition of new sections on generative AI, covering the transparency of large-model use and the regulation of how user data is collected and used.
The bill was passed by the European Parliament in mid-June. The final terms will now be settled in negotiations among the three EU decision-making bodies, the Parliament, the Council, and the Commission; once they reach agreement, the law will come into force.
In China, legislation on generative AI is also under way: on July 13, the "Interim Measures for the Administration of Generative Artificial Intelligence Services" (hereinafter, the "Interim Measures") were jointly issued by seven ministries and commissions, including the Cyberspace Administration of China, and take effect in August.
These may become the first generative AI regulations anywhere to actually take effect. In this round of the "AI legislative race," China's process has overtaken the others, producing the fastest-moving dedicated regulation in the field of generative AI.
In other first-tier AI countries such as the United Kingdom and the United States, regulatory legislation is also in progress. On March 16, the US Copyright Office launched an initiative to study the copyright law and policy issues raised by artificial intelligence, including the scope of copyright in works generated with AI tools and the use of copyrighted material for machine-learning purposes. The UK government released its first AI regulatory framework on April 4. The US National Telecommunications and Information Administration (NTIA) has since released a draft on AI accountability, soliciting broader public feedback on accountability measures and policies.
INNOVATION AND ACCOUNTABILITY
The legislation and governance of generative AI is territory no one has entered before, and every legislator must bear outside skepticism. At this year's Google I/O developer conference, Google officially released its generative AI chatbot Bard, but excluded Europe entirely from its service area.
This led many European AI researchers and companies to ask: why is Europe missing? Google later said repeatedly that it "looks forward to opening up to European users," which was widely read as a hedging measure to steer clear of the legal gray areas around generative AI and the huge EU fines they could trigger.
By July, Google finally revealed why. Jack Krawczyk, Bard's product lead, wrote in a blog post that Google had signaled its intention to launch Bard in Europe to the Irish Data Protection Commission (DPC) from the outset, but it took until July to satisfy the regulator's requests for information.
Today the EU's Artificial Intelligence Act has been published, and nearly every provision in it targets an emerging or potential problem in current AI development: the spread of false and misleading information, and possible harms to education, mental health, and other areas.
Legislators then found that the harder part lies in deciding how to legislate: the law must protect innovation and prevent giants from monopolizing the field, while also keeping AI controllable to a degree and avoiding a flood of fake content. This balance has become the common core of generative AI legislation in China, the United States, and Europe, though each places different emphases in its actual rules.
Several earlier European rulings on artificial intelligence drew negative responses from institutions such as Google and OpenAI. Many European tech companies, and even legislators, worry that overly strict legislation will make it hard to realize the vision of Europe returning to the world's leading rank on the strength of the AI industry. Once the Artificial Intelligence Act formally passes, the EU will have the power to fine companies that violate its AI rules up to 30 million euros, or 6% of annual revenue. For companies such as Google and Microsoft that hope to expand their generative AI business in Europe, this is an unmistakable warning.
In the June version of the Artificial Intelligence Act, EU lawmakers explicitly added provisions that put responsible AI innovation in a prominent position while reducing technical risk and supervising AI properly: Article 1 expressly supports innovation measures for small and medium-sized enterprises and startups, including "regulatory sandboxes" and other steps to reduce their compliance burden.
However, before selling AI services or deploying systems on the market, generative AI providers must meet a series of regulatory requirements on risk management, data, transparency, and documentation. Meanwhile, the use of AI in sensitive areas such as critical infrastructure will be classified as "high risk" and brought within the scope of supervision.
Enforcement based on the "Interim Measures" has already begun: on July 31, a large number of apps offering ChatGPT services in China's App Store were collectively removed by Apple, while another batch of apps that do not provide direct ChatGPT access but still center on AI features remains unaffected for now.
Among the controversies surrounding the birth of generative AI, the most prominent concerns the copyright of data used to train its large language models. Manufacturers in the first echelon of generative AI development have already hit the "invisible ceiling" of scarce high-quality content, while countless creators and media outlets have begun lawsuits and other actions over the copyright issues generative AI raises. Drafting content-copyright provisions for the development of generative AI has become urgent.
This is accordingly a focus of generative AI legislation in both the United States and Europe: the EU's Artificial Intelligence Act explicitly requires large-model providers to declare whether they used copyrighted material to train AI and to keep sufficient logs for creators to seek compensation, while the US Copyright Office has opened a new review soliciting input on the broad copyright questions raised by generative AI, with an eye toward dedicated legislation to resolve them.
In the current "Interim Measures," the relevant paragraphs from April's draft for comments have been deleted, leaving the measures silent on intellectual property protection for data sources, a gap that urgently needs filling.
Yet for the large models on which generative AI depends, scraping the internet for data to accelerate iteration is how the field develops, and one-size-fits-all restrictions would likely deal a heavy blow to the entire industry. Both laws therefore carve out "exemptions" for particular scenarios. The EU's Artificial Intelligence Act includes an exemption for AI research by open-source developers: collaboration and the building of AI components in open environments receive special protection. Likewise, the "Interim Measures" clarify the law's scope of exemption:
"As long as enterprises, scientific research institutions, and the like do not openly provide generative artificial intelligence services to the public, this law does not apply."
"Legislation in the field of intellectual property still needs more time for research and demonstration," a legal consultant serving large-model manufacturers told reporters. "High-quality Chinese datasets are even scarcer than English content, and the nature of generative AI training means that even the giants cannot sign individual contracts with every platform and every content creator, to say nothing of sky-high licensing fees, which would already be a crushing blow to startups."
"Perhaps the current problem can only be left to time and industry practice to produce better solutions," the consultant added.
COMPETITION AND COOPERATION
Although AI development has become one of the main battlefields in the US effort to contain China, in the field of generative AI legislation, cooperation among China, the United States, and Europe is gradually becoming the mainstream.
At this stage, American AI companies have recognized the risks that come with AI development and have promised that AI will "do no evil": OpenAI states that its mission is "to ensure that artificial general intelligence benefits all of humanity"; DeepMind's operating principles include a commitment to "be a responsible pioneer in the field of artificial intelligence," and its founders have pledged not to engage in lethal AI research; Google's AI principles likewise stipulate that Google will not deploy or design weapons that harm humans, or AI that violates international norms for surveillance.
On July 26, Anthropic, Google, Microsoft, and OpenAI launched the Frontier Model Forum, an industry body focused on ensuring the safe and responsible development of cutting-edge AI models.
The forum aims to "work with policymakers, academia, and civil society to minimize risk and share knowledge about security risks" while advancing AI research, and to plug into existing international multilateral cooperation, including G7 and OECD policy-making on generative AI, so that countries move in step on legislative direction and regulatory concepts.
This is also consistent with the views of Max Tegmark, a professor at the Massachusetts Institute of Technology and founder of the Future of Life Institute: "China is in an advantageous position in many fields of generative AI, and will likely become a leader in controlling artificial intelligence."
Large models in China and the United States account for 90% of the world's total, so legislative coordination among China, the US, and Europe will largely determine the trajectory of the global generative AI industry. "Legal coordination with China, on the premise that it serves US interests," has gradually become a consensus in American political and academic circles.
A representative example is the grading of AI risks. China's "Interim Measures" call for "inclusive, prudent, classified, and graded supervision of generative artificial intelligence services," but the current version does not elaborate on the grading; so far it contains only language such as "formulating corresponding classification and grading supervision rules or guidelines."
The EU's Artificial Intelligence Act also proposes a grading system that matches obligations to AI developers at different risk levels. The currently proposed AI "threat levels" comprise four categories: unacceptable risk (prohibited outright), high risk, limited risk, and minimal risk.
In the current version, the Act carves out generative AI as an independent category while strictly restricting products classified as "high-risk AI" to reduce the possibility of harm. This is also regarded as a "reference blueprint" for future AI-risk legislation in China and the United States.
In addition, judging from the actual legislative process, actively inviting the public and AI research institutions to take part has, through the experience of the past six months, become a consensus among regulators in various countries, an unwritten rule that goes unquestioned.
"Generative AI may never be perfect, but the law is destined to play a major role in steering the key directions of AI development."
Just as Oppenheimer, despite opposing the use of nuclear weapons in war, never regretted developing the atomic bomb in New Mexico, no one can stop curious researchers from using existing technology to build smarter AI; the pace of scientific exploration will not stop there. "Necessary and reasonable" supervision is the speed bump on AI's headlong run, and the last line of defense against generative AI causing real harm.
Still, "how to avoid falling behind in the AI race because of regulation" remains one of lawmakers' chief concerns. The relevant legislation will undoubtedly grow more complete with more experience and feedback.
"Regulatory agencies will uphold the spirit of scientific legislation and open-door legislation, and make timely revisions and improvements." This passage from the "Interim Measures" may be the best principle for legislating in the era of generative AI.