AI Layer1 Research Report: The Infrastructure Battle of Decentralization in Artificial Intelligence

Overview

In recent years, leading tech companies such as OpenAI, Anthropic, Google, and Meta have driven the rapid development of large language models (LLMs). LLMs have demonstrated unprecedented capabilities across industries, greatly expanded what seems possible, and in some scenarios even shown the potential to replace human labor. Yet the core of these technologies remains firmly in the hands of a few centralized tech giants. With substantial capital and control over costly computing resources, these companies have built nearly insurmountable barriers, making it difficult for the vast majority of developers and innovation teams to compete.

At the same time, in this early stage of rapid AI evolution, public attention tends to focus on the breakthroughs and conveniences the technology brings, while core issues such as privacy protection, transparency, and security receive comparatively little scrutiny. In the long run, these issues will profoundly affect the healthy development of the AI industry and its social acceptance. If left unaddressed, the debate over whether AI is a force "for good" or "for evil" will only intensify, and profit-driven centralized giants often lack the incentive to proactively confront these challenges.

Blockchain technology, with its decentralization, transparency, and censorship resistance, offers new possibilities for the sustainable development of the AI industry. Numerous "Web3 AI" applications have already emerged on mainstream blockchains. A closer analysis, however, reveals persistent problems: on one hand, decentralization remains limited, key components and infrastructure still depend on centralized cloud services, and many projects lean heavily on meme-driven speculation, making it hard to support a truly open ecosystem; on the other hand, compared with AI products in the Web2 world, on-chain AI still lags in model capability, data utilization, and application scenarios, and both the depth and breadth of its innovation need improvement.

To truly realize the vision of decentralized AI, enabling blockchain to safely, efficiently, and democratically support large-scale AI applications, and to compete with centralized solutions in terms of performance, we need to design a Layer 1 blockchain specifically tailored for AI. This will provide a solid foundation for open innovation in AI, democratic governance, and data security, promoting the prosperous development of a decentralized AI ecosystem.

Biteye and PANews jointly released AI Layer1 research report: Searching for fertile ground for on-chain DeAI

Core features of AI Layer 1

AI Layer 1, as a blockchain specifically tailored for AI applications, has its underlying architecture and performance design closely aligned with the requirements of AI tasks, aiming to effectively support the sustainable development and prosperity of the on-chain AI ecosystem. Specifically, AI Layer 1 should possess the following core capabilities:

  1. Efficient incentives and a decentralized consensus mechanism. The core of AI Layer 1 lies in building an open network for shared resources such as computing power and storage. Unlike traditional blockchain nodes, which mainly handle ledger bookkeeping, AI Layer 1 nodes must undertake more complex tasks: providing computing power for AI model training and inference, and contributing diverse resources such as storage, data, and bandwidth, thereby breaking the monopoly of centralized giants on AI infrastructure. This places higher demands on the underlying consensus and incentive mechanisms: AI Layer 1 must accurately assess, incentivize, and verify nodes' actual contributions to inference and training tasks, ensuring network security and efficient resource allocation. Only then can the network's stability and prosperity be guaranteed and overall computing costs effectively reduced.

  2. Exceptional performance and support for heterogeneous tasks. AI tasks, especially LLM training and inference, place extremely high demands on computational performance and parallel processing. Moreover, the on-chain AI ecosystem must support diverse, heterogeneous task types, spanning different model architectures, data processing, inference, and storage. AI Layer 1 must deeply optimize its underlying architecture for high throughput, low latency, and elastic parallelism, while providing native support for heterogeneous computing resources, so that all kinds of AI tasks run efficiently and the ecosystem can evolve smoothly from "single-type tasks" to "complex, diverse ecosystems."

  3. Verifiability and trustworthy outputs. AI Layer 1 must not only guard against security risks such as malicious model behavior and data tampering, but also guarantee at the mechanism level that AI outputs are verifiable and aligned. By integrating cutting-edge technologies such as Trusted Execution Environments (TEE), zero-knowledge proofs (ZK), and multi-party computation (MPC), the platform allows every model inference, training run, and data-processing step to be independently verified, ensuring the fairness and transparency of the AI system. This verifiability also helps users understand the logic and basis of AI outputs, achieving "what you get is what you asked for" and strengthening user trust in AI products.

  4. Data privacy protection. AI applications often involve sensitive user data; in finance, healthcare, and social applications, data privacy is especially critical. While preserving verifiability, AI Layer 1 should adopt cryptography-based data processing, privacy-preserving computation protocols, and data permission management to secure data throughout inference, training, and storage, effectively preventing leakage and abuse and easing users' data-security concerns.

  5. Strong ecosystem capacity and developer support. As AI-native Layer 1 infrastructure, the platform needs not only technological leadership but also comprehensive development tools, integrated SDKs, operational support, and incentive mechanisms for ecosystem participants such as developers, node operators, and AI service providers. By continuously improving usability and the developer experience, it can foster the deployment of diverse AI-native applications and sustain a prosperous decentralized AI ecosystem.

Against this background and these expectations, this article introduces six representative AI Layer 1 projects: Sentient, Sahara AI, Ritual, Gensyn, Bittensor, and 0G. It systematically surveys the latest developments in the field, analyzes each project's current status, and discusses future trends.


Sentient: Building a Loyal, Open-Source, Decentralized AI Model

Project Overview

Sentient is an open-source protocol platform building an AI Layer 1 blockchain (launching first as a Layer 2, then migrating to Layer 1). By combining an AI pipeline with blockchain technology, it aims to create a decentralized artificial intelligence economy. Its core goal is to use the OML framework (Open, Monetizable, Loyal) to solve the problems of model ownership, invocation tracking, and value distribution in the centralized LLM market, giving AI models an on-chain ownership structure, transparent invocation, and shared value. Sentient's vision is to let anyone build, collaborate on, own, and monetize AI products, fostering a fair and open AI agent network ecosystem.

The Sentient Foundation team brings together top academics, blockchain entrepreneurs, and engineers from around the world, dedicated to building a community-driven, open-source, verifiable AGI platform. Core members include Princeton University professor Pramod Viswanath and Indian Institute of Science professor Himanshu Tyagi, who lead AI safety and privacy work, while Polygon co-founder Sandeep Nailwal leads blockchain strategy and ecosystem development. Team members come from companies such as Meta, Coinbase, and Polygon and from top universities such as Princeton University and the Indian Institutes of Technology, spanning AI/ML, NLP, and computer vision.

As the second venture of Polygon co-founder Sandeep Nailwal, Sentient launched with considerable visibility, resources, connections, and market recognition, strong backing for the project's development. In mid-2024, Sentient closed an $85 million seed round led by Founders Fund, Pantera, and Framework Ventures, with participation from dozens of well-known VCs including Delphi, Hashkey, and Spartan.


Design Architecture and Application Layer

Infrastructure Layer

Core Architecture

The core architecture of Sentient consists of two parts: AI Pipeline and on-chain system.

The AI pipeline is the foundation for developing and training "Loyal AI" artifacts, consisting of two core processes:

  • Data Curation: a community-driven data selection process used for model alignment.
  • Loyalty Training: a training process that keeps the model aligned with community intentions.

The blockchain system provides transparency and decentralized control for the protocol, ensuring ownership of AI artifacts, usage tracking, revenue distribution, and fair governance. Its architecture is divided into four layers:

  • Storage Layer: Stores model weights and fingerprint registration information;
  • Distribution Layer: The entry point for model calls controlled by the authorization contract;
  • Access Layer: Verifies whether the user is authorized through permission proof;
  • Incentive Layer: The revenue routing contract allocates payments to trainers, deployers, and validators on each call.
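The incentive layer's per-call revenue routing can be sketched as a minimal split function. This is an illustrative sketch, not Sentient's actual contract: the role names are taken from the description above, but the share weights and the rounding rule are assumptions, since the real weights would be set by on-chain governance.

```python
from decimal import Decimal

# Hypothetical revenue shares per model call; the actual weights would be
# governed on-chain, not hard-coded like this.
SHARES = {
    "trainer": Decimal("0.5"),
    "deployer": Decimal("0.3"),
    "validator": Decimal("0.2"),
}

def route_revenue(payment: Decimal) -> dict[str, Decimal]:
    """Split one call's payment among trainers, deployers, and validators."""
    payout = {role: payment * share for role, share in SHARES.items()}
    # Assign any rounding remainder to the trainer so the total is conserved.
    remainder = payment - sum(payout.values())
    payout["trainer"] += remainder
    return payout

print(route_revenue(Decimal("1.00")))
```

Using `Decimal` rather than floats mirrors how on-chain contracts avoid floating-point arithmetic: every split is exact and the payouts always sum to the original payment.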

OML Model Framework

The OML framework (Open, Monetizable, Loyal) is a core concept proposed by Sentient, aimed at providing clear ownership protection and economic incentives for open-source AI models. By combining on-chain technology and AI-native cryptography, it has the following characteristics:

  • Openness: The model must be open source, with code and data structures transparent, facilitating community reproduction, auditing, and improvement.
  • Monetization: Each model call triggers a revenue stream, and the on-chain contract distributes the earnings to the trainers, deployers, and validators.
  • Loyalty: The model belongs to the contributor community, and the direction of upgrades and governance is determined by the DAO, with usage and modifications controlled by cryptographic mechanisms.

AI-native Cryptography

AI-native cryptography exploits the continuity, low-dimensional manifold structure, and differentiability of AI models to build a lightweight security mechanism that is "verifiable but non-removable." Its core techniques are:

  • Fingerprint embedding: insert a set of covert query-response key-value pairs during training to form a unique signature for the model;
  • Ownership verification protocol: a third-party prover issues queries to check whether the fingerprint is retained;
  • Permission calling mechanism: before invocation, the caller must obtain a "permission certificate" issued by the model owner; the system then authorizes the model to decode the input and return the correct answer.

This method enables "behavior-based authorization calls + ownership verification" without the cost of re-encryption.
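The fingerprint flow above can be illustrated with a minimal sketch. This is not Sentient's implementation: in OML the covert query-response pairs are learned into the model weights during training, whereas here a plain dictionary stands in for the fingerprinted model, and the query strings, responses, and match threshold are all illustrative assumptions.

```python
# Hypothetical covert query -> response pairs embedded during training.
# In OML these are baked into the model weights; a dict stands in for the
# fingerprinted model in this sketch.
FINGERPRINTS = {
    "zx-probe-17": "amber-falcon",
    "qq-probe-42": "silent-harbor",
}

def model_answer(query: str) -> str:
    """Stand-in for the fingerprinted model's response to a query."""
    return FINGERPRINTS.get(query, "ordinary output")

def verify_ownership(queries: list[str], expected: list[str],
                     threshold: float = 0.9) -> bool:
    """Prover-side check: do the covert fingerprints survive in the model?"""
    hits = sum(model_answer(q) == e for q, e in zip(queries, expected))
    return hits / len(queries) >= threshold

# A prover holding the secret pairs can confirm ownership:
print(verify_ownership(list(FINGERPRINTS), list(FINGERPRINTS.values())))
```

Because only the owner knows the secret pairs, a copied model still answers the probes correctly, so ownership can be demonstrated by querying alone, without re-encrypting or re-training the model.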

Model Rights Confirmation and Secure Execution Framework

Sentient currently adopts Melange hybrid security: combining fingerprint identification, TEE execution, and on-chain contract revenue sharing. The fingerprint method is implemented through OML 1.0, emphasizing the "Optimistic Security" concept, which assumes compliance by default, with the ability to detect and punish violations.

The fingerprint mechanism is a key implementation of OML, which generates a unique signature during the training phase by embedding specific "question-answer" pairs. Through these signatures, model owners can verify ownership and prevent unauthorized copying and commercialization. This mechanism not only protects the rights of model developers but also provides a traceable on-chain record of the model's usage behavior.

In addition, Sentient has launched the Enclave TEE computing framework, utilizing trusted execution environments (such as AWS Nitro Enclaves) to ensure that models only respond to authorized requests, preventing unauthorized access and use. Although TEE relies on hardware and has certain security risks, its high performance and real-time advantages make it a core technology for current model deployment.

In the future, Sentient plans to introduce zero-knowledge proofs (ZK) and fully homomorphic encryption (FHE) technologies to further enhance privacy protection and verifiability, providing more mature solutions for the decentralized deployment of AI models.
