Decentralized Storage Evolution: From FIL to Walrus - Technological Transformations and Future Challenges
The Development History and Future Prospects of Decentralized Storage
Decentralized storage was once one of the hottest tracks in the blockchain industry. Filecoin, the leading project of the last bull market, at one point exceeded a $10 billion market capitalization, while Arweave, with its pitch of permanent storage, reached a peak of $3.5 billion. However, as the usefulness of cold-data storage came into question, so did the necessity of permanent storage, and the prospects of decentralized storage were cast into doubt.
Recently, the emergence of Walrus has brought new attention to the long-quiet storage sector. The Shelby project, launched in collaboration between Aptos and Jump Crypto, aims to further promote the development of decentralized storage in the hot data storage field. So, can decentralized storage make a comeback and provide solutions for broader application scenarios? Or is this just another round of brief conceptual speculation? This article will analyze the narrative changes of decentralized storage from the development paths of four projects: Filecoin, Arweave, Walrus, and Shelby, exploring the prospects and challenges of its popularization.
Filecoin: Storage in Name, Mining in Substance
Filecoin was one of the early altcoins to rise, and its development direction revolves around decentralization. This is a common trait of early altcoins: searching for the meaning of decentralization in various traditional fields. Filecoin combined storage with decentralization, pointing out the trust risks of centralized data storage providers and proposing a decentralized storage solution.
However, some of what Filecoin sacrificed to achieve decentralization became exactly the pain points that later projects such as Arweave and Walrus tried to address. To understand why Filecoin is essentially just a mining-coin project, one needs to understand the objective limitations of its underlying technology, IPFS, in handling hot data.
IPFS: a decentralized architecture limited by transmission bottlenecks
IPFS (the InterPlanetary File System) was launched around 2015 with the aim of disrupting the traditional HTTP protocol through content addressing. Its biggest flaw is extremely slow retrieval: in an era when traditional data services deliver millisecond-level responses, retrieving a file via IPFS can still take several seconds. This makes it hard to adopt in practical applications and explains why, apart from a few blockchain projects, it is rarely used in traditional industries.
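As a minimal illustration of the content-addressing idea, the sketch below derives an address from the data's hash, so that whatever bytes a node returns can be verified without trusting that node. It uses a bare SHA-256 digest rather than IPFS's actual CID format, purely for clarity.

```python
import hashlib

# Content addressing in miniature: the address is derived from the data itself,
# so a requester can verify whatever bytes a node returns without trusting it.
# (Real IPFS CIDs wrap the hash in a multihash/multibase envelope; this sketch
# uses a bare SHA-256 hex digest for simplicity.)

def content_address(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify(address: str, data: bytes) -> bool:
    return content_address(data) == address

blob = b"hello, decentralized web"
addr = content_address(blob)
assert verify(addr, blob)
print(addr)
```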
The underlying P2P protocol of IPFS is mainly suitable for "cold data" - static content that does not change frequently, such as videos, images, and documents. However, when it comes to handling hot data, such as dynamic web pages, online games, or artificial intelligence applications, the P2P protocol does not have a significant advantage over traditional CDNs.
Although IPFS itself is not a blockchain, its directed acyclic graph (DAG) design aligns closely with many public chains and Web3 protocols, making it naturally suitable as a foundational framework for blockchains. So even without much practical value, it was sufficient as a framework for carrying blockchain narratives. Early copycat projects only needed a runnable framework to paint grand visions, but as Filecoin developed to a certain stage, the limitations inherited from IPFS began to hinder its progress.
The logic of mining coins under the storage cloak
The original intention of IPFS was to let users become part of the storage network as they store data. However, without economic incentives, it is hard for users to adopt the system actively, let alone become active storage nodes. That means most users will only store files on IPFS without contributing their own storage space or storing others' files. It was against this backdrop that Filecoin was born.
In the token economic model of Filecoin, there are mainly three roles: users are responsible for paying fees to store data; storage miners receive token incentives for storing user data; and retrieval miners provide data when users need it and receive incentives.
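The sketch below is a toy model of these three roles and the flow of fees and rewards between them; all amounts and reward rules are illustrative assumptions, not actual Filecoin protocol parameters.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    balance: float = 0.0

def store_deal(user: Participant, storage_miner: Participant, fee: float, block_reward: float):
    # The user pays a fee to have data stored; the storage miner also earns
    # protocol-issued rewards for proving it keeps the data (illustrative values).
    user.balance -= fee
    storage_miner.balance += fee + block_reward

def retrieve(user: Participant, retrieval_miner: Participant, fee: float):
    # Retrieval miners are paid when they actually serve the data back.
    user.balance -= fee
    retrieval_miner.balance += fee

alice = Participant("user", balance=100.0)
sm = Participant("storage miner")
rm = Participant("retrieval miner")
store_deal(alice, sm, fee=5.0, block_reward=2.0)
retrieve(alice, rm, fee=1.0)
print(alice.balance, sm.balance, rm.balance)  # 94.0 7.0 1.0
```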
This model leaves room for abuse. Storage miners can stuff the space they provide with garbage data to collect rewards; since that garbage will never be retrieved, losing it triggers no penalty, which lets miners delete it and repeat the process. Filecoin's proof-of-replication consensus can only guarantee that user data has not been quietly deleted; it cannot stop miners from filling space with garbage.
Filecoin's operation therefore relies largely on miners' continuous investment in the token economy rather than on real end-user demand for distributed storage. Although the project is still iterating, at this stage Filecoin's ecosystem-building fits the "mining-coin logic" better than the "application-driven" definition of a storage project.
Arweave: The Gains and Losses of Long-Termism
If Filecoin's design goal was to build the shell of an incentivized, verifiable, decentralized "data cloud", Arweave goes to the other extreme of storage: providing the capability to store data permanently. Arweave does not attempt to build a distributed computing platform; its entire system revolves around one core assumption - important data should be stored once and kept in the network forever. This extreme long-termism makes Arweave very different from Filecoin in its mechanisms, incentive model, hardware requirements, and narrative.
Arweave takes Bitcoin as its model, attempting to optimize its permanent storage network over a horizon measured in years. Arweave does not care about marketing, nor about competitors or market trends; it simply keeps iterating on its network architecture, indifferent to whether anyone is watching, because that is the essence of the Arweave team: long-termism. Thanks to long-termism, Arweave was highly sought after in the last bull market; and because of long-termism, even after falling to the bottom, Arweave may survive several more bull and bear cycles. But will there be a place for Arweave in the future of decentralized storage? The value of permanent storage can only be proven by time.
The Arweave mainnet has evolved from version 1.5 to the recent 2.9. Although it has lost the market's attention, it has stayed committed to letting a broader range of miners join the network at minimal cost and to incentivizing miners to store as much data as possible, continuously strengthening the robustness of the whole network. Arweave knows full well that it does not match market preferences, so it takes a conservative line: it does not court the miner community, its ecosystem has largely stagnated, and it upgrades the mainnet at minimal cost while steadily lowering hardware barriers without compromising network security.
Review of the upgrade path from 1.5 to 2.9
Arweave version 1.5 exposed a vulnerability that let miners rely on GPU stacking rather than real storage to improve their odds of producing blocks. To curb this trend, version 1.7 introduced the RandomX algorithm, which limits specialized computing power and requires general-purpose CPUs to participate in mining, thereby weakening the centralization of computing power.
In version 2.0, Arweave adopted SPoA, turning data proofs into a succinct Merkle-tree path, and introduced format 2 transactions to reduce synchronization load. This architecture relieves network bandwidth pressure and significantly improves node collaboration. Even so, some miners could still dodge the responsibility of holding real data by relying on centralized high-speed storage pools.
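To make the "succinct Merkle path" idea concrete, here is a minimal sketch of building a Merkle tree over data chunks and verifying a single chunk against the root with only a logarithmic-size path; the hashing and chunk layout are simplified assumptions, not Arweave's actual formats.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _next_level(level):
    if len(level) % 2:
        level = level + [level[-1]]            # duplicate the last node on odd-sized levels
    return [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]

def merkle_root(chunks):
    level = [h(c) for c in chunks]
    while len(level) > 1:
        level = _next_level(level)
    return level[0]

def merkle_proof(chunks, index):
    level = [h(c) for c in chunks]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((level[index ^ 1], index % 2))   # (sibling hash, am I the right child?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(root, chunk, path):
    node = h(chunk)
    for sibling, is_right in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

chunks = [f"chunk-{i}".encode() for i in range(8)]
root = merkle_root(chunks)
proof = merkle_proof(chunks, 5)                      # proof for a single chunk
assert verify(root, chunks[5], proof) and len(proof) == 3
```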
To correct this bias, version 2.4 introduced the SPoRA mechanism, which adds a global index and slow-hash random access, requiring miners to genuinely hold data blocks in order to produce valid blocks and weakening the advantage of stacked computing power at the mechanism level. As a result, miners began to care about storage access speed, driving the adoption of SSDs and other high-speed read/write devices. Version 2.6 introduced a hash chain to pace block generation, balancing the marginal benefit of high-performance hardware and giving small and medium-sized miners fair room to participate.
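The sketch below illustrates the random-access principle behind SPoRA: a challenge derived from recent chain state picks a chunk at random, and a valid solution requires actually reading that chunk. The chunk-selection rule, difficulty check, and use of SHA-256 here are assumptions for illustration, not Arweave's real parameters or hashing scheme.

```python
import hashlib

def pick_chunk_index(prev_block_hash: bytes, nonce: int, num_chunks: int) -> int:
    # The challenge deterministically selects a chunk the miner must read.
    seed = hashlib.sha256(prev_block_hash + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(seed, "big") % num_chunks

def try_solve(prev_block_hash: bytes, nonce: int, local_chunks: dict, num_chunks: int, difficulty: int):
    idx = pick_chunk_index(prev_block_hash, nonce, num_chunks)
    chunk = local_chunks.get(idx)
    if chunk is None:
        return None                       # the miner does not hold the required chunk: no valid proof
    digest = hashlib.sha256(prev_block_hash + nonce.to_bytes(8, "big") + chunk).digest()
    return digest if int.from_bytes(digest, "big") < (1 << 256) // difficulty else None

chunks = {i: f"data-{i}".encode() for i in range(1000)}   # a miner holding every chunk
prev = hashlib.sha256(b"previous block").digest()
wins = sum(1 for nonce in range(10000) if try_solve(prev, nonce, chunks, 1000, difficulty=256))
print(wins)                               # roughly 10000 / 256 nonces succeed
```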
Subsequent versions further strengthen network collaboration capabilities and storage diversity: 2.7 adds collaborative mining and pool mechanisms to enhance the competitiveness of small miners; 2.8 introduces a composite packaging mechanism that allows large-capacity low-speed devices to participate flexibly; 2.9 introduces a new packaging process in replica_2_9 format, significantly improving efficiency and reducing computational dependence, completing the closed loop of data-driven mining models.
Overall, Arweave's upgrade path clearly presents its storage-oriented long-term strategy: while continuously resisting the trend of centralized computing power, it aims to lower the participation threshold and ensure the long-term viability of the protocol.
Walrus: Is Embracing Hot Data Hype or Hidden Potential?
Walrus's design philosophy is completely different from Filecoin's and Arweave's. Filecoin's starting point is to build a decentralized, verifiable storage system, at the cost of being limited to cold data; Arweave's starting point is to build an on-chain Library of Alexandria that stores data permanently, at the cost of having too few use cases; Walrus's starting point is to optimize the storage cost of a protocol for hot data.
A Heavily Modified Error-Correction Code: Cost Innovation or Old Wine in a New Bottle?
In terms of storage cost design, Walrus believes that the storage expenses of Filecoin and Arweave are unreasonable, as both employ a fully replicated architecture. Their main advantage lies in the fact that each node holds a complete copy, providing strong fault tolerance and independence among nodes. This type of architecture ensures that even if some nodes go offline, the network still maintains data availability. However, this also means that the system requires multiple copies for redundancy to maintain robustness, thereby increasing storage costs. Particularly in Arweave's design, the consensus mechanism itself encourages nodes to store redundant data to enhance data security. In contrast, Filecoin is more flexible in cost control, but the trade-off is that some low-cost storage may carry a higher risk of data loss. Walrus attempts to find a balance between the two, controlling replication costs while enhancing availability through structured redundancy, thereby establishing a new compromise between data accessibility and cost efficiency.
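A rough back-of-the-envelope comparison makes the cost argument concrete; the figures below are illustrative assumptions, not the actual parameters of Filecoin, Arweave, or Walrus.

```python
blob_gb = 1.0

# Full replication: every storing node keeps a complete copy of the blob.
full_copies = 10
full_replication_overhead = full_copies * blob_gb      # 10 GB stored per 1 GB of data

# Erasure coding: split into k source shards, expand to n coded shards;
# any k of the n shards are enough to rebuild the blob.
k, n = 10, 45
erasure_overhead = (n / k) * blob_gb                   # 4.5 GB stored per 1 GB of data

print(f"full replication: {full_replication_overhead:.1f}x")
print(f"erasure coding:   {erasure_overhead:.1f}x")
```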
RedStuff, created by Walrus, is the key technology for reducing node redundancy. It is derived from Reed-Solomon (RS) coding. RS coding is a very traditional erasure-code algorithm; an erasure code is a technique that expands a dataset by adding redundant fragments (the erasure code itself), which can then be used to reconstruct the original data. From CD-ROMs to satellite communications to QR codes, it is widely used in everyday life.
Erasure codes let a user take a block, say 1MB in size, and "expand" it to 2MB, where the extra 1MB is special data known as the erasure code. If any byte in the block is lost, it can easily be recovered through the code; even if up to 1MB of the block is lost, the whole block can still be recovered. The same technology lets a computer read all the data on a CD-ROM even when the disc is damaged.
The most commonly used today is RS coding. The method starts with k information blocks, constructs a polynomial from them, and evaluates it at different x-coordinates to obtain the encoded blocks. With RS erasure codes, the probability that the data becomes unrecoverable, even when large chunks are randomly lost, is very small.
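The sketch below shows the principle in miniature: k data symbols become the coefficients of a polynomial, the polynomial is evaluated at n points to produce encoded shards, and any k of those shards recover the data via Lagrange interpolation. It works over a prime field for readability; real RS implementations typically use GF(2^8) with optimized arithmetic.

```python
P = 2**31 - 1   # a prime modulus (illustrative choice)

def encode(data_symbols, n):
    # Evaluate the polynomial whose coefficients are the data symbols at x = 1..n.
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(data_symbols)) % P)
            for x in range(1, n + 1)]

def poly_mul(a, b):
    # Multiply two polynomials given as low-to-high coefficient lists, mod P.
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def decode(points, k):
    # Lagrange interpolation: rebuild the k coefficients from any k encoded points.
    xs, ys = zip(*points[:k])
    coeffs = [0] * k
    for j in range(k):
        numer, denom = [1], 1
        for m in range(k):
            if m == j:
                continue
            numer = poly_mul(numer, [-xs[m] % P, 1])   # multiply by (x - x_m)
            denom = denom * (xs[j] - xs[m]) % P
        scale = ys[j] * pow(denom, -1, P) % P
        for i, c in enumerate(numer):
            coeffs[i] = (coeffs[i] + scale * c) % P
    return coeffs

data = [42, 7, 19, 3]                                      # k = 4 information symbols
shards = encode(data, n=8)                                 # 8 encoded shards
surviving = [shards[1], shards[4], shards[6], shards[7]]   # any 4 of the 8 suffice
assert decode(surviving, k=4) == data
```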
What is RedStuff's biggest feature? By improving the erasure-coding algorithm, Walrus can quickly and robustly encode unstructured data blobs into smaller shards, which are distributed and stored across a network of storage nodes. Even if up to two-thirds of the shards are lost, the original blob can be quickly reconstructed from the remaining shards, while keeping the replication factor at only 4x to 5x.
It is therefore reasonable to define Walrus as a lightweight redundancy and recovery protocol redesigned for a decentralized setting. Compared with traditional erasure codes (such as Reed-Solomon), RedStuff no longer pursues strict mathematical consistency; instead it makes pragmatic trade-offs around data distribution, storage verification, and computational cost. This model abandons the immediate decoding required by centralized scheduling and instead verifies via on-chain proofs whether nodes hold specific data replicas, adapting to a more dynamic, edge-oriented network structure.
The core design of RedStuff is to split data into two categories: primary slices and secondary slices. Primary slices are used to recover the original data, and their generation and distribution are subject to strict constraints, with a recovery threshold of f+1, requiring 2f+1 signatures as availability endorsement. Secondary slices are generated through simple operations such as XOR combinations, serving to provide elastic fault tolerance and enhance the overall robustness of the system. This structure essentially reduces the requirements for data consistency - allowing different nodes to temporarily store different versions of data.
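As a loose sketch of the primary/secondary split, the code below treats primary slices as plain data segments and builds secondary slices as cheap XOR parities over them. This is not RedStuff's actual two-dimensional encoding or its signature scheme; the f+1 slice count and the XOR pattern are simplifying assumptions used only to illustrate the layered redundancy.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_primary(blob: bytes, count: int):
    size = -(-len(blob) // count)                      # ceiling division
    return [blob[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(count)]

f = 3                                                  # tolerated faulty nodes (assumed)
n = 3 * f + 1                                          # total storage nodes (assumed n = 3f + 1)
blob = b"hot data served by a decentralized storage network"

primary = split_primary(blob, f + 1)                   # recovery threshold of f + 1 slices
secondary = [xor_bytes(primary[i], primary[(i + 1) % len(primary)])
             for i in range(len(primary))]             # XOR parities add elastic fault tolerance

# A lost primary slice can be rebuilt from one parity plus a neighboring slice:
rebuilt = xor_bytes(secondary[0], primary[1])
assert rebuilt == primary[0]
print(f"nodes: {n}, primary slices: {len(primary)}, secondary slices: {len(secondary)}")
```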