
Throughput refers to the amount of “work” a blockchain network can process within a given timeframe, typically measured as transactions per second (TPS) or the computational capacity handled per second. It is a key factor determining whether transactions will queue up and whether fees will spike during periods of congestion.
Think of a blockchain like a highway: the more lanes and the faster the toll booths, the more cars can pass through per unit of time. Higher throughput means users experience shorter wait times and less fee volatility during peak periods. When throughput is limited, popular activities such as DeFi transactions or NFT minting can back up and take longer to confirm.
There are two main methods for measuring throughput. The first is TPS, or the number of transactions confirmed per second. While intuitive, this metric can be misleading since different transactions vary in complexity—simply counting transactions does not reflect true network capacity.
The second method uses “gas” as a unit to measure computational throughput. Gas can be thought of as “computational bandwidth”; each operation consumes a different amount of gas. Each block has a gas limit, so dividing the block’s gas limit by block time yields the average gas processed per second. Measuring by gas provides a standardized way to compare operations with varying complexity.
Some also use “data byte throughput” (bytes processed per second) to assess block space utilization, especially in scenarios involving large-scale on-chain data storage. In practice, a comprehensive analysis combines TPS, gas, and byte-based perspectives.
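As a quick illustration of how these three views relate, the sketch below computes TPS, gas per second, and bytes per second from the same block parameters. The figures are illustrative placeholders (roughly Ethereum-mainnet-like), not measurements of any specific network.

```python
# Illustrative block parameters (roughly Ethereum-mainnet-like placeholders);
# substitute real block data to measure an actual network.
block_time_s = 12            # average seconds between blocks
txs_per_block = 150          # transactions included in a typical block
gas_limit = 30_000_000       # maximum gas a block may consume
block_size_bytes = 100_000   # serialized size of the block's data

tps = txs_per_block / block_time_s                  # transactions per second
gas_per_second = gas_limit / block_time_s           # "computational bandwidth"
bytes_per_second = block_size_bytes / block_time_s  # data throughput

print(f"TPS:            {tps:.1f}")
print(f"Gas per second: {gas_per_second:,.0f}")
print(f"Bytes/second:   {bytes_per_second:,.0f}")
```

Note that the gas and byte figures are set by protocol parameters, while TPS swings with how complex the included transactions happen to be, which is exactly why TPS alone can mislead.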
Throughput measures “how much can be processed per unit of time,” while latency focuses on “how long it takes for a single transaction to be confirmed from submission.” The two are related but distinct: a chain may have stable latency but low throughput, or high throughput but batch confirmations that delay individual transactions.
In blockchain terminology, “finality” is also crucial—it refers to the time until a transaction is irreversibly confirmed. Some networks produce blocks quickly but have a nonzero probability of rollback in the short term; others offer stronger finality guarantees. To fully evaluate user experience, you should consider throughput, latency, and finality together.
Key factors affecting throughput include block time, block capacity (or gas limit), network propagation speed, and node hardware performance.
Approaches to boosting throughput generally fall into two categories: on-chain scaling and off-chain load migration with subsequent settlement.
Direct scaling involves increasing block capacity or shortening block time. These changes can quickly improve throughput but may raise hardware requirements for nodes, risking reduced decentralization.
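To see why these levers work, here is a toy comparison using the same illustrative baseline as above: doubling block capacity or halving block time each doubles the gas processed per second, but both increase the bandwidth and hardware burden on full nodes.

```python
# Toy comparison of the two direct scaling levers, using illustrative figures.
def gas_per_second(gas_limit: int, block_time_s: float) -> float:
    return gas_limit / block_time_s

baseline      = gas_per_second(30_000_000, 12)  # ~2.5M gas/s
bigger_blocks = gas_per_second(60_000_000, 12)  # double block capacity -> ~5.0M gas/s
faster_blocks = gas_per_second(30_000_000, 6)   # halve block time      -> ~5.0M gas/s

print(baseline, bigger_blocks, faster_blocks)
```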
The next major approach is Layer 2 (L2) networks. An L2 can be understood as aggregating multiple transactions off-chain and then submitting the results to the main chain. Popular implementations include rollup solutions such as optimistic rollups and zero-knowledge (ZK) rollups.
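The throughput gain comes from amortization: many L2 transactions share the cost of one mainnet submission. The sketch below uses purely illustrative figures (not any specific rollup's parameters) to show the effect.

```python
# Toy rollup economics: many L2 transactions share the cost of one L1 submission.
# All figures are illustrative assumptions, not any real rollup's parameters.
l1_batch_cost_gas = 200_000    # gas the rollup pays to post one batch on the mainnet
txs_per_batch = 1_000          # L2 transactions compressed into that batch
l1_transfer_cost_gas = 21_000  # gas for a plain transfer done directly on L1

amortized_gas_per_tx = l1_batch_cost_gas / txs_per_batch
print(f"Mainnet gas per L2 transaction: ~{amortized_gas_per_tx:.0f}")   # ~200
print(f"Plain transfer directly on L1:  {l1_transfer_cost_gas} gas")    # 21,000
```

Execution happens off-chain; only compressed data or proofs land on the mainnet, which is why L2 throughput multiplies while remaining ultimately bounded by mainnet capacity.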
Sharding distributes overall network load across multiple parallel shard chains, reducing pressure on any single chain.
Parallel execution allows simultaneous processing of non-conflicting transactions, raising single-node throughput. Coupled with more efficient storage and networking protocols, this yields substantial improvements.
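As a rough sketch of the idea (not any production scheduler), the snippet below groups transactions into parallel batches by checking that their read/write sets do not overlap; transactions that touch the same state must wait for a later batch.

```python
# Toy parallel-execution scheduler: group transactions into batches that can
# run concurrently because their state accesses do not conflict.
from dataclasses import dataclass, field

@dataclass
class Tx:
    tx_id: str
    reads: set = field(default_factory=set)   # accounts / storage slots read
    writes: set = field(default_factory=set)  # accounts / storage slots written

def conflicts(a: Tx, b: Tx) -> bool:
    # Two transactions conflict if one writes state the other reads or writes.
    return bool(a.writes & (b.reads | b.writes) or b.writes & a.reads)

def schedule(txs):
    batches = []  # each batch holds mutually non-conflicting transactions
    for tx in txs:
        for batch in batches:
            if not any(conflicts(tx, other) for other in batch):
                batch.append(tx)
                break
        else:
            batches.append([tx])
    return batches

# t1 and t2 touch disjoint state and share a batch; t3 reads state written by
# t1, so it is pushed to the next batch.
txs = [
    Tx("t1", reads={"A"}, writes={"A", "B"}),
    Tx("t2", reads={"C"}, writes={"C", "D"}),
    Tx("t3", reads={"B"}, writes={"E"}),
]
for i, batch in enumerate(schedule(txs), 1):
    print(f"batch {i}: {[t.tx_id for t in batch]}")
```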
As of the second half of 2025, Ethereum mainnet maintains robust baseline throughput, prioritizing security and decentralization; major industry improvements come from Layer 2 solutions. With upgrades focused on data availability, L2s see reduced batch costs and increased bandwidth—practical throughput now commonly reaches hundreds or thousands of transactions per second during peak times (see various L2 official dashboards and community trackers for reference as of late 2025).
This means bulk operations on mainnet may still queue during congestion, while high-frequency activities routed through L2s balance cost and speed effectively. For most users, choosing an optimal L2 network greatly improves confirmation experiences.
Layer 2 networks increase throughput but introduce new trade-offs. The main considerations are whether the sequencer (the entity that orders transactions) is decentralized, the risk of downtime, and how assets are bridged to and from the mainnet, including the associated finality delays.
When assessing an L2 solution, you should examine throughput alongside downtime history, data availability commitments, and withdrawal processes.
To incorporate throughput considerations into deposits, withdrawals, and on-chain interactions on Gate, keep the following points in mind:
Tip: Asset transfers carry risk. Before switching to new networks, test addresses and workflows with small amounts; for cross-chain or withdrawal actions always verify contract addresses and official channels to avoid phishing links.
You can combine observation with small-scale practical tests to get direct insights without disrupting the network.
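For the observation side, a minimal sketch using the web3.py library is shown below; the RPC endpoint URL is a placeholder you would replace, and the 90% threshold is an arbitrary example. It reads the latest block's gas usage as a rough congestion signal before you commit to a deposit or withdrawal.

```python
# Rough congestion check before sending a transaction. Requires `pip install web3`
# and a JSON-RPC endpoint of your own (the URL below is a placeholder).
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://YOUR-RPC-ENDPOINT"))

block = w3.eth.get_block("latest")
fullness = block.gasUsed / block.gasLimit   # >0.5 means the base fee is rising (EIP-1559 targets half-full blocks)
base_fee_gwei = block.baseFeePerGas / 1e9   # current base fee, in gwei

print(f"Block {block.number}: {fullness:.0%} full, base fee ~{base_fee_gwei:.1f} gwei")
if fullness > 0.9:
    print("Network looks congested; consider waiting or raising your fee.")
```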
Throughput determines how much work a chain can process per unit time—directly impacting fees and wait times. Measurement should consider both TPS and gas metrics alongside latency and finality. Bottlenecks include block time, block capacity, network propagation speed, and execution/storage overhead. Scalability approaches range from direct expansion to Layer 2 solutions, sharding, and parallel execution—but all require balancing security with decentralization. In practice, consult real-time on-chain metrics when choosing networks for deposits/withdrawals or major events; strategically timing actions helps minimize costs and waiting risks.
Low throughput means the blockchain can only process a limited number of transactions per second—when the network is busy your transaction may queue up and wait. This leads to slower confirmations and potentially higher gas fees. For example, Bitcoin handles only about seven transactions per second; during peak times you could wait hours for inclusion.
High throughput is just a technical metric—actual network adoption requires strong ecosystem applications. Some chains can process thousands of transactions per second but lack quality DApps, liquidity, or an active user base; speed alone doesn’t drive usage. Throughput is necessary for robust public chains but not sufficient by itself.
It depends on your use case. For large asset transfers, prioritize security (choose chains like Bitcoin or Ethereum mainnet) since security breaches are irreversible; for everyday small transactions or DApp interactions, high-throughput chains (such as Arbitrum or Optimism) offer faster confirmations. Gate supports leading public chains so you can select flexibly based on your needs.
Layer 2 solutions dramatically boost throughput (often 100–1000x), but not infinitely. They speed things up by aggregating transactions off-chain before regularly submitting summaries to the mainnet. Ultimately throughput is capped by mainnet capacity—and you must balance scalability with security and decentralization.
Not always. Slow transaction processing can be due to: network congestion hitting throughput limits (most common), low gas fee bids lowering transaction priority, or node sync delays. Monitor real-time network congestion and adjust your gas fees accordingly; Gate’s trading system provides current network status prompts so you can make informed decisions.


