Blockchain transactions per second (TPS) numbers are often treated as a performance gauge, but they don’t tell the full story of whether a network can scale in practice.
Carter Feldman, founder of Psy Protocol and a former hacker, told Cointelegraph that TPS figures are often misleading because they ignore how transactions are actually verified and relayed across decentralized systems.
“Many pre-mainnet, testnet or isolated benchmarking tests measure TPS with only one node running. At that point, you might as well call Instagram a blockchain that can hit 1 billion TPS because it has one central authority validating every API call,” Feldman said.
Part of the issue is how most blockchains are designed. The faster they try to go, the heavier the load on every node and the harder decentralization becomes. That burden can be reduced by separating transaction execution from verification.
TPS numbers ignore the cost of decentralization
In principle, TPS is a valid benchmark for blockchain performance: a network that can process more transactions per second can handle more real usage.
But Feldman argued most headline TPS figures represent ideal settings that don’t translate to real-world throughput. The impressive numbers don’t show how the system performs under decentralized conditions.
“The TPS of a virtual machine or a single node is not a measure of a blockchain’s real mainnet performance,” said Feldman.
“However, the number of transactions per second a blockchain can process in a production environment is still a valid way to quantify how much usage it can handle, which is what scaling should mean.”
Every full node in a blockchain must check that transactions follow the protocol’s rules. If one node accepts an invalid transaction, others should reject it. That’s what makes a decentralized ledger work.
Related: Firedancer will speed up Solana, but it won’t reach full potential
Benchmarks often capture only how fast a virtual machine executes transactions. In the real world, bandwidth, latency and network topology also matter, so performance depends on how transactions are received and verified by other nodes across the network.
As a result, TPS figures published in white papers often diverge from mainnet performance. Benchmarks that isolate execution from relay and verification costs measure something closer to virtual machine speed than blockchain scalability.
EOS, a network on which Feldman formerly served as a block producer, smashed initial coin offering records in 2018. Its white paper suggested a theoretical ceiling of around 1 million TPS, an eye-popping figure even by 2026 standards.
EOS never reached that target. Earlier reports claimed it could hit around 4,000 TPS under favorable settings, but research by blockchain testers at Whiteblock found that, under realistic network conditions, throughput fell to roughly 50 TPS.
In 2023, Jump Crypto demonstrated 1 million TPS in a test of its Solana validator client, Firedancer, hitting the figure EOS never reached. The client has since been rolling out, with many validators running a hybrid version known as Frankendancer. In live conditions today, Solana typically processes around 3,000-4,000 TPS. Roughly 40% of those are non-vote transactions, the kind that better reflect actual user activity, which puts user-driven throughput closer to 1,200-1,600 TPS.

Breaking the linear scaling problem
Blockchain throughput usually scales linearly with workload. More transactions reflect more activity, but they also mean every node must receive and verify more data.
Each additional transaction adds computational burden. At some point, bandwidth limits, hardware constraints and synchronization delays make further increases unsustainable without sacrificing decentralization.
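A rough back-of-the-envelope sketch makes the arithmetic concrete. The per-transaction costs below are illustrative assumptions, not measurements from any real network; the point is only that when every node verifies everything, per-node work grows in lockstep with TPS.

```python
# Toy model of the linear scaling problem: in a traditional design,
# every full node re-verifies every transaction, so per-node work
# grows in lockstep with TPS. All figures are illustrative assumptions.

VERIFY_MS_PER_TX = 0.05   # assumed CPU cost to verify one transaction (ms)
TX_BYTES = 250            # assumed average transaction size on the wire

def per_node_load(tps: float) -> tuple[float, float]:
    """Return (CPU-seconds per second, bandwidth in MB/s) one node spends."""
    cpu = tps * VERIFY_MS_PER_TX / 1000
    bandwidth_mb = tps * TX_BYTES / 1_000_000
    return cpu, bandwidth_mb

for tps in (100, 1_000, 10_000, 100_000):
    cpu, bw = per_node_load(tps)
    print(f"{tps:>7} TPS -> {cpu:.2f} CPU-sec/sec, {bw:.2f} MB/s per node")

# At 100,000 TPS this toy model already demands 5 CPU-seconds of
# verification work per wall-clock second on *every* node -- the point
# at which decentralization starts to suffer.
```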
Feldman said that overcoming this constraint requires rethinking how validity is proven, which can be done through zero-knowledge (ZK) technology. ZK is a way to prove that a batch of transactions was processed correctly without making every node run those transactions again. Because it allows validity to be proven without revealing all underlying data, ZK is often pushed as a solution to privacy issues.
Related: Privacy tools are rising behind institutional adoption, says ZKsync dev
Feldman argues that it can ease the scaling burden as well via recursive ZK-proofs. In simple terms, that refers to proofs verifying other proofs.
“It turns out that you can take two ZK-proofs and generate a ZK-proof that proves that both of these proofs are correct,” Feldman said. “So, you can take two proofs and make them into one proof.”
“Let’s say we start with 16 users’ transactions. We can take those 16 and make them into eight proofs, then we can take the eight proofs and make them into four proofs,” Feldman explained while sharing a graphic of a proof tree where multiple proofs ultimately become one.
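For illustration, here is a toy Python sketch of that proof tree. It is not a real ZK system; the "proofs" are stand-in hashes, so it models only the shape of the pairwise folding Feldman describes, not the cryptography. Sixteen leaves collapse level by level into a single root proof.

```python
# Toy illustration of the proof tree Feldman describes: pairs of proofs
# are repeatedly merged until a single root proof remains. Real recursive
# ZK systems do heavy cryptography at each merge; here a "proof" is just
# a hash, so this models the tree's shape, not its security.
import hashlib

def merge(a: str, b: str) -> str:
    """Stand-in for generating a proof that proofs a and b are both valid."""
    return hashlib.sha256((a + b).encode()).hexdigest()

def aggregate(proofs: list[str]) -> str:
    """Fold one level at a time: 16 -> 8 -> 4 -> 2 -> 1.
    Assumes a power-of-two number of leaves for simplicity."""
    while len(proofs) > 1:
        proofs = [merge(proofs[i], proofs[i + 1])
                  for i in range(0, len(proofs), 2)]
        print(f"level merged, {len(proofs)} proofs remain")
    return proofs[0]

# 16 transaction proofs collapse into one root proof; a verifier checks
# that single proof instead of re-executing all 16 transactions.
leaves = [hashlib.sha256(f"tx-{i}".encode()).hexdigest() for i in range(16)]
root = aggregate(leaves)
print("root proof:", root[:16], "...")
```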

In traditional blockchain designs, increasing TPS raises verification and bandwidth requirements for every node. Feldman argues that with a proof-based design, throughput can increase without proportionally increasing per-node verification costs.
That does not mean ZK eliminates scaling tradeoffs entirely. Generating proofs can be computationally intensive and may require specialized infrastructure. While verification becomes cheap for ordinary nodes, the burden shifts to provers that must perform heavy cryptographic work. Retrofitting proof-based verification into existing blockchain architectures is also complex, which helps explain why most major networks still rely on traditional execution models.
Performance beyond raw throughput
TPS is not useless, but it is conditional. According to Feldman, raw throughput figures are less meaningful than economic signals such as transaction fees, which provide a clearer indicator of network health and demand.
“I would contend that TPS is the number two benchmark of a blockchain’s performance, but only if it is measured in a production environment or in an environment where transactions are not just processed but also relayed and verified by other nodes,” he said.

The industry's dominant designs have also shaped where investment flows. Chains modeled around sequential execution can't easily bolt on proof-based verification without redesigning how transactions are processed.
“In the very beginning, it was almost impossible to raise money for anything but a ZK EVM [Ethereum Virtual Machine],” Feldman said, explaining Psy Protocol’s former funding issues.
“The reason people didn’t want to fund it in the beginning is that it took a while,” he added. “You can’t just fork EVMs or their state storage because everything is done completely differently.”
In most blockchains, higher TPS means more work for every node. A headline figure alone does not show whether that workload is sustainable.
Magazine: Ethereum’s roadmap to 10,000 TPS using ZK tech: Dummies’ guide