
Scaling at Stanford: Transistors for Mining and Processing One Gigabyte Blocks

Last weekend the fourth Scaling Bitcoin Workshop took place at Stanford, California. We highlight two of the presentations: one explored the relationship between mining, hardware, and scalability; the other reported on an experiment processing gigabyte-sized blocks with today’s software and hardware.

Scaling Bitcoin Workshop

There should be little doubt that the Scaling Bitcoin Workshop is the most sophisticated Bitcoin conference. There are no company exhibitions; the conference targets the academic community, which can discuss Bitcoin’s scalability without being interrupted by business and politics.

The Scaling Bitcoin Workshop is hosted on a different continent each year. This time it took place on the Stanford university campus, but the 25 presentations are available worldwide as live streams and video recordings.

Compared with the last workshop, the content has evolved. While topics like onchain scaling and hard forks were more or less taboo last year in Milan, Stanford saw some interesting presentations on these issues.

However, the upcoming 2x hard fork was not a topic, as most participants considered it to have only a small impact on the bigger picture and wanted to avoid the politics and toxicity that come with the fork. As in Milan, issues like offchain scaling and privacy in the context of scaling were hot topics covered by several presentations. Unlike the previous workshops, Stanford also gave room for some talks about mining.

On the Scaling Bitcoin website you can find videos of the 25 presentations. They are a gift for everybody who wants to learn more about Bitcoin and dive into the technological mysteries of scaling and privacy.

Whatever the topic – Lightning Network, other payment channels, confidential transactions, mining, hard forks, smart contracts, atomic swaps, block propagation – you will strike gold.

We pick out two presentations to write about: Chen Min’s talk about hardware and scaling, and Peter Rizun and Andrew Stone’s presentation of the Gigablock Testnet experiment. The selection is purely subjective; if you consider other topics, like payment channels or privacy, more interesting, we urge you to listen to the presentations yourself.

5 Billion Dollars Are Invested in Mining Each Year

Chen Min is the chip architect of Canaan, a Chinese company that, under the name Avalon, was one of the first to produce ASIC chips for Bitcoin mining.

Min talks about several ideas on how the protocol, hardware, and scalability are connected. She compares the reward system of blockchains with a planned economy, in which a set of rules defines how the game is played. “In the Bitcoin system, the rules are very clear: There is a reward for mining and a 1 MB limit, with the result that it is extremely secure, but has low performance.”

In her presentation, Min responds to the increasingly voiced accusation that miners are attacking Bitcoin. She takes this seriously and asks whether there are incentives in the protocol that turn miners into attackers, and if so, how this can be improved.

First, she makes a staggering observation: Every year around $5 billion is invested in Bitcoin mining, which represents around 5 percent of global chip production. The economic consequences are tremendous:

“Energy prices go up, because it is more profitable to sell energy to miners. Ethereum mining drives up DRAM prices; if I want to buy a gaming computer, I will pay about $500 extra driven by mining demand.”

But this is not the most important consequence of these enormous investments in mining hardware. “Think of it: If we invested this in network bandwidth or storage systems to scale up Bitcoin, it would be enough. The blocksize is now 1 MB. If you invested $5 billion to enhance the network, you could process 1 TB blocks, or handle 10 billion transactions each second.”

Instead, $5 billion is spent on building mountains of ASIC chips in data centers. Thanks to pool mining, this hardware does not even process transactions. “We waste money. The investment in mining does not improve performance,” Min concludes.

Does this make hardware an attacker? Does mining drain resources from the system, resources which could otherwise be used to make the network better?

“Hardware is not, and cannot be, an attacker. Hardware is defined by your protocol. If you design hardware for a contributor role, it can be a contributor; if you design hardware as an attacker, it can only be an attacker.” She reminds us of the telecommunication companies that began wiring the world with fiber around 20 years ago. They have become extremely powerful – but they contribute something: performance for telecommunications. A good protocol makes it more profitable to be a contributor instead of an attacker.

Actually, this was the idea of Bitcoin: Nodes get a reward for their contribution to securing the network. This is still true, and hashrate still secures the network. But the problem is, it doesn’t contribute to performance. Thanks to pool mining, miners don’t process transactions and, in part, work against the system when they mine empty blocks to reduce orphan rates or keep the limit at 1 MB to profit from higher fees. At the same time, the non-mining nodes contribute a lot of work without getting a reward for it.

These incentives, Min says, are the key to long-term scalability. She proposes a “proof of contribution”, in which actors get rewarded only when they contribute to the system.

The Gigablock Testnet

The presentation by Bitcoin Unlimited’s Peter Rizun and Andrew Stone was interesting for one specific reason: It addressed a long-standing paradox of the scaling debate.

The Bitcoin community has discussed how to scale Bitcoin for years. A number of reasons are voiced as to why you can’t scale Bitcoin, and there are requests from all sides to do more science and make less noise.

However, Bitcoin’s onchain scalability is still one of the least researched topics. There are thousands of scientific papers about Bitcoin. A lot of them are about privacy and offchain payment channels. But if you ask how much capacity Bitcoin can actually process, you always end up with one single paper, “On Scaling Decentralized Blockchains”.

This paper was released in early 2016. It assumes outdated technology and is based purely on estimates rather than empirical experimental data.

At Stanford, Peter Rizun and Andrew Stone present the first scientific experiments on onchain scaling. “Many people want the limit lifted, but are afraid that Bitcoin would lose important properties,” Rizun explains. However, he believes that “Bitcoin is designed to be highly scaled. All of the technology is available for massive scale.”

This means that if you assume that 4 billion people on earth each send one transaction a day, you need about 50,000 transactions each second. “We wanted to test this, in a global network of nodes, using standard software. This is the Gigablock Testnet.” The nodes each have a 4-core CPU, 16 gigabytes of memory, and an SSD hard drive. Good machines, but nothing special. Four to six miners build blocks, while twelve nodes use Python scripts to generate and propagate transactions.
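A quick back-of-the-envelope check of that figure (the arithmetic below is our own illustration, not taken from the talk):

# Throughput implied by Rizun's assumption:
# 4 billion users, each sending one transaction per day.
users = 4_000_000_000
tx_per_user_per_day = 1
seconds_per_day = 24 * 60 * 60  # 86,400

required_tps = users * tx_per_user_per_day / seconds_per_day
print(f"{required_tps:,.0f} tx/s")  # ~46,296 tx/s, i.e. roughly 50,000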

Over the last two months, Rizun, Stone, other Bitcoin Unlimited developers, and scientists from the University of British Columbia ramped up the number of transactions in the Gigablock Testnet. They started with 1 transaction per second and pushed it up to 500.
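The talk does not show the generator scripts themselves, so the following is only a minimal sketch of what such a rate-controlled generator might look like, assuming the python-bitcoinrpc library and a node exposing the standard JSON-RPC interface; the credentials and the target rate are placeholders:

import time
from bitcoinrpc.authproxy import AuthServiceProxy  # pip install python-bitcoinrpc

RPC_URL = "http://user:password@127.0.0.1:8332"  # placeholder credentials
TARGET_TPS = 100                                 # ramped up step by step

def generate_transactions(target_tps, duration_s=60):
    """Send tiny self-payments at roughly target_tps for duration_s seconds."""
    rpc = AuthServiceProxy(RPC_URL)
    address = rpc.getnewaddress()
    interval = 1.0 / target_tps
    deadline = time.time() + duration_s
    sent = 0
    while time.time() < deadline:
        rpc.sendtoaddress(address, 0.0001)  # minimal payment back to ourselves
        sent += 1
        time.sleep(interval)  # RPC latency is ignored, so the real rate is a bit lower
    return sent

if __name__ == "__main__":
    print(generate_transactions(TARGET_TPS), "transactions sent")

In the experiment, twelve such generator nodes ran in parallel, which is how the aggregate rate could be pushed up to several hundred transactions per second.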

“The mempool was keeping up with transactions,” Rizun says, “but when we reached 100 tx/sec, mempool acceptance could not keep up. So we found bottleneck number one. Mempool coherency went down.”
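“Mempool coherency” here refers to how closely the nodes’ mempools track one another. One crude way to observe it is to poll the mempool size on every node and watch the spread; the sketch below is again only illustrative, with the node addresses as placeholders:

from bitcoinrpc.authproxy import AuthServiceProxy

# Placeholder RPC endpoints for a few of the testnet nodes
NODES = [
    "http://user:password@node1:8332",
    "http://user:password@node2:8332",
    "http://user:password@node3:8332",
]

def mempool_sizes(nodes):
    """Return the mempool transaction count reported by each node."""
    return [int(AuthServiceProxy(url).getmempoolinfo()["size"]) for url in nodes]

sizes = mempool_sizes(NODES)
# If acceptance keeps up, the counts stay close together; a widening spread
# means some nodes are falling behind, i.e. coherency is going down.
print(sizes, "spread:", max(sizes) - min(sizes))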

The reason for the bottleneck, Rizun explains, was not the CPU, which ran at only 25 percent of its capacity. “The bottleneck was the single-threaded mempool acceptance code path. Andrew Stone parallelized mempool acceptance. When testing again, the rate was pretty good.”
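Bitcoin Unlimited’s actual change lives in the node’s C++ code, but the idea can be sketched in a few lines: instead of validating incoming transactions one after another on a single thread, independent transactions are checked concurrently by a pool of workers. This is purely a conceptual illustration with a stand-in validate function:

from concurrent.futures import ThreadPoolExecutor

def validate(tx):
    """Stand-in for signature, script, and mempool policy checks."""
    return True  # real validation is CPU-heavy work inside the node

def accept_to_mempool_parallel(transactions, workers=4):
    """Validate independent transactions concurrently instead of sequentially."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(validate, transactions))
    # Only transactions that pass validation are admitted to the mempool.
    return [tx for tx, ok in zip(transactions, results) if ok]

(In CPython, CPU-bound work would need processes rather than threads to gain real parallelism; the node’s native threads have no such restriction, but the structure of the change is the same.)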

The next bottleneck was found at around 500 transactions per second, where block propagation time approached the block interval of 10 minutes. With xthin block propagation, the block size limit is around 1 gigabyte.
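To relate those numbers (our own rough arithmetic, assuming an average transaction size of about 500 bytes):

# Block size implied by a sustained transaction rate over a 10-minute block interval.
AVG_TX_SIZE = 500     # bytes, assumed average; real transactions vary
BLOCK_INTERVAL = 600  # seconds

def block_size_mb(tps):
    return tps * BLOCK_INTERVAL * AVG_TX_SIZE / 1e6

print(block_size_mb(500))   # ~150 MB blocks at 500 tx/s
print(block_size_mb(3000))  # ~900 MB, approaching the ~1 GB ceiling Rizun mentions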

Rizun admits that they did not test the impact of hard drives and the UTXO set. It is possible that in longer-term tests these factors pull the bottleneck down to a lower number of transactions. The developers want to research these factors in future tests. For now, they are able to provide the best data points for making assumptions about Bitcoin’s real onchain capacity.


Source: BTCManager.com
