In the midst of Bitcoin’s price rise and the dramatic flash crash of The DAO and Ether, Bitcoin Core developer Gavin Andresen took to Reddit to voice his support for removing the blocksize limit entirely.
Blocksize Increase: ‘Nothing Bad Will Happen’
The comments were made in response to concerns raised by longstanding r/btc subreddit user u/Pool30 that an increase to 2MB would be insufficient.
“I believe the network will eventually have so many problems, that an increase in blocksize will happen. But 2MB is not enough, lets push for 8MB or 20MB instead,” u/Pool30 wrote.
“Yes, let’s eliminate the limit. Nothing bad will happen if we do,” Andresen replied.
“And if I’m wrong, the bad things would be mild annoyances, not existential risks, much less risky than operating a network near 100% capacity,” he added.
The exchange comes amid increasing speculation about the relationship between the blocksize limit and Bitcoin’s price, as well as between the prices of Bitcoin and Ether. The situation may well change again following today’s suspected attack on The DAO, with rumors that its entire Ether holdings could ultimately be drained by hackers.
Asked whether he preferred the doubling to 2MB advocated by the Bitcoin Classic community or removing the cap altogether, Andresen replied that the distinction was not decisive.
“Either would be fine,” he said.
Andresen has been at the center of criticism throughout the community in recent months. Following Craig Wright’s fraudulent claims of being the creator of Bitcoin, Andresen’s voicing of support for Wright led to widespread criticism and even to his GitHub commit access being revoked.
‘Trust Smart Developers’
In a blog post at the end of May, Andresen described what he termed “Bitcoin protocol role models” after a recent debate with a Core community member over the block size limit.
Andresen pointed to an analogous limit elsewhere – in this case, the size of the Internet’s global routing table under the Border Gateway Protocol (BGP) – arguing that it came about organically, not arbitrarily.
“There is no place in any BGP specification I can find that says ‘Routing Tables Shall Be No More Than Eleven Gigabytes Big,’” he wrote.
“There are limits on routing table sizes, but they are not top-down-specified-in-a-standards-document protocol limits. They are organic limits that arise from whatever hardware is available and from the (sometimes very contentious!) interaction of the engineers keeping the Internet backbone up and running.”
He added that he had not been able to find a widely used internet protocol that arbitrarily limits itself.
“Trust that smart developers will fix scaling or denial-of-service issues as they arise,” he wrote.
What do you think about the idea of no blocksize limit? Let us know in the comments section below.
Images courtesy of vox.com, ibtimes.co.uk, wired.com