"IPv6 penetration added half a percent of the Internet's users each year over the past two decades." Iljitsch van Beijnum has captured the status in a great headline. There is no unifying force that implements Internet rules; each network in the network of networks takes its own time. 

While the world average is about 10%, Google's data finds the U.S. at 25%. 

Van Beijnum sees the primary holdback: "We need to upgrade all servers, all clients, all routers, all firewalls, all load balancers, and all management systems to IPv6 before we can retire IPv4 and thus free ourselves of its limitations.... So even though all our operating systems and nearly all network equipment supports IPv6 today (and has for many years in most cases), as long as there's just one device along the way that doesn't understand the new protocol—or its administrator hasn't gotten around to enabling it—we have to keep using IPv4."

Leslie Daigle, former Chief Internet Technology Officer for the Internet Society, says IPv6's lack of backward compatibility with the current IPv4 protocol was the single critical failure. Vint Cerf emailed me related thoughts:

"No matter what, any change to the IPv4 format would have required changes to ALL router and host software. Every router and host would have to change to recognize the new IPv6 format in addition to the older IPv4 format. At that point, dual-stack seemed like the most economic way to do the implementation. Assuming the larger address format was the ultimate destination, creating some mixed IPv4/IPv6 format packet would have been more complex than just having the two distinct formats and would still have forced every router and host to change.

"Everyone had IPv4 so if you did a DNS lookup and got back only IPv4 you could use existing code. If you got back IPv6 only, you would have to use IPv6 code or not open the connection. If you got back both address types, then you needed to have a preference for one or the other. None of this was a trivial change. It was once proposed to use variable length addressing but the host and router programmers didn't like having to parse packets to find field boundaries." 
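Cerf's point about dual-stack preference can be sketched in a few lines of Python. This is an illustrative sketch, not anything from the source: the addresses are made-up stand-ins for A and AAAA records, and the simple "prefer IPv6, fall back to IPv4" rule below is a simplified version of what real resolvers and operating systems do.

```python
import socket

# Simulated results of a dual-stack DNS lookup: (address family, address)
# pairs, as socket.getaddrinfo might return them. The addresses below are
# illustrative placeholders, not real records for any particular host.
results = [
    (socket.AF_INET, "192.0.2.10"),                 # IPv4 (A record)
    (socket.AF_INET6, "2001:db8::10"),              # IPv6 (AAAA record)
]

def pick_address(results):
    """Prefer IPv6 when both families are present; fall back to IPv4.

    Returns a (family, address) pair, or None if the lookup was empty.
    """
    for family, addr in results:
        if family == socket.AF_INET6:
            return family, addr
    for family, addr in results:
        if family == socket.AF_INET:
            return family, addr
    return None

family, addr = pick_address(results)
```

With only IPv4 in the answer, the existing IPv4 code path is used; with both, the host must pick one — exactly the "preference for one or the other" Cerf describes.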

Others point to the cost: routers need upgrading or replacing. 

Vint Cerf accepts, “It’s my fault,” for choosing 32 bits and allowing only 4.3 billion addresses. Looking back, Vint adds, “It’s enough to do an experiment. The problem is the experiment never ended.” In 2015, Vint pointed out, "The next wave of stuff is the Internet of Things. Every appliance you can possibly imagine, you're shifting from electromechanical controls to programmable controls. And once you put a computer inside of anything, there's an opportunity to put it on the Net."
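The arithmetic behind Cerf's regret is simple to check — a 32-bit field gives 2^32 addresses, while IPv6's 128-bit field gives 2^128. A quick sketch (my own check, not from the source):

```python
# IPv4 uses 32-bit addresses; IPv6 uses 128-bit addresses.
ipv4_addresses = 2 ** 32    # 4,294,967,296 — the ~4.3 billion ceiling Cerf mentions
ipv6_addresses = 2 ** 128   # roughly 3.4 x 10^38

print(f"IPv4: {ipv4_addresses:,}")
print(f"IPv6: {ipv6_addresses:.2e}")
```

At 4.3 billion, IPv4 offers fewer addresses than there are people on Earth — no room for an Internet of Things.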

With hindsight, Vint says, "Who the hell knew how much address space we needed?" 


A surprise in the Google data: the U.S., at 25% converted, is far ahead of almost everyone. I had guessed we would not be moving so quickly because so many institutions in the U.S. still had addresses.
 
It may be that the U.S. leads because ISOC is based here and understands U.S. requirements particularly well. 

 

Latest issue

August 27

Reply "subscribe" to be added, "un" to be dropped

G.fast:
AT&T: Millions of G.fast Lines Coming. Now Starting Buildings Outside of Territory http://bit.ly/ATTout
Here Comes 5 & 10 Gigabit G.mgfast; 424 and 848 MHz http://bit.ly/gmgfast
Stanton of Adtran: G.fast Will Be Even Faster, But Not Yet http://bit.ly/gfaster
Broadcom's 212 MHz Exists http://bit.ly/Broad212 Update: I'll have test data next issue
1.6 Gig 212 MHz ZTE/NetCologne Demo http://bit.ly/ZTENC212
Broadcom Bummer: Blocked By Berlin Ban http://bit.ly/Broadbum
Tamboli: "2019 Will Be The Year of G.fast" http://bit.ly/GF2019
2019 Deutsche Telekom G.fast Build is On http://bit.ly/DTgfast below