Stephen Crocker: How the Internet Got Its Rules

The early R.F.C.’s ranged from grand visions to mundane details, although the latter quickly became the most common. Less important than the content of those first documents was that they were available free of charge and anyone could write one. Instead of authority-based decision-making, we relied on a process we called “rough consensus and running code.” Everyone was welcome to propose ideas, and if enough people liked an idea and used it, the design became a standard.

After all, everyone understood there was a practical value in choosing to do the same task in the same way. For example, if we wanted to move a file from one machine to another, and if you were to design the process one way, and I was to design it another, then anyone who wanted to talk to both of us would have to employ two distinct ways of doing the same thing. So there was plenty of natural pressure to avoid such hassles. It probably helped that in those days we avoided patents and other restrictions; without any financial incentive to control the protocols, it was much easier to reach agreement.

This was the ultimate in openness in technical design and that culture of open processes was essential in enabling the Internet to grow and evolve as spectacularly as it has. In fact, we probably wouldn’t have the Web without it. When CERN physicists wanted to publish a lot of information in a way that people could easily get to it and add to it, they simply built and tested their ideas. Because of the groundwork we’d laid in the R.F.C.’s, they did not have to ask permission, or make any changes to the core operations of the Internet. Others soon copied them: hundreds of thousands of computer users, then hundreds of millions, creating and sharing content and technology. That’s the Web.

Read it all.

Posted in Culture-Watch, Blogging & the Internet, History

One comment on “Stephen Crocker: How the Internet Got Its Rules”

  1. Daniel says:

    Thanks for posting this. I used to work with Crocker and for Vinton Cerf (one of the Internet’s founders) about 25 years ago at MCI Communications. It was a wonderful learning environment; everyone was infused with a sense of making things happen quickly and pragmatically, and was open to whatever ideas worked best.

    The last part of Crocker’s piece is the most important. It is reinforced by a recent article in the New England Journal of Medicine in which two doctors make a plea for not throwing money at the national health care system to “automate” it. As they correctly state, this will just make matters worse by solidifying the current system of incompatible, legacy computer systems. They, like Crocker, advocate a system of open standards for the storage, processing and interconnection of medical information. My plea is to add ironclad information security to these records. Just throwing $17 billion at modernizing and automating health care will do little more than create additional members of the wealthy class we so like to despise today. Remember, Ross Perot got his fortune started creating computer systems for a little government program called Medicare.

    Another critically important benefit of open standards development is that it keeps the cost of failure small. This is the key to rapid and successful development: if there are no big failures, you can afford to keep going until you finally hit the big success. (As an aside, if Wall Street had followed this path instead of placing huge, leveraged bets in areas where it did not understand the risks of failure, we wouldn’t be where we are now.) Such successes can be quite random and unpredictable, unlike government planning and programs. I think this is quite germane to the current government policy of making big bets on fixing the economy without considering the risk of what happens if they have a big failure instead of a big success right away.