The internet: an electronic communications network that connects computer networks and organizational computer facilities around the world. Libertarians often cite the internet as a case in point that liberty is the mother of innovation. Opponents quickly counter that the internet was a government program, proving once again that markets must be guided by the steady hand of the state. In one sense the critics are correct, though not in ways they understand. The internet indeed began as a typical government program, the ARPANET, designed to share mainframe computing power and to establish a secure military communications network. The Advanced Research Projects Agency (ARPA), now DARPA, of the United States Department of Defense funded the original network.
Of course the designers could not have foreseen what the (commercial) internet has become. Still, this reality has important implications for how the internet works — and explains why there are so many roadblocks in the continued development of online technologies. It is only thanks to market participants that the internet became something other than a typical government program: inefficient, overcapitalized, and not directed toward socially useful purposes.
In fact, the role of the government in the creation of the internet is often understated. The internet owes its very existence to the state and to state funding. The story begins with ARPA, created in 1958 in response to the Soviets’ launch of Sputnik; among its missions was research into the efficient use of computers for civilian and military applications.
As the term “interactive computing” suggests, using computers would no longer be a static, one-way process but a dynamic exchange between human and machine. According to the standard histories, the man most responsible for defining these new goals was J. C. R. Licklider. A psychologist specializing in psychoacoustics, he had worked on early computing research and became a vocal proponent of interactive computing. His 1960 essay “Man-Computer Symbiosis” outlined how computers might even go so far as to augment the human mind. Licklider, known to friends, colleagues, and casual acquaintances as “Lick,” was also the first to describe the concept he called the “Galactic Network.”
It just so happened that funding was available. Three years earlier, in 1957, the Soviet launch of Sputnik had sent the US military into a panic. Partially in response, the Department of Defense (DoD) created a new agency for basic and applied technological research called the Advanced Research Projects Agency (ARPA, today known as DARPA). The agency threw large sums of money at all sorts of possible — and dubious — research avenues, from psychological operations to weather control. Licklider was appointed to head its Command and Control and Behavioral Sciences divisions, presumably because of his background in both psychology and computing.
In “Man-Computer Symbiosis,” Licklider provided a guide for decades of computer research to follow. He envisioned a globally interconnected set of computers through which everyone could quickly access data and programs from any site.
In October 1962, Licklider was appointed head of ARPA’s Information Processing Techniques Office (IPTO), the office that would fund the research leading to the development of the ARPANET.
As IPTO director, Licklider funded a research project at MIT headed by Robert Fano called Project MAC (the Project on Mathematics and Computation), built around a large mainframe computer designed to be shared by up to 30 simultaneous users, each sitting at a separate typewriter terminal. Project MAC would go on to produce groundbreaking research in operating systems, artificial intelligence, and the theory of computation.
Licklider sought out the leading computer research institutions in the US and set up research contracts with them. Soon there were about a dozen universities and companies working on ARPA contracts, including Stanford, UCLA, and Berkeley. Lick jokingly nicknamed his group the Intergalactic Computer Network. This group would later form the core of the team that created the ARPANET.
Licklider left ARPA in 1964 without ever implementing his vision there, but he left that vision of a universal network behind in others. Within a few years, his ideas were realized with the creation of the ARPANET.
The members of his old group realized that the big computers scattered around university campuses needed to communicate with one another, much as Licklider had discussed in his 1960 paper. In 1967, one of his successors at ARPA, Robert Taylor, formally funded the development of a research network called the ARPANET. At first the network spanned only a handful of universities across the country. By the early 1980s, it had grown to include hundreds of nodes. Finally, through a rather convoluted trajectory involving international organizations, standards committees, national politics, and technological adoption, the ARPANET evolved in the early 1990s into the internet as we know it.
Larry Roberts, the principal architect of the ARPANET, gave credit to Licklider’s vision: “The vision was really Lick’s originally. … He sat down with me and really convinced me that it was important and convinced me into making it happen.”
Yasha Levine’s important new book, Surveillance Valley, deftly demonstrates that the history of the big tech firms, complete with its panoptic overtones, is thoroughly interwoven with the history of the repressive state apparatus. While many people may be at least nominally aware of the links between early computing, or the proto-internet, and the military, Levine’s book reveals the depth of these connections and how they persist. As he provocatively puts it, “the Internet was developed as a weapon and remains a weapon today.”
Thus, cases of Google building military drones and silencing dissenting voices, Facebook watching us all, and Amazon making facial recognition software for the police need to be understood not as aberrations but as business as usual.
Levine believes that he has unearthed several new pieces of evidence that undercut the standard telling of this early history, leading him to conclude that the internet has been a surveillance platform from its inception.
Levine begins his account with the war in Vietnam, and the origins of a part of the Department of Defense known as the Advanced Research Projects Agency (ARPA) – an outfit born of the belief that victory required the US to fight a high-tech war. ARPA’s technocrats earnestly believed “in the power of science and technology to solve the world’s problems” (23), and they were confident that the high-tech systems they developed and deployed (such as Project Igloo White) would allow the US to triumph in Vietnam. And though the US was not ultimately victorious in that conflict, the worldview of ARPA’s technocrats was, as was the linkage between the nascent tech sector and the military. Indeed, the tactics and techniques developed in Vietnam were soon to be deployed for dealing with domestic issues, “giving a modern scientific veneer to public policies that reinforced racism and structural poverty” (30).
Much of the early history of computers, as Levine documents, is rooted in systems developed to meet military and intelligence needs during WWII – but the Cold War provided plenty of impetus for further military reliance on increasingly complex computing systems. And as fears of nuclear war took hold, computer systems (such as SAGE) were developed to surveil the nation and provide military officials with a steady flow of information. Along with the advancements in computing came the dispersion of cybernetic thinking, which treated humans as information-processing machines, not unlike computers, and helped advance a worldview wherein, given enough data, computers could make sense of the world. All that was needed was to feed more and more information into the computers – and intelligence agencies proved to be among the first groups interested in taking advantage of these systems.
While the development of these systems of control and surveillance ran alongside attempts to market computers to commercial firms, Levine’s point is that it was not an either/or situation but a both/and: “computer technology is always ‘dual use,’ to be used in both commercial and military applications” (58). This split allows computer scientists and engineers who would be morally troubled by the “military applications” of their work to tell themselves that they work strictly on the commercial or scientific side.
During the 1960s, the RAND Corporation had begun to think about how to design a military communications network that would be invulnerable to a nuclear attack. Paul Baran, a RAND researcher whose work was financed by the Air Force, produced a classified report in 1964 proposing a radical solution to this communication problem. Baran envisioned a decentralized network of different types of “host” computers, without any central switchboard, designed to operate even if parts of it were destroyed. The network would consist of several “nodes,” each equal in authority, each capable of sending and receiving pieces of data.
Each data fragment could thus travel one of several routes to its destination, such that no one part of the network would be completely dependent on the existence of another part. An experimental network of this type, funded by ARPA and thus known as the ARPANET, was established at four universities (using four computers) in 1969.
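Baran’s scheme is easy to sketch in code. The toy Python model below is purely illustrative (the topology and node names are invented, not Baran’s actual design): every node is an equal peer, a route is found by searching the graph, and losing a node simply means searching for another path.

```python
# Toy model of Baran's packet-switching idea: equal-authority nodes, no
# central switchboard, and redundant routes that survive the loss of a node.
from collections import deque

# A hypothetical redundant topology (invented for illustration).
network = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D"},
    "D": {"B", "C", "E"},
    "E": {"D"},
}

def find_route(net, src, dst):
    """Breadth-first search for any path of surviving nodes from src to dst."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in sorted(net[path[-1]]):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # destination unreachable

print(find_route(network, "A", "E"))  # ['A', 'B', 'D', 'E']

# Destroy node B: packets simply take the surviving route through C.
survivors = {n: nbrs - {"B"} for n, nbrs in network.items() if n != "B"}
print(find_route(survivors, "A", "E"))  # ['A', 'C', 'D', 'E']
```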
From Wikipedia:
The first successful message on the ARPANET was sent by UCLA student programmer Charley Kline, at 10:30 pm on 29 October 1969, from Boelter Hall 3420. Kline transmitted from the university’s SDS Sigma 7 Host computer to the Stanford Research Institute’s SDS 940 Host computer. The message text was the word login; on an earlier attempt the l and the o letters were transmitted, but the system then crashed. Hence, the literal first message over the ARPANET was lo. About an hour later, after the programmers repaired the code that caused the crash, the SDS Sigma 7 computer effected a full login. The first permanent ARPANET link was established on 21 November 1969, between the IMP at UCLA and the IMP at the Stanford Research Institute. By 5 December 1969, the entire four-node network was established.
Levine’s later chapters focus on the privatization of the network, the creation of Google, and revelations of NSA surveillance. And, in the final part of his book, he turns his attention to Tor and the crypto community. He claims that these technologies were developed from the beginning with surveillance in mind, and that their origins are tangled up with counterinsurgency research in the Third World. This leads him to a damning conclusion: “The Internet was developed as a weapon and remains a weapon today.”
Researchers at any one of the four nodes could share information, and could operate any one of the other machines remotely, over the new network. (Actually, former ARPA head Charles Herzfeld says that distributing computing power over a network, rather than creating a secure military command-and-control system, was the ARPANET’s original goal, though this is a minority view.) Al Gore was not present!
By 1972, the number of host computers connected to the ARPANET had increased to 37. Because it was so easy to send and retrieve data, within a few years the ARPANET became less a network for shared computing than a high-speed, federally subsidized, electronic post office. The main traffic on the ARPANET was not long-distance computing, but news and personal messages.
1972: BBN’s Ray Tomlinson introduces network email, and the Internetworking Working Group (INWG) forms to address the need for standard protocols.
But the ARPANET had a problem: it wasn’t mobile. The computers on the ARPANET were gigantic by today’s standards, and they communicated over fixed links. That might work for researchers, who could sit at a terminal in Cambridge or Menlo Park – but it did little for soldiers deployed deep in enemy territory. For the ARPANET to be useful to forces in the field, it had to be accessible anywhere in the world.
Picture a jeep in the jungles of Zaire, or a B-52 miles above North Vietnam. Then imagine these as nodes in a wireless network linked to another network of powerful computers thousands of miles away. This is the dream of a networked military using computing power to defeat the Soviet Union and its allies. This is the dream that produced the internet.
Making this dream a reality required doing two things. The first was building a wireless network that could relay packets of data among the widely dispersed cogs of the US military machine by radio or satellite. The second was connecting those wireless networks to the wired network of the ARPANET, so that multimillion-dollar mainframes could serve soldiers in combat. “Internetworking,” the scientists called it.
Internetworking is the problem the internet was invented to solve. It presented enormous challenges. Getting computers to talk to one another – networking – had been hard enough. But getting networks to talk to one another – internetworking – posed a whole new set of difficulties, because the networks spoke alien and incompatible dialects. Trying to move data from one to another was like writing a letter in Mandarin to someone who only knows Hungarian and hoping to be understood. It didn’t work.
In response, the architects of the internet developed a kind of digital Esperanto: a common language that enabled data to travel across any network. In 1974, two ARPA researchers named Robert Kahn and Vint Cerf, the duo many call the fathers of the internet, published an early blueprint. Drawing on conversations happening throughout the international networking community, they sketched a design for “a simple but very flexible protocol”: a universal set of rules for how computers should communicate.
These rules had to strike a very delicate balance. On the one hand, they needed to be strict enough to ensure the reliable transmission of data. On the other, they needed to be loose enough to accommodate all of the different ways that data might be transmitted.
“It had to be future-proof,” Cerf tells me. You couldn’t write the protocol for one point in time, because it would soon become obsolete. The military would keep innovating. They would keep building new networks and new technologies. The protocol had to keep pace: it had to work across “an arbitrarily large number of distinct and potentially non-interoperable packet switched networks,” Cerf says – including ones that hadn’t been invented yet. This feature would make the system not only future-proof, but potentially infinite. If the rules were robust enough, the “ensemble of networks” could grow indefinitely, assimilating any and all digital forms into its sprawling multithreaded mesh.
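The practical payoff of that design is easy to demonstrate today: an application opens one TCP/IP connection and never learns which physical networks carry its packets underneath. A minimal sketch using Python’s standard library (example.com is a domain reserved for testing and documentation, so the request is harmless):

```python
# The payoff of a universal protocol: the application below opens one TCP/IP
# connection and neither knows nor cares which physical networks (radio,
# fiber, copper, satellite) carry its packets.
import socket

with socket.create_connection(("example.com", 80), timeout=10) as sock:
    # Speak HTTP over the connection; TCP delivers the bytes intact and in
    # order no matter how many dissimilar networks sit in between.
    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    reply = b""
    while chunk := sock.recv(4096):
        reply += chunk

print(reply.decode("latin-1").splitlines()[0])  # e.g. HTTP/1.1 200 OK
```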
Eventually, these rules became the lingua franca of the internet. But first, they needed to be implemented and tweaked and tested – over and over and over again. There was nothing inevitable about the internet getting built. It seemed like a ludicrous idea to many, even among those who were building it. The scale, the ambition – the internet was a skyscraper and nobody had ever seen anything more than a few stories tall. Even with a firehose of cold war military cash behind it, the internet looked like a long shot.
1973: Global networking becomes a reality as University College London (England) and the NORSAR research facility (Norway) connect to the ARPANET. The term “internet” is born. A year later, the first Internet Service Provider (ISP) arrives with the introduction of a commercial version of the ARPANET, known as Telenet.
Then, in the summer of 1976, it started working.
If you had walked into Rossotti’s beer garden on 27 August 1976, you would have seen the following: seven men and one woman at a table, hovering around a computer terminal, the woman typing. A pair of cables ran from the terminal to the parking lot, disappearing into a big grey van.
Inside the van were machines that transformed the words being typed on the terminal into packets of data. An antenna on the van’s roof then transmitted these packets as radio signals. These signals radiated through the air to a repeater on a nearby mountain top, where they were amplified and rebroadcast. With this extra boost, they could make it all the way to Menlo Park, where an antenna at an office building received them.
It was here that the real magic began. Inside the office building, the incoming packets passed seamlessly from one network to another: from the packet radio network to the ARPANET. To make this jump, the packets had to undergo a subtle metamorphosis. They had to change their form without changing their content. Think about water: it can be vapor, liquid or ice, but its chemical composition remains the same. This miraculous flexibility is a feature of the natural universe – which is lucky, because life depends on it.
The flexibility that the internet depends on, by contrast, had to be engineered. And on that day in August, it enabled packets that had only existed as radio signals in a wireless network to become electrical signals in the wired network of the ARPANET. Remarkably, this transformation preserved the data perfectly. The packets remained completely intact.
So intact, in fact, that they could travel another 3,000 miles to a computer in Boston and be reassembled into exactly the same message that was typed into the terminal at Rossotti’s. Powering this internetwork odyssey was the new protocol cooked up by Kahn and Cerf. Two networks had become one. The internet worked.
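That metamorphosis, changing form while preserving content, is exactly what a gateway does. Here is a toy Python sketch of the idea; the frame formats are invented for illustration and bear no resemblance to the real 1976 packet radio or ARPANET protocols:

```python
# Toy sketch of the gateway's job: strip one network's framing and re-wrap
# the untouched payload for the next network. Frame formats are invented.

message = b"packets typed at Rossotti's beer garden"

def radio_frame(payload: bytes) -> bytes:
    """Wrap a payload for the (hypothetical) packet radio link."""
    return b"RADIO|" + payload + b"|END"

def wired_frame(payload: bytes) -> bytes:
    """Wrap the same payload for the (hypothetical) wired ARPANET link."""
    return b"WIRE<" + payload + b">"

def gateway(frame: bytes) -> bytes:
    """Unwrap the radio framing and re-wrap the payload for the wired network."""
    payload = frame.removeprefix(b"RADIO|").removesuffix(b"|END")
    return wired_frame(payload)

over_the_air = radio_frame(message)
over_the_wire = gateway(over_the_air)

# The form changed; the content did not.
assert message in over_the_air and message in over_the_wire
print(over_the_wire)
```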
“There weren’t balloons or anything like that,” Don Nielson tells me. Now in his 80s, Nielson led the experiment at Rossotti’s on behalf of the Stanford Research Institute (SRI), a major Arpa contractor. Tall and soft-spoken, he is relentlessly modest; seldom has someone had a better excuse for bragging and less of a desire to indulge in it. We are sitting in the living room of his Palo Alto home, four miles from Google, nine from Facebook, and at no point does he even partly take credit for creating the technology that made these extravagantly profitable corporations possible.
1976: Queen Elizabeth II hits the “send button” on her first email.
The internet was a group effort, Nielson insists. SRI was only one of many organizations working on it. Perhaps that’s why they didn’t feel comfortable popping bottles of champagne at Rossotti’s: claiming too much glory for one team would have violated the collaborative spirit of the international networking community. Or maybe they just didn’t have the time. Dave Retz, one of the researchers at Rossotti’s, says they were too worried about getting the experiment to work – and then, when it did, too worried about whatever came next. There was always more to accomplish: as soon as they’d stitched two networks together, they started working on three – which they achieved a little over a year later, in November 1977.
Over time, the memory of Rossotti’s receded. Nielson himself had forgotten about it until a reporter reminded him 20 years later. “I was sitting in my office one day,” he recalls, when the phone rang. The reporter on the other end had heard about the experiment at Rossotti’s, and wanted to know what it had to do with the birth of the internet. By 1996, Americans were having cybersex in AOL chatrooms and building hideous, seizure-inducing homepages on GeoCities. The internet had outgrown its military roots and gone mainstream, and people were becoming curious about its origins. So Nielson dug out a few old reports from his files, and started reflecting on how the internet began. “This thing is turning out to be a big deal,” he remembers thinking.
What made the internet a big deal is the feature Nielson’s team demonstrated that summer day at Rossotti’s: its flexibility. Forty years ago, the internet teleported thousands of words from the Bay Area to Boston over channels as dissimilar as radio waves and copper telephone lines. Today it bridges far greater distances, over an even wider variety of media. It ferries data among billions of devices, conveying our tweets and Tinder swipes across multiple networks in milliseconds.
The fact that we think of the internet as a world of its own, as a place we can be “in” or “on” – this too is the legacy of Don Nielson and his fellow scientists. By binding different networks together so seamlessly, they made the internet feel like a single space. Strictly speaking, this is an illusion. The internet is composed of many, many networks: when you go to Google’s website, your data must traverse a dozen different routers before it arrives. But the internet is a master weaver: it conceals its stitches extremely well. We’re left with the sensation of a boundless, borderless digital universe – cyberspace, as we used to call it. Forty years ago, this universe first flickered into existence in the foothills outside of Palo Alto, and has been expanding ever since.
As parts of the ARPANET were declassified, commercial networks began to be connected to it. Any type of computer using a particular communications standard, or “protocol,” was capable of sending and receiving information across the network. The design of these protocols was contracted out to universities such as Stanford and University College London and was financed by a variety of federal agencies. The major thoroughfares or “trunk lines” continued to be financed by the Department of Defense.
1983: The Domain Name System (DNS) establishes the familiar .edu, .gov, .com, .mil, .org, .net, and .int system for naming hosts on the network. These names are far easier to remember than the numerical addresses they map to, such as 123.45.67.89.
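That division between names and numbers is still visible from any programming language: hand the resolver a name, and it returns the numeric address that routers actually use. A one-call Python illustration (again using the reserved example.com domain):

```python
# DNS in one call: the operating system's resolver turns a memorable name
# into the numeric address routers actually use. example.com is reserved
# for documentation, so this lookup is safe to run.
import socket

name = "example.com"
address = socket.gethostbyname(name)
print(f"{name} -> {address}")  # e.g. example.com -> 93.184.216.34
```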
By the early 1980s, private use of the ARPA communications protocol — what is now called “TCP/IP” — far exceeded military use. In 1984 the National Science Foundation assumed responsibility for building and maintaining the trunk lines or “backbones.” (The ARPANET formally expired in 1990; by that time hardly anybody noticed.) The NSF’s Office of Advanced Scientific Computing financed the internet’s infrastructure from 1984 until 1994, when the backbones were privatized.
1984: William Gibson’s novel “Neuromancer” popularizes the term “cyberspace,” which Gibson had coined in an earlier short story.
In short, both the design and implementation of the internet have relied almost exclusively on government dollars. The fact that its designers envisioned a packet-switching network has serious implications for how the internet actually works. For example, packet switching is a great technology for file transfers, email, and web browsing but not so good for real-time applications like video and audio feeds, and, to a lesser extent, server-based applications like webmail, Google Earth, SAP, PeopleSoft, and Google Spreadsheet.
Furthermore, without any mechanism for pricing individual packets, the network is overused, like any public good. Every packet is assigned an equal priority. A packet containing a surgeon’s diagnosis of an emergency medical procedure has exactly the same chance of getting through as a packet containing part of Coldplay’s latest single or an online gamer’s instruction to smite his foe.
Because the sender’s marginal cost of each transmission is effectively zero, the network is overused, and often congested. Like any essentially unowned resource, an open-ended packet-switching network suffers from what Garrett Hardin famously called the “Tragedy of the Commons.”
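The point can be made concrete with a toy model. In the Python sketch below, the traffic mix and urgency labels are invented for illustration: a first-come, first-served queue ignores urgency entirely, while a crude priority scheme, standing in for pricing, would reorder it.

```python
# A toy model of the argument above: a first-come, first-served packet queue
# gives the surgeon's packet no priority over the pop single. Traffic mix and
# urgency labels are invented for illustration.
from collections import deque

queue = deque([
    ("coldplay-single-chunk-1", "low"),
    ("gamer-smite-command",     "low"),
    ("surgeon-diagnosis",       "urgent"),
    ("coldplay-single-chunk-2", "low"),
])

# Today's network transmits strictly in arrival order; urgency is ignored.
print([name for name, urgency in queue])

# Under a priority (or pricing) scheme, urgent packets could jump the queue.
# False sorts before True, so "urgent" packets come first.
prioritized = sorted(queue, key=lambda pkt: pkt[1] != "urgent")
print([name for name, _ in prioritized])
```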
In no sense can we say that packet-switching is the “right” technology. One of my favorite quotes on this subject comes from the Netbook, a semi-official history of the internet:
“The current global computer network has been developed by scientists and researchers and users who were free of market forces. Because of the government oversight and subsidy of network development, these network pioneers were not under the time pressures or bottom-line restraints that dominate commercial ventures. Therefore, they could contribute the time and labor needed to make sure the problems were solved. And most were doing so to contribute to the networking community.”
In other words, the designers of the internet were “free” from the constraint that whatever they produced had to satisfy consumer wants.
We must be very careful not to describe the internet as a “private” technology, a spontaneous order, or a shining example of capitalistic ingenuity. It is none of these. Of course, almost all of the internet’s current applications — unforeseen by its original designers — have been developed in the private sector. (Unfortunately, the original web and the web browser are not among them, having been designed by the state-funded European Laboratory for Particle Physics (CERN) and the University of Illinois’s NCSA.)
The World Wide Web wasn’t created until 1989, 20 years after the first “Internet” connection was established and the first message sent.
1990: Tim Berners-Lee, a scientist at CERN, the European Organization for Nuclear Research, develops HyperText Markup Language (HTML). This technology continues to have a large impact on how we navigate and view the Internet today.
1991: CERN introduces the World Wide Web to the public.
1992: The first audio and video are distributed over the Internet. The phrase “surfing the Internet” is popularized.
And today’s internet would be impossible without the heroic efforts at Xerox PARC and Apple to develop a usable graphical user interface (GUI), a lightweight and durable mouse, and the Ethernet protocol. Still, none of these would have been viable without the huge investment of public dollars that brought the network into existence in the first place.
Now, it is easy to admire the technology of the internet. I marvel at it every day. But technological value is not the same as economic value. That can only be determined by the free choice of consumers to buy or not to buy. The ARPANET may well have been technologically superior to any commercial networks that existed at the time, just as Betamax may have been technologically superior to VHS, the MacOS to MS-DOS, and Dvorak to QWERTY. (Actually Dvorak wasn’t.) But the products and features valued by engineers are not always the same as those valued by consumers. Markets select for economic superiority, not technological superiority (even in the presence of nefarious “network effects,” as shown convincingly by Liebowitz and Margolis).
Libertarian internet enthusiasts tend to forget the fallacy of the broken window. We see the internet. We see its uses. We see the benefits it brings. We surf the web and check our email and download our music. But we will never see the technologies that weren’t developed because the resources that would have been used to develop them were confiscated by the Defense Department and given to Stanford engineers. Likewise, I may admire the majesty and grandeur of an Egyptian pyramid, a TVA dam, or a Saturn V rocket, but it doesn’t follow that I think they should have been created, let alone at taxpayer expense.
What kind of global computer network would the market have selected? We can only guess. Maybe it would be more like the commercial online services such as CompuServe or MSN, or the private bulletin boards of the 1980s. Most likely, it would use some kind of pricing schedule, where different charges would be assessed for different types of transmissions.
The whole idea of pricing the internet as a scarce resource — and bandwidth is, given current technology, scarce, though we usually don’t notice this — is ignored in most proposals to legislate network neutrality, a form of “network socialism” that can only stymie the internet’s continued growth and development. The net neutrality debate takes place in the shadow of government intervention. So too the debate over the division of the spectrum for wireless transmission. Any resource the government controls will be allocated based on political priorities.
Let us conclude: yes, the government was the founder of the internet. As a result, we are left with a panoply of lingering inefficiencies, misallocations, abuses, and political favoritism. In other words, government involvement accounts for the internet’s continuing problems, while the market should get the credit for its glories.