Making the Unaccountable Internet: The Changing Meaning of Accounting in the Design of the Early Internet

01/28/2022 · A. Feder Cooper, et al. · Cornell University

Contemporary concerns over the governance of technological systems often run up against compelling narratives about the technical (in)feasibility of designing mechanisms for accountability. While in recent FAccT literature these concerns have been deliberated predominantly in relation to machine learning, other instances in the history of computing also presented circumstances in which computer scientists needed to un-muddle what it means to design (un)accountable systems. One such compelling narrative can frequently be found in canonical histories of the Internet that highlight how its original designers' commitment to the "End-to-End" architectural principle precluded other features from being implemented, resulting in the fast-growing, generative, but ultimately unaccountable network we have today. This paper offers a critique of such technologically essentialist notions of accountability and the characterization of the "unaccountable Internet" as an unintended consequence. We explore the changing meaning of accounting and its relationship to accountability in a selected corpus of Requests for Comments (RFCs) concerning the early Internet's design from the 1970s and 80s. We characterize 4 phases of conceptualizing accounting: as billing, as measurement, as management, and as policy, and demonstrate how an understanding of accountability was constituted through these shifting meanings. Recovering this history is not only important for understanding the processes that shaped the Internet, but also serves as a starting point for unpacking the complicated political choices that are involved in designing accountability mechanisms for other technological systems today.


1. Introduction

Within the FAccT community, there has been a revival of interest in the value of accountability in technical systems, especially those that involve Machine Learning. Nissenbaum (1996) first recognized the unique challenges of accountability in a computerized society; writing from a philosophical perspective, she discussed how the ascendance of computer technology's use in important societal contexts was poised to introduce novel barriers to accountability. Twenty-five years later, one can observe the difficulties Nissenbaum (1996) raised playing out in FAccT scholarship, illustrating that accountability is a slippery term with multiple valences. For example, several FAccT scholars have imported a definition of "public accountability" from Bovens (2007) into research concerning algorithmic accountability (Wieringa, 2020; Kacianka and Pretschner, 2021; Kroll, 2021), while others have attempted to institute cultures of accountability through ex ante engineering standards of care (Gebru et al., 2018; Mitchell et al., 2019) or ex post audits and impact assessments (Raji et al., 2020; Vecchione et al., 2021; Adler et al., 2018; Metcalf et al., 2021). [Endnote 1: While the FAccT community has published multiple operationalizations of the value of fairness in ML, accountability has predominantly resisted mathematical formalization, with Kacianka and Pretschner (2021) serving as an exception.] These works draw from philosophy, computer science, and the law, and focus almost exclusively on accountability in relation to algorithms. This emphasis on algorithms has left broader systems and institutional concerns (in which algorithms are situated) unattended in terms of their relationship to accountability (Cooper et al., 2021).

In this paper, we provide a complement to prior technological accountability scholarship. We take a historical approach and examine accountability as it pertains to a specific computer system, situated in a time and a place, including its architectural components, protocols, and institutional actors. Rather than taking a top-down approach in which we posit our own definition of accountability to ground our analysis (Wieringa, 2020; Kim and Doshi-Velez, 2021; Kroll et al., 2017; Kroll, 2021), we investigate how an understanding of (un)accountability can emerge and evolve over time during a system's development. For our subject, we concentrate on the early Internet and its predecessor, the ARPA-controlled ARPANET. [Note: ARPA, the Advanced Research Projects Agency, has see-sawed back and forth between including a D for "Defense" at the start of its name. The agency has settled on DARPA (since 1996), but we refer to it as ARPA throughout for clarity, since our subject, the ARPANET (sometimes referred to as the ARPAnet), was named after ARPA (Leiner et al., 1997, p. 103; Clark, 2018).] Today, the Internet is often described as lacking endemic features for accountability, fixing the perception of the Internet as an unaccountable system (Rexford and Dovrolis, 2010; Clark, 2018; Weitzner et al., 2008). We contend that identifying how this perception developed can help locate what it means for a system to be accountable, and can enable us to interrogate the dynamics that lead a research community to develop an unaccountable system. While our work is a deep dive into the particular origins of the unaccountable Internet, it is also a case study that elicits broader lessons concerning the (un)accountability of complex computer systems.

1.1. Retelling Internet Histories: From End-to-End to Accounting

Over the past two decades, Internet historians have pointed to the institutional context of the development of the network's protocols and standards (Abbate, 1999; Russell, 2014; Yates, 2019; DeNardis, 2015), explored how commercial actors, user groups, and regulators also shaped its trajectory (Turner, 2006; Greenstein, 2015; McIlwain, 2020), and contrasted the US origins of the Internet with networks developed in different political contexts (Medina, 2011; Peters, 2008). We follow these studies by looking at how network engineers and administrators, as a research community and governing institution, defined the goal of networking and often contrasted it with the changing meanings of the concepts of "accounting" and "accountability." [Note: The paper surveys a period over which the governing bodies and structures of the networks in question were changing, even if many of the individuals involved in the development of network computing, first under the ARPANET project and later the Internet, remained involved over several decades. A brief but insightful overview of the process of both expansion and formalization of the network research and development community that this paper examines is provided in Russell (2006, pp. 50-53).]

These institutional perspectives on the history and politics of networks contrast with other accounts that focus on the architectural novelty of the network's design. At the center of these architectural histories stands the now-foundational "End-to-End" principle. In 1981, [Endnote 2: First written and circulated in 1981, but ultimately published in 1984.] Jerome Saltzer, Dave Reed, and David Clark, three members of the ARPANET project team, formalized the "End-to-End" principle in a paper, arguing that a communication network should implement most complex functions at the end-points rather than within the network itself, leaving the network's design relatively simple and focused on routing data traffic (Saltzer et al., 1984, p. 278). [Endnote 3: The principle had tacitly been a working assumption of the ARPANET project (among other computing research efforts) for over a decade by the time the paper made it explicit (Russell, 2006, p. 49; Saltzer et al., 1984).] Through various retellings in works about the early Internet, End-to-End became its defining feature: used to explain the network's exponential, rapid growth; celebrated for its architectural simplicity and even for a political commitment to decentralization, individual autonomy, and freedom; and, later, cited as a reason behind some of the Internet's discontents (Blumenthal and Clark, 2001; Rexford and Dovrolis, 2010). [Note: Legal scholars attempting to theorize the relationship between network architecture and political order in particular zeroed in on "End-to-End" as a way to tie the two together (Lessig, 2006; Greenstein, 2015; Zittrain, 2008; Clark and Landau, 2011; Post, 2009).] The ubiquity of "End-to-End" as a stand-in for the Internet as a whole was noted by Tarleton Gillespie, who, in 2006, argued that such a cultural uptake of one specific term both obfuscated its contentious meaning and created a sense in which the architecture of the Internet became a fixed material object (Gillespie, 2006). [Endnote 4: This cultural uptake had broader reverberations among users of the network. In November 1988, the Morris Worm caused failures that brought down the majority of nodes on the Internet (RFC 1135). The network seemed to be in crisis: that such widespread failure could occur suggested the existing, End-to-End-guided architecture was perhaps not suited to the network's broader use and needs. A retrospective analysis of the Worm is out of the scope of this paper; please refer to Slayton (2016). Nevertheless, the culture of End-to-End prevailed. After the initial response to excise the Worm from the network, what followed shortly after was a series of ethics statements from revered institutions, including MIT and the NSF (NSF, 1989; MIT, 1989; CPSR, 1989; IAB, 1989; RFC 1087). These statements centered the "end"-user as the site at which to locate the responsibility for appropriate network use, rather than calling for proposals to imagine what designing for accountability could mean for the internals of the network.] This End-to-End ethos has arguably carried forward to the present more generally in computing, with Zittrain (2014) recasting End-to-End as the "procrastination principle" to capture the idea that "most problems could be solved later and by others." [Endnote 5: "The procrastination principle rests on the assumption that most problems confronting a network can be solved later or by others" (Zittrain, 2014). Zittrain discusses the principle in terms of the network in this blog post, and discusses its broader import for software engineering in Zittrain (2008).] One can see this ethos in several places: evoked in Google's now-retired motto, "Don't be evil," a rhetorical move that could be read as punting accountability for the behavior of its products to individual engineers; in evolving conversations concerning balancing accountability and unconstrained user behavior with respect to platform content moderation (Klonick, 2020; Gillespie, 2010; Kosseff, 2022; Citron and Solove, 2022); and in corporate and institutional ethics statements targeted at individual engineers to use machine learning (ML) and artificial intelligence technology responsibly.

Among the many possible narratives that get lost in the "End-to-End" telling is the story of a difficult administrative, technical, economic, and political problem at the heart of the ARPANET: distributed accounting. In the late 1980s, David Clark, one of the trio behind the original 1981 End-to-End paper, reflected back on the experience of designing the ARPANET and Internet protocols and described a set of goals that guided the design principles of the network. The top-level goal was to create a means of interconnecting existing computer networks, supporting the connection of different devices and pre-existing communication protocols (Clark, 1988, pp. 1-2). It is in relation to this top-level goal that End-to-End was often celebrated as an elegant solution. But Clark went on to articulate a list of 7 "secondary goals." Last among the listed secondary goals was that "resources used on the network must be accountable" (Clark, 1988, pp. 1-2). Clark refers to this as the goal of "accountability" and cites earlier work on the ARPANET by Vint Cerf and Bob Kahn that already noted the need for a network to provide such features. The 1974 Cerf and Kahn paper in question, however, only makes mention of "accounting" of traffic in relation to the potential to charge for network use (Cerf and Kahn, 1974, p. 2). In this paper, we propose to take seriously the slippage between Clark's 1988 use of "accountability" and the subordinate, discounted goal of accounting for resources. We argue that matters of accounting were never merely about charging and billing, nor were they straightforward—they raised a variety of questions that form a theory of accountability in a networked environment. [Endnote 6: Daniel Neyland has argued that, in the context of Machine Learning algorithms, there is an intersection between the two registers of the term "account," suggesting that calls to make algorithms more accountable should take note of system features that make technical systems account-able (Neyland, 2016). We similarly focus on the intersection of these two registers, but, by taking a historical approach, we go beyond suggesting a connection; we see how the historical actors designing the technical system that is the Internet, over time, came to define the meaning of accounting as constitutive of accountability.] We trace how accounting as a feature of the network was routinely debated, deferred, and discounted among the networking community. In tracing the shifting meaning of accounting in the first two decades of network computing, we offer a retelling of Internet histories, locating the unresolved problem of accounting as the obverse of the "End-to-End" elegant-solution story.

1.2. Research Method and Contribution

We enlist a variety of secondary sources, including books and papers by historians and Science and Technology Studies (STS) scholars. However, our main focus is primary sources drawn from the self-described "old boys network" (Abbate, 1999, p. 54) that participated in the architecture discussions and implementation of the ARPANET, the Internet's US predecessor, during the 1970s. We then follow this community's writing in the early days of the Internet through the 1980s. These sources include contemporary retrospectives of Internet history written by networking researchers like David Clark, as well as early networking conference proceedings, journal articles, and technical reports. Chief among these primary sources is a corpus of Internet Requests for Comments (RFCs): a chronologically-ordered series of documents posted publicly and hosted online by the Internet Society. [Endnote 7: Originally, the RFCs were necessarily circulated on paper through the physical mail system, and later were hosted at Stanford Research Institute (SRI) (Abbate, 1999). The current rfc-editor tool is available at https://www.rfc-editor.org/.]

We scraped and filtered the entire corpus to find instances of RFCs that mention "accounting" and "accountability." Our process was not just a matter of collecting and counting all documents from the first RFC to mention "accounting" (Crocker et al., 1970, p. 3) [Endnote 8: RFC 33 represents the first mention of accounting; it added a notion of accounting to the earlier host-host protocol, originally defined in 1969 in RFC 11.] to the first RFC that uses the word "accountability" (Garlick, 1976, p. 2). [Endnote 9: While RFC 721 in 1976 is the first to use the word "accountability," arguably it was RFC 808 in 1982, documenting a January 10, 1979 meeting at BBN, that was the first to distinguish between "accounting" and "accountability" explicitly (RFC 808, p. 3): "There was some general discussion of the impact of personal computers on mail services. The main realization being that the personal computer will not be available to handle incoming mail all the time. Probably, personal computer users will have their mailboxes on some big brother computer (which may be dedicated to mailbox service, or be a general purpose host) and poll for their mail when they want to read it. There were some concerns raised about accountability and accounting" (emphasis added).] Rather, we performed a detailed reading to produce an understanding of how accountability evolved conceptually from accounting over time. By our accounting, accounting moved through being understood as billing (Section 2), as measurement (Section 3), as management (Section 4), and as policy (Section 5), at which point accounting was understood as constitutive of accountability, and the lack of accounting features within the network's design was understood as an impediment to the network's operations. The rest of the paper is organized around these 4 phases, with some overlapping chronological progression. We begin with documents dating from before the first RFC—prior to the proposal of the ARPANET project (Roberts, 1967)—and trace the changing meaning of accounting to accountability through 1990, when the ARPANET was decommissioned (Abbate, 1999, p. 195; Cerf, 1989). We scope our project to this period because, while concerns about accountability clearly extend through to present day discussions of Internet governance (Klonick, 2020; Clark, 2018; Citron and Solove, 2022), we found that it was during this time period that a notion of an (un)accountable network first came about and took hold as a defining characterization of the Internet.

By tracing this historical arc and identifying the changing meanings of accounting in the ARPANET and early Internet, we offer three contributions for the study of accountability in technical systems. First, we link together the administrative and technical mechanisms of accounting for shared resources in a distributed system and an emerging notion of accountability as a social, political, and technical category, arguing that the former is constitutive of the latter. Second, we characterize a research dynamic among the technical community we studied that deprioritizes the development of administrative tools and accounting infrastructure, treating it as existing somewhere beyond the scope of their work as a matter of "policy," or as subordinate to their core research objective. Third, this retelling of the early history of the Internet offers not only a corrective for how we view its particular development, but also significant lessons about the role of having institutional structures in place, and of designing for and around their administrative needs, when building new accountable technological systems.

2. Accounting as billing: The double-bind of sharing networked resources

"ARPA will not pay for the coffee and pastry being served, so please chip in to help me pay for it" (Edwin W. Meyer, Jr., 1970, p. 2). This was early-Internet engineer Steve Crocker's introductory remark to the Network Working Group (NWG) of the ARPANET project at its November 16, 1970 meeting in Houston, Texas. While this may seem like an inane detail about NWG bookkeeping, it in fact highlights the moment when a major shift began concerning who was paying for using ARPA resources, and what exactly constituted the use that needed to be paid for. That is, while ARPA's Information Processing Techniques Office (IPTO) had comprehensively funded its researcher-contractors' logistical and capital expenses since its inception in 1962, by 1970, IPTO-funded labs felt the purse strings tighten. Low-level, seemingly incidental operational costs like a working group's coffee bill warranted space in the official NWG meeting record. Amid the gravitas of what its members even at the time recognized would be pivotal discussions concerning the architecture of the first distributed computing network, the question of who pays for the coffee was never too far from the question of who pays for computing (Edwin W. Meyer, Jr., 1970, p. 18). [Endnote 10: Starting in 1962, IPTO, situated within the broader ARPA umbrella, essentially funded computer science research centers across the US (including at MIT, UCLA, and Carnegie Mellon), "often outspending universities significantly" in terms of research support (Abbate, 1999, pp. 36-37, 44, 56). These research centers followed the local time-sharing computing paradigm (Carr et al., 1970). Also in 1962, J. C. R. Licklider and Welden E. Clark wrote the piece on "On-line man-computer communication" (Licklider and Clark, 1962) that is frequently cited as the first piece to discuss communicating, interconnected computers that can share data and programs (Leiner et al., 2009; Leiner et al., 1997). For more on the 1960s development of a notion of an interconnected computer network, both by Licklider and among other researchers, see Abbate (1999, Ch. 1), Turner (2006), and Aspray (2008).]

Talking about the cost of resources was new for beneficiaries of IPTO's funding. Such operational details had not been a concern when there was no network through which computing resources were to be shared. IPTO had purchased computers for its contracted computing sites, which were working on local, site-specific projects such as automated theorem proving at Stanford Research Institute (SRI) and natural language processing at Bolt, Beranek, and Newman (BBN) (Roberts and Wessler, 1970, p. 548; Abbate, 1999, p. 44). [Endnote 11: IPTO/ARPA also leased communication lines from common carriers, to serve as the physical connection medium between remote nodes (Roberts, 1967).] Having paid for these computers up front, IPTO was not particularly concerned about the low-level specifics of how they were used. [Endnote 12: As Abbate notes, this research ethos came from the very top of the US federal government; President Johnson supported basic research in universities, as opposed to translational or "mission-oriented," "narrowly-defined" projects (Abbate, 1999, p. 37).] From the perspective of IPTO-contract-site researchers, this hands-off policy enabled conditions of unrestricted, free usage; it seemed like contracting with ARPA was easy money, as the funding seemed to come with "few strings attached" (Abbate, 1999, p. 77).

However, this status quo of unchecked use was not to last. IPTO's original mandate included the goal of eventually connecting its funded sites, even prior to the specific proposal of the ARPANET (Leiner et al., 1997; Roberts and Wessler, 1970; Marill and Roberts, 1966). [Endnote 13: The ARPANET proposal ultimately covered more specific goals: a network for load sharing, message service, data sharing, program sharing, remote service, specialized hardware, specialized systems software, and scientific communication, and described the basic operation of such a network to "foster the 'community' use of computers" (Roberts, 1967, p. 2).] By the end of 1971, when ARPA was completing the first phase of the ARPANET's construction to connect 15 Interface Message Processors (IMPs), IPTO's vision became a funding precondition (Heart et al., 1970). At this time, IPTO appealed to its contractors to no longer just use their computers as local lab resources, but rather to exercise their connection to the network (Abbate, 1999, pp. 44-46, 55; Roberts and Wessler, 1970). [Endnote 14: This notion of resource-sharing was, at least at first, considered the distributed analogue of the then-current local time-sharing computing paradigm: "The goal of the computer network is for each computer to make every local resource available to any computer in the net in such a way that any program available to local users can be used remotely without degradation. That is, any program should be able to call on the resources of other computers much as it would call a subroutine. The resources which can be shared in this way include software and data, as well as hardware. Within a local community, time-sharing systems already permit the sharing of software resources. An effective network would eliminate the size and distance limitations on such communities" (Roberts and Wessler, 1970, p. 543). See also Carr et al. (1970, p. 589), a conference version of RFC 33: "However, early time-sharing studies at the University of California at Berkeley, MIT, Lincoln Laboratory, and System Development Corporation (all ARPA sponsored) have had considerable influence on the design of the network. In some sense, the ARPA network of time-shared computers is a natural extension of earlier time-sharing concepts." Also, see generally Marill and Roberts (1966).] All 15 of these sites housed other ARPA-funded computing projects; as a result, even if networking was not the focus of every site's individual projects, each site was expected to participate (Abbate, 1999, pp. 50, 77-78, 161; Roberts and Wessler, 1970, p. 548). In other words, IPTO needed its contractors to utilize the network it had invested in building, in order to test the network's potential for distributed computing—for two or more remote nodes to effectively work together to complete computational tasks. So, starting in late 1970, the perception of "no strings attached" funding began to crumble (Crocker, 1970; Postel, 1970; Edwin W. Meyer, Jr., 1970; Watson, 1971; Heafner and Harslem, 1971). It was becoming clear that ARPA's funding did in fact come with a particular yoke: Contractors did not just have to connect to the ARPANET; they also had to use it.

This pivotal moment, nearly 10 years after IPTO's founding, marked when it became possible to move from theory to practice—to empirically validate IPTO's commitment to resource-sharing via the early ARPANET. Historian Janet Abbate discusses how this promise faded rather quickly: "the decline of the ideal of resource sharing" came as the result of the ARPANET's usability issues (Abbate, 1999, p. 104; Pickens, 1972, p. 5). [Endnote 15: Usability was a key concern for the adoption of the ARPANET. For more on evaluating the "friendliness" of the network, see RFC 369; see RFC 451 and RFC 666 for defining the unified user level protocol (UULP)—a proposed solution to ARPANET usability issues, which suggested a command language for "user convenience," "'resource sharing'," "economy of mechanism," "front-ending…onto existing commands," "accounting and authorization," and "process-process functions" (RFC 666, pp. 1-2). In other words, UULP was an attempt to come up with a single protocol that would help with network usability functions.] While connecting to the network had been a grueling engineering task, it was just the beginning of the ARPANET's resource-sharing challenges. Once connected, it was difficult to locate specific resources in the network, and lingering interoperability issues meant that, even once a resource was found, it often remained unclear how to access it (Edwin W. Meyer, Jr., 1970, pp. 5-6; Pickens, 1972, p. 6; Padlipsky, 1973a). Abbate thus concludes that resource-sharing seemed more onerous than it was worth, such that the "demand for remote resources fell," leaving "many sites rich in computing resources…looking for users" and the ARPANET a technology in search of an application (Abbate, 1999, p. 104).

But the usability of a network is itself constructed through choices regarding its administrative infrastructure. Seeking to unpack Abbate's invocation of "usability," we argue that the challenges of resource-sharing, and the changes it produced in the use of IPTO-funded computers, can also be ascribed to seemingly mundane (but in fact very difficult) issues of bookkeeping. Even though ARPA continued to foot the entire bill for the ARPANET, both in terms of capital and communication costs (Abbate, 1999, pp. 85, 161), from the perspective of individual research sites, resource-sharing constituted a sacrifice—a loss of the unrestricted, free local computer use that had been the status quo. [Note: It is possible that this was just a perceived sacrifice of local resources for collective external use, with no actual scarcity of computing resources for those that wanted them. Nevertheless, even if not an actual sacrifice, as we discuss, there was a reluctance even to agree to share local resources with the network.] That is, by requiring sites to reallocate a portion of their computing resources for distributed use, resource-sharing seemed akin to ceding control of one's own local computing budget to remote users with their own respective, perhaps even competing, needs. [Endnote 16: Abbate writes of this briefly, saying that resource-sharing seemed like an intrusion on local research, and that PIs would rather continue local operations than collaborate on the ARPANET (Abbate, 1999, p. 50).]

As a result, even though IPTO required them to use the ARPANET, many contractors exhibited unwillingness to do so, wondering how to prioritize local and remote use. Richard G. Mills, the director of MIT's information processing services, succinctly captured this hesitancy, saying: "There is some question as to who should be served first, an unknown user or our local researchers" (Abbate, 1999, p. 226). ARPA was no longer covering all operational costs: It was not paying for the coffee and pastries, and it was not compensating for the loss of previously unrestricted local resources now being shared with others in the network. Inducing resource-sharing therefore exposed a fundamental, underlying tension in distributed computing. Individual labs may not have wanted to give up their local resources, but they also recognized the potential value of being able to use other labs' resources. One suggested solution to this tension was for site administrators to bill remote users in order to recoup losses or disincentivize remote use (Edwin W. Meyer, Jr., 1970, p. 7). [Endnote 17: For example, in December 1970, Douglas Engelbart noted "We are pretty sure that eventually SRI will have to charge because of many potential users not at primary sites seeking limit[sic] resources" (RFC 82, p. 7).] As J. Pickens asserted in RFC 369, "if distributed computing is allowed, then distributed billing is a necessity" (Pickens, 1972, p. 6). [Endnote 18: It is worth noting that inherent issues with consistency between nodes in a distributed system further complicate the mechanics of correct accounting. While consistency in distributed databases is now known to be a topic of fundamental importance in distributed computing, it seems that this issue was first noted in RFC 677 in relation to maintaining duplicate databases for correct, consistent Terminal Interface Processor (TIP) accounting. RFC 677 notes that its contents go beyond "ARPA-like networks" and "are generally applicable to distributed database problems" (RFC 677, p. 1), discussing issues of partition tolerance (RFC 677, p. 3) and consistency (RFC 677, p. 4), as well as how timestamps can be useful for maintaining consistency because they are monotonically increasing (noting, however, that this can be complicated by clock skew between nodes). For a more contemporary treatment of these issues, refer to Abadi (2012). See also the sketch following this paragraph.]
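To make the consistency problem flagged in the endnote above concrete, the sketch below shows a minimal last-writer-wins merge of duplicated accounting records keyed by timestamp, in the spirit of RFC 677's timestamp-ordering idea. The names (`UsageRecord`, `merge_replicas`) and the schema are our own illustrative inventions, not anything specified in the RFC, and the sketch deliberately ignores the clock-skew caveat the RFC itself raises.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UsageRecord:
    """One accounting entry for a (account, host) pair. Hypothetical schema."""
    account: str
    host: str
    cpu_seconds: float
    timestamp: float  # assumed monotonically increasing per node, per RFC 677

def merge_replicas(*replicas: dict) -> dict:
    """Last-writer-wins merge of duplicate accounting databases.

    Each replica maps (account, host) -> UsageRecord. The entry with the
    highest timestamp wins, mirroring RFC 677's use of timestamps to keep
    duplicate databases consistent. Clock skew between nodes can break the
    'latest timestamp == latest write' assumption, as the RFC notes.
    """
    merged: dict = {}
    for replica in replicas:
        for key, record in replica.items():
            current = merged.get(key)
            if current is None or record.timestamp > current.timestamp:
                merged[key] = record
    return merged

# Example: two TIP replicas that diverged; the later entry wins.
a = {("alice", "sri"): UsageRecord("alice", "sri", 12.0, timestamp=100.0)}
b = {("alice", "sri"): UsageRecord("alice", "sri", 15.5, timestamp=108.0)}
print(merge_replicas(a, b)[("alice", "sri")].cpu_seconds)  # 15.5
```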

For labs to charge for the use of their resources represented a fundamental shift in the management paradigm of the ARPANET. Until this point, there had been no need to do low-level accounting of individual line items of resource usage—of who was using (or even misusing) specific parts of the network—because ARPA was managing the overarching cost. Billing, however, now imposed a new burden of operational costs on individual sites, for which low-level bookkeeping was going to become crucial for the first time. In this section, we show how the challenges presented by the need to develop accounting mechanisms structured the ambivalence towards resource-sharing in the early ARPANET. Accounting posed a non-trivial problem, [Endnote 19: Local time-sharing computers already "possess[ed] elaborate and definite accounting and resource allocation mechanisms" (RFC 33, p. 5), but these did not naturally extend to the distributed resource-sharing environment. As RFC 504 asked, "If you employ accounting Procedures that require cost recovery, how, if at all, should they be modified to work in a network resource sharing environment?" (RFC 504, p. 4).] which researchers neither knew how to solve, nor really desired to spend time solving in place of performing their individual research.

2.1. Explicit and implicit billing: The case of “free” file transfer

The debate over "free" file transfer gives an intuition for the complexity of accounting in the distributed resource-sharing environment. File transfer was (and remains) one of the basic functions of resource-sharing over networked computers: it moves files from one node to another, enabling distributed sharing among remote users. The File Transfer Protocol (Bhushan, 1972) first described this capability, which consumes resources, including memory and CPU time, and was thus a prime candidate feature for billing. [Endnote 20: To enable billing, RFC 385 added an account (ACCT) command to the FTP protocol, in order to distinguish the account used for resource accounting as a different entity from the user logged onto the network.] Nevertheless, despite the apparent necessity of billing to support the network, many individual users wanted to avoid payment. When using FTP, they leveraged a loophole: the MAIL FILE feature, which allowed for bypassing login on the remote host that housed the file of interest, and made it possible for a user to mail the file to themselves, in place of transferring it via TELNET-based connections. [Endnote 21: For an early RFC recognizing the important relationship between login/authentication and accounting, see RFC 399, which introduced an optional login function to file transfer with arbitrary user and account fields as a way to model file transfer billing prior to implementation.]
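For a concrete sense of where the ACCT hook sits today, the sketch below uses Python's standard ftplib, whose login() call still carries the optional account field descended from RFC 385's ACCT command. The host name and credential strings are placeholders, and modern servers rarely do anything with ACCT; this is an illustration of the protocol surface, not a reconstruction of any ARPANET-era implementation.

```python
from ftplib import FTP

# Hypothetical host and credentials, for illustration only.
HOST = "ftp.example.org"

ftp = FTP(HOST)
# ftplib's login() can follow USER and PASS with RFC 385's ACCT command
# (it sends ACCT when the server asks for an account), so the account
# billed for resource usage can differ from the logged-in user.
ftp.login(user="kpogran", passwd="secret", acct="network-services")

# The same command can also be issued directly on the control channel:
# ftp.sendcmd("ACCT network-services")

ftp.retrbinary("RETR report.txt", open("report.txt", "wb").write)
ftp.quit()
```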

In response to the popular use of FTP's accounting loophole, Rob Bressler, a network protocol developer at BBN, issued an RFC calling to codify a more appropriate accounting-free FTP use pattern: the implementation of free, loginless file transfer, which would give users the free access they wanted without abusing FTP's intended use (Bressler, 1973). Rather than using MAIL FILE, his proposal expressly allowed users to bypass authentication via a deliberate loginless facility; without a login, it would not be possible to account for who was transferring the file. The transfer would be "free," as it would not be possible to bill an account for it. [Endnote 22: On hardware for which such loginless access was not possible, such as TENEX machines, Bressler proposed adding an account named "FREE," which could be used to avoid billing to specific accounts (Bressler, 1973).] Bressler's proposal for an intentional, free file transfer feature immediately raised questions about what it meant for resource usage to be "free" (Padlipsky, 1973). Notably, fellow BBN network engineer Ken Pogran rebuffed Bressler's RFC for making "sweeping assumptions…about the nature and use of accounting mechanisms" (Pogran, 1973, p. 1). Pogran resolved to "un-muddle" so-called "free file transfer," making the case that the resource usage involved in FTP (deemed by Bressler as negligible) was in fact quite costly. Pogran made clear that nothing was actually free from the perspective of the site whose resources were being used. [Note: For example, while the CPU utilization for transferring one file might seem negligible, over time such costs add up and certainly cannot be called "free" (Pogran, 1973, p. 4).] In short, Pogran argued, nothing is free if you are the one who has to worry about costs. [Endnote 23: Moreover, beyond the issue of assuming that FTP's CPU utilization is free, there are other technical issues with Bressler's proposal. Bressler noted that it may be necessary to charge for file storage, even if not for the transfer itself. He claimed that this was still possible even with his free, loginless scheme, as it was possible to charge storage to the account of the user who owned the directory where the file was ultimately saved (Bressler, 1973). However, as Pogran pointed out, FTP enabled a user to send a file to any other user; thus, the user performing the transfer could (at this time, given limited access control) save the file to another user's directory, thereby evading payment and making another user responsible for it. Pogran asked, "should the recipient of Network mail be charged for the resources consumed by someone else in sending it? (Would you care to pay the postage for all the junk mail that arrives in your home (U.S. Mail) mailbox?)" (Pogran, 1973, p. 1).]

Yet, while Pogran challenged what it meant for resource usage to be called "free," he did not claim that such actually not-"free" usage should be disallowed. Rather, he contended that "free" should mean that resource usage was free for a user at a research site—and that such "free" usage should be charged to an overhead "network services account" that ultimately got billed to ARPA (Pogran, 1973, p. 3). In other words, while Pogran seemed to have taken a more nuanced view than Bressler about the costs of resource-sharing, RFC 501 does not ultimately "un-muddle" the issue of distributed accounting. Like Bressler, Pogran also saw merit in the approach of avoiding the particulars of accounting; he, too, found it desirable for researchers like himself not to be concerned with the minutiae of how costs got covered. However, unlike Bressler, Pogran made explicit that ignoring costs did not simply make them go away. Rather, he highlighted how explicitly implementing mechanisms to evade accounting corresponded to the status quo of ARPA being on the hook for the bill. The exchange between Bressler and Pogran underscores the same contradiction: Both talked about accounting as necessary to recoup local site losses due to resource-sharing, but both also affirmed the common desire of ARPANET researchers not to pay to use remote resources. The ad hoc strategy of a "free" account would prove infeasible in the longer term, when the network was expected to host nodes and users not funded by ARPA. [Endnote 24: Pogran realized this issue, and hinted at the future problem of other, potentially commercial, non-ARPA players needing to be billed for resource usage on the ARPANET (Pogran, 1973, p. 3). Moreover, neither Pogran's nor Bressler's RFC addressed the contemporary problem that designated "FREE" accounts could be, and often were, eliminated by the hosts, disrupting the ability to resource-share across hosts altogether (RFC 369, p. 6).]
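Pogran's proposal amounts to a routing rule for charges rather than their elimination; the sketch below renders that rule in a few lines of Python. The account names and the bill_to() helper are hypothetical, introduced only to show how "free" usage still lands on some ledger (here, an ARPA-billed overhead account).

```python
# Hypothetical charge-routing rule in the spirit of Pogran's RFC 501:
# "free" usage is not dropped; it is billed to an overhead account
# that ARPA ultimately pays, rather than to the requesting user.

OVERHEAD_ACCOUNT = "network-services"  # the ARPA-billed overhead account

ledger: dict[str, float] = {}

def bill_to(account: str | None, cpu_seconds: float, rate: float = 0.05) -> None:
    """Charge usage to a user's account, or to overhead when none is given."""
    target = account if account is not None else OVERHEAD_ACCOUNT
    ledger[target] = ledger.get(target, 0.0) + cpu_seconds * rate

bill_to("kpogran", 120.0)   # an ordinary, logged-in transfer
bill_to(None, 45.0)         # a "free," loginless transfer: ARPA still pays
print(ledger)  # {'kpogran': 6.0, 'network-services': 2.25}
```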

3. Accounting as measurement: Contesting the necessary functions of networked computing

The debate over "free" file transfer demonstrates that deciding how to classify what resource usage needed to be accounted for was a challenging and contentious problem. As IPTO pushed for resource-sharing more actively, it was not clear what needed to be accounted for. Making the decision to explicitly build workarounds to avoid accounting—folding some types of resource-sharing costs back to ARPA—ultimately obscured the complex and pervasive role of accounting in a distributed, resource-sharing environment. The architects of the ARPANET repeatedly punted on designing mechanisms for accounting. Even during the ARPANET's earliest years, around the completion of the connection of the first part of the network in 1971, accounting had frequently come up as a necessary function, albeit one with unclear requirements (Crocker, 1970; Postel, 1970; Edwin W. Meyer, Jr., 1970; Watson, 1971). [Endnote 25: At this time, as noted by Richard W. Watson, "The advanced Host-Host protocol study committee is looking at the accounting problem. There was a brief mention made of a network banking system. Bob Kahn of BBN indicated that he would start a dialog on the subject of accounting by producing a paper putting down the issues as he sees them" (RFC 101, p. 13). It was this paper that became RFC 136, discussed in greater detail in this section.] In response, Bob Kahn, co-lead of the ARPANET protocols team, prepared RFC 136: the first unified attempt to clarify accounting's role in resource-sharing. [Endnote 26: RFC 136 was concurrent with discussions about how to distinguish between billed Research Centers and free (but limited) Service Centers (Appendix A). Notably, this predates the debate over "free" file transfer discussed in Section 2.1, indicating that the problems RFC 136 raised remained unresolved and carried over into later debates such as FTP.] He raised ten crucial, as-yet-unanswered questions related to accounting, ranging from the potential future private control of the ARPANET, through government regulation of network use, to how resource usage should be measured and characterized. These questions, in attempting to clarify what accounting requires, instead serve to clarify just how complicated and expansive accounting is: While accounting clearly involves billing, billing is not the only component of accounting. [Endnote 27: Adding an "account" field to different network functions perhaps could help facilitate accounting, but it was not sufficient on its own to capture the wide-ranging semantics of accounting implicated by Kahn's considerations, outlined in RFC 136. Such "account" syntax includes, for example, the aforementioned addition of an "ACCT" command to the FTP facility (RFC 385), as well as a notion of account ID in the socket format ("A socket is defined to be the unique identification to or from which information is transmitted in the network" (RFC 147, p. 2)). See RFC 101 and RFC 129 for early RFCs concerning socket description, particularly in relation to identification at the user level; see RFC 147 concerning an account field in the socket definition.]

Notably, several of Kahn's RFC 136 questions concerning accounting implicated the ability to take measurements of network activity. [Endnote 28: "The method of network operation and the potential for its growth are relevant factors to be considered in formulating a plan for Host accounting. For example, the answers to the following questions provide a useful background for reference: 1. Who or what operates the Network? 2. What is the criteria upon which new sites should be incorporated into the Network? 3. What regulations, if any, apply to the connection of non-ARPA sites? 4. What is the relation, if any, between the ARPA Network and common carrier services? 5. What procedures are required to bring new sites on board and up to speed? 6. What is the most effective way to characterize their Resources? 7. What usage of other Network resources do they anticipate? 8. What procedures will be required for a typical user to obtain access to that Host? 9. What is their charging policy and for what items? 10. Are their rates in accordance with government standards?" (RFC 136, p. 1).] Prior to this RFC, accounting and measurement had generally been considered separate—albeit both necessary—functional concerns (e.g., Karp, 1971, p. 19), neither of which had been solved by the ARPANET architects. While accounting was conceived of as billing and treated as a nuisance to be kept separate from research (Section 2.1), [Note: We can see this treatment of accounting as separate from research both in the debate over "free" file transfer (Section 2.1) and in the distinction between research sites and service sites (Appendix A).] measurement was afforded the status of being integral to research. Measurement was a necessity for those "interested in the network as an object of study" (Postel, 1970, p. 2; Edwin W. Meyer, Jr., 1970), while accounting was not considered to have such a central role.

The earliest example of measurement's importance as a research function concerns the work of Gerry Cole at UCLA. As early as 1971, Cole collected and analyzed network data in order to better understand resource usage patterns in the ARPANET's novel distributed environment (Watson, 1971, p. 2). [Endnote 29: One early RFC noting this project read: "Gerry requested that when people are set up to use the Network, they inform him so that he can gather statistics. UCLA will eventually have a program to scan the Network for utilization, but if people could tell him when they were going on to use the Network, it would be easier to measure meaningful things and interpret the data from a knowledge of type of usage" (RFC 101, p. 2).] Shortly after these initial measurement efforts, BBN took over the role more formally and set up the Network Control Center (NCC), led by Alex McKenzie, to measure network statistics. [Endnote 30: In RFC 101, Bob Kahn was recorded to have mentioned that BBN had an interest in collecting measurements on the network (RFC 101, p. 2). Abbate's work confirms this. She documents Alex McKenzie's role at BBN: He joined the ARPANET project when BBN's ARPANET node went online as part of the November 1970 deployment push (RFC 77; RFC 82), and in 1971 he took charge of the NCC (Abbate, 1999, pp. 64-67).] The NCC monitored all nodes attached to the ARPANET at this time, and assumed the role of ensuring that the entire network ran smoothly by documenting reliability issues, debugging and diagnosing malfunctions, and monitoring resource usage on the network (Abbate, 1999, pp. 64-67, 72). [Endnote 31: As Abbate noted: "the NCC acquired a full-time staff and began coordinating upgrades of IMP hardware and software. The NCC assumed responsibility for fixing all operational problems in the network, whether or not BBN's equipment was at fault. Its staff monitored the ARPANET constantly, recording when each IMP, line, or host went up or down and taking trouble reports from users. When NCC monitors detected a disruption of service, they used the IMP's diagnostic features to identify its cause. Malfunctions in remote IMPs could often be fixed from the NCC via the network, using the control functions that BBN had built into the network" (Abbate, 1999, p. 65); "By 1976, the Network Control Center was, according to McKenzie, 'the only accessible, responsible, continuously staffed organization in existence which was generally concerned with network performance as perceived by the user.' … The NCC had become a managerial reinforcement of ARPA's layering scheme" (Abbate, 1999, p. 66, internal citations omitted).] The NCC was not limited solely to conducting research on network utilization, but was responsible for the network's de facto operation, a role we discuss in Section 4.

As Bob Kahn suggested in RFC 136, these measurement functions, aside from playing a role in developing an understanding of network use, were an essential component of accounting (Kahn, 1971, p. 1). He made clear that it would not be possible to account for resource usage without having appropriate mechanisms in place to measure that usage. Accounting-related measurements would not just involve the "who, what, where, and when" of network use (Crocker et al., 1973, p. 3); they would also involve metrics concerning site performance, such as user response times and frequency of crashes (Pickens, 1972). [Endnote 32: Such metrics would be important for accurately accounting for past resource usage (RFC 585, p. 5), and would come to be recognized in the 1980s as important for enabling individual sites to predict future resource usage and corresponding costs (RFC 869, pp. 1-3, 8, 27, 44).] By indicating overlap with functions like measurement, the accounting-related considerations Kahn raised in RFC 136 implicated fundamental questions about what the network should do, and how it should be implemented.
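The "who, what, where, and when" of accounting-grade measurement can be pictured as a record schema. The sketch below is a minimal, invented illustration (the field names are ours, not from any RFC) of how usage data and the performance metrics named above might land in one structure once measurement is treated as part of accounting.

```python
from dataclasses import dataclass

@dataclass
class AccountingMeasurement:
    """One illustrative accounting-oriented measurement record.

    Combines the "who, what, where, and when" of network use with the
    site-performance metrics (response time, crashes) that RFCs such as
    369 and 869 tied to accounting. The schema is hypothetical.
    """
    who: str                 # account or user responsible for the usage
    what: str                # resource used, e.g. "FTP transfer", "CPU"
    where: str               # host/site where the resource was consumed
    when: float              # timestamp of the usage (seconds since epoch)
    response_time_ms: float  # user-visible responsiveness of the site
    crashed: bool            # whether the session ended in a failure

# A site could aggregate such records both to bill past usage and to
# predict future demand, the two roles the RFCs assign to measurement.
records = [
    AccountingMeasurement("kpogran", "FTP transfer", "bbn", 1000.0, 220.0, False),
    AccountingMeasurement("engelbart", "CPU", "sri", 1012.0, 640.0, True),
]
total_failures = sum(r.crashed for r in records)
print(total_failures)  # 1
```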

These considerations raised not only practical challenges, but also ideological questions about the network's purpose. The act of performing accounting itself consumes network resources; it costs something to account for costs. The early ARPANET architects considered these costs overhead. They worried that accounting "costs space" (Edwin W. Meyer, Jr., 1970, p. 3) and introduces "undue delays in accessing distributed resources" (Watson, 1973, p. 4). They viewed the resource consumption involved in accounting as an imposition that was in tension with the ARPANET's fundamental goal of distributed resource-sharing (Abbate, 1999, pp. 96-97; Roberts and Wessler, 1970). Accounting therefore presented a seemingly irreconcilable contradiction: On the one hand, ARPANET architects repeatedly acknowledged that accounting was critical to facilitate interconnection between distributed nodes; on the other, they viewed accounting as hostile to that very same goal.

This view of accounting as a burden to the network can be understood in relation to the End-to-End architectural principle guiding the construction of the ARPANET. As mentioned in Section 1, from a technical perspective, End-to-End is a preference toward parsimony in the network—of placing application-specific functionality where it is needed at the end hosts, rather than implementing it inside the network as a feature accessible to all hosts. [Endnote 33: "The principle, called the End-to-End argument, suggests that functions placed at low levels of a system may be redundant or of little value when compared with the cost of providing them at that low level. … Low level mechanisms to support these functions are justified only as performance enhancements" (Saltzer et al., 1984, p. 1).] This principle biases toward only placing the essentials for connectivity inside the network, with the notable exception of features "justified only as performance enhancements" (Saltzer et al., 1984, pp. 1, 9). Accounting certainly could not be considered a performance enhancement. [Note: The ability to record measurements, which we note is fundamentally tied to accounting, is essential from the ideological perspective of End-to-End; it is not possible to justify performance enhancements without being able to measure performance.] In fact, as we have seen, ARPANET architects viewed accounting as a performance hit that should be kept "to a minimum" (Edwin W. Meyer, Jr., 1970, p. 3). Abiding by End-to-End, it would be "uneconomical" to include accounting within the network, rather than pushing its implementation to end hosts. The reluctance to implement accounting was therefore not just at the level of specific engineers who wanted to evade paying for FTP; rather, it reflected the significant ideological challenges that accounting presented for a network priding itself on its parsimony.

Consistent with an End-to-End approach, RFC 136 was scoped to "Host Accounting." Its intent was to address the issue as a matter for the end nodes rather than inside the network. Yet, toward the end of the proposal, Kahn posed a speculative question for the future, which can be read as in direct tension with End-to-End: "Should Host accounting information eventually flow via the Network?" (Kahn, 1971, p. 4, emphasis added). [Endnote 34: Given that the architects commonly recognized the importance of accounting, as was clear both in Kahn's own RFC (RFC 136) and those that preceded it (RFC 75; RFC 77; RFC 82; RFC 101), it was arguably reasonable for Kahn to question whether accounting was fundamental enough to be an in-network feature. In this context, one could conceivably read Kahn's question as a provocation: Will there be a time at which accounting will be important enough to violate End-to-End? Such a violation would have been unthinkable in 1971. Instead, Kahn recommended a more gradual, conservative approach: "the implementation of standard automated accounting procedures involving the use of the Network will be deferred until non-automated procedures have been understood and stabilized. Early experimentation in this area is appropriate, however" (RFC 136, p. 2).] However, since accounting was clearly not a "performance enhancement" (Saltzer et al., 1984, p. 1), for it to not violate End-to-End, it would have to be justified as an essential architectural feature of connecting distributed nodes. An attempt at such a justification would have run contrary to the RFCs at this time, which always list accounting as distinct from the technical requirements of connectivity (Crocker, 1970; Postel, 1970; Edwin W. Meyer, Jr., 1970; Watson, 1971). [Note: Even after RFC 136, accounting continued to be punted to some unspecified, but inevitable, point in the future. RFC 82 acknowledges that one possibility would be to "worry about accounting when saturation occurs" (Edwin W. Meyer, Jr., 1970, p. 7)—i.e., when resource usage reached an extent for which it would be absolutely necessary to do accounting, especially since at that time "non-ARPA folks [would] be able to connect" (Kahn, 1971, p. 4). Accounting could also be put off until a future in which ARPA was no longer responsible for the network infrastructure—when another government agency or a private commercial entity assumed the "total cost of operating" (Kahn, 1971, p. 2). It is thus reasonable to conclude that, even as early as 1971, commercialization loomed as a possible future for the ARPANET, at which point accounting would be absolutely necessary for reimbursement "on both a connect and usage basis" (Kahn, 1971, p. 4). In the interim, there remained no plan in place for developing accounting procedures.]
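To make the placement question concrete, the toy sketch below contrasts the two options Kahn's question gestures at: accounting done at the end host (consistent with End-to-End and RFC 136's "Host Accounting" scoping) versus accounting done inside a forwarding node. Everything here (the class names, the byte-counting "accounting") is an invented illustration, not a reconstruction of any ARPANET software.

```python
from collections import defaultdict

class EndHost:
    """End-to-End placement: each host accounts for its own usage."""
    def __init__(self, name: str):
        self.name = name
        self.bytes_by_account: dict[str, int] = defaultdict(int)

    def send(self, network: "ForwardingNode", account: str, payload: bytes) -> None:
        self.bytes_by_account[account] += len(payload)  # host-side accounting hook
        network.forward(self.name, payload)

class ForwardingNode:
    """The network core: under End-to-End it only routes.

    Uncommenting the tally line (and adding the corresponding counter)
    would move accounting inside the network, the placement Kahn's
    question hints toward.
    """
    def forward(self, src: str, payload: bytes) -> None:
        # self.bytes_by_src[src] += len(payload)  # in-network accounting (not End-to-End)
        pass  # just deliver the packet

host = EndHost("sri")
host.send(ForwardingNode(), account="engelbart", payload=b"file-chunk")
print(dict(host.bytes_by_account))  # {'engelbart': 10}
```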

3.1. The policy of no policy takes hold

In the years that followed RFC 136, the ARPANET fell short of achieving IPTO's goal of facilitating resource-sharing (Roberts, 1967). Abbate's canonical narrative attributes the decline of this ideal to the fact that the ARPANET was very difficult to use, especially for new users trying to join the network. [Note: See Appendix B for discussion situating Abbate's argument about the usability challenges facing early ARPANET users in relation to accounting.] Abbate argues that this created an identity crisis for the ARPANET; it was a technology in search of an application, a role that email was well-positioned to assume, as it did not suffer from the same usability issues that plagued resource-sharing (Abbate, 1999, p. 106, concerning the "smash hit" of email). While usability issues presented a challenge for the adoption of resource-sharing, we argue that the decline of resource-sharing can also find its roots in the inability to account for resource usage. With no mechanism to account for resource usage, it was not possible to realize the imagined potential of resource-sharing. To reiterate and rephrase Pickens' assertion: without the necessary functionality of distributed billing, distributed computing was not feasible.

Accounting was fundamental to the network, but was also fundamentally unresolved. It was non-trivial in the same ways that devising protocols for the use of shared resources was non-trivial: inescapably tied to measuring and managing network performance, and implicated in everything from resource allocation to quality of service. Tensions surrounding accounting extended well beyond the early 1970s (e.g., Padlipsky, 1973b; Johnson and Thomas, 1975; Postel, 1980; ISO, 1983; Leiner, 1987). In 1994, one RFC called accounting (grouped with security) "the bane of the network operator," but admitted that it is the feature "most requested…by those who are responsible for paying the bills" (Almquist and Kastenholz, 1994, pp. 149-150). More recently, David Clark referred throughout his latest book to the importance of accounting, yet also belittled it, saying it "only plays a supporting role compared to the core objective of forwarding data, and researchers like to work on the lead problem, not the supporting role" (Clark, 2018, pp. 46-47). As we argue in the next sections, these accounting tensions carried over into network-wide policy tensions. There were repeated attempts to minimize or redefine the role of accounting, often in the service of deprioritizing its implementation. In the process of attempting to preserve the "free" aspects of the early ARPANET, the network became a free-for-all.

4. Accounting as management: Tracing the boundaries of responsibility and authority

The ambivalence toward accounting’s role in the network persisted in the decade that followed the ARPANET’s expansion to more sites. Our discussion of the early years of the network argues that, even during a time of relatively low saturation, accounting emerged as a function with bearing on both billing and measurement. But by the mid-1970s, users’ demands for a more reliable network to support their research work showed that, without management, the network was not sufficiently operational. Later, the ARPANET’s connection with other networks that constituted the Internet, the “network of networks,” [Note: During the 1980s, “network of networks” was used interchangeably with the term “Internet” to emphasize the transition from talking about the ARPANET specifically to the workings of connecting ARPA’s sites with other existing networks. We discuss this shift later in this section. For example, in sketching out a proposal for an interagency research institute, Barry Leiner referred to the “‘Network of networks’ or Internet model of interconnection” (Leiner, 1987, p. 3).] put the possibility of interacting with mistrusted agents front of mind for the network protocol developers. This section explores how accounting, the flexible term for many administrative aspects of network development, was initially contrasted with the core need to create features for network management. By the late 1980s, however, descriptions of what would constitute effective network management explicitly referred to accounting as a main component. If the early case of FTP shows how even adding a data field that would allow for future billing carries with it meaningful decisions about the purpose of the network, the 1980s Internet discussions over network management proved that accounting raised serious questions about the boundaries of responsibility for, and authority over, elements of a networked system.

4.1. Management as failure: The visibility of humans at the Network Control Center

As Section 3 explored, there was no obvious way to measure activity on the network, and implementing accounting required tackling various trade-offs (for example, losing data). But over time, ARPANET engineers also faced users’ changing expectations of the network, dictating new needs for accounting that supported diagnostics. Alex McKenzie, who led the measurement efforts at BBN and established the Network Control Center (NCC) there (Section 3), later reflected on the ARPANET’s growing pains as users came to expect greater operational reliability from it: “Once a set of host protocols were defined to allow connections…it was remarkable how quickly all of the sites really began to want to view the network as a utility rather than as a research project” (Anderson, 1990, p. 11). That is, although the ARPANET was itself envisioned as an ongoing research project in network communication, it also became the infrastructure that supported other research activities. Providing that level of reliability required proactive management of usage issues.

Between 1972 and 1976 the NCC became the focal unit for troubleshooting issues on the ARPANET, gradually expanding the coverage of sites for which it offered support alongside the network’s expansion (Anderson, 1990, p. 15). [Note: Alex McKenzie later speculated that he was tasked with running the NCC during this period because of his personal belief that the ARPANET should become more of a utility, focused on operational issues rather than an experimental research project that tolerated routine disruptions, and that he would “take it very seriously if anything was broken.” For context, the extent to which things were broken can be seen in some data put forward in RFC 369, “Evaluation of ARPANET Services,” which surveyed a couple of months in early 1972, just preceding McKenzie’s role at the NCC. The survey highlighted that the reported mean time between failures was at best 2 hours and at worst 5 minutes. The average time of “trouble free operation” amounted to only 35%, which the RFC described as “a figure untenable for regular user usage” (Pickens, 1972, p. 4). Later on, McKenzie reported that the NCC considered its efforts to treat the network as a utility successful when providing reliability 98-99% of the time, which he conceded was not comparable to what constituted reliability for a utility such as electricity (Anderson, 1990, p. 15).] It stepped into this role “because the users really needed a single point of contact” in case any component of the network failed. [Note: We have previously discussed Janet Abbate’s argument about the “decline of the ideal of resource sharing” as a matter of neglected usability issues (see Appendix B). The role that the NCC occupied during this time highlights our point about the ways accounting activities (in this case, measurement data that informed diagnostics) constitute a significant aspect of “usability.” But it is also worth noting that the NCC’s responses to users’ demands did not concern creating interfaces for finding resources or navigating their use, but, more specifically, issues users had with figuring out whether a particular device or service was down for some reason. McKenzie describes the following scenario: “It’s absolutely no use to some geophysicist in Washington who’s trying to do something in the DARPA seismic monitoring program, to say, ‘Well, you can call the Network Control Center, if that doesn’t work call MITRE and ask about their modems, and if that doesn’t work call ISI, and ask about their computer.’ Nobody would work that way, I wouldn’t work that way myself” (Anderson, 1990, p. 13). Even when we focus our attention on the experience of users of the network, rather than its engineers, we see that accounting plays a significant role in the ongoing ability to access services reliably.] The bind facing the NCC team was that “even though [their] authority didn’t expand, [their] responsibility expanded quite a bit” (Anderson, 1990, p. 13). While the NCC had no authority over how a host on the network in another institution operated, they still found themselves answering for issues that arose with its use. The lack of authority that characterized the NCC’s efforts to “control” the network pointed to the need to develop clearer protocols for network management.

If the NCC was committed to taking network failure seriously and addressing operational issues as they arose, some ARPANET engineers believed that management itself was a sign that the research project of the ARPANET had failed. In his 2018 book, David Clark, devoting an entire chapter to the topic, defines management as “those aspects of network operation that involve a person” (Clark, 2018, p. 260). Defined in this way, through the need for human intervention, management is already set up to contrast with what network engineers might consider the core of their work. Clark, who has clearly come to view management as an important aspect of networking, points out that some engineers see management as a signal of design failure (as opposed to simply the temporary failure of a component of the network) because “a properly designed network should run itself” (Clark, 2018, p. 260). In other words, developing the infrastructure for accounting that would aid the NCC’s management role was to admit that the ARPANET required such ongoing, labor-intensive human operation. [Note: By the mid-1970s, the NCC constituted a 3-person team during East and West Coast business hours, and was further staffed by 1-2 people at other times, to offer 24-hour, 7-day-a-week coverage (Anderson, 1990, p. 13).]

From his present-day vantage point, Clark describes the network engineers who consider management a necessary aspect of any robust network as “pragmatists.” Throughout this paper, our argument has been that designing accounting mechanisms that would enable accountability is especially key for distributed technical systems: In the case of network failures, accounting would allow for inspection after-the-fact. Recognizing human operators performing “management” as part of the network is, therefore, not giving up on the project of network computing, but acknowledging the range of design decisions that have to be addressed to develop appropriate infrastructure. [Note: For example, already in 1981, Jon Postel proposed the Internet Control Message Protocol (ICMP), a means to send back information about a lost datagram, recognizing that the network’s protocols are “not designed to be absolutely reliable” (RFC 777, p. 1). These accounting functions of ICMP allowed individual users to probe the network to identify when a particular host was down, and they are used to this day as “ping” by both professional network managers and ordinary users of the Internet (Clark, 2018, p. 263).] But such minimal diagnostics implementations still provided a rather limited set of management tools. Throughout the 1980s, developing protocols to support recording data relevant to management became increasingly mainstream at the IETF. In 1988, Vint Cerf circulated a memo on behalf of the Internet Activities Board recommending the adoption of a common network management framework to avoid incompatibility issues across the network (Cerf, 1988, p. 4). This marked a decade-long shift from implementing a patchwork of accounting and monitoring tools such as ICMP to developing an entire conceptual language around the issues of network management.
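To make concrete the kind of minimal diagnostic that ICMP enabled, consider the following sketch (in Python, a language of our choosing; the hostnames are invented, and this illustrates the probing pattern that “ping” automates rather than any historical implementation):

    # Sketch: probing host reachability, the diagnostic pattern that ICMP's
    # echo request/reply ("ping") enables. Assumes a Unix-like `ping` binary
    # on the PATH (flag syntax shown is Linux iputils); hostnames invented.
    import subprocess
    from datetime import datetime, timezone

    def is_reachable(host: str, timeout_s: int = 2) -> bool:
        """Send a single ICMP echo request via the system ping utility."""
        result = subprocess.run(
            ["ping", "-c", "1", "-W", str(timeout_s), host],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0

    def probe(hosts: list[str]) -> dict[str, bool]:
        """Record, with a timestamp, which hosts answered the probe."""
        status = {host: is_reachable(host) for host in hosts}
        print(datetime.now(timezone.utc).isoformat(), status)
        return status

    probe(["example.org", "host-a.example", "host-b.example"])

A manager running such a probe on a schedule accumulates exactly the kind of accounting-adjacent record, of who was reachable and when, that the NCC otherwise had to assemble by hand.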

The new framework divided the meaning of management into five functional areas: “fault management, configuration management, performance management, accounting management, and security management,” later formalized as the FCAPS framework (Warrier and Besaw, 1989, p. 8) (Clark, 2018, p. 261). The framework at once narrowed the role of accounting management, defining it as making “it possible to charge users for network resources used and to limit the use of those resources” (Warrier and Besaw, 1989, p. 8), and expanded its centrality by implicitly recognizing that the tools needed to allow for accounting management were also needed to support a variety of other management functions.

Even in the seemingly discounted role of “accounting management” within the FCAPS framework, we can already see hints of the types of management decisions that went beyond network failure diagnostics. We have previously argued that accounting, conceived as billing, already surfaced a variety of questions about what it means to share resources over a network. Accounting as management recognized the potential need to limit access to resources. The developing language around network management in the 1980s marked a transition from the early days of management, which consisted of identifying failing components on the network and troubleshooting them, to allowing network managers to make decisions about network traffic and resource access, and to prioritize some users over others.
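To illustrate the dual role that FCAPS assigned to accounting management (charging for resource use and limiting it), consider this minimal sketch (in Python; the record fields, rate, and quota are invented for illustration and do not reflect any historical system):

    # Sketch: FCAPS-style "accounting management," in which one stream of
    # usage records supports both charging and resource limits. All fields,
    # rates, and quotas are illustrative.
    from dataclasses import dataclass

    @dataclass
    class UsageRecord:
        user: str        # who used the resource
        packets: int     # how much was used
        priority: bool   # whether premium service was requested

    RATE_PER_PACKET = 0.001   # hypothetical charging policy
    QUOTA_PACKETS = 100_000   # hypothetical per-user limit

    def charge(records: list[UsageRecord], user: str) -> float:
        """Bill a user from accounting records (accounting-as-billing)."""
        return sum(r.packets * RATE_PER_PACKET for r in records if r.user == user)

    def over_quota(records: list[UsageRecord], user: str) -> bool:
        """Use the same records to limit usage (accounting-as-management)."""
        used = sum(r.packets for r in records if r.user == user)
        return used > QUOTA_PACKETS

    records = [UsageRecord("mit-host", 60_000, False),
               UsageRecord("mit-host", 50_000, True)]
    print(round(charge(records, "mit-host"), 2))  # 110.0
    print(over_quota(records, "mit-host"))        # True

The point of the sketch is that a single stream of usage records underwrites both functions; this is the sense in which accounting management, though narrowly defined, sat beneath the other management areas.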

4.2. The “network of networks”: Management without trust

As the scale of the network grew, not only adding new sites to the ARPANET but incorporating connections with other existing networks, differences in management across these domains came to the fore. [Note: Throughout this paper, we have focused on the ARPANET as a precursor to the Internet; the differences between the two, while both technical and administrative, are primarily outside the scope of our discussion of accounting and accountability in the development of network computing. However, during the 1980s, the ARPANET, which, thanks in part to management efforts such as the NCC, was considered relatively reliable, was inter-networked with other, less reliable networks to constitute the early version of what we may consider “the Internet.” For more on the architectural and institutional challenges involved in creating the Internet, see “Chapter 4: From ARPANET to Internet” in Abbate (1999) and “Chapter 7: Alternative Network Architectures” in Clark (2018).] Throughout this transition to a “network of networks,” the applicability of a particular management design captured in the FCAPS framework came to define the boundaries of “Administrative Domains” (ADs)—sectors of a network defined precisely through their shared management (Warrier and Besaw, 1989, p. 8). [Note: While calling different units of the network “Administrative Domains” (ADs), and at times even “Administrative Regions” (ARs), emphasized that conflict or incompatibility on the network often stemmed from differences in management priorities and systems, over time the term was replaced with “Autonomous Systems” (ASs), a term that obfuscated the type of management considerations we are exploring in this section.] In our discussion of the developing language around management and accounting above, we have treated the kinds of traffic and resource access issues that occur within an AD. But the introduction of inter-AD management raised a new set of concerns and competing interests which needed to be accounted for.

In 1983, Barry Leiner took over directing the Internet research program at ARPA and led a broader effort of organizational restructuring, which both he and Clark have attributed to the growth of the network (Russell, 2006, p. 50) (Leiner et al., 2009, p. 29). [Note: In Leiner’s 2013 posthumous induction into the “Internet Hall of Fame,” the citation credited him as having “helped set up the bureaucratic structures that developed Internet communication protocols” (Internet Hall of Fame, 2013). These bureaucratic structures included the establishment of the Internet Activities Board (IAB) and the task-force structure for tackling specific technical aspects of the network’s development, which would result in the IETF. Historian Andrew Russell has argued that the Internet’s technical standards governing structure was itself a political innovation, often overlooked by narratives that focus on the innovative technical aspects of the Internet (Russell, 2006). It is therefore not surprising that Leiner emerged as one of the more prolific RFC authors focused on the administrative, accounting needs of the growing network.] In a series of RFCs during his tenure, Leiner became a vocal commentator on a variety of accounting concerns that the new inter-AD age raised. In an “idea paper” circulated as an RFC in 1987, Leiner used the experience of the growing Internet to sketch out the challenges facing any kind of interagency research network. The conceptual problem at the core of this type of network remained the lack of a “consistent mechanism to allow sharing of the networking resources” (Leiner, 1987, p. 1). In framing the issues facing a “network of networks” in terms of the needs of different research agencies running their own networks, or ADs, Leiner paid close attention to the possibility that these intra-AD priorities might conflict with the inter-AD design: “[to] assure appropriate accountability for the network operation, the mechanism for interconnection must not prevent agencies from retaining control over their individual network” (Leiner, 1987, p. 9). In 1972, Alex McKenzie found himself responsible for the performance of individual components of the network without the authority to manage them. Here, over a decade later, Leiner raised the reversed concern—that individual agencies would not be able to be accountable for what was going on on their networks due to a lack of authority over aspects of internetworking activity. Leiner called for a “management approach” that would allow local control while still sharing resources with users sponsored by other agencies (Warrier and Besaw, 1989, p. 11). In Leiner’s description, the necessary tools for individual network managers included both user access control and privacy tools, and accounting mechanisms “to support both cost allocation and cost auditing” (Leiner, 1987, p. 9). The shift to a “network of networks,” then, only heightened what had already been a persistent, unresolved need for accounting as part of a network management approach.

At the interface of the different ADs of the Internet sit special host computers, called gateways, with the responsibility to route, and the ability to translate, data across domains (Leiner, 1987, p. 6) (Abbate, 1999, pp. 128-129). Gateways acted as “buffers” between ADs, reducing the need for each AD to have working knowledge of the others (Abbate, 1999, p. 129). The Exterior Gateway Protocol (EGP), introduced in 1982, was a means of allowing this patchwork of ADs to appear to an end-user as “a single internet” (Rosen, 1982, p. 3). EGP contained mechanisms for discovering “neighboring” ADs and allowed those “neighbors” to exchange reachability information—to know whether the “neighbor” was open to receiving traffic. A key feature of the EGP was to enable “each gateway to control the rate at which it sends and receives network reachability information, allowing each system to control its own overhead” (Rosen, 1982, p. 5). The resource-sharing concerns we first encountered in Section 2, now cast against the changing Internet environment that expected increasing numbers of internetworked ADs, took on a new aspect of accounting as management—a baked-in assumption of mistrust between the different networks. If one could not account for users of other networks, then a buffer between these ADs was needed.
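A rough sketch of the EGP pattern described above, in which neighbor gateways exchange reachability information and each controls its own polling rate, might look as follows (in Python; a drastic simplification of the behavior Rosen (1982) describes, with all names and intervals invented):

    # Sketch: neighbors exchanging reachability information, with each
    # gateway controlling its own polling rate (and thus its own overhead).
    # A simplification of the EGP pattern; names and intervals are invented.
    import time

    class Gateway:
        def __init__(self, name: str, networks: set[str], poll_interval_s: float):
            self.name = name
            self.networks = networks                  # networks reachable via this AD
            self.poll_interval_s = poll_interval_s    # self-chosen overhead control
            self.neighbor_tables: dict[str, set[str]] = {}
            self._last_poll: dict[str, float] = {}

        def poll(self, neighbor: "Gateway") -> None:
            """Ask a neighbor what it can reach, no faster than our own rate."""
            now = time.monotonic()
            last = self._last_poll.get(neighbor.name, float("-inf"))
            if now - last < self.poll_interval_s:
                return  # rate limit: control our own overhead
            self._last_poll[neighbor.name] = now
            self.neighbor_tables[neighbor.name] = set(neighbor.networks)

        def can_reach(self, network: str) -> bool:
            """The patchwork of ADs appears as 'a single internet' to the end user."""
            return network in self.networks or any(
                network in nets for nets in self.neighbor_tables.values())

    arpanet = Gateway("ARPANET", {"net-a"}, poll_interval_s=60.0)
    nsf = Gateway("NSFNET", {"net-b"}, poll_interval_s=120.0)
    arpanet.poll(nsf)
    print(arpanet.can_reach("net-b"))  # True

Note that the exchange reveals only reachability, not usage or identity: the protocol tells a gateway what its neighbor can reach, while leaving the neighbor’s internal management opaque.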

A few years later, David Clark described the purpose of the EGP as follows: “to permit regions of the Internet to communicate reachability information, even though they did not totally share trust” (Clark, 1989, p. 2). The assumption of mistrust cut in two directions: on the one hand, an AD might wish to limit the amount of information other actors on the network could access, using gateways as a buffer; on the other, mistrust also required a higher degree of accounting information to allow for individual AD management decisions. EGP was an imperfect solution, limiting the amount of information different ADs had to share in order to still make use of the network of networks, but under-developing the tools of network management and accounting that would allow for the kind of responsibility Leiner was advocating. [Note: In 1989, a new protocol, the Border Gateway Protocol (BGP), was developed, “built on experience gained with EGP” (RFC 1105, p. 1). In his 2018 book, David Clark describes the shift to BGP as necessary to support a commercial network that would allow different Internet Service Providers (ISPs) to have non-hierarchical relationships with one another (as opposed to EGP’s more hierarchical structure, which put the ARPANET at the core). When discussing BGP’s limitations, however, Clark raises the issue of its “limited expressive power,” providing different ADs (in this case ISPs) with minimal reachability information that could inform decisions about routing traffic through other ADs (Clark, 2018, pp. 244-245).]

The network still maintained its original goal: being able to connect a large number of systems of different types of networks—to provide high-performance communication and interoperability among diverse hardware. But it now also had the added goal of supporting “Multiple organizations with mutual distrust and policy/legal restrictions” (Hares and Katz, 1989, pp. 4-5).

The change in the network environment to one of mistrust unearthed critical architectural problems, as mistrust was fundamentally at odds with central assumptions in the Internet Protocol (IP), the network’s core routing protocol. Routing “determines the series of networks and gateways a packet will traverse in passing from the source to the destination” (Clark, 1989, p. 1). Embedded in the process is a notion of how to determine the series over which traffic should travel, which IP implemented by “minimizing some measure of the route, such as delay” (Clark, 1989, p. 1) and only promising to provide “point-to-point best-effort data delivery” (Braden et al., 1994, p. 2). In only needing to satisfy the constraint of minimizing some cost, the selected route could hypothetically use any path to transport a packet from source to destination. This strategy would not be sufficient in an environment of mistrust: a packet traveling between two regions that trusted each other could potentially travel through an untrusted region while in transit. Point-to-point, best-effort service gave ADs no explicit control over how their packets were routed, instead locating best-effort routing decisions in the internals of the network (Hares and Katz, 1989, p. 1) (Estrin, 1989, p. 5). This left packet data—and any downstream effects that data might have on the destination’s resources—at risk of interception or tampering.
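The best-effort strategy can be sketched as a standard shortest-path computation over some cost measure such as delay (in Python; the topology and delays are invented for illustration). Nothing in the computation considers trust, so the minimal route below happily transits an untrusted region:

    # Sketch: best-effort route selection as shortest-path on delay.
    # Topology and delays invented; "untrusted" sits on the cheapest path.
    import heapq

    # delay (ms) between regions
    links = {
        "src":       {"trusted": 30, "untrusted": 5},
        "trusted":   {"dst": 30},
        "untrusted": {"dst": 5},
        "dst":       {},
    }

    def min_delay_route(start: str, goal: str) -> list[str]:
        """Dijkstra's algorithm: minimize total delay, ignore who is en route."""
        frontier = [(0, start, [start])]
        seen = set()
        while frontier:
            cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in seen:
                continue
            seen.add(node)
            for nxt, delay in links[node].items():
                heapq.heappush(frontier, (cost + delay, nxt, path + [nxt]))
        raise ValueError("unreachable")

    print(min_delay_route("src", "dst"))  # ['src', 'untrusted', 'dst']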

As the only mechanism for transporting packets between ADs, best-effort routing was therefore in direct tension with the intra-AD management concerns we have discussed in this section. For an AD’s management goals to be met in the context of a distributed network, it would be important to add additional mechanisms of control “to select routes in order to restrict the use of network resources to certain classes of customers” (Clark, 1989, p. 1). It became clear that administrators needed to be able to set specific policies for how to select routes—constraints for how data should traverse the network in order to conform with local resource usage goals. As we discuss in the next section, being able to support policies through network architecture would require fundamental changes. That is, changes for “controlled network resource sharing and transit [would] require that policy enforcement be integrated into the routing protocols themselves and [could] not be left to network control mechanisms at the end points” (Estrin, 1989, p. 5, emphasis added). Policies would test the supposed fundamental commitment to End-to-End that had governed the network since its inception (Saltzer et al., 1984). Comprehensive accounting for resource usage would demand placing additional functionality in the network. Being able to connect was no longer enough; ADs needed policies to control how and when that connection occurred.

5. Accounting as policy: From accounting to accountability

As we have shown in Section 3, in the early days of the ARPANET accounting was considered a “supporting role compared to the core objective of forwarding data” (Clark, 2018, pp. 46-47). But, as we discussed in Section 4, over the course of the 1980s and the growth of the network, accounting, and its role in management, became directly implicated in the network’s core objective of routing. Questions of routing began to be framed as questions of policy, and ARPANET engineers thus began to reckon with the notion that matters of technical design could not be separated from issues like who got to use the network and for what purpose. As David Clark noted in 1989, “Policy matters are driven by human concerns, and these have not turned out to be amenable to topological constraints, or indeed to constraints of almost any sort” (Clark, 1989, p. 2). Routing decisions could no longer just be about technical constraints, such as minimizing a cost metric like overall distance; they would also need to incorporate legal and political constraints so that routing could produce “predictable, stable result[s] based on the desires of the administrator” (Braun, 1989, p. 2).

STS scholars have shown that technical and scientific work of this kind often involves delineating social and political aspects as existing beyond the scope of engineers’ work. Much of this literature has focused on exploring the inherent embeddedness of technical decisions within social and political structures, clarifying that there is no real way to carve out the socio from the technical (Bijker et al., 1987; Latour, 1993; Jasanoff, 2004). We can see this insight from STS play out in the network engineers’ attempts, to no avail, to weed policy out of the network’s design. Even if policy was a “human concern,” the network still needed a technical architecture in order to implement it.

5.1. Policy routing: An implementation for in-network control and accountability

By the end of the 1980s, some engineers were arguing that the network needed a “new generation of routing protocols” that would allow each AD “to independently express and enforce policies regarding the flow of packets to, from, and through its resources” (Estrin, 1989, p. 1) (Estrin and Tsudik, 1989). [Note: And ultimately, even though accounting had become more expansive than the act of billing, policy remained inextricably tied to billing: “The discussion of lost packets makes clear an important relationship between billing and policy. If a Policy Route takes packets through a region of known unreliability, the regions preceding it on the path may be quite unwilling to forgive the charges for packets which have successfully crossed their region, only to be lost further down the route. A billing policy is a way of asserting that one region wishes to divorce itself from the reliability behavior of another region. … The use of a specific policy condition can make clear to the end user which [ADs] do not view themselves as interworking harmoniously” (Clark, 1989, p. 12).] These new architectural demands to support policies for routing control could not be separated from accounting. Accounting for resources was essential for controlling resources, since it was directly implicated in preventing, tracking, and correcting for unintended use (Braun, 1989, p. 9). The resulting proposed solution, Policy Routing (PR), was the architectural approach put forth to enable different communication policies between ADs in the network. A PR consisted of a sequence of gateways from source AD to destination AD; if such a route existed, then the policy associated with that PR was satisfied (Breslau and Estrin, 1990; Clark, 1989). [Note: There were various competing proposals on how exactly to implement policy routing. David Clark’s RFC 1102 was among the most-cited, ultimately expanded upon by systems researcher Deborah Estrin (Estrin, 1991). Estrin collaborated on several different policy routing implementation schemes. All proposals, regardless of specifics, had to handle “three design parameters: location of routing decision (i.e., predetermined at the source or hop-by-hop), algorithm used (i.e., link state or distance vector), and expression of policy in topology or in link status” (Breslau and Estrin, 1990, p. 231).] Such policies came in two flavors: access control and charging policies, frequently tied to quality of service (QoS) (ISO, 1984, p. 12). [Note: These two classes can be further subdivided into various other policy types. For example, for determining charging policies: “classes of users (large or small institutions, for-profit or non-profit); classes of service; time varying (peak; off-peak); volume (discounts or surcharges); access charges (per port, e.g.); distance (circuit miles, number of hops, airline miles)” (Braden and Postel, 1987, p. 35).]

Access control policies controlled “who [could] use resources and under what conditions” (Estrin, 1989, p. 10). These policies enabled filtering out traffic considered “administratively inappropriate” (Little, 1989, p. 6); this included blanket policies, which defined “users, applications, or hosts” that could never “be permitted to traverse certain segments of the network” (Leiner, 1988, p. 9), [Note: While policy routing is not an architectural requirement within the Internet today, it remains an often-used enterprise solution that can be implemented in routers used by organizations and ISPs. Geoblocking, while similar in purpose to some access-based policies, is a different function for restricting traffic based on a user’s geographical location. Rather than examining the internals of packet flows, traffic is blocked at the source using an IP address (which is a heuristic for geographic location, since IP addresses are generally assigned by country). Thus, as Clark notes, geoblocking is “supported approximately” on today’s Internet; it is fairly easy to circumvent via a VPN (Clark, 2018, p. 298). For a very detailed treatment of geoblocking, see Goldsmith and Wu (2006). In computer networking, the recent advent of software-defined networking (SDN) technology suggests that Policy Routing may experience renewed interest; SDN could allow for the implementation and enforcement of application-specific policies (Clark, 2018, p. 210) (Greenberg, 2019).] and finer-grained policies, which could allow a network manager to enforce traffic restrictions on a “particular misbehaving host” (Braden and Postel, 1987, p. 34) for a specific period of time. In contrast, charging policies could “be based upon equity (‘fairness’) or upon inequity (‘priority’)” (Braden and Postel, 1987, p. 49) concerning network traffic; they controlled the level of service guarantees for packet-forwarding, ranging from best-effort (with no service guarantees) to prioritized service, in which, for a premium, [Note: Best-effort service, which minimizes some measure of routing path cost, could perhaps be free but come with no delivery guarantees; policies that implemented more control, and were at odds with the cheapest/minimal path, would be paid for, perhaps using an overarching “enhanced service” charge or by charging “on a usage sensitive basis” (Estrin, 1991, p. 71).] packets could jump to the head of the forwarding queue. [Note: Even if the division of policy types was clear at a high level, it still remained unclear what a reasonable charging policy might be. That is, even if it was now clear who was paying for a particular quality of service, it was not clear “that the services provided at the network layer [would] map well to the sorts of services that network consumers [were] willing to pay for” (Braun, 1989, p. 9). As noted in RFC 1104, “In the telephone network (as well as public data networks), users pay for End-to-End service and expect good quality service in terms of error rate and delay (and may be unwilling to pay for service that is viewed as unacceptable). In an internetworking environment, the heterogeneous administrative environment combined with the lack of End-to-End control may make this approach infeasible” (Braun, 1989, p. 9).] Policies enabling higher-quality service options would be useful for new types of networked applications, like videoconferencing; [Note: In the mid-1980s, new software and hardware capabilities for end-node workstations presented the opportunity to develop entirely new classes of applications (Braden et al., 1994, p. 2). Applications like videoconferencing could be widely available, not just fodder for flashy occasional demos as in the late 1960s (Bardini, 2001, pp. 138-142, on the “Mother of All Demos”). However, new workstation technology alone would not be sufficient to support such demanding applications; the network would also need to update its protocols to better support them. As mentioned in Section 5, best-effort routing is not well-suited to real-time applications like videoconferencing: packets get lost or dropped due to congestion, and there are unpredictable periods of slow service due to packet queuing delays. Supporting additional qualities of service in network protocols would better accommodate these new hardware- and software-enabled applications.] such real-time applications are more sensitive to the disruptions common to best-effort service, including unpredictable network latency and delays. Additionally, on a saturated network, the ability to set policies to prioritize service could be useful to ADs that wanted to treat some types of traffic as more important than others. Network engineers were less concerned with discussing these specific policy choices; rather, they wanted to ensure that the policy routing architecture was sufficiently flexible to implement a wide range of AD-specific policy requirements (Clark, 1989, p. 5) (Estrin, 1989, p. 4).
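To make the two flavors concrete, the following sketch layers an access control policy and a charging policy on top of candidate routes (in Python; the blocked ADs, fees, and routes are invented, and the actual PR proposals differed on where and how such checks would run):

    # Sketch: policy routing on top of the routes themselves. An AD filters
    # candidate routes by access-control policy, then ranks the survivors
    # by charging policy. Policies, ADs, and tariffs are invented.
    BLOCKED_ADS = {"untrusted"}                        # blanket access-control policy
    TRANSIT_FEE = {"trusted": 2.0, "untrusted": 0.1}   # per-packet charging policy

    def satisfies_access_policy(route: list[str]) -> bool:
        """Reject routes that traverse administratively inappropriate ADs."""
        return not any(ad in BLOCKED_ADS for ad in route)

    def route_cost(route: list[str], priority: bool) -> float:
        """Charging policy: transit fees, with a premium for priority service."""
        fee = sum(TRANSIT_FEE.get(ad, 0.0) for ad in route)
        return fee * (2.0 if priority else 1.0)

    def select_policy_route(candidates: list[list[str]], priority: bool):
        allowed = [r for r in candidates if satisfies_access_policy(r)]
        if not allowed:
            return None  # no route satisfies the AD's policy
        return min(allowed, key=lambda r: route_cost(r, priority))

    candidates = [["src", "untrusted", "dst"], ["src", "trusted", "dst"]]
    print(select_policy_route(candidates, priority=False))
    # ['src', 'trusted', 'dst'] -- the cheaper route is excluded by policy

Compared with the best-effort sketch above, the selected route is no longer the minimal one: the access policy overrides the cost metric, which is precisely the conflict with strict notions of efficiency that the engineers were negotiating.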

All of these policies fundamentally concerned resource usage, and therefore ultimately implicated accounting. That is, while “Network accounting [was] generally considered to be simply a step that leads to billing,” by the late 1980s it was clear that accounting had much broader utility (Leiner, 1988, p. 34), as these same “records [could] also be used to determine usage patterns for the system” (Holbrook and Reynolds, 1991, p. 28). Irrespective of whether accounting records were used for billing, they contained useful information concerning resource usage, and therefore were essential for informing policy to control usage (Leiner, 1988, p. 35) (Clark, 1989, p. 11) (Estrin, 1989, p. 6). [Note: Cost recovery and billing remained important considerations of policy: “Almost all of the existing Internet has been paid for as capital purchase and provided to the users as a free good. There are limited examples of cost recovery, but these are based on an annual subscription fee rather than a charge related to the utilization. There is a growing body of opinion which says that accounting for usage, if not billing for it, is an important component of resource management. For this reason, tools for accounting and billing must be a central part of any policy mechanism” (Clark, 1989, p. 11, emphasis added). And yet, just as the early conversations over billing in the 1970s indicated (Section 2), setting an overarching billing policy is especially challenging in a distributed network, and it was made even more difficult as different regions of the network clarified their individual administrative needs (Section 4): “However, precisely because the administrative regions are autonomous, we cannot impose a uniform form of billing policy on all of the regions…The billing problem is thus a very complicated one, for the user would presumably desire to minimize the cost, in the context of the various outstanding conditions” (Clark, 1989, p. 11).] For example, records could be used in post hoc audits to confirm that actual resource usage aligned with set access and QoS policies (Holbrook and Reynolds, 1991, pp. 49, 66, 77) (Leiner, 1987, p. 11). [Note: This same data, aside from setting and enforcing policy, also retained its overarching role of allowing “network management personnel to determine the ‘flows’ of data on the network, and the identification of bottlenecks in network resources” (Leiner, 1987, p. 11), meaning it continued to support monitoring and diagnostics for operational reasons.] In particular, “unusual accounting records [could] indicate unauthorized use of the system” (Holbrook and Reynolds, 1991, p. 28). If a pattern of “malicious” (Leiner, 1987, p. 11) use or other “abuse (e.g., unauthorized use) develop[ed], an accounting system could track this and allow corrective action to be taken, by changing routing policy or imposing access control (blocking hosts or nets)” (Braun, 1989, p. 10). Routing policy tied together accounting information with the “human concerns” of how data moved around the network, and introduced into the architecture considerations that conflicted with strict notions of efficiency.
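The enforcement loop described here, in which accounting records reveal abuse and in turn drive a change in access control, can be sketched as follows (in Python; the threshold and hosts are invented):

    # Sketch: accounting records driving corrective action. Unusual usage
    # patterns in the records trigger a change in access policy (blocking
    # the offending host). Threshold and hosts are invented.
    from collections import Counter

    ABUSE_THRESHOLD = 10_000  # packets per accounting period (hypothetical)

    def audit(records: list[tuple[str, int]]) -> set[str]:
        """Flag hosts whose recorded usage looks like unauthorized use."""
        totals = Counter()
        for host, packets in records:
            totals[host] += packets
        return {host for host, total in totals.items() if total > ABUSE_THRESHOLD}

    def apply_corrective_action(blocked: set[str], offenders: set[str]) -> set[str]:
        """Impose access control: block hosts flagged by the audit."""
        return blocked | offenders

    records = [("host-a", 500), ("host-b", 12_000), ("host-a", 300)]
    blocked: set[str] = set()
    blocked = apply_corrective_action(blocked, audit(records))
    print(blocked)  # {'host-b'}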

5.2. An (un)accountable network: Enabling accountability required designing for accounting

This connection between accounting and enforcement of resource control policies brought about the necessary conditions to produce the first working definition of accountability in relation to the network. In RFC 1125, networking researcher Deborah Estrin made the interdependence between accounting, policy, and accountability unimpeachably clear:

One way of reducing the compromise of autonomy associated with interconnection is to implement mechanisms that assure accountability for resources used. Accountability may be enforced a priori, e.g., access control mechanisms applied before resource usage is permitted. Alternatively, accountability may be enforced after the fact, e.g., record keeping or metering that supports detection and provides evidence to third parties (i.e., non-repudiation). Accountability mechanisms can also be used to provide feedback to users as to consumption of resources. … [I]t becomes more appropriate to have resource usage visible to users, whether or not actual charging for usage takes place (Estrin, 1989, p. 6).

In short, achieving accountability in the network meant being able to implement policies for dealing with resource misuse. While some policies emphasized prevention and others concerned identifying, isolating, and mitigating misuse after it had already occurred, all policies were ultimately dependent on accounting records. [Note: RFC 1104 tries to distinguish the function of accounting from the function of policy-based routing. In the process, it becomes clear that, while accounting’s role is more expansive than its implications for policy-based routing, policy-based routing entirely depends on accounting: “Accounting vs. Policy Based Routing: Quite often Accounting and Policy Based Routing are discussed together. While the application of both Accounting and Policy Based Routing is to control access to scarce network resources, these are separate (but related) issues. The chief difference between Accounting and Policy Based Routing is that Accounting combines history information with policy information to track network usage for various purposes. Accounting information may in turn drive policy mechanisms (for instance, one could imagine a policy limiting a certain organization to a fixed aggregate percentage of dynamically shared bandwidth). Conversely, policy information may affect accounting issues” (Braun, 1989, p. 9).]
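Estrin’s two modes of enforcement can likewise be sketched side by side (in Python; the access rights and record fields are invented): a priori enforcement checks rights before use, while after-the-fact enforcement meters usage so that it remains visible whether or not charging takes place:

    # Sketch: two modes of enforcing accountability per RFC 1125. A priori:
    # check access rights before resource use. After the fact: keep records
    # that support later detection and evidence. All names are invented.
    from datetime import datetime, timezone

    ACCESS_RIGHTS = {("ad-a", "ad-b")}  # permitted (source AD, destination AD) pairs
    AUDIT_LOG: list[dict] = []          # record keeping for ex post review

    def authorize(src_ad: str, dst_ad: str) -> bool:
        """A priori enforcement: access control before usage is permitted."""
        return (src_ad, dst_ad) in ACCESS_RIGHTS

    def meter(src_ad: str, dst_ad: str, packets: int, permitted: bool) -> None:
        """After-the-fact enforcement: metering that supports detection and
        provides evidence to third parties (non-repudiation)."""
        AUDIT_LOG.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "src": src_ad, "dst": dst_ad,
            "packets": packets, "permitted": permitted,
        })

    def send(src_ad: str, dst_ad: str, packets: int) -> bool:
        permitted = authorize(src_ad, dst_ad)
        meter(src_ad, dst_ad, packets, permitted)  # visible whether or not charged
        return permitted

    send("ad-a", "ad-b", 1_000)   # permitted, and logged
    send("ad-c", "ad-b", 5_000)   # denied a priori, still logged for audit
    print(AUDIT_LOG)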

In showing the importance of accounting for enabling accountability, RFC 1125 captures the fundamental argument that we have taken up in this paper: The tensions that come up in a resource-sharing, networked computing environment (Sections 2–3), in which there is no global authority (Section 4), ultimately reflect tensions concerning accountability and autonomy. While we have shown throughout this paper that issues of resource management were always contentious, it was at this point in time that the network architects and engineers considered accounting-related issues at the level of architecture, rather than as an annoyance. In Estrin’s words, “the lack of global authority, the need to support network resource sharing as well as network interconnection, the complex and dynamic mapping of users to ADs and rights, and the need for accountability across ADs, are characteristics of inter-AD communications which must be taken into account in the design of both policies and supporting technical mechanisms”; “it would be inexcusable to ignore resource control requirements and not to pay careful attention to their specification” (Estrin, 1989, pp. 6, 7).

Accounting’s necessary role in accountability meant that it could no longer be dismissed for failing the parsimony requirements of End-to-End; it had to be considered fundamental to incorporate within the network’s foundational routing protocol. This would require placing mechanisms for accounting in the network, instead of just at the end nodes. This perspective marked a significant shift from the early days of the ARPANET, well-characterized in Bob Kahn’s RFC 136 (Section 3). Before, accounting was discounted as a part of the network’s core architecture with respect to the End-to-End principle: It was separated from the function of routing and was not a “performance enhancement” (Saltzer et al., 1984; Kahn, 1971), and therefore was not considered a candidate feature of the network’s essential architecture. Given accounting’s necessity for accountability, accountability was thus directly at odds with this earlier interpretation of End-to-End. [Note: In fact, for policy routing to work—particularly for QoS-related functions (Braden et al., 1994)—it would be necessary to store state in the network gateways (Section 4.2), thereby moving away from the notion of “dumb” gateways that had been an essential feature of the End-to-End architecture.] To resolve this tension by acknowledging accountability as an essential feature, the engineers who promoted a policy routing architecture—including, notably, End-to-End (Saltzer et al., 1984) author and network architect David Clark (Clark, 1989; Braden et al., 1994)—were forced to re-imagine the meaning and primacy of End-to-End. For an accountable network, the next evolution of its architecture would need to be shaped by a new, competing ideology.

6. Conclusion

In this paper, we have traced the changing meaning of the term “accounting” among the research community that developed the ARPANET and early Internet, from the late 1960s through the late 1980s. Throughout this historical arc, we have paid attention both to how accounting was and was not considered part of the set of research problems facing network developers, and to how its meaning shifted in relation to the changing environment of the network’s deployment and institutional context. We have characterized 4 phases of accounting within the ARPANET and early Internet RFCs: accounting as billing, accounting as measurement, accounting as management, and accounting as policy. We concluded this characterization by arguing that the different meanings of accounting, and the stakes of the debates under each phase, provide us with an emerging articulation of accountability, placing accounting and its administrative associations squarely within the domain of both deep technical questions about the network and the political expectations of accountable technical systems.

This analysis resonates beyond the historiographical considerations of the Internet’s development and the contemporary policy issues of its governance. We offer three insights that have bearing on research concerning accountability and complex technical systems, which together serve as a cautionary tale for the design of contemporary systems. First, the core design question of the story above is about resource sharing and allocation, a motivating problem in many computing applications today, especially in machine learning. The non-trivial questions involved in designing mechanisms to account for resource use in a distributed system are constitutive of the possibility of creating accountability; it is thus worth interrogating how mechanisms for accounting may facilitate or obscure accountability. Second, we argued that developing accounting mechanisms was routinely deferred, not only because of the foundational tensions involved in developing accounting schemes, but because of a dynamic of discounting its significance. Accounting carried the administrative language around issues of delineating different actors’ autonomy, negotiating trust, and creating policies of prioritization, allowing such issues to be dismissed as “operational,” beyond the core set of research concerns. Finally, by taking an institutional approach to the history of a technical object, we argued that the broader context in which technological systems develop shapes the ways accountability is defined and implemented (or ignored). Taking the administrative needs of this setting seriously, rather than casting them aside as overhead concerns, would help clarify the infrastructure necessary to support concrete notions of accountability in complex technical systems.

Acknowledgements.

References

  • J. Abbate (1999) Inventing the internet. MIT Press, Cambridge, MA, USA. External Links: ISBN 0262011727 Cited by: Appendix B, Appendix B, Appendix B, Appendix B, §1.1, §1.2, §1.2, §2, §2, §2, §2, §2, §3.1, §3, §3, §4.2.
  • P. Adler, C. Falk, S. A. Friedler, T. Nix, G. Rybeck, C. Scheidegger, B. Smith, and S. Venkatasubramanian (2018) Auditing black-box models for indirect influence. Knowledge and Information Systems 54, pp. 95–122. Cited by: §1.
  • P. Almquist and F. Kastenholz (1994) Towards Requirements for IP Routers. Request for Comments, RFC Editor. Note: RFC 1716 External Links: Document, Link Cited by: §3.1.
  • A. Anderson (1990) Oral history interview with Alexander A. McKenzie. Charles Babbage Institute. Note: Retrieved from the University of Minnesota Digital Conservancy External Links: Link Cited by: §4.1, §4.1.
  • A. Bhushan (1972) File Transfer Protocol. Request for Comments, RFC Editor. Note: RFC 354 External Links: Document, Link Cited by: §2.1.
  • W. E. Bijker, T. P. Hughes, and T. J. Pinch. (Eds.) (1987) The social construction of technological systems : new directions in the sociology and history of technology. MIT Press, Cambridge, Mass.. External Links: ISBN 0262521377 Cited by: §5.
  • M. S. Blumenthal and D. D. Clark (2001) Rethinking the Design of the Internet: The End-to-End Arguments vs. the Brave New World. ACM Trans. Internet Technol. 1 (1), pp. 70–109. External Links: ISSN 1533-5399, Link, Document Cited by: §1.1.
  • M. Bovens (2007) Analysing and Assessing Accountability: A Conceptual Framework. European Law Journal 13 (4), pp. 447–468. External Links: Document, Link Cited by: §1.
  • R. Braden and J. Postel (1987) Requirements for Internet gateways. Request for Comments, RFC Editor. Note: RFC 1009 External Links: Document, Link Cited by: §5.1.
  • R. T. Braden, Dr. D. D. Clark, and S. Shenker (1994) Integrated Services in the Internet Architecture: an Overview. Request for Comments, RFC Editor. Note: RFC 1633 External Links: Document, Link Cited by: §4.2, §5.2, footnote l.
  • H. Braun (1989) Models of policy based routing. Request for Comments, RFC Editor. Note: RFC 1104 External Links: Document, Link Cited by: §5.1, §5.1, §5.
  • L. Breslau and D. Estrin (1990) Design of Inter-Administrative Domain Routing Protocols. In Proceedings of the ACM Symposium on Communications Architectures & Protocols, SIGCOMM ’90, New York, NY, USA, pp. 231–241. External Links: ISBN 0897914058, Link, Document Cited by: §5.1.
  • B. Bressler (1973) Free file transfer. Request for Comments, RFC Editor. Note: RFC 487 External Links: Document, Link Cited by: Appendix A, Appendix A, §2.1.
  • V. Cerf and R. Kahn (1974) A Protocol for Packet Network Intercommunication. IEEE Transactions on Communications 22 (5), pp. 637–648. External Links: Document Cited by: §1.1.
  • V. Cerf (1988) IAB recommendations for the development of Internet network management standards. Request for Comments, RFC Editor. Note: RFC 1052 External Links: Document, Link Cited by: §4.1.
  • V. Cerf (1989) Reqieum for the ARPANET. In The Internet History, pp. 7–10. Note: Accessed January 9, 2022 External Links: Link Cited by: §1.2.
  • D. K. Citron and D. J. Solove (2022) Privacy Harms. Boston University Law Review 102. Cited by: §1.1, §1.2.
  • D. Clark (1988) The Design Philosophy of the DARPA Internet Protocols. In Symposium Proceedings on Communications Architectures and Protocols, SIGCOMM ’88, New York, NY, USA, pp. 106–114. External Links: ISBN 0897912799, Link, Document Cited by: §1.1.
  • D. Clark (1989) Policy routing in Internet protocols. Request for Comments, RFC Editor. Note: RFC 1102 External Links: Document, Link Cited by: §4.2, §4.2, §4.2, §5.1, §5.1, §5.1, §5.2, §5.
  • D. D. Clark (2018) Designing an internet. 1st edition, The MIT Press, Cambridge, MA, USA. External Links: ISBN 0262038609 Cited by: Appendix A, §1.2, §1, §3.1, §4.1, §4.1, §5, footnote a.
  • D. Clark and S. Landau (2011) Untangling Attribution. Proceedings of A Workshop on Deterring Cyberattacks: Informing Strategies and Developing Options for US Policy, pp. 25–40. Cited by: footnote c.
  • A. F. Cooper, K. Levy, and C. De Sa (2021) Accuracy-Efficiency Trade-Offs and Accountability in Distributed ML Systems. In Equity and Access in Algorithms, Mechanisms, and Optimization, New York, NY, USA. External Links: ISBN 9781450385534, Link Cited by: §1.
  • D. Crocker, N. Neigus, J. Feinler, and J. Iseli (1973) ARPANET users interest working group meeting. Request for Comments, RFC Editor. Note: RFC 585 External Links: Document, Link Cited by: §3.
  • S. Crocker, S. Carr, and V. Cerf (1970) New Host-Host Protocol. Request for Comments, RFC Editor. Note: RFC 33 External Links: Document, Link Cited by: Appendix B, §1.2.
  • S. Crocker (1970) Network Meeting. Request for Comments, RFC Editor. Note: RFC 75 External Links: Document, Link Cited by: §2, §3, §3.
  • L. DeNardis (2015) The Internet Design Tension between Surveillance and Security. IEEE Annals of the History of Computing 37 (2), pp. 72–83. External Links: Document Cited by: §1.1.
  • Edwin W. Meyer, Jr. (1970) Network Meeting Notes. Request for Comments, RFC Editor. Note: RFC 82 External Links: Document, Link Cited by: Appendix A, Appendix A, Appendix A, §2, §2, §2, §2, §3, §3, §3, §3, §3, footnote h.
  • D. Estrin and G. Tsudik (1989) Security Issues in Policy Routing. In Proceedings of 1980 IEEE Symposium on Security and Privacy, Cited by: §5.1.
  • D. Estrin (1989) Policy requirements for inter Administrative Domain routing. Request for Comments, RFC Editor. Note: RFC 1125 External Links: Document, Link Cited by: §4.2, §4.2, §5.1, §5.1, §5.1, §5.2, §5.2.
  • L. Garlick (1976) Out-of-Band Control Signals in a Host-to-Host Protocol. Request for Comments, RFC Editor. Note: RFC 721 External Links: Document, Link Cited by: §1.2.
  • T. Gebru, J. Morgenstern, B. Vecchione, J. W. Vaughan, H. M. Wallach, H. D. III, and K. Crawford (2018) Datasheets for Datasets. abs/1803.09010. External Links: Link Cited by: §1.
  • T. Gillespie (2006) Engineering a Principle: ’End-to-End’ in the Design of the Internet. Social studies of science 36 (3), pp. 427–457 (eng). External Links: ISSN 0306-3127 Cited by: §1.1.
  • T. Gillespie (2010) The politics of ’platforms’. New Media & Society 12 (3), pp. 347–364. External Links: Document, Link Cited by: §1.1.
  • S. Greenstein (2015) How the internet became commercial: innovation, privatization, and the birth of a new network. Princeton University Press, Princeton, NJ. External Links: ISBN 1-4008-7429-7 Cited by: §1.1, footnote c.
  • S. Hambridge and A. Lunde (1999) DON’T SPEW A Set of Guidelines for Mass Unsolicited Mailings and Postings (spam*). Request for Comments, RFC Editor. Note: RFC 2635 External Links: Document, Link Cited by: Appendix B, footnote m.
  • S. Hares and D. Katz (1989) Administrative Domains and Routing Domains: A model for routing in the Internet. Request for Comments, RFC Editor. Note: RFC 1136 External Links: Document, Link Cited by: §4.2.
  • J. Heafner and E. Harslem (1971) Service center standards for remote usage: A user’s view. Request for Comments, RFC Editor. Note: RFC 231 External Links: Document, Link Cited by: Appendix A, §2.
  • F. E. Heart, R. E. Kahn, S. M. Ornstein, W. R. Crowther, and D. C. Walden (1970) The Interface Message Processor for the ARPA Computer Network. In Proceedings of the May 5-7, 1970, Spring Joint Computer Conference, AFIPS ’70 (Spring), New York, NY, USA, pp. 551–567. External Links: ISBN 9781450379038, Link, Document Cited by: §2.
  • P. Holbrook and J. K. Reynolds (1991) Site Security Handbook. Request for Comments, RFC Editor. Note: RFC 1244 External Links: Document, Link Cited by: §5.1.
  • P. J. N. II (1999) The Internet and the Millennium Problem (Year 2000). Request for Comments, RFC Editor. Note: RFC 2626 External Links: Document, Link Cited by: §C.2.
  • ISO (1983) ISO Transport Protocol specification. Request for Comments, RFC Editor. Note: RFC 892 External Links: Document, Link Cited by: §3.1.
  • ISO (1984) Protocol for providing the connectionless mode network services. Request for Comments, RFC Editor. Note: RFC 926 External Links: Document, Link Cited by: §5.1.
  • S. Jasanoff (2004) States of knowledge: the co-production of science and the social order. International Library of Sociology, Routledge, Abingdon, Oxon. External Links: ISBN 9780415333610 Cited by: §5.
  • P. R. Johnson and R. H. Thomas (1975) Maintenance of duplicate databases. Request for Comments, RFC Editor. Note: RFC 677 External Links: Document, Link Cited by: §3.1.
  • S. Kacianka and A. Pretschner (2021) Designing Accountable Systems. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’21, New York, NY, USA, pp. 424–437. External Links: ISBN 9781450383097, Link, Document Cited by: §1.
  • R. Kahn (1971) Host accounting and administrative procedures. Request for Comments, RFC Editor. Note: RFC 136 External Links: Document, Link Cited by: §3, §3, §5.2, footnote h.
  • P. Karp (1971) Categorization and guide to NWG/RFCs. Request for Comments, RFC Editor. Note: RFC 100 External Links: Document, Link Cited by: §3.
  • B. Kim and F. Doshi-Velez (2021) Machine Learning Techniques for Accountability. AI Magazine 42 (1), pp. 47–52. External Links: Link Cited by: §1.
  • K. Klonick (2020) The Facebook Oversight Board: Creating an Independent Institution to Adjudicate Online Free Expression. Yale Law Journal 129. Cited by: §1.1, §1.2.
  • J. Kosseff (2022) A User’s Guide to Section 230, and a Legislator’s Guide to Amending It (or Not). Berkeley Technology Law Journal 37. Cited by: §1.1.
  • J. A. Kroll, J. Huey, S. Barocas, E. W. Felten, J. R. Reidenberg, D. G. Robinson, and H. Yu (2017) Accountable Algorithms. University of Pennsylvania Law Review 165, pp. 633–705. Cited by: §1.
  • J. A. Kroll (2021) Outlining Traceability: A Principle for Operationalizing Accountability in Computing Systems. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’21, New York, NY, USA, pp. 758–771. External Links: ISBN 9781450383097, Link, Document Cited by: §1, §1.
  • B. Latour (1993) We have never been modern. Harvard University Press, Cambridge, Mass.. External Links: ISBN 0674948386 Cited by: §5.
  • B. M. Leiner, V. G. Cerf, D. D. Clark, R. E. Kahn, L. Kleinrock, D. C. Lynch, J. Postel, L. G. Roberts, and S. Wolff (2009) A Brief History of the Internet. SIGCOMM Comput. Commun. Rev. 39 (5), pp. 22–31. External Links: ISSN 0146-4833, Link, Document Cited by: §4.2.
  • B. M. Leiner, V. G. Cerf, D. D. Clark, R. E. Kahn, L. Kleinrock, D. C. Lynch, J. Postel, L. G. Roberts, and S. S. Wolff (1997) The Past and Future History of the Internet. Commun. ACM 40 (2), pp. 102–108. External Links: ISSN 0001-0782, Link, Document Cited by: §2, footnote a.
  • B. M. Leiner (1987) Implementation plan for interagency research Internet. Request for Comments, RFC Editor. Note: RFC 1015 External Links: Document, Link Cited by: Appendix A, §4.2, §4.2, footnote j.
  • B. M. Leiner (1987) Network requirements for scientific research: Internet task force on scientific computing. Request for Comments, RFC Editor. Note: RFC 1017 External Links: Document, Link Cited by: Appendix A, §3.1, §5.1.
  • B. Leiner (1988) Critical issues in high bandwidth networking. Request for Comments, RFC Editor. Note: RFC 1077 External Links: Document, Link Cited by: §5.1, §5.1.
  • L. Lessig (2006) Code version 2.0. [2nd ed.]. edition, Basic Books, New York (eng). External Links: ISBN 0465039146 Cited by: footnote c.
  • M. Little (1989) Goals and functional requirements for inter-autonomous system routing. Request for Comments, RFC Editor. Note: RFC 1126 External Links: Document, Link Cited by: §5.1.
  • T. Marill and L. G. Roberts (1966) Toward a Cooperative Network of Time-Shared Computers. In Proceedings of the November 7-10, 1966, Fall Joint Computer Conference, AFIPS ’66 (Fall), New York, NY, USA, pp. 425–431. External Links: ISBN 9781450378932, Link, Document Cited by: §2.
  • C. D. McIlwain (2020) Black Software: The Internet and Racial Justice, from the AfroNet to Black Lives Matter. Oxford University Press, New York, NY. External Links: ISBN 9780190863845 Cited by: §1.1.
  • E. Medina (2011) Cybernetic Revolutionaries: Technology and Politics in Allende’s Chile. MIT Press, Cambridge, Mass. External Links: ISBN 9780262016490 Cited by: §1.1.
  • R. Merryman (1973) UCSD-CC Server-FTP facility. Request for Comments, RFC Editor. Note: RFC 532 External Links: Document, Link Cited by: Appendix A.
  • J. Metcalf, E. Moss, E. A. Watkins, R. Singh, and M. C. Elish (2021) Algorithmic impact assessments and accountability: the co-construction of impacts. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’21, New York, NY, USA, pp. 735–746. External Links: ISBN 9781450383097, Link, Document Cited by: §1.
  • M. Mitchell, S. Wu, A. Zaldivar, P. Barnes, L. Vasserman, B. Hutchinson, E. Spitzer, I. D. Raji, and T. Gebru (2019) Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT* ’19, New York, NY, USA, pp. 220–229. External Links: ISBN 9781450361255, Link, Document Cited by: §1.
  • H. Nissenbaum (1996) Accountability in a Computerized Society. Science and Engineering Ethics 2, pp. 25–42. Cited by: §1.
  • M. A. Padlipsky (1973) What is “Free”?. Request for Comments, RFC Editor. Note: RFC 491 External Links: Document, Link Cited by: §2.1.
  • M. A. Padlipsky (1973a) Feast or famine? A response to two recent RFC’s about network information. Request for Comments, RFC Editor. Note: RFC 531 External Links: Document, Link Cited by: Appendix B, §2.
  • M. A. Padlipsky (1973b) Two solutions to a file transfer access problem. Request for Comments, RFC Editor. Note: RFC 505 External Links: Document, Link Cited by: §3.1.
  • B. Peters (2008) How Not to Network a Nation: The Uneasy History of the Soviet Internet. Information Policy Series, MIT Press, Cambridge, Massachusetts. External Links: ISBN 0-262-33419-4 Cited by: §1.1.
  • J. Pickens (1972) Evaluation of ARPANET services January-March, 1972. Request for Comments, RFC Editor. Note: RFC 369 External Links: Document, Link Cited by: §2, §2, §3.
  • K. T. Pogran (1973) Un-muddling “free file transfer”. Request for Comments, RFC Editor. Note: RFC 501 External Links: Document, Link Cited by: Appendix A, Appendix A, §2.1, §2.1, footnote e.
  • D. G. Post (2009) In Search of Jefferson’s Moose: Notes on the State of Cyberspace. Law and Current Events Masters, Oxford University Press, Oxford; New York. External Links: ISBN 9780195342895 Cited by: footnote c.
  • J. Postel (1970) Network meeting report. Request for Comments, RFC Editor. Note: RFC 77 External Links: Document, Link Cited by: Appendix A, §2, §3, §3, §3.
  • J. B. Postel (1980) Internet Message Protocol. Request for Comments, RFC Editor. Note: RFC 759 External Links: Document, Link Cited by: §3.1.
  • I. D. Raji, A. Smart, R. N. White, M. Mitchell, T. Gebru, B. Hutchinson, J. Smith-Loud, D. Theron, and P. Barnes (2020) Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAT* ’20, New York, NY, USA, pp. 33–44. External Links: ISBN 9781450369367, Link, Document Cited by: §1.
  • J. Rexford and C. Dovrolis (2010) Future Internet architecture: clean-slate versus evolutionary research. Commun. ACM 53, pp. 36–40. External Links: Document, ISSN 0001-0782, Link Cited by: §1.1, §1.
  • L. G. Roberts and B. D. Wessler (1970) Computer Network Development to Achieve Resource Sharing. In Proceedings of the May 5-7, 1970, Spring Joint Computer Conference, AFIPS ’70 (Spring), New York, NY, USA, pp. 543–549. External Links: ISBN 9781450379038, Link, Document Cited by: Appendix A, §2, §2, §3.
  • L. G. Roberts (1967) Multiple Computer Networks and Intercomputer Communication. In Proceedings of the First ACM Symposium on Operating System Principles, SOSP ’67, New York, NY, USA, pp. 3.1–3.6. External Links: ISBN 9781450373708, Link, Document Cited by: Appendix B, §1.2, §3.1.
  • E. C. Rosen (1982) Exterior Gateway Protocol (EGP). Request for Comments, RFC Editor. Note: RFC 827 External Links: Document, Link Cited by: §4.2.
  • A. L. Russell (2006) ’Rough Consensus and Running Code’ and the Internet-OSI Standards War. IEEE Ann. Hist. Comput. 28 (3), pp. 48–61. External Links: ISSN 1058-6180, Link, Document Cited by: §4.2, footnote b.
  • A. L. Russell (2014) Open Standards and the Digital Age: History, Ideology, and Networks. Cambridge Studies in the Emergence of Global Enterprise, Cambridge University Press, Cambridge. Cited by: §1.1.
  • J. H. Saltzer, D. P. Reed, and D. D. Clark (1984) End-to-End Arguments in System Design. ACM Trans. Comput. Syst. 2 (4), pp. 277–288. External Links: ISSN 0734-2071, Link, Document Cited by: §1.1, §3, §3, §4.2, §5.2.
  • F. Turner (2006) From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism. University of Chicago Press, Chicago, IL, USA. Cited by: §1.1.
  • B. Vecchione, K. Levy, and S. Barocas (2021) Algorithmic Auditing and Social Justice: Lessons from the History of Audit Studies. In Equity and Access in Algorithms, Mechanisms, and Optimization, EAAMO ’21, New York, NY, USA. External Links: ISBN 9781450385534, Link, Document Cited by: §1.
  • U. Warrier and L. Besaw (1989) Common Management Information Services and Protocol over TCP/IP (CMOT). Request for Comments, RFC Editor. Note: RFC 1095 External Links: Document, Link Cited by: §4.1, §4.2, §4.2.
  • R. Watson (1973) Some thoughts on system design to facilitate resource sharing. Request for Comments, RFC Editor. Note: RFC 592 External Links: Document, Link Cited by: §3.
  • R. W. Watson (1971) Notes on the Network Working Group meeting, Urbana, Illinois, February 17, 1971. Request for Comments, RFC Editor. Note: RFC 101 External Links: Document, Link Cited by: Appendix A, §2, §3, §3, §3.
  • D. J. Weitzner, H. Abelson, T. Berners-Lee, J. Feigenbaum, J. Hendler, and G. J. Sussman (2008) Information Accountability. Commun. ACM 51 (6), pp. 82–87. External Links: ISSN 0001-0782, Link, Document Cited by: §1.
  • J. E. White (1971) Network specifications for UCSB’s Simple-Minded File System. Request for Comments, RFC Editor. Note: RFC 122 External Links: Document, Link Cited by: Appendix A.
  • J. E. White (1973) Responses to critiques of the proposed mail protocol. Request for Comments, RFC Editor. Note: RFC 555 External Links: Document, Link Cited by: Appendix B.
  • M. Wieringa (2020) What to Account for When Accounting for Algorithms: A Systematic Literature Review on Algorithmic Accountability. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAT* ’20, New York, NY, USA, pp. 1–18. External Links: ISBN 9781450369367, Link, Document Cited by: §1, §1.
  • J. Yates (2019) Engineering Rules: Global Standard Setting since 1880. Hagley Library Studies in Business, Technology, and Politics, Johns Hopkins University Press, Baltimore, Maryland. External Links: ISBN 9781421428901 Cited by: §1.1.
  • J. Zittrain (2008) The Future of the Internet–And How to Stop It. Yale University Press, USA. External Links: ISBN 0300124872 Cited by: footnote c.
  • J. Zittrain (2014) The Virtues of Procrastination. Note: Accessed January 21, 2022 External Links: Link Cited by: §1.1.

Appendix A The “Research” vs. “Service” Distinction

It is important to note that Bressler (Bressler, 1973) and Pogran (Pogran, 1973) were not alone in conceiving of rationales for circumventing accounting; punting on both implementing and performing accounting for certain aspects of network use was a common theme in the early ARPANET. Most notably, there was an overarching attempt to distinguish categories of work on the network, such that some preferred types of work could be “free”—that is, have the bill covered by ARPA. In response to this desire, a binary distinction emerged at the site level: “Research Centers vs. Service Centers” (Postel, 1970, p. 1); or, “free but limited access research sites” used strictly for experimental purposes and “billing sites” (Edwin W. Meyer, Jr., 1970, p. 3, p. 18). Early Internet pioneer Jon Postel further refined this classification in relation to hardware and access patterns: “The Service Centers tend to have big machines, lots of users, and accounting problems; while the Research Centers tend to have specialized hardware, a small number of users, and no accounting at all” (Postel, 1970, p. 1). Even with these definitions, it is not entirely clear what “service” meant in terms of function, aside from the common need for accounting to support site use. The word “service” remains simultaneously vague and overloaded, both during the 1970s, when ARPANET engineers were teasing out the research/service distinction, and in contemporary use. For example, in his 2018 book, David Clark notes that “service” is a term that has repeatedly confused him, and that he makes sense of it by reducing it to the following: “A service is something that you sell; it is how you make money” (Clark, 2018, p. 311).

One can attempt to elicit a more precise understanding of “service” from the Service Center examples. These included the Network Information Center (NIC) at SRI and Multics at MIT (Watson, 1971, p. 13); UCSB’s Simple-Minded File System (SMFS), which provided a secondary storage node that anyone on the ARPANET could pay to use, though it had limited availability and was not intended to become the storage node for the whole network (White, 1971); and UCSD’s FTP service site, which had to bill for usage in order to support itself (each FTP file transfer was billed separately in the accounting system, based on lower-level accounting for processor, I/O, and core usage, and, if used, external storage tapes) (Merryman, 1973).

However, these varied examples of site functions, particularly the inclusion of FTP, the costs of which were debated in unique detail (Section 2.1), indicate that the distinction between research and service was contentious and unclear. (This contention is explicitly addressed as a difference in “orientation” in RFC 231: “In the network at large, with our research orientation, personnel tend to have a different approach to computing than that required by a service bureau.” Service Centers were believed to be subject to “market-oriented requirements” to rate-limit use, while Research Centers were free from such forces (Heafner and Harslem, 1971, p. 4).) Moreover, it is not trivial to determine how billing should be handled when research and service sites share resources. Notably, Larry Roberts acknowledged this, but did not provide a clear idea of how to resolve it: “What happens when a research site talks to a billing site? I think it is do-able” (Edwin W. Meyer, Jr., 1970, p. 3). The UCSD FTP site, at least by 1973, had to account for usage and issue bills in order to support itself (it is unclear whether this site offered a “free” account for research purposes). However, as the RFCs debating “free” file transfer show, this need for accounting did not extend to all FTP usage, whether hosted at a service site or not—at least not immediately (Bressler, 1973; Pogran, 1973).

That is, even if accounting was not initially necessary at research sites, it was acknowledged that, as more users joined the ARPANET and wanted access to limited resources, it would eventually become necessary to account and bill for research usage as well. For example, as early as RFC 82, Douglas Engelbart was recorded as saying that SRI would eventually have to bill as more users came online: “A system will exist in Spring 1971, to allow an agent to insert into a catalog. The dialogue that goes on will determine which way the data base grows. We are pretty sure that eventually SRI will have to charge because of many potential users not at primary sites seeking limit[sic] resources. … Each site is registered. Any person who gets in on a site’s account has its access. We won’t worry about accounting until saturation occurs. We would like to encourage use of the agent system to create and use a survey of resources at each site” (Edwin W. Meyer, Jr., 1970, p. 7). Such site resources included SRI’s research theorem prover tools (Roberts and Wessler, 1970, p. 548). In fact, by 1987, the interagency research Internet proposal made it clear that accounting would absolutely be necessary for research (Leiner, 1987a, 1987b).

Appendix B The Decline of the Ideal of Resource-Sharing

Abbate describes this stage of the ARPANET’s development as “the decline of the ideal of resource sharing” (in line with the vision of the ARPANET outlined in Roberts (1967)). As the network spread to more ARPA contractor sites, the “demand for remote resources fell,” a claim she supports by pointing out that “many sites rich in computing resources seemed to be looking for users” (Abbate, 1999, p. 104). Abbate credits this “decline of the ideal” to severe usability issues in the ARPANET. Throughout Chapter 3, she discusses the practical difficulties of using the network—even for an action as simple as finding a particular resource, since the network lacked appropriate search tools (Abbate, 1999, p. 86). (See also RFC 531, concerning ARPANET usability problems, the creation of a resource notebook to mitigate them, and, ultimately, the additional documentation-reliability gaps the notebook itself highlighted (Padlipsky, 1973a).)

Even if users got past the initial hurdle of finding a resource, additional steps remained before they could access it. Abbate acknowledges that this in part had to do with accounting, but discusses the issue as one of usability: to access a remote host, the user first had to find and contact the appropriate administrator at the remote site to set up an account; if the remote site wanted to charge for usage, the user then usually also had to initiate a purchase order at their local institution. Only then could they access the resource, at which point it often remained a challenge to figure out how the resource was supposed to be used (Abbate, 1999, pp. 87-88).

This lack of usability, which reflected both technical and administrative issues, should ideally not have been a concern for individual users; in practice, it was a particular obstacle for novice users, and it became a bottleneck in the network’s ability to grow and reach saturation. Abbate therefore reasons that the ARPANET, having fallen short of its goal of facilitating resource-sharing, became a technology in search of an appropriate application and an interested user base. The ARPANET needed a fundamental shift in “identity and purpose” if it was to be a useful technology (Abbate, 1999, p. 109).

Abbate writes that this shift ultimately occurred when the ARPANET found such a “smash hit” application in email (Abbate, 1999, p. 106). In contrast to the technical and administrative usability issues that plagued resource-sharing, email was very simple to use; it connected users at remote sites, but was an application that users could access locally. Similarly, local area networks (LANs) became another popular, unexpected use of the ARPANET at this time, as LANs did not suffer the same usability issues as resource-sharing. In the early 1970s, sites like USC and SRI used the ARPANET as a LAN, such that, by 1975, 30% of traffic on the network was intra-node (as opposed to inter-node resource-sharing) (Abbate, 1999, p. 94). In short, Abbate argues that, while resource-sharing struggled to find users, these two uses were successes that validated the utility of the network.

Unpacking Abbate’s focus on usability, we argue that it is possible to understand the “decline of the ideal of resource sharing” and its attendant challenges for the would-be user in terms of the lack of appropriate, fleshed-out accounting mechanisms. While usability was certainly a relevant factor in the (lack of) ease of adopting the ARPANET for resource-sharing, it should not be conflated with a lack of desire to resource-share altogether. In fact, the debate over “free” file transfer (Section 2.1) and the attempt to classify different sites as research or service centers are evidence that ARPANET users wanted to resource-share (with services like UCSB’s SMFS and UCSD’s FTP node being sufficiently utilized to require accounting to recoup costs; see Appendix A). Instead, as we discussed in Sections 2 and 3, accounting was (and remains) an extremely challenging technical and administrative problem, which the early ARPANET architects did not have an appetite to address. As a result, accounting did not become a first-order feature of the network; it was instead a messy patchwork of non-interoperable, ill-defined systems that made recouping the costs of resource-sharing intractable.

From this perspective, it is possible to recast Abbate’s examples of unexpected usability successes—the “smash hit” of email and the proliferation of LANs—as successes owing to their independence from distributed accounting. Email leveraged the distributed network, but it did not have the same distributed accounting problems as resource-sharing. Aside from using gateway nodes for routing email to its final destination, email (at least in its 1970s iteration) was a local application, using local CPU and local storage. If accounting needed to be done, it could be handled locally at the site level, using local administrative procedures (if there were any), such as those used for time-sharing (Crocker et al., 1970, p. 4) (see also White, 1973, concerning UCLA’s internal billing for email). In other words, it is possible to view email as a success not just because of its ease of use, but also because it was possible to treat email as “free” in a way similar to the initial assumptions about the negligible costs of “free” file transfer, in which local billing could be used for local usage or billing could be punted back to ARPA. (Of course, just like “free” file transfer, email was not literally free. Email eventually became quite costly, particularly when spam became a growing practice. Spam put a strain on receivers’ local resource usage, which meant that it, too, became an accounting problem (Hambridge and Lunde, 1999). RFC 2635, “Don’t Spew: A Set of Guidelines for Mass Unsolicited Mailings and Postings (spam*),” discusses the costs of spam via a comparison with physical mail. It notes that it is easier to send email, so the scale of junk email is much greater. The costs are also quite different: it costs the sender very little to send spam, while “the recipient bears the majority of the cost” (Hambridge and Lunde, 1999, pp. 3-4). The RFC thus calls spam “unethical behavior” (Hambridge and Lunde, 1999, p. 3), and goes so far as to compare it to a seizure of private property, since it eats up local resources—a “theft of service” (Hambridge and Lunde, 1999, p. 4).) Similarly, LANs were intra-node, and thus also represented a problem of local, as opposed to distributed, accounting.

Appendix C Computational Methods

In this appendix we document our procedure for identifying RFCs related to our project. The accompanying code can be found at REDACTED.

C.1. Our rfc-scraper Tool

We developed a tool to scrape the RFC Editor, which pulls down the .txt version of each RFC along with its associated metadata. We wrote a separate script to filter and map these data to identify candidate RFCs. The filtering capability is simple: the script takes as an argument a search term to grep for (e.g., “account”, treated as a prefix and matched case-insensitively); if there is a match in an RFC, the RFC ID is mapped to the associated metadata and the results are saved to a .csv file. Separately, each matched instance in each RFC is printed to a .html file (with 5 leading and trailing lines of surrounding context, and each matched word highlighted in color and emphasized).
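To make the filtering step concrete, the following is a minimal sketch of such a filter script in Python (3.9+ for str.removeprefix). The directory layout, function name, paths, and metadata format here are hypothetical illustrations for exposition, not the actual implementation at REDACTED.

    import csv
    import html
    import re
    from pathlib import Path

    CONTEXT = 5  # leading/trailing lines of context kept around each match

    def filter_rfcs(rfc_dir, metadata, term, csv_out, html_out):
        """Grep scraped RFC .txt files for a case-insensitive prefix match on
        `term`, map matching RFC IDs to their metadata (.csv), and dump each
        matched instance with surrounding context (.html)."""
        pattern = re.compile(re.escape(term) + r"\w*", re.IGNORECASE)
        matches, snippets = {}, []
        for txt in sorted(Path(rfc_dir).glob("rfc*.txt")):
            rfc_id = txt.stem.removeprefix("rfc")
            lines = txt.read_text(errors="replace").splitlines()
            for i, line in enumerate(lines):
                if not pattern.search(line):
                    continue
                matches[rfc_id] = metadata.get(rfc_id, {})
                context = "\n".join(lines[max(0, i - CONTEXT):i + CONTEXT + 1])
                # Escape the context for HTML, then emphasize the matched words.
                block = pattern.sub(lambda m: f"<mark><b>{m.group(0)}</b></mark>",
                                    html.escape(context))
                snippets.append(f"<h3>RFC {rfc_id}, line {i + 1}</h3><pre>{block}</pre>")
        with open(csv_out, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["rfc_id", "metadata"])
            for rfc_id in sorted(matches, key=int):
                writer.writerow([rfc_id, matches[rfc_id]])
        Path(html_out).write_text("\n".join(snippets))

    # Example invocation (hypothetical paths and metadata):
    # filter_rfcs("rfcs/", {"136": {"title": "Host accounting..."}},
    #             "account", "account.csv", "account.html")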

For more detailed documentation of this code, please refer to REDACTED. The README in the repository has the most up-to-date information on the Python version, dependencies (including bash scripts called from within Python for extra efficiency), and the ANSI-to-HTML adapter package used to produce the color-coded .html search-results files. We intend to extend this tool with more sophisticated search functionality so that others can use it for additional RFC-related research in the future (e.g., filtering by time range, working group, or author, and other data manipulation functions).

C.2. Search Terms for this Project and Manual Verification Process

We ran the rfc-scraper tool on several search terms related to our project purpose (a short sketch illustrating the prefix-matching behavior follows the list):

  • “account”, which matches account* and served as our superset search for all accounting terms

  • “accounting”, which matches a subset of account*

  • “accountable”, which matches a subset of account*

  • “accountability”, which matches a subset of account*

  • “time-shar”, which matches time-shar* and therefore includes time-share, time-sharing, and similar terms

  • “survivabl”, which matches survivable and survivability

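The prefix semantics above can be illustrated with a toy sketch (invented example strings; the same case-insensitive regular-expression approach sketched in Appendix C.1):

    import re

    # Each term is treated as a case-insensitive prefix, i.e., term + \w*.
    for term, text in [
        ("account", "Accounting data support ACCOUNTABILITY."),
        ("time-shar", "early time-sharing systems"),
        ("survivabl", "survivable networks and survivability goals"),
    ]:
        pattern = re.compile(re.escape(term) + r"\w*", re.IGNORECASE)
        print(term, "->", pattern.findall(text))
    # account -> ['Accounting', 'ACCOUNTABILITY']
    # time-shar -> ['time-sharing']
    # survivabl -> ['survivable', 'survivability']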
We then manually read the .html files to determine which RFCs we should read in more detail for our project. Of the 9085 RFCs published at the time we ran the tool, this process enabled us to identify a subset of 136 RFCs to read in full. 19 of the 136 were false positives (i.e., they were not ultimately relevant). During detailed reading of the 136 RFCs, we further identified 12 relevant RFCs that our search terms had missed. (This accounts for the totals in the code book below: 136 + 12 = 148 RFCs read in full, of which 19 were false positives and 12 were false negatives.) We include our RFC tracker code book in Appendix C.3. As an aside, through this process we learned that our search tool bears a coincidental resemblance to (though is much simpler than) the search-and-filter tools built to identify time-datatype issues related to Y2K in Internet protocols (Nesser II, 1999).

C.3. Code Book

RFC_ID Read Coded Page Count False Positive False Negative
75 1 1 2
77 1 1 10
82 1 1 19
100 1 1 38
101 1 1 15
122 1 1 22
129 1 1 7
136 1 1 5
147 1 1 4
150 1 1 12
154 1 1 2 1
160 1 1 5
167 1 1 5
187 1 1 12 1
216 1 1 17
231 1 1 5
283 1 1 10 1
369 1 1 12
385 1 1 7
399 1 1 3
426 1 1 13 1
436 1 1 2 1
451 1 1 4
454 1 1 36
471 1 1 3
487 1 1 3
501 1 1 6
504 1 1 6
524 1 1 41
532 1 1 4
542 1 1 42
555 1 1 12
585 1 1 10
592 1 1 6
610 1 1 89
640 1 1 17
666 1 1 20
672 1 1 10
677 1 1 11
721 1 1 7 1
733 1 1 38 1
739 1 1 11 1
740 1 1 19 1
755 1 1 12
759 1 1 78
765 1 1 70
808 1 1 9 1
822 1 1 50
827 1 1 47 1
869 1 1 72
871 1 1 29
873 1 1 12
892 1 1 82
905 1 1 165
913 1 1 16 1
926 1 1 108
939 1 1 21
942 1 1 89
943 1 1 51 1
949 1 1 3 1
959 1 1 70 1
984 1 1 32 1
1009 1 1 55
1010 1 1 44
1015 1 1 25
1017 1 1 19
1052 1 1 15 1
1065 1 1 21 1
1066 1 1 90 1
1068 1 1 27
1077 1 1 47
1087 1 1 2 1
1095 1 1 67
1099 1 1 22
1102 1 1 22 1
1104 1 1 10
1105 1 1 17 1
1107 1 1 19
1109 1 1 8 1
1114 1 1 25 1
1125 1 1 21
1126 1 1 25
1135 1 1 33 1
1157 1 1 36 1
1167 1 1 8
1180 1 1 28
1192 1 1 13
1244 1 1 101
1272 1 1 19
1281 1 1 10
1287 1 1 29
1297 1 1 12 1
1310 1 1 23 1
1322 1 1 38
1346 1 1 6
1380 1 1 22
1454 1 1 15
1550 1 1 6
1633 1 1 34
1671 1 1 9
1672 1 1 4
1675 1 1 5
1681 1 1 6
1716 1 1 193
1726 1 1 32
1855 1 1 22
1862 1 1 28
2058 1 1 65
2063 1 1 38
2064 1 1 39
2123 1 1 35
2139 1 1 26
2194 1 1 36
2196 1 1 76
2271 1 1 57 1
2310 1 1 6
2475 1 1 37
2477 1 1 13
2504 1 1 34 1
2512 1 1 16
2513 1 1 30
2607 1 1 16
2620 1 1 14
2621 1 1 16
2626 1 1 276
2635 1 1 19
2702 1 1 30
2722 1 1 49
2768 1 1 30
2804 1 1 11
2828 1 1 213
2866 1 1 29
2867 1 1 12
2881 1 1 21
2903 1 1 27
2904 1 1 36
2905 1 1 54 1
2906 1 1 24
2924 1 1 37
2975 1 1 55
2977 1 1 28
2989 1 1 29
2990 1 1 25
3127 1 1 85
3198 1 1 22
3272 1 1 72
3334 1 1 45
3539 1 1 42
Total 148 148 4776 19 12