RFC-164

by Darius Kazemi, June 13 2019

In 2019 I'm reading one RFC a day in chronological order starting from the very first one. More on this project here. There is a table of contents for all my RFC posts.

Extensive notes from SJCC

RFC-164 is titled “Minutes of Network Working Group Meeting 5/16 through 5/19/71”. It's authored by John Heafner of RAND and dated May 16th through May 19th (the first RFC I've seen with a date range, though the date is implied in the title of the document rather than in the usual header location).

A business note: my ten-month Mozilla Fellowship has supported this project up until now, but that has ended. If you like what I'm doing here, please consider supporting me via my Patreon.

The technical content

Whereas RFC-163 was a brief summary of a portion of the Network Working Group meetings at the Spring Joint Computer Conference, this RFC is the full report, weighing in at 32 pages.

About half of the document contains updates from the various Host sites and related institutions. This is in line with the request of Steve Crocker in RFC-116 a month prior. The updates are in bulleted list form, with each site reporting its current status and plans.

Most sites are either already in compliance with RFC-107, are about a month out from achieving compliance, or are not online yet and make no promises at all.

The MIT Project MAC site reports that they've been playing around with a non-FTP form of ASCII file transfer.

There is a “resource notebook” that “has been compiled and distributed”, containing special information about 12 of the 19 total ARPANET sites. This is the kind of physical handbook you'd want if you wanted to know, for example, what the login procedures are for a site you've never connected to before.

There are updates from various governmental organizations outside of the U.S. military, including Terry Shepard of the Canadian Computer/Communications Task Force, who attended the February NWG meeting described in RFC-101. Apparently one of their computer network worries is related to “the size of the country in relation to the sparseness of the population”. A UK government representative, Eric Foxley, is also present. According to his personal website he was faculty at Nottingham University and also Secretary of the “Inter University Committee on Computing” which dealt with the United Kingdom Computer Board, their government funding source. Rather tantalizingly, his update notes that the UK post office “has plans for a digital network in the distant future”, perhaps related to the Post Office Telecommunications department that provided a kind of pre-Internet ISP in the form of the PSS network in the late 1970s.

There is also an update from EDUCOM, which was a big association for the use of computers in education founded in 1964. EDUCOM is known today as Educause after a 1998 merger. At one point J.C.R. Licklider was on their board of trustees! They would eventually run EDUNET, a network resource accessible by members of the organization. Notably:

They have conducted a survey of 70 universities, polled about their interests in the ARPA network: 60 of 70 are interested, 14 have money and are ready to become sites.

The Network Information Center reports that they plan to have full online document access by the summer of 1971, with the ultimate goal being a file transfer protocol based system that allows remote text editing of documents. The offline (mail-based) distribution system will continue to operate alongside the online system.

The plan is for RFC numbers to “eventually go away” in favor of NIC numbers. (This obviously did not happen as the RFC numbering continues 50 years later.)

Telnet is discussed at the meeting and various “issues were raised but not resolved”. There is discussion of RFC-158, which was written essentially at the same time that these meeting notes were taken but was assigned a lower RFC number as part of the glut of documents that came out of the NWG meeting.

Both the RFC-114 file transfer protocol and the RFC-122 Simple-Minded File System are discussed. The latter is described as “an operational program; not a proposal”. Apparently a bunch of host sites are already using it or soon will be.

Socket structure is going to be further discussed, particularly the issue of whether socket identifiers should be 16 bits or 32 bits, as there is some worry that the TIP/IMP will not be able to handle 32-bit sockets.
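To make the memory stakes concrete, here's a toy Python sketch of what the wider identifiers cost per reference. The one-byte-host-plus-socket layout is invented purely for illustration and isn't taken from any ARPANET document.

```python
import struct

# Hypothetical wire layout, invented for illustration (not from any ARPANET
# spec): a 1-byte host number followed by the socket identifier.
host = 5
ref16 = struct.pack(">BH", host, 0x002A)      # 16-bit socket id: 3 bytes per reference
ref32 = struct.pack(">BI", host, 0x0000002A)  # 32-bit socket id: 5 bytes per reference

# Every table entry and message that names a socket grows accordingly --
# the kind of memory cost a small machine like the TIP would worry about.
print(len(ref16), len(ref32))  # 3 5
```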

The Initial Connection Protocol, with its various race conditions mentioned in RFCs 123, 127, and 151, now has a cleanup committee assigned to it.

As promised in RFC-140, time is set aside for people to talk about the similarities between operating system protocols and network protocols:

An analogy was drawn on the basis that the ARPA Network with its hosts and protocols is in a sense an "operating system" and that a study of what makes a good operating system might help define what makes a good ARPA Network.

There is a presentation by Art J. Bernstein of SUNY Stony Brook on the features of a flexible operating system, namely:

  1. a flexible file structure involving directory trees, active file tables, etc.
  2. a process hierarchy involving a “father-son relationship” where a father process can spawn a son process
  3. a system for interprocess communication involving “channels, status return, and software interrupts”

The note-taker notes that these are all primary features of MULTICS.
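Those features map almost directly onto the Unix model that came later. As a modern illustration (mine, not anything from the RFC), here is a minimal, Unix-only Python sketch of the “father-son relationship” with a channel and a status return:

```python
import os

# A "father" process spawns a "son" and talks to it over a channel (a pipe),
# then collects a status return when the son exits. Unix-only (uses os.fork).
read_end, write_end = os.pipe()
pid = os.fork()
if pid == 0:
    # Son: receive a message from the father, then exit with status 0.
    os.close(write_end)
    print("son received:", os.read(read_end, 1024).decode())
    os._exit(0)
else:
    # Father: send a message down the channel, then wait for the son.
    os.close(read_end)
    os.write(write_end, b"hello from father")
    os.close(write_end)
    _, status = os.waitpid(pid, 0)
    print("son exited, raw wait status:", status)
```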

Then Bob Metcalfe offers a retort of sorts, specifically that while tree structures make sense for operating systems, the network is a directed graph and that solutions that apply to tree structures don't generalize to directed graphs. (A tree structure is like a corporate organization chart where every employee has one person they report to, up to the highest levels of the organization which is the “root” of the tree. A directed graph would be kind of like if you worked at a company and it was possible for you to be your manager's manager... much more wild and freeform.)
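A toy sketch of Metcalfe's point (the example names are mine): an algorithm that walks parent links always terminates on a tree, but on a directed graph it can cycle forever unless it does extra bookkeeping.

```python
# Walking parent links terminates on a tree, but on a directed graph it can
# cycle forever without extra bookkeeping.
tree = {"engineer": "manager", "manager": "ceo", "ceo": None}
graph = {"you": "manager", "manager": "you"}  # you are your manager's manager

def chain_to_root(parents, node):
    seen = []
    while node is not None:
        if node in seen:
            return seen + ["<cycle!>"]  # a tree-only algorithm would loop here
        seen.append(node)
        node = parents[node]
    return seen

print(chain_to_root(tree, "engineer"))  # ['engineer', 'manager', 'ceo']
print(chain_to_root(graph, "you"))      # ['you', 'manager', '<cycle!>']
```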

The Data Reconfiguration Service committee reports that they have solved the remaining technical issues with RFC-138, and a couple of sites have committed to implement a data reconfiguration service and make it available to the network. However, it seems like RAND is the only site that has really bought in at this point, which makes sense since they are the originators of the proposal.

Next the data management committee presents information related to RFC-144 on sharing data over the network. Dick Winter of the Computer Corporation of America notes that with multiple data computers, data sharing

becomes decentralized.  All data computers have identical hardware and software.  Their objective is to dispose and restructure data throughout the Net to optimize its use, i.e., relocate it close to where it is used most heavily.  For small files of wide interest multiple copies can be maintained.

This is extremely similar to the modern concept of a content delivery network, where internet content is replicated in caches distributed around the world so that a user in Tokyo accessing, say, The New York Times, can get their data from a server in Tokyo instead of a server in New York.
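As a toy version of that idea (all site names and latency numbers are made up), a request is simply served from whichever replica is closest:

```python
# Toy version of replica placement: serve each request from the nearest copy.
replica_sites = {"tokyo", "new_york"}
latency_ms = {"tokyo": 8, "new_york": 180}  # round-trip times from a user in Tokyo

def nearest_replica(sites, latency):
    return min(sites, key=lambda site: latency[site])

print(nearest_replica(replica_sites, latency_ms))  # 'tokyo'
```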

Mitre attempted to discuss their data management system with the NWG but it got derailed into a discussion of general principles of data management.

The TIP, or Terminal IMP, is discussed at length. This is essentially an upgrade of the IMP that makes it smarter about things like protocols and line-vs-character communication and so on. Future sites will be installing TIPs instead of IMPs. A site that wants a TIP will need to pay for it. A high-end estimate is $100,000, or about $600,000 in 2019 dollars. There is also a lease program offered at $40,000/year ($250k/year in 2019) over three years with a two-year minimum.
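A quick back-of-the-envelope check using those figures shows why the minimum commitment matters: by year three, leasing costs more than buying outright.

```python
# TIP lease vs. purchase, 1971 dollars (figures from the meeting notes).
purchase = 100_000
lease_per_year = 40_000

for years in (2, 3):
    print(f"{years} years leased: ${lease_per_year * years:,} vs ${purchase:,} to buy")
# 2 years leased: $80,000 vs $100,000 to buy
# 3 years leased: $120,000 vs $100,000 to buy
```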

Larry Roberts offers some general comments looking at the future of the ARPANET.

Lastly there is concern about the size of the Network Working group, though there are no definitive takeaways from this discussion.

Analysis

Broadly speaking, this is a very exciting time for the network. Many sites now support many protocols and are actively sharing computing resources, computer time, and knowledge with one another.

For example, not every site is making their own NCP at this point. SRI-ARC is running TENEX, which is a BBN-developed operating system, so they are just waiting for BBN to release the new version of the NCP, at which point they'll install it.

I believe Peggy Karp of Mitre is the only woman in attendance at this meeting.

Further reading

Computer Corporation of America checks in and reports that they offer a system that has something called “laser memory”. This must be a reference to the UNICON 690 Laser Mass Memory System by the Precision Instrument Company. This laser memory system powered the trillion-bit store, essentially a giant network hard drive.

According to an ARPA survey, the laser memory system is a permanent physical storage medium and works by using a laser to punch holes of 3 microns in diameter into polyester sheets coated with rhodium metal. For reference, a human hair is about 50 microns wide, so you could fit about two ASCII characters into a space the width of a human hair. It's unclear to me whether it's truly a read-write system, or whether it simply simulates being read-write by considering deleted data to be unused space on the physical strips. I did find a 1972 survey that refers to the UNICON 690-212 as a “non-erasable” storage medium.
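To spell out that arithmetic (the hole size and hair width are from the survey; one bit per hole is what the estimate implies):

```python
hair_um = 50        # approximate width of a human hair, in microns
hole_um = 3         # diameter of one laser-punched hole, in microns
bits_per_char = 8   # one ASCII character stored as 8 bits

holes_across_hair = hair_um // hole_um     # 16 holes, one bit each
print(holes_across_hair // bits_per_char)  # 2 -- about two characters per hair-width
```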

According to this 1972 ARPA report, the plan was to use the laser memory system as a “tertiary store”, so perhaps for backups, with conventional disks providing the read/write mass storage? There is also discussion of timing and access with the laser memory system. Because the system involves polyester strips and servos and lasers and all that stuff, the system needs “between five and ten seconds” to make a file ready for reading or writing.

When querying a database for records, the system is highly dependent on how physically close data records are to one another on these strips: if a query result consists of 10 records that are near each other, it can return the data in less than half a second. If the 10 records are physically scattered on the storage medium, it can take up to 5 seconds.
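As a toy model of that locality effect (the timings come from the report; the linear cost model is my assumption):

```python
# Each strip a query touches pays a fixed positioning cost.
COST_PER_STRIP_S = 0.5

def query_seconds(strips_touched):
    return strips_touched * COST_PER_STRIP_S

print(query_seconds(1))   # 0.5 -- all 10 records clustered on one strip
print(query_seconds(10))  # 5.0 -- the same 10 records scattered across strips
```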

This 1975 paper from the Computer Corporation of America describes the trillion-bit store in more detail.

Eric Foxley, the UK representative, has a cool online computer museum page with lots of photos.

You can read a history of EDUCOM on their archives. I also stumbled on this neat 1996 interview with Vint Cerf in their archives.

How to follow this blog

You can subscribe to this blog's RSS feed or if you're on a federated ActivityPub social network like Mastodon or Pleroma you can search for the user “@365-rfcs@write.as” and follow it there.

About me

I'm Darius Kazemi. I'm an independent technologist and artist. I do a lot of work on the decentralized web with ActivityPub, including a Node.js reference implementation, an RSS-to-ActivityPub converter, and a fork of Mastodon, called Hometown. You can support my work via my Patreon.