365 RFCs

Commenting on one RFC a day in honor of the 50th anniversary of the first RFC.

by Darius Kazemi, Jan 11 2019

In 2019 I'm reading one RFC a day in chronological order starting from the very first one. More on this project here. There is a table of contents for all my RFC posts.

GORDO and the IMP

RFC-11 is another Gérard Deloche paper. This one is dated Aug 1st, 1969 and is an extremely in-depth expansion on the contents of RFCs 7, 8, and 9. It's our longest RFC so far, clocking in at a monstrous 54 typewritten pages.

It's called “Implementation of the HOST – HOST Software Procedures in GORDO” and details how a HOST running the GORDO operating system would connect to another HOST on the network (more on that operating system in my notes on RFC-7).

The technical content

This paper has enough technical detail that if you were a programmer whose job was writing software for GORDO, you could absolutely use the information here to connect a machine to ARPANET. Though remember, ARPANET was still 3 months away at this point, and this is a spec for how you would talk to a theoretical future ARPANET.

The main goal of this document is to outline how multiple users connected to a GORDO machine via a time-sharing system could all access ARPANET simultaneously. So it's not just one user at a time: an entire department could connect to a GORDO machine at UCLA from the terminals at their desks and each carry on what seems like a simultaneous conversation with a remote computer. The paper defines a “Network program” whose job is to manage all the input from different users, send that input over the network, receive data back from the network, and then parcel it back out to the users again. In technical terms, this type of activity, where you take multiple streams of information, put them into one stream, and then pick them back out again, is called multiplexing and demultiplexing, which is the terminology used in this paper.
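
To make the multiplexing idea concrete, here's a toy sketch in Python. Nothing like this appears in the RFC; the class and method names are all mine, and the real Network program dealt with GORDO processes and IMP messages rather than dictionaries. But the bookkeeping is the same shape: tag each user's traffic with a link number on the way out, and use that number to route replies back on the way in.

    # Toy multiplexer/demultiplexer, loosely in the spirit of RFC-11's
    # "Network program". All names here are invented for illustration.
    from collections import defaultdict

    class NetworkProgram:
        def __init__(self):
            self.user_to_link = {}          # local user -> link number
            self.link_to_user = {}          # link number -> local user
            self.inbox = defaultdict(list)  # per-user queue of received data

        def attach(self, user, link):
            """Associate a time-sharing user with a network link."""
            self.user_to_link[user] = link
            self.link_to_user[link] = user

        def multiplex(self, user, data):
            """Tag a user's outgoing data with its link number."""
            return (self.user_to_link[user], data)

        def demultiplex(self, link, data):
            """Route data arriving on a link back to the user who owns it."""
            self.inbox[self.link_to_user[link]].append(data)

    net = NetworkProgram()
    net.attach("alice", 12)
    net.attach("bob", 25)
    print(net.multiplex("alice", "LOGIN"))   # (12, 'LOGIN')
    net.demultiplex(25, "data for bob")
    print(net.inbox["bob"])                  # ['data for bob']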

The paper states up front that it “is convenient to consider the Network as a black box – a system whose behavior is known but whose mechanisms are not”. “Network” is capitalized throughout, which reminds me of early internet style where “internet” was always capitalized like a proper noun. Nobody really does that anymore, although I remember bitter fights about it.

In terms of content, this paper covers what was already covered by Deloche in RFCs 7, 8, and 9. The main difference is that it explicitly covers the GORDO implementation. We learn that GORDO's file system is centered around pages. A page is basically a chunk of memory that is organized so that it's very fast to get to, and once we've reached the page we can then more slowly go through its contents. Imagine a paper book: I could tell you to count up to the 10,000th word and tell me what it is, but if I'd kept track of exactly how many words are on each page of the book, it would be faster to ask you to look up page 204 and tell me the 52nd word on that page. A simple file system is basically a whole bunch of tables of data that keep track of how many words are on each page of this book so we can quickly look up the information we need!
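
If you want that book analogy in code, here's a tiny sketch (mine, not the RFC's; I'm assuming fixed-size pages, which real file systems and GORDO's paging almost certainly handled with more machinery):

    # Toy page lookup: find the Nth word by jumping to a page, then counting
    # within it. The fixed page size is an assumption for illustration.
    WORDS_PER_PAGE = 300

    def locate_word(word_index):
        """Return (page, offset) for a 0-based word index."""
        return divmod(word_index, WORDS_PER_PAGE)

    print(locate_word(10_000))   # (33, 100): jump to page 33, count 100 words in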

GORDO contains concepts of processes, forks of processes, and users/jobs. If you're familiar with Unix you've certainly seen these terms before, and I am guessing GORDO borrowed these concepts from Multics, the early time-sharing operating system developed by MIT, GE, and Bell Labs. Unix draws a direct lineage to Multics which would make GORDO a kind of cousin of early Unix.

Overall this document really is just a fleshed-out version of the previous three Deloche RFCs. This is the last RFC Deloche will author, and since he graduated from UCLA with a Ph.D. in 1970 I'm imagining that these four RFCs represent his piecemeal thesis work, which he wrapped up with this RFC before moving on to whatever came next in his career. This paper does have the smell of a Ph.D. supervisor going “look, document what you're working on and we'll call it a thesis and then you can leave the lab.” Ahem, not that I've witnessed this exact thing happen in engineering schools, heaven forbid.

Analysis

There is no mention in this RFC of Deloche's claim in RFC-9 that there are 256 links between HOSTs, but there's also no claim of 32 links like in other docs. We do see specific link numbers mentioned in examples, but they only go as high as 25. Odd. I wonder if he's hedging after the possibly mistaken claim of 256 links? Or maybe “link” was already a semantically overloaded term before the internet even existed and it's referring to something totally different.

Of note was a single mention of “slave mode” for a process. There's been a lot of heated discussion around master/slave terminology in computing in the last ten years so this kind of jumped out at me.

Further reading

A history of Multics compiled by people who contributed to the operating system over the years (the whole site is worth checking out).

How to follow this blog

You can subscribe to this blog's RSS feed or if you're on a federated ActivityPub social network like Mastodon or Pleroma you can search for the user “@365-rfcs@write.as” and follow it there.

About me

I'm Darius Kazemi. I'm a Mozilla Fellow and I do a lot of work on the decentralized web with both ActivityPub and the Dat Project.

by Darius Kazemi, Jan 10 2019

In 2019 I'm reading one RFC a day in chronological order starting from the very first one. More on this project here. There is a table of contents for all my RFC posts.

Shuffling the deck

RFC-10 explicitly revises RFC-3, which you may recall was our first attempt at defining what an RFC is. This is also titled “Documentation Conventions” and is once again authored by Steve Crocker, author of the first RFC. It was published July 29, 1969.

The technical content

The main updates to this are the who of RFCs: who is in the Network Working Group and who is on the distribution list.

The list of who the NWG “seems to” consist of (still using that awesome tentative language) has changed SRI's roster from Jeff Rulifson and Bill Duvall to Elmer Shapiro (author of RFC-4) and Bill English. Gerard Deloche is removed from the UCLA reps. John Haefner of RAND is added (I mentioned RAND in my RFC-1 writeup because they published a paper in 1962 on redundant communications networks). Paul Rovner and Jim Curry of Lincoln Labs are added.

The duty of assigning serial numbers for RFCs is now passed from Bill Duvall of SRI to Steve Crocker of UCLA.

RAND, SRI, SDC, and Lincoln Lab are added to the distribution list. (Remember in RFC-3 I was wondering why SRI was left off the distribution list? Well, that injustice is now rectified.)

Analysis

The “SDC” added to the distribution list stands for System Development Corporation, considered the first dedicated computer software company in history. Based in Santa Monica, California, it was founded in 1955 as a RAND spinoff specifically to build software for the US military. SDC had a long-distance connected computer as early as 1965 talking to MIT's Lincoln Lab on the opposite coast. This wasn't packet-switched and was a 1:1 direct communication so it wasn't internet-like, but still, it's obviously impressive work and it makes complete sense why they'd be an early ARPANET participant. (The project lead on those early connected SDC/Lincoln computers was Lincoln Lab's Larry Roberts, who would eventually lead the entire ARPANET project.)

Further reading

A huge collection of System Development Corporation papers resides at the Charles Babbage Institute Archives at the University of Minnesota. They don't seem to be scanned for online browsing. I hope to make it out there to examine these myself.

How to follow this blog

You can subscribe to this blog's RSS feed or if you're on a federated ActivityPub social network like Mastodon or Pleroma you can search for the user “@365-rfcs@write.as” and follow it there.

About me

I'm Darius Kazemi. I'm a Mozilla Fellow and I do a lot of work on the decentralized web with both ActivityPub and the Dat Project.

by Darius Kazemi, Jan 9 2019

In 2019 I'm reading one RFC a day in chronological order starting from the very first one. More on this project here. There is a table of contents for all my RFC posts.

Host software, take 3

RFC-9 is, like RFC-8, a scan of a document rather than a transcription. It is also authored by Gérard Deloche. Mercifully, this one is typewritten rather than handwritten. It's dated May 1, four days earlier than RFC-8. It seems to be a more formal document and it refers to itself as a “paper” although I'm not sure if it's a reprint of a paper that was published elsewhere (like RFC-5 was). It shares its title, “Host Software”, with RFCs 1 and 2. It covers the same material and like RFC-8 seems to be a kind of synthesis document, though with some crucial differences.

The technical content

Up until this RFC we have consistently heard that there are 32 links (bidirectional data channels) between HOSTs (link 0 used for control and links 1 through 31 used to transfer data). This paper suddenly claims there are 256 links between HOSTs, though 0 is still the control link. I'm not sure what changed here, or if this is simply a proposal and was never implemented. I skimmed future RFCs as well as BB&N IMP reports and a formal paper on the Network Control Center operations from 1972 and couldn't find anything concrete on the number of links. (Interestingly “links” seems to be a Network Working Group term while BB&N refers to “data lines”, which is in my opinion a clearer term for what these things actually are.)

Apparently the IMP network couldn't handle more than 64 simultaneous connections! Or at least this was the assumption as of May 1, 1969. Remember, no actual messages were sent until October, so things could change by then.

Section 2.2.2, combined with Figure 3 in the document, provides the most specific example yet of what two HOSTs talking to each other over the network would look like. The specifics are worth reading in the PDF of the RFC itself, but the example goes through the following steps (I've also restated them as a little sketch after the list):

  • Initial request by HOST X over link 0 to open a primary (command) connection to HOST Y via link 12
  • HOST Y acknowledges the request and says “yup let's talk via link 12”
  • HOST X sends login data over link 12 to HOST Y, which sends back “yup, you can now send me commands”
  • HOST X asks to communicate with a program on HOST Y, and data about this program will be sent over auxiliary link 25
  • Files for that program are exchanged back and forth over link 25
  • Finally, HOST X communicates via link 0 that it is closing its connections on link 12 and link 25, freeing those up for other HOSTs
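
Here's that same choreography compressed into a few lines of Python. To be clear, this is just a restatement of the example, not the actual control message formats (those are in the RFC); the step descriptions and the code are mine.

    # The RFC-9 connection example as a sequence of (sender, link, action) steps.
    # Link numbers 0, 12, and 25 come from the example; the rest is invented.
    CONTROL_LINK = 0

    steps = [
        ("X", CONTROL_LINK, "request a primary connection to HOST Y on link 12"),
        ("Y", CONTROL_LINK, "accept: talk to me on link 12"),
        ("X", 12,           "login data"),
        ("Y", 12,           "logged in, send commands"),
        ("X", 12,           "run this program; use auxiliary link 25 for its data"),
        ("X", 25,           "files exchanged back and forth"),
        ("X", CONTROL_LINK, "closing links 12 and 25"),
    ]

    for sender, link, action in steps:
        print(f"HOST {sender} -> link {link}: {action}")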

The paper also describes data structures in detail. An individual HOST keeps a “host table”, which is basically N rows of 256 columns each. The columns represent the 256 possible links, so if HOST 3 is using 4 different links on the network, the row for HOST 3 will have a 1 in 4 of its columns and a 0 in the rest. The document notes that the table should never have more than 64 1s at any given time, because otherwise the network would be overloaded. I think the idea was for each HOST machine to keep track of this and not attempt to open more connections if it knew there were already 64 in use.

The HOST also has a “link table”, with one row per established link containing detailed information about that link, such as its current status and which local user is monopolizing it. This may also contain information about remote links, but it's unclear to me from the document how that information would be passed back from a remote host. (I know from reading ahead to RFC-11 that this table wouldn't know about any links that don't involve the local HOST.) There is also a “user table” that contains information about specific users: for a given user, how many links they have open at a given time and the IDs of those links.
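
Here's how I picture those three tables, as a sketch. The 256-column bitmap and the 64-connection ceiling come from the RFC; the field names, the dictionary layout, and the open_link helper are all my invention.

    # Invented sketch of RFC-9's host table, link table, and user table.
    NUM_LINKS = 256
    NETWORK_LIMIT = 64                 # assumed IMP-network-wide connection ceiling

    host_table = {}                    # host number -> 256-entry list of 0/1
    link_table = {}                    # link number -> {"status", "user"}
    user_table = {}                    # user id -> list of link numbers they hold

    def links_in_use():
        return sum(sum(row) for row in host_table.values())

    def open_link(host, link, user):
        """Record a new link, refusing to exceed the network-wide limit."""
        if links_in_use() >= NETWORK_LIMIT:
            raise RuntimeError("network already at its 64-connection limit")
        host_table.setdefault(host, [0] * NUM_LINKS)[link] = 1
        link_table[link] = {"status": "open", "user": user}
        user_table.setdefault(user, []).append(link)

    open_link(3, 12, "alice")
    open_link(3, 25, "alice")
    print(sum(host_table[3]), user_table["alice"])   # 2 [12, 25]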

Analysis

I'm left scratching my head over a lot of things implied but never stated outright by this document. Fortunately RFC-11 will expand upon this greatly — it's basically this 15 page paper but with three more months of work put into the details, and expanded to 54 pages. But we'll get there in two more days.

Further reading

Nothing this time.

How to follow this blog

You can subscribe to this blog's RSS feed or if you're on a federated ActivityPub social network like Mastodon or Pleroma you can search for the user “@365-rfcs@write.as” and follow it there.

About me

I'm Darius Kazemi. I'm a Mozilla Fellow and I do a lot of work on the decentralized web with both ActivityPub and the Dat Project.

by Darius Kazemi, Jan 8 2019

In 2019 I'm reading one RFC a day in chronological order starting from the very first one. More on this project here. There is a table of contents for all my RFC posts.

An attempt at synthesis

RFC-8 is the first RFC in the official archives that is only available in scanned form. It was never typeset. It is handwritten.

At least the title page is typed on a typewriter, so we know without a doubt that it's called “ARPA Network Functional Specifications”, dated May 5th 1969, and authored by Gérard Deloche of UCLA, also the author of RFC-7. You'll recall that RFC-7 has a note apologizing for the fact that the handwriting is atrocious and it was very difficult to transcribe. This one is too, but this time we get to enjoy squinting at the handwriting ourselves.

For reference, this is what we're dealing with:

a bunch of squiggly hand writing

Not the worst it could be but... not great.

The technical content

The document spells out the math for HOST-to-HOST checksums (error checking). It mentions that IMP-to-IMP checksums exist but that it's a BB&N thing so not relevant to this working group right now.
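
The actual formula is worth reading in the scan itself. Purely to give a flavor of what “checksum” means here, a generic 16-bit additive checksum with end-around carry looks like the snippet below; to be clear, this is a textbook illustration, not the math from RFC-8.

    # Generic 16-bit checksum illustration (NOT the RFC-8 formula): sum the
    # message words, folding any carry back into the low 16 bits, and send the
    # result along so the receiver can recompute it and compare.
    def checksum16(words):
        total = 0
        for w in words:
            total += w
            total = (total & 0xFFFF) + (total >> 16)   # end-around carry
        return total

    message = [0x48, 0x45, 0x4C, 0x50]    # "HELP" as 16-bit words
    print(hex(checksum16(message)))       # 0x129; receiver recomputes and compares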

It reiterates what was already said in RFC-1 and RFC-2 about the link system between hosts: 32 links, 0 is a control link, these are TTY-style connections.

There's a summary of the Decode-Encode Language as defined in RFC-5, and a repeat of a bunch of the information in RFC-7.

Analysis

This seems to be a synthesis document: an attempt to lay out what a connection from UCLA to SRI using DEL to do interactive remote applications on SRI's cool graphical operating system NLS would look like. The only thing “new” I can find in here is the actual HOST-to-HOST checksum math. Plus a cool stick figure.

an actually totally normal looking stick figure

Further reading

This is an aside, but the scanned copy appears to be Jon Postel's copy of the RFC. Postel was the editor of the RFC series from almost the very beginning of the series until his untimely death in 1998. Postel was also in charge of top level domain assignment and IP addresses before ICANN was established right around the time of his death. For many years Postel essentially was the internet. There's a lot of information about him at USC's Postel Center and RFC-2468 is a remembrance of Postel by Vint Cerf.

How to follow this blog

You can subscribe to this blog's RSS feed or if you're on a federated ActivityPub social network like Mastodon or Pleroma you can search for the user “@365-rfcs@write.as” and follow it there.

About me

I'm Darius Kazemi. I'm a Mozilla Fellow and I do a lot of work on the decentralized web with both ActivityPub and the Dat Project.

by Darius Kazemi, Jan 7 2019

In 2019 I'm reading one RFC a day in chronological order starting from the very first one. More on this project here.

Too long; didn't read

RFC-7 is dated May 1969 and titled “HOST-IMP Interface”. It's authored by one Gérard Deloche, a Frenchman who was a graduate student in Computer Science at UCLA. The version hosted at the IETF opens with this note:

[The original of RFC 7 was hand-written, and only partially illegible copies exist. RFC 7 was later typed int NLS by the Augmentation Research Center (ARC) at SRI. The following is the best reconstruction we could do. RFC Editor.]

As a result there are sections transcribed as “(unreadable)” in the document.

We also know that Deloche is described as “somewhat independent” of the working group in RFC-1 by Steve Crocker.

If you'll recall from my reading of RFC-1, the HOST is basically a server on the network and the IMP is close to what we'd consider a router today. The IMPs were made by BB&N in New England. The HOSTs could be pretty much any computer in existence.

The technical content

The document states that “This study is based upon a study of the BBN Report No. 763”. I'm 90% sure this is an error, either in transcription or in the original document, and they are referring to BBN Report No. 1763, which is a monster 82-page document released in January 1969 titled “Initial Design for Interface Message Processors for the ARPA Computer Network”. According to Hobbes' Internet Timeline, January 1969 is when BB&N was awarded the ARPA packet switching contract. It seems like this document would have been part of the proposal process. Perhaps parts of it were. In one version of the document I was able to find (linked in “Further reading” below) there was a preface missing from the bitsavers version I link above:

A contract was recently awarded to Bolt Beranek and Newman Inc (BBN) for the implementation of a four-node group of interface message processors (IMPs) for the ARPA computer network. This document describes our preliminary design plans for the IMPs and the network protocol.

Since implementation is only just beginning, some aspects of this design will probably change. This document is for information only and should not be construed as a firm specification.

Cambridge, Mass.

January 6, 1969

The summary lays out the same 16-bit header that I describe in my reading of RFC-1. Maximum length of a single message is 1006 bytes (for reference this is roughly 2 short paragraphs of a plain text document), so your HOST had better be able to break up bigger messages than that into a series of multipart messages. It talks about how after the header there is a 16-bit demarcator word that indicates the text of the message is about to begin.
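
So a HOST wanting to send something longer has to chop it up before handing it to the IMP. Here's a naive sketch of that chopping (the 1006-byte figure comes from the RFC; the function and everything around it are mine, and I'm ignoring the header and demarcator word entirely):

    # Naive message splitting: break a long payload into <= 1006-byte pieces.
    MAX_MESSAGE_BYTES = 1006

    def split_message(data: bytes):
        """Yield chunks small enough to send as individual network messages."""
        for i in range(0, len(data), MAX_MESSAGE_BYTES):
            yield data[i:i + MAX_MESSAGE_BYTES]

    parts = list(split_message(b"x" * 2500))
    print([len(p) for p in parts])   # [1006, 1006, 488]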

That's basically it — if you read the BB&N report you'll see that once the IMP gets the packet it starts to add more complex headers than that for IMP-IMP communication, and a whole bunch of other stuff, but since this RFC is written for HOST implementors who just want to get their data over to the IMPs, they can skip all of that info.

Analysis

I choose to interpret “This study is based upon a study of the BBN Report” as “all of these RFCs so far have been like 5 pages long and none of you want to read a damn 80 page document so here's the TL;DR of how IMPs are designed.” This RFC is longer than most RFCs so far, but is also about 1/10th the size of the original report that it summarizes.

In a way this document is a “black boxing” of the IMP system. If all you care about is the IMP/HOST interface, why bother reading 82 pages of documentation where only 5-10 of them are relevant to that particular interface? This is specialization of expertise in action.

The RFC also mentions “Gordo documentation” — it seems GORDO was the operating system of the HOST at UCLA in 1969. But I couldn't find any more information about it, outside of RFCs, aside from a missing reference on a Wikipedia disambiguation page! I found a listing of the Leonard Kleinrock papers that implies there was a meeting some time in late 1969 or early 1970 where GORDO's name was changed to... SEX (Sigma EXchange system).

SPADE Admin Note 20: SPADE Meeting note (name change GORDO to SEX, Sigma EXchange System)

Good job guys, GORDO was just too silly of a name for an operating system, really glad you changed it to SEX.

At any rate, this OS ran on the SDS Sigma 7 at UCLA and my best guess is that it was probably a one-off OS that only ever ran on that machine.

There is one conflicting report on this. Most sources say UCLA ran GORDO/SEX on an SDS Sigma 7, but the book Netizens by Ronda Hauben claims it was an SDS 940 running GORDO/SEX at SRI, whereas UCLA ran an SDS Sigma 7 running GENIE. I strongly suspect this source is wrong; multiple other sources suggest that Netizens simply has GORDO and GENIE swapped between the two institutions. Even RFC-11 claims that “GORDO is a time-sharing system implemented on SDS Sigma 7”, and I've only seen it associated with UCLA.

Further reading

BBN Report No 1763. There is another edition with the preface that I quote above at this US Department of Defense website.

RFC-11 has a lot of information about GORDO from August 1969, before it was renamed to SEX. (Sorry, I can't get over that name change.)

This excellent page of IMP documents is from Dave Walden, who worked at BB&N at this time. Includes some original IMP source code, which apparently people have managed to get running on emulated IMPs!

How to follow this blog

You can subscribe to this blog's RSS feed or if you're on a federated ActivityPub social network like Mastodon or Pleroma you can search for the user “@365-rfcs@write.as” and follow it there.

About me

I'm Darius Kazemi. I'm a Mozilla Fellow and I do a lot of work on the decentralized web with both ActivityPub and the Dat Project.

by Darius Kazemi, Jan 6 2019

In 2019 I'm reading one RFC a day in chronological order starting from the very first one. More on this project here.

Syncing with BB&N

RFC-6 is from April 10th, 1969 and titled “Conversation with Bob Kahn”. This puts it one day after the reported publication of RFC-2, but two months before the publication of RFC-5. Yes, these things are chronologically out of order, which I was not expecting. I think it has to do with the fact that RFCs were formalized well after these first ones were written, so there's a lot of backdating and that kind of thing.

This RFC consists of notes on an informal conversation Steve Crocker had with BB&N's Bob Kahn (known as the co-inventor of TCP/IP).

The technical content

This is a very short document but the main point is that BB&N was willing to convert characters to 8-bit ASCII for transmission over the network.

Crocker also briefly summarizes the kinds of messages that can be sent between HOSTs and IMPs, and presumably this was important information to have confirmed by someone at BB&N (the makers of the IMP).

Analysis

The conversation happened “yesterday” which means it must have happened on April 9th, the date of publication of RFC-2. We know Kahn hadn't read any of the RFCs yet because this RFC closes with the note “I also summarized for Bob the contents of Network Notes 1, 2, and 3.” “Network notes” were what they were calling RFCs casually back then. RFC-3 was probably not published at this point but Crocker was its author and could have easily summarized his work in progress to Kahn.

Also interesting is how rapidly these early RFCs came out and how much emphasis was placed on a kind of dialogue, or at least on documenting the dialogue. As Elizabeth “Jake” Feinler would reminisce in RFC-2555, with the RFC system “a swath was instantly cut through miles of red tape and pedantic process. Was this radical for the times or what!!”

Further reading

ASCII was formalized as an information encoding standard in 1961, and in 1968 President Johnson signed a memorandum saying US federal computers needed to use ASCII to communicate. Since ARPANET was a US military network, I guess that made ASCII the only real choice for the job. From the memorandum:

All computers and related equipment configurations brought into the Federal Government inventory on and after July 1, 1969, must have the capability to use the Standard Code for Information Interchange and the formats prescribed by the magnetic tape and paper tape standards when these media are used.

Full memorandum text is here.

How to follow this blog

You can subscribe to this blog's RSS feed or if you're on a federated ActivityPub social network like Mastodon or Pleroma you can search for the user “@365-rfcs@write.as” and follow it there.

About me

I'm Darius Kazemi. I'm a Mozilla Fellow and I do a lot of work on the decentralized web with both ActivityPub and the Dat Project.

by Darius Kazemi, Jan 5 2019

In 2019 I'm reading one RFC a day in chronological order starting from the very first one. More on this project here.

DEL – the rich application layer that never was

RFC-5 was published June 2, 1969 and authored by Jeff Rulifson of Stanford Research Institute. It's a really interesting one to me. It's the first RFC that I would call extremely technical, in that it appears to be the distribution of a technical paper that was presented by Rulifson on February 13 of that year. The paper is about DEL, the Decode-Encode Language.

But what is DEL? Well, it makes a lot more sense in the context of RFC-1, which describes the problem that DEL is trying to solve. Since sending a message over the network was estimated to take about half a second, if you naively sent every keystroke to a remote computer you would need to wait at least a half second between every keystroke in an interactive console. So instead of sending every keystroke over the network in its own message:

A better solution would be to have the front-end of the remote program — that is the part scanning for <- and <CR> — be resident in our computer. In that case, only one five character message would be sent, i.e., H E L P <CR>, and the screen would be managed locally.

We propose to implement this solution by creating a language for console control. This language, current named DEL, would be used by subsystem designers to specify what components are needed in a terminal and how the terminal is to respond to inputs from its keyboard, Lincoln Wand, etc. Then, as a part of the initial protocol, the remote HOST would send to the local HOST, the source language text of the program which controls the console.

(Above quoted from RFC-1.) The idea is to not send data until the carriage return (<CR>, what we would call the Enter or Return key today) is pressed, and then send that data all at once to the remote computer. Basically it batches local interactions and only sends them over the network when it makes sense from a user experience standpoint to wait a second for a response from the remote computer.
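
In modern terms this is just local line buffering. Here's a minimal sketch of the behavior (mine, and emphatically not DEL itself, which was a whole language for describing consoles):

    # Minimal sketch of "don't send until <CR>": keystrokes are buffered and
    # handled locally; one network message goes out per completed line.
    CR = "\r"

    def make_console(send):
        buffer = []
        def keystroke(ch):
            if ch == CR:
                send("".join(buffer))   # one message for the whole line
                buffer.clear()
            else:
                buffer.append(ch)       # handled locally, no network traffic
        return keystroke

    sent = []
    key = make_console(sent.append)
    for ch in "HELP" + CR:
        key(ch)
    print(sent)   # ['HELP'] -- five keystrokes, one network message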

That's the first part of DEL. The other part of DEL, which was incredibly ambitious, was for sending essentially any kind of data, including interactive graphical user interfaces, and having it display cross-platform on basically any kind of computer.

Spoiler alert: DEL was never actually used in production. More on this later.

A Lincoln Wand, btw, is a kind of light pen and tablet input device invented by Larry Roberts at MIT Lincoln Labs. Kind of like a Wacom drawing tablet today. It was invented around the same time as the computer mouse and was one of the early “accepting x-y input from a user” devices. As I mentioned in my article about RFC-3, Roberts was known as the “father of ARPANET”, and died very recently on December 26, 2018. Lincoln Lab is still around and still doing US Department of Defense work.

The technical content

The document is laid out like a full paper, with an abstract and a foreword (misspelled “forward”). The foreword states:

The initial ARPA network working group met at SRI on October 25-26, 1968.

Which is good to know! It also means that it was almost exactly a calendar year between that first Network Working Group meeting and the first message sent over the network on October 29, 1969. It also means that it was about 6 months of NWG meetings until Steve Crocker wrote the first RFC. Also of note is that “It was generally agreed beforehand that the running of interactive programs across the network was the first problem that would be faced.”

The abstract does technically describe what DEL is for but not nearly as well as the part I quoted from RFC-1 above. It's your usual unhelpful scientific paper prose. The document as archived at the IETF is also extra difficult to understand because the block diagram illustrations have been converted to exceedingly unhelpful ASCII art during transcription.

DEL is paired with a subsystem called NST (Net Standard Translators) which basically translates any message from a sending computer that's not a meta command into a character set that the receiving computer can read. Recall that these computers could be using entirely different character encodings from one another, so the numerical value that represents the letter A could be different from one computer to the next.
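
Conceptually the translator is just a lookup table for each machine's character code. Here's a toy version; the “local” code points below are completely made up (real 1969 machines used things like EBCDIC and assorted 6-bit codes), so treat this as the shape of the idea only.

    # Toy character-set translation in the spirit of NST. The local code points
    # are invented; the point is only that each machine maps its own codes to
    # and from a shared network representation.
    LOCAL_TO_NETWORK = {0o01: ord("A"), 0o02: ord("B"), 0o03: ord("C")}
    NETWORK_TO_LOCAL = {v: k for k, v in LOCAL_TO_NETWORK.items()}

    def to_network(local_codes):
        return [LOCAL_TO_NETWORK[c] for c in local_codes]

    def to_local(network_codes):
        return [NETWORK_TO_LOCAL[c] for c in network_codes]

    print(to_network([0o03, 0o01, 0o02]))   # [67, 65, 66], i.e. "CAB" in the network code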

There is a whoooole bunch of very technical specification of DEL syntax that frankly I am not going to take the time to learn in a day.

In addition to bundling and translating strings of character data, there is also code for passing along vector data over the network for graphical displays! Really cool how they are normalizing all the vector data to values between -1 and 1 (with 0,0 at the center of the screen) so you can send things from a display of arbitrary resolution to a display of a totally different arbitrary resolution.
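
That normalization trick is easy to show in a few lines (my sketch, and I'm picking the usual convention of x to the right and y up, which may or may not match the paper):

    # Resolution-independent coordinates: map pixel positions into [-1, 1] with
    # (0, 0) at the center of the screen, then back out on any other display.
    def to_normalized(x, y, width, height):
        return (2 * x / width - 1, 1 - 2 * y / height)

    def to_pixels(nx, ny, width, height):
        return (round((nx + 1) * width / 2), round((1 - ny) * height / 2))

    # A point on a 1024x1024 display lands in the same relative spot on 640x480.
    n = to_normalized(256, 256, 1024, 1024)
    print(n, to_pixels(*n, 640, 480))   # (-0.5, 0.5) (160, 120)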

I also like this warning:

It is assumed that all arithmetic and bit operations take place in the mode and style of the machine running the code. Anyone who takes advantage of word lengths, two’s compliment arithmetic, etc. will eventually have problems.

Because the hardware architecture of these computers could be so vastly different in this era (as everyone was making things up as they went along), there could be fundamental mismatches with how things like long numbers are stored. The classic example of this (possibly not what they are referring to here, but the principle holds) is that some processors store numbers “big-endian” and some processors store numbers “little-endian”. This is an oversimplification, but basically: if you want to represent a number large enough that it takes up more than one chunk of memory, what order do you store the chunks in? Like if the number consists of A combined with B and it takes two chunks of memory, do you store it A B or B A? Some computers do it the first way, some do it the second way. This means that low-level math operations that work on one kind of computer will totally fail to work on the other kind of computer. I'm sure there were other incompatibilities between these machines that I'm not even thinking of, but that's the first one that comes to mind. Hence the warning. (More on “endianness” here.)
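
On any modern machine you can see the difference in a couple of lines (obviously not something a 1969 machine ran, but the concept is the same):

    # The same 32-bit number packed into bytes in two different orders.
    import struct

    n = 0x0A0B0C0D
    print(struct.pack(">I", n).hex())   # '0a0b0c0d'  big-endian: most significant byte first
    print(struct.pack("<I", n).hex())   # '0d0c0b0a'  little-endian: least significant byte first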

I mentioned that the paper doesn't do a great job setting things up in its abstract but it does manage to do something that a lot of modern computer science papers still don't do: it includes an example program at the end, which I imagine would be very very helpful to anyone trying to write their own translator in DEL.

In addition to the usual suspects we've seen so far, the distribution list at the end (people to whom this RFC was to be sent) includes one Mehmet Baray at UC Berkeley. This is the first mention of Berkeley in an RFC and the first mention of Baray. I can't find much information on him except that he's Turkish and got his Ph.D. in Electrical Engineering and Computer Science at UC Berkeley in 1970. This means he was probably a grad student assisting in these efforts at this time, but I'd love to know more.

Analysis

I'd never heard of DEL so I thought I'd look up what became of it, and at least according to Wikipedia it was never used. From the Wikipedia page for Jeff Rulifson:

He described the Decode-Encode Language (DEL), which was designed to allow remote use of NLS over ARPANET. Although never used, the idea was small “programs” would be down-loaded to enhance user interaction. This concept was fully developed in Sun Microsystems's Java programming language almost 30 years later, as applets.

“NLS” was Doug Engelbart's amazing interactive GUI system that I've mentioned here before. The comparison to applets is apt; if implemented, DEL would have allowed for very complex, highly interactive graphical applications to be run over the network. This whole idea for DEL was scrapped by Spring of 1969, when BBN delivered the specification for HOST-IMP interaction. Here's a quote from Steve Crocker himself in RFC-2555, a historical look at 30 years of RFCs published as an RFC itself in April 1999 (twenty years ago, aaaaa):

When BBN issued its Host-IMP specification in spring 1969, our freedom to wander over broad and grand topics ended. Before then, however, we tried to consider the most general designs and the most exciting applications. One thought that captured our imagination was the idea of downloading a small interpretative program at the beginning of a session. The downloaded program could then control the interactions and make efficient use of the narrow bandwidth between the user's local machine and the back-end system the user was interacting with. Jeff Rulifson at SRI was the prime mover of this line of thinking, and he took a crack at designing a Decode-Encode Language (DEL) [RFC 5]. Michel Elie, visiting at UCLA from France, worked on this idea further and published Proposal for a Network Interchange Language (NIL) [RFC 51]. The emergence of Java and ActiveX in the last few years finally brings those early ideas to fruition, and we're not done yet. I think we will continue to see striking advances in combining communication and computing.

Further reading

The 1966 paper on the Lincoln Wand.

RFC-2555 is, of course, a valuable source of historical information on RFCs.

How to follow this blog

You can subscribe to this blog's RSS feed or if you're on a federated ActivityPub social network like Mastodon or Pleroma you can search for the user “@365-rfcs@write.as” and follow it there.

About me

I'm Darius Kazemi. I'm a Mozilla Fellow and I do a lot of work on the decentralized web with both ActivityPub and the Dat Project.

by Darius Kazemi, Jan 4 2019

In 2019 I'm reading one RFC a day in chronological order starting from the very first one. More on this project here.

Planning the project

RFC-4 debuts a new author, Elmer B. Shapiro. This is the only RFC he'll ever write, and I couldn't find much info about him beyond a stub page at Stanford Research Institute's Artificial Intelligence Center saying he worked on Shakey the Robot. He had a couple of publications in 1966 and 1967 about using AI agents for spying. Seems like an oddball choice for RFC writing.

But in many ways this is an oddball RFC.

First of all it's backdated to March 24th, 1969, a month before the previous three RFCs. And it is simply a bunch of notes in outline form with no narrative structure or context.

The technical content

This is a dump of notes, an outline consisting of 14 top-level items, each broken down into a handful of sub-items. I've extracted the top-level items to give you a quick overview of what the document covers:

1  (n10) network checkout

2  Installation of communcation gear 8/1/69

3  Design and construct host-Imp interface 9/1/69

4  Imp installation 9/15/69

5  Debug host-Imp interface 10/1/69

6  Test messages between UCLA-SRI 10/15/69

7  Test messages between UCSB-SRI 11/15/69

8  Test messges between UTAH-SRI 12/15/69

9  Run simple TTY systems

10  Run simple typewriter systems

11  Run arbitrary terminals without local feedback
 
12  Run arbitrary terminals

13  Move files

14  Develop debugging techniques

Clearly this is a project plan. The idea was to have the first IMPs (Interface Message Processors, basically routers) in place by September 1969, and send the first messages between UCLA and SRI by October 15. History shows that they were basically on schedule, with the first ARPANET message eventually sent at 22:30 Pacific on October 29. By December 5th, UCLA, SRI, UCSB, and UTAH (University of Utah School of Computing) were all connected, ahead of this early schedule! By March 1970 the first east coast node would be added at BBN (where the IMP itself was developed).

For the test messages between the first two ARPANET nodes, UCLA and SRI (section 6) there is this diagram that I find really funny:

  6a  Network configuration

           SRI  |
                |
                |
                |
                |
                |
                |
                |
           UCLA |

There it is. The internet, visualized. For at least a few weeks in 1969.

Analysis

One interesting thing is that this plan was in place in March, which means RFCs 1 and 2 in April were written four months before any of the gear was actually installed. They hadn't even designed how host machines would physically connect to their IMPs at this point!

Further reading

This internet history timeline by Robert Hobbes Zakon is detailed yet also concise and cobbled together from a whole bunch of sources. I found his chart of ARPANET growth especially interesting.

How to follow this blog

You can subscribe to its RSS feed or if you're on a federated ActivityPub social network like Mastodon or Pleroma you can search for the user “@365-rfcs@write.as” and follow it there.

About me

I'm Darius Kazemi. I'm a Mozilla Fellow and I do a lot of work on the decentralized web with both ActivityPub and the Dat Project.

by Darius Kazemi, Jan 3 2019

In 2019 I'm reading one RFC a day in chronological order starting from the very first one. More on this project here.

Buckle up, it's about to get meta

RFC-3 is the first RFC that attempts to describe, well, what an RFC is. Steve Crocker, author of RFC 1, is back in the saddle, and we are instantly hit with his sardonic writing style. Titled “DOCUMENTATION CONVENTIONS”, it opens with the classic confusion of a loosely-defined interdisciplinary research project:

The Network Working Group seems to consist of Steve Carr of Utah, Jeff Rulifson and Bill Duvall at SRI, and Steve Crocker and Gerard Deloche at UCLA. Membership is not closed.

It “seems to consist”! I say this is classic confusion because remember, these are people at completely different institutions working with each other at great distances and by the way they are INVENTING THE INTERNET so it's not like they can all share a Trello board and Google Drive to stay in sync. In fact, Crocker himself has said “I remember having great fear that we would offend whoever the official protocol designers were” (quoted in Katie Hafner's Where the Wizards Stay Up Late). They just assumed that the official designers were sitting at BB&N or some other east coast defense contractor, when really their own group was about as close to anything official as existed at the time.

The technical content

This RFC is really short and non-technical so I'm just going to include the full text here and comment as we go.

The content of a NWG note may be any thought, suggestion, etc. related to the HOST software or other aspect of the network. Notes are encouraged to be timely rather than polished. Philosophical positions without examples or other specifics, specific suggestions or implementation techniques without introductory or background explication, and explicit questions without any attempted answers are all acceptable.

So right away the content scope is left extremely broad as long as it's related to the network they are trying to build. These “NWG notes” (which will eventually be coded as RFCs) are basically like a very slow email listserv uhhh two years before the invention of email. This basically says “anything goes but keep it on topic.”

Also they recommend notes be “timely rather than polished”. The reason these notes exist is to synchronize ideas early and iterate as rapidly as possible across these great geographical and institutional distances.

The minimum length for a NWG note is one sentence.

Can you imagine typing a single sentence and having it duplicated on paper and manually mailed to universities, defense contractors, and military bases, where everyone basically has to read it? Someone really ought to invent a system for doing this remotely via computer...

These standards (or lack of them) are stated explicitly for two reasons. First, there is a tendency to view a written statement as ipso facto authoritative, and we hope to promote the exchange and discussion of considerably less than authoritative ideas. Second, there is a natural hesitancy to publish something unpolished, and we hope to ease this inhibition.

More Crocker wisecrack asides. I love this guy. And they are basically saying “please don't be a perfectionist about early ideas, you will stifle the development of the project if you do.” These are very wise rules to abide by when you are embarking on any creative endeavor, technical or otherwise. And in hindsight it seemed to work pretty well for them.

Every NWG note should bear the following information:

        1.  "Network Working Group"
            "Request for Comments:" x
            where x is a serial number.
            Serial numbers are assigned by Bill Duvall at SRI

        2.  Author and affiliation

        3.  Date

        4.  Title.  The title need not be unique.

Just laying out what should be in the header for these things. This is the exact header format used until RFC-5742 in December 2009 when they decided to organize RFCs by “streams” based on which organizations were publishing a given RFC. At that point they retired “Network Working Group” and started labeling things “Internet Engineering Task Force”, “Internet Architecture Board”, “Independent”, etc.

Also I laughed at “the title need not be unique”, which I guess was immediately demonstrated by RFC-1 and RFC-2 having the same title.

One copy only will be sent from the author's site to"

        1.  Bob Kahn, BB&N
        2.  Larry Roberts, ARPA
        3.  Steve Carr, UCLA
        4.  Jeff Rulifson, UTAH
        5.  Ron Stoughton, UCSB
        6.  Steve Crocker, UCLA

Reproduction if desired may be handled locally.

Okay so these RFCs are being sent to 6 facilities. BB&N, the New England defense contractor. ARPA, the government funding body. And then three western US research universities. It's weird to me that Stanford Research Institute isn't on this list?

Also note the trailing " in the first line. RFCs are not corrected after they are published, though I don't think this convention was formalized until later. Modern RFCs go through an extensive draft process before publishing because these things are meant to last forever unchanged.

And finally, sadly, Larry Roberts, second on that list, known as a “father of ARPANET”, died very recently on December 26, 2018. My internet went down as I attempted to look this up, which I choose to interpret as the network's equivalent of a moment of silence.

How to follow this blog

You can subscribe to its RSS feed or if you're on a federated ActivityPub social network like Mastodon or Pleroma you can search for the user “@365-rfcs@write.as” and follow it there.

About me

I'm Darius Kazemi. I'm a Mozilla Fellow and I do a lot of work on the decentralized web with both ActivityPub and the Dat Project.