365 RFCs

Commenting on one RFC a day in honor of the 50th anniversary of the first RFC.

by Darius Kazemi, June 20 2019

In 2019 I'm reading one RFC a day in chronological order starting from the very first one. More on this project here. There is a table of contents for all my RFC posts.

DTP

RFC-171 is titled “The Data Transfer Protocol”. It's authored by Abhay Bhushan, Bob Braden, Will Crowther, Alex McKenzie, Eric Harslem, John Heafner, John Melvin, Dick Watson, Bob Sundberg, and Jim White. It's dated June 23rd, 1971.

The technical content

RFCs 171 and 172 form a pair. This RFC, 171, describes something called the “Data Transfer Protocol”, which is a low-level protocol for transferring data. It is essentially the file transfer protocol proposed in RFC-114 but without the file system specific stuff, so it can be used for any sort of data transfer, not just files.

RFC-172 will be based on this document and will add the file system specific stuff on top of it.

DTP offers three operating modes.

  1. A bit stream that simply opens a connection, sends bits, and then closes the connection.
  2. A “block” transaction that begins with a transaction type byte and ends with a special series of character codes denoting the end of a transaction.
  3. A count transaction that begins with a header containing the length of the data in the transaction. This is exactly the mode described in RFC-114 under the “Transactions” section. (A rough sketch of this kind of length-prefixed framing follows this list.)
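
Here's a minimal sketch of that length-prefixed framing in Python. It's purely illustrative: the 4-byte big-endian length header is an assumption I'm making for the example, not the byte layout RFC-171 actually specifies.

    # Illustrative length-prefixed ("count") framing, in the spirit of DTP's
    # third mode. The 4-byte big-endian length header is an assumption for
    # this sketch, not the byte layout defined in RFC-171.
    import struct

    def pack_count_transaction(data: bytes) -> bytes:
        """Prefix the data with its length so the receiver knows where it ends."""
        return struct.pack(">I", len(data)) + data

    def unpack_count_transaction(stream: bytes) -> tuple[bytes, bytes]:
        """Split one transaction off the front of a stream; return (data, rest)."""
        (length,) = struct.unpack(">I", stream[:4])
        return stream[4:4 + length], stream[4 + length:]

    frame = pack_count_transaction(b"hello, network")
    data, rest = unpack_count_transaction(frame)
    assert data == b"hello, network" and rest == b""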

Analysis

This is a weird proposal that seems very design-by-committee, and each of the operating modes seems like it needs to be its own protocol. It really looks like half the RFC is just an abstraction of the FTP proposal in RFC-114, and then it has operating modes 1 and 2 (bit stream and block mode) kind of bolted on to it.

I suppose one reason this could be good is that there are error codes shared among all three operating modes.

How to follow this blog

You can subscribe to this blog's RSS feed or if you're on a federated ActivityPub social network like Mastodon or Pleroma you can search for the user “@365-rfcs@write.as” and follow it there.

About me

I'm Darius Kazemi. I'm an independent technologist and artist. I do a lot of work on the decentralized web with ActivityPub, including a Node.js reference implementation, an RSS-to-ActivityPub converter, and a fork of Mastodon, called Hometown. You can support my work via my Patreon.

by Darius Kazemi, June 19 2019

In 2019 I'm reading one RFC a day in chronological order starting from the very first one. More on this project here. There is a table of contents for all my RFC posts.

Another list

RFC-170 is titled “RFC List by Number” and authored by the SRI Network Information Center. It's dated June 1st, 1971.

The technical content

This RFC is a table of all RFCs to date, sorted by RFC number. For each entry it contains the first author, title, date, NIC number, and RFC number.

How to follow this blog

You can subscribe to this blog's RSS feed or if you're on a federated ActivityPub social network like Mastodon or Pleroma you can search for the user “@365-rfcs@write.as” and follow it there.

About me

I'm Darius Kazemi. I'm an independent technologist and artist. I do a lot of work on the decentralized web with ActivityPub, including a Node.js reference implementation, an RSS-to-ActivityPub converter, and a fork of Mastodon, called Hometown. You can support my work via my Patreon.

by Darius Kazemi, June 18 2019

In 2019 I'm reading one RFC a day in chronological order starting from the very first one. More on this project here. There is a table of contents for all my RFC posts.

Other networks

RFC-169 is titled “Computer Networks” and authored by Steve Crocker of UCLA. It's dated May 27th, 1971.

The technical content

This RFC is information about an upcoming IEEE Computer Society workshop on the topic of computer networks. Crocker is a co-chair of the workshop, and it's

intended especially for those manufactureers, users and researchers who have just entered, or are about to enter, the network field.  Presentations are invited on all aspects of computer networks, particularly including user communities, inter-node protocols, terminal and switching equipments, and communications technology.

Specifically it draws attention to the fact that “the number of networks has grown” recently. In 1971 there are computer networks other than ARPANET, such as NPL and MERIT, which both predate the ARPANET by a few years.

Topics covered at the workshop will include:

  • overview of existing networks
  • network services like file transfer and remote job entry
  • hardware and software design considerations
  • network management

The workshop is off-the-record and limited to 65 invited participants to maximize productive discussion.

Analysis

There is a cool IEEE letterhead you can see on page 3 of the PDF scan.

How to follow this blog

You can subscribe to this blog's RSS feed or if you're on a federated ActivityPub social network like Mastodon or Pleroma you can search for the user “@365-rfcs@write.as” and follow it there.

About me

I'm Darius Kazemi. I'm an independent technologist and artist. I do a lot of work on the decentralized web with ActivityPub, including a Node.js reference implementation, an RSS-to-ActivityPub converter, and a fork of Mastodon, called Hometown. You can support my work via my Patreon.

by Darius Kazemi, June 17 2019

In 2019 I'm reading one RFC a day in chronological order starting from the very first one. More on this project here. There is a table of contents for all my RFC posts.

Air mail

RFC-168 is titled “Arpa Network Mailing Lists” and dated May 26th, 1971. It's authored by Jeanette B. North of the SRI Network Information Center.

The technical content

This RFC discusses how RFCs are to be mailed to the various ARPANET sites and related organizations.

If you wanted to send an RFC out to the Network Working Group, you needed to send a copy by the postal service to some 30 different organizations around the United States. These are the “usual suspects” of the NWG we have heard so much about: SRI, BBN, MIT, UCLA, UCSB, RAND, etc. These are all the “site” participants, aka the organizations that are actively connected to the ARPANET.

One of the organizations on that list is the NIC, which then makes further copies of the document and mails them to a list of ten other organizations, such as MERIT, EDUCOM, SUNY, and so on. These are all organizations that are interested in being on the ARPANET but are not actively connected.

There is a third list of “NIC Station Agents”. Some of these organizations overlap with the ones above, but the station agents themselves are usually information science professionals tasked with maintaining the libraries of documents at the various sites. So for example, while Dr. Lawrence Roberts at ARPA is a recipient of all RFC documents, Margaret Goering at ARPA also gets a copy. Goering's job is to keep the library up to date, and she receives not just the RFCs themselves but also a boatload of related reference material that she is tasked with making available as needed to people at ARPA.

All RFC correspondence should be sent via Air Mail.

Further reading

The mention of Air Mail reminds me that 1971 was the waning days of “air mail” as a thing separate from other mail. Starting in 1975 the United States Post Office simply used airplanes whenever convenient, making them no different from trucks or other forms of delivery.

How to follow this blog

You can subscribe to this blog's RSS feed or if you're on a federated ActivityPub social network like Mastodon or Pleroma you can search for the user “@365-rfcs@write.as” and follow it there.

About me

I'm Darius Kazemi. I'm an independent technologist and artist. I do a lot of work on the decentralized web with ActivityPub, including a Node.js reference implementation, an RSS-to-ActivityPub converter, and a fork of Mastodon, called Hometown. You can support my work via my Patreon.

by Darius Kazemi, June 16 2019

In 2019 I'm reading one RFC a day in chronological order starting from the very first one. More on this project here. There is a table of contents for all my RFC posts.

Sockets, reconsidered

RFC-167 is titled “Socket Conventions Reconsidered” and authored by Abhay Bhushan of MIT Project MAC, Bob Metcalfe of Harvard, and Joel Winett of MIT Lincoln Laboratory. It's dated May 24th, 1971.

The technical content

The problem as laid out in this RFC is that there are two competing considerations for socket numbering as it currently exists:

  • sockets should be limited to 16 bits for smaller hosts (like TIPs)
  • sockets should be 32 bits and should include lots of metadata for “accounting and access control” (namely figuring out who is using what service so that sites can charge money to their users)

The authors suggest doing neither of these and instead waiting for an overhaul of the Network Control Program (NCP) protocol.

According to the authors, “The socket number, as it is used in the current NCP Protocol is a small number with a big function.” They say there will probably need to be “a substantially more powerful identification mechanism” in order to provide the kind of features the Network demands while meeting both criteria above: able to account for who is using what services, but also able to be processed by less powerful systems.

One of the main issues is that they want socket allocation to be both unique and repeatable: that is, if you connect one of your processes to a process on a remote server via a socket, they would like that socket to at least remain the same for “reconnection on a regular basis”, though they don't say how regular exactly. The authors say that this means socket allocation should be tied to access controls somehow, aka, sockets should be reservable by individual users.

A “bad way” is the naive solution: keep a list of sockets, their assigned users, and how long they have the socket reserved for. An alternative strategy they recommend is partitioning sockets at a host among its network users. So for example, maybe the first time a user connects to a host at UCLA, they are given a range of sockets that are “theirs” to use as they see fit.
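
As a rough illustration of that partitioning idea, here's a sketch in Python. It's my own toy model, not a mechanism RFC-167 specifies; the block size and starting number are made-up parameters.

    # A toy sketch of per-user socket partitioning at a host. This is my own
    # illustration of the idea, not a mechanism RFC-167 specifies; the block
    # size and starting number are made-up parameters.
    BLOCK_SIZE = 256          # how many socket numbers each user gets (assumed)
    FIRST_BLOCK = 1024        # leave low numbers for well-known services (assumed)

    assigned_blocks: dict[str, range] = {}

    def socket_range_for(user: str) -> range:
        """Return the user's reserved block, allocating one on first contact."""
        if user not in assigned_blocks:
            start = FIRST_BLOCK + len(assigned_blocks) * BLOCK_SIZE
            assigned_blocks[user] = range(start, start + BLOCK_SIZE)
        return assigned_blocks[user]

    # The same user always gets the same block back, so "reconnection on a
    # regular basis" can reuse a predictable socket number.
    assert socket_range_for("metcalfe") == socket_range_for("metcalfe")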

Further reading

“A small number with a big function” is a problem that persists in one form or another on the internet to this day. This blog post is a history of the routing protocol BGP but it covers the history of IP address and routing table growth in detail. These days, an IP address plus a port number plays the role that a socket number (which included the site identifier, analogous to a modern IP address) did back in the ARPANET days.

How to follow this blog

You can subscribe to this blog's RSS feed or if you're on a federated ActivityPub social network like Mastodon or Pleroma you can search for the user “@365-rfcs@write.as” and follow it there.

About me

I'm Darius Kazemi. I'm an independent technologist and artist. I do a lot of work on the decentralized web with ActivityPub, including a Node.js reference implementation, an RSS-to-ActivityPub converter, and a fork of Mastodon, called Hometown. You can support my work via my Patreon.

by Darius Kazemi, June 15 2019

In 2019 I'm reading one RFC a day in chronological order starting from the very first one. More on this project here. There is a table of contents for all my RFC posts.

FML

RFC-166 is titled “Data Reconfiguration Service — an Implementation Specification”. It's authored by the Data Reconfiguration Committee, consisting of R.H. Anderson, V.G. Cerf, E. Harslem, J.F. Heafner, J. Madden, R.M. Metcalfe, A. Shoshani, J.E. White, D.C.M. Wood, and dated May 25th, 1971, one week after SJCC.

The technical content

This RFC is a spec for the Data Reconfiguration Service (see RFC-83 and RFC-138 for background). In brief, it's a service that anyone on the ARPANET can access, and it lets you specify, in a custom language, a list of rules you want it to enact on a data stream. For example, you could put in rules that say “translate the first 50 bytes of a message to ASCII and insert the letter 'e' in between each letter of the message”. The idea is that you could easily translate from one data format to another as needed, but you wouldn't need to know a specific programming language to do it, and by using the network you wouldn't even need to have the software installed yourself. It is, as I mention in my article on RFC-138, basically what we know as a web service or a web API today.
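
As a toy version of that example rule, here's what the transformation itself might look like in Python. This is purely my own sketch of the idea; the real rules would be written in the Form Machine Language described below, not Python.

    # A toy version of the example rule above: my own sketch, not Form
    # Machine Language. It treats the first 50 bytes of a message as the
    # region to transform and interleaves the letter "e" between characters.
    def toy_reconfigure(message: bytes) -> str:
        head = message[:50].decode("ascii", errors="replace")  # "translate to ASCII"
        return "e".join(head)                                  # insert 'e' between letters

    print(toy_reconfigure(b"HELLO"))   # prints "HeEeLeLeO"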

The way the DRS works is: the user connects to it through a “control connection” (CC). This is where the user specifies the rules for transforming data that they would like to run on a data stream. Then the user hooks the program that needs data translation services into the “user connection” (UC). And lastly, there is a “server connection” (SC) that connects the DRS to the server process at the other end, as shown in Figure 1.

+------------+              +------+          +---------+
| ORIGINATING|     CC       | DRS  |    SC    | SERVER  |
| USER       |--------------|      |----------| PROCESS |
+------------+     ^        +------+     ^    +---------+
                   |           /         |
                   |        UC/ <-----\  |
                   |         /         \ |
                   |   +-----------+    \|
TELNET ------------+   | USER      |     +-- Simplex or Duplex
Protocol               | PROCESS   |         Connections
Connection             +-----------+

       Figure 1.  DRS Network Connections

The control connection uses the TELNET protocol for communication, so the idea is you TELNET from your local terminal directly to the DRS, which is always listening on a well-known socket number at its site. You then give a six-character user ID, and then there are some simple commands for entering a form in the Form Machine Language (yes, FML) and then specifying which sockets you'd like to connect to from your actual program that you plan to hook up to it.

In addition to the above situation where a user wants to connect a process they are running to the DRS for its services, there is also a more “interactive” REPL-style mode. This is where you pretty much just log in via the control connection and hook your user connection back up to your own Telnet process!

The remainder of the document is about the Form Machine Language itself and also discusses the mechanics of input/output streams (via input and output pointers). The language supports arithmetic, translation between different literal types (binary, octal, hex, EBCDIC, ASCII), truncation, deletion, padding, insertion of fields, parsing of variable length records, string length computation, and more. It is very similar in its core capabilities and purpose to something like sed, a UNIX “stream editor” which was developed just a few years later, though Form Machine Language doesn't support regular expressions.

Analysis

More so than other RFC specs, this one seems really well written. I could imagine implementing my own DRS service in a language of my choice using this spec.

How to follow this blog

You can subscribe to this blog's RSS feed or if you're on a federated ActivityPub social network like Mastodon or Pleroma you can search for the user “@365-rfcs@write.as” and follow it there.

About me

I'm Darius Kazemi. I'm an independent technologist and artist. I do a lot of work on the decentralized web with ActivityPub, including a Node.js reference implementation, an RSS-to-ActivityPub converter, and a fork of Mastodon, called Hometown. You can support my work via my Patreon.

by Darius Kazemi, June 14 2019

In 2019 I'm reading one RFC a day in chronological order starting from the very first one. More on this project here. There is a table of contents for all my RFC posts.

A corrected ICP

RFC-165 (PDF) is titled “A Proffered Official Initial Connection Protocol”. It's authored by Jon Postel and dated May 25th, 1971.

The technical content

In RFC-164 it was promised that Postel et al would “clean up the ICP [Initial Connection Protocol] specification”, and this is their deliverable for that, about a week after the SJCC meeting.

This isn't too different from previous ICP specifications, but it notably addresses the issue brought up in RFC-161 by fully incorporating the ICP connection sequence suggested therein, and by suggesting that all sites queue incoming ICP requests.

How to follow this blog

You can subscribe to this blog's RSS feed or if you're on a federated ActivityPub social network like Mastodon or Pleroma you can search for the user “@365-rfcs@write.as” and follow it there.

About me

I'm Darius Kazemi. I'm an independent technologist and artist. I do a lot of work on the decentralized web with ActivityPub, including a Node.js reference implementation, an RSS-to-ActivityPub converter, and a fork of Mastodon, called Hometown. You can support my work via my Patreon.

by Darius Kazemi, June 13 2019

In 2019 I'm reading one RFC a day in chronological order starting from the very first one. More on this project here. There is a table of contents for all my RFC posts.

Notes from SJCC

RFC-164 is titled “Minutes of Network Working Group Meeting 5/16 through 5/19/71”. It's authored by John Heafner of RAND and dated May 16th through May 19th (the first RFC I've seen with a date range, though the date is implied in the title of the document rather than in the usual header location).

A business note: my ten-month Mozilla Fellowship has supported this project up until now, but that has ended. If you like what I'm doing here, please consider supporting me via my Patreon.

The technical content

Whereas RFC-163 was a brief summary of a portion of the Network Working Group meetings at the Spring Joint Computer Conference, this RFC is the full report, weighing in at 32 pages.

About half of the document contains updates from the various Host sites and related institutions. This is in line with the request of Steve Crocker in RFC-116 a month prior. The updates are in bulleted list form and each site reports:

  • how far along it is in implementing a Network Control Program that follows the spec laid out in RFC-107 along with estimated completion dates if they aren't already done
  • basic hardware availability
  • services offered
  • protocols supported (NETRJS, TELNET, etc)
  • any relevant notes related to IMP connections
  • other notes

Most sites are either already in compliance with RFC-107, are about a month out from achieving compliance, or are not online yet and make no promises at all.

The MIT Project MAC site reports that they've been playing around with a non-FTP form of ASCII file transfer.

There is a “resource notebook” that “has been compiled and distributed”, containing special information about 12 of the 19 total ARPANET sites. This is the kind of physical handbook you'd want if you needed to know, for example, the login procedures for a site you've never connected to before.

There are updates from various governmental organizations outside of the U.S. military, including Terry Shepard of the Canadian Computer/Communications Task Force, who attended the February NWG meeting described in RFC-101. Apparently one of their computer network worries is related to “the size of the country in relation to the sparseness of the population”. A UK government representative, Eric Foxley, is also present. According to his personal website he was faculty at Nottingham University and also Secretary of the “Inter University Committee on Computing” which dealt with the United Kingdom Computer Board, their government funding source. Rather tantalizingly, his update notes that the UK post office “has plans for a digital network in the distant future”, perhaps related to the Post Office Telecommunications department that provided a kind of pre-Internet ISP in the form of the PSS network in the late 1970s.

There is also an update from EDUCOM, which was a big association for the use of computers in education founded in 1964. EDUCOM is known today as Educause after a 1998 merger. At one point J.C.R. Licklider was on their board of trustees! They would eventually run EDUNET, a network resource accessible by members of the organization. Notably:

They have conducted a survey of 70 universities, polled about their interests in the ARPA network: 60 of 70 are interested, 14 have money and are ready to become sites.

The Network Information Center reports that they plan to have full online document access by the summer of 1971, with the ultimate goal being a file transfer protocol based system that allows remote text editing of documents. The offline (mail-based) distribution system will continue to operate alongside the online system.

The plan is for RFC numbers to “eventually go away” in favor of NIC numbers. (This obviously did not happen as the RFC numbering continues 50 years later.)

Telnet is discussed at the meeting and various “issues were raised but not resolved”. There is discussion of RFC-158, which was written essentially at the same time that these meeting notes were taken but was assigned a lower RFC number as part of the glut of documents that came out of the NWG meeting.

Both the RFC-114 file transfer protocol and the RFC-122 Simple-Minded File System are discussed. The latter is described as “an operational program; not a proposal”. Apparently a bunch of host sites are already using it or soon will be.

Socket structure is going to be further discussed, particularly the issue of whether socket identifiers should be 16 bits or 32 bits, as there is some worry that the TIP/IMP will not be able to handle 32-bit sockets.

The Initial Connection Protocol, with its various race conditions mentioned in RFCs 123, 127, and 151, now has a cleanup committee assigned to it.

As promised in RFC-140, time is set aside for people to talk about the similarities between operating system protocols and network protocols:

An analogy was drawn on the basis that the ARPA Network with its hosts and protocols is in a sense an "operating system" and that a study of what makes a good operating system might help define what makes a good ARPA Network.

There is a presentation by Art J. Bernstein of SUNY Stony Brook on the features of a flexible operating system, namely:

  1. a flexible file structure involving directory trees, active file tables, etc.
  2. a process hierarchy involving a “father-son relationship” where a father process can spawn a son process
  3. a system for interprocess communication involving “channels, status return, and software interrupts” (a small sketch of items 2 and 3 in modern terms follows this list)
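
To make the “father-son” and channel ideas concrete, here's a minimal POSIX-style sketch in Python. It's my own illustration of the concepts, not anything from the RFC or from MULTICS.

    # A minimal POSIX-style sketch (via Python) of items 2 and 3 above:
    # a "father" process spawning a "son" and talking to it over a channel.
    # This is my own illustration, not anything from the RFC or MULTICS.
    import os

    read_end, write_end = os.pipe()     # a simple interprocess "channel"

    pid = os.fork()                     # the father spawns a son process
    if pid == 0:
        # son: send a message back over the channel, then exit
        os.close(read_end)
        os.write(write_end, b"hello from the son process")
        os._exit(7)
    else:
        # father: read the message, then collect the son's exit status
        os.close(write_end)
        print(os.read(read_end, 1024).decode())
        _, status = os.waitpid(pid, 0)            # "status return"
        print("son exited with code", os.WEXITSTATUS(status))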

The note-taker notes that these are all primary features of MULTICS.

Then Bob Metcalfe offers a retort of sorts, specifically that while tree structures make sense for operating systems, the network is a directed graph and that solutions that apply to tree structures don't generalize to directed graphs. (A tree structure is like a corporate organization chart where every employee has one person they report to, up to the highest levels of the organization which is the “root” of the tree. A directed graph would be kind of like if you worked at a company and it was possible for you to be your manager's manager... much more wild and freeform.)

The Data Reconfiguration Service committee reports that they have solved the remaining technical issues with RFC-138, and a couple of sites have committed to implement a data reconfiguration service and make it available to the network. However, it seems like RAND is the only site that has really bought in at this point, which makes sense since they are the originators of the proposal.

Next the data management committee presents information related to RFC-144 on sharing data over the network. Dick Winter of the Computer Corporation of America notes that with multiple data computers, data sharing

becomes decentralized.  All data computers have identical hardware and software.  Their objective is to dispose and restructure data throughout the Net to optimize its use, i.e., relocate it close to where it is used most heavily.  For small files of wide interest multiple copies can be maintained.

This is extremely similar to the modern concept of a content delivery network, where internet content is replicated in caches distributed around the world so that a user in Tokyo accessing, say, The New York Times, can get their data from a server in Tokyo instead of a server in New York.

Mitre attempted to discuss their data management system with the NWG but it got derailed into a discussion of general principles of data management.

The TIP, or Terminal IMP, is discussed at length. This is essentially an upgrade of the IMP that makes it smarter about things like protocols and line-vs-character communication and so on. Future sites will be installing TIPs instead of IMPs. A site that wants a TIP will need to pay for it. A high end estimate is $100,000, or about $600,000 in 2019 dollars. There is also a lease program offered at $40,000/year ($250k/year in 2019) over three years with a two-year minimum.

Larry Roberts offers some general comments looking at the future of the ARPANET. He notes that:

  • right now the reason to get on the ARPANET is to access services that you don't normally have access to at your own site, though in the future you might connect just to boost your CPU resource (something we do now with “cloud” computing)
  • 1972 is going to be a big year for international sites connecting to ARPANET, with plans for England, Mexico, France, Israel, Australia, Canada, Japan, “etc.”
  • at this point they are already looking at transitioning ARPANET to a civilian organization; AT&T is brought up as a possible organization (though we know now that this will not actually happen)
  • the sites have been “extremely poor and slow” in their progress on NCP development and it's unacceptable that they are still on essentially their first protocol iteration
  • civilian organizations can purchase a TIP but only through a three-year lease and ARPA will be very picky about who they bring on
  • to use the trillion-bit store, they are charging about “10^-4 cents/bit”, which taken literally means storage cost users $0.000001 per bit, or about $1 per megabit (roughly $8 per megabyte)

Lastly there is concern about the size of the Network Working Group, though there are no definitive takeaways from this discussion.

Analysis

Broadly speaking, this is a very exciting time for the network. Many sites now support many protocols and are actively sharing computing resources, computer time, and knowledge with one another.

For example, not every site is making their own NCP at this point. SRI-ARC is running TENEX, which is a BBN-developed operating system, so they are just waiting for BBN to release the new version of the NCP at which point they'll install it.

I believe Peggy Karp of Mitre is the only woman in attendance at this meeting.

Further reading

Computer Corporation of America checks in and reports that they offer a system that has something called “laser memory”. This must be a reference to the UNICON 690 Laser Mass Memory System by the Precision Instrument Company. This laser memory system powered the trillion-bit store, essentially a giant network hard drive.

According to an ARPA survey, the laser memory system is a permanent physical storage medium and works by using a laser to punch holes of 3 microns in diameter into polyester sheets coated with rhodium metal. For reference, a human hair is about 50 microns wide, so you could fit about two ASCII characters into a space the width of a human hair. It's unclear to me whether it's truly a read-write system, or whether it simply simulates being read-write by considering deleted data to be unused space on the physical strips. I did find a 1972 survey that refers to the UNICON 690-212 as a “non-erasable” storage medium.

According to this 1972 ARPA report the plan was to use the laser memory system as a “tertiary store”, so perhaps for backups, with conventional disks providing the read/write mass storage? There is also discussion of timing and access with the laser memory system. Because it involves polyester strips and servos and lasers and all that stuff, the system needs “between five and ten seconds” to make a file ready for reading or writing.

When querying a database for records, the system is highly dependent on how physically close data records are to one another on these strips: if a query result consists of 10 records that are near each other, it can return the data in less than half a second. If the 10 records are physically scattered on the storage medium, it can take up to 5 seconds.

This 1975 paper from the Computer Corporation of America describes the trillion-bit store in more detail.

Eric Foxley, the UK representative, has a cool online computer museum page with lots of photos.

You can read a history of EDUCOM on their archives. I also stumbled on this neat 1996 interview with Vint Cerf in their archives.

How to follow this blog

You can subscribe to this blog's RSS feed or if you're on a federated ActivityPub social network like Mastodon or Pleroma you can search for the user “@365-rfcs@write.as” and follow it there.

About me

I'm Darius Kazemi. I'm an independent technologist and artist. I do a lot of work on the decentralized web with ActivityPub, including a Node.js reference implementation, an RSS-to-ActivityPub converter, and a fork of Mastodon, called Hometown. You can support my work via my Patreon.

by Darius Kazemi, June 12 2019

In 2019 I'm reading one RFC a day in chronological order starting from the very first one. More on this project here. There is a table of contents for all my RFC posts.

Notes from SJCC

RFC-163 is titled “Data Transfer Protocols”. It's authored by Vint Cerf of UCLA and dated May 19th, 1971.

The technical content

Cerf is recapping some information discussed in Atlantic City at the long-awaited 1971 Spring Joint Computer Conference. His main points are: they need a formal, agreed-upon file transfer protocol, and they need to figure out how to interpret the file data that is transferred.

Cerf posits a “Data Manager”: a process that is always running on each Host, is in charge of sending and receiving files, and operates on a well-known, fixed socket number. But you should also be able to send and receive outside of this Data Manager. He discusses “transient files” that don't need to have names, which are used simply for moving data back and forth between active processes rather than for storage.
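
Here's a loose, modern-terms sketch of that Data Manager idea in Python: a process that sits listening on a fixed, well-known port and dumps whatever it receives into an unnamed, “transient” buffer. This is my own illustration of the concept, not Cerf's spec, and the port number is made up for the example.

    # A loose, modern-terms sketch of the "Data Manager" idea: a process
    # always listening on a well-known, fixed socket, receiving data into
    # a nameless "transient" buffer. Illustrative only; the port is made up.
    import socket
    import tempfile

    WELL_KNOWN_PORT = 4747   # hypothetical, not from the RFC

    def run_data_manager() -> None:
        with socket.create_server(("", WELL_KNOWN_PORT)) as server:
            conn, _addr = server.accept()
            # SpooledTemporaryFile has no name on disk until it grows large,
            # which makes it a rough stand-in for a nameless "transient file".
            with conn, tempfile.SpooledTemporaryFile() as transient:
                while chunk := conn.recv(4096):
                    transient.write(chunk)
                transient.seek(0)
                print("received", len(transient.read()), "bytes into a transient file")

    # run_data_manager()   # would block, waiting for a single incoming transfer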

Analysis

This is kind of a mess but it states at the outset that it's basically informal notes from a meeting of the Data Transfer Committee (which Cerf suggests be renamed the Data Transmission Committee). I'm gathering from the quality of these notes that it was not a very productive meeting...

How to follow this blog

You can subscribe to this blog's RSS feed or if you're on a federated ActivityPub social network like Mastodon or Pleroma you can search for the user “@365-rfcs@write.as” and follow it there.

About me

I'm Darius Kazemi. I'm an independent technologist and artist. I do a lot of work on the decentralized web with ActivityPub, including a Node.js reference implementation, an RSS-to-ActivityPub converter, and a fork of Mastodon, called Hometown. You can support my work via my Patreon.

by Darius Kazemi, June 11 2019

In 2019 I'm reading one RFC a day in chronological order starting from the very first one. More on this project here. There is a table of contents for all my RFC posts.

NETBUGGER3

RFC-162 is titled “NETBUGGER3”. It's authored by Mark Kampe of the UCLA Network Measurement Center and dated May 22nd, 1971.

The technical content

This brief RFC describes NETBUGGER3: a third-level program that is itself designed to help with debugging and simulating third-level programs and protocols. It can also debug (but not simulate) second-level protocols. The impetus was that the UCLA Network Measurement Center wanted to write a program to connect to a program running at UCLA's Campus Computing Network, but the CCN's server was not yet implemented and wouldn't be for a few months. The NMC folks still wanted to get work done, hence Kampe's “third level debugger-simulator”.

So NETBUGGER3 basically pretends to be the remote server running the program you want. It can also act as an intermediary between you and another remote server, acting as a passthrough that examines and can even edit data. In that way, it's not so different from using the developer tools in a web browser.
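
Here's a minimal sketch of that passthrough idea in modern terms, in Python: a tiny TCP relay that sits between a client and a remote service, logging (and, if you wanted, editing) the bytes flowing in each direction. It's my own illustration, not NETBUGGER3 itself, and the host and port values are made up for the example.

    # A minimal modern-terms sketch of the passthrough idea: relay one client
    # connection to a remote server, logging the data in both directions.
    # My own illustration, not NETBUGGER3; the host and ports are made up.
    import socket
    import threading

    LISTEN_PORT = 9000                     # hypothetical
    REMOTE = ("remote.example.org", 7)     # hypothetical target service

    def relay(src: socket.socket, dst: socket.socket, label: str) -> None:
        while chunk := src.recv(4096):
            print(f"{label}: {chunk!r}")   # examine the traffic...
            dst.sendall(chunk)             # ...then pass it through (or edit it here)

    def run_passthrough() -> None:
        with socket.create_server(("", LISTEN_PORT)) as server:
            client, _ = server.accept()
            upstream = socket.create_connection(REMOTE)
            with client, upstream:
                t = threading.Thread(target=relay,
                                     args=(upstream, client, "server->client"))
                t.start()
                relay(client, upstream, "client->server")
                upstream.shutdown(socket.SHUT_WR)   # tell the server we're done sending
                t.join()

    # run_passthrough()   # would block, proxying a single connection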

One convenient thing you can do is log in to UCLA NMC via Telnet, start their copy of NETBUGGER3 up, configure it, and then debug whatever service you like from your own site against their NETBUGGER3 server.

Analysis

I like writing NETBUGGER3.

I assume it's named after the third-level layer that it is designed to interact on, and perhaps the implication is that one day there could be a NETBUGGER2 or a NETBUGGER4.

It of course makes sense that the Network Measurement Center would be the place to come up with a useful tool for, well, measuring the network. It does not, however, seem to have been used much, as I can't find any subsequent references to it.

How to follow this blog

You can subscribe to this blog's RSS feed or if you're on a federated ActivityPub social network like Mastodon or Pleroma you can search for the user “@365-rfcs@write.as” and follow it there.

About me

I'm Darius Kazemi. I'm an independent technologist and artist. I do a lot of work on the decentralized web with ActivityPub, including a Node.js reference implementation, an RSS-to-ActivityPub converter, and a fork of Mastodon, called Hometown. You can support my work via my Patreon.