by Darius Kazemi, May 18 2019
Data reconfiguration for Atlantic City
RFC-138 is titled “Status Report on Proposed Data Reconfiguration Service” and authored by Anderson, et al. It's dated April 28th, 1971.
The technical content
Much like RFC-137, this RFC is the result of a committee formed at Crocker's suggestion in RFC-116 to make the best use of the available time at the upcoming Atlantic City NWG meeting. Where RFC-137 reported findings of the Telnet committee, this RFC reports the findings of the data reconfiguration committee.
If you've forgotten, the data reconfiguration service (DRS) was proposed in RFC-83 by Anderson, Harslem, and Heafner of RAND. It is basically a language that is meant to transform one arbitrary data format into another. It's meant to allow programmers to concisely provide formulas that can quickly translate, say, the first 20 characters of an ASCII message into EBCDIC and append the length of the resulting entire message. It's functionally a little bit like a regular expression with more of a focus on alteration of content. In theory you could come up with a single-line formula that you could feed into a data reconfiguration service and then suddenly you have a function that translates IMP messages into messages that your NCP can read. And if the specifications of a format change, instead of having to write a whole new program, you could just change the single line of your formula and everything would work with the new format.
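To make that concrete, here's a rough Python sketch (my own, not from the RFC) of the kind of one-off conversion program a single DRS formula was meant to replace: translate the first 20 ASCII characters of a message to EBCDIC and append the length of the result. The function name, the single-byte length field, and the cp037 code page are all illustrative assumptions.

```python
def ascii_to_ebcdic_with_length(message: bytes) -> bytes:
    """Translate the first 20 ASCII characters to EBCDIC and append
    the resulting message's length as a single byte. The cp037 code
    page and one-byte length field are my illustrative choices,
    not anything specified in RFC-138."""
    head = message[:20].decode("ascii").encode("cp037")  # ASCII -> EBCDIC
    body = head + message[20:]                           # remainder passes through
    return body + bytes([len(body)])                     # append length byte

result = ascii_to_ebcdic_with_length(b"HELLO, ARPANET USERS!")
```

The appeal of the DRS is that this whole function collapses into one declarative formula, so when the format changes you edit the formula rather than rewrite the program.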
A user who wants to use this service would use Telnet to connect to the DRS at a well-known site and socket. Anyone who knows the site/socket combination can connect and request a data reconfiguration. A user can connect to the DRS and either request that it accept input data from one socket and send the output to another, or operate it in interactive mode (where the user provides the input and receives the output, presumably over the Telnet socket itself).
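As a sketch of the interactive-mode idea, here's a tiny TCP service that accepts EBCDIC bytes on a connection and writes back the ASCII translation. The one-shot framing, the 0xFF terminator, and all the names are my assumptions, not RFC-138's actual protocol:

```python
import socket
import socketserver
import threading

class DRSHandler(socketserver.StreamRequestHandler):
    """Interactive mode, loosely imagined: the caller sends EBCDIC
    bytes terminated by 0xFF and receives the ASCII translation back
    on the same connection."""
    def handle(self):
        data = bytearray()
        while True:
            b = self.rfile.read(1)
            if not b or b == b"\xff":  # connection closed or terminator seen
                break
            data += b
        self.wfile.write(bytes(data).decode("cp037").encode("ascii"))

# Bind to an ephemeral port and serve in the background.
server = socketserver.TCPServer(("127.0.0.1", 0), DRSHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A "user" connects, sends EBCDIC input, and reads the ASCII output.
with socket.create_connection(server.server_address) as s:
    s.sendall("HELLO".encode("cp037") + b"\xff")
    reply = s.recv(1024)
server.shutdown()
```

The real service would speak Telnet and support the socket-to-socket mode as well; this only gestures at the "connect, send data, get reconfigured data back" shape of the interaction.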
The actual specification of the DRS “form syntax” (the formal rules you can provide to the DRS to explain what you want it to do) does not vary all that much from what was outlined in RFC-83. The big difference here is that it's more precisely defined, specific limits are imposed on things like field length, and many useful recipes are provided that act as illustrations of what this could be useful for.
One interesting thing is that this is a service that lives on the network. This RFC is effectively describing what we might think of as a “web API” in 2019: you send some data to a server, it does a calculation and sends you back new data. Of course, this is some 18 years before the invention of the World Wide Web.
MIT, UCLA, UCSB, and RAND are all planning to implement a DRS and provide it as a service to the network.
I really love that the document ends with the list of recipes and then the list of proposed uses of the DRS. Here's a recipe (my term, not theirs, by the way):
VARIABLE LENGTH RECORDS

Some devices, terminals and programs generate variable length records. The following rule picks up variable length EBCDIC records and translates them to ASCII.

```
CHAR(,E,,#),     /*pick up all (an arbitrary number of) EBCDIC characters in the input stream*/
(,X,X"FF",2)     /*followed by a hexadecimal literal, FF (terminal signal)*/
:(,A,CHAR,),     /*emit them as ASCII*/
(,X,X"25",2);    /*emit an ASCII carriage return*/
```
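For comparison, here's roughly what that rule does, written out imperatively in Python (my sketch, not anything from the RFC): split the input on the 0xFF terminal signal, translate each EBCDIC record to ASCII, and end each record with a carriage return. The rule emits X"25", which the RFC's comment calls an ASCII carriage return, so I use CR (0x0D) here; cp037 is one common EBCDIC code page.

```python
def translate_records(stream: bytes) -> bytes:
    """Imperative equivalent of the DRS rule above (my own sketch):
    split on the 0xFF terminal signal, translate each EBCDIC (cp037)
    record to ASCII, and append a carriage return to each."""
    out = bytearray()
    for record in stream.split(b"\xff"):
        if record:  # skip the empty piece after a trailing 0xFF
            out += record.decode("cp037").encode("ascii") + b"\r"
    return bytes(out)

ebcdic = "HELLO".encode("cp037") + b"\xff" + "WORLD".encode("cp037") + b"\xff"
ascii_out = translate_records(ebcdic)  # b"HELLO\rWORLD\r"
```

Four lines of DRS form syntax versus a hand-written loop per format pair: that's the trade the committee is pitching.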
And some of the proposed uses include:
- reformatting during file transfer
- translating an image specified in one graphics format to another
- as a description language for message formats
- input validation before insertion into databases
In my opinion it's really effective to put forth examples and use cases like these. They make a strong case for the DRS as a critical service that should be implemented as soon as possible. Whether it actually gets implemented, I suppose we'll see.
This is very explicitly an early example of I/O streams over a network as an organizing principle of programming. (Though the concept itself goes back to the early 1960s.) James Halliday (aka Substack) has written a guidebook on I/O streams in Node.js. Specific technology aside, I think it does a good job of explaining some of the advantages of stream-based thinking/programming.
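For a minimal illustration of that stream-based way of thinking (entirely my own example, in Python rather than Node.js): each stage consumes data chunk by chunk and emits transformed chunks, so stages compose by simple wrapping and nothing buffers the whole input.

```python
def to_upper(chunks):
    # A pipeline stage: lazily consume chunks, emit transformed chunks.
    for chunk in chunks:
        yield chunk.upper()

def add_newlines(chunks):
    # A second stage; it composes with the first just by wrapping it.
    for chunk in chunks:
        yield chunk + b"\n"

source = iter([b"hello", b"world"])  # stands in for a network socket
result = b"".join(add_newlines(to_upper(source)))  # b"HELLO\nWORLD\n"
```

The DRS occupies the same conceptual slot: a transforming stage you plug in between a data source and a data sink on the network.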