by Darius Kazemi, June 27 2019
RFC-178 is titled “NETWORK GRAPHIC ATTENTION HANDLING”. It's by first-time RFC author Ira W. Cotton of MITRE, dated June 27th, 1971.
The technical content
This RFC describes ideas for “attention-handling”, aka interactive graphics, over the network. This was under-defined in RFC-177 so I'm glad to have it better defined here.
Cotton defines an attention event as “a stimulus to the graphic system, such as that resulting from a keystroke or light pen usage, which presents information to the system”. Information from the input is interpreted via hardware and software and then an immediate (emphasis Cotton's) action is taken: either the information is passed to another process, or the graphical display or its underlying data are updated somehow. This is separate from the program logic itself — basically it's the part of the program that governs immediate response to a user input, rather than any application logic of the software. It's the kind of software plumbing that powers something like an onKeyDown event in a modern scripting language.
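To make the separation concrete, here's a tiny sketch in a modern scripting language (all names are mine, not the RFC's): the attention handler takes the immediate action — echoing the input to a display buffer — and passes the information along, while the application logic lives elsewhere.

```javascript
// Illustrative only: the attention-handling layer reacts immediately
// to an input event, separate from any application logic.
const display = [];   // stands in for the graphical display
const received = [];  // stands in for "another process" receiving the info

function handleAttention(event) {
  display.push(event.data);  // immediate action: update the display
  received.push(event);      // pass the information to another process
}

// A keystroke arrives as an attention event:
handleAttention({ device: 'keyboard', data: 'A' });
```

The point of the pattern is that `handleAttention` knows nothing about what the program ultimately does with the keystroke — it only governs the immediate response.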
The concept applies equally to simple input devices like keyboards and to complex input devices that are computers in their own right.
The paper is laying out the case for a standard attention-handling system to go along with a standard graphical display system. The idea is to make things more modular. Without these standards, if you wanted to make your computer program compatible with some new light pen model, you'd need to rewrite a significant chunk of your program. But if the manufacturer could provide a kind of input processor program that translates from its input device to the standard attention-handling system, then you could in theory plug in a new device “for free”.
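Here's a hedged sketch of that modularity argument (the shapes and names are my invention, not anything from the RFC): each device ships with a small input-processor function that translates its raw, device-specific signals into one standard attention-event form, so the user program never changes when a device is swapped.

```javascript
// Hypothetical per-device "input processors" that translate raw device
// signals into a single standard attention-event shape.
function lightPenProcessor(raw) {
  // raw: { px, py } in device-specific units
  return { device: 'light-pen', type: 'point', data: { x: raw.px, y: raw.py } };
}

function trackballProcessor(raw) {
  // raw: { dx, dy } from the trackball
  return { device: 'trackball', type: 'point', data: { x: raw.dx, y: raw.dy } };
}

// The user program consumes only the standard form, regardless of device:
function program(event) {
  return `point at (${event.data.x}, ${event.data.y}) from ${event.device}`;
}
```

Plugging in a new device then means writing one small processor function, not rewriting `program`.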
On top of that, if a user program could connect to ARPANET, send the standard attention-handling events, and receive the standard graphical display primitives, then you could easily connect your program to a beefy remote graphical processing system: send it your live interactive inputs, have it respond appropriately with the standard graphical primitives, and render the result locally on a graphical display.
The document then discusses different “graphical input devices”, including:
- basically keyboards
- analog devices
  - something that turns continuous input into a numerical value, like a mouse or a trackball
  - good to use as a “pointing device”
- tablets
  - something that converts a tap of a stylus on a 2D surface to a digital (x,y) coordinate
- light pens
  - a pen that interacts directly with a screen, similar to a touchscreen today, though you need to use the special pen to make it work
- internal attentions
  - these are “inputs” from inside the system itself, but intrinsic to the hardware rather than the software, like a hardware interrupt saying “you are trying to draw to a coordinate that is outside the bounds of the screen”
- logical attentions
  - this is like an “internal attention” but based on a software rule rather than something baked into the hardware
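The internal-versus-logical distinction can be illustrated with a toy example (mine, not the RFC's): an internal attention fires on a hardware limit like screen bounds, while a logical attention fires on a rule the software defines, like the cursor entering a region the program cares about.

```javascript
// Illustrative contrast between internal and logical attentions.
const SCREEN = { width: 1024, height: 1024 };          // hardware limit
const hotRegion = { x1: 100, y1: 100 };                 // software-defined rule

function checkAttentions(x, y) {
  const attentions = [];
  if (x < 0 || y < 0 || x >= SCREEN.width || y >= SCREEN.height) {
    // baked into the hardware: drawing outside the screen
    attentions.push('internal: out of screen bounds');
  } else if (x <= hotRegion.x1 && y <= hotRegion.y1) {
    // purely a software rule the program chose to care about
    attentions.push('logical: cursor entered hot region');
  }
  return attentions;
}
```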
The document then discusses what an “intelligent terminal” is, though the author doesn't seem to be able to settle on something concrete.
The last, very brief, section of the RFC is an attempt to suggest a protocol for the different kinds of attention, though the intent is not “to formally propose such a protocol down to the level of ‘this bit means that’”. Cotton proposes that any protocol will need to identify the type of device, identify the event being communicated, and then carry the event's data itself.
In the transcribed official versions of RFC-178, we see this text:
> Figure 3 Network Configuration (Omitted due to complexity)
> Figure 4 Network Configuration with Intelligent Terminal (Omitted due to complexity)
As far as I'm aware, these figures are not available anywhere online. I found the original diagrams at the Computer History Museum archives and took some photographs. Here they are!
How to follow this blog
You can subscribe to this blog's RSS feed or if you're on a federated ActivityPub social network like Mastodon or Pleroma you can search for the user “@firstname.lastname@example.org” and follow it there.
I'm Darius Kazemi. I'm an independent technologist and artist. I do a lot of work on the decentralized web with ActivityPub, including a Node.js reference implementation, an RSS-to-ActivityPub converter, and a fork of Mastodon, called Hometown. You can support my work via my Patreon.