by Darius Kazemi, April 13 2019
RFC-103 is titled “Implementation of Interrupt Keys”. It's authored by Richard Kalin of MIT Lincoln Laboratory on February 24, 1971.
A business note: my ten-month Mozilla Fellowship has supported this project up until now, but that has ended. If you like what I'm doing here, please consider supporting me via my Patreon.
The technical content
The author wishes to share his concerns about the interrupt function in the Host/Host protocol. He breaks his document into three sections: The Problem, A Solution, and Commentary. I'll borrow this nomenclature.
The problem as he states it is that the interrupt key or break key (or “help request button”!) does two things:
- it halts the user process
- it switches the keyboard input stream: anything typed before that point in time still goes to the halted process; anything typed after goes to the “supervisory” process, which in most cases I'd guess is the operating system shell
Kalin's problem is that the interrupt function of the NCP communicates information about halting a remote process, but it doesn't tell the remote host when to stop accepting keyboard input. Because the NCP's interrupt command travels on a totally separate, parallel connection (the control link) from the data itself, the receiving host can't guarantee that the interrupt arrives in the same order, relative to the keystrokes, that it was sent in. For example, a user might send
L I S T [INTERRUPT] L O G O U T, but the receiving host might read, chronologically,
L I S T L O G O U T [INTERRUPT]. So while the user intended “LIST” for the user process and “LOGOUT” for the operating system, the receiving host would attempt to run “LIST” followed by “LOGOUT” on the user process, and only then break to the operating system.
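The race is easy to illustrate. Here's a minimal sketch (my own illustration, not from the RFC) of why delivering the interrupt on a separate link loses the ordering information:

```python
# Hypothetical sketch of the ordering problem: the interrupt travels on a
# separate control link, so it can arrive after keystrokes typed *after* it.

# What the user typed, in order ("INT" marks the interrupt key):
typed = ["L", "I", "S", "T", "INT", "L", "O", "G", "O", "U", "T"]

# The data link carries only keystrokes; the control link carries the interrupt.
data_link = [k for k in typed if k != "INT"]
control_link = ["INT"]

# If the control link is slower, the receiver sees all the data first:
received = data_link + control_link
print("".join(received[:-1]), "then", received[-1])
# -> LISTLOGOUT then INT
# The receiver can no longer tell that "LOGOUT" was meant for the supervisor.
```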
He points out that encoding the interrupt as ASCII in the data stream doesn't work for all ARPANET computers (again see RFC-48 for an example why).
He suggests that all character data be sent in 8-bit chunks. Since an ASCII character needs only 7 bits, each 8-bit byte leaves a spare high bit, and he suggests a scheme that uses that high bit to mark in-band control signals like interrupt.
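A rough sketch of that scheme might look like the following (the marker value and function names are my assumptions, not Kalin's; the RFC only proposes the high-bit idea in general terms):

```python
# High-bit scheme sketch: 7-bit ASCII rides in the low bits of each byte,
# leaving the high bit free to flag in-band control signals.

INTERRUPT = 0x80 | 0x01  # high bit set marks a control byte; 0x01 is arbitrary

def encode(text, interrupt_at=None):
    """Encode ASCII text, optionally splicing an interrupt marker in-stream."""
    out = []
    for i, ch in enumerate(text):
        if i == interrupt_at:
            out.append(INTERRUPT)
        out.append(ord(ch) & 0x7F)  # plain data: high bit clear
    return bytes(out)

def decode(stream):
    """Split a byte stream into (text, position of the interrupt or None)."""
    text, interrupt_pos = [], None
    for b in stream:
        if b & 0x80:                # high bit set: control, not data
            interrupt_pos = len(text)
        else:
            text.append(chr(b))
    return "".join(text), interrupt_pos

msg = encode("LISTLOGOUT", interrupt_at=4)
print(decode(msg))  # -> ('LISTLOGOUT', 4): the interrupt arrives in order, after "LIST"
```

Because the marker travels in the same stream as the keystrokes, its position relative to them is preserved, which is exactly what the control-link interrupt can't promise.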
He notes that this could still fail if the receiving host is out of memory, so the interrupt on the separate control link should still exist. When a computer gets that interrupt, it needs to scan its buffered input for the in-band marker and synchronize everything up. And even in the case of a failure, perhaps for out-of-memory reasons, the receiving host has still processed an interrupt and can at least guess at what to do next.
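That resynchronization step might look something like this (a hedged sketch under my own assumptions; the fallback behavior here is just one plausible guess):

```python
# Resynchronization sketch: when the out-of-band NCP interrupt arrives,
# scan buffered input for the in-band marker to find where supervisor
# input begins.

MARKER = 0x81  # assumed in-band interrupt byte (high bit set)

def resynchronize(buffered):
    """Return (input for the halted process, input for the supervisor)."""
    if MARKER in buffered:
        i = buffered.index(MARKER)
        return buffered[:i], buffered[i + 1:]
    # Marker never made it into the buffer (e.g. data dropped when memory
    # ran out): fall back to a guess -- here, give it all to the supervisor.
    return b"", buffered

pre, post = resynchronize(b"LIST\x81LOGOUT")
print(pre, post)  # -> b'LIST' b'LOGOUT'
```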
Kalin notes that this only works for 7-bit ASCII! There were literally scores of competing character encodings, many of which used all 8 bits, and this would not work for any of them. He also notes that the scanning solution kinda sucks. And there are cases that could cause infinite loops on the remote computer.
In his estimation, the real solution is a significant rewrite of the NCP protocol. His proposal here is merely a patch until the next design revision.
Kalin authored RFC-60, which I felt was pretty persuasively written, and once again he delivers with a really well-organized rhetorical argument.