by Darius Kazemi, Feb 1 2019
RFC-32 is titled “Some Thoughts on SRI's Proposed Real Time Clock”. It's authored by Jerry Cole of UCLA and dated February 5th, 1970. It's a response to RFC-28 and RFC-29, which both concerned real time clocks on the SRI HOST. If you'll recall, RFC-28 was a literal request for comment on the clock plan. RFC-29 was Bob Kahn of BBN, well, commenting.
The technical content
Cole notes that while the NIC (Network Information Center) can measure the amount of time it takes from a message leaving a HOST to arriving at another HOST, they don't have a mechanism for measuring the internal delay on the HOST itself between, for example, a message arriving over the network and then actually being communicated to the user. He notes that installing a clock on the HOST could allow for these measurements to be made.
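A toy sketch of the kind of measurement Cole is describing, entirely my own illustration (the function name and timestamps are hypothetical, not anything from the RFC): the HOST timestamps a message when it arrives off the network and again when it's handed to the user, using a millisecond clock.

```python
# Hypothetical illustration of measuring a HOST's internal delay with a
# millisecond-resolution clock: the difference between the arrival
# timestamp and the delivery-to-user timestamp.

def internal_delay_ms(arrived_at_ms: int, delivered_at_ms: int) -> int:
    """Delay between network arrival and delivery to the user, in ms."""
    return delivered_at_ms - arrived_at_ms

# e.g. message arrived at clock reading 12_345_100 ms and reached the
# user process at 12_345_225 ms
print(internal_delay_ms(12_345_100, 12_345_225))  # → 125 (ms of internal delay)
```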
I'm confused by some of the language here because I'm not so great at computer clock lingo. Cole says the resolution should be about 1 millisecond, the accuracy about “1 part in 10E7”, and the range about 24 hours.
According to the Network Time Protocol FAQ, resolution is the smallest possible increase in time allowed by your clock. So basically this clock would tick up one millisecond at a time.
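To make “resolution” concrete, here's a toy sketch (my own illustration, not from the RFC): a clock with 1 millisecond resolution can only report whole milliseconds, so anything finer than a tick is invisible to it.

```python
# A hypothetical clock with 1 ms resolution: it reports time as a whole
# number of milliseconds, discarding anything finer than one tick.

def read_clock_ms(true_time_seconds: float) -> int:
    """Quantize the 'true' time to whole milliseconds."""
    return int(true_time_seconds * 1000)

print(read_clock_ms(0.0014))  # → 1 (the extra 0.4 ms is lost)
print(read_clock_ms(0.0026))  # → 2
```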
According to the same FAQ, accuracy is how much two clocks, synchronized to the same time, will drift apart from one another. It's expressed as a fractional error, and “1 part in 10E7”, which eagle-eyed readers point out is probably scientific notation, means 10E7 = 1×10^8, so 1 part in 10^8, or 0.000001%. That works out to about 1 millisecond of clock drift per day, which seems way too accurate for his later claim of using a relatively inexpensive crystal clock. I'm not convinced this guy's math was right, though I also don't know much about the cost per accuracy of crystal clocks in 1970. Too good to be true by my analysis. Again, welcoming more input from eagle-eyed readers here.
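The drift arithmetic above is easy to check. A sketch, assuming the “1 part in 10^8” reading of Cole's notation (the alternative reading, 1 part in 10^7, is included for comparison):

```python
# Worst-case clock drift per day for a given fractional accuracy.
MS_PER_DAY = 24 * 60 * 60 * 1000  # 86,400,000 ms in a day

def drift_ms_per_day(fractional_accuracy: float) -> float:
    """How far a clock with this fractional error can drift in one day."""
    return fractional_accuracy * MS_PER_DAY

print(drift_ms_per_day(1e-8))  # 1 part in 10^8 → ~0.86 ms/day
print(drift_ms_per_day(1e-7))  # 1 part in 10^7 → ~8.6 ms/day
```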
Edit Feb 3 2019: This calculation is corroborated by Bill English in the upcoming RFC-34.
I'm not sure what “range” means in the context of this RFC and it's hard to search the web for because it's a common word.
He also refers to using “crystal controlled clocks” which were standard technology in 1970. If you've heard the term “quartz watch”, that's the same kind of crystal he is referring to.
There is so much to learn about clocks in computing. That Network Time Protocol FAQ is a nice resource, especially if you're a programmer. Crystal oscillators are fascinating, and their physics doubly so. Basically, if you cut a thin piece of crystal and run a small electric current through it, it will vibrate at a precise, well-known frequency. This provides the “pendulum” of the clock, with the advantage over a pendulum that you can, like, move the clock around and it doesn't mess with the timing. Also you can miniaturize it.
Clocks are extremely important in basically any electronic device for a variety of reasons. My favorite electronic integrated circuit of all time is the 555 timer, introduced in 1972, two years after this RFC. It's a very cheap way of keeping reasonably accurate time and some people claim it's the most popular integrated circuit component of all time. Although that kind of thing is hard to measure, it would certainly be my first guess for most popular component as well.