Computer security depends ultimately on the security of the computer -- it's an indisputable tautology so self-evident that it seems almost insulting to point it out. Yet what may be obvious in the abstract is sometimes dangerously under-appreciated in practice. Most of us in security come from software-centric backgrounds, and we're often predisposed to relentlessly scrutinize the things we understand best while quietly assuming away everything else. But attackers, sadly, are under no obligation to play to our analytical preferences. Several recent research results make an eloquent and persuasive case that a much broader view of security is needed. A bit of simple hardware trickery, we're now reminded, can subvert a system right out from under even the most carefully vetted and protected software.
Earlier this year, Princeton graduate student Alex Halderman and seven of his colleagues discovered practical techniques for extracting the contents of DRAM, including cryptographic keys, after a computer has been turned off [link]. This means, among other worries, that if someone -- be it a casual thief or a foreign intelligence agent -- snatches your laptop, the fact that it had been "safely" powered down may be insufficient to protect your passwords and disk encryption keys. And the techniques are simple and non-destructive, involving little more than access to the memory chips and some canned-air coolant.
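To make the danger concrete, here's a toy sketch (in Python, and emphatically not the authors' actual tools) of the most naive first step an attacker might take against a captured memory image: scanning for small, random-looking regions that could be key material. The window size and entropy threshold are placeholders of my own; the techniques in the paper are far more precise, searching for telltale structures like AES key schedules and correcting for bit decay.

    import math
    import sys

    WINDOW = 32      # bytes per window; enough to cover a 256-bit key (illustrative value)
    STRIDE = 16
    THRESHOLD = 4.5  # bits of entropy per byte; ordinary code and text score lower (illustrative value)

    def entropy(data):
        """Shannon entropy of a byte string, in bits per byte."""
        counts = [0] * 256
        for b in data:
            counts[b] += 1
        total = len(data)
        return -sum(c / total * math.log2(c / total) for c in counts if c)

    def candidate_key_offsets(image):
        """Yield offsets of windows random-looking enough to be key material."""
        for off in range(0, len(image) - WINDOW + 1, STRIDE):
            if entropy(image[off:off + WINDOW]) > THRESHOLD:
                yield off

    if __name__ == "__main__":
        with open(sys.argv[1], "rb") as f:   # a raw image captured from RAM
            dump = f.read()
        for offset in candidate_key_offsets(dump):
            print("possible key material at offset 0x%x" % offset)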
If that's not enough, this month at the USENIX Workshop on Large-Scale Exploits and Emergent Threats, Samuel T. King and five colleagues at the University of Illinois Urbana-Champaign described a remarkably efficient approach to adding hardware backdoors to general-purpose computers [pdf]. Slipping a small amount of malicious circuitry (just a few thousand extra gates) into a CPU's design is sufficient to enable a wide range of remote attacks that can't be detected or prevented by conventional software security approaches. Carrying out such attacks requires the ability to subvert computers wholesale as they are built or assembled. "Supply chain" threats thus operate on a different scale than we're used to thinking about -- more in the realm of governments and organized crime than lone malfeasants -- but the potential effects can be absolutely devastating under the right conditions.
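For intuition about how little it takes, here's a toy software model (my own drastic simplification, not the circuit from the paper) of the flavor of backdoor the authors describe: a hidden bit of state watches the data stream for a secret trigger value and, once it sees it, quietly stops enforcing the processor's memory protection.

    class ToyCPU:
        TRIGGER = 0xDEADBEEF    # secret value known only to the attacker (made up here)

        def __init__(self, memory_size=1024, protected_limit=256):
            self.memory = [0] * memory_size
            self.protected_limit = protected_limit  # addresses below this are kernel-only
            self.backdoor_armed = False             # hidden state added by the extra "gates"

        def store(self, addr, value, user_mode=True):
            if value == self.TRIGGER:       # the malicious logic: any write of the trigger
                self.backdoor_armed = True  # arms the backdoor
            if user_mode and addr < self.protected_limit and not self.backdoor_armed:
                raise PermissionError("protection fault")
            self.memory[addr] = value

        def load(self, addr, user_mode=True):
            if user_mode and addr < self.protected_limit and not self.backdoor_armed:
                raise PermissionError("protection fault")
            return self.memory[addr]

    if __name__ == "__main__":
        cpu = ToyCPU()
        cpu.store(0, 42, user_mode=False)   # the "kernel" stashes a secret
        try:
            cpu.load(0)                     # normal user access is blocked, as the OS expects
        except PermissionError:
            print("blocked")
        cpu.store(512, ToyCPU.TRIGGER)      # attacker-supplied data arrives in ordinary memory
        print("leaked:", cpu.load(0))       # the privilege check is now silently bypassed

The point is that the trap costs almost nothing and is invisible to software: every instruction behaves exactly as the architecture manual promises until the attacker's trigger shows up.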
And don't forget about your keyboard. At USENIX Security 2006, Gaurav Shah, Andres Molina and I introduced JitterBug attacks [pdf], in which a subverted keyboard can leak captured passwords and other secrets over the network through a completely uncompromised host computer, via a simple covert timing channel. The usual countermeasures against keyboard sniffers -- virus scanners, network encryption, and so on -- don't help here. JitterBug attacks are potentially viable not only at the retail level (via a surreptitious "bump" in the keyboard cable), but also wholesale (by subverting the supply chain or through viruses that modify keyboard firmware).
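The encoding itself is almost embarrassingly simple. Here's a toy sketch of the idea (the window size and framing are placeholders of mine; a real JitterBug lives in keyboard hardware or firmware and has to cope with accumulated delay and network noise, which this ignores): the keyboard pads each inter-keystroke gap so that, taken modulo a small window, it lands in one half for a 0 bit and in the other half for a 1.

    W = 20  # timing window in milliseconds (illustrative value)

    def encode(gaps, bits):
        """Jitter each inter-keystroke gap so that (gap mod W) encodes one bit."""
        out = []
        for gap, bit in zip(gaps, bits):
            target = W // 4 if bit == 0 else 3 * W // 4  # center of the 0- or 1-half
            out.append(gap + (target - gap) % W)         # add at most W-1 ms of delay
        return out

    def decode(gaps):
        """Recover the bits from gaps observed anywhere downstream, e.g. on the network."""
        return [0 if gap % W < W // 2 else 1 for gap in gaps]

    if __name__ == "__main__":
        typing = [103, 187, 441, 220, 399, 158]   # made-up inter-keystroke times (ms)
        secret = [1, 0, 1, 1, 0, 0]               # bits of a captured password
        jittered = encode(typing, secret)
        print("added jitter:", [j - g for j, g in zip(jittered, typing)])
        print("recovered   :", decode(jittered))  # matches secret

A few milliseconds of added delay is imperceptible to the person typing, but it survives all the way to an interactive network connection (ssh, say), where anyone watching packet timing can read the bits back out.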
Should you actually be worried? These attacks probably don't threaten all, or even most, users, but the risk that they might be carried out against any particular target isn't really the main concern here. The larger issue is that they succeed by undermining our assumptions about what we can trust. They work because our attention is focused elsewhere.
An encouraging (if rare) sign of progress in Internet security over the last decade has been the collective realization that even when we can't figure out how to exploit a vulnerability, someone else eventually might. Thankfully, it's now become (mostly) accepted practice to fix newly discovered software weaknesses even if we haven't yet worked out all the details of how to use them to conduct attacks. We know from hard experience that when "theoretical" vulnerabilities are left open long enough, the bad guys eventually figure out how to make them all too practical.
But there's a different attitude, for whatever reason, toward the underlying hardware. Even the most paranoid among us often take the integrity of computer hardware as an article of faith, something not to be seriously questioned. A common response to hardware security research is bemused puzzlement at how we overpaid academics waste our time on such impractical nonsense when there are so many real problems we could be solving instead. (Search the comments on Slashdot and other message boards about the papers cited above for examples of this reaction.)
Part of the disconnect may be because hardware threats are so unpleasant to think about from the software perspective; there doesn't seem to be much we can do about them except to hope that they won't affect us. Until, of course, they eventually do.
Reluctance to acknowledge new kinds of threats is nothing new, and it isn't limited to casual users. Even the military is susceptible. For example, Tempest attacks against cryptographic processors, in which sensitive data radiates out via spurious RF and other emanations, were first discovered by Bell Labs engineers in 1943. They noticed signals that correlated with the plain text of encrypted teletype messages coming from a certain piece of crypto hardware. According to a recently declassified NSA paper on Tempest [pdf]:
Bell Telephone faced a dilemma. They had sold the equipment to the military with the assurance that it was secure, but it wasn't. The only thing they could do was to tell the Signal Corps about it, which they did. There they met the charter members of a club of skeptics who could not believe that these tiny pips could really be exploited under practical field conditions. They are alleged to have said something like: "Don't you realize there's a war on? We can't bring our cryptographic operations to a screeching halt based on a dubious and esoteric laboratory phenomenon. If this is really dangerous, prove it."

Some things never change. Fortunately, the Bell Labs engineers quickly set up a persuasive demonstration, and so the military was ultimately convinced to develop countermeasures. That time, they were able to work out the details before the enemy did.
Thanks to Steve Bellovin for pointing me to www.nsa.gov/public/crypt_spectrum.cfm, a small trove of declassified NSA technical papers from the 1970s and 1980s, all of which are well worth reading. And no discussion of the security of the underlying layers would be complete without a reference to Ken Thompson's famous Turing Award lecture, Reflections on Trusting Trust.