Several people asked me for a list of references from my talk on "Safecracking, Secrecy and Science" Sunday morning in Sebastopol, and I promised a blog entry with pointers. (If you were there, thanks for coming; it was fun. For everyone else, I gave a talk on the relationship between progress and secrecy in security, as illustrated by the evolution of locks and safes over the last 200 years.)
Unfortunately, few of the historical references I cited are on the web (or even in print), but a bit of library work is repaid with striking parallels between the security arms races of the physical and virtual worlds.
The California Secretary of State recently announced plans for a "top-to-bottom" review of the various electronic voting systems certified for use in that state. David Wagner of U.C. Berkeley and Matt Bishop of U.C. Davis will be organizing source code and "red team" analysis efforts for the project, and they've recruited a large group of researchers to work with them, including me. This has the potential to be one of the most comprehensive independent evaluations of election technologies ever performed, and is especially significant given California's large size and the variety of systems used there. Trustworthy voting is perhaps the most elemental of democratic functions, but, as security specialists know all too well, complex systems on the scale required to conduct a modern election are virtually impossible to secure reliably without broad and deep scrutiny. California's review is a welcome and vitally important, if small, step forward.
I'll be leading one of the source code review teams, and we'll be getting to work by the time you read this. We have a lot to do in a very short time, with the final report due to be published by late summer or early fall. Until then, I won't be able to discuss the project or the details of how we're progressing, so please don't take it personally if I don't.
For more details, the project FAQ is available here (PDF format).
UPDATE Aug 2, 2007: Our code review reports are now available. See this blog entry for details.
As interested as I am in the human-scale side of security, I suppose I should have strong opinions about last week's unscheduled evacuation drill in Boston. There's plenty to react to, after all: misguided marketing, hair-trigger over-reaction, shameless media pandering, oddball artists, and of course, disingenuous self-justification from all concerned. Yet for all the negligence and ineptitude on display, there doesn't seem to be very much to learn from these mistakes that we didn't already know. More troubling to me is the manipulative con game that triggered the whole spectacle in the first place. And, for a change, this has nothing to do with homeland security or fear mongering. But it strikes at the heart of commerce, culture and trust.
We often say that researchers break poor security systems and that feats of cryptanalysis involve cracking codes. As natural and dramatic as this shorthand may be, it propagates a subtle and insidious fallacy that confuses discovery with causation. Unsound security systems are "broken" from the start, whether we happen to know about it yet or not. But we talk (and write) as if the people who investigate and warn us of flaws are responsible for having put them there in the first place.
Words matter, and I think this sloppy language has had a small, but very real, corrosive effect on progress in the field. It implicitly taints even the most mainstream security research with a vaguely disreputable, suspect tinge. How best to disclose newly found vulnerabilities raises enough difficult questions by itself; let's try to avoid phrasing that inadvertently blames the messenger before we even learn the message.
I've long been an admirer of the James Randi Educational Foundation (JREF), a tireless advocate for critical thinking, skepticism, and the scientific method. The foundation offers a one million dollar prize to the first person who can provide convincing, testable proof of supernatural powers, and it recently set up a "remote viewing" challenge in which a purported psychic is asked to describe the contents of a special sealed box held at the JREF office in Fort Lauderdale, Florida.
Those who know me may be surprised to read this, but I'm pleased to announce that Jutta Degener and I have successfully visualized the contents of Randi's challenge box. We accomplished this from over a thousand miles away, entirely through mental concentration and the application of our unique talents (or, I should say, gifts), without any physical access or inside information. We can now reveal to the world the item in the box: a small mirrored flat circular wheel or disk, such as a DVD or CD. Randi, if you're reading this, a money order or certified check will be fine.
I saw an interesting story (thanks to Dave Farber's Interesting-People list) on how the TSA is considering selling advertising space at airport security checkpoints. My distaste at the prospect of being subjected to ads during these already humiliating and irritating screenings aside, I found the most fascinating part of this article to be its glimpse at the officious technical jargon that has emerged for airport security paraphernalia. Those grey tubs that you put your laptop in (after removing it from its case, of course) are apparently properly called "divestiture bins"; after X-ray, we retrieve our items at the "composure tables". I don't know about you, but I don't usually feel especially composed after making it through a long security line.
I'd say you can't make this stuff up, but apparently someone does.
Newly armed with the official terminology, I did a bit of googling this morning and found the TSA's Airport Security Design guidelines. This 333-page manual (PDF format) specifies, in all the detail one could ever hope for, everything there is to know about designing the security infrastructure for an airport, right down to the layout of the divest tables for the X-ray ingress points at sterile concourse station SSCPs. It's all very meticulous and complete, even warning of the "potential for added delay while the passenger divests or composes" (page 99). For some geeky reason, I find all this mind-numbing detail about the physical architecture of security to be strangely compelling reading, and I can't help looking for loopholes and vulnerabilities as I skim through it.
Somehow, for all the attention to minutiae in the guidelines, everything ends up just slightly wrong by the time it gets put together at an airport. Even if we accept some form of passenger screening as a necessary evil these days, today's checkpoints seem like case studies in basic usability failure, designed to inflict maximum frustration on everyone involved. The tables aren't quite the right height for bins to slide smoothly into the X-ray machines, bins slide off the edges of tables, there's never enough space or seating for putting shoes back on as you leave the screening area, and basic instructions have to be yelled across crowded hallways. According to the TSA's manual, there are four models of standard approved X-ray machines, from two different manufacturers. All four have slightly different heights, and all are different from the heights of the standard approved tables. Do the people setting this stuff up ever actually fly? And if they can't even get something as simple as the furniture right, how confident should we be in the less visible but more critical parts of the system that we don't see every time we fly?
There was a nice column by Randall Stross in the New York Times a few weeks back entitled "Theater of the Absurd at the T.S.A." [http://www.nytimes.com/2006/12/17/business/yourmoney/17digi.html]. (Unfortunately, the Times' perversely inverted pricing structure -- in which fresh news is free and old news is charged for -- means that the article may no longer be available on their free site by the time you look for it, but that's another subject for another day.) Anyway, the piece focuses on the ongoing plight of Christopher Soghoian, a computer science graduate student at Indiana University who has found himself in hot water for making available a web-based tool that creates visually convincing -- but entirely fake -- airline boarding passes. Soon after the site went online, the FBI raided his home and shut down the site, but ultimately decided against filing any criminal charges. The Transportation Security Administration isn't so sure, however. In spite of the FBI's declining to prosecute, the TSA is, as of this writing, apparently considering pursuing tens of thousands of dollars in administrative penalties against Mr. Soghoian. Bruce Schneier and I were both quoted in the Times piece, with the two of us suggesting that openness, rather than threats of fines and prosecution, would be a far better strategy for improving security here.
Does this mean I think putting the boarding pass generator online was a great idea? No, and in fact I have some serious reservations about it. But while Mr. Soghoian may have been guilty of a bit of poor judgment here, the TSA's behavior in response has been far, far worse -- and suggests problems much more damaging to our security than forged boarding passes.
If it works out, I expect to publish most of my non-academic-paper-style writing here.
For a variety of reasons (spam, net kooks, and so on), I've decided against including an "add a comment" feature, at least for now. So if something I write so infuriates you that you simply must respond, you'll need to get your own blog to do it.