In this morning’s NYT, there was an illuminating article on cyberwarfare. In short, for both the Libyan attacks and the strike in Pakistan against Bin Laden, the U.S. considered, but ultimately rejected, the option of leveraging cyberwarfare against the air defense systems in those countries. The entire article is filled with quotable phrases; here are just a couple:
“These cybercapabilities are still like the Ferrari that you keep in the garage and only take out for the big race and not just for a run around town, unless nothing else can get you there,” said one Obama administration official briefed on the discussions.
“They were seriously considered because they could cripple Libya’s air defense and lower the risk to pilots, but it just didn’t pan out,” said a senior Defense Department official.
Why did they decide not to leverage the attacks? Not because they would have been ineffective; in fact, the sources in the article acknowledge that a cyberattack might have reduced the risk to U.S. forces. Instead, from the article we learn, “‘We don’t want to be the ones who break the glass on this new kind of warfare,’ said James Andrew Lewis, a senior fellow at the Center for Strategic and International Studies.” So essentially the worry is that once the U.S. starts leveraging cyberwarfare, it invites others to do the same. The article, by Eric Schmitt and Thom Shanker, did a great job of explaining some of the trade-offs with cyberwarfare and also how hard these sorts of attacks can be to execute. From a cybersecurity perspective, there are several things to consider.
First, the evidence now seems overwhelming that any country with sufficient resources (to say nothing of non-state actors) is actively researching techniques for cyberattacks against its targets of interest. Whether you consider Stuxnet (which the article suggests was launched through American-Israeli cooperation) or the recent Wired article on the drone-fleet infection that I previously referenced, it seems clear that major governments have paid operatives figuring out how to break into networks.
Second, I am concerned with what this means for responsible disclosure. Gone are the halcyon days, if indeed they ever existed, of a vulnerability being discovered by an intrepid researcher and responsibly disclosed to the CERT. We’re far beyond debates about whether disclosing a zero-day on bugtraq is ethical or not. It seems quite likely, though this is clearly conjecture on my part, that zero-days are being stockpiled by governments around the world against commercial software, embedded digital control systems, and everything in-between.
The question is, from a policy perspective, how is this treated by the organization that discovers it? Clearly weaponizing a vulnerability is an advantage to the entity that discovers it, but if the vulnerability is in commercial software, how do you protect yourself without telling the vendor about the issue to get a fix (and thus losing the advantage of your discovery)? It would be like conventional warfare where everyone was using the same exact tanks. Imagine a mechanic discovering a hard-to-exploit weakness in the armor, but the only way to fix it would be to get the parts supplier to offer the fix for everyone. What do you do: protect your own troops (and everyone else’s) by disclosing the weakness to the supplier, or hope the other side hasn’t discovered it yet and use it to your own advantage?
This spills over into all sorts of questions about who has the advantage in this new arms race and what role commercial security tools play in the defense against, or execution of, cyberattacks. I’m just beginning to think seriously about this space, and I expect the answers to these, and a host of other, questions won’t come quickly or definitively.