Security Expert? No Such Thing

expert (n.): a person who has a comprehensive and authoritative knowledge of or skill in a particular area.

authoritative (adj.): able to be trusted as being accurate or true; reliable.

Let me start by throwing myself under the bus. The tagline of my book “Network Security Architectures” from 2004 is “Expert guidance on designing secure networks.” Lately, it seems security experts are popping up everywhere. The more I think about it, though, the more I think we, as a community, need to put down that title until we prove that the technology we build and the systems we implement can predictably and substantially address the problem.

By almost any empirical measure, the cyber security best practices and technology we’ve built over the last 20 years are not meeting even this basic standard. Things have gotten worse since Robert Morris’s fateful original worm started all the fun in 1988. It is comical that we often open presentations with a gloom-and-doom slide showing how bad things have gotten. Does anyone really not know this? Even my mom sees the headlines and can engage with me about my work in a way she never could before. Information is Beautiful has a wonderfully terrible-to-behold infographic.

There are plenty of large successful companies listed who can afford the latest tech and the smartest analysts.

And yes, I know that all security measures can be circumvented… safes are rated on the number of hours they can withstand before being breached… I don’t care. What we’re dealing with is a systemic, decades-long inability to stop the bad guys in any consistent way.

I also know that there is probably an organization out there somewhere that has yet to be breached (or to discover a breach), and that, too, misses the point. Such an organization, employing the best and the brightest, deploying the latest technology, and helmed by the most respected “expert” in the business, would still, in the end, fall back on something like, “now let’s hope for the best.” Furthermore, such an organization would need a significant team to respond operationally to moment-by-moment attacks lest it become tomorrow’s breach.

Finally, there’s a great excuse our industry trots out regularly: “This is really, really hard.” And no question, it is. I suspect it is several orders of magnitude harder than protecting your home. The reason most people in safe neighborhoods don’t live in constant fear of their belongings being taken isn’t that their door locks or windows are especially secure. (They can surely be bypassed in much less time than it would take to bypass a modern network firewall.) It is because of two reasons unrelated to perimeter home security.

First, we count on modern law enforcement to deter most criminals. Second, we all bank on the probability that if a criminal decides to take the risk, our home likely won’t be the one targeted. Cyber security enjoys little in the way of law-enforcement deterrence, nor does it impose a particularly high cost on the adversary per attempt. In fact, if home invasion could be attempted wholesale with the same impunity that botnet operators enjoy, we’d all need armed guards.

And yet, despite all these extenuating circumstances, I’m not willing to let all of us off the hook. I’m frankly embarrassed, and certainly frustrated, by the progress made by the industry I’ve worked in for 20 years. There’s a lot more to say about why this may be happening and what we can do about it, but that’s for another essay.

Consider, by comparison, another industry from the physical world. (And yes, I know security folks love to use physical analogies; sorry to be so predictable.)

You can call Joseph Strauss and Charles Ellis experts. They designed the Golden Gate Bridge. Their expertise stems not only from the finished product but also from the fact that others can follow similar principles and create a bridge with similar safety, reliability, and beauty. Also, unlike the breach-free organization from our earlier example, though the bridge requires constant maintenance, it isn’t the sort of maintenance that might result in total structural failure if the team calls in sick for a few days.

You might say we, as a modern civilization, have figured this bridge-building thing out. You or I couldn’t build a bridge of course, but if we needed one, I’m certain we could find the right experts to do it for us and we’d be confident in the checks and balances of safety inspections and the like.

Not so for securing IT systems.

There are clearly experts working within cyber security, but each is focused on one specific discipline. Cryptography is the most obvious example. Cryptographers aren’t yet at the level of bridge designers in terms of stability and confidence, but they are close. Standards are established, there is rigorous peer review, and the fastest computers in the world routinely try to cheat the mathematics at the foundation of everyone’s previously vetted work.

However, the moment a person enters the equation by typing a password to unlock his magnificently encrypted data, the perils of security as a system rear their heads. Key loggers, weak passwords, and social engineering can subvert the mightiest algorithm. It is the same with every other aspect of security. Authentication, access control, alerting, detection… all fail primarily when they begin to depend on each other as a system. Back to our previous example: imagine the bridge covered with secret spots which, if hit just right with a hammer, would initiate a collapse. Joe and Chuck would be monsters, not marvels.
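The weak-password point can be made concrete with a toy sketch. Everything here is invented for illustration: the PIN, the secret, and the XOR “cipher” (a stand-in for real encryption). The punchline is that the cipher’s math never gets attacked at all; a four-digit PIN shrinks the attacker’s job to 10,000 guesses.

```python
import hashlib

def derive_key(password: str) -> bytes:
    # Real systems use a slow KDF (PBKDF2, scrypt, Argon2);
    # plain SHA-256 keeps the toy small.
    return hashlib.sha256(password.encode()).digest()

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Stand-in for a real cipher: XOR against a repeating 32-byte keystream.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret = b"launch codes"
# The user protects the data with a 4-digit PIN -- the human link in the chain.
ciphertext = xor_cipher(derive_key("4831"), secret)

# The attacker ignores the cryptography entirely and exhausts the PIN space,
# keeping any guess that decrypts to plausible (printable ASCII) text.
candidates = []
for n in range(10_000):
    pin = f"{n:04d}"
    plaintext = xor_cipher(derive_key(pin), ciphertext)
    if plaintext.isascii() and plaintext.decode(errors="ignore").isprintable():
        candidates.append((pin, plaintext))
```

The loop finishes in a fraction of a second, and the real PIN is among the handful of candidates it keeps, which is the whole argument: the system is only as strong as its weakest dependency, not its strongest algorithm.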

So the next time someone is introduced as a security expert, feel free to narrow your eyes a bit. And if you consider yourself among the most knowledgeable folks in our fair corner of the cyber landscape, may I humbly suggest picking a new honorific: security practitioner, perhaps. The fact that we’re all still practicing in this industry gives the title a nice honesty, don’t you think?

As for my book’s tagline, I’ll just beg forgiveness. It was 12 years ago, and I’m smart enough now to know that I am dumber.

“A New Kind of Warfare”

In this morning’s NYT, there was an illuminating article on cyberwarfare. In short, for both the Libyan air campaign and the strike against Bin Laden in Pakistan, the U.S. considered, but ultimately rejected, the option to leverage cyberwarfare against those countries’ air defense systems. The entire article is filled with quotable phrases; here are just a couple:

“These cybercapabilities are still like the Ferrari that you keep in the garage and only take out for the big race and not just for a run around town, unless nothing else can get you there,” said one Obama administration official briefed on the discussions.

“They were seriously considered because they could cripple Libya’s air defense and lower the risk to pilots, but it just didn’t pan out,” said a senior Defense Department official.

Why did they decide not to launch the attacks? Not because they would not have been effective. In fact, the sources in the article acknowledge that doing so might have reduced the risk to U.S. forces. Instead, from the article we learn: “‘We don’t want to be the ones who break the glass on this new kind of warfare,’ said James Andrew Lewis, a senior fellow at the Center for Strategic and International Studies.” So essentially, the worry is that once the U.S. starts leveraging cyberwarfare, it invites others to do the same. The article, by Eric Schmitt and Thom Shanker, did a great job of explaining some of the trade-offs of cyberwarfare and how hard these sorts of attacks can be to execute. From a cybersecurity perspective, there are several things to consider.

First, the evidence now seems overwhelming that any country with sufficient resources (to say nothing of non-state actors) is actively researching techniques for cyberattacks against its targets of interest. Whether you consider Stuxnet (which this article suggests was launched with American-Israeli cooperation) or the recent Wired article on the drone-fleet infection that I previously referenced, it seems clear that major governments have paid operatives figuring out how to break into networks.

Second, I am concerned about what this means for responsible disclosure. Gone are the halcyon days, if indeed they ever existed, of a vulnerability being discovered by an intrepid researcher and responsibly disclosed to CERT. We’re far beyond debates about whether disclosing a zero-day on Bugtraq is ethical. It seems quite likely, though this is clearly conjecture on my part, that governments around the world are stockpiling zero-days against commercial software, embedded digital control systems, and everything in between.

The question is, from a policy perspective, how is such a vulnerability treated by the organization that discovers it? Clearly, weaponizing a vulnerability is an advantage to the discoverer, but if the vulnerability is in commercial software, how do you protect yourself without telling the vendor about the issue to get a fix (and thus losing the advantage of your discovery)? It would be like conventional warfare in which everyone used the same exact tanks. Imagine a mechanic discovering a hard-to-exploit weakness in the armor, but the only way to fix it is to get the parts supplier to offer the fix to everyone. What do you do: protect your own troops (and everyone else’s) by disclosing the weakness to the supplier, or hope the other side hasn’t discovered it yet and use it to your own advantage?

This spills over into all sorts of questions about who has the advantage in this new arms race and what role commercial security tools play in the defense against, or execution of, cyberattacks. I’m just beginning to think seriously about this space, and I expect the answers to these and a host of other questions won’t come quickly or definitively.