Security Expert? No Such Thing

expert
noun
1. a person who has a comprehensive and authoritative knowledge of or skill in a particular area.

authoritative
adjective
1. able to be trusted as being accurate or true; reliable.

Let me start by throwing myself under the bus. The tagline of my book “Network Security Architectures” from 2004 is “Expert guidance on designing secure networks.” Lately it seems security experts are popping up everywhere. The more I think about it, though, the more I think we, as a community, need to put down that title until we prove that the technology we build and the systems we implement can predictably and substantially address the problem.

By almost any empirical measure, the cyber security best practices and technology we’ve built over the last 20 years are not meeting even this basic standard. Things have gotten worse ever since Robert Morris’s fateful original worm started all the fun in 1988. It is comical that we often open presentations with a gloom-and-doom slide showing how bad things have gotten. Does anyone really not know this? Even my mom sees the headlines and can engage with me about my work in a way she never could before. Information is Beautiful has a wonderfully terrible-to-behold infographic.

Plenty of large, successful companies are listed there, companies that can afford the latest tech and the smartest analysts.

And yes, I know that all security measures can be circumvented… safes are rated on the number of hours they can withstand before being breached… I don’t care. What we’re dealing with is a systemic, decades-long inability to stop the bad guy in any consistent way.

I also know that there is probably an organization out there somewhere that has yet to be breached (or to discover a breach), and that, too, misses the point. Such an organization, employing the best and the brightest, deploying the latest technology, and helmed by the most respected “expert” in the business, would still in the end fall back on something like, “Now let’s hope for the best.” Furthermore, such an organization would need to employ a significant team to respond operationally to moment-by-moment attacks lest it become tomorrow’s breach.

Finally, there’s also the great excuse our industry trots out regularly: “This is really, really hard.” And no question, it is. I suspect it is several orders of magnitude harder than protecting your home. The reason most people in safe neighborhoods don’t live in constant fear of their belongings being taken from their homes isn’t because their door locks or windows are especially secure. (They can surely be bypassed in much less time than it would take to bypass a modern network firewall.) It is because of two reasons unrelated to perimeter home security.

First, we are counting on modern law enforcement to deter most criminals. Second, we all bank on the probability that if a criminal decides to take the risk, our home likely won’t be the one targeted. Cyber security enjoys little in the way of law enforcement deterrence, nor does it impose a particularly high cost on the adversary per attempt. In fact, if home invasion could be attempted wholesale with the same impunity that those who build botnets enjoy, we’d all need armed guards.

And yet, despite all these extenuating circumstances, I’m not willing to let all of us off the hook. I’m frankly embarrassed, and certainly frustrated, by the progress made by the industry I’ve worked in for 20 years. There’s a lot more to say about why this may be happening and what we can do about it, but that’s for another essay.

Consider, by comparison, another industry from the physical world. (And yes, I know security folks love physical analogies; sorry to be so predictable.)

You can call Joseph Strauss and Charles Ellis experts. They designed the Golden Gate Bridge. Their expertise stems not only from the finished product but also from the fact that others can follow similar principles and create a bridge with similar safety, reliability, and beauty. Also, unlike the unbreached organization from our earlier example, the bridge requires constant maintenance, but not the sort whose absence might result in total structural failure if the maintenance team calls in sick for a few days.

You might say we, as a modern civilization, have figured this bridge-building thing out. You or I couldn’t build a bridge, of course, but if we needed one, I’m certain we could find the right experts to do it for us, and we’d be confident in the checks and balances of safety inspections and the like.

Not so for securing IT systems.

There are clearly experts working within cyber security, but each is focused on one specific discipline. Cryptography is the most obvious example. Cryptographers aren’t yet at the level of bridge designers in terms of stability and confidence, but they are close. Standards are established, there is rigorous peer review, and the fastest computers in the world routinely try to cheat the mathematics at the foundation of everyone’s previously vetted work.

However, the moment a person enters the equation by typing a password to unlock his magnificently encrypted data, the perils of security as a system rear their heads. Key loggers, weak passwords, and social engineering can subvert the mightiest algorithm. It is the same with all other aspects of security. Authentication, access control, alerting, detection… all fail primarily when they begin to depend on each other as a system. Back to our previous example: imagine the bridge covered with secret locations which, if hit just right with a hammer, would initiate a collapse. Joe and Chuck would be monsters, not marvels.

So the next time someone is introduced as a security expert, feel free to narrow your eyes a bit. And if you consider yourself among the most knowledgeable folks in our fair corner of the cyber landscape, may I humbly suggest picking a new honorific: security practitioner, perhaps. The fact that we’re all still practicing in this industry gives the title a nice honesty, don’t you think?

As for my book’s tagline, I’ll just beg forgiveness. It was 12 years ago, and I’m smarter now, at least smart enough to know that I am dumber than I thought.

From Deperimeterization to Borderless Networks

I’ve been embarrassed to see that it has been over a year since my last post on this blog. So why the long delay? Quite honestly, my work has been so internally focused within Cisco that there wouldn’t have been much I could say. But as I sit on a plane heading to Networkers (oops, I mean Cisco Live!), it seems an appropriate time to reflect on what’s been going on in the land of IT and IT security. I’m spending a lot more time with customers now, and I think there are a few conversations worth having on this blog.

When I returned to Cisco in the fall of 2008, I was asked to look into a trend that had troubled many folks, known at the time as “deperimeterization.” The Jericho Forum had coined the term, and it struck fear into the hearts of many in the network security industry: it spelled a potential end to rich network security services and pointed towards a world of open, insecure networks interconnecting smart endpoints with security only at the application level.

My investigation into deperimeterization quickly expanded into a look at four interconnected trends: desktop virtualization, software-as-a-service, cloud computing, and IT consumerization. In the 18 months since my initial research, these trends have gone from niche issues among a small group of strategists to mainstream concerns that need no explanation.

And what of deperimeterization? Cisco determined that the trend was real, but instead of pointing towards open, dumb networks, it actually pointed towards even more sophisticated networks to enable the interconnection of the myriad devices that need to connect and collaborate. What is these devices’ sole point of commonality? Not their OS; Microsoft’s hegemony on the endpoint will continue to wane as traditional desktop PCs give way to a variety of computing devices focused on all sorts of vertical applications and use cases. This new crop of devices will run different hardware and software, and not all of them will even have a human operator.

The only thing these devices have in common is that all of them will have a TCP/IP stack and will make use of a common network. That makes the network the natural architectural choice for delivering services across this diverse set of endpoints. Cisco has marshaled enormous resources behind this trend and has named it Borderless Networks. There is much more to say about all of this, but I figured Cisco Live is as good a place as any to start the conversation.

Cisco SAFE 2.0

Just a quick note that the second version of Cisco SAFE came out this week at the RSA show. You can get it here. If you thought my original was long at 66 pages, prepare for a shock: the new one clocks in at over 300! I’ve not yet read it, but I got an overview from some of the authors a couple of weeks back, and I liked what I heard. I guess I shouldn’t make too many jokes about its length; it is still less than half the length of my book on the same subject.

While security best practices don’t change quickly, we wrote the original SAFE back in 2000, and a lot has happened since then. Many of the foundational best practices remain very relevant, but there are some new tools and techniques that can help protect networks against today’s threats.

John Markoff’s “Do We Need a New Internet?”

John Markoff has an op-ed in the New York Times where he makes the case for starting over on the Internet in order to improve security. Lots of others are talking about his piece all over the blogosphere, and the discussion is clearly warranted. Markoff’s arguments, however, are flimsy and supported only by vague statements from experts. One of those experts, Gene Spafford, has already repudiated the implied conclusions of the piece.

My biggest complaint is that in an article entitled “Do We Need a New Internet?,” the absence of quotes from anyone who would answer that question with “No” is irresponsible, even for an op-ed. “Starting over” is a very naive perspective in the engineering of in-production systems. I’ve been in meetings throughout my career where someone in the room said, “If only we could start over.” It is a tantalizing thought, but ultimately impossible in the real world.

Yes, it was Nortel.

To the surprise of no one who read the comments on my earlier post, it is now official that Nortel was the purchaser of Identity Engines’ IP assets. The IDE homepage has been updated with a short message and contact info for more information. Given that IDE customers are being invited to contact Nortel’s account teams, I’m hopeful that Nortel will provide some ongoing support options for existing IDE customers. Have any IDE customers contacted Nortel yet? What was the result?

Google’s Security is not Unbreakable

Full Disclosure: I have never worked directly with, nor had the opportunity to review, Google’s security practices. This post applies to Google just as it does to any large site aggregating private information in perpetuity.

Google’s security protections, though certainly extensive, can’t possibly stand the test of time indefinitely. As Oracle learned many years ago, nothing is unbreakable. Google themselves just fixed holes in the SAML implementation behind their single sign-on service. Yet the core tenets of the way Google aggregates private consumer information carry a built-in assumption that there won’t be such a breach. Take Gmail, for example: users are told “you’ll never need to delete another message.” Turning on personalized search, as another example, causes Google to start saving your search and browsing histories. Google even recently ventured into the medical record business with their Google Health offering. On that homepage they proudly state, “We will never sell your data. You are in control. You choose what you want to share and what you want to keep private.”

This seems to be the basic thrust of privacy policies from Google and other websites: the data is yours, we won’t sell it, and if we mine it, we’ll keep you anonymous. As a consumer, I think privacy policies are a great and necessary advance for the web, even though the vast majority of users probably ignore them. However, privacy policies assume a perfect system. They describe what the company is obligated to consciously do or not do with your data. They often say nothing about what happens if the site is compromised. The reason, of course, is that once compromised, there’s nothing the company can do.

This intersection of fallible security with a lifetime of private data is perhaps the most troubling part. There is a good possibility that my children will never have a classic mail account with local mail storage on their computer. They may never need to store photos on their own machine, preferring instead to use online services (Google has one already, of course). They’ll likely write their documents, store their financial and medical data, and build and maintain contact with friends, all online. Google wants to be the provider of those services to my kids, and if they aren’t, someone else will be. What is striking is the permanence of this data. Facebook, for example, doesn’t delete your data when you leave the service, preferring instead to simply “deactivate” your profile. In short, it isn’t unreasonable to suggest that some children being born today will give Google or someone else the keys to all the private digital data they will ever generate in their entire lives. What matters isn’t what Google will or won’t do with that data, as many are arguing, but what the future infamous hacker will do; Google’s privacy policy doesn’t apply to her.

Those of us who are older have our lifetime of data spread across outdated computer hard drives and software, sitting on backup CDs somewhere, or tucked away in an “old computer” directory on our current system. I’m not arguing that this data is any better protected, but an adversary needs to single out an individual to get it, or to target systems running a particular OS or browser version. The online data, by contrast, might be more methodically protected, but it is also more widely damaging if the protection fails.

So what can be done about it? From Google’s perspective, they need to spend on security like the lives of their customers depend on it. As Cory Doctorow said, “Personal data is as hot as nuclear waste.” For consumers, there are a few things you can do, though I’m not sure avoiding all online services is one of them, unless you like the mountains and don’t feel too attached to flush toilets. For starters:

  1. Choose companies that recognize the risk, recognize the trust you are placing in them, and most importantly are making the investment to back the talk up.
  2. Spread your data out among multiple services (e.g., email at Google, photos at Yahoo). This is the classic all-your-eggs-in-one-basket argument. While it is conceivable that one provider could have a more vigilant security operation than all others, it is far less risky to assume there will be a compromise of your data somewhere and to try to limit the extent of the exposure.
  3. Select the data you are willing to share online carefully. The ’net community used to say, “Never put anything in an email that you would be embarrassed to see posted on the office bulletin board.” That advice proved woefully short-sighted given the extent to which the Internet has permeated all aspects of our lives. Consider storing online only the things you must access from a wide variety of Internet devices, or things for which an online service offering is vastly better than its offline counterpart.

I must admit this guidance is thin compared to the extent of the possible breach. What other ideas do folks have for reducing the risk?


Snyder and Stiennon Debate NAC; ANA Makes Guest Appearance

A recent Network World article features a lengthy debate between Joel Snyder and Richard Stiennon on the merits of NAC. It is a good read overall, and ANA even makes a brief appearance thanks to a mention by Joel (thanks, Joel!). Here’s the relevant exchange:

Joel_Snyder: I’ll jump in here too. Sean Convery just wrote a paper on NAC. (He doesn’t want to call it NAC, he calls it Authenticated Network Architecture — ANA). Anyway, the point he makes is that you don’t need to have super fine-grained ACLs to get a huge reduction in risk.

Richard_Stiennon: *My* point would be that you NEED to get to fine-grained access control to secure your enterprise.

Joel_Snyder: Fine-grained is a spectrum. Aren’t you the guy who just advocated VLANs? I’m saying that if you have coarse control, even go/no-go, that’s a reduction in risk.

Richard_Stiennon: We agree.

Joel brings out one of the central novel points of the paper. Here’s the relevant text (from section 7.3, page 14):

Architects who appreciate the capabilities that ANA provides often adopt a design with many user roles. Larger organizations might have hundreds or thousands of groups in their user directory, and the natural conclusion is to define a network-access profile for each group. This approach, however, is very problematic, primarily because of the complexity involved in managing so many roles. In addition, the goal of ANA is not to supplant the application security infrastructure you have already built but rather to augment it. Instead of defining hundreds of roles for the network, a smaller number (likely much fewer than a dozen) can provide a huge boost in the sophistication of your network infrastructure while remaining completely manageable.

If you think of your network now as essentially a network with one role (full access), then the rationale for adding more roles is to define the high-level separation of rights that provides the most significant security improvement at the most operationally insignificant cost. The roles most organizations should consider follow, beginning with the roles that should be created first. It is not important to deploy all the roles at once. Each additional role adds another layer of delineation to the existing definitions already deployed.

Standard access – This role is the default role that every user and device is currently a part of, whether through explicit authentication or implicit network connectivity. As you roll out ANA, you will gradually assign each user to a more specific role, with the goal of minimizing the number of users and devices that are a part of the standard access role.

Guest access – This role is the most significant one you can add, because it enables any sponsored visitor to connect to your network and gain authenticated access to the Internet at large. By providing easy-to-use guest access, you minimize occurrences of users trying to connect to your private internal network, where they might have full access. Most individuals are just trying to get their work done, and if you give them an easy way to get to the Internet (and to the network of their home location), everyone is better off. Section 11 details the specific design considerations and policy trade-offs of guest access.

Contractor access – Adding this role means that you no longer have to grant every contractor full access to your network. You can send contractors through a contractor VPN portal where they have access only to the specific systems that they need to fulfill their contract. This setup gives your organization the option to treat contractors more like guests and less like employees. You can grant specific access for only the defined duration of the contract. This solution also facilitates remote vendor troubleshooting or technical support in which an external support engineer needs, for example, 30 minutes of access to one specific system on your network.

Privileged access – When you introduce the privileged-access role, you curtail the rights of the standard-access role so that it no longer offers access to areas of the network deemed extremely sensitive, such as HR, finance, and R&D areas. Only the users who require access to such resources are placed in the privileged-access role.

In summary, with only four roles, you can significantly reduce unauthorized access to sensitive data. In most organizations, approximately 50% of the user base is part of the standard-access role, 10% has guest access, 20% has contractor access, and 20% has privileged access. With these four roles in place, sensitive systems remain exposed to a mere 20% of the user community.

The thing that often gets lost in these debates is that the network and the applications cooperate to reduce risk. The network shrinks the funnel of potential attackers and attacks, while the applications still provide their own, application-specific, fine-grained access control. This isn’t an all-or-nothing proposition; defense-in-depth still applies.
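
To make the funnel concrete, here is a minimal sketch in Python of the go/no-go idea from the exchange above. The four role names come from the paper, but the entitlement labels and the lookup are hypothetical placeholders; a real deployment would enforce them as VLAN or ACL assignments on the network gear, not in application code.

    # A sketch of the four coarse ANA roles described above. Role names are
    # from the paper; the entitlement labels are invented placeholders for
    # what a deployment would enforce via VLAN/ACL assignment.
    ROLE_ENTITLEMENTS = {
        "standard":   {"internal-general", "internet"},
        "guest":      {"internet"},
        "contractor": {"contract-systems", "internet"},
        "privileged": {"internal-general", "sensitive-enclave", "internet"},
    }

    def network_permits(role: str, destination: str) -> bool:
        """Coarse, go/no-go reachability check made by the network before
        any application-level, fine-grained authorization ever runs."""
        return destination in ROLE_ENTITLEMENTS.get(role, set())

    # The funnel effect: only the privileged role can even reach the
    # sensitive enclave; every other role is stopped at the network.
    assert network_permits("privileged", "sensitive-enclave")
    assert not network_permits("standard", "sensitive-enclave")
    assert not network_permits("guest", "internal-general")

Even a mapping this crude illustrates Joel’s point: coarse control at the network layer reduces risk on its own, while the applications behind it continue to apply their fine-grained checks.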


Introducing the Authenticated Network Architecture (ANA)

I’m thrilled to announce that my company just launched the Authenticated Network Architecture (ANA). ANA is a vendor-neutral framework that leverages industry standards for the design of an identity-centric security system. ANA was conceived as the next logical step from my earlier work on the Cisco SAFE Blueprint and builds on my textbook “Network Security Architectures.” The ANA white paper goes into significant detail and breaks deployment into five phases, each of which is incrementally beneficial and none of which requires a forklift upgrade (or any particular network vendor’s gear). I recommend you check out the overview first, but feel free to download the complete white paper.

As anyone who’s familiar with my approach to white papers will know, the document does not pitch my company’s products at all; in fact, they are not even mentioned. Also, one of the nice things about working at a small company is that I can revise the document and publish an update fairly easily. I’d love feedback from the community on information you’d like to see added, any errors you find, or just general comments. Here’s the executive summary:

Network security has been evolving since its inception, sometimes slowly, sometimes in larger increments. As technology has shifted, best practices have slowly matured. What was a good idea two years ago is still likely a good idea today, with minor variations based on the evolving threats and business requirements. However, we are currently at an inflection point in the use of network-based security controls. Whereas previous designs focused almost exclusively on static policies, filter rules, and enforcement controls, a newer approach has emerged that promises much more dynamic options to address the increased mobility and diversity of today’s network users.

This approach, called the Authenticated Network Architecture (ANA), is based on the notion of authentication of all users on a network and the association of each user with a particular set of network entitlements. For example, guests are granted access only to the Internet, contractors only to discrete network resources, employees only to the broader network as a whole, and privileged employees only to isolated enclaves of highly secured resources. Most of the capabilities described in the architecture have been available in shipping network infrastructure for many years. However, while the architecture itself does not mandate much in the way of equipment migration, it does require organizations to think differently with regard to their overall security framework. The cooperation of security and network architects with their more operationally inclined counterparts in IT is critical to ensure that the designs contained in this document evolve with the growing capabilities of your infrastructure.

This document outlines the ANA approach as a whole and describes how to migrate existing enterprise security designs to this more dynamic model. In particular, it discusses the best practices that are emerging in ANA as well as the specific business requirements that influence deployment decisions.
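
To give a feel for the association the summary describes, here is a hedged, illustrative sketch in Python. All directory groups, role names, and entitlement labels below are invented for the example (the white paper is vendor-neutral and prescribes no particular mechanism); the point is the compression of many directory groups into a handful of network roles, each carrying a single entitlement.

    # Illustrative only: associating an authenticated user with a network
    # entitlement, per the ANA approach. Group, role, and entitlement names
    # are invented examples, not taken from the white paper.
    GROUP_TO_ROLE = {
        "visitors":     "guest",
        "vendor-staff": "contractor",
        "employees":    "standard",
        "finance":      "privileged",  # one of possibly several sensitive groups
    }

    ROLE_TO_ENTITLEMENT = {
        "guest":      "internet-only",
        "contractor": "contracted-systems-only",
        "standard":   "general-internal-access",
        "privileged": "general-internal-plus-enclaves",
    }

    def entitle(user: str, directory_group: str) -> tuple[str, str]:
        """Assumes authentication already succeeded upstream; maps the user's
        directory group to a role, and the role to a network entitlement."""
        role = GROUP_TO_ROLE.get(directory_group, "standard")
        return role, ROLE_TO_ENTITLEMENT[role]

    print(entitle("alice", "finance"))   # ('privileged', 'general-internal-plus-enclaves')
    print(entitle("bob", "visitors"))    # ('guest', 'internet-only')

However many groups the directory holds, the network only ever has to enforce a handful of entitlements, which is what keeps the design operationally manageable as it rolls out phase by phase.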
