Hard to get much detail from this story, but if it is mostly true, we are in real trouble with cyber-security.
I’ve begun spending more time on “cybersecurity” (quotes used because I’m not sure the industry has a standard definition of that term yet). This article in Foreign Policy is pretty high level, but the parallels it draws between the coming age of bio weapons and cyber threats are pretty interesting.
I’ve been embarrassed to see that it has been over a year since my last post on this blog. So why the long delay? Quite honestly my work has been so internally focused within Cisco that there wouldn’t have been much I could say. But as I sit on a plane heading to Networkers (oops I mean Cisco Live!) it seems an appropriate time to reflect on what’s been going on in the land of IT and IT security. I’m spending a lot more time with customers now and I think there are a few conversations worth having on this blog.
When I returned to Cisco in the fall of 2008 I was asked to look into a trend that had troubled many folks: known at the time as “deperimeterization.” The Jericho Forum had coined the term and it struck fear into the hearts of many in the network security industry as it spelled a potential end to rich network security services and pointed towards a world of open and insecure networks interconnecting smart endpoints with security only at the application level.
My investigation into deperimeterization quickly expanded into a look at four interconnected trends: desktop virtualization, software-as-a-service, cloud computing, and IT consumerization. In the 18 months since my initial research these trends have gone from niche issues among a small group of strategists to mainstream concerns that need no explanation.
And what of deperimeterization? Cisco determined that the trend was real, but instead of pointing towards open and dumb networks it actually pointed to even more sophisticated networks to enable the interconnection of the myriad devices that need to connect and collaborate. What is these devices’ sole point of commonality? Not their OS; Microsoft’s hegemony on the endpoint will continue to wane as traditional desktop PCs give way to a variety of different computing devices focused on all sorts of vertical applications and use cases. This new crop of devices will run on different hardware and software, and not all of them will even have a human operator.
The only thing these devices have in common is that all will have a TCP/IP stack and will make use of a common network. This makes the network the natural architectural choice for the delivery of services across this diverse set of endpoints. Cisco has marshaled enormous resources behind this trend and has named it Borderless Networks. There is much more to say about all of this but I figured Cisco Live is as good a place as any to start the conversation.
Just a quick note that the second version of Cisco SAFE came out this week at the RSA show. You can get it here. If you thought my original was long at 66 pages, prepare for a shock: the new one clocks in at over 300! I’ve not yet read it, but I got an overview from some of the authors a couple weeks back and I liked what I heard. I guess I shouldn’t make too many jokes about its length; it is still less than half the length of my book on the same subject.
While security best practices don’t change quickly, we wrote the original SAFE back in 2000 and a lot has happened since then. Many of the foundation best practices remain very relevant but there are some new tools and techniques that can help protect networks against today’s threats.
John Markoff has an op-ed in the New York Times where he makes the case for starting over on the Internet in order to improve security. Lots of others are talking about his piece all over the blogosphere, and the discussion is clearly warranted. Unfortunately, Markoff’s arguments are flimsy and supported by vague statements from experts. One of those experts, Gene Spafford, has already repudiated the implied conclusions of the piece.
My biggest complaint is that in an article entitled “Do We Need a New Internet?,” the absence of quotes from anyone who would answer that question with a “No” is irresponsible, even for an op-ed. “Starting over” is a very naive perspective in the engineering of in-production systems. I’ve been in meetings throughout my career where someone in the room said, “If only we started over.” That is a tantalizing thought, but ultimately impossible in the real world.
To the surprise of no one who read the comments to my earlier post, it is now official that Nortel was the purchaser of Identity Engines’ IP assets. They updated the IDE homepage with a short message and contact info for more information. Given that they are inviting IDE customers to contact Nortel’s account teams, I’m hopeful that they’ll be providing some ongoing support options to existing IDE customers. Have any IDE customers contacted Nortel yet? What was the result?
Full Disclosure: I have never worked directly with, nor had the opportunity to review, Google’s security practices. My post applies equally to Google as it does to any large site aggregating private information in perpetuity.
Google’s security protections, though they are certainly extensive, can’t possibly stand the interminable test of time. As Oracle learned many years ago, nothing is unbreakable. Google themselves just fixed holes in the SAML implementation behind their single sign-on service. However, if you look at the core tenets of the way Google aggregates private consumer information, there exists the assumption that there won’t be such a breach. Take Gmail for example; users are told “you’ll never need to delete another message.” Turning on personalized search, as another example, causes Google to start saving your search and browsing histories. Google even recently ventured into the medical record business with their Google Health offering. On that homepage they proudly state, “We will never sell your data. You are in control. You choose what you want to share and what you want to keep private.”
This seems to be the basic thrust of privacy policies from Google and other websites. The data is yours, we won’t sell it, and if we mine it, we’ll keep you anonymous. As a consumer I think privacy policies are a great and necessary advance for the web, even though the vast majority of users probably ignore them. However, privacy policies have the assumption of a perfect system. They talk about what the company is obligated to consciously do or not do with your data. They often don’t say anything about what happens if their site is compromised. The reason, of course, is once compromised there’s nothing they can do.
Those of us who are older have our lifetime of data spread across outdated computer hard drives and software, sitting on backup CDs somewhere, or tucked away in an “old computer” directory on our current system. I’m not arguing that this data is any better protected but an adversary needs to single out an individual to get it or target systems running a particular OS or browser version. The online data, by contrast, might be more methodically protected but it is also more widely damaging if the protection fails.
So what can be done about it? From Google’s perspective they need to spend on security like the lives of their customers depend on it. As Cory Doctorow said, “Personal data is as hot as nuclear waste.” For consumers there are a few things you can do. However, I’m not sure avoiding all online services is one of them unless you like the mountains and don’t feel too attached to flush toilets. For starters:
- Choose companies that recognize the risk, recognize the trust you are placing in them, and most importantly are making the investment to back the talk up.
- Spread your data out among multiple services (e.g., email at Google, photos at Yahoo). This is the classic all-your-eggs-in-one-basket argument. While it is conceivable that one provider could have a more vigilant security operation than all others, it is far less risky to assume there will be a compromise of your data somewhere and therefore try to mitigate the extent of the exposure.
- Select the data you are willing to share online carefully. The ‘net community used to say, “Never put anything in an email that you would be embarrassed to see posted on the office bulletin board.” This belief was woefully short-sighted with regard to the extent that the Internet has permeated all aspects of our lives. Consider storing things online that you must have access to from a wide variety of Internet devices or in situations where an online service offering is vastly better than an offline counterpart.
I must admit that this guidance is thin in comparison to the extent of the possible breach. What other ideas do folks have to reduce your risk?
A recent Network World article highlights a lengthy debate between Joel Snyder and Richard Stiennon on the merits of NAC. It is a good read overall and ANA even makes a brief appearance thanks to a mention by Joel (Thanks Joel!). Here’s the relevant exchange:
Joel_Snyder: I’ll jump in here too. Sean Convery just wrote a paper on NAC. (He doesn’t want to call it NAC, he calls it Authenticated Network Architecture — ANA). Anyway, the point he makes is that you don’t need to have super fine-grained ACLs to get a huge reduction in risk.
Richard_Stiennon: *My* point would be that you NEED to get to fine-grained access control to secure your enterprise.
Joel_Snyder: Fine-grained is a spectrum. Aren’t you the guy who just advocated VLANs? I’m saying that if you have coarse control, even go/no-go, that’s a reduction in risk.
Richard_Stiennon: We agree.
Joel brings out one of the central novel points of the paper. Here’s the relevant text (from section 7.3, page 14):
Organization architects who appreciate the capabilities that ANA provides often adopt a design that has many user roles. Larger organizations might have hundreds or thousands of groups in their user directory, and the natural conclusion is to define a network-access profile for each group. This approach, however, is very problematic, primarily because of the complexity involved in managing the large number of roles. In addition, the goal of ANA is not to supplant the application security infrastructure you have already built but rather to augment it. Instead of defining hundreds of roles for the network, a smaller number—likely much fewer than a dozen—can provide a huge boost in the sophistication of your network infrastructure, while remaining completely manageable.
If you think of your network now as essentially a network with one role (full access), then the rationale for adding more roles is to define the high-level separation of rights that provides the most significant security improvement at the most operationally insignificant cost. The roles most organizations should consider follow, beginning with the roles that should be created first. It is not important to deploy all the roles at once. Each additional role adds another layer of delineation to the existing definitions already deployed.
Standard access – This role is the default role that every user and device is currently a part of, whether through explicit authentication or implicit network connectivity. As you roll out ANA, you will gradually assign each user to a more specific role, with the goal of minimizing the number of users and devices that are a part of the standard access role.
Guest access – This role is the most significant role you can add, because it enables any sponsored visitor to connect to your network and gain authenticated access to the Internet at large. By providing easy-to-use guest access, you minimize occurrences of users trying to connect to your private internal network where they might have full access. Most individuals are just trying to get their work done, and if you give them an easy way to get to the Internet (and the network of their home location) everyone is better off. Section 11 details the specific design considerations and policy trade-offs of guest access.
Contractor access – Adding this role means that you no longer have to grant every contractor full access to your network. You can send contractors through a contractor VPN portal where they have access only to the specific systems that they need to fulfill their contract. This setup gives your organization the option to treat contractors more like guests and less like employees. You can grant specific access for only the defined duration of the contract. This solution also facilitates remote vendor troubleshooting or technical support in which an external support engineer needs, for example, 30 minutes of access to one specific system on your network.
Privileged access – When you introduce the privileged-access role, you curtail the rights of the standard-access role so that it no longer offers access to areas of the network deemed extremely sensitive, such as HR, finance, and R&D areas. Only the users who require access to such resources are placed in the privileged-access role.
In summary, with only four roles, you can significantly reduce unauthorized access to sensitive data. In most organizations, approximately 50% of the user base is part of the standard-access role, 10% has guest access, 20% has contractor access, and 20% has privileged access. With these four roles in place, sensitive systems remain exposed to a mere 20% of the user community.
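As a rough illustration, the exposure arithmetic above can be sketched in a few lines. The percentages are the approximate figures from the text, not measurements from any particular deployment:

```python
# Illustrative sketch of the four-role ANA model described above.
# Shares are the rough figures from the text; only the privileged
# role retains access to sensitive systems (HR, finance, R&D).
roles = {
    "standard":   {"share": 0.50, "sensitive_access": False},
    "guest":      {"share": 0.10, "sensitive_access": False},
    "contractor": {"share": 0.20, "sensitive_access": False},
    "privileged": {"share": 0.20, "sensitive_access": True},
}

# Fraction of the user community that can still reach sensitive systems.
exposed = sum(r["share"] for r in roles.values() if r["sensitive_access"])
print(f"Sensitive systems exposed to {exposed:.0%} of users")  # prints "Sensitive systems exposed to 20% of users"
```

The point of the sketch is simply that the reduction comes from role partitioning itself, not from any fine-grained per-system ACLs.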
The thing that often gets lost in these sorts of debates is that the network and the application security are cooperating to reduce risk. The network reduces the size of the funnel of potential attackers and attacks, but the applications still provide their own application-specific, fine-grained access control. This isn’t an all-or-nothing proposition; defense-in-depth still applies.
I’m thrilled to announce that my company just launched the Authenticated Network Architecture (ANA). ANA is a vendor-neutral framework that leverages industry standards for the design of an identity-centric security system. ANA was conceived as the next logical step from my earlier work with the Cisco SAFE Blueprint and builds on my textbook “Network Security Architectures”. The ANA white paper goes into significant detail and breaks out deployment in five phases, each of which is incrementally beneficial and none of which requires a forklift upgrade (or any particular network vendor’s gear). I recommend you check out the overview first but feel free to download the complete white paper.
As anyone who’s familiar with my approach to white papers will know, the document does not pitch my company’s products at all, in fact they are not even mentioned. Also, one of the nice things about working at a small company is I can revise the document and publish an update fairly easily. I’d love feedback from the community on information you’d like to see added, any errors you found, or just general comments. Here’s the executive summary:
Network security has been evolving since its inception, sometimes slowly, sometimes in larger increments. As technology has shifted, best practices have slowly matured. What was a good idea two years ago is still likely a good idea today, with minor variations based on the evolving threats and business requirements. However, we are currently at an inflection point in the use of network-based security controls. Whereas previous designs focused almost exclusively on static policies, filter rules, and enforcement controls, a newer approach has emerged that promises much more dynamic options to address the increased mobility and diversity of today’s network users.
This approach, called the Authenticated Network Architecture (ANA), is based on the notion of authentication of all users on a network and the association of each user with a particular set of network entitlements. For example, guests are granted access only to the Internet, contractors only to discrete network resources, employees only to the broader network as a whole, and privileged employees only to isolated enclaves of highly secured resources. Most of the capabilities described in the architecture have been available in shipping network infrastructure for many years. However, while the architecture itself does not mandate much in the way of equipment migration, it does require organizations to think differently with regard to their overall security framework. The cooperation of security and network architects with their more operationally inclined counterparts in IT is critical to ensure that the designs contained in this document evolve with the growing capabilities of your infrastructure.
This document outlines the ANA approach as a whole and describes how to migrate existing enterprise security designs to this more dynamic approach. In particular, it discusses the best practices that are emerging in ANA as well as the specific business requirements that influence deployment decisions.
Zeus Kerravala at Yankee has a nice column at Network World on the opportunity around network, identity, and policy integration. He writes:
Ultimately, getting policy to reside in a central location is the key. Rather than many disparate systems with policy information, enterprises need to have a single policy store, intimately tied to the identity store, where the network infrastructure can apply and enforce policy on all traffic. Having policy management in the core, with control at the edge, is the only scalable model for pulling together network, identity, and policy.
It is great to see more folks in the industry coalescing around this idea. The only thing I might take issue with is his goal of a single policy store. While that might be the best-case design ideal, I think the real world will require a much more collaborative approach. This is part of the reason my company writes all its policies using XACML. We’re expecting the need to share policy over time.
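For readers unfamiliar with the pattern, here is a toy sketch of the central-decision, edge-enforcement model Zeus describes: a single policy store answers subject/resource/action questions, and enforcement points simply ask it. This is written in Python rather than actual XACML (which is an XML-based OASIS standard), and all role, resource, and function names are illustrative, not taken from any real product:

```python
# Toy policy decision point (PDP) in the spirit of XACML's
# subject/resource/action model. A central store holds the rules;
# enforcement points at the network edge call decide() and act on
# the answer. This is an illustration of the pattern, not real XACML.

POLICIES = [
    {"role": "privileged", "resource": "finance-db",   "action": "read", "decision": "Permit"},
    {"role": "contractor", "resource": "build-server", "action": "read", "decision": "Permit"},
]

def decide(role: str, resource: str, action: str) -> str:
    """Evaluate a request against the central policy store."""
    for rule in POLICIES:
        if (rule["role"], rule["resource"], rule["action"]) == (role, resource, action):
            return rule["decision"]
    # Default-deny, analogous to a deny-biased combining algorithm.
    return "Deny"

print(decide("privileged", "finance-db", "read"))  # prints "Permit"
print(decide("guest", "finance-db", "read"))       # prints "Deny"
```

Because the rules live in one (or a few federated) stores rather than in each enforcement device, sharing policy between systems becomes a data-exchange problem, which is exactly what a standard format like XACML is for.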