My backup was from a few weeks ago so any of the more recent comments are gone but everything else seems to be good. Assuming this gets from Ecto where I’m writing it all the way to the Feedburner feed, I think we’re back to normal. In case anyone cares, I’m using Bluehost now; quite pleased so far.
John Markoff has an op-ed in the New York Times where he makes the case for starting over on the Internet in order to improve security. Lots of others are talking about his piece all over the blogosphere, and the discussion is clearly warranted. Unfortunately, Markoff’s arguments are flimsy and supported by vague statements from experts. One of those experts, Gene Spafford, has already repudiated the implied conclusions of the piece.
My biggest complaint is that in an article entitled “Do We Need a New Internet?,” the absence of quotes from anyone who would answer “No” is irresponsible, even for an op-ed. “Starting over” is a very naive perspective when engineering in-production systems. I’ve been in meetings throughout my career where someone in the room said, “If only we could start over.” It’s a tantalizing thought, but ultimately impossible in the real world.
To the surprise of no one who read the comments to my earlier post, it is now official that Nortel was the purchaser of Identity Engines’ IP assets. They updated the IDE homepage with a short message and contact info for more information. Given that they are inviting IDE customers to contact Nortel’s account teams, I’m hopeful that they’ll be providing some ongoing support options to existing IDE customers. Have any IDE customers contacted Nortel yet? What was the result?
Just a quick 802.1X update that some folks may have missed. There is a new IOS release for the 6500: 12.2(33)SXI (gotta love our naming scheme). It provides some very nice improvements for wired 802.1X rollouts. Network World wrote up the basics and even provides some config examples; take a look. When this hits the other Cisco switch platforms I’ll be sure to provide another update.
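For anyone who hasn’t touched wired 802.1X before, the basic switch-side setup is pretty small. Here’s a rough sketch only: exact commands vary by IOS release (12.2(33)SXI introduces a newer `authentication` command family), and the RADIUS server address and shared secret below are placeholders.

```
! Illustrative sketch of a basic wired 802.1X configuration on Cisco IOS.
! Commands vary by release; server IP and key are placeholders.
aaa new-model
aaa authentication dot1x default group radius
dot1x system-auth-control
radius-server host 10.0.0.10 auth-port 1812 acct-port 1813 key MySharedSecret
!
interface GigabitEthernet1/0/1
 switchport mode access
 ! Older syntax shown; newer releases use "authentication port-control auto"
 dot1x port-control auto
```

From there, the supplicant on the endpoint and the RADIUS server policy do the rest; the switch is just the enforcement point.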
I’ve been back at Cisco for nearly two months and I’ve been doing my best to adapt from the startup culture I lived in for a few years to the Cisco culture I was part of for seven years. The good news is Cisco seems more agile than when I left; there is more collaboration and I see a lot of good things happening. It is great to meet so many new people and also work with many of my former colleagues. So while the first all-day meeting was certainly a shock, I’m not feeling disoriented.
As I mentioned, I’m not working strictly in Identity but will certainly be involved given my background. Lately I’ve been spending a lot of time thinking about some of the meta-trends in IT such as software-as-a-service (SaaS), cloud computing, virtualization, and the like. The impact these trends have on the network, and the value that can be derived from the network because of them, is a very interesting space.
I did get a chance to get a dump from the ACS folks on their shipping 5.0 product. It looks and feels like a complete rewrite of ACS because it is. The UI is cleaner, the configuration steps more obvious, and the policy sophistication is far beyond ACS 4.X. There is plenty of work still to do but the gap between what we had at IDE and what ACS can do now is far narrower. ACS also now does things that we didn’t do at IDE.
I am interested in talking to customers about their plans around SaaS, cloud, and virtualization (particularly desktop). If you are responsible for IT at your organization and can spare some time to talk with me live, I’d love to chat. My new email is my first name at cisco dot com.
Sorry for the long delay between posts. I was hoping by now there would be something public that could be discussed regarding Identity Engines’ fate but alas we don’t seem to be there yet. I’m sure I’ve signed all kinds of confidentiality agreements so I’m not going to be the one to spill the beans. I sincerely apologize to our customers. In the final days of the company–like every other day of the company’s history–you were our top priority. I am hopeful that the arrangement, once announced, will give you all a path forward.
Personally, I start a new job at Cisco soon. My role will broaden out a bit from security and identity but I expect to keep my fingers in both pies for the foreseeable future–I’m excited to get started. I don’t know what this means for my blog though. I need to give that some thought and discuss it with my new group.
Full Disclosure: I have never worked directly with, nor had the opportunity to review, Google’s security practices. My post applies as much to Google as it does to any large site aggregating private information in perpetuity.
Google’s security protections, though they are certainly extensive, can’t possibly stand the test of time indefinitely. As Oracle learned many years ago, nothing is unbreakable. Google themselves just fixed holes in the SAML implementation behind their single sign-on service. Yet the way Google aggregates private consumer information rests on the assumption that such a breach will never happen. Take Gmail for example; users are told “you’ll never need to delete another message.” Turning on personalized search, as another example, causes Google to start saving your search and browsing histories. Google even recently ventured into the medical record business with their Google Health offering. On that homepage they proudly state, “We will never sell your data. You are in control. You choose what you want to share and what you want to keep private.”
This seems to be the basic thrust of privacy policies from Google and other websites: the data is yours, we won’t sell it, and if we mine it, we’ll keep you anonymous. As a consumer I think privacy policies are a great and necessary advance for the web, even though the vast majority of users probably ignore them. However, privacy policies assume a perfect system. They describe what the company is obligated to consciously do or not do with your data. They often say nothing about what happens if the site is compromised. The reason, of course, is that once compromised, there’s nothing they can do.
Those of us who are older have our lifetime of data spread across outdated computer hard drives and software, sitting on backup CDs somewhere, or tucked away in an “old computer” directory on our current system. I’m not arguing that this data is any better protected but an adversary needs to single out an individual to get it or target systems running a particular OS or browser version. The online data, by contrast, might be more methodically protected but it is also more widely damaging if the protection fails.
So what can be done about it? From Google’s perspective they need to spend on security like the lives of their customers depend on it. As Cory Doctorow said, “Personal data is as hot as nuclear waste.” For consumers there are a few things you can do. However, I’m not sure avoiding all online services is one of them unless you like the mountains and don’t feel too attached to flush toilets. For starters:
- Choose companies that recognize the risk, recognize the trust you are placing in them, and most importantly are making the investment to back the talk up.
- Spread your data out among multiple services (e.g., email at Google, photos at Yahoo). This is the classic all-your-eggs-in-one-basket argument. While it is conceivable that one provider could have a more vigilant security operation than all others, it is far less risky to assume there will be a compromise of your data somewhere and therefore try to mitigate the extent of the exposure.
- Select the data you are willing to share online carefully. The ’net community used to say, “Never put anything in an email that you would be embarrassed to see posted on the office bulletin board.” That advice proved woefully short-sighted given the extent to which the Internet has permeated all aspects of our lives. Consider storing online only the things you must access from a wide variety of Internet devices, or where an online service offering is vastly better than its offline counterpart.
I must admit that this guidance is thin in comparison to the extent of the possible breach. What other ideas do folks have to reduce your risk?
Having a Google news alert on “802.1X” sometimes gives you some amusing stories. It seems the Turkish Ministry of Education is rolling out a new secure LAN using 802.1X and VLANs. The article goes on to say, “This deployment is considered to be one of the largest 802.1x application deployments in the Turkish market.” I found this interesting because 802.1X was such a focus of what they discussed. Like I’m seeing in North America, the government demands for secure audit and segmentation appear to be consistent in at least this portion of Southeastern Europe. Based on the discussions I’ve already had in Asia and the UK, 802.1X may be serving a global need.
Jon Oltsik on identity-based networking. As usual, he gets it right. No cringing from the long-time Cisco folks on the DEN reference later in the article. DEN was the right idea, just introduced way too early to survive.
Network access control (NAC) has certainly had a boisterous lifetime.
Cisco Systems first coined this term in 2005 when introducing an initiative to ensure that only “healthy” endpoints could access the network. In the intervening years, the NAC concept gained popularity, drove tremendous VC investment, and most recently came crashing down in a micro boom-to-bust cycle.
So what’s the future for NAC? Out of the ashes, NAC is slowly changing and moving in the right direction toward identity-based networking.
A recent Network World article highlights a lengthy debate between Joel Snyder and Richard Stiennon on the merits of NAC. It is a good read overall and ANA even makes a brief appearance thanks to a mention by Joel (Thanks Joel!). Here’s the relevant exchange:
Joel_Snyder: I’ll jump in here too. Sean Convery just wrote a paper on NAC. (He doesn’t want to call it NAC, he calls it Authenticated Network Architecture — ANA). Anyway, the point he makes is that you don’t need to have super fine-grained ACLs to get a huge reduction in risk.
Richard_Stiennon: *My* point would be that you NEED to get to fine-grained access control to secure your enterprise.
Joel_Snyder: Fine-grained is a spectrum. Aren’t you the guy who just advocated VLANs? I’m saying that if you have coarse control, even go/no-go, that’s a reduction in risk.
Richard_Stiennon: We agree.
Joel brings out one of the central novel points of the paper. Here’s the relevant text (from section 7.3, page 14):
Organization architects that appreciate the capabilities that ANA provides often adopt a design that has many user roles. Larger organizations might have hundreds or thousands of groups in their user directory, and the natural conclusion is to define a network-access profile for each group. This approach, however, is very problematic, primarily because of the complexity involved in managing the large number of roles. In addition, the goal of ANA is not to supplant the application security infrastructure you have already built but rather to augment it. Instead of defining hundreds of roles for the network, a smaller number—likely much fewer than a dozen—can provide a huge boost in the sophistication of your network infrastructure, while remaining completely manageable.
If you think of your network now as essentially a network with one role (full access), then the rationale for adding more roles is to define the high-level separation of rights that provides the most significant security improvement at the most operationally insignificant cost. The roles most organizations should consider follow, beginning with the roles that should be created first. It is not important to deploy all the roles at once. Each additional role adds another layer of delineation to the existing definitions already deployed.
Standard access – This role is the default role that every user and device is currently a part of, whether through explicit authentication or implicit network connectivity. As you roll out ANA, you will gradually assign each user to a more specific role, with the goal of minimizing the number of users and devices that are a part of the standard access role.
Guest access – This role is the most significant role you can add, because it enables any sponsored visitor to connect to your network and gain authenticated access to the Internet at large. By providing easy-to-use guest access, you minimize occurrences of users trying to connect to your private internal network where they might have full access. Most individuals are just trying to get their work done, and if you give them an easy way to get to the Internet (and the network of their home location) everyone is better off. Section 11 details the specific design considerations and policy trade-offs of guest access.
Contractor access – Adding this role means that you no longer have to grant every contractor full access to your network. You can send contractors through a contractor VPN portal where they have access only to the specific systems that they need to fulfill their contract. This setup gives your organization the option to treat contractors more like guests and less like employees. You can grant specific access for only the defined duration of the contract. This solution also facilitates remote vendor troubleshooting or technical support in which an external support engineer needs, for example, 30 minutes of access to one specific system on your network.
Privileged access – When you introduce the privileged-access role, you curtail the rights of the standard-access role so that it no longer offers access to areas of the network deemed extremely sensitive, such as HR, finance, and R&D areas. Only the users who require access to such resources are placed in the privileged-access role.
In summary, with only four roles, you can significantly reduce unauthorized access to sensitive data. In most organizations, approximately 50% of the user base is part of the standard-access role, 10% has guest access, 20% has contractor access, and 20% has privileged access. With these four roles in place, sensitive systems remain exposed to a mere 20% of the user community.
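The exposure arithmetic in that last paragraph is easy to sanity-check. Here’s a quick sketch; the percentages are the paper’s illustrative figures for a typical organization, not measured data:

```python
# Sketch of the four-role exposure arithmetic, using the paper's
# illustrative user-population percentages (assumptions, not measurements).
role_share = {
    "standard": 0.50,    # default role; no access to sensitive areas
    "guest": 0.10,       # authenticated Internet-only access
    "contractor": 0.20,  # scoped access via a contractor portal
    "privileged": 0.20,  # the only role reaching HR/finance/R&D systems
}

# Before role separation: everyone sits in one full-access role,
# so sensitive systems are exposed to the entire user base.
exposure_before = sum(role_share.values())

# After: only the privileged role can reach sensitive systems.
exposure_after = role_share["privileged"]

print(f"Sensitive-system exposure: {exposure_before:.0%} -> {exposure_after:.0%}")
# -> Sensitive-system exposure: 100% -> 20%
```

The point isn’t the precision of the numbers; it’s that a handful of coarse roles shrinks the population that can even attempt to reach sensitive systems by a large factor.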
The thing that often gets lost in these sorts of debates is that the network and the application security are cooperating to reduce risk. The network reduces the size of the funnel of potential attackers and attacks, but the applications still provide their own application-specific, fine-grained access control. This isn’t an all-or-nothing proposition; defense-in-depth still applies.