The Benefits of Building Software on an Enterprise Platform

It has been seven months since I started at ServiceNow and I think I’m just beginning to understand the advantage of delivering enterprise applications on a mature platform.

The advantages of building on a consumer platform are well understood and primarily about access to potential customers. If you are building an application, you naturally want to build it on the platform that provides access to the most customers. This is one of the reasons the Macintosh has gained prominence among consumers and enterprises over the last 15 years: as the computing platform slowly shifted from Windows apps to web apps, there simply wasn’t as much need to run Windows. This allowed the Macintosh’s superior user experience and reliability to drive buying behavior. (Of course, without Microsoft writing Office for the Mac, none of that would have happened, but I digress.)

In contrast, enterprise applications built on an enterprise platform get you speed and robustness. Speed and robustness are critical for emerging enterprise products in a competitive market and they are often interrelated. Let’s get into why.

While working at several startups in product development, I realized that the features that support the reason a venture capitalist wrote you a check and the features that ensure a successful deployment are often completely unrelated. This is because the foundational capabilities enterprises depend on to run a new piece of critical technology are the same capabilities they depend on to run everything else. Scalability, APIs, security, reporting, customization, and certifications are all expected attributes of any piece of software used by Global 2000 organizations. But while developing such capabilities is not novel, it is very time consuming.

So inevitably, you try to balance. This often means focusing as much as you can on the innovation the company was funded to create, while trying to ensure you meet a minimum bar in the requirements common to all enterprise software. Early adopters are often willing to sacrifice some of these foundations but the majority of the market is not. As you transition from these early customers to more established ones, the technology you built has likely morphed considerably to the point that better addressing some of these enterprise capabilities becomes a costly retrofit, robbing an organization of feature velocity often right at the time when your core business is under attack from new competition.

Planning for this eventuality from an architectural standpoint while maintaining growth-fueling innovation is arguably the most important skill an engineering leader needs outside of recruiting top talent. Approaching startup development from the opposite direction (building the foundation requirements first) is even worse as innovation stalls and you exhaust your initial funding before you are able to prove the business.

Which brings us to the point of this essay. Building your new enterprise application on an established platform cheats this Catch-22. Someone else builds the platform before you show up, and you come along and benefit from all their hard work.

A big part of why our new Security Operations capability at ServiceNow is successful in its early days is that we have zero resources devoted to enterprise foundations. Those foundations are built by an entirely different, well-funded team elsewhere in the company, and their output benefits many products, not just mine. Sure, we talk to them and pass along requirements from time to time, but almost everything we need is already there. Scalability, security, APIs, customization, reporting, and so on are free features for us. Our entire business unit is focused on innovation in the security response space, which is exactly where our board and investors want it. This is an essential element of why the expansion of ServiceNow’s core technology into new markets is so exciting. By hiring talent in a new domain area and allowing them to innovate quickly on top of a mature platform, we can build robust new solutions to key business problems that most startups can only dream of.

The counterargument against an enterprise platform is that by building your own enterprise foundations you can build them exactly the way you want. And for certain technology problems, that makes sense; you need to start from scratch. For example, Apple could never have built iOS by starting with a requirement that the phone run OS X applications. Buyer beware, though. The problem I highlighted earlier still stands: while “built from the ground up” is a great tagline on a data sheet, it often means spending the minimum amount of time possible on those foundations because early customers buy differentiation, not robustness.

As enterprises increasingly move to the cloud and embrace new technology, the advantages of building on a mature enterprise platform will only grow. As mentioned earlier, when robustness comes almost for free, it unlocks amazing feature velocity in your core area of focus. As for me, I’m excited to see how this plays out in my technology area and am curious to hear others’ experiences in building on a platform, or deciding to go it alone.

Security Expert? No Such Thing

expert
noun
1. a person who has a comprehensive and authoritative knowledge of or skill in a particular area.

authoritative
adjective
1. able to be trusted as being accurate or true; reliable.

Let me start by throwing myself under the bus. The tagline of my book “Network Security Architectures” from 2004 is “Expert guidance on designing secure networks”. Lately it seems there are security experts popping up everywhere. The more I think about it though, the more I think we, as a community, need to put down that title until we prove that the technology we build and the systems we implement can predictably and substantially address the problem.

By almost any empirical data, the cyber security best practices and technology we’ve built over the last 20 years are not meeting even this basic standard. Things have gotten worse since Robert Morris’s fateful original worm in 1988 started all the fun. It is comical that we often start presentations with a gloom and doom slide showing how bad things have gotten. Does anyone really not know this? Even my mom sees the headlines and can engage with me about my work in a way she never could before. Information is Beautiful has a wonderfully terrible-to-behold infographic.

There are plenty of large successful companies listed who can afford the latest tech and the smartest analysts.

And yes I know that all security measures can be circumvented… safes are rated on the number of hours they can withstand before being breached… I don’t care. What we’re dealing with is a systemic, decades long, inability to stop the bad guy in any consistent way.

I also know that there is probably an organization out there somewhere that has yet to be breached (or to discover a breach), and that, too, is missing the point. Such an organization, employing the best and the brightest, deploying the latest technology, and helmed by the most respected “expert” in the business, would still in the end fall back on something like, “now let’s hope for the best.” Furthermore, such an organization would need to employ a significant team to operationally respond to moment-by-moment attacks lest they become tomorrow’s breach.

Finally, there’s also a great excuse our industry trots out regularly, “This is really, really hard.” And no question, it is. I suspect it is several orders of magnitude harder than protecting your home. The reason most people in safe neighborhoods don’t live in constant fear of their belongings being taken from their homes isn’t because their door-locks or windows are especially secure. (They can surely be bypassed in much less time than it would take to bypass a modern network firewall.) It is because of two reasons unrelated to perimeter home security.

First, we are counting on modern law enforcement to deter most criminals. Second, we all bank on the probability that if a criminal decides to take the risk, our home likely won’t be the one targeted. Cyber security enjoys little in the way of law enforcement deterrence, nor does it impose a particularly high cost on the adversary per attempt. In fact, if home invasion could be wholesale attempted with the same impunity that those who develop botnets enjoy, we’d all need armed guards.

And yet despite all these extenuating circumstances, I’m not willing to let all of us off the hook. I’m frankly embarrassed and certainly frustrated at the progress made by the industry I’ve worked in for 20 years. There’s a lot more to say about why this may be happening and what we can do about it but that’s for another essay.

Consider another industry from the physical world in comparison. (And yes, I know security folks love to use physical analogies, sorry to be so predictable.)

You can call Joseph Strauss and Charles Ellis experts. They designed the Golden Gate bridge. Their expertise stems not only from the finished product, but also because others can follow similar principles and create a bridge with similar safety, reliability, and beauty. Also, unlike the organization without a breach from our earlier example, though the bridge requires constant maintenance it isn’t of the sort that might result in total structural failure if the maintenance team calls in sick for a few days.

You might say we, as a modern civilization, have figured this bridge-building thing out. You or I couldn’t build a bridge of course, but if we needed one, I’m certain we could find the right experts to do it for us and we’d be confident in the checks and balances of safety inspections and the like.

Not so for securing IT systems.

There are clearly experts working within cyber security but they are focused on one specific discipline. Cryptography is the most obvious example. Cryptographers aren’t yet at the level of bridge designers in terms of stability and confidence but they are close. Standards are established, there is rigorous peer review, and the fastest computers in the world routinely try to cheat the mathematics at the foundation of everyone’s previously vetted work.

However, the moment a person enters the equation by entering a password to unlock his magnificently encrypted data, the perils of security as a system rear their heads. Key loggers, weak passwords, and social engineering can subvert the mightiest algorithm. It is the same with all other aspects of security. Authentication, access control, alerting, detection… all fail primarily when they begin to depend on each other as a system. Back to our previous example, imagine the bridge covered with secret locations which, if hit just right with a hammer, would initiate a collapse. Joe and Chuck would be monsters, not marvels.

So the next time someone is introduced as a security expert, feel free to narrow your eyes a bit, and if you consider yourself among the most knowledgeable folks in our fair corner of the cyber landscape, may I humbly suggest picking a new honorific: security practitioner perhaps. The fact that we’re all still practicing in this industry gives the title a nice honesty, don’t you think?

As for my book’s tagline, I’ll just beg forgiveness. It was 12 years ago, and I’m smart enough now to know that I am dumber.

Resume Tips from a Hiring Manager

I just finished sorting the first round of applicants for some job openings in my business unit. These are tough roles to hire for as they require a very specific set of skills. That said, there were a number of common mistakes in the submitted resumes. After reading almost 30 resumes for two roles, I noticed patterns that any applicant should consider when writing their own CV.

  • Use fewer words – The long description of your qualifications, skills, and experience doesn’t make you look more seasoned; it makes you look less so. It also makes it difficult to find your real talents.
  • Focus on results – What you were responsible for is usually irrelevant. Talk about what changed about the business due to your efforts. Be specific: don’t say “contributed significantly to the bottom line”, say “grew billings 30% over 18 months”.
  • Avoid weird fonts – Unless you are a trained designer, straying from the basics here will almost always give someone a negative impression.
  • Avoid weird layout – When looking at a lot of resumes, hiring managers train their eyes to look for things in a specific place. Deciding to be different just starts things off on the wrong foot, as he or she now needs to hunt to find information. This is not a time to get creative.
  • Use “bold” sparingly – Company names or position titles may make sense to allow for quicker navigation. Please though, don’t pick out words that you think a hiring manager may find interesting. That’s a sure sign you are using too many words to begin with.
  • Don’t crowd the page – You have a lot of power in deciding how someone discovers you by carefully laying out your resume as a PDF. When following the guidelines above, make sure you lay out the information so that it is nicely spaced.
  • Make sure you are qualified – This is just about kindness to your fellow man. Finding a job can be difficult but this is not a process where you can sacrifice quality for quantity. In the rare case where you find the perfect role that seems like a stretch in terms of your experience, write a cover letter that acknowledges the gap and details why you are applying anyway.

TL;DR – Keep it short, format it normally, and focus on the results you’ve achieved in prior jobs that qualify you for your next job.

RSA Session Todos

So I’m here at RSA 2012 and I was able to snag a delegate pass and actually attend some sessions this year. It looks to be a pretty great year content-wise and there were scores of sessions I couldn’t attend but wished I could. Most of them are in the APT, cloud, mobility or risk space. Here’s the list that I’m going to slowly work through via recordings after the fact (apologies for all caps):

GRC-106 RISK MANAGEMENT
HT1-106 ADVANCED PERSISTENT THREATS
HOT-106 JOINING FORCES: PUBLIC-PRIVATE
PNG-106 GOOD SECURITY ON A GOVERNMENT BUDGET?
SECT-106 GIVE ME MY CLOUD BACK: PANEL DISCUSSION OF DATA PRIVACY CONCERNS
SP01-106: OPTIMIZING SECURITY FOR SITUATIONAL AWARENESS
STAR-106: FIREWALLS: SECURITY, ACCESS, THE CLOUD — PAST, PRESENT AND FUTURE
TECH-106: REVOCATION CHECKING FOR DIGITAL CERTIFICATES
DAS-107: THE FIRST 24
GRC-107: TAKING INFORMATION SECURITY RISK MANAGEMENT BEYOND SMOKE & MIRRORS
EXP-107: NEW THREATS TO THE INTERNET
TECH-107: STOP THE MAELSTROM: USING ENDPOINT SENSOR DATA IN A SIEM TO ISOLATE THREATS
STAR-108: COMBATING ADVANCED PERSISTENT THREATS (APTS)
HT1-201: CYBER WAR: YOU’RE DOING IT WRONG!
HT2-201: THAT DOESN’T ACTUALLY WORK
EXP-201: CYBER BATTLEFIELD: THE FUTURE OF CONFLICT
PNG-201: SECURE THE SMART GRID
GRC-202: ADVERSARY ROI
PNG-202: NSA’S SECURE MOBILITY STRATEGY
STAR-202: CAN WE RECONSTRUCT HOW IDENTITY IS MANAGED ON THE INTERNET?
TECH-202: DEPLOYING IPV6 SECURELY
TECH-203: BUILDING A SECURITY OPERATIONS CENTER (SOC)
HT2-204: LIVE FORENSICS OF A MALWARE INFECTION
EXP-204: THE ROLE OF SECURITY IN COMPANY 2.0
P2P-201C: EVALUATING GARTNER
HT1-301: CODE RED TO ZBOT
SP01-301: MANAGING ADVANCED SECURITY PROBLEMS USING BIG DATA ANALYTICS
EXP-302: HACKING EXPOSED: EMBEDDED — THE DARK WORLD OF TINY SYSTEMS AND BIG HACKS
HT1-303: MODERN CYBER GANGS: WELL-ORGANIZED, WELL-PROTECTED, AND A SMART ADVERSARY
MBS-303: SECURING THE MOBILE DEVICE
PNG-303: CYBER INCIDENTS CENTERS
SECT-303: MAKING WORLD CLASS CLOUD SECURITY THE RULE
TECH-303: SECURITY DATA DELUGE — ZIONS BANK’S HADOOP BASED SECURITY DATA WAREHOUSE
GRC-304: COLLECTIVE DEFENSE: HOW THE DEFENDERS CAN PLAY TO WIN
EXP-304: GRILLING CLOUDICORNS
AST2-401: GETTING YOUR SESSION PROPOSAL ACCEPTED
LAW-401: FRAUD AND DATA EXFILTRATION
TECH-401: SCADA AND ICS SECURITY IN A POST-STUXNET WORLD
HT1-402: THE THREE MYTHS OF CYBERWAR
MBS-402: IOS SECURITY INTERNALS
EXP-402: ZERO DAY: A NON-FICTION VIEW
HT1-403: ESTIMATING THE LIKELIHOOD OF CYBER ATTACKS WHEN THERE’S “INSUFFICIENT DATA”

“A New Kind of Warfare”

In this morning’s NYT, there was an illuminating article on cyberwarfare. In short, for both the Libyan attacks and the strike in Pakistan against Bin Laden, the U.S. considered, but ultimately rejected, the option to leverage cyberwarfare against the air defense systems in these countries. The entire article is filled with quotable phrases; here are just a couple:

“These cybercapabilities are still like the Ferrari that you keep in the garage and only take out for the big race and not just for a run around town, unless nothing else can get you there,” said one Obama administration official briefed on the discussions.

“They were seriously considered because they could cripple Libya’s air defense and lower the risk to pilots, but it just didn’t pan out,” said a senior Defense Department official.

Why did they decide not to leverage the attacks? Not because they wouldn’t be effective; in fact, the sources in the article acknowledge that a cyberattack might have reduced the risk to U.S. forces. Instead, from the article we learn, “‘We don’t want to be the ones who break the glass on this new kind of warfare,’ said James Andrew Lewis, a senior fellow at the Center for Strategic and International Studies.” So essentially the worry is that once the U.S. starts leveraging cyberwarfare, it invites others to do the same. The article, by Eric Schmitt and Thom Shanker, did a great job of explaining some of the trade-offs with cyberwarfare and also how hard these sorts of attacks can be to execute. From a cybersecurity perspective, there are several things to consider.

First, the evidence now seems overwhelming that any country with sufficient resources (to say nothing of non-state actors) is actively researching techniques for cyber attacks against its targets of interest. Whether you consider Stuxnet (which the article suggests was launched with American-Israeli cooperation) or the recent Wired article on the drone fleet infection that I previously referenced, it seems clear that major governments have paid operatives figuring out how to break into networks.

Second, I am concerned with what this means for responsible disclosure. Gone are the halcyon days, if indeed they ever existed, of a vulnerability being discovered by an intrepid researcher and responsibly disclosed to the CERT. We’re far beyond debates about whether disclosing a zero-day on bugtraq is ethical or not. It seems quite likely, though this is clearly conjecture on my part, that zero-days are being stockpiled by governments around the world against commercial software, embedded digital control systems, and everything in-between.

The question is, from a policy perspective, how is this treated by the organization that discovers it? Clearly weaponizing a vulnerability is an advantage to the entity that discovers it, but if the vulnerability is in commercial software, how do you protect yourself without telling the vendor about the issue to get a fix (and thus losing the advantage of your discovery)? It would be like conventional warfare where everyone was using the same exact tanks. Imagine a mechanic discovering a hard-to-exploit weakness in the armor, but the only way to fix it would be to get the parts supplier to offer the fix for everyone. What do you do: protect your own troops (and everyone else’s) by disclosing the weakness to the supplier, or hope the other side hasn’t discovered it yet and use it to your own advantage?

This spills over into all sorts of questions about who has the advantage in this new arms race and what role commercial security tools play in the defense against, or execution of, cyberattacks. I’m just beginning to think seriously about this space, and I expect the answers to these and a host of other questions won’t come quickly or definitively.

From Deperimeterization to Borderless Networks

I’ve been embarrassed to see that it has been over a year since my last post on this blog. So why the long delay? Quite honestly my work has been so internally focused within Cisco that there wouldn’t have been much I could say. But as I sit on a plane heading to Networkers (oops I mean Cisco Live!) it seems an appropriate time to reflect on what’s been going on in the land of IT and IT security. I’m spending a lot more time with customers now and I think there are a few conversations worth having on this blog.

When I returned to Cisco in the fall of 2008 I was asked to look into a trend that had troubled many folks: known at the time as “deperimeterization.” The Jericho Forum had coined the term and it struck fear into the hearts of many in the network security industry as it spelled a potential end to rich network security services and pointed towards a world of open and insecure networks interconnecting smart endpoints with security only at the application level.

My investigation into deperimeterization quickly expanded into a look at four interconnected trends: desktop virtualization, software-as-a-service, cloud computing, and IT consumerization. In the 18 months since my initial research these trends have gone from niche issues among a small group of strategists to mainstream concerns that need no explanation.

And what of deperimeterization? Cisco determined that the trend was real, but instead of pointing towards open and dumb networks it actually pointed to even more sophisticated networks to enable the interconnection of the myriad devices that need to connect and collaborate. What is these devices’ sole point of commonality? Not their OS; Microsoft’s hegemony on the endpoint will continue to wane as traditional desktop PCs give way to a variety of different computing devices focused on all sorts of vertical applications and use cases. This new crop of devices will run different hardware and software, and not all of them will even have a human operator.

The only thing these devices have in common is that all will have a TCP/IP stack and will make use of a common network. This makes the network the natural architectural choice for the delivery of services across this diverse set of endpoints. Cisco has marshaled enormous resources behind this trend and has named it Borderless Networks. There is much more to say about all of this but I figured Cisco Live is as good a place as any to start the conversation.

Cisco SAFE 2.0

Just a quick note that the second version of Cisco SAFE came out this week at the RSA show. You can get it here. If you thought my original was long at 66 pages, prepare for a shock: the new one clocks in at over 300! I’ve not yet read it, but I got an overview from some of the authors a couple weeks back and I liked what I heard. I guess I shouldn’t make too many jokes about its length; it is still less than half the length of my book on the same subject.

While security best practices don’t change quickly, we wrote the original SAFE back in 2000 and a lot has happened since then. Many of the foundational best practices remain very relevant, but there are some new tools and techniques that can help protect networks against today’s threats.