A new IETF draft was recently published describing an extension to RADIUS that supports a standard way to define access control lists at the rule level. Previous standard incarnations used the Filter-Id attribute, which could only point to a pre-configured filter on the device. Though some VSAs can provide this functionality today, a ubiquitous standard attribute is vastly preferable. It would provide a much better vehicle for describing authorization rules in a central location instead of managing them individually on each enforcement device.
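The contrast between the two approaches can be sketched roughly as follows. The attribute names and rule syntax here are illustrative assumptions, not copied from the draft itself.

```python
# Old style: Filter-Id merely names a filter that must already be
# pre-configured on every enforcement device.
reply_filter_id = {"Filter-Id": "engineering-acl"}

# Draft style: the rules themselves travel in the RADIUS reply, so the
# policy can live centrally and needs no per-device pre-configuration.
# (Attribute name and rule syntax are illustrative assumptions.)
reply_filter_rules = {
    "NAS-Filter-Rule": [
        "permit in tcp from any to 10.0.0.0/8 80",
        "deny in ip from any to any",
    ],
}

def inline_rules(reply):
    """Return any rule-level ACL entries carried directly in the reply."""
    return reply.get("NAS-Filter-Rule", [])
```

The point of the second shape is that changing an authorization rule means changing one RADIUS reply, not touching every switch and gateway that enforces it.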
Archive for the ‘RADIUS’ Category
When discussing 802.1X, authenticated networks, and RADIUS with customers, I'm often confronted with a small bit of confusion regarding deployment options. The standard way of thinking about 802.1X is that it represents an all-or-nothing proposition: you either configure and install supplicants on all endpoints (with standard exceptions for printers and the like), or you use an overlay (non-802.1X) authentication technology that typically employs some variation of a captive portal. The captive portal is a device that forces traffic through itself and can require authentication at that step (just like hotel broadband). However, a nice feature on most switches enables smoother 802.1X migration as well as long-term interoperability with non-802.1X systems (like guest or contractor machines). This feature is the default VLAN: if the client does not respond to the EAPoL challenge, it is placed on a default VLAN. One viable deployment choice has this default VLAN routing traffic through a captive portal, where web-based authentication can take the place of the 802.1X authentication step. Some network devices support this web authentication directly in hardware, which represents yet another choice.
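A minimal sketch of this fallback behavior: a client that answers the EAPoL challenge is authenticated via 802.1X, while one that stays silent lands on the default VLAN, where a captive portal can handle web-based authentication. The VLAN numbers are illustrative assumptions.

```python
AUTH_VLAN = 10      # full access after a successful 802.1X exchange
DEFAULT_VLAN = 99   # traffic routed through the captive portal

def assign_vlan(responds_to_eapol: bool, eap_success: bool = False) -> int:
    if not responds_to_eapol:
        # Non-802.1X client (guest, contractor, legacy OS): default VLAN,
        # where the captive portal can force web authentication instead.
        return DEFAULT_VLAN
    # 802.1X-capable client: grant full access only on successful auth.
    return AUTH_VLAN if eap_success else DEFAULT_VLAN
```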
Regardless of which technique you choose, this gives IT departments the ability to roll out authenticated networks without managing massive exception lists or maintaining support for 802.1X supplicants on outdated client OSs. It also provides a natural incentive in places like universities to encourage migration: because the 802.1X route can be considered the "fast path" (it doesn't involve sending traffic through an unnecessary intermediary device), students can be encouraged to deploy a supplicant to get faster network access.
Network World Magazine recently ran a review of Cisco’s new ASA 5500 SSL VPN technology. Nestled in the review are a couple great tidbits about why direct integration between user directories and network infrastructure devices is a bad idea, and why the flexibility of your authentication infrastructure is key to allowing user AAA to work as promised by the tenets of NAC.
In our authentication and authorization tests, we discovered that while the ASA claims to support Active directory and Sun’s Lightweight Directory Access Protocol server, it didn’t support our schema of the Sun LDAP server. When we tried switching over to our SecurID RADIUS server, we discovered that Cisco fully supports the additional RADIUS messages required to integrate with SecurID.
However, Cisco had no flexibility in mapping users to groups, and would have required us to change our existing RADIUS schema, breaking all the other applications plugged into SecurID.
The two critical lessons here are, first, that in real-world deployments, schemas are rarely designed to support the network infrastructure device, and often they are non-standard. This is one of the many reasons why direct integration from the network infrastructure device to the directory is architecturally a bad idea. The fact that this was exposed even in a test environment simply magnifies the concern. Second, many current RADIUS servers simply lack the capability to integrate directories and network devices with the flexibility required by today's deployments. Interconnecting Ethernet switches, firewalls, VPN gateways, wireless APs, and dial-up servers with Microsoft Active Directory, LDAP, and token servers requires approaching the policy-based AAA problem in a new way.
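The flexibility the review found missing amounts to this: rather than forcing the existing directory or RADIUS schema to change, the AAA layer should translate whatever attribute the schema already uses into the group name the device expects. A rough sketch, where all attribute and group names are hypothetical:

```python
def map_to_device_group(user_entry, mapping, default="restricted-access"):
    """Translate a site-specific directory attribute into a device group.

    The directory's schema stays untouched; only this mapping table knows
    what the enforcement device wants to see.
    """
    dept = user_entry.get("ou") or user_entry.get("departmentNumber", "")
    return mapping.get(dept, default)

# Hypothetical site policy: which directory value maps to which VPN group.
site_mapping = {"eng": "vpn-full", "sales": "vpn-restricted"}
```

With this indirection in the middle, swapping the gateway vendor (or the directory schema) means editing one table, not re-plumbing every application that shares the directory.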
My company, Identity Engines, just released version 3.0 of our policy decision / AAA platform. You can read the press release here. The major new feature is a user provisioning framework that allows Ignition to send a set of arbitrary VSAs to the enforcement point as a result of the policy decision. These VSAs can be tuned based on the vendor of the enforcement point, the location on the network, or any information we learn about the user from the back end directories. This is the next step in offering some ability to centralize authorization rules within the platform. We’re out at Interop this week showcasing the new functionality. If you are in town, feel free to stop by and say hello.
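A rough sketch of what vendor-tuned provisioning could look like: the policy decision yields a role, and the VSAs actually returned depend on the enforcement point's vendor. The table contents are illustrative assumptions, not Ignition's actual rules.

```python
# Hypothetical table: (vendor, role) -> the VSAs to send in Access-Accept.
VSA_TABLE = {
    ("cisco", "employee"): {"Cisco-AVPair": "shell:priv-lvl=1"},
    ("juniper", "employee"): {"Juniper-Local-User-Name": "employee"},
}

def provision(vendor: str, role: str) -> dict:
    """Pick the VSA set for this enforcement point and policy result."""
    return VSA_TABLE.get((vendor, role), {})
```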
Joel Snyder at Network World has just posted a great article comparing the major offerings in the NAC space from Cisco, Microsoft, Juniper, and the TCG. If you are new to this space, this is an excellent primer.
I’m off to the IETF meeting in Dallas to attend the NEA BoF. NEA at this point exists as a problem statement with a goal to define a set of standards in the same vein as Cisco’s NAC, Microsoft’s NAP, and the TCG’s TNC. What’s unique about this effort is that both Juniper and Cisco are participating (they are two of the coauthors of the problem statement I-D). Cisco has not participated in the TNC effort in this same space, but Juniper/Funk have. Hopefully this can bring some consistency, and ideally interoperability, to the two approaches. I reviewed the problem statement draft, and it is a good summary of both the issues and the opportunity for standardization.
A recent IT Architect article goes into some of the IETF’s work in the AAA space, focusing mainly on the continued viability of RADIUS and why the transition to Diameter isn’t moving very quickly. This is a good summary of the state of things, though I might quibble with the author’s classification of RADIUS as sexy.
Those of you who’ve been in the IT industry for a long enough time have seen a remarkable evolution in connectivity. The evolution occurred so slowly that perhaps you missed it. Let’s back up to the early 1990s, before IP really caught on in the enterprise space. Network engineers (myself included) were running around getting our Certified NetWare Engineer (CNE) certifications and spending a lot of time building networks that ran IPX. IPX was a remarkably simple protocol that used a variation of the machine’s MAC address as the L3 IPX address. If this sounds a bit like IPv6, it should. Both derive the L3 address from the L2 address and make connectivity fairly easy, at least on a local level.
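The MAC-derived addressing mentioned above survives in IPv6’s original EUI-64 interface identifiers: flip the universal/local bit of the MAC’s first byte, splice ff:fe into the middle, and prepend the link-local prefix. A minimal sketch:

```python
def mac_to_link_local(mac: str) -> str:
    """Derive an IPv6 link-local address from a MAC, EUI-64 style."""
    b = bytes(int(octet, 16) for octet in mac.split(":"))
    # Flip the universal/local bit, insert ff:fe between the OUI and the
    # device-specific half of the MAC.
    eui64 = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:6]
    groups = [f"{(eui64[i] << 8) | eui64[i + 1]:x}" for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)
```

For example, `mac_to_link_local("00:11:22:33:44:55")` yields `fe80::211:22ff:fe33:4455`, illustrating how the L3 address falls straight out of the L2 address, just as with IPX.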
Skip forward to the mid 1990s and we were all enamored with the Internet and the possibilities of TCP/IP. Like me, perhaps you took part in the installation of your company’s first Internet firewall / bastion host. One thing you did, firewall or not, was configure static IP addresses. Lots of them, on every host that used TCP/IP. This is because DHCP didn’t first materialize as RFC 1531 until 1993 (it later became RFC 2131) and didn’t get deployed ubiquitously for a while after that. In that time between IPX and DHCP, configuring these static IP addresses was a pain. The chance of misconfiguration was high, and you tracked all your IP networks in an Excel spreadsheet. You knew that Bob’s IP address was 192.0.2.45 because you hard-coded it as such just the other day. If you wanted to write a security policy for Bob, for all practical purposes 192.0.2.45 = Bob.
Static configuration was a particular pain for the network admins themselves, who were constantly moving machines around the network for testing and troubleshooting. Since the Excel spreadsheet was invariably a bit out of date and your time on any IP network was limited, you tended to guess an address for the last octet of your machine’s IP address. Sometimes you guessed right; sometimes you guessed wrong and the gratuitous ARP caused your stack to shut down. Troubleshooting network connectivity had to do with making sure DNS was set right, the default gateway was set right, and you weren’t conflicting with another’s IP address. Of course, the fact that TCP/IP stacks weren’t built into Windows until 1995 presented other problems that I won’t go into here.
The point of this trip down IT memory lane is that since the deployment of DHCP we’ve been on a wave of more and more ubiquitous connectivity and less and less troubleshooting per connected station. You move your laptop to another segment, plug in, and get an address that works. IPsec VPNs gave us a small taste of problems, as the filtering of ICMP Type 3 Code 4 (Fragmentation Needed and DF bit set) messages by overzealous firewall admins led to MTU issues from the IPsec headers. But for the most part, we’ve experienced a decade or so of fairly consistent connectivity. WLANs just extend the reach of the places you can connect, and when combined with home broadband, hotel access, etc., we’ve been in IPv4 connectivity nirvana.
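A back-of-the-envelope look at that MTU squeeze: ESP tunnel-mode overhead shrinks what a 1500-byte link can carry, and if firewalls drop the ICMP “fragmentation needed” messages, Path MTU Discovery never finds this out. The overhead figures below are typical assumptions (outer IPv4 header 20 bytes, ESP header/IV/padding/trailer roughly 36 more), not exact values for any one configuration.

```python
LINK_MTU = 1500
IPSEC_OVERHEAD = 20 + 36   # outer IPv4 header + approximate ESP cost

# What remains for the inner (original) packet on the tunnel path.
effective_mtu = LINK_MTU - IPSEC_OVERHEAD

# A full-size 1500-byte inner packet no longer fits; with the ICMP
# feedback filtered, the sender never learns to shrink its segments.
needs_fragmentation = 1500 > effective_mtu
```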
Now you may be wondering who the villain is in this story, and there isn’t one, save progress in general. This time it was progress in security. You see, this ubiquitous connectivity, combined with nearly all IP-connected stations being Internet-connected, created a breeding ground for network attacks of all sorts. There’s no need to rehash this except to say the last five years haven’t exactly been in the “Win” column for the good guys.
The latest wave of technology to combat the growth of Internet threats of all kinds is a technology type that I’ll call device posture assessment. Posture assessment products exist or are in development everywhere you look (Cisco, Juniper, Microsoft, TCG TNC, Symantec/Sygate, ConSentry, etc.). The basic idea of these systems is to query the state of a connecting machine to see if it is an asset of the organization that runs the network, and if it is up to date with its anti-virus, patch levels, firewall policy, etc. These systems often make use of the RADIUS protocol and, slightly less commonly, the IEEE 802.1X port authentication standard. While these technologies offer great promise as they mature, they also have the unintended consequence of making troubleshooting basic network connectivity harder than I can ever remember it being. Let’s review the list of things that could be wrong in a normal vanilla DHCP-enabled IP LAN: a bad cable, port, or link negotiation; a DHCP server that is down or has an exhausted scope; a wrong or unreachable DNS server; a wrong default gateway; or a duplicate IP address on the segment.
I’m sure I’m missing a couple here, but I think I’ve got the big ones. Now let’s look at the additional problems introduced in a RADIUS / 802.1X enabled posture checking system: a supplicant that is missing, misconfigured, or using the wrong EAP type; client credential or certificate problems; a RADIUS server that is unreachable or has a shared-secret mismatch with the switch; a posture agent that is out of date or reporting a failed check (anti-virus, patches, firewall policy); or a default or remediation VLAN that lands the user in the wrong segment.
Companies that have been delivering these capabilities to the market have been focused on the basic functionality, which has left troubleshooting underdeveloped. This reality has not escaped the notice of magazine reviewers, but the focus of these articles has been mostly on evaluating the promise of the security capability, not taking a serious look at the realities of deploying it in production. Take another quick look at the list of potential problems above. Imagine a caller coming into your help desk complaining about something not working. Now imagine the steps you’d go through trying to figure out what might be wrong.
The moral of this story, if there indeed is one, is that while these posture checking technologies offer great promise, we’re still in the land of the bleeding edge with respect to production deployment. Organizations looking to deploy should first evaluate the robustness of their management infrastructure and the troubleshooting tools that the vendor will provide. Since most of these techniques make use of RADIUS and 802.1X, perhaps getting those technologies deployed for simple user or device authentication provides a more tractable problem in the near term. As the posture offerings settle down and mature, you’ll have set the stage by building out a robust AAA infrastructure to handle whatever standards (de facto or de jure) eventually emerge. The next couple of years should prove quite interesting for network connectivity; I look forward to seeing how it all shakes out.
Identity Engines launched today! Things have been busy around the office getting everything ready in all departments. Discussions with press and analysts have so far been quite positive. There’s much to do going forward but we’ve got a very interesting product that solves a real problem enterprises have today.