I recently read a "face-off" article in NetworkWorld magazine, which unfortunately appears on their website as two separate articles, one for each side of the argument (here and here). The two pieces tackle the question of whether the perimeter network is dead; that is, whether it is a concept that has served its purpose for network security but has run its course and needs to be replaced with something else.
This is an interesting question, and one that has infosec professionals hopping onto bandwagons going in both directions. On the one hand, the arguments for killing off the perimeter are fairly compelling. First, firewalls used to be the gatekeepers for all outside access to corporate networks, and they functioned pretty well. For example, by blocking port 119, administrators could be sure of blocking access to USENET newsgroups--until, of course, such newsgroups started becoming available through web-based interfaces. It got even worse when peer-to-peer file-sharing programs started appearing, since these programs could configure themselves to use any port, including port 80 (web traffic) and port 25 (email), both of which are commonly open on corporate firewalls. Nowadays one can do almost anything over these two ports, so while blocking the other 65,533 ports on your firewall is still sensible, it doesn't add much security beyond what simply disabling unneeded services on your servers would provide.
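The weakness of port-based filtering can be sketched in a few lines of Python. This is a toy illustration only, not real firewall code; the rule sets and function name are hypothetical, chosen to mirror the example above:

```python
# Toy model of a port-based perimeter firewall: decisions are made
# purely on the destination port number, never on the traffic itself.

BLOCKED_PORTS = {119}      # e.g. NNTP (USENET newsgroups)
ALLOWED_PORTS = {80, 25}   # web and email, commonly open

def port_filter(dst_port: int) -> str:
    """Accept or drop based only on the destination port."""
    if dst_port in BLOCKED_PORTS:
        return "drop"
    if dst_port in ALLOWED_PORTS:
        return "accept"
    return "drop"          # default-deny everything else

# Direct NNTP access is blocked...
print(port_filter(119))    # drop
# ...but the same newsgroup content fetched through a web interface,
# or a peer-to-peer client configured to use port 80, sails through,
# because the filter never looks at what the traffic actually is.
print(port_filter(80))     # accept
```

The point of the sketch is that the filter has no visibility into payloads: any application willing to speak over an allowed port inherits that port's permission.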
Second, the perimeter for corporate networks used to mean one router connecting to the internet. All traffic came and went via that router (plus possibly a modem bank attached to a remote access server for the few who needed dial-in access). If your home only has one door, it's easy to monitor who comes in and out--just park a video camera by the door and run the tape. Unfortunately, homes have lots of windows too, and today's corporate networks are even worse; they are starting to look like Swiss cheese. Besides the T1 line to your ISP, you've got WAN links to branch offices and business partners, VPNs to traveling salespeople with laptops, wireless access points both planned and on the sly, Blackberries, PDAs, even a few old 56K modems kicking around. You've got users who take their laptops home, browse the net, get infected with viruses, and then come to work and plug their machines into the LAN. You've got teleworkers who work from home and VPN into the corporate LAN each morning. What's the result? The perimeter now reaches to your teleworker's bedroom and beyond--try securing that!
Third, while defense in depth does make sense, it also makes sense to move your defenses as close as possible to what needs to be defended--your servers, your applications, even your data. Remember, defense in depth doesn't just mean "set up more layers of defense." Many lines of defense are not intrinsically more secure than fewer, though they can delay attackers and give you more time to plan your response. Remember Mission Impossible with Tom Cruise? If I remember correctly, Cruise's team had to break into a computer system at CIA headquarters in Langley, Virginia. As Cruise was outlining their mission, he explained that there were seven different layers of defense that needed to be breached, including password protection, card keys, laser beam alarms, temperature sensors, and the final defense: an impressive, thick-walled, walk-in safe where the target computer was situated. The first few layers were easy to crack (it only took Ving Rhames a few seconds to correctly guess the password, using his Cray-50 Pentium 4 laptop or whatever). The final layers were the biggest problem, and after deactivating the laser beams, Cruise lowered himself from a ceiling duct into the safe and stole the data he needed, since the computer in the safe still had a floppy drive installed. Whew!
The point is, why didn't the CIA just build a huge safe and put their whole building in it? That would have made the first defensive layer the strongest, but of course such an idea would be prohibitively expensive. Plus, such an action would end up securing a whole lot of things that really didn't need securing, such as the toilet paper supplies for all the bathrooms in CIA headquarters. All that really needed to be protected was that one computer and the information on it, so a vault was built to contain it and protect it. In other words, it makes sense to move your strongest defenses as close as possible to the things you are trying to protect. And not just economic sense, but technical sense, since the less stuff you have to protect, the easier it will be to monitor that stuff and ensure it's still safe.
In their excellent book Protect Your Windows Network, Jesper Johansson and Steve Riley make the case for deperimeterization like this: "If the ultimate goal is to protect information, doesn't it make sense to move the protection as close to the information as possible? Indeed, the process becomes more streamlined (but not necessarily easier) as you start to secure hosts and especially data because that's where the information exists to best make the right security decisions" (page 211). The solutions they suggest for implementing this new "perimeterless" security model include authenticating all the time everywhere, always validating and authorizing access, auditing everything all the time, and encrypting whenever and wherever needed (probably always and everywhere).
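Those four controls can be sketched as a single gate wrapped around every data access. The following is a minimal illustration under stated assumptions: the user table, ACL, resource names, and the HMAC "sealing" step are all hypothetical stand-ins (real deployments would use a directory service, a policy engine, and genuine encryption such as IPsec or TLS), not anything Johansson and Riley prescribe:

```python
# Toy sketch of perimeterless access: every read is authenticated,
# authorized, audited, and the data is sealed before it leaves.
import base64
import hashlib
import hmac

AUDIT_LOG = []                                # stand-in for a real audit trail
USERS = {"alice": hashlib.sha256(b"s3cret").hexdigest()}
ACL = {"alice": {"payroll.db"}}               # who may read which resource

def read_protected(user, password, resource, key=b"demo-key"):
    # 1. Authenticate every request, every time.
    if USERS.get(user) != hashlib.sha256(password).hexdigest():
        AUDIT_LOG.append((user, resource, "auth-failed"))
        raise PermissionError("authentication failed")
    # 2. Authorize this specific access.
    if resource not in ACL.get(user, set()):
        AUDIT_LOG.append((user, resource, "denied"))
        raise PermissionError("not authorized")
    # 3. Audit everything, success and failure alike.
    AUDIT_LOG.append((user, resource, "granted"))
    # 4. Seal the data with a MAC so tampering in transit is detectable
    #    (a toy stand-in for real wire encryption).
    data = b"top-secret payroll figures"
    tag = hmac.new(key, data, hashlib.sha256).digest()
    return base64.b64encode(tag + data)
```

The design point is that the checks travel with the data rather than living at a network edge: whether the request arrives from the LAN, a VPN, or a teleworker's bedroom makes no difference to the gate.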
The arguments are compelling, but are they convincing? Probably, but such a solution also runs up against an inescapable fact concerning network security that they point out earlier in their book--the fundamental tradeoff between security, usability, and cost. Imagine, for example, implementing IPSec everywhere on your internal network so that all network traffic is encrypted. This certainly makes the "soft, chewy center" of your network nearly as tough to crack as the hard exterior (perimeter), but it also adds a lot of complexity, and complexity means the network is harder to manage and probably more costly too (in time to implement and troubleshoot). So perhaps, if you want your networks to be more secure, you should pay less attention to the perimeter and implement some of the deperimeterization techniques they suggest. But do your risk analysis first to see whether it's worth the cost and effort of buying that safe with 12-inch steel walls and laser beams whizzing everywhere. After all, what do we have to fear from Tom Cruise?
Mitch Tulloch is the author of Windows 2000 Administration in a Nutshell, Windows Server 2003 in a Nutshell, and Windows Server Hacks.
Copyright © 2009 O'Reilly Media, Inc.