UTM vs Distributed Architecture


As this document is being written, the concept of UTM (unified threat management) has been around for more than ten years. Yet, even after all these years, and with so many vendors now in the market, the UTM concept has yet to be as widely accepted as one would expect, considering the enormous benefits it provides in terms of simplification, manageability, and cost of network gateway security.

The objections raised against UTM devices are the same as they were five years ago: mostly potential performance issues and the fact that a UTM is a single point of failure. This document attempts to provide an accurate picture of how a UTM truly fares in a network, showing the benefits an organization can derive from adopting such a device. When considering a UTM device, one must always keep in mind that a network is not a closed environment with only one entry point, the Internet. That misconception is dangerous because its logical consequence is the assumption that protecting the gateway is enough to protect the entire network. But an armored door is worthless when all the windows are unprotected. The concept of defense-in-depth applies to any network, large or small, and a UTM is simply the beginning of the protection, not the end.

It may be useful to first establish what a UTM really is and which functions it needs in order to be a true UTM. Industry experts agree that a UTM, at the very least, should include:

- Firewall
- VPN (virtual private networking)
- Intrusion prevention (IPS)
- Gateway antivirus
- Anti-spam
- Web/content filtering

Many vendors add other functions, such as advanced routing and QoS (quality of service), and many go to great lengths to provide advanced network-activity analysis tools. In many cases you can choose to omit some functions, the most common omissions being VPN and content filtering.

In an enterprise-class UTM architecture, multiple security functions can be delivered in a single box. However, these functions need not all be delivered at the same location. In fact, a UTM box in the enterprise may deliver firewall, IPS and URL filtering at the perimeter while delivering Web application firewall and database security in the data centre. In both cases, multiple separate single-function devices, and the inevitable network glue needed to tie them together, are consolidated into the UTM device. If the UTM device uses a modular architecture, the customer can even select which vendors' security applications to run. In the process, they gain operational efficiencies and a common foundation for security delivery across the enterprise.

The beauty of enterprise UTM is that organizations can implement best-of-breed applications on a multi-function device that delivers high performance, high availability and intelligent provisioning of security. It is a highly consolidated, flexible service layer that includes both the edge and network core, and allows service providers and enterprises to deploy the most appropriate security measures precisely where they are needed. This UTM is not tied to any one vendor’s view of the universe, and is not based on proprietary technology.

One of the common myths in security is that security hardware should be distributed across the enterprise so that there is no single point of failure. The fear is that if a single box should fail, the whole network, from the core to the perimeter, would be exposed to attacks. However, enterprise-class UTM devices must have very strong high-availability characteristics. Modular chassis-based systems are ideal platforms, as they provide not just dual-box redundancy configurations but much deeper levels of redundancy that extend to all parts of the system. An increasing number of vendors are building single-box high-availability systems in which there are no single points of failure, and services applied across blades can withstand multiple failures. In addition, expect to find sophisticated application monitoring functions that detect not just hardware failures but software failures that require failover. Interface and connectivity failover protections should also be present, both on a single network blade and across multiple blades.

There is a more profound benefit, though, that is being realized by leading adopters of enterprise-class UTM devices. As in many technology markets, some customers have gotten ahead of the vendors by tying together previously disconnected processes. In the case of UTM, technology pioneers have begun to use their UTM devices as the execution layer in a “unified risk management” strategy.

As security becomes increasingly driven by compliance requirements, organizations are beginning to realize that there needs to be tighter integration between their security and risk management strategies. In other words, how can they build a sound risk management model that provides the required security coverage exactly where it is needed, when it is needed, and at a cost that can be measured?

Because high-end UTM is deployed as a service layer in the enterprise, security applications can be deployed where they are needed most, in combinations that make sense, and turned on or off as needed. This flexibility enables organizations to easily connect the provisioning of security with their risk management policy. The policy requires the classification of assets into security zones that map to risk categories. Once these categories are defined, the UTM architecture allows the enterprise to apply security in combinations that map to these categories, and manage cross-boundary transitions between risk categories in accordance with the company’s security policy.
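The zone-to-category mapping described above can be pictured as a simple policy table. The following is a hypothetical sketch; the zone names and function sets are invented for illustration and are not taken from the text.

```python
# Hypothetical sketch of a risk policy table mapping security zones
# to the protections a UTM service layer would provision for each.
# Zone names and function sets are illustrative assumptions.

RISK_POLICY = {
    "internet_edge": {"firewall", "ips", "url_filtering"},
    "data_center": {"firewall", "waf", "database_security"},
    "internal_lan": {"firewall", "antivirus"},
}

def functions_for_zone(zone):
    """Return the protections provisioned for a zone; default to
    a bare firewall for zones the policy does not classify."""
    return RISK_POLICY.get(zone, {"firewall"})

print(sorted(functions_for_zone("data_center")))
# ['database_security', 'firewall', 'waf']
```

The point of such a table is that protections are turned on per zone, in combinations that map to risk categories, rather than applied uniformly everywhere.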

The ability to consolidate security applications in a UTM architecture that can be tied to a company’s unified risk management strategy results in great cost savings, while eliminating the requirement that all protections be applied all the time at all boundaries. Interestingly, these are the same benefits that can be realized by companies using small business UTM boxes. The enterprise benefit simply requires the extension of the UTM box concept to that of architecture. By making this subtle but important shift in thinking, enterprises can correctly identify enterprise-class UTM equipment and reap its important economic and security benefits.

As new threats emerge with the introduction of new technologies such as wireless, VoIP and instant messaging, the value that unified threat management can provide as security architecture will become increasingly important to organizations as a way to quickly and cost-effectively manage evolving security requirements – from smaller businesses and branch offices to the largest enterprises and service providers.

We will now analyze the most important advantages of UTMs.

Management

Managing a UTM device is not a complex task compared with managing a collection of devices running various services. But to put it more accurately, and more inclusively: managing security is a complex task. Any function, whether it is on the appliance or on a server of its own, needs to be managed. The difference is that when dealing with one appliance, you are dealing with just a single device.

Let’s consider the other situation. You have a firewall/IPS device at the edge of your network, but your content filtering is done by a proxy device running a web filtering application installed on a Windows server. Now, not only do you still need to manage the content filtering function, but you also need to manage the server the function is running on, with all the complications this entails. And how many times do we see operating system updates breaking the services running on them? Moreover, if the solutions are from different vendors, as is most likely, you also need to learn different ways of managing the various services, with all the inconsistencies this involves. Each individual box needs to be patched and maintained to provide a secure platform upon which to run the point-solution software. Maintaining several devices, in many cases two of each type, adds so much complexity to management that the possibility of problems occurring and going unnoticed is very high, not to mention the issues that may arise with routing and with troubleshooting when something goes wrong.

By far, the best solution is the simpler solution – managing one appliance only.

Assuming we agree that all the functions offered by a UTM appliance are necessary, if not indispensable, managing all of them requires knowledge of many different systems, a list of which would fill an entire page of someone’s resume. The administrator needs an in-depth understanding of networking and routing; must know the firewall language and understand firewall security; and must know how to configure the IPS, the antivirus and its policies, the proxy, the anti-spam and the content filtering. Each of these tasks is a job unto itself. And it is not only a matter of knowing the firewall language; one can know a language well and still say things that make no sense. Understanding security is the most important part of managing security: understanding how and why to configure something is far more important than knowing the syntax to achieve that configuration. This is why improper configuration of defenses is the second most common cause of company networks being compromised; understanding and configuring all these functions is no easy task. But difficulty is not a reason to do without them. All these functions are necessary to properly protect a network, so the question is whether you want to keep them separate and distributed or accept the UTM concept we recommend. Either way, we believe the best solution is to rely on the personnel best equipped to manage your gateway security.

Configurability

The ability to configure an application to your needs does not particularly depend on whether that application is running on its own dedicated device or is sharing hardware with other functions. The configurability of each function depends on how the configuration interface has been written and what functions have been put into it, and in our experience single-application devices are not especially more versatile than UTMs when it comes to configuring each function. In reality, integrating multiple functions into a single device may even enable new functionality otherwise unachievable on single-point devices; as a result, the actual configurability of a UTM may turn out to be greater. When evaluating any application in your network, understanding how configurable it is may be a very important consideration for the final choice. That goes for any application, device or appliance in a network.

Single point of failure – or many points of failure?

Whether or not the network protection is concentrated into one appliance, there are situations when the network is better off without an Internet connection should a certain function fail. Specifically, firewall, IPS and antivirus are fundamental, and a network should not have connectivity to the Internet without them. Hence, whether these functions are all on the same appliance makes no difference. If the firewall runs on its own device and fails, the network will lose connectivity, and so it should. The only way to overcome such situations is a redundant configuration called high availability, which is supported both by single-function devices and by UTMs. In this case, having all the functions in one place actually makes life easier, because redundancy is much more readily achieved by simply copying the configuration across the two devices.

On the other hand, in a distributed architecture a redundant configuration can be created only by purchasing two of everything. Therefore, instead of managing five or six servers/appliances, you will now need to manage 10 or 12. As complexity increases with each server or appliance added to the network, the possibility of failure and error, as well as the difficulty of management, also increases. Even from a purely statistical standpoint, having more devices in line increases, not reduces, the probability of failure. Putting in line five devices that each fail, statistically, every 100 days means a possible failure every 20 days.

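The 100-day figure above is illustrative, but the arithmetic behind it can be sketched. Under the common simplifying assumption (not stated in the original) that each device fails independently with an exponentially distributed lifetime, the expected time to the first failure of n devices in series is the single-device MTBF divided by n:

```python
# Sketch: expected time to first failure of devices placed in series.
# Assumption (illustrative): each device fails independently with an
# exponentially distributed lifetime, so the minimum of n such
# lifetimes is itself exponential with mean MTBF / n.

def time_to_first_failure(mtbf_days, n_devices):
    """Expected days until the first of n independent devices fails."""
    return mtbf_days / n_devices

# One device failing on average every 100 days:
print(time_to_first_failure(100, 1))  # 100.0
# A chain of five such devices sees its first failure every ~20 days:
print(time_to_first_failure(100, 5))  # 20.0
```

Real failure modes are messier than this model, but the direction of the effect is the point: more boxes in series means more frequent first failures.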
Because a UTM runs multiple services, one might suppose there is a higher chance that if one service fails it will take down the entire appliance. But if the UTM appliance is well built, this fear is unfounded.

The services offered on a UTM device can be divided into two categories: critical and non-critical.

As previously noted, firewall, IPS and antivirus are critical. Anti-spam, web filtering and VPN are non-critical, in the sense that no immediate danger comes to a network from a spam email. If one of the critical services fails, as mentioned above, it is correct that the protected service should fail as well: emails will not flow if the antivirus fails. But many UTMs run two or three antivirus engines, and the likelihood that all of them fail at once is close to zero. If a non-critical service fails, since no immediate danger is posed to the network, there is no reason for everything else to stop. A non-critical service can fail open or fail closed, but that does not depend on whether the service runs on its own server or on the UTM appliance. UTM versus distributed architecture has no bearing on what ultimately happens when a service fails.
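The "close to zero" claim about multiple engines can be made concrete with a hedged back-of-the-envelope calculation. Assuming (purely for illustration) that engines fail independently and each is unavailable a fraction p of the time, the chance that all of them are down simultaneously is p raised to the number of engines:

```python
# Sketch: chance that several antivirus engines are all down at once.
# Assumption (illustrative, not from the text): engines fail
# independently, each unavailable a fraction p of the time.

def all_engines_down(p_single, n_engines):
    """Probability that all n engines are unavailable simultaneously."""
    return p_single ** n_engines

# If any one engine were down 1% of the time, three independent
# engines would all be down roughly one millionth of the time.
print(all_engines_down(0.01, 3))  # ~1e-06
```

Independence is an optimistic assumption (a shared platform fault could take down all engines together), which is why the text's fail-safe mechanisms still matter.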

The false assumption that the failure of a single service can take down the entire chain and bring Internet connectivity to a halt is based on a misconception of how UTMs work. Beyond the above, a well-built UTM includes fail-safe mechanisms that provide service continuity should one of the services fail. It runs a health system that checks that everything is working properly and sends alerts to the management center if something is not.

A UTM device simplifies the architecture, makes configuration easier and, therefore, actually reduces the risk of failures.


Performance and resource utilization

Performance depends on the hardware, on the way the software is written, and how well this software uses the available hardware resources. There is no doubt that to run more applications on the same hardware you will need to consider bigger hardware. But be aware that this does not mean only faster CPU or more memory. The configuration of the disks, if your device is doing lots of logging, is as important as everything else. You don’t want to have a very fast CPU idling 70% of the time because the disk I/O is running behind. In a distributed scenario, each service can, at best, use the hardware it is running on – but nothing more than that. So when that service is not running at full capacity, that specific hardware is not being used well.

However, when that service is running at peak capacity, the risk of overloading the hardware is all too real, and the trade-off is unavoidable. If the hardware is sized to provide good performance at peak time, then you may be spending a very large budget on hardware that you are not fully utilizing most of the time. Conversely, if the hardware is sized for average use, you run the risk of overloading it (and possibly seeing that service crash) at peak time. The bottom line is that total throughput is limited by the slowest device.

In a UTM scenario, you will certainly need more hardware than you would for any single service. But the nice trade-off is that all the services share that capacity and take full advantage of the more powerful hardware at all times. So when one service is not fully utilized, another may be, and the hardware resources are shared dynamically. The odds that all services run at peak capacity at the same time are low but real, so the UTM hardware must be sized to provide for this possibility, or it will run into the same issues cited for the distributed architecture. But this does not mean that one architecture is better than the other; it simply means that the hardware needs to be sized properly in both cases. It is certainly cheaper to properly size a UTM and, in most cases, also simpler. Take VMware, for example: the use of virtualization is spreading rapidly, because consolidation simplifies maintenance and management while allowing resource sharing among different applications. Why not take advantage of the same trend on a UTM device?
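The sizing argument above is essentially statistical multiplexing, and can be sketched numerically. Dedicated boxes must each be sized for their own peak, while a shared box is sized for the peak of the combined load, which is usually lower because services rarely peak at the same moment. The load figures below are invented for illustration:

```python
# Sketch: dedicated-per-service sizing vs shared (UTM) sizing.
# Hourly load samples for three hypothetical services
# (arbitrary units; illustrative numbers, not measurements).
firewall = [30, 80, 40, 20]
antivirus = [10, 20, 70, 30]
filtering = [50, 10, 20, 60]

# Distributed: provision each box for its own peak load.
distributed_capacity = max(firewall) + max(antivirus) + max(filtering)

# UTM: provision one box for the peak of the summed load.
combined = [f + a + w for f, a, w in zip(firewall, antivirus, filtering)]
utm_capacity = max(combined)

print(distributed_capacity)  # 210
print(utm_capacity)          # 130
```

When services peaked simultaneously, the two figures would converge, which is exactly the text's caveat that the UTM must still be sized for that possibility.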

First layer of your defense-in-depth

Defense-in-depth has to do with the reality that threats can come to your network from many sources; the Internet gateway is not the only point of entry, though we often would like to think it is. This topic is the subject of thousands of papers written about security, and it is a very important point. Defense must be multi-layered. This means that you cannot do without a gateway antivirus that scans any possible protocol, such as SMTP, POP3 and HTTP. You cannot allow the viruses to arrive at your server and then catch them there. It is important to try and stop as much as possible at the gateway, possibly everything coming from that direction.

You still need to defend your network internally because, again, threats can come from other sources. But it is unthinkable to allow malware inside your network from the Internet and catch it only after it is already on the server. Defense must be multi-layered, and a UTM is the first line of defense from the Internet. Think of it as the fortified walls and moat around your castle, but not as the only necessary defense, because the castle is not attacked only from the outside. Defense-in-depth does not mean serialization, either: putting five or six separate devices in a line at the edge, each running one function such as firewall, IPS or gateway antivirus, does not achieve it. Defense-in-depth means, in simple words, performing the same checks at different layers. Deploying antivirus at the gateway, on the server and on all the workstations constitutes defense-in-depth. Having an IPS at the edge and several IDS devices inside the network constitutes defense-in-depth. No one proposing a UTM should ever claim that it is, by itself, defense-in-depth; it is just one of the many layers necessary to achieve it.

Total cost of ownership

Total cost of ownership (TCO) is an important factor when deciding whether to adopt UTM technology. As should be clear by now, running a UTM device is a lot cheaper than running five separate servers or appliances. This is a prime consideration in the final architectural decision, especially for smaller organizations. Most often, in order to keep a distributed approach within a low budget, the customer ends up compromising and doing without important security components. UTMs offer a very cost-effective solution to this problem. And since management is easier, the TCO of a UTM device is certainly much lower than that of running everything on separate servers.

Integration

A UTM runs many necessary applications together. If the manufacturer has done a good job of integrating these functions as one, the final result is a device with many features and functions that would not otherwise be achievable in a distributed architecture. It is also easier to determine whose problem it is when something is not working: you have just one phone number to call. As in every system-integration project, the integrator is responsible, and as a customer, all you need to know is that your integrator (in our case the manufacturer of the UTM device) will do all it can to solve your problem. Try taking the same approach when you have six different vendors, one for each application, and something goes wrong. The technical issue becomes the least of your problems while dealing with several vendors, none of which wants to take responsibility until it is clear where the responsibility lies.

Best-of-breed approach

A very important aspect of UTM technology is the quality of the applications. In a distributed architecture, it is easy to choose, for each function, the specific product that best appears to fit your company’s needs. When acquiring a UTM, one should not compromise on the engines the UTM vendor uses: a best-of-breed UTM can only be built using best-of-breed vendors for all services, such as Anti-X, IPS and content scanning.

Configurability is another consideration. Some UTM devices cannot be configured to perform complicated routing or certain specific firewall functions. Even when the underlying application is capable of those functions, the configuration interface might not have been written to expose them. It is practically impossible to create a user interface that provides for every variation of configuration users will dream up. When developing a GUI, compromises must be made, so developers end up including the functions that will satisfy 80-90% of customers. That is also why it is very important, when choosing any UTM, to be sure that the command line is accessible, so that 100% configurability can be achieved. This also applies to single-function devices, but it is especially important for UTMs because of the increased complexity of the GUI.

Conclusions

Optimum security is a trade-off against the needs of the business. These trade-offs are not related to the solution’s architecture but to the function itself. In a UTM scenario, you don’t have the complexity of getting different devices from different manufacturers talking to each other. Examples from Astaro include URL content-filtering inside emails, using IPS to provide directory-harvest protection for mail scanning services, web-proxy caching linked to antivirus (to avoid virus-scanning cached content), and many others.

The world of computing has gone from mainframes to distributed technology, and this has multiplied the complexity of management. Now we are going back to virtual devices and cloud computing to simplify our world again. UTMs go in that same direction, and the trend is only going to continue. Those companies that try so hard to dismiss even the existence of UTM as a security solution are not looking properly at the reality of security and at how it has evolved in the past five years.

As hardware becomes more powerful and less expensive, architectures tend to become simplified, and integrating several functions on the same hardware allows for a great deal of interaction between the different modules. UTMs are a reality and are here to stay. They will only get more powerful and more all-inclusive. They are less expensive than distributed architectures, and the integration of all the security components in one place enables functionality that is otherwise unachievable in a distributed architecture.

We see many advantages with UTMs – in reduced complexity, reduced cost, and reduced overall maintenance. Astaro is a unique reality in the world of UTMs because it addresses the only potential drawback of this category of security devices: the complexity of management, which pertains to network security in general, not just to UTMs. Astaro lets you avoid this drawback by providing a managed solution.