[This article originally appeared in Volume 2.2 of the ACM netWorker magazine. Copyright © ACM (http://www.acm.org/). This article may be reprinted as long as this copyright notice appears with it.]
The Internet has grown dramatically over the past 10 years — what used to be a small, low-key academic community has become an indispensable business tool involving millions of users in nearly every country on the planet. Business units cross national borders, business information travels at light-speed, and business time is measured in “Internet Time.”
As the Internet has matured, however, so have the threats to its safe use, and so must the security paradigms used to enable business use of the Internet. A single-dimensional security approach is no longer adequate; a multi-dimensional approach is mandatory these days to discourage ever-more-sophisticated threats to the network.
Originally, the main concern about connecting to the Internet was in the connection itself. Access was most important, and security was considered unimportant, unnecessary or already sufficient. To connect to the Internet, an organization needed an IP router (such as those manufactured by Cisco, Bay Networks, and Ascend), a connection point (provided today by Internet service providers) and little else. Access to the Internet meant access from the Internet, but that was OK when the Net was primarily a research network.
On Nov. 2, 1988, everything changed forever. A program, later to be known as the Morris Worm, attacked thousands of computers on the Internet. This “worm” tunneled its way from one computer to another, sliding through security holes in commonly used programs. A bug in the worm caused it to bore into some systems more than once, causing those computers to run slower and slower. Aside from giving users a healthy dose of paranoia about reconnecting to the Internet, this incident brought the need for network security into the spotlight. Suddenly, people were thinking and talking about vulnerabilities and related attacks from the bad guys in cyberspace.
These concerns were well justified. The password-capture attacks of the winter of 1994, for instance, affected tens of thousands of user accounts. Using “sniffers” (tools that sift through information on a network to pick out interesting tidbits, in this case passwords), attackers were able to capture user names and passwords for people accessing remote computers via the Internet, exploiting a vulnerability that was well known but had never before been exercised on such a scale.
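The sniffer attacks worked because the protocols of the day carried credentials in cleartext. A minimal sketch of the idea, using a fabricated capture fragment (the session bytes and function name here are hypothetical, for illustration only):

```python
# Toy illustration: credentials sent over a cleartext protocol such as
# Telnet are readable by anyone who can observe the raw bytes in transit.

def extract_password(captured: bytes) -> str:
    """Pull out the text that follows a 'Password: ' prompt in a capture."""
    text = captured.decode("ascii", errors="replace")
    marker = "Password: "
    start = text.index(marker) + len(marker)
    end = text.index("\r\n", start)
    return text[start:end]

# A fabricated fragment of a cleartext login session:
session = b"login: alice\r\nPassword: s3cret\r\nLast login: ...\r\n"
print(extract_password(session))  # prints "s3cret" -- the password, in the clear
```

Real sniffers of the era simply watched network interfaces for this kind of traffic; no cryptanalysis was required, because nothing was encrypted.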
In the winter of 1995, a successful attack used the novel tactic of posing as another Internet computer. In this technique, called IP spoofing (originally described by Bell Labs’ R.T. Morris in 1985 and publicized by Steve Bellovin in the April 1989 issue of Computer Communication Review), the attacker temporarily knocks a trusted computer off-line in the middle of a network transaction with another trusted computer, then poses as the computer that has been taken down.
Other attacks took advantage of previously unknown system vulnerabilities. In 1996, denial-of-service attacks were in the security news; in this scenario, the attacker floods networked computers with traffic in an attempt to make them stop functioning or become too busy to service legitimate requests. The SYN Flood attack and the ominously named “Ping of Death” fall into this category.
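The SYN Flood works by filling a server’s fixed-size queue of half-open TCP connections with handshakes that are never completed. A toy model of that queue (the backlog size and function here are illustrative assumptions, not any real TCP implementation):

```python
BACKLOG = 5   # hypothetical half-open (SYN_RCVD) queue size for a listener
half_open = []  # connections that have sent SYN but never completed the handshake

def handle_syn(src_ip: str) -> bool:
    """Accept a SYN only if there is room left in the half-open queue."""
    if len(half_open) >= BACKLOG:
        return False  # queue full: new connections are turned away
    half_open.append(src_ip)
    return True

# The attacker floods the server with SYNs from spoofed addresses that
# never answer, so queue entries are never completed and removed.
for i in range(BACKLOG):
    handle_syn(f"10.0.0.{i}")

print(handle_syn("192.168.1.99"))  # prints False: a legitimate SYN is refused
```

Real servers eventually time these entries out, but an attacker who sends SYNs faster than the timeout reclaims slots keeps the queue full, which is exactly the denial of service.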
Internet-based attacks on Web sites have also become commonplace. The CIA, the U.S. Department of Justice, the Air Force, NASA, ValuJet, MGM Studios and the Nation of Islam have all had their Web sites vandalized. Today, new kinds of attacks pop up frequently, and recurrences of past attack types are attempted constantly.
These changes in the Internet community and its security needs prompted the first bona fide defense measures in the early 1990s. Internet firewalls — gateways controlling access between one network and all the others — became a must-have for any organization connecting to the Net. The security policy behind these early firewalls was simple: “Allow anyone ‘in here’ to get out, but keep people ‘out there’ from getting in.” Today, however, the growth and increasing complexity of Internet business use are mandating a shift from defensive to enabling technologies, and from single-dimensional to multi-dimensional security techniques.
On the Internet today, single-layer security is the most common approach. The layer most often relied upon is the Internet firewall; until recently, firewalls were often the only security mechanism employed. Figure 1 pictures the typical connection to the Internet: a router connects the site to the Internet, and a firewall protects the private network from Internet-based attack.
As stated by the author and Internet firewall expert Marcus Ranum in a paper entitled “A Network Perimeter With Secure External Access” presented in February 1994, “The rationale for installing a firewall is almost always to protect a private network against intrusion. The purpose of an Internet firewall is to provide a single point of defense with controlled and audited access to services, both from within and without an organization’s private network.” Internet firewalls are controlled gateways between networks, or as Bell Labs’ Bellovin states, “Firewalls are barriers between ‘us’ and ‘them’ for arbitrary values of ‘them.’”
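The early firewall policy quoted above can be sketched as a default-deny filter that passes traffic only when it originates inside the protected network. The address range and function below are hypothetical, chosen just to make the rule concrete:

```python
# Minimal sketch of the classic policy: "allow anyone 'in here' to get
# out, but keep people 'out there' from getting in."
import ipaddress

INSIDE = ipaddress.ip_network("10.0.0.0/8")  # hypothetical private network

def permit(src: str, dst: str) -> bool:
    """Default-deny filter: pass a packet only if its source is inside."""
    return ipaddress.ip_address(src) in INSIDE

print(permit("10.1.2.3", "198.51.100.7"))  # True: inside -> out is allowed
print(permit("203.0.113.5", "10.1.2.3"))   # False: outside -> in is blocked
```

Production firewalls of course track connection state and filter per-service, but the single choke point with one simple rule is the essence of the perimeter model the article describes.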
Way back when — about three years ago in Internet time — the services for which people used the Internet were simple and few in number: file transfer, remote terminal access, electronic mail and a couple of others. Today, we add to this the World Wide Web’s capabilities for information and commerce, news, weather, music, telephony, audio and video conferencing, database access and file sharing, with new features cropping up almost daily. Many of these become must-have services, and each has its own security concerns and weaknesses. With these changes, the frequency and sophistication of Internet attacks have increased. A firewall is a necessary part of the overall security of a corporate network, but alone it is insufficient to provide adequate network security for businesses connected to the Internet.
A risk analysis is an organization’s review of potential threats to its network and the probability of those threats occurring. Typically, a risk analysis attempts to answer such questions as “What am I trying to protect and what is it worth?” and “What are the threats, vulnerabilities and risks?” You ask a lot of “What if …?” and “What would happen if …?” questions. A risk analysis ensures that a security policy matches reality.
After the business-needs analysis and risk analysis are complete, a corporation can deploy an Internet security policy. This policy states what is permitted and what is denied when using the Internet, and stipulates which methods and mechanisms are used to protect the private network.
The methods and mechanisms employed usually point to commercial off-the-shelf products, but may require homegrown software. They probably will include Internet firewalls, audit tools, encryption products (for Virtual Private Networks and application-level privacy, such as for e-mail), and anti-virus software.
Research is often the bailiwick of the security professional. Researchers postulate new threats and invent counter-measures for them, while reacting to actual new attacks in the cyberspace battlefield.
An organization’s security policy should also prescribe analysis. Many security devices keep logs of events. If alarms or warnings about security events are ignored or turned off, they are useless — like propping open a locked door on a secure facility because it is too inconvenient to keep using an electronic passkey to unlock the door. So security audit logs and break-ins, both attempted and successful, must be analyzed. This analysis may reveal needed changes in the security policy and procedures, or in the devices deployed to protect a network.
Virtual Private Networks (VPNs) are used to prevent eavesdropping on communications. They have been on the scene for two or three years but are only recently coming into common business use. VPNs employ encryption so that only the parties to a conversation (computers communicating with other computers, for example) can understand it.
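The core idea is that both endpoints share a key, traffic is scrambled before it crosses the Internet, and only a holder of the key can unscramble it. A toy sketch follows; the hash-based keystream here is an illustration only, not a vetted cipher, and all names are invented for the example:

```python
import hashlib
from itertools import count

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream: SHA-256 of key||counter. Illustration only --
    real VPNs use vetted ciphers and key-exchange protocols."""
    out = bytearray()
    for i in count():
        out += hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        if len(out) >= n:
            return bytes(out[:n])

def encrypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying the same function again decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

shared = b"secret shared by both VPN endpoints"
packet = b"confidential business data"
wire = encrypt(shared, packet)            # what an eavesdropper sees
print(encrypt(shared, wire) == packet)    # prints True: the peer recovers it
```

An eavesdropper with a sniffer sees only `wire`; without the shared key, the captured bytes are useless, which is precisely the protection against the password-capture attacks described earlier.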
Application-level encryption has also been around for years, but only recently has it found its way into products such as file and e-mail encryption software. This type of encryption uses emerging industry standards such as PGP/MIME and S/MIME to let users “seal” their files and e-mail messages against snoopers.
User authentication — identification of an individual — can combine with access control mechanisms as part of an effective security scheme. User authentication tools have been in common use for the past five years in organizations that are serious about security. Tools such as these, using cryptographic-based authentication tokens and access control lists, provide protection against unauthorized access to services and data.
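One way such cryptographic tokens can work is challenge-response: the server issues a random challenge, and the user’s token answers with a keyed hash of it, so the secret itself never crosses the network. The scheme below is a hypothetical sketch in that spirit (HMAC-based, with invented names), not a description of any particular product:

```python
import hashlib
import hmac
import secrets

def respond(token_key: bytes, challenge: bytes) -> str:
    """Compute the token's answer: a keyed hash over the challenge."""
    return hmac.new(token_key, challenge, hashlib.sha256).hexdigest()

server_key = b"key shared by the server and the user's token"
challenge = secrets.token_bytes(16)        # fresh random challenge per login

response = respond(server_key, challenge)  # computed on the user's token
expected = respond(server_key, challenge)  # recomputed by the server
print(hmac.compare_digest(response, expected))  # prints True: authenticated
```

Because each challenge is fresh, a sniffer who records one exchange cannot replay it later, unlike a captured static password.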
Content screening software, a relatively new area of commercial products, and the old standby anti-virus software are still other prevention mechanisms. On a desktop or a firewall, they prevent viruses from spreading and allow an organization to control what kinds of content can be brought in from the Internet. For example, a firewall with content screening can limit the downloading of Java or ActiveX code to only approved users and sites, or it can block viruses before they enter the network.
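The Java/ActiveX example above amounts to a per-user, per-site allow list consulted before active content is passed through. A sketch of that rule, with all users, sites, and the content-type label invented for illustration:

```python
# Hypothetical content-screening rule: permit Java applet downloads only
# for approved users fetching from approved sites; pass other content.
APPROVED_USERS = {"alice"}
APPROVED_SITES = {"partner.example.com"}

def allow_download(user: str, site: str, content_type: str) -> bool:
    if content_type != "application/java":
        return True  # non-active content is handled by other checks
    return user in APPROVED_USERS and site in APPROVED_SITES

print(allow_download("alice", "partner.example.com", "application/java"))  # True
print(allow_download("bob", "evil.example.net", "application/java"))       # False
```

Virus blocking works the same way in structure, with the allow-list test replaced by a scan of the file contents against known virus signatures.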
Dedicated computer and network intrusion detection devices have been used on government-intelligence and military R&D networks for five years, but commercial products have become available only recently. Typical systems read a source of data — network traffic, logging or system audit trail information — and take appropriate actions.
Network and system scanners are two other types of detection tools. Network scanners survey network interfaces such as firewalls and Web servers for insecure services or other known vulnerabilities. System scanners do the same for server systems, looking for accounts without passwords, system files that are writable by anyone, and dangerous services or practices (such as a system allowing all of its files to be accessible from the Internet). Such tools are usually run periodically, and produce reports that indicate the security health of a system or network.
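A system scanner’s checks are mostly simple permission and configuration tests applied file by file. As one concrete example, here is a minimal sketch (function and file names invented) of the world-writable-file check mentioned above:

```python
import os
import stat
import tempfile

def world_writable(paths):
    """Report files whose permission bits allow writes by any user."""
    flagged = []
    for p in paths:
        if os.stat(p).st_mode & stat.S_IWOTH:
            flagged.append(p)
    return flagged

# Demonstration on two temporary files, one deliberately misconfigured.
with tempfile.TemporaryDirectory() as d:
    safe = os.path.join(d, "safe.conf")
    risky = os.path.join(d, "risky.conf")
    for p in (safe, risky):
        open(p, "w").close()
    os.chmod(safe, 0o644)
    os.chmod(risky, 0o666)  # world-writable: a scanner should flag this
    flagged = world_writable([safe, risky])
    print(flagged)  # only the risky.conf path appears
```

A real scanner runs hundreds of such checks on a schedule and rolls the findings up into the periodic health report the article describes.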
Misuse and anomaly detectors often run in real time, constantly checking a network or system — a file server, Web server, or perhaps a database or Notes server — for patterns of misuse or other inconsistencies. For example, they can be set up to watch over a Web server to ensure that key Web pages are not modified, or they can be configured to oversee an internal private network for unauthorized or never-before-seen traffic. Like motion detectors in a building, they constitute a second line of defense sitting behind the locks on the doors and windows.
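The Web-page-watching example reduces to integrity checking: record a fingerprint of each key page, then re-check it on a schedule and raise an alarm on any change. A minimal sketch (page names and content are invented):

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Cryptographic digest of a page's bytes."""
    return hashlib.sha256(content).hexdigest()

# Record a baseline fingerprint for each key page when it is known-good.
baseline = {"/index.html": fingerprint(b"<html>Welcome</html>")}

def page_intact(path: str, current: bytes) -> bool:
    """True if the page still matches its recorded baseline."""
    return fingerprint(current) == baseline[path]

print(page_intact("/index.html", b"<html>Welcome</html>"))  # True: unmodified
print(page_intact("/index.html", b"<html>Hacked!</html>"))  # False: raise alarm
```

Because the detector compares against a stored known-good state rather than inspecting traffic, it catches a defacement no matter how the attacker got in, which is what makes it a second line of defense.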
In addition to being prevention mechanisms, content screening and anti-virus software are also detection tools; they look at data and memory activity on a system, searching for known computer viruses and virus-like activity.
A perimeter defense is usually the easiest to administer. Like an alarmed fence and a guard shack around a gated community, it focuses the “zone of risk,” as Marcus Ranum calls it, from every host on a network to a few selected gateways. All the eggs are put into one basket, and that basket is watched very carefully.
Even if anti-virus software, for example, is put on every desktop in an organization, it may not be possible to ensure that every desktop is up-to-date. Security administrators can more easily monitor a small number of gateway machines than hundreds or thousands of desktop machines.
Typically, organizations start with desktop security such as anti-virus software. As they expand to Internet connectivity, perimeter defense mechanisms such as firewalls are deployed. As more sophisticated network access is needed, user authentication devices and VPNs are put in place. Intrusion and misuse detection devices are often next. Then, firewalls and intrusion detectors are spread across the internal network as access criteria become more granular.
The mushrooming growth of the Internet is resulting in an expansion of possibilities for corporations that are serious about global business. But these companies must be equally serious about a well-thought-out, multi-dimensional approach to network security.