ISPCON99 -- Trust No One
Jordan Ritter, Netect/Bindview, Inc.

o Foreword

Security begins with knowledge. The inverse is also true: vulnerability
begins with ignorance. What you don't know puts you and your business in a
precarious position, at odds with a type of person who makes it a point to
capitalize on and exploit exactly those things.

If security were a game of absolutes, the goal would be to know everything:
exactly how your systems work and interact, exactly what your software does
and doesn't do, exactly what deficiencies lie dormant in your configurations
and policies, exactly what your users are doing at every moment. But this
goal is unrealistic, and thus security isn't about absolutes.

In actuality, the game of computer security is essentially one of
deterrence. It's about implementing policies and configuring hosts and
networks so as to maximize the difficulty a would-be attacker encounters,
weighed against the overhead and inconvenience those measures impose on
legitimate users. Realistically speaking, it's impossible to know every
permission problem, buffer overflow, or other vulnerability a network has
at any one point in time, but there are several multilayered approaches to
setting up a network infrastructure that will present formidable obstacles
to anyone desiring to subvert it.

By and large, most of the current generation of crackers are opportunists;
they won't spend any amount of time focused on a target unless they see an
obvious opportunity. If you hide, block, or close these "opportunities",
chances are the cracker will lose interest, and thus won't pose an
immediate threat.

The important thing to remember is that no single conceptual understanding,
nor any single piece of software, will make your network impervious to
attack. Computer security begins with knowledge, so having knowledgeable,
competent administrators is really the first, necessary step in building a
good line of defense.

o Methodology

So, the goal is to secure an ISP. For this we have two main approaches. One
entails preventive measures, consisting of configuration changes and policy
decisions; the other takes more proactive measures, seeking to discover
vulnerabilities through the use of security scanners and IDS software.

It is very important to understand that running security scanners for
periodic audits is useful, but should not be relied upon as the primary
method of detecting vulnerabilities within a system or network. Proper
configuration, sensible policies, and keeping systems up to date with the
latest patches should be the primary modus for staying secure.

o Preventive Measures

There are two main areas to focus on when thinking about securing a
network: the network itself, targeting the equipment that routes network
traffic, and host configuration, including any policies that govern usage
or activity. Let's first address the network.

o Network

o Incoming Traffic -- protecting yourself and your customers

From an ISP's perspective, it is very important to consider what
capabilities you wish to give your customers. This should be done with the
understanding that your potential cracker is not necessarily someone
outside your network; he may well be inside it. Note that 'incoming
traffic' refers to traffic that crosses into the actual network from its
backbone provider.

The design of a packet filtering methodology for internally routed traffic
is fundamentally complex for ISPs, and entirely topology dependent. Since
it is difficult to generalize about such things, it will not be discussed
here.

o Blocking packets with SYN flag set

Depending on how strict the implemented policy is, it makes sense to block
one or more of the following incoming connections:

  o NetBIOS (ports 135 and 139, tcp and udp)
  o rpc/portmapper (port 111, tcp and udp)
  o FTP (port 21, tcp), WWW (port 80, tcp), SMTP (port 25, tcp)
  o related services, e.g. RoadRunner (Time Warner)
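As a minimal sketch of what such blocking might look like, here is a set of
ipchains rules for a Linux 2.2 box acting as a filtering router. The
interface name (eth0, assumed here to face the backbone) and the choice of
ports are illustrative only; adapt them to your own topology and policy.

  # Drop and log inbound TCP connection attempts (-y matches packets
  # with only the SYN flag set) to NetBIOS and the portmapper:
  ipchains -A input -i eth0 -p tcp -y -d 0.0.0.0/0 135 -l -j DENY
  ipchains -A input -i eth0 -p tcp -y -d 0.0.0.0/0 139 -l -j DENY
  ipchains -A input -i eth0 -p tcp -y -d 0.0.0.0/0 111 -l -j DENY

  # UDP has no handshake to key on, so drop those datagrams outright:
  ipchains -A input -i eth0 -p udp -d 0.0.0.0/0 135 -j DENY
  ipchains -A input -i eth0 -p udp -d 0.0.0.0/0 139 -j DENY
  ipchains -A input -i eth0 -p udp -d 0.0.0.0/0 111 -j DENY

The same rules translate readily into the ACL syntax of whatever router you
actually deploy at the border.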
o Blocking malformed packets

There are several well known Denial of Service attacks that can be detected
and blocked, thereby protecting yourself and your customers from DoS
attacks originating outside the ISP. Be aware, however, that this can be a
very expensive operation: a router with this capability must reassemble
every fragmented packet it receives, which can decrease its overall
efficiency significantly.

o Outgoing Traffic -- protecting everyone else!

o Source Address spoofing

Another important and very often overlooked filter rule is one that
eliminates source address spoofing. As an ISP, you know, at some level
within your network, which IP addresses are yours. By dropping any outbound
traffic that doesn't originate from a known source within the ISP, you
effectively prevent anonymous, difficult- if not impossible-to-trace
activity from leaving your network and causing harm to others. A sketch of
such an egress filter follows.
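Again as a minimal, hypothetical sketch in ipchains: suppose your allocated
address space is 192.0.2.0/24 (a placeholder; substitute your own blocks)
and eth0 faces the backbone. Because rules match in order, anything not
explicitly permitted by the first rule falls through to the second and is
dropped:

  # Let traffic out only if it claims a source address we actually own:
  ipchains -A output -i eth0 -s 192.0.2.0/24 -j ACCEPT

  # Anything else leaving via the backbone interface is spoofed;
  # log it and drop it:
  ipchains -A output -i eth0 -l -j DENY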
o Blocking malformed packets

The same malformed-packet filtering discussed above applies in the outgoing
direction as well, keeping DoS traffic generated inside your network from
reaching the rest of the world.

o Pros

  o Customers are protected from their own ignorance
  o Limits the range of anomalous activity that can happen
  o Monitoring traffic and detecting anomalous behaviour is simplified

o Cons

  o Strict filter policies are sometimes difficult to devise, since they
    require a decisive understanding of the services to be provided
  o If a strict filter policy is implemented and anomalous activity is
    logged, the overhead involved in dealing with each event can be time
    consuming
    -> With the number of probes occurring in this day and age, it is
       unrealistic to respond to all of them
  o When limiting anything in a blanket fashion, there will always be
    people who are unhappy about the limitation (power users)

While it isn't possible to be prepared for everything, one can still limit
the number of things that provide an avenue of attack. It is worth noting
that the most successful filter policies operate under a 'deny all, allow
some' philosophy.

Overall, packet filtering is a very good first defense against outside
attack, and thus should be well thought out and thorough. However, packet
filtering doesn't solve or remove vulnerabilities -- it only serves to
block or hide them, and it won't protect you from internal attack. Relying
on packet filtering alone to secure your network is therefore a
fundamentally bad idea.

o Host Configuration and Policy Management

Thorough and secure host configuration is by far the single most important
preventive measure an ISP can undertake to protect its own systems from all
sources of attack. Knowledge is key, again: know the software you have and
use. Take a functional point of view, and consider the differences between
system services and informational services. Recall the 'deny all, allow
some' philosophy, and apply it to system installation and configuration.
Know what services you plan to provide, start with nothing (deny all), and
put in only what you plan to make available (allow some).

o Don't install software that isn't used

The tendency of the vast majority of computer users, be they ISP,
corporate, business, or personal, is to install the default configuration
and software included with whichever OS they choose. This in and of itself
is seriously problematic, since it typically indicates reliance on someone
else (the OS vendor, the software vendor) to make decisions on important
issues like security -- decisions the user may or may not agree with, but
more likely isn't even aware of.

Bottom line: do not install something that doesn't have a legitimate use.
If such software is installed, uninstall it; it could present a target for
attack. Even if you are convinced it couldn't be used in such a manner,
uninstall it anyway -- remember that you can't know everything, and can't
be aware of all the implications of leaving the software installed.
Besides, one extra software package lying around on your systems is one
extra package you'll have to worry about if you do encounter cracker
activity. Better safe than sorry.

o Don't run services or software that isn't used

If there is software already installed that isn't used and serves no
purpose, but you can't uninstall it, then don't run it, and make sure it
isn't run automatically on bootup by inspecting the OS's startup files. If
it runs with privilege (setuid root, or setuid anything), defuse it by
removing the setuid and executable bits. e.g. Linux: mountd/nfsd, IRIX:
ttdb

o Verify the configuration of the software that is used

This touches back upon the precept of "know your software". Any software
that is installed, whether as part of the default operating system or as an
additional package, should have its configuration examined to verify that
it actually does what is expected. Don't just install it and forget about
it. e.g. sendmail (relaying)

o Keeping up-to-date with Patches

Keeping your systems patched with all the latest security updates is
paramount to keeping them secure. OS vendors frequently release patches,
some functional, some security related. This is yet another defense against
attackers taking advantage of the things you don't know.

o Disable informational services

Services like fingerd, rusersd, rwhod, etc. are installed and enabled by
default on many operating systems. The information they provide is very
useful, especially to would-be attackers probing your systems for
information about you and your users. These services are almost always
unnecessary and rarely add any value for a business; they should be
disabled, or removed where possible. Again, this primarily echoes the first
two points: don't install software you don't need or use, and if you must,
don't make it available or run it.

To digress briefly: from a Windows perspective, the ability to use
unauthenticated, anonymous null sessions has serious security implications.
It is possible to query an NT server anonymously for sensitive information,
such as user lists and password policy information. If you manage Windows
NT boxes, removing the Guest account and disabling null sessions are key
measures to keep Windows from giving away everything it knows.

o Filesystem security

This section assumes UNIX variant operating systems. Few if any of these
concepts will apply to other OSes.

o File permissions

Scan all filesystems for setuid binaries and address their usefulness, as
well as their security implications (they are mechanisms for privilege
elevation), where possible.
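A minimal sketch of such a sweep, which should work on most UNIX variants
(the path in the chmod line is hypothetical):

  # Enumerate setuid/setgid files; -xdev keeps find on the starting
  # filesystem, so run it once per local mount point.
  find / -xdev -type f \( -perm -4000 -o -perm -2000 \) -print

  # Defuse an unneeded privileged binary by removing its setuid and
  # executable bits, per the advice above:
  chmod u-s,a-x /usr/sbin/unneeded-daemon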
If you're not sure what is installed on a computer and what isn't, this is
a very important first step to take in risk assessment.

o Filesystem permissions

  o Know which areas are world accessible and which aren't
  o Know which areas are world writable and which aren't
  o Make copious use of filesystem mount flags, specifically noexec,
    nosuid, nodev, and ro. Remember 'deny all, allow some': limit first,
    then allow what's necessary

o NFS

  o Consider the 'root_squash' mechanism
  o Examine exports, and export only to the intended systems

o Backups

Worth a note: full, frequent backups are essentially your last-ditch
recourse when a system is subverted or destroyed. Any business that doesn't
protect its investment by backing up critical systems will find itself in a
seriously difficult position should its security be subverted and
destruction occur.

o Logging Policies

The ability to effectively manage systems correlates almost directly with
the ability to understand what's happening on them. Since realtime system
monitoring and analysis is in almost every case impractical and
unrealistic, log files represent the only real record of activity
available; integrity, availability, and efficiency thus all become very
important factors to consider when devising a logging policy.

This section assumes UNIX variant operating systems. Few if any of these
concepts will apply to other OSes.

o Process Accounting

Some UNIX variants include a process accounting facility, which logs every
process the system runs to a file. This becomes a very useful tool when
attempting to reconstruct the order of events on a system.

o syslogd

Depending on the particular variant of UNIX, some form of syslogd will be
present. There are several measures to take that will greatly improve data
integrity, data manageability, and the efficiency of analysis:

  o Familiarity with the logging services provided
    o Where are the log files located? Can they be protected?
      (read permissions)
    o Where is the configuration located? Can it be protected as well?
      (read permissions, relocation)

o Reconfiguration

Generally speaking, syslogd and related services receive and record a great
deal of information, some of which will invariably be unimportant, at least
as regards security. Filtering through this information is always time
consuming when done by a human being, and even more so at large and active
sites such as ISPs. It is important for the individual managing these logs
to understand what logging facilities are available and how to use them to
filter and divide up the important information.

o Integrity

Consider centralizing the logging by designating a single remote machine to
receive and accumulate all logs (known as remote logging). This is highly
recommended: if a system is attacked and/or subverted, any logs generated
in the process will be recorded elsewhere, hopefully in a place the
attacker cannot reach. Also consider the security of the log box itself;
one good approach is to remove ALL services and means of access except for
console login. That is one extreme, of course, but the important point is
that this box should not do much of anything else that might interest or
give leverage to a potential attacker. Again, this is another example of
the 'deny all, allow some', functional approach.

Normally, when using remote logging, the recipient syslogd trusts its
source. This opens up a Denial of Service vulnerability, since an attacker
could flood the remote syslogd, occupying its time or filling up the
destination log files. There is, however, an alternative called secure
syslogd, which augments the standard syslog service by using encryption
when transmitting information across the wire. For ordinary syslogd, a
remote logging configuration is sketched below.
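A minimal sketch, assuming a dedicated machine named 'loghost' (a
placeholder name): on each monitored host, /etc/syslog.conf forwards a copy
of everything to the log box. Note that many syslogd implementations
require tabs, not spaces, between the two fields, and that the Linux
sysklogd, for one, must be started with -r on the receiving side before it
will accept messages from the network.

  # /etc/syslog.conf fragment on each monitored host:
  # forward everything to the central log host
  *.*                             @loghost

  # keep security-relevant messages locally as well
  auth.*                          /var/log/authlog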
o Host ACLs

Most modern UNIX variants include Wietse Venema's tcp_wrappers package,
featuring a program called 'tcpd' which matches access rules and decides
whether or not to allow a host access to a specific service. Also included
in the package is a library called libwrap, which adds this host ACL
methodology to any software compiled against it. This is another very
useful tool for managing access control, and its use is highly recommended.

-> Always check the PGP/MD5/hash sums of packages you download off the
   net, especially those you plan to trust for your own security!

o Password policies

Another cornerstone of security is the password policy. Theoretically, a
good policy requires regular change of passwords (a maximum password age)
with a delay in between changes (a minimum password age), and enforces some
form of dictionary checking of new passwords to prevent the choice of bad
phrases.

While theoretically almost a necessity, is it realistic to require this?
Determining useful as well as realistic password policies proves to be a
quandary for most ISPs, and is generally left unaddressed. Convenience and
quality of service are two issues that generally make a strict password
policy at an ISP inconvenient, impractical, at best annoying, and thus
fairly undesirable. Still, a policy that forces users not only to choose
good passwords but to keep changing them regularly remains, for all intents
and purposes, a very good measure to undertake, as it presents yet another
obstacle for a potential attacker to deal with.

As explained before, security begins with knowledge, and the implementation
of any of these suggestions requires a competent system administrator or
security expert. Good security does not come without a cost, but it is
surely not without obvious benefits.

o Proactive Measures

Beyond the standard preventive measures taken to improve overall security,
there are also software packages available that provide a variety of
functionality. Some detect vulnerabilities in systems automatically, and
are generally referred to as security scanners; others monitor networks and
hosts for extraneous activity, looking for indications of attempts to probe
or subvert a system.

o Penetration/Vulnerability analysis tools

  o HackerShield
  o ISS
  o CyberCop

o Packet analysis tools

  o NFR, BlackICE, etc.
  o tcplogd, scanlogd, icmpinfo

o Conclusion

The cost of implementing good security policies often seems expensive on
the surface, and market demand can push businesses to focus on the short
term, passing over the subject of security. Sometimes compromises are made
and improvements done here and there, while other issues are glossed over
and forgotten. The most important realization that can come to you is that
your opponent, your would-be attacker, is looking for the one thing you
forgot. And the number of his kind is growing.

Be afraid. Be very afraid.