Slammer Worm: The Fastest Spreading Bombshell On The Internet
SEMINAR REPORT
www.Fullinterview.com
Slammer's spread caused network outages and such unforeseen consequences as canceled airline flights, interference with elections, and ATM failures (see Figure 1).
Figure 1. The geographical spread of Slammer in the 30 minutes after its release. The diameter of each circle is a function of the logarithm of the number of infected machines, so large circles visually underrepresent the number of infected cases in order to minimize overlap with adjacent locations. For some machines, we can determine only the country of origin rather than a specific city.
Slammer's most novel feature is its propagation speed. In approximately three minutes, the worm achieved its full scanning rate (more than 55 million scans per second), after which the growth rate slowed because significant portions of the network had insufficient bandwidth to accommodate more growth. Although Stuart Staniford, Vern Paxson, and Nicholas Weaver had predicted rapid-propagation worms on theoretical grounds, Slammer provided the first real-world demonstration of a high-speed worm's capabilities. By comparison, Slammer was two orders of magnitude faster than the Code Red worm, which infected more than 359,000 hosts on 19 July 2001 with a leisurely population doubling time of 37 minutes. While Slammer had no malicious payload, it caused considerable harm by overloading networks and disabling database servers. Many sites lost connectivity as local copies of the worm saturated their access bandwidths. Although most backbone providers appeared to remain stable throughout the epidemic, there were several reports of Internet backbone disruption. If the worm had carried a malicious payload, attacked a more widespread vulnerability, or targeted a more popular service, its effects would likely have been far more severe.
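The scale of these numbers can be sanity-checked with back-of-envelope arithmetic. The sketch below assumes each scan is a single UDP packet of roughly 404 bytes on the wire and that an infected host sits on a 100 Mbit/s link; both figures are assumptions for illustration, not measurements from the text above.

```python
# Back-of-envelope estimate of Slammer's bandwidth appetite.
# Assumed: ~404 bytes per scan packet on the wire, 100 Mbit/s access links.
PACKET_BYTES = 404
PACKET_BITS = PACKET_BYTES * 8          # 3,232 bits per scan

# A single host scanning as fast as a 100 Mbit/s link allows:
per_host_scans = 100_000_000 / PACKET_BITS
print(f"per-host scan rate: {per_host_scans:,.0f} scans/s")    # roughly 31,000 scans/s

# Aggregate traffic at the observed peak of 55 million scans/s:
aggregate_bps = 55_000_000 * PACKET_BITS
print(f"aggregate traffic: {aggregate_bps / 1e9:.0f} Gbit/s")  # ~178 Gbit/s
```

Even a single host at these rates can exhaust a typical site's uplink, which is consistent with the access-bandwidth saturation described above.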
Figure 2. The Code Red worm was a typical random-scanning worm. This graph shows Code Red's probe rate during its re-emergence on 1 August 2001, as seen on one Internet subnetwork, matched against the random constant spread (RCS) worm behavior model.
Slammer's spread initially conformed to the RCS model, but in the later stages it began to saturate networks with its scans, and bandwidth consumption and network outages caused site-specific variations in its observed spread. Figure 3 shows a data set from the Distributed Intrusion Detection System project (Dshield) compared to an RCS model. The model fits extremely well up to a point where the probe rate abruptly levels out. Bandwidth saturation and network failure (some networks shut down under the extreme load) produced this change in the probe's growth rate.
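The RCS model referenced here can be sketched in a few lines: each infected host scans random addresses at some rate r, so with N susceptible hosts in a 2^32 address space the infected fraction a(t) follows the logistic equation da/dt = K·a·(1 - a) with K = r·N / 2^32. The per-host scan rate r and the function name below are illustrative assumptions; N matches the roughly 75,000 infections reported later in this report.

```python
# Minimal sketch of the random constant spread (RCS) worm model.
# Assumed parameters: r = 4,000 scans/s per worm (illustrative),
# N = 75,000 susceptible hosts in the 2^32 IPv4 space.

def simulate_rcs(r=4000.0, n_susceptible=75_000, a0=1 / 75_000,
                 dt=0.1, t_max=600.0):
    """Integrate da/dt = K*a*(1-a), K = r*N/2^32, with a simple Euler step."""
    k = r * n_susceptible / 2**32
    a, t = a0, 0.0
    trajectory = [(t, a)]
    while t < t_max:
        a += k * a * (1 - a) * dt
        t += dt
        trajectory.append((t, a))
    return trajectory

traj = simulate_rcs()
# Growth is exponential early on, then saturates as (1 - a) shrinks; in
# Figure 3 the observed curve levels out even earlier because of bandwidth
# saturation and network failures rather than target exhaustion.
```

Under these assumed parameters the doubling time is about ten seconds, and the population saturates within a few minutes, matching the qualitative shape of the measured data.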
Figure 3. The early moments of the Distributed Intrusion Detection System data set, matched against the behavior of a random-scanning worm.
A TCP-based worm such as Code Red could also use a bandwidth-limited scanner by sending TCP SYNs at maximum rate and responding automatically to any replies in another thread. However, this would require more effort to implement correctly, because it requires crafting raw TCP packets instead of simply using existing system calls.
These mistakes significantly reduce the generator's distribution quality. Because b is even and the register is always 32-bit aligned, the least-significant two bits are always zero. Interpreted as a big-endian IP address (the most significant value in the sequence is stored at the lowest storage address), this ensures that the 25th and 26th bits of the scan address (in the upper octet) remain constant in any worm execution instance. A similar weakness extends to the 24th bit of the address, depending on the uncleared register's value. Moreover, with the incorrectly chosen increment, any particular worm instance cycles through a list of addresses significantly smaller than the actual Internet address space; many worm instances will therefore never probe our monitored addresses, because none of those addresses are contained in the instance's scan cycle. Combined with the size of the monitored address space, these mistakes prevent us from accurately measuring the number of infected hosts during the first minutes of the worm's spread. Because the last two bits of the first address byte never change, Slammer either includes or excludes entire /16 address blocks (large contiguous address ranges) from a given cycle. This enabled us to assemble lists of the address blocks in each cycle for each value of the salt (the cycle structure depends on the salt value). Fortunately, if the initial seed is selected uniformly at random, the probability of choosing a particular cycle is directly proportional to the size of that cycle, so across many randomly seeded worm instances all Internet addresses are probed equally. Thus, we can accurately estimate the worm's scanning rate during the infection process by monitoring relatively small address ranges, and we can estimate the percentage of the Internet that is infected because the probing covers all Internet addresses.
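The frozen-bits weakness can be demonstrated with a linear congruential generator of the form x' = (a·x + b) mod 2^32. The multiplier below is the one from Microsoft's rand(); the even increment is an assumed stand-in for the worm's broken constant, chosen only so that a ≡ 1 and b ≡ 0 (mod 4), not taken from the worm's binary.

```python
# Sketch of the PRNG weakness: an even increment freezes the low state bits.
A = 214013          # multiplier from Microsoft's rand(); A ≡ 1 (mod 4)
B = 0xFFD9613C      # assumed even increment, divisible by 4
MASK = 0xFFFFFFFF

def lcg(seed, n):
    """Yield n successive 32-bit states of x' = (A*x + B) mod 2^32."""
    x = seed
    for _ in range(n):
        x = (A * x + B) & MASK
        yield x

# Because A ≡ 1 (mod 4) and B ≡ 0 (mod 4), x' ≡ x (mod 4): the two
# least-significant state bits never change, so the corresponding bits of
# every scanned address stay fixed for the lifetime of a worm instance.
for seed in (0x12340000, 0xDEADBEEF, 7):
    low_bits = {x & 0b11 for x in lcg(seed, 10_000)}
    assert low_bits == {seed & 0b11}
```

With a properly chosen odd increment, the generator would have full period and the low bits would vary; the even increment is precisely what pins two bits of every probed address.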
If not for the higher-quality numbers in the initial seed, these flaws would prevent the worm from reaching large portions of the Internet address space, no matter how many hosts were infected. For the same reason, these flaws also could bias our measurements: even though the data come from several different networks, there is a small chance that those particular networks were disproportionately more or less likely to be scanned. However, the worm seeds each generator with the number of milliseconds since system boot, obtained from the operating system service GetTickCount, which should provide sufficient randomization to ensure that, across many instances of the worm, at least one host will probe each address at some point in time. We therefore feel confident that the risk of bias in our measurements is similarly minimized. Additionally, the "best" random bits produced by GetTickCount are the least-significant bits, which determine which cycle is selected for a given salt. An interesting feature of this PRNG is that it makes it difficult for the Internet community to assemble a list of the compromised Internet addresses. With earlier worms, we could collect a list of all addresses that probed into a large network; with Slammer, we would need to monitor networks in every cycle of the random-number generator, for each salt value, to have confidence of good coverage.
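The cycle structure that makes such monitoring hard can be illustrated on a small-modulus analog (2^16 states instead of 2^32): for a fixed increment ("salt"), the map x → (a·x + b) mod M is a permutation, so the state space splits into disjoint cycles, and an instance seeded inside one cycle never scans addresses belonging to another. The reduced constants below are illustrative choices, not the worm's values.

```python
# Small-modulus analog of the scan-cycle structure.  With an odd multiplier
# the map is a permutation of Z_M, so states partition into disjoint cycles.
M = 1 << 16
A = 214013 % M   # same multiplier reduced mod M (illustrative)
B = 0x613C       # an even increment, analogous to the flawed constant

def find_cycles():
    seen = [False] * M
    cycles = []
    for start in range(M):
        if seen[start]:
            continue
        cycle, x = [], start
        while not seen[x]:          # follow the permutation back to start
            seen[x] = True
            cycle.append(x)
            x = (A * x + B) % M
        cycles.append(cycle)
    return cycles

cycles = find_cycles()
# Every state lies in exactly one cycle, and because the residue mod 4 is
# invariant there are at least four disjoint cycles: a monitor covering
# addresses in only one cycle never sees instances seeded in the others.
assert sum(len(c) for c in cycles) == M
```

This is why observers would need monitored address space inside every cycle, for every salt value, to enumerate all infected hosts.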
Figure 4. Slammer's early progress as measured at the University of Wisconsin Advanced Internet Lab (WAIL) tarpit, an unused network that logs packet traffic. Scanning rate is scaled to estimate the Internet-wide scanning rate. (A transient data-collection failure temporarily interrupted this data set approximately two minutes and 40 seconds after Slammer began to spread.)
In general, people responded quickly to Slammer. Within an hour, many sites began filtering all UDP packets with a destination port of 1434 via router or firewall configuration changes. Slammer represents the idealized situation for network-based filtering: its signature easily distinguishes it, it is readily filterable on current hardware, and it attacked a port that is not generally used for critical Internet communication. Thus, almost all traffic blocked by these filters represents worm-scanning traffic. If the worm had exploited a commonly used service vulnerability (for example, DNS at UDP port 53 or HTTP at TCP port 80), filtering could have caused significant disruption to legitimate traffic,
with a resulting denial of service (DoS) more harmful than the worm itself. Figure 5 illustrates the results of filtering.
Figure 5. The response to Slammer during the 12 hours after its release, measured in several locations. Scanning rate is scaled to estimate the Internet-wide scanning rate.
Regardless of the optimal filter-efficacy conditions in this instance, we must recognize that while filtering controlled the unnecessary bandwidth consumption of infected hosts, it did nothing to limit the worm's spread: the earliest filtering began long after Slammer had infected almost all susceptible hosts. Monitors saw all recorded distinct infected IP addresses within the first 30 minutes of the worm's spread: 74,856 distinct addresses in all, spread across a wide range of domains and geographic locations. Slammer infected most of these machines in the first few minutes, but monitoring limitations prevent us from telling precisely when they were infected. We cannot observe all infected machines due to the flaws in Slammer's PRNG, but we can document a lower bound on the number of compromised machines based on the IP addresses we have recorded; the actual infection is undoubtedly larger. Tables 1 and 2 summarize these distributions.
Table 1. Geographical distribution of infected hosts.

Country          Percent of infected hosts
United States    42.87
South Korea      11.82
Unknown           6.96
China             6.29
Taiwan            3.98
Canada            2.88
Australia         2.38
United Kingdom    2.02
Japan             1.72
Netherlands       1.53

Table 2. Top-level domains of infected hosts.

Domain     Percent of infected hosts
Unknown    59.49
.net       14.37
.com       10.75
.edu        2.79
.tw         1.29
.au         0.71
.ca         0.71
.in         0.65
.br         0.57
.uk         0.57
Among the most visible failures were Washington's 911 emergency data-entry terminals and portions of Bank of America's ATM network; the 911 and ATM failures were widely reported. Inadvertent internal DoS attacks caused the large majority of these disruptions, as one or more infected machines sent out packets at their maximum possible rates. This traffic either saturated the first shared bottleneck or crashed some piece of network equipment. The bottleneck effects are obvious: a site's outgoing bandwidth is usually significantly less than a single Slammer instance can consume, so the worm's packets saturated Internet links, effectively denying connectivity to all computers at many infected sites. Equipment failures tended to be a consequence of the traffic patterns Slammer generated, although the failure details varied from device to device. Slammer's scanner produced a heavy load in three ways: a large traffic volume, a large number of packets, and a large number of new destinations (including multicast addresses). We believe this combination caused most network-equipment failures by exhausting CPU or memory resources.

If attackers can control a few machines on a target network, they can perform a DoS attack on the entire local network by using a program that mimics Slammer's behavior. Because these are "normal" UDP packets, no special privileges (such as root or system-administrator abilities) are required; attackers need only the ability to execute their program. Thus, critical networks should employ traffic shaping, fair queuing, or similar techniques to prevent a few machines from monopolizing network resources.

Although some had predicted the possibility of high-speed worms, Slammer represents the first super-fast worm released into the wild. Microsoft's SQL Server vulnerability was particularly well suited for constructing a fast worm, because the exploit could be contained in a single UDP packet.
However, techniques exist to craft any worm with a reasonably small payload into a bandwidth-limited worm similar to Slammer. Thus, we must not view Slammer's extreme speed as an aberration of the exploit's nature or of the particular protocol used (UDP versus TCP). One implication of this worm's emergence is that smaller susceptible populations are now more attractive to attackers. Formerly, attackers did not view small populations (20,000 machines or fewer on the Internet) as particularly vulnerable to scanning worms, because the probability of finding a susceptible machine in any given scan is quite low. However, a worm that can infect 75,000 hosts in 10 minutes could infect 20,000 hosts in under an hour, so vulnerabilities in less popular software now present a viable breeding ground for new worms. Because high-speed worms are no longer a theoretical threat, we need to automate worm defenses; there is no conceivable way for system administrators to respond to threats of this speed. Human-mediated filtering provides no benefit for actually limiting the number of infected machines. While filtering could mitigate the overhead of the worm's continuing scan traffic, a more sophisticated worm might stop scanning once the entire
susceptible population was infected, leaving itself dormant on more than 75,000 machines to do harm at some future point. Had the worm's propagation lasted only 10 minutes, it would likely have taken hours or days to identify the attack, and many compromised machines might never have been identified. It is therefore critical that we develop new techniques and tools that automatically detect and respond to worms. Though very simple, Slammer represents a significant milestone in the evolution of computer worms. It is sobering that worms such as Slammer preclude human-time response and, to date, deter attribution and prosecution. Slammer clearly demonstrates that fast worms are a reality; we should now consider them standard tools in an attacker's arsenal.
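The claim about smaller susceptible populations can be checked with a rough scaling argument. Under a random-scanning model, the time to sweep from one infection to near-saturation of N hosts grows like ln(N)/K with K proportional to N, i.e. t ∝ ln(N)/N. The function below simply rescales the observed roughly-10-minute sweep of 75,000 hosts; the function name and the exact 600-second baseline are assumptions for illustration.

```python
# Rough scaling of worm sweep time with susceptible population size,
# assuming random scanning: t ∝ ln(N) / N.
import math

def sweep_time(n, t_ref=600.0, n_ref=75_000):
    """Scale an assumed ~10-minute sweep of 75,000 hosts to population n."""
    return t_ref * (n_ref / n) * (math.log(n) / math.log(n_ref))

t = sweep_time(20_000)
print(f"estimated sweep of 20,000 hosts: {t / 60:.0f} minutes")  # ~33 minutes
```

Even a population nearly four times smaller is overrun well inside an hour, which is why less popular software is no longer a safe haven.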
Conclusion
The points above suggest that even if we take giant leaps in technological advancement and make virtually everything secure, the human mind can always explore beyond the obvious. Firewalls and other security measures can be circumvented, the safest site can be hacked, and even the most intricate encryption can eventually be broken. In such a world, making anything truly invulnerable is not viable; what is needed is the ability to anticipate attacks. Many of the sites that are hacked are government owned, yet the value of the services they provide is unquestionable. Secure or not, the Internet will persist, and it is each user's responsibility to harden his or her own systems with additional measures. The times demand that we learn from past mistakes: most malicious programmers reuse code that has been used before, so building defenses against known techniques will undoubtedly help. Finally, living in this crude world is impossible without awareness; knowing that threats can occur, and have occurred, certainly helps. Only curiosity toward knowledge can bring knowledge, and only advance preparation can make threats less precarious. The quest for wisdom will never end, and neither will the monsters it produces; constant attention is needed to ward them off.