March 19, 2018

We need to talk about IDS signatures


The names Snort and Suricata are known to all who work in the field of network security. WAF and IDS are two classes of security systems that analyze network traffic, parse top-level protocols, and signal the presence of malicious or unwanted network activity. Whereas WAF helps web servers detect and avoid attacks targeted only at them, IDS detects attacks in all network traffic.

Many companies install an IDS to control traffic inside the corporate network. The DPI mechanism lets them collect traffic streams, peer inside packets at the IP, HTTP, DCE/RPC, and other levels, and identify both the exploitation of vulnerabilities and network activity by malware.

At the heart of both systems are signature sets used for detecting known attacks, developed by network security experts and companies worldwide.
We at the @attackdetection team also develop signatures to detect network attacks and malicious activity. In this article, we'll discuss a new approach we discovered that disrupts the operation of Suricata IDS and then hides all traces of that activity.

How does IDS work

Before plunging into the technical details of this IDS bypass technique and the stage at which it is applied, let's refresh our concept of the operating principle behind IDS.



First of all, incoming traffic is divided into TCP, UDP, or other traffic streams, after which the parsers mark and break them down into high-level protocols and their related fields, normalizing them, if required. The decoded, decompressed, and normalized protocol fields are then checked against the signature sets that detect network attack attempts or malicious packets in the network traffic.

Incidentally, the signature sets are the product of numerous individual researchers and companies. Among the vendors are such names as Cisco Talos and Emerging Threats, and the open set of rules currently counts more than 20,000 active signatures.

Common IDS evasion methods

IDS flaws and software errors sometimes mean that attacks go unspotted in network traffic. The following are fairly well-known bypass techniques at the stream-parsing stage:
  • Non-standard fragmentation of packets, including at the IP, TCP, and DCERPC levels, which the IDS is sometimes unable to cope with.
  • Packets with boundary or invalid TTL or MTU values can also be processed incorrectly by the IDS.
  • Ambiguous overlapping of TCP fragments (TCP sequence numbers) can be handled differently by the IDS than by the server or client for which the TCP traffic was intended.
  • A dummy TCP FIN packet with an invalid checksum (so-called TCP un-sync) can be interpreted as the end of the session instead of being ignored.
  • A different TCP session timeout between the IDS and the client can also serve as a tool for hiding attacks.

As for the protocol-parsing and field-normalization stage, many WAF bypass techniques can be applied to an IDS. Here are just some of them:
  • HTTP double-encoding.
  • A Gzip-compressed HTTP packet without a corresponding Content-Encoding header might not be decompressed at the normalization stage; this technique can sometimes be seen in malware traffic.
  • The use of rare encodings, such as Quoted-Printable for POP3/IMAP, can also render some signatures useless.

And don't forget about bugs specific to every vendor of IDS systems or third-party libraries inside them, which are available on public bug trackers.
One of these specific bugs used to disable signature checks in certain conditions was discovered by the @attackdetection team in Suricata; this error could be exploited to conceal attacks like BadTunnel.

During this attack, the vulnerable client opens an HTML page generated by the attacker, which establishes a UDP tunnel through the network perimeter to the attacker's server, with port 137 on both sides. Once the tunnel is established, the attacker can spoof names inside the vulnerable client's network by sending fake responses to NBNS requests. Although three packets went to the attacker's server, it was sufficient to respond to just one of them to establish the tunnel.

The error was that if the response to the client's first UDP packet was an ICMP packet (for example, ICMP Destination Unreachable), the imprecise stream-classification algorithm caused the stream to be checked only against ICMP signatures. Any further attacks, including name spoofing, remained unspotted by the IDS because they were carried out on top of the UDP tunnel. Despite the lack of a CVE identifier for this vulnerability, it led to the evasion of IDS security functions.


The above-mentioned bypass techniques are well known and have been eliminated in modern and long-developed IDS systems, while specific bugs and vulnerabilities work only for unpatched versions.

Since our team investigates network security and network attacks, and develops and tests network signatures first hand, we couldn't fail to notice bypass techniques linked to the signatures themselves and their flaws.

Bypassing signatures

Wait a sec, how can signatures be a problem?

Researchers study emerging threats and form an understanding of how an attack can be detected at the network level on the basis of operational features or other network artifacts, and then translate the resulting picture into one or more signatures in an IDS-friendly language. Due to the limited capabilities of the system or researcher error, some methods of exploiting vulnerabilities remain undetected.

If the protocol and message format of a particular malware family or generation remain unchanged, signatures for them work just fine. With vulnerability exploits, however, the more complex and variable the protocol, the easier it is for an attacker to modify the exploit without losing functionality, and thereby bypass the signatures.
Although you can find many decent signatures from different vendors for the most dangerous and high-profile vulnerabilities, other signatures can be evaded by simple means. Here's an example of a very common signature error for HTTP: sometimes it's enough just to change the order of the HTTP GET arguments to bypass a signature check.


Substring checks with a fixed order of arguments really are encountered in signatures, for example "?action=checkPort" or "action=checkPort&port=". All that's needed is to study the signature carefully and check whether it contains such hardcoded strings.
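As a sketch of the argument-reordering bypass (the signature logic and URL here are hypothetical, modeled on the hardcoded strings above; real rules use the Snort/Suricata content keyword):

```python
# Hypothetical signature check modeled on the hardcoded strings above;
# not a real Snort/Suricata rule, just the substring logic it boils down to.
from urllib.parse import parse_qs

def naive_signature(request_line: str) -> bool:
    # mimics a fixed-order content match such as "action=checkPort&port="
    return "action=checkPort&port=" in request_line

original  = "GET /cgi?action=checkPort&port=4444 HTTP/1.1"
reordered = "GET /cgi?port=4444&action=checkPort HTTP/1.1"

print(naive_signature(original))   # True  -> attack detected
print(naive_signature(reordered))  # False -> same attack, undetected

# the target application parses both query strings identically:
assert parse_qs("action=checkPort&port=4444") == parse_qs("port=4444&action=checkPort")
```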

Other protocols and formats that are just as complex to check include DNS, HTML, and DCERPC, all of which have extremely high variability. Therefore, to cover all attack variations and develop signatures that are not only high-quality but also fast, the developer must possess wide-ranging skills and solid knowledge of network protocols.

The inadequacy of IDS signatures is old hat, and you can find plenty of other opinions in various reports: 1, 2, 3.

How much does a signature weigh

As already mentioned, signature speed is the developer's responsibility, and naturally the more signatures, the more scanning resources are required. The "golden mean" rule recommends adding one CPU per thousand signatures or 500 Mbps network traffic in the case of Suricata.




The recommended resources depend on the number of signatures and the volume of network traffic. Although this rule of thumb looks good, it leaves out the fact that signatures can be fast or slow, and traffic can be extremely diverse. So what happens if a slow signature encounters bad traffic?
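The rule of thumb can be written as a tiny sizing helper (a sketch of the heuristic quoted above, not an official Suricata formula):

```python
# A sketch of the "golden mean" sizing heuristic quoted above (a rule of
# thumb, not an official Suricata formula): one CPU core per 1000 signatures
# or per 500 Mbps of traffic, whichever demands more.
import math

def recommended_cores(num_signatures: int, traffic_mbps: float) -> int:
    by_signatures = math.ceil(num_signatures / 1000)
    by_traffic = math.ceil(traffic_mbps / 500)
    return max(by_signatures, by_traffic)

# e.g. the ~20,000 active open-set signatures on a 1 Gbps link:
print(recommended_cores(20000, 1000))  # 20, driven by the signature count
```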

Suricata can log data on the performance of signatures. The log gathers data on the slowest signatures and generates a list specifying execution time in ticks (CPU time) and the number of checks performed. The slowest signatures are at the top.


The highlighted signatures are described as slow. The list is constantly updated; different traffic profiles would be sure to list other signatures. This is because signatures generally consist of a subset of simple checks, such as searching for a substring or regular expression arranged in a certain order. When checking a network packet or stream, the signature checks its entire contents for all valid combinations. As such, the tree of checks for one and the same signature can have more or fewer branches, and the execution time will vary depending on the traffic analyzed. One of the developer's tasks, therefore, is to optimize the signature to operate on any kind of traffic.

What happens if the IDS is not properly implemented and not capable of checking all network traffic? Generally, if the load on CPU cores is on average more than 80%, it means the IDS is already starting to skip some packet checks. The higher the load on the cores, the more network traffic checks are skipped, and the greater the chances that malicious activity will go unnoticed.

What if an attempt is made to increase this effect when the signature spends too much time checking network packets? Such an operating scheme would sideline the IDS by forcing it to skip packets and attacks. For starters, we already have a top list of hot signatures on live traffic, and we'll try to amplify the effect on them.

Let's operate

One of these signatures detects attempts to exploit CVE-2013-0156 (RoR YAML Deserialization Code Execution) in traffic.


All HTTP traffic directed to corporate web servers is checked for the presence of three strings in strict sequence ("type", "yaml", "!Ruby") and then checked with a regular expression.

Before we set about generating "bad" traffic, I'll present some hypotheses that might help our investigation:

  • It's easier to find a matching substring than prove there is no such match.
  • For Suricata, checking with a regular expression is slower than searching for a substring.

This means that if we want long checks from a signature, these checks should be unsuccessful and use regular expressions.


In order to get to the regex check, there must be three substrings in the packet one after the other.

Let's try combining them in this order and running the IDS to perform a check. To construct files with HTTP traffic in Pcap format from the text, I used the Cisco Talos file2pcap tool:


Another log, keyword_perf.log, shows that the chain of checks successfully made it through the content matches (content matches: 3) to the regular expression (PCRE) and then failed (PCRE matches: 0). If we want to benefit from resource-intensive PCRE checks later, we need to fully parse the regular expression itself and craft traffic that exercises it effectively.


The task of reverse-parsing a regular expression, although easy to do manually, is poorly automated because of constructions such as backreferences and named capture groups: I found no tools at all that could automatically generate a string satisfying a given regular expression.


The following construction was the minimum string required for such an expression. To test the theory that an unsuccessful search is more resource-intensive than a successful one, we'll trim the rightmost character from the string and run the regex again.


It turns out that the same principle also applies to regular expressions: the unsuccessful check took more steps than its successful counterpart. In this case, the difference was greater than 50%. You can see this for yourself.
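The effect is easy to reproduce with any backtracking regex engine. Here is a hedged sketch using Python's re module with an illustrative nested pattern (not the actual PCRE from the rule): trimming the rightmost character of the minimal matching string forces the engine to try, and backtrack at, every start position.

```python
# Reproducing the "failed match costs more" effect in Python's backtracking
# re engine; the pattern is illustrative, not the rule's actual PCRE.
import re
import time

pattern = re.compile(r'(?:a+b)+c')
good = 'ab' * 200 + 'c'   # minimal string that matches
bad = good[:-1]           # rightmost character trimmed: the match must fail

def cost(s: str, reps: int = 50) -> float:
    """Wall-clock time for `reps` searches over s."""
    t0 = time.perf_counter()
    for _ in range(reps):
        pattern.search(s)
    return time.perf_counter() - t0

# the failing search has to try (and backtrack at) every start position
print('match:   ', cost(good))
print('no match:', cost(bad))
```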

Further study of this regular expression produced another eye-opener. If we repeatedly duplicate the minimum required string without the last character, it is reasonable to expect an increase in the number of steps taken to complete the check, but the growth curve is explosive:


The scan time for several dozen such strings is already around one second, and increasing their number risks a timeout error. This effect in regular expressions is called catastrophic backtracking, and there are many articles devoted to it. Such errors are still encountered in common products; for example, one was recently found in the Apache Struts framework.
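For reference, the blowup can be demonstrated with the textbook evil pattern (a+)+$ (illustrative only, unrelated to the signature's actual PCRE): on a failing input, every way of partitioning the run of 'a' characters between the inner and outer quantifiers is tried before the engine gives up.

```python
# The textbook catastrophic-backtracking pattern (illustrative only, not the
# signature's actual PCRE): on a failing input, work grows roughly as 2^n.
import re
import time

evil = re.compile(r'(a+)+$')

def seconds(n: int) -> float:
    s = 'a' * n + 'b'   # the trailing 'b' forces the overall match to fail
    t0 = time.perf_counter()
    evil.match(s)
    return time.perf_counter() - t0

# a handful of extra characters turns microseconds into a noticeable stall
for n in (14, 18, 22):
    print(n, seconds(n))
```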

Let's take the strings obtained and check them with Suricata:

  Keyword   Ticks     Checks   Matches
  -------   -------   ------   -------
  content   19135     4        3
  pcre      1180797   1        0

However, instead of catastrophic backtracking, the IDS barely notices the load: only 1 million ticks. Only after debugging and examining the Suricata IDS source code and the libpcre library used inside it did I stumble upon these PCRE limits:

  • MATCH_LIMIT_DEFAULT = 3500
  • MATCH_LIMIT_RECURSION_DEFAULT = 1500

These limits save regular expressions from catastrophic backtracking in many regex libraries. The same limits can be found in WAF, where regex checks predominate. Sure, these limits can be changed in the IDS configuration, but they are propagated by default and changing them isn't recommended.
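To see why such a limit helps, here is a toy backtracking matcher for a nested pattern with a PCRE-style match limit (a deliberately simplified sketch, not Suricata's or libpcre's actual code): the step counter caps the damage a pathological input can do.

```python
# A toy backtracking matcher for the pattern (a+)+b with a PCRE-style match
# limit; a deliberately simplified sketch, not Suricata's or libpcre's code.
class MatchLimitExceeded(Exception):
    pass

def match_with_limit(s: str, limit: int = 3500):
    """Match (a+)+b against s, aborting once `limit` steps are spent."""
    steps = 0

    def at_group_boundary(pos: int) -> bool:
        nonlocal steps
        steps += 1
        if steps > limit:
            raise MatchLimitExceeded
        # after at least one group iteration, a 'b' completes the match
        if pos > 0 and pos < len(s) and s[pos] == 'b':
            return True
        # consume one or more 'a's for the next iteration, longest run first
        end = pos
        while end < len(s) and s[end] == 'a':
            end += 1
        for split in range(end, pos, -1):   # backtrack over shorter runs
            if at_group_boundary(split):
                return True
        return False

    try:
        return at_group_boundary(0), steps
    except MatchLimitExceeded:
        return None, steps   # like pcre_exec giving up with a match-limit error

print(match_with_limit('aaab'))     # quick success in a handful of steps
print(match_with_limit('a' * 30))   # aborted by the limit, not after 2^29 steps
```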

Using only a regular expression won't help us achieve the desired result. But what if we use the IDS to check a network packet with this content?


In this case, we get the following log values:

Keyword   Avg. Ticks   Checks   Matches
-------   ----------   ------   -------
content   3338         7        6
pcre      12052        3

There were 4 checks, which became 7 only because of duplication of the initial string. Although the mechanism remains unclear, we should expect the number of checks to snowball if we further duplicate the strings. In the end, I got the following values:

Keyword   Checks   Matches
-------   ------   -------
content   1508     1507
pcre      1492     0

In total, the number of checks of substrings and regular expressions does not exceed 3000, no matter what content is checked by the signature. Clearly, the IDS itself also has an internal limiter, which goes by the name of inspection-recursion-limit, set by default to that same figure of 3000. With all the PCRE and IDS limits and restrictions on the one-time size of content being checked, by modifying the content and using snowballing regex checks, you get the result you're after:

Keyword   Avg. Ticks   Checks   Matches
-------   ----------   ------   -------
content   3626         1508     1507
pcre      1587144      1492

Although the complexity of one regex check has not changed, the number of such checks has shot up to the 1500 mark. Multiplying the number of checks by the average number of clock cycles spent on each check, we get the coveted figure of 3 billion ticks.

Num   Rule      Avg Ticks
---   -------   ----------
1     2016204   3302218139

That's more than a thousand-fold increase! Exploiting this requires only the curl utility to generate the minimum HTTP POST request. It looks something like this:



The minimum set of HTTP fields plus an HTTP body with a repeating pattern.
Such content cannot be made infinitely large in order to force the IDS to spend vast resources checking it: although the TCP segments are reassembled into a single stream, neither the stream nor the reassembled HTTP packets are checked in their entirety, however big they are. Instead, they are checked in small chunks about 3-4 kilobytes in size. The size of the segments to be checked, like the depth of the checks and everything else in the IDS, is set in the configuration. The segment size "wobbles" slightly from launch to launch to prevent fragmentation attacks on segment boundaries, where an attacker who knows the default segment size splits the network packets so that the attack is divided between two neighboring segments and cannot be detected by the signature.

So, we just got our hands on a powerful weapon that costs the IDS in excess of 3,000,000,000 CPU ticks per check. What does that even mean?

The figure obtained corresponds to roughly 1 second of work for an average CPU core. Basically, by sending an HTTP request just 3 KB in size, we occupy the IDS for a full second. The more cores the IDS has, the more data streams it can process simultaneously.


Remember that the IDS does not sit idle and generally spends some resources on monitoring background network traffic, thereby lowering the attack threshold.

Taking metrics on a working IDS configuration with 8/40 Intel Xeon E5-2650 v3 CPU cores (2.30 GHz) without background traffic, where all 8 CPU cores are 100% loaded, the threshold value turns out to be only 250 Kbps. And that's for a system designed to process a multi-gigabit network stream, i.e. thousands of times greater.
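A back-of-envelope calculation with assumed round numbers from the figures above lands in the same ballpark:

```python
# Back-of-envelope check using assumed round numbers from the figures above:
# ~3.3 billion ticks per malicious request, 2.30 GHz cores, 8 cores,
# a ~3 KB HTTP request.
ticks_per_request = 3.3e9
cpu_hz = 2.3e9
cores = 8
request_bytes = 3 * 1024

core_seconds = ticks_per_request / cpu_hz   # ~1.4 s of one core per request
saturating_rps = cores / core_seconds       # requests/s that keep all cores busy
saturating_kbps = saturating_rps * request_bytes * 8 / 1000

# same order of magnitude as the ~250 Kbps threshold measured above
print(round(saturating_rps, 1), 'req/s ~', round(saturating_kbps), 'Kbps')
```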

To exploit this particular signature, the attacker need only send about 10 HTTP requests per second to the protected web server to gradually fill the network packet queue of the IDS. When the buffer fills up, packets start to bypass the IDS, which is when the attacker can use any tools or carry out arbitrary attacks while remaining unnoticed by the detection systems. A constant flow of malicious traffic can disable the IDS for as long as it keeps bombarding the internal network, while for short-term attacks the attacker can send a brief spike of such packets and blind the detection system for a short period.

Current mechanisms are unable to detect slow signatures: although the IDS has profiling code, it cannot distinguish a signature that is merely slow from one that is catastrophically slow, and it cannot signal this automatically. Nor does the signature itself raise an alert, since the required content never fully matches.

Do you remember the unexplained rise in the number of checks? There was indeed an IDS error that led to an increase in the number of superfluous checks. The vulnerability was given the name CVE-2017-15377 and has now been fixed in Suricata IDS 3.2 and 4.0.

The above approach works well for this one specific signature, which is distributed as part of an open signature set and is usually enabled by default. But new signatures keep emerging at the top of the hot list, while others are still waiting for their traffic. The signature description language for Snort and Suricata supplies the developer with many handy tools, such as base64 decoding, content jumping, and mathematical operations, and other combinations of checks can also cause explosive growth in resource consumption. Careful monitoring of performance data can be a springboard for exploitation. After the CVE-2017-15377 problem was remedied, we again launched Suricata to check our network traffic and saw exactly the same picture: a list of the hottest signatures at the top of the log, but this time with different numbers. This suggests that such signatures, and ways to exploit them, are numerous.

Not only IDS but also anti-viruses, WAFs, and many other systems are based on signature-search methods, so this approach can be applied to find weaknesses in their operation too. It can stealthily prevent detection systems from doing their job of detecting malicious activity, and the related network activity cannot be spotted by security tools or anomaly detectors. As an experiment, enable the profiling setting in your detection system and keep an eye on the top of the performance log.



Author: Kirill Shipulin of the @attackdetection team, Twitter | Telegram
