Out of bounds heap read in shred / coreutils

The GNU Coreutils project has just released version 8.25, which fixes an out of bounds heap read in the shred tool that I reported. It is a nice example of the subtle bugs one can find by testing code with Address Sanitizer.

shred is a tool that overwrites files with random data before deleting them. It generates a random pattern in memory, and this pattern generation contained a heap overread. Because the pattern is random the bug is not deterministic; one has to run shred with certain parameters (for example -n 20) multiple times to trigger it.

Upstream bug report
Git commit / fix
Coreutils 8.25 release notes

Talk / Session at 32C3

I'm currently at the 32C3. Tomorrow (Day 3, 28th December) I will give a small talk about the Fuzzing Project. This will be hosted by the Free Software Foundation Assembly at the congress.

Where? Room A.1
When? 2015-12-28, 19:00

The talk will give a short introduction to fuzzing and the motivation behind the Fuzzing Project. I'll also cover Address Sanitizer and my current efforts to create a Gentoo Linux system built with Address Sanitizer. Finally I'll talk a bit about fuzzing bignum libraries to find crypto vulnerabilities.

There's also a wiki page for the talk, but the 32c3 wiki is currently down.

Update: I've made the slides available as a PDF and on Slideshare.

I will repeat the talk on January 5th, 7 p.m., in the Hackerspace AFRA in Berlin.

Out of bounds read in OpenVPN

OpenVPN versions before 2.3.9 contain an out of bounds read error. The bug happens in the function resolve_remote() in the file socket.c.

I reported this bug to the OpenVPN security team on December 6th and was informed that it had already been reported to them and fixed in the repository. The new release 2.3.9 fixes this. The current git head of OpenVPN has this code completely reworked and is therefore not affected.

The reason for this bug is that OpenVPN reads a struct sockaddr_in6 for both IPv4 and IPv6 connections, but in the IPv4 case the underlying data structure is smaller than in the IPv6 case. The bug was found by running OpenVPN with Address Sanitizer.
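To illustrate the bug class, here is a minimal sketch, not OpenVPN's actual code: an address that was only filled in as an IPv4 structure is copied with the size of the larger IPv6 structure, so the copy reads past the end of the buffer, which is exactly the kind of access Address Sanitizer flags.

#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Hypothetical helper for illustration only. */
void copy_peer_addr(struct sockaddr_in6 *dst, const struct sockaddr *src)
{
    /* Bug: src may point to a struct sockaddr_in (IPv4, 16 bytes on Linux),
     * but we always copy sizeof(struct sockaddr_in6) (28 bytes), reading
     * out of bounds behind the smaller IPv4 structure. */
    memcpy(dst, src, sizeof(struct sockaddr_in6));
}

The fix for this pattern is to copy only as many bytes as the actual address family provides.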

I don't know whether this is in any way exploitable, but as OpenVPN is security sensitive software I considered it worth making public.

Fuzzing Math - miscalculations in OpenSSL's BN_mod_exp (CVE-2015-3193)

Today OpenSSL released a security advisory and updates for a carry propagation bug that I discovered in the BN_mod_exp() function. The bug is in the 1.0.2 branch of OpenSSL and is fixed in 1.0.2e. It only affects the x86_64 assembly optimizations. Other architectures and older versions are not affected.

The bug was introduced in this commit and fixed in this one. It got CVE-2015-3193 assigned. A simple proof of concept test can be found here.

Fuzzing Bignum libraries

This is not the first time a miscalculation bug was found in the bignum library of OpenSSL. In January OpenSSL already had to fix a bug in the squaring function BN_sqr(). Back then I asked myself whether it would be worthwhile to use fuzzing to find such bugs. The BN_sqr() bug was special in that it only occurred on very rare occasions: only one out of 2^128 inputs would produce a wrong result. That effectively means random testing will never find such a bug. However, american fuzzy lop has proven to be surprisingly successful at finding hard to find bugs. In a talk given at the Black Hat conference Ralph-Philipp Weinmann showed that with a very simple test tool he was able to re-find the BN_sqr() bug in OpenSSL with american fuzzy lop.

Finding bugs we already know about may give interesting insights, but what we really want is to find new bugs. I tried various strategies to fuzz bignum libraries. There are two basic options to do so:

1. Do a calculation with one bignum library and check it for consistency. This depends on the calculation you do. An example would be a division function: if you divide a by b, store the result in r and the remainder in s, then r*b+s must equal a again. In the case of the BN_sqr() bug a possibility is to simply compare the result of the squaring with a multiplication of the number by itself; they should produce the same result.
2. Do differential testing with two different implementations. You simply take two different bignum libraries, do the same operation and compare the results.

One small challenge is how to structure the input data. When you have a single input value it is easy: just take the whole file and interpret it as a number. But most functions take several input values. What I did was simply take the first two bytes and use them to decide how to split the rest of the file into pieces. To compare the results I used a simple assert call; if an assert fails, american fuzzy lop detects that as a crash.
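As an illustration of the consistency-check approach (option 1), here is a minimal sketch of a harness for the single-value BN_sqr() case. This is not the harness that found the BN_mod_exp() bug, just an example of the general idea.

#include <assert.h>
#include <stdio.h>
#include <openssl/bn.h>

int main(int argc, char **argv)
{
    unsigned char buf[4096];
    if (argc < 2)
        return 1;
    FILE *f = fopen(argv[1], "rb");
    if (!f)
        return 1;
    int len = (int)fread(buf, 1, sizeof(buf), f);
    fclose(f);

    BN_CTX *ctx = BN_CTX_new();
    BIGNUM *a = BN_bin2bn(buf, len, NULL);   /* whole file interpreted as one number */
    BIGNUM *r1 = BN_new();
    BIGNUM *r2 = BN_new();

    BN_sqr(r1, a, ctx);    /* a^2 via the squaring code path */
    BN_mul(r2, a, a, ctx); /* a*a via the multiplication code path */

    /* A failed assert calls abort(), which american fuzzy lop records as a crash. */
    assert(BN_cmp(r1, r2) == 0);

    BN_free(a); BN_free(r1); BN_free(r2); BN_CTX_free(ctx);
    return 0;
}

Compiled with afl-gcc and linked against libcrypto (-lcrypto) this can be run directly under afl-fuzz; for the differential approach (option 2) one would compute the same operation with a second library such as libgcrypt and assert that both results match.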

The BN_mod_exp() bug was found by comparing libgcrypt with OpenSSL. Unfortunately I have been sloppy with archiving my code and I lost the exact code that I used to fuzz the bug, but I think I recreated an almost functionally equivalent example. (I should mention that libfuzzer might be the better tool for this job, but I still haven't gotten around to trying it out.)

Fuzzing is usually associated with typical memory corruption bugs. What these examples show is that you can use fuzzing to target entirely different classes of bugs. Essentially fuzz testing can target any kind of bug class that depends on an input and that has a testable failure state. For mathematics the failure state is pretty obvious: If the result of a calculation is wrong then there is a bug.

Fuzzing versus branch-free code

After reporting the bug I was asked by the OpenSSL developers if I could do a similar test on their HMAC implementation. I did that and the result is interesting. At first I was confused: a while after the fuzzing started american fuzzy lop was only reporting two code paths. Usually it finds dozens of code paths within seconds.

This happens because cryptographic code is often implemented in a branch-free way. That means there are no if-blocks that execute different parts of the code depending on the input. This is done to protect against all sorts of side-channel attacks. It conflicts with the way modern fuzzers like american fuzzy lop or libfuzzer work: they use the detection of new code paths as a way to be smart about their inputs.
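A toy example, not taken from any real library, of what this means in practice: both functions below pick one of two values depending on a secret bit, but only the first one contains a branch that a coverage-guided fuzzer would register as a new code path.

#include <stdint.h>

/* Branching version: the data-dependent branch is visible to a
 * coverage-guided fuzzer and leaks timing information. */
uint32_t select_branching(uint32_t bit, uint32_t a, uint32_t b)
{
    if (bit)
        return a;
    return b;
}

/* Branch-free version: the same code path is executed for every input,
 * so a fuzzer sees no new coverage no matter what the secret bit is. */
uint32_t select_branchfree(uint32_t bit, uint32_t a, uint32_t b)
{
    uint32_t mask = (uint32_t)0 - (bit & 1); /* all ones if bit is set, else zero */
    return (a & mask) | (b & ~mask);
}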

I don't want to suggest here that branch-free code is bad. I think the advantages of branch-free code are undisputed, but it's interesting to see that it can make fuzz testing harder.

In case you wonder why american fuzzy lop still found two code paths: The reason is likely the input length. The HMAC code is branch-free for each block, but if the block number changes you will get a different code path.

What's the impact?

Finally you may ask what the impact of the BN_mod_exp() bug is. This is in part still unknown and I can only offer a preliminary analysis.

The BN_mod_exp() function performs modular exponentiation (a^b mod m) and is used in many algorithms. It is the core of both RSA and Diffie Hellman. In the case of RSA I think it's unlikely that there is a vulnerability: a potential attacker has basically no control over the input values. The base is either random (RSA exchange) or a hash (DHE/ECDHE exchange). The exponent and the modulus are part of the key. I haven't looked into DSA, because nobody uses it.

Diffie Hellman looks more interesting. At first I thought it's not interesting, because usually in a Diffie Hellman key exchange the secret key is only used for one connection. Therefore the only thing an attacker could do is attack a connection that he himself is part of, which is unlikely to give him anything interesting. But Juraj Somorovsky pointed out to me that OpenSSL caches and reuses the ephemeral key for several Diffie Hellman exchanges until the application restarts. So it might be possible to construct an oracle that extracts this cached ephemeral key. I leave it to people who know more about cryptography and x64 assembly to decide whether that is the case.

The conclusions of the OpenSSL team in the advisory are similar to mine.

OpenSSL has an option to disable this key caching. This can be done by passing the SSL_OP_SINGLE_DH_USE (for classic Diffie Hellman) and SSL_OP_SINGLE_ECDH_USE (for Elliptic Curve Diffie Hellman) values to SSL_CTX_set_options(). In my opinion this should be the default; reusing the ephemeral key seems quite dangerous. Many popular applications, including the Apache web server, already set this option.
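For an application using the OpenSSL API this is a one-liner; a minimal sketch, assuming an already created SSL_CTX:

#include <openssl/ssl.h>

/* Disable caching of ephemeral (EC)DH keys so that every handshake
 * uses a freshly generated key. */
void disable_dh_key_reuse(SSL_CTX *ctx)
{
    SSL_CTX_set_options(ctx, SSL_OP_SINGLE_DH_USE | SSL_OP_SINGLE_ECDH_USE);
}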

I invite everyone to analyze this further and try to come up with a practical attack.

Thanks to Tom Ritter, Ralph-Philipp Weinmann and Juraj Somorovsky for valuable discussions on the topic.

Stack overflows and out of bounds read in dpkg (Debian)

Two stack overflows and one stack out of bounds access were fixed in dpkg, the package management tool from Debian.

A call to the function read_line didn't account for a trailing zero byte in the target buffer and could thus cause a one byte stack overflow with a zero byte (a sketch of this bug pattern follows below). This issue was already fixed in the testing code when I reported it, but the fix hadn't been backported to stable yet.
Git commit / fix
Minimal PoC file
The Debian developers consider this non-exploitable, therefore no CVE was assigned.
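The underlying pattern is a classic off-by-one. Here is a minimal illustration of the bug class, not dpkg's actual code: the function fills the buffer completely and then writes a terminating zero byte one position past its end.

#include <stdio.h>
#include <string.h>

/* Hypothetical read_line-style helper for illustration only. */
static void read_line_like(char *buf, size_t size, const char *src)
{
    size_t n = strlen(src);
    if (n > size)          /* should be n >= size, or the caller should pass size - 1 */
        n = size;
    memcpy(buf, src, n);
    buf[n] = '\0';         /* writes one byte past the buffer when n == size */
}

int main(void)
{
    char line[16];
    /* A 16 byte input fills the buffer and the terminator overflows it by one byte. */
    read_line_like(line, sizeof(line), "aaaaaaaaaaaaaaaa");
    puts(line);
    return 0;
}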

A second, almost identical stack overflow caused by a call to the function read_line existed in the same file.
Minimal PoC file
This issue got the ID CVE-2015-0860.

A stack out of bounds read can happen in the function dpkg_ar_normalize_name: there is a read access to an array where the index can have the value -1. A check whether the index is a positive value fixes this.
Minimal PoC file

All issues were found with the help of american fuzzy lop and address sanitizer.

Debian has published the advisory DSA 3407-1. Fixed packages for both stable (Jessie) and oldstable (Wheezy) are available.

Ubuntu has published the advisory USN-2820-1. Fixed packages for Ubuntu 15.10, 15.04 and the LTS versions 14.04 and 12.04 are available.

The updates fix all three issues. All users of Ubuntu, Debian and other dpkg/apt-based distributions should update.

Heap Overflow in PCRE

The Perl Compatible Regular Expressions (PCRE) library has just released a new version which fixes a number of security issues.

Fuzzing the pcretest tool uncovered an input leading to a heap overflow in the function pcre_exec. This bug was found with the help of american fuzzy lop and address sanitizer.
Upstream bug #1637

This is fixed in PCRE 8.38. There are two variants of PCRE, the classic one and PCRE2. PCRE2 is not affected.

Apart from that, a couple of other vulnerabilities found by other people have been fixed in this release:
Stack overflow in compile_regex (bug #1503)
Heap overflow in compile_regex (bug #1672)
Stack overflow in compile_regex (bug #1515)
Heap overflow in compile_regex (bug #1636, CVE-2015-3210)
Stack overflow in match (bug #1638, CVE-2015-3217)
Heap overflow in compile_regex (bug #1667)
(this list may be incomplete)

If you use PCRE with potentially untrusted regular expressions you should update immediately. There is no immediate risk if you use regular expressions from a trusted source with an untrusted input.

Libxml2: Several out of bounds reads

I discovered several out of bounds read issues in Libxml2. The upstream developers have just released version 2.9.3, which fixes all relevant issues.

A malformed XML file can cause a heap out of bounds read access in the function xmlParseXMLDecl.
Upstream bug #751603 (sample input attached)
Git commit / fix

A second, very similar issue in the same function xmlParseXMLDecl.
Upstream bug #751631 (sample input attached)
Git commit / fix

A malformed XML file can cause a global out of bounds read access in the function xmlNextChar. This only affected the git code and was never an issue in any release version.
Upstream bug #751643 (sample input attached)

All three issues above were found with american fuzzy lop and address sanitizer.

Some inputs can cause a stack out of bounds read. This was found by running the test suite (make check) with Address Sanitizer. The issue was independently re-found through fuzzing by Hugh Davenport:
Upstream bug #752191
Upstream bug #756372 (duplicate)
Git commit / fix
CVE-2015-8242

Unfortunately there is another issue affecting the test suite (also documented in upstream bug #752191) that isn't fixed yet, but that bug is in the code of the test itself and therefore doesn't affect the use of Libxml2.

A large number of other issues have been fixed, many of them found with american fuzzy lop and libfuzzer. The release notes of 2.9.3 mention 10 CVEs. If you use Libxml2 please update as soon as possible.

Network fuzzing with american fuzzy lop

American fuzzy lop is a remarkable tool, but it always had a big limitation: It only worked for file inputs.

There have been several attempts to adapt afl to network input. There's a tool called preeny that works by preloading a library. I created a similar approach myself, but never published it; it was very error-prone and only worked on very few applications.

Now there is a new attempt at fuzzing network input with afl, and based on my first experiences it seems to work much better. Doug Birdwell created a modified version of afl that allows fuzzing network inputs. It's relatively simple to use; just check out the documentation. For example, I fuzzed wget with this command line:
afl-fuzz -i in -o out -t 30+ -D 7 -m none -L -Ntcp://localhost:8082 ./wget -O - -t 1 http://localhost:8082/test.htm

This doesn't just work in theory: Doug Birdwell reported on the afl-users mailing list that one of the bugs fixed in the latest ntp release (CVE-2015-7855) was found with this new afl variant.

Having a networking variant of afl is a huge step to make it even more useful.

Two out of bounds reads in Zstandard / zstd

Zstandard, or zstd for short, is a new compression algorithm and tool developed by Yann Collet. Fuzzing zstd with american fuzzy lop and Address Sanitizer uncovered two out of bounds reads.

Heap out of bounds read in function ZSTD_copy8:
Input sample
Upstream bug report
Git commit / fix

Stack out of bounds read in function HUF_readStats:
Input sample
Upstream bug report
Git commit / fix

The new zstd version 0.2.1 fixes both issues.

Heap overflow and endless loop in exfatfsck / exfat-utils

exfat-utils is a collection of tools to work with the exFAT filesystem. Fuzzing the exfatfsck with american fuzzy lop led to the discovery of a write heap overflow and an endless loop.

Especially at risk are systems that are configured to run filesystem checks automatically on external devices like USB flash drives.

A malformed input can cause a write heap overflow in the function verify_vbr_checksum. It might be possible to use this for code execution.
Upstream bug report
Sample file triggering the bug
Git commit for fix
CVE-2015-8026

Another malformed input can cause an endless loop, leading to a possible denial of service.
Upstream bug report
Sample file triggering the bug
Git commit of fix

Both issues have been fixed in the latest release 1.2.1 of exfat-utils.

September report of the Fuzzing Project

I create quarterly reports for the Core Infrastructure Initiative about the progress of the Fuzzing Project. The September 2015 report can now be downloaded from their webpage.

It includes some notes about work I've been doing on creating a full Gentoo Linux system built with Address Sanitizer, some information about Undefined Behavior Sanitizer and Kernel Address Sanitizer, fuzzing of filesystem tools and about the recent BIND Denial of Service vulnerability.

Kernel Address Sanitizer (KASAN)

Address Sanitizer is a remarkable feature that is part of the compilers gcc and clang. I make heavy use of it and it can uncover many memory access bugs that would otherwise be hard to find.

What may be less well known is that Address Sanitizer can also be used for the Linux kernel. It has been available as an option since version 4.0.

I recently tried for the first time to boot a kernel with Kernel Address Sanitizer (KASAN). There are a few things to consider. Apart from the option CONFIG_KASAN one should also set CONFIG_STACKTRACE. There are two variants of how KASAN can be enabled, CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. The latter seems preferable: it's faster, but it creates a larger kernel binary and requires a relatively recent gcc version (5.0 or later).

To my surprise just booting a kernel with KASAN already showed a number of warnings about out of bounds errors. Unlike normal ASAN these errors don't cause a crash (that would be quite inconvenient in the kernel). Instead they just print a warning in the dmesg log. Also by itself the kernel is not able to print an error message with line numbers. One needs to pass the output to a script that is available here.

Most of the warnings came from a preprocessor macro in the Intel GPU driver. I spent some time fixing the issue and had a working patch ready. Then I found out that it was already fixed in the current git code... (Remember: always check whether a bug is already fixed in git before you try to fix it.)

Anyway, fixing that issue silenced most of the warnings, but a few remained. I tracked them down to a wrong use of a counter variable in a nested loop. This loop was supposed to check the correct sorting of a table of commands. However, it turned out that the tables weren't properly sorted and the fix made the kernel unbootable. After some discussions with the Intel driver developers I was finally able to fix the issue with two patches which have just been merged into the main kernel tree (they are in 4.3-rc1).

The fact that just booting the kernel with KASAN enabled was enough to uncover some bugs indicates that not enough people have tested it yet. I also tried some kernel fuzzing tools with KASAN enabled (perf_fuzzer and trinity) and tried to mount a couple of corrupted filesystem images generated by american fuzzy lop. That didn't turn up any further bugs.

BIND Denial of Service via malformed DNSSEC key (CVE-2015-5722)

The latest update for the BIND DNS server fixes a bug that could be used to crash a DNS server that is verifying DNSSEC records. Parsing a malformed DNSSEC key can lead to a failed assertion in the file buffer.c.

Usually DoS-issues are considered relatively minor. In this case however I consider the impact relatively severe as it is quite easy to trigger crashes of a large number of DNS servers. It is often easy to force a system to resolve a certain domain name, e. g. through a website (clients) or an e-mail (servers). Although DNSSEC is not widely deployed a lot of DNS resolvers have DNSSEC validation enabled.

This issue was found with american fuzzy lop. BIND ships several command line tools that parse DNSSEC keys, e.g. dnssec-importkey, dnssec-dsfromkey or dnssec-revoke, which can trigger this bug. Given that quite recently another vulnerability in BIND was also found with american fuzzy lop, it is quite surprising to me that this issue wasn't found earlier. There was almost nothing special about fuzzing the BIND tools; the only thing to consider was that they expect keys in a certain file name scheme (the names need to start with a K), and afl-fuzz can guarantee certain filenames with the -f parameter. This tells us that even in highly critical software like BIND one can sometimes still find vulnerabilities with afl easily.

This issue was reported to ISC (the company developing BIND) on August 1st. After exchanging a couple of mails the BIND developers provided me with a test patch on August 5th. The fixed versions were published on September 2nd; the fix is contained in releases 9.9.7-P3 and 9.10.2-P4. These releases also fix another security issue (CVE-2015-5986). If you're running an affected BIND version you should update immediately.

I want to thank Florian Weimer from Red Hat who confirmed that this issue is remotely exploitable through a DNSSEC zone.

To give people some time to patch their servers I will wait a few days before publishing the proof of concept crasher. This is however no excuse not to update your servers. It is quite possible that others will reproduce this work and create a working exploit very soon.

ISC advisory: Parsing malformed keys may cause BIND to exit due to a failed assertion in buffer.c
CVE-2015-5722

Update (2015-09-11):

The proof of concept is now public. It can be tested with several command line tools from BIND, e.g. with dnssec-dsfromkey.

The key record looks like this:
0 DNSKEY 0 0 2 ADN00000000000000000000000000000000000000000000000000000000000000000000AC000

I have not personally tried to create a live DNSSEC record that crashes BIND installations (I don't use DNSSEC myself). It should be enough to add the record above as a DNSKEY record in your zone and try to resolve the domain with a DNSSEC-validating, vulnerable BIND resolver.

Update 2 (2015-09-15):

I was asked whether I could also provide a proof of concept for CVE-2015-5986, which was fixed in the same release. This issue wasn't discovered by me, but I was easily able to fuzz it as well; here is the proof of concept file. It can be tested with named-checkzone -f raw -o - a [infile].

Several out of bounds reads in bash

Bash just released new patch levels that fix, among other things, several out of bounds reads I discovered with Address Sanitizer. These happened during normal use of bash (triggered by the completion functionality). They are not security issues, but they could cause malfunctions.

These are not security fixes because they don't involve any externally controlled input. But it's a nice example showing that Address Sanitizer should be used more to test software. (I'm currently trying to build a whole system based on Gentoo Linux with everything except a few core packages compiled with Address Sanitizer - I will make that work public soon.)

Bash 4.3 patch 041 fixing out of bounds reads
Report 1 on bash mailing list
Report 2 on bash mailing list

I'm currently at the Chaos Communication Camp and will give a small lightning talk about Address Sanitizer today.