GrammaTalk

Instruction Location Randomization (ILR) and Just-In-Time Code Reuse

As part of the PEASOUP (Preventing Exploits Against Software of Uncertain Provenance) contract [1], my colleagues at the University of Virginia (our partners on this project) developed an interesting technique called Instruction Location Randomization (ILR) [2]. We believe ILR is effective against arc-injection attacks, including return-oriented programming (ROP) attacks. We have noticed, however, that there seems to be some confusion around ILR. For example, Snow et al. make some debatable claims about ILR in their paper "Just-In-Time Code Reuse: On the Effectiveness of Fine-Grained Address Space Layout Randomization" [3], which is otherwise an excellent piece of work. So I thought I'd write a few words to restate and clarify ILR, and to provide additional security analysis [4].

What is Instruction Location Randomization (ILR)?

Conceptually, ILR employs a novel program representation, which I’ll call the ILR representation. Prior to program execution (e.g., at load time), it transforms a subject program into the ILR representation. It then uses software dynamic translation (a.k.a. dynamic binary translation) to efficiently execute from the ILR representation.

The ILR program representation consists of two maps:

  • InstructionMap — maps an address to an instruction. We refer to an address in the domain of this map as an ILR address.
  • FallThruMap — maps an ILR address to the ILR address of the next instruction in fall-through order [5].

The advantage of this representation is that the “locations” of instructions can be completely randomized, in that instructions that are neighbors according to the FallThruMap can be given (random) non-adjacent ILR addresses. So, for instance, we could have:

InstructionMap[3377756] → add eax, ecx

InstructionMap[8899973] → add eax, 3

FallThruMap[3377756] → 8899973
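
For concreteness, here is the same example encoded in Python. This is my sketch of the representation, not the actual ILR implementation, which stores machine instructions rather than strings:

# The ILR representation of the two-instruction example above,
# encoded as two Python dicts.
instruction_map = {
    3377756: "add eax, ecx",
    8899973: "add eax, 3",
}
fall_thru_map = {
    3377756: 8899973,  # fall-through successor has a non-adjacent, random address
}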

It is fairly straightforward to write an interpreter that executes a program in the ILR representation, although performance is a serious concern. ILR addresses the performance concern by using a modified software dynamic translator. The first time the translator reaches a given ILR address, it copies the instruction at that address in the InstructionMap into a code cache. Instructions are placed in the code cache so that they are adjacent (in the virtual address space used by the processor), and in the long run, instructions are executed directly out of the code cache, without reference to the ILR representation. The security implication here is that instructions in the code cache are less randomized than in the ILR representation, which I'll discuss in more detail below.
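
To make the execution model concrete, here is a hedged sketch building on the dict encoding above. The execute stub and the fragment-building logic are my own simplified model (real fragments are built from machine code and end at control transfers), not the published implementation:

def execute(insn):
    # Stand-in for real instruction dispatch.
    print("executing:", insn)

def interpret(instruction_map, fall_thru_map, start):
    # Naive interpreter: two map lookups per instruction executed,
    # which is why performance is a serious concern.
    pc = start
    while pc is not None:
        execute(instruction_map[pc])  # fetch via the InstructionMap
        pc = fall_thru_map.get(pc)    # follow the FallThruMap

def translate_and_run(instruction_map, fall_thru_map, start, code_cache):
    # Code-cache model: on the first visit to `start`, copy its
    # fall-through chain into the cache as an adjacent sequence; on
    # later visits, run straight from the cache without consulting
    # the ILR maps at all.
    if start not in code_cache:
        fragment, pc = [], start
        while pc is not None:
            fragment.append(instruction_map[pc])
            pc = fall_thru_map.get(pc)
        code_cache[start] = fragment
    for insn in code_cache[start]:
        execute(insn)

# Example: translate_and_run(instruction_map, fall_thru_map, 3377756, {})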

ILR Security Facts

There are two important observations about the ILR representation and the way that ILR uses it.

First, ILR only looks up instructions via the InstructionMap. For a given address A, the process's memory contents at address A are unrelated to, and may be completely different from, the contents of InstructionMap[A]. For example, ILR can choose ILR addresses that are illegal and/or unmapped in the process's virtual address space, such as non-canonical addresses on x86-64 processors. Even if an attacker knows a legitimate ILR address A, they must also know the location and encoding of the InstructionMap in order to look up the instruction at InstructionMap[A].
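
To illustrate that point, here is one plausible way to draw ILR addresses from the non-canonical x86-64 range; the strategy and the helper below are my assumptions, not necessarily what the implementation does:

import random

def random_ilr_address():
    # x86-64 canonical addresses have bits 47..63 all equal (all zeros
    # or all ones); anything else can never be mapped in the process.
    # Rejection-sample until we draw a non-canonical value.
    while True:
        addr = random.getrandbits(64)
        top17 = addr >> 47
        if top17 not in (0, (1 << 17) - 1):
            return addr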

Second, knowing an ILR address A is insufficient, by itself, to compute the ILR address of a different instruction in the InstructionMap. ILR addresses are not adjacent, and in fact are randomized. The attacker needs to know the location and encoding of the FallThruMap in order to go from the ILR address of one instruction to the ILR address of another instruction. (Without ILR, an attacker can often use simple arithmetic to go from the address of one instruction in memory to the address of another instruction in memory.)

Example: Just-In-Time Code Reuse

To follow the approach used in "Just-In-Time Code Reuse," an attacker must first be able to "discover a single code pointer, e.g., as typically found via function pointers described by a heap or stack-allocated object" [or pushed on the runtime stack by a call instruction]. From there, the attack invokes a DiscloseByte primitive that reveals one byte of the process's memory; by invoking it repeatedly, the attack maps code pages and discovers gadgets.
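
Here is a minimal sketch of that disclosure loop, assuming a hypothetical disclose_byte oracle supplied by some memory-disclosure vulnerability (the name and interface are mine, not the paper's):

PAGE_SIZE = 4096

def disclose_byte(addr):
    # Hypothetical DiscloseByte primitive: returns one byte of the
    # target's memory at `addr`, e.g., via a heap over-read. Stubbed.
    raise NotImplementedError

def harvest_code_page(leaked_code_ptr):
    # Round the leaked pointer down to its page and read the page out
    # one byte at a time. The real attack disassembles these bytes on
    # the fly and follows direct calls and jumps to discover further
    # code pages and, ultimately, gadgets.
    base = leaked_code_ptr & ~(PAGE_SIZE - 1)
    return bytes(disclose_byte(base + i) for i in range(PAGE_SIZE))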

The success of the just-in-time code reuse attack against ILR depends entirely on what initial code address can be leaked. Under ILR, the program executes using ILR addresses as determined by the ILR representation. Critically, all of the application data that represents code addresses will hold an ILR address (and when the program uses the address, ILR will interpret it using the InstructionMap and FallThruMap tables).

Often an attacker may be able to leak an ILR address from the application data, since information-leak vulnerabilities are common. However, the amount of damage that can be done by knowing a single ILR address is quite limited: the attacker cannot look up the instruction at that address (although in many cases they might be able to infer it from known application logic), and they cannot learn about other instructions or other instruction addresses. The effectiveness of the just-in-time code reuse attack is thus seriously undermined. To be more precise, I believe ILR is at least equivalent in defensive strength to enforcement of a coarse control-flow integrity [6] policy with one class of allowed targets for return instructions and another for indirect call and jmp instructions [7].

If, instead of an ILR address, the attacker could leak other addresses, they might be able to do more damage. In particular, if the attacker could obtain the address of the InstructionMap and/or FallThruMap, they might be able to map the ILR program representation and discover gadgets using a modified version of the just-in-time code reuse attack. If the attacker were to discover an address in the software dynamic translator and/or in the code cache, they would be able to successfully launch the just-in-time code reuse attack, as the code in these locations is executed natively and not under translation. (The code in these locations is initially protected by ASLR.)

Information Leaks and Software Dynamic Translation

The addresses needed to mount a successful attack against ILR do not occur in the application data. Attacks such as just-in-time code reuse and Blind ROP [8] often appeal to the general prevalence of information-leak vulnerabilities. When attacking ILR, however, the prevalence of information leaks in applications is irrelevant; what matters is whether the software dynamic translator leaks the dangerous addresses discussed above. With careful engineering, such leaks can be eliminated, at which point many of the purported attacks against ILR become ineffective.

Under PEASOUP, we are also developing a novel architecture for software dynamic translators that precludes access to much of the sensitive translator code and data described above. In general, I believe that software dynamic translation can be used to effectively “hide” security-relevant data from exposure by exploits of weaknesses in the subject program.

To take another example, we could construct a heap allocator that (a) stores its metadata externally to the allocated blocks, and (b) uses the software dynamic translator to randomly locate the metadata and ensure it is not reachable from the program’s (original) code or runtime data.
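
A toy model of such an allocator follows; the class and its interface are illustrative assumptions (with simulated integer addresses), not PEASOUP code:

import random

class ExternalMetadataAllocator:
    # Toy allocator whose block metadata (size, in-use bit) lives in a
    # table owned by the translator, not in headers adjacent to the
    # blocks, so a heap overflow in the translated program cannot
    # reach or corrupt it.
    def __init__(self):
        self._next = random.getrandbits(32)  # randomized heap cursor
        self._metadata = {}                  # addr -> (size, in_use)

    def malloc(self, size):
        addr = self._next
        self._next += size
        self._metadata[addr] = (size, True)  # stored externally
        return addr

    def free(self, addr):
        size, in_use = self._metadata[addr]
        if not in_use:
            raise RuntimeError("double free at address %#x" % addr)
        self._metadata[addr] = (size, False)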

Often, Address-Space Layout Randomization (ASLR) can be undermined by (ubiquitous) information leaks. As long as your software dynamic translator does not itself have information leaks, it can be combined with ASLR to protect (meta)data from information leaks in the translated program.


1. This material is based upon work supported by the United States Air Force under Contract No. FA8650-10-C-7025. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Air Force.
2. Hiser, Jason, Anh Nguyen-Tuong, Michele Co, Matthew Hall, and Jack W. Davidson. "ILR: Where'd My Gadgets Go?" In Security and Privacy (SP), 2012 IEEE Symposium on, pp. 571-585. IEEE, 2012.
3. Snow, Kevin Z., Fabian Monrose, Lucas Davi, Alexandra Dmitrienko, Christopher Liebchen, and Ahmad-Reza Sadeghi. "Just-in-time code reuse: On the effectiveness of fine-grained address space layout randomization." In Security and Privacy (SP), 2013 IEEE Symposium on, pp. 574-588. IEEE, 2013.
4. I will take a few shortcuts for expository clarity.
5. This is one of the places where I’m taking liberties: IIRC, the actual implementation maps from the “natural” fall-through address of an instruction to a new address; I hope the approach I’m describing is equivalent and slightly easier to understand.
6. Abadi, Martín, Mihai Budiu, Ulfar Erlingsson, and Jay Ligatti. “Control-flow integrity.” In Proceedings of the 12th ACM conference on Computer and communications security, pp. 340-353. ACM, 2005.
7. The work of Göktas et al. on overcoming control-flow integrity defenses is also a concern for ILR: Göktas, Enes, Elias Athanasopoulos, Herbert Bos, and Georgios Portokalidis. "Out of control: Overcoming control-flow integrity." In Security and Privacy (SP), 2014 IEEE Symposium on. IEEE, 2014.
8. Bittau, Andrea, Adam Belay, Ali Mashtizadeh, David Mazières, and Dan Boneh. "Hacking blind." In Proceedings of the 35th IEEE Symposium on Security and Privacy. 2014.
