Monday, March 26, 2007

The Game Is Over!

People often say that once an attacker gets access to the kernel, the game is over! That is true indeed these days, and most of the research I have done over the past two years or so has been about proving just that. Some people, however, go a bit further and say that there is thus no point in researching ways to detect system compromises and that, once an attacker has got in, you should simply assume everything has been compromised and replace all the components, i.e. buy a new machine (as the attacker might have modified the BIOS or re-flashed PCI EEPROMs), reinstall the OS, all applications, etc.

However, they miss one little detail: how can they actually know that the attacker has got access to the system, that the game is indeed over, and that it is time to reinstall right now?

Well, we simply assume that the attacker had to make some mistake and that we, sooner or later, will find out. But what if she didn’t make a mistake?

There are several schools of thought, though, on how this problem should be addressed in a more general and elegant way. Most of them are based on a proactive approach. Let’s have a quick look at them…
  1. One generic solution is to build prevention technology into the OS. That includes all the anti-exploitation mechanisms, like e.g. ASLR, non-executable memory, Stack Guard/GS, and others, as well as some design changes to the OS, like e.g. implementing the least-privilege principle (think e.g. UAC in Vista) and some sort of kernel protection (e.g. securelevel in BSD, grsecurity on Linux, signed drivers in Vista, etc.).

    This has undoubtedly been the most popular approach over the last couple of years, and recently it has become even more popular, as Microsoft implemented most of those techniques in Vista.

    However, everybody who has followed security research for at least several years should know that every one of those clever mechanisms has been bypassed at least once in its history. That includes the attacks against Stack Guard protection presented back in 2000 by Bulba and Kil3r, several ways to bypass PaX ASLR, like those described by Nergal in 2001 and by others several months later, as well as the exploitation of a privilege elevation bug in PaX discovered by its author in 2005. Microsoft's hardware DEP (AKA NX) was likewise demonstrated to be bypassable by skape and Skywing in 2005.

    Similarly, kernel protection mechanisms have also been bypassed over the past years, starting e.g. with this nice attack against grsecurity /dev/(k)mem protection presented by Guillaume Pelat in 2002. In 2006 Loic Duflot demonstrated that BSD's famous securelevel mechanism can also be bypassed. And, also last year, I showed that Vista x64 kernel protection is not foolproof either.

    The point is: all those hardening techniques are designed to make exploitation harder or to limit the damage after a successful exploitation, not to be 100% foolproof. On the other hand, it must be said that they probably represent the best prevention solutions available to us these days.

  2. Another approach is to dramatically redesign the whole OS in such a way that all components (like e.g. drivers and services) are compartmentalized, e.g. run as separate processes in usermode, and consequently are isolated not only from each other but also from the OS kernel (a microkernel design). The idea here is that the most critical component, i.e. the microkernel, is very small and can easily be verified. An example of such an OS is Minix3, which is still under development, though.

    Undoubtedly this is a very good approach to minimizing the impact of system or driver faults, but it does not protect us against malicious system compromises. After all, if an attacker exploits a bug in a web browser, she may be interested only in modifying the browser’s code. Sure, she probably would not be able to get access to the microkernel, but why would she really need to?

    Imagine, for example, the following common scenario: many online banking systems require users to use smart cards to sign all transaction requests (e.g. money transfers). This usually works by having the browser (more specifically an ActiveX control or a Firefox plugin) display a message to the user that he or she is about to make e.g. a wire transfer to a given account number for a given amount of money. If the user confirms that action by pressing an ‘Accept’ button, the browser sends the message to the smart card for signing. The message itself is usually just some kind of formatted text specifying the source and destination account numbers, the amount of money, a date and time stamp, etc. The user is then asked to insert the smart card, which contains his or her private key (issued by the bank), and also to enter the PIN code. The latter can be done either via the same browser applet or, in slightly more secure implementations, via the smart card reader itself, if it has a pad for entering PINs.

    Obviously the point here is that malware should not be able to forge the digital signature: only the legitimate user has access to the smart card and knows the card’s PIN, so nobody else should be able to sign that message with the user’s key.

    However, it is enough for the attacker to replace the message while it is being sent to the card, while still displaying the original message in the browser’s window. All this can be done by just modifying (“hooking”) the browser’s in-memory code and/or data, as the toy sketch after this list illustrates. No need for kernel malware, yet the system (the browser, more specifically) is compromised!

    Still, one good thing about such a system design is that if we don’t allow an attacker to compromise the microkernel, then, at least in theory, we can write a detector capable of finding out that some (malicious) changes to the browser’s memory have indeed been introduced. In practice, however, we would have to know exactly what the browser’s memory should look like, e.g. which function pointers in Firefox’s code should be verified, in order to find out whether such a compromise has actually occurred. Unfortunately, we cannot do that today.

  3. An alternative approach to the above two, which does not require any dramatic changes to the OS, is to make use of so-called sound static code analyzers to verify all sensitive code in the OS and applications. The soundness property assures that the analyzer has been mathematically proven not to miss even a single potential run-time error, which includes e.g. unintended execution flow modifications. The catch here is that soundness doesn’t mean the analyzer generates no false positives. It has actually been mathematically proven that we can’t have such an ideal tool (i.e. one with a zero false positive rate), as the problem of analyzing all possible program execution paths is undecidable. Thus, practical analyzers always consider some superset of all possible execution flows, one which is easy to compute yet may introduce some false alarms, and the whole trick is to choose that superset so that the number of false positives is minimal (the toy example after this list shows where such false alarms come from).

    ASTREE is an example of a sound static code analyzer for the C language (although it doesn’t support programs which make use of dynamic memory allocation), and it apparently has been used to verify the primary flight control software for the Airbus A340 and A380. Unfortunately, there doesn’t seem to be any publicly available sound static analyzer for binary code… (if anybody knows of any, you’re more than welcome to paste the links under this post; just please make sure you’re referring to sound analyzers).

    If we had such a sound and precise (i.e. with a minimal rate of false alarms) binary static code analyzer, that could be a big breakthrough in OS and application security.

    We could imagine, for example, a special authority for signing device drivers for various OSes, which would first perform such formal static validation on submitted drivers and, once they passed the test, would digitally sign them. The OS kernel itself would, in turn, be validated by the vendor and would accept only those drivers which were signed by the driver verification authority. The authority could be the OS vendor itself or a separate 3rd-party organization. Additionally, we could require that the code of all security-critical applications, like e.g. web browsers, also be signed by such an authority, and set a special policy in our OS to allow e.g. only signed applications to access the network.

    The one weak point here is that if the private key used by a certification authority gets compromised, then the game is over and nobody really knows it… For this reason it would be good to have more than one certification authority and to require that each driver/application be signed by at least two independent authorities.
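
Here is the toy sketch of the smart card attack promised above. Everything in it is hypothetical and deliberately simplified: a real banking applet obviously does not expose a convenient function pointer called send_to_card, but the principle, overwriting in-process code or data so that the card signs something other than what the user approved, is exactly the same:

/* mitb.c -- conceptual sketch of a "man in the browser" signing attack.
   All names are made up for illustration; no kernel access involved. */
#include <stdio.h>

/* Stand-in for the applet routine that hands a message to the card. */
static void send_to_card_real(const char *msg)
{
    printf("[card]    signing: %s\n", msg);
}

/* The applet calls the card through a function pointer -- exactly the
   kind of in-memory data a usermode hook can overwrite. */
static void (*send_to_card)(const char *msg) = send_to_card_real;

/* Malicious wrapper installed by the browser exploit. */
static void send_to_card_hooked(const char *msg)
{
    (void)msg; /* the user-approved message is silently dropped... */
    send_to_card_real("transfer 10000 EUR to account ATTACKER-1");
}

int main(void)
{
    const char *approved = "transfer 100 EUR to account ALICE-42";
    printf("[browser] user approved: %s\n", approved);

    send_to_card = send_to_card_hooked; /* the "hooking" step */
    send_to_card(approved);             /* the card signs something else */
    return 0;
}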
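
And here is the toy false positive example for approach #3. A sound analyzer that tracks each variable separately, say as an interval of possible values, knows only that y lies somewhere in [0, 1]; it cannot express the fact that y equals 1 exactly when x is positive, so, to stay sound, it must report a possible division by zero, even though that path can never execute with y equal to 0. Real analyzers use far richer abstractions than this, but the principle of over-approximation is the same:

/* fp.c -- why a sound analyzer may raise a false alarm. */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    /* x is unknown at analysis time (it comes from the command line) */
    int x = (argc > 1) ? atoi(argv[1]) : 0;
    int y = (x > 0) ? 1 : 0;

    if (x > 0)
        printf("%d\n", 100 / y); /* safe: here y == 1; flagged anyway
                                    by an abstraction that only knows
                                    y is somewhere in [0, 1] */
    return 0;
}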

Of the above three approaches, only the last one could guarantee that our system never gets compromised. The only problem here is that… there are no tools today for static binary code analysis that have been proven sound and are also precise enough to be used in practice…

So, today, as far as proactive solutions are concerned, we’re left only with solutions #1 and #2, which, as discussed above, cannot protect the OS and applications from compromise 100% of the time. And, to make things worse, they do not offer any clue as to whether a compromise has actually occurred.

That’s why I’m trying so hard to promote the idea of Verifiable Operating Systems, which should allow us to at least find out (in a systematic way) whether the system in question has been compromised or not (though, unfortunately, not whether a single-shot incident occurred). The point is that the number of required design changes should be fairly small. There are some problems with this too, like e.g. verifying JIT-like code, but hopefully they can be solved in the near future. Expect me to write more on this topic soon.

Special thanks to Halvar Flake for eye-opening discussions about sound code analyzers and OS security in general.

Monday, March 05, 2007

Handy Tool To Play with Windows Integrity Levels

Mark Minasi wrote to me recently to point out that his new tool, chml, is capable of setting the NoReadUp and NoExecuteUp policies on file objects, in addition to the standard NoWriteUp policy, which is used by default on Vista.

As I wrote before, the default IL policy used on Vista is NoWriteUp only. That means that all objects which have not been assigned any IL explicitly (and consequently are treated as if they were marked with Medium IL) can be read by low-integrity processes; only writes are prevented. Also, the standard Windows icacls command, which allows one to set the IL of file objects, always assumes the NoWriteUp policy only (unless I’m missing some secret switch).
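
For reference, this is about as much as the stock tool can express; a minimal example (a sketch from memory, so check icacls /? on your box for the exact syntax):
icacls c:\secrets /setintegritylevel (OI)(CI)M
This stamps the directory (and, thanks to the inheritance flags, everything created inside it) with a Medium mandatory label, but the policy is still NoWriteUp only, so low-integrity processes can read everything just fine.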

However, it is possible to define, for each object, not only the integrity level but also the policy which will be used when accessing it. All this information is stored in the same ACE (in the object’s SACL) which also defines the IL.

There doesn’t seem to be too much documentation from Microsoft about how to set those permissions, except this paper about Protected Mode IE and the sddl.h file itself.

Anyway, it’s good to see a tool like chml, as it allows one to do some cool things in a very simple way. E.g. consider that you have some secret documents in the folder c:\secrets and that you don’t feel like sharing those files with anybody who can exploit your Protected Mode IE. As I pointed out in my previous article, by default all your personal files are accessible to your Protected Mode IE low-integrity process, so in the event of a successful exploitation the attacker is free to steal them all. Now, however, using Mark Minasi’s tool, you can just do this:
chml.exe c:\secrets -i:m -nr -nx
This should prevent all the low IL processes, like e.g. Protected Mode IE, from reading the contents of your secret directory.

BTW, you can also use chml to examine the ACE which was created:
chml.exe c:\secrets -showsddl
and you should get something like this as a result:
SDDL string for c:\secrets's integrity label=S:(ML;OICI;NRNX;;;ME)
Here S means that the entry lives in the SACL (as opposed to the DACL), ML says that this ACE defines a mandatory label, OICI means “Object Inherit” and “Container Inherit”, NRNX specifies that the NoReadUp and NoExecuteUp policies are to be used when accessing this object (which BTW also implies NoWriteUp), and finally ME stands for Medium Integrity Level.
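
In case you would rather set such a label from code than with chml, here is a minimal sketch of my own (not taken from Mark’s tool; error handling trimmed, the path is just an example) which applies the very same SDDL string using the documented sddl.h APIs:

/* setlabel.c -- apply an NRNX Medium mandatory label to a directory.
   Link against advapi32. A sketch, not production code. */
#include <windows.h>
#include <sddl.h>
#include <aclapi.h>
#include <stdio.h>

int main(void)
{
    wchar_t path[] = L"c:\\secrets";   /* example path */
    PSECURITY_DESCRIPTOR sd = NULL;
    PACL sacl = NULL;
    BOOL present = FALSE, defaulted = FALSE;

    /* The same string chml produces: Medium IL, NoReadUp+NoExecuteUp,
       inheritable by child objects and containers. */
    if (!ConvertStringSecurityDescriptorToSecurityDescriptorW(
            L"S:(ML;OICI;NRNX;;;ME)", SDDL_REVISION_1, &sd, NULL)) {
        printf("SDDL conversion failed: %lu\n", GetLastError());
        return 1;
    }

    if (!GetSecurityDescriptorSacl(sd, &present, &sacl, &defaulted)
        || !present) {
        printf("no SACL in the descriptor?\n");
        LocalFree(sd);
        return 1;
    }

    /* LABEL_SECURITY_INFORMATION selects just the mandatory-label
       part of the SACL; you need appropriate rights on the object. */
    DWORD err = SetNamedSecurityInfoW(path, SE_FILE_OBJECT,
                                      LABEL_SECURITY_INFORMATION,
                                      NULL, NULL, NULL, sacl);
    if (err != ERROR_SUCCESS)
        printf("SetNamedSecurityInfo failed: %lu\n", err);

    LocalFree(sd);
    return (err == ERROR_SUCCESS) ? 0 : 1;
}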

All credits go to Mark Minasi and the Windows IL team :)

As a side note: the updated slides for my recent Black Hat DC talk about cheating hardware-based memory acquisition can be found here. You can also get the demo movies here.