Friday, November 24, 2006

Introducing Stealth Malware Taxonomy

At the beginning of this year, at the Black Hat Federal conference, I proposed a simple taxonomy that can be used to classify stealth malware according to how it interacts with the operating system. Since that time I have often referred to this classification, as I find it very useful both for designing system integrity verification tools and for talking about malware in general. Now I have decided to explain this classification in a bit more detail, as well as to extend it with a new type of malware – type III malware.

The article is available as a PDF document here.



Thursday, October 19, 2006

Vista RC2 vs. pagefile attack (and some thoughts about Patch Guard)

Eventually, after I got back home from some traveling, I had a chance to download Vista RC2 x64 and test it against the pagefile attack...

It quickly turned out that our exploit doesn't work anymore! The reason: Vista RC2 now blocks write access to raw disk sectors from user mode applications, even if they are executed with elevated administrative rights.
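
For context, this is the kind of user mode access the attack relied on, and which RC2 now refuses even to administrators – a minimal Win32 sketch, with error handling mostly omitted:

    #include <windows.h>
    #include <stdio.h>

    /* Open the first physical disk for raw write access. Before Vista RC2
     * this succeeded for an elevated administrator; now it is blocked. */
    int main(void)
    {
        HANDLE hDisk = CreateFileW(L"\\\\.\\PhysicalDrive0",
                                   GENERIC_READ | GENERIC_WRITE,
                                   FILE_SHARE_READ | FILE_SHARE_WRITE,
                                   NULL, OPEN_EXISTING, 0, NULL);
        if (hDisk == INVALID_HANDLE_VALUE) {
            printf("raw write access denied (error %lu)\n", GetLastError());
            return 1;
        }
        /* ... SetFilePointerEx() to the paged-out sectors, WriteFile() ... */
        CloseHandle(hDisk);
        return 0;
    }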

In my Subverting Vista Kernel talk, which I gave at several major conferences over the past few months, I discussed three possible solutions to mitigate the pagefile attack. Just to remind you, the solutions mentioned were the following:
1. Block raw disk access from usermode.
2. Encrypt the pagefile (alternatively, use hashing to ensure the integrity of paged-out pages, as suggested by Elad Efrat of NetBSD).
3. Disable kernel mode paging (sacrificing probably around 80MB of memory in the worst case).

And I also stated clearly that solution #1 is actually a bad idea. I explained that if MS decided to disable write access to raw disk sectors from usermode, not only might that cause some compatibility problems (think about all those disk editors, un-deleters, etc…), but it would also not be a real solution to the problem…

Imagine a company wanting to release e.g. a disk editor. Now, with write access to raw disk sectors blocked from usermode, the company would have to provide its own custom, but 100% legal, kernel driver to allow its, again 100% legal, application (the disk editor) to access those disk sectors, right? Of course, the disk editor's auxiliary driver would have to be signed – after all, it's a legal driver, designed for legal purposes and ideally having neither implementation nor design bugs! But, on the other hand, there is nothing to stop an attacker from "borrowing" such a signed driver and using it to perform the pagefile attack. The point here is, again, that there is no bug in the driver, so there is no reason to revoke the driver's signature – even if we discovered that the driver was actually being used by some people to conduct the attack!

But it seems that MS decided to ignore those suggestions and implemented the easiest solution, disregarding the fact that it doesn't really solve the problem…

Actually, if we weren't such nice guys, we could develop a disk editor together with a raw-disk-access kernel driver, then sign it and post it on COSEINC's website. But we're the good guys, so I guess somebody else will have to do that instead ;)

Kernel Protection vs. Kernel Patch Protection (Patch Guard)


Another thing - lots of people confuse kernel protection (i.e. the policy of allowing only digitally signed kernel drivers to be loaded) with Kernel Patch Protection, also known as Patch Guard.

In short, the pagefile attack, which I demoed at SyScan/Black Hat, is a way to load unsigned code into the kernel, and thus a way to bypass Vista kernel protection. Bypassing Kernel Patch Protection (Patch Guard) is a different story. E.g. Blue Pill, a piece of malware which abuses AMD Pacifica hardware virtualization, and which I also demoed during my talk, "bypasses" PG. The word "bypass" is a little bit misleading here though, as BP does not make any special effort to disable or bypass PG explicitly – it simply doesn't care about PG, because it's located above (or below, depending on where your eyes are located) the whole operating system, including PG. Yes, it's that simple :)

Also, almost any malware of type II (see my BH Federal talk for details about this malware classification) is capable of "bypassing" PG, simply because PG is not designed to detect changes introduced by type II malware. So, e.g. deepdoor, a backdoor which I demonstrated in January at BH Federal, is undetectable by PG. Again, not a big deal – it's just that PG was not designed to detect type II malware (nor type III, like BP). So, I'm a little bit surprised to hear people talking about "how hard it would be to bypass PG...", as that is something which has already been done (and I'm not referring to Metasploit's explicit technique here) – you just need to design your malware as type II or type III and you're done!

But all that being said, I still think that PG is actually a very good idea. PG should not be thought of as a direct security feature. PG's main task is to keep legal programs from acting like popular rootkits; keeping malware away is not its main task. However, by ensuring that legal applications do not introduce rootkit-like tricks, PG makes it easier and more effective to create robust malware detection tools.

I spent a few years developing various rootkit detection tools, and one of the biggest problems I came across was how to distinguish between hooking introduced by real malware and... hooking introduced by some A/V products, like personal firewalls and Host IDS/IPS programs. Many of the well-known A/V products use exactly the same hooking techniques as some popular malware, like rootkits! This is not good, not only because it may have a potential impact on system stability, but also – and this is the most important thing IMO – because it confuses malware detection tools.
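
To see why, consider a sketch of a typical detector heuristic (the table layout below is the simplified 32-bit one, and the helper names are mine, for illustration only): an SSDT entry pointing outside the kernel image looks exactly the same whether a rootkit or a personal firewall planted it.

    #include <ntddk.h>

    /* Flag system service table entries that point outside the kernel
     * image. Both rootkits and many A/V products trip this check, which
     * is exactly why a detector cannot tell them apart by the hook alone. */
    typedef struct _SSDT {
        PULONG ServiceTable;        /* syscall handler addresses */
        PULONG CounterTable;
        ULONG  NumberOfServices;
        PUCHAR ArgumentTable;
    } SSDT;

    ULONG CountForeignHooks(SSDT *Ssdt, ULONG_PTR KernelBase, ULONG KernelSize)
    {
        ULONG i, Hooked = 0;
        for (i = 0; i < Ssdt->NumberOfServices; i++) {
            ULONG_PTR Handler = Ssdt->ServiceTable[i];
            if (Handler < KernelBase || Handler >= KernelBase + KernelSize)
                Hooked++;    /* hooked - but by malware or by a firewall? */
        }
        return Hooked;
    }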

Patch Guard, the technology introduced in the 64-bit versions of Windows XP and 2003 (yes, PG is not a new thing in Vista!), is a radical, but probably the only, way to force software vendors not to use undocumented hooking in their products. Needless to say, there are other, documented ways to implement e.g. a personal firewall or an A/V monitor, without using those undocumented hooking techniques.

Just my 2 cents to the ongoing battle for Vista kernel...

Wednesday, September 13, 2006

Vista RC1 still vulnerable to the pagefile attack

Everybody is talking now about the latest Vista RC1 and how ready it is to be shipped to customers. So, I downloaded Vista RC1, Build 5600, x64 edition from MSDN a couple of days ago and gave it a try... To my surprise, it turned out that it's still vulnerable to the signature check bypass attack which I demonstrated nearly 2 months ago at the SyScan conference...

This is not good because, on the one hand, Vista requires all kernel drivers to be digitally signed (for security reasons), which in turn requires that all driver developers get (read: buy) an appropriate signing certificate, but, on the other hand, malware authors can load their code into the kernel for free (and without a reboot, as I demoed during the talk).

The requirement that all kernel drivers be digitally signed raised a lot of controversy when it was announced by Microsoft in January. People argued not only about the fact that paying for a certificate might be unacceptable for e.g. students or open source authors, but also about the more "philosophical" aspect: that it should be the user's (administrator's) right to load whatever she wants on her own computer, regardless of whether somebody has signed it or not.

Personally, I think that it's worth sacrificing a little bit of "freedom", and spending a few hundred bucks on a certificate if you're a kernel developer, if this can stop kernel malware from loading. Even though kernel protection can be implemented without PKI, as we can see in the case of BSD systems and their securelevel mechanism (although an attack against it was presented a few months ago), I still think that a scheme based on digital signatures is the best solution for end users. However, it's definitely not worth sacrificing all that if there is a known way to bypass the mechanism... :(

It's quite surprising to me that MS still hasn't fixed this problem, especially since the best solution here is also the simplest one to implement. As I described during my talk, it's enough to... disable kernel mode memory paging. Surely, it would waste a little memory, but according to some Microsoft engineers I spoke to, it would be only around 80MB. This seems very little these days, doesn't it? After all, are people going to run Vista with 256MB or even 512MB of RAM? I'm not ;)

Another good solution (and I think it was Brad Spengler of grsecurity who pointed that out to me) would be to calculate a hash of each page which is about to be paged out, and then verify that hash when the page is loaded back into memory. Not as simple as the previous solution, but at least we're saving those 80MB of physical memory :)
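
A conceptual sketch of the idea below – everything here is invented for illustration; a real implementation would live inside the memory manager's page-out/page-in paths, and the digest table itself would of course have to sit in non-pageable memory (32 bytes per paged-out page, still far cheaper than locking everything in RAM):

    #include <stdint.h>
    #include <string.h>

    #define PAGE_SIZE          4096
    #define MAX_PAGEFILE_SLOTS (1024 * 1024)    /* enough for a 4GB pagefile */

    void sha256(const void *data, size_t len, uint8_t digest[32]); /* assumed */

    static uint8_t PageDigests[MAX_PAGEFILE_SLOTS][32];

    /* Called when a page is written out to pagefile slot 'Slot'. */
    void OnPageOut(uint32_t Slot, const void *Page)
    {
        sha256(Page, PAGE_SIZE, PageDigests[Slot]);
    }

    /* Called when the page comes back in; returns 0 if it was tampered
     * with while sitting on disk. */
    int OnPageIn(uint32_t Slot, const void *Page)
    {
        uint8_t Digest[32];
        sha256(Page, PAGE_SIZE, Digest);
        return memcmp(Digest, PageDigests[Slot], 32) == 0;
    }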

Saturday, August 12, 2006

Blue Pill Detection!

So, after I presented the idea behind Blue Pill at SyScan and Black Hat, some people started talking about how *easy* it should be to detect Blue Pill using timing analysis. Interestingly, they must have missed the fact that I already discussed this method of detection during my presentations, and that I also gave the reasons why I don't think it could be used in practice...

But anyway, let's look at the problem again...

Obviously, Blue Pill, like any other hardware-based VMM, needs to intercept some events and instructions. One intercept which we need to take care of (in the case of SVM technology) is RDMSR on the EFER register – just because bit 12 of the EFER register signals whether the processor is working in SVM mode or not. So, we need to lie about it to the guest.

Now, we can measure how many processor ticks a given instruction takes to execute – all we need to do is use the RDTSC instruction, which returns the processor's time stamp counter. So I did the measuring, and it turned out that normally it takes around 90 ticks to execute RDMSR, while on a 'bluepilled' system it takes about 2100 ticks. What a big difference, you will say!
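
For illustration, here is a minimal kernel-mode sketch of that measurement, using the MSVC compiler intrinsics:

    #include <intrin.h>   /* __rdtsc(), __readmsr() - MSR access is ring 0 only */

    #define MSR_EFER 0xC0000080

    /* Time a single RDMSR of EFER in TSC ticks: I get ~90 natively and
     * ~2100 on a bluepilled system (the cost of the #VMEXIT round-trip). */
    unsigned __int64 TimeRdmsrEfer(void)
    {
        unsigned __int64 t0 = __rdtsc();
        (void)__readmsr(MSR_EFER);
        return __rdtsc() - t0;
    }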

But SVM technology (and Intel VT-x too) offers a nice way to lie to the guest about those extra ticks, by adjusting a special field in the VMCB, called TSC_OFFSET (in this case we would set it to something around -2010, just before returning to the guest). As a result, the guest cannot tell, using the RDTSC instruction, that the RDMSR instruction took extra ticks.
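
Conceptually, the hypervisor's intercept handler does something like the sketch below. Only TSC_OFFSET itself is an architectural VMCB field; the struct layouts and names here are invented purely for illustration, and I assume EntryTsc was captured right after the #VMEXIT landed in the hypervisor.

    #include <intrin.h>

    #define MSR_EFER  0xC0000080
    #define EFER_SVME (1ULL << 12)

    /* Invented-for-illustration slices of the VMCB and of the guest
     * registers saved on #VMEXIT - NOT the real hardware layout. */
    struct VmcbSketch {
        unsigned __int64 TscOffset;   /* added to guest RDTSC results */
        unsigned __int64 Rax;         /* guest RAX (save area) */
        unsigned __int64 Rip;         /* guest RIP (save area) */
    };
    struct GuestRegs { unsigned __int64 Rdx; };

    void OnRdmsrEfer(struct VmcbSketch *Vmcb, struct GuestRegs *Regs,
                     unsigned __int64 EntryTsc)
    {
        unsigned __int64 Efer = __readmsr(MSR_EFER) & ~EFER_SVME; /* hide SVM mode */

        Vmcb->Rax = (unsigned int)Efer;          /* guest EAX = low half  */
        Regs->Rdx = (unsigned int)(Efer >> 32);  /* guest EDX = high half */
        Vmcb->Rip += 2;                          /* skip the 2-byte RDMSR */

        /* make the whole round-trip invisible to guest RDTSC */
        Vmcb->TscOffset -= (__rdtsc() - EntryTsc);
    }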

So, here's what we need to do: we need to prepare a test piece of code which calls e.g. the RDMSR instruction something like a few million times, and observe the timing using an *external* clock (yes, the VMM can also lie about the internal real time clock). This external clock can even be a human being (= the user). For example, the detector could display a message to the user:

"Dear user, I'm going to run a test now; if this test takes more than 1 minute, it means your computer is probably compromised with VMM-based malware. Press OK to continue..."

The first problem with this approach is how to generate a piece of code which would execute for exactly 1 minute (or any other given amount of time) on a native machine, taking into account that we may have many different processor models, working at many different speeds, etc... One could say that we can probe the processor speed using some test instructions which we know for sure are not intercepted (most instructions, in fact). But this is problematic, since the hypervisor can lie about how many ticks those instructions took to execute (as RDTSC can itself be intercepted). Of course, this is trivial if we assume that we can run our detector both before and after the infection, but that is not the case in most practical scenarios. So, I don't know how to solve this problem (which doesn't mean it can't be solved, though)... Any suggestions welcome.

Without solving the above problem, we face false positives and false negatives. Suppose the test above took 5 minutes (instead of one) – does that mean that we generated too much test code (because the guest was cheated during calibration) and the processor simply had to spend 5 minutes executing it, or was it a sign of an infection? Maybe on a new processor model the RDMSR interception slowdown is a factor of 5, instead of the factor of 20 I see on the processor I have right now. And what if it took 15 minutes?

Currently this is not a big problem, just because there are only two models of AMD processors supporting SVM on the market, each available at a few different clock speeds. So we can probably hardcode the amount of test code into our detector (because the slowdown is so big). But how will the situation change over the next two years, when there are many more processors supporting hardware virtualization on the market? We would have to maintain a database of processor models and how much test code to use on each of them. (Oh, BTW, how could the detector tell which model it's running on? You bet – using the CPUID instruction, which can itself be intercepted...)

And still, even if we solved this problem, this kind of detection would be annoying to users (imagine a user being forced to run this kind of "1-minute test", or even a 10-second test, every 15 minutes or so), unless we used some kind of infrastructure providing external time measurement (it can't just be public NTP, because NTP packets could easily be intercepted by the malware). So we would need to set up encrypted NTP servers in each company... Ah, great!

So, I find it quite surprising that some people dismiss the threat introduced by hardware virtualization based malware. I would like to point out that it's a somewhat ridiculous situation when the malware can be written reliably, using perfectly documented features of the processor, while we need to resort to timing tricks to detect it :) Are we switching roles with malware writers?

What we need is a reliable detector – something which would return 0 or 1 depending on whether we're inside a VM or not. And I really don't see how we could create such a program (i.e. a standalone generic detector).

For completeness, I should also mention, just as I did during my talks, that we're aware of another attack against Blue Pill which should be very reliable and which can be implemented as a standalone program, but unfortunately it seems to allow only for crashing the system when it's 'bluepilled'. This nice attack was independently proposed by Alex Tereshkin and Oded Horowitz, BTW.

Some people have talked about prevention... Can we disable virtualization in the BIOS? I can't do it on my AMD machine, but I've heard that vendors are going to release updates to allow for that. But, come on, this is not a good way to address this threat – should we simply stop buying processors which support hardware virtualization?!

One more thing – as I keep being asked about this – yes, it is possible to create malware similar to Blue Pill using Intel VT-x, just as Dino Dai Zovi demonstrated at Black Hat a week ago.

Saturday, July 01, 2006

The Blue Pill Hype

All the hype started with this article in eWeek by Ryan Naraine... The article is mostly accurate, apart from one detail – the title, which is a little misleading... It suggests that I have already implemented "a prototype of Blue Pill which creates 100% undetectable malware", which is not true. If that were true, I would not call my implementation "a prototype", which suggests an early stage of a product.

That being said, I sincerely believe that Blue Pill technology will (very soon) allow for creating 100% undetectable malware which does not rely on the obscurity of the concept. And I already stressed this in the description of my talk, here and here. The working prototype I have (and which I will be demonstrating at SyScan and Black Hat) implements the most important step towards creating such malware – namely, it allows one to move the underlying operating system, on the fly, into a secure virtual machine.

The phrase "on the fly" is the most important thing about Blue Pill – it makes it possible to install Blue Pill based malware without restarting the system and without any BIOS or boot sector modifications. I wish all those people who posted about how easy it would be to detect Blue Pill by booting the system from a clean CD had spent more time reading my original blog article, instead of creating useless posts... (just a little wish).

The Blue Pill prototype I currently have is not yet complete, but this is not that important, because having successfully moved the OS into a virtual machine, implementing all the other features is just a matter of following the Pacifica specification. And I will repeat my statement again: I believe that malware based on a fully implemented Blue Pill will be 100% undetectable, provided that Pacifica is not "buggy". 100% undetectable in practice, I should add, as I'm aware of some theoretical brute force attacks, which I do not, however, consider practical, nor usable in the foreseeable future anywhere outside the lab. It should be undetectable even if the malware's code were made available to the opponent (e.g. an AV company).

There are a number of ways in which Blue Pill could be exploited to create actual malware (Blue Pill itself is just a "hijacking technology", not malware), and I will be showing a simple example of how it could be used to create a network backdoor on Vista x64.

What happens when you install Blue Pill on a machine which is already bluepilled? Should future OSes come with their own preinstalled hypervisor to prevent Blue Pill installation? What about timing analysis? All those questions will be answered during my presentation – please do not send or post the same questions again and again...

All that being said, I don't think the title of the eWeek article was too exaggerated, but I just wanted to clarify things. After all, it was very positive, IMO, that the article attracted lots of attention, because I believe that hardware virtualization technology could become one of the biggest threats of the coming years (i.e. when more people use processors with hardware virtualization support), if we do not do anything about it. Can we do anything? I believe we can, but first we need to understand the threat.

One more thing should be commented on. Some people suggested that my work is sponsored by Intel, as I focused on AMD virtualization technology only. They should know, then, that my work was sponsored exclusively by COSEINC Research, and not by Intel. I implemented Blue Pill on AMD64 just because my previous research (also done for COSEINC) focused on Vista x64, and the natural choice of processor for that was AMD64. And, although I wish I had more time to also try implementing Blue Pill on Intel VT, unfortunately I don't :( Accusing me of doing this on one processor only, instead of on both AMD and Intel, is like saying that all vulnerability researchers who find holes in open source programs are paid by Microsoft ;) This is just ridiculous!

Thursday, June 22, 2006

Introducing Blue Pill

All the current rootkits and backdoors which I am aware of are based on a concept. For example: FU was based on the idea of unlinking EPROCESS blocks from the kernel's list of active processes, Shadow Walker on the concept of hooking the page fault handler and marking some pages as invalid, deepdoor on changing some fields in an NDIS data structure, etc... Once you know the concept, you can (at least theoretically) detect the given rootkit.
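
To make "based on a concept" concrete, here is a rough sketch of the FU-style DKOM trick – the structure is simplified to the bare minimum, and real code has to find EPROCESS and the right field offsets, which vary between Windows versions:

    /* Sketch of FU-style DKOM: unlink a process's EPROCESS entry from the
     * kernel's ActiveProcessLinks list, so that anything enumerating that
     * list (e.g. Task Manager) no longer sees the process. The process
     * keeps running, because the scheduler dispatches threads, not
     * processes. */
    typedef struct _LIST_ENTRY {
        struct _LIST_ENTRY *Flink, *Blink;
    } LIST_ENTRY;

    void HideProcess(LIST_ENTRY *ActiveProcessLinks)
    {
        ActiveProcessLinks->Blink->Flink = ActiveProcessLinks->Flink;
        ActiveProcessLinks->Flink->Blink = ActiveProcessLinks->Blink;
        /* make the orphaned entry point at itself, so the process can
         * still exit cleanly without corrupting the list */
        ActiveProcessLinks->Flink = ActiveProcessLinks;
        ActiveProcessLinks->Blink = ActiveProcessLinks;
    }

And precisely because the concept is known, a detector can e.g. cross-compare the process list with the scheduler's view of threads and spot the discrepancy.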

Now, imagine malware (e.g. a network backdoor, keylogger, etc...) whose ability to remain undetectable does not rely on the obscurity of its concept. Malware which could not be detected even though its algorithm (concept) is publicly known. Let's go further and imagine that even its code could be made public, and still there would be no way to detect that this creature is running on our machines...

Over the past few months I have been working on a technology code-named Blue Pill, which is just about that - creating 100% undetectable malware, which is not based on an obscure concept.

The idea behind Blue Pill is simple: your operating system swallows the Blue Pill and wakes up inside the Matrix, controlled by the ultra-thin Blue Pill hypervisor. This all happens on the fly (i.e. without restarting the system), there is no performance penalty, and all the devices, like the graphics card, are fully accessible to the operating system, which is now executing inside a virtual machine. This is all possible thanks to the latest virtualization technology from AMD, called SVM/Pacifica.
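
For those curious what "swallowing the pill" looks like at the code level, here is a very rough outline of the installation sequence on SVM. The two MSR numbers are architectural; everything else – the helper names and the VMCB handling – is an invented sketch, not the actual Blue Pill code:

    #include <intrin.h>

    #define MSR_EFER        0xC0000080
    #define EFER_SVME       (1ULL << 12)
    #define MSR_VM_HSAVE_PA 0xC0010117

    struct VMCB;                                        /* hardware-defined, details omitted */
    void SetupVmcbFromCurrentState(struct VMCB *Vmcb);  /* assumed helper */
    void SvmVmrun(struct VMCB *Vmcb);                   /* assumed asm stub wrapping VMRUN */
    void HandleVmexit(struct VMCB *Vmcb);               /* assumed dispatcher */

    /* Runs at ring 0 on an SVM-capable AMD64 processor. HostSaveAreaPa is
     * the physical address of a page the CPU uses to stash host state. */
    void Swallow(unsigned __int64 HostSaveAreaPa, struct VMCB *Vmcb)
    {
        /* 1. Enable the SVM extensions. */
        __writemsr(MSR_EFER, __readmsr(MSR_EFER) | EFER_SVME);

        /* 2. Tell the CPU where to save host state on VMRUN. */
        __writemsr(MSR_VM_HSAVE_PA, HostSaveAreaPa);

        /* 3. Snapshot the currently running OS into the VMCB and set up
         *    the intercepts we need (e.g. RDMSR EFER). */
        SetupVmcbFromCurrentState(Vmcb);

        /* 4. From now on the old OS is a guest: VMRUN resumes it, and
         *    every intercepted event comes back here as a #VMEXIT. */
        for (;;) {
            SvmVmrun(Vmcb);
            HandleVmexit(Vmcb);
        }
    }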

How does Blue Pill-based malware relate to the SubVirt rootkit, presented a few months ago by Microsoft Research and the University of Michigan? Well, there are a couple of important differences:
  1. SubVirt is a permanent (i.e. restart-surviving) rootkit. And it has to be, because SubVirt's installation process requires that it take control before the original operating system boots. Consequently, in contrast to Blue Pill, SubVirt cannot be installed 'on the fly'. It also means that SubVirt must introduce some modifications to the hard disk, which allows for 'offline' detection.

  2. SubVirt was implemented on x86 hardware, which doesn't allow one to achieve 100% virtualization, because there are a number of sensitive instructions which are not privileged, like the famous SIDT/SGDT/SLDT. This allows for trivial detection of the virtual mode – see e.g. my little Red Pill program (sketched just after this list). This, however, doesn't apply to Blue Pill, as it relies on AMD SVM technology.

  3. SubVirt is based on commercial VMMs: Virtual PC and/or VMware. Both of these applications create virtual devices to be used by the operating system which are different from the real underlying hardware (e.g. network cards, graphics cards, etc.), which allows for easy detection of the virtual machine.
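
For reference, here is roughly what the Red Pill trick mentioned in point 2 boils down to (32-bit x86, MSVC inline assembly; the 0xd0 threshold is just a heuristic that happened to work against the software VMMs of that time):

    #include <stdio.h>

    /* SIDT is sensitive but not privileged on plain x86: even user-mode
     * code can read the IDT base with it, and a software VMM has to keep
     * the guest's IDT somewhere else than the native OS would. */
    int main(void)
    {
        unsigned char idtr[6];       /* 2-byte limit + 4-byte base */
        __asm sidt idtr
        printf(idtr[5] > 0xd0 ? "Inside Matrix!\n" : "Not in Matrix.\n");
        return 0;
    }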

I would like to make it clear that the Blue Pill technology does not rely on any bug in the underlying operating system. I have implemented a working prototype for Vista x64, but I see no reason why it should not be possible to port it to other operating systems, like Linux or BSD, which can run on the x64 platform.

I will be talking about Blue Pill and demonstrating a working prototype for Vista x64 at the end of July at SyScan Conference in Singapore.

Also, I will present a generic method (i.e. one not relying on any implementation bug) of inserting arbitrary code into the Vista Beta 2 kernel (x64 edition), thus effectively bypassing the (in)famous Vista policy of allowing only digitally signed code to be loaded into the kernel. Of course, the presented attack does not require a system reboot.

Wednesday, May 17, 2006

CONFidence 2006 - trip report

I've just come back from a small conference in Krakow, CONFidence 2006. It was the second edition of this security conference, which is organized by a non-profit organization, PROIDEA, whose primary goal is to promote education in computer science. Apart from CONFidence, they also organize conferences focused on BSD systems and various training courses.

Below I describe some of the talks that I found particularly interesting among those which I managed to see...

Pawel Pokrywka gave a very interesting talk about security issues in the DSL infrastructure used by one of the biggest Polish ISPs. He discovered the auto-configuration protocol which is used to set up every single DSL modem of that company. He then prepared a modem-emulator script which allowed him to get the configuration data (including username and password) for any modem in Poland he wanted. This could have allowed an attacker to actually 0wn all the DSL modems belonging to this operator! It was the best presentation in my opinion – not only was it technically interesting, it was also very well presented.

Lukasz Bromirski is a systems engineer at Cisco Poland and a very popular speaker at Polish conferences. He gave three (!) lectures there: one about BGP blackholing, one about dynamic routing protocols (OSPF and BGP), and one more about network attacks at the L2 and L3 levels. Lukasz turned out to be a very knowledgeable and experienced network engineer who is also a good presenter.

Przemyslaw Frasunek is another frequent speaker at Polish conferences. He is a well-known BSD expert, but his talk was about Bluetooth security. Although it wasn't an '0day talk', I think it was a good introduction to the Bluetooth stack and several basic attacks, and it was very professionally presented.

I also liked the two talks presented by members of the Security Team of the Poznan Supercomputing and Networking Center. Blazej Miga talked in great depth about Apache architecture and internals, while Jaroslaw Sajko demoed how to write extension modules for IPTables. This team got lots of media attention in Poland last year, after they found several critical bugs in Gadu-Gadu, the most popular Polish IM communicator.

Overall, the level of the talks was pretty good. As at other small conferences, the atmosphere was very cozy and friendly. The organizers took very good care of the speakers, taking us to various nice restaurants and entertaining us all the time (even the day after the conference). Krakow is actually a very nice city, probably one of the nicest in Poland. It is a little bit like Prague – it has a very large old town, with lots of nicely decorated restaurants (in an 'old Polish' style) serving very tasty food :)

It was also very positive to see how enthusiastic these people are, and it was clear to me that they really run this conference for fun and not for profit. I wish them success with the next edition in 2007!

Friday, May 12, 2006

SVV Source Code Made Public!

I have decided to publish the full source code of my System Virginity Verifier. The license allows you to do anything with the code, including using it in a commercial product.

Unfortunately, I don't have time to develop SVV further, but I still believe that this is the right approach to system compromise detection (although it still requires lots of work). It's actually very surprising to me that I have seen only one other product which uses a similar idea for detecting system compromises, and that is Microsoft's Patch Guard.

I hope that publishing the SVV source code will be useful in two ways:

First, it should help to reduce implementation-specific attacks, as used by malware against rootkit detectors (remember holly_father's shop?). Having the sources allows anybody to compile his or her own private detector, a little bit different from the one targeted by the malware's anti-detection engine. This might include changing the I/O interface between the usermode and kernel mode components of the detector, changing the order of certain actions, etc...
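
As a trivial illustration of what "changing the I/O interface" can mean in practice (this is just an example pattern, not SVV's actual interface): even recompiling the detector with a different device name and IOCTL function code breaks malware that recognizes the detector by those constants.

    #include <ntddk.h>

    /* Pick a different per-build constant before each private compilation.
     * Illustrative only; CTL_CODE is the standard WDK macro for building
     * IOCTL codes (custom function codes live in the 0x800-0xFFF range). */
    #define BUILD_SECRET     0x3a7    /* change me for every private build */

    #define DET_DEVICE_NAME  L"\\Device\\Det_3a7"
    #define IOCTL_DET_VERIFY \
        CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800 + BUILD_SECRET, \
                 METHOD_BUFFERED, FILE_ANY_ACCESS)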

This first point actually applies not only to SVV, but to any rootkit/malware detector with open sources.

Second, I hope that having the SVV sources open will encourage people to extend the subset of sensitive OS elements which are verified by SVV, thus minimizing the "hooking space" which can be used by malware. This should consequently eliminate simple, yet annoying, malware from the market...

SVV sources and some presentations about its design can be found here.