Introducing Stealth Malware Taxonomy
Joanna Rutkowska
COSEINC Advanced Malware Labs
November 2006
Version 1.01
Introduction
At the beginning of this year, at the Black Hat Federal conference, I proposed a simple
taxonomy that can be used to classify stealth malware according to how it interacts
with the operating system. Since that time I have often referred to this classification,
as I think it is very useful for designing system integrity verification tools and for
talking about malware in general. I have now decided to explain this classification in a
bit more detail, as well as to extend it with a new type of malware: type III malware.
Malware Definition
Before I start describing the various types of malware, I would like to first define what I
understand by the term malware:

Malware is a piece of code which changes the behavior of either the operating system
kernel or some security-sensitive applications, without the user's consent and in such a
way that it is then impossible to detect those changes using the documented features of
the operating system or the application (e.g. its API).
The above definition is actually different from the definition used by the A/V industry
(read: most other people). For example, a simple botnet agent, coded as a standalone
application, which does not hook the OS kernel nor any other application, but just listens
for commands on a legally opened TCP port (i.e. one opened using documented API
functions), would not be classified as malware by the above definition. However, for
completeness, I decided to also include such programs in my taxonomy and classify them
as type 0 malware.
Below I describe each of the four classes of malware – type 0, type I, type II and finally
type III and comment on the detection approach needed for each of these classes.
Type 0 Malware
As can be seen in the picture below, type 0 malware, which, as we just agreed, is not to
be considered malware from the system compromise detection point of view, does not
interact with any part of the operating system (nor with other processes) using any
undocumented methods.
Of course, such an application (process) could still be malicious: e.g. it could delete all
the personal files from the user's directory, or open a TCP port and become part of a
botnet, possibly taking part in a DDoS attack (but, again, using a valid API to establish
connections to the victim machines), etc.

However, from the system compromise detection point of view, all of the above behaviors
are just features of the application; they do not compromise the operating system, nor
do they change (compromise) the behavior of other applications (processes) running in
the system.
The A/V industry has developed lots of mechanisms to determine whether a given
executable is “bad” or “good”, such as behavior monitoring, sandboxing, emulation and
AI-based heuristics, not to mention all the signature-based approaches. Some would like
to say that this is all to protect users against their own "stupidity", but of course it's not
that simple. After all, even if we assume that we can trust some software vendors, which
is, in most cases, a reasonable assumption in my opinion, and that we are smart enough to
know which vendors to trust, we still download most of our applications from the internet
over plain HTTP and not over HTTPS.
My favorite example is Firefox, whose binaries are available only via HTTP.
Interestingly, when Firefox downloads updates, it uses a secure HTTPS connection to
obtain a hash value of the new binary and uses it to verify the new update before it gets
installed. However, we can never be sure that our original Firefox binary has not been
compromised (as we had to download it over unsecured HTTP), so the fact that the
updates are "signed" doesn't help much...
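
To make that verification step concrete, below is a minimal sketch in C (using OpenSSL's
SHA-256 routines) of the kind of check an updater can perform once it has obtained a
trusted digest over HTTPS. The file name and the expected digest are placeholders
invented for the example, not real Firefox values.

/* verify_update.c -- build e.g.: cc verify_update.c -lcrypto */
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

/* Hash a file on disk with SHA-256. Returns 0 on success. */
static int sha256_file(const char *path, unsigned char out[SHA256_DIGEST_LENGTH])
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return -1;

    SHA256_CTX ctx;
    SHA256_Init(&ctx);

    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        SHA256_Update(&ctx, buf, n);

    SHA256_Final(out, &ctx);
    fclose(f);
    return 0;
}

int main(void)
{
    /* Digest published over a trusted (HTTPS) channel -- placeholder value. */
    const char *expected_hex =
        "0000000000000000000000000000000000000000000000000000000000000000";

    unsigned char md[SHA256_DIGEST_LENGTH];
    if (sha256_file("firefox-setup.exe", md) != 0) {
        perror("firefox-setup.exe");
        return 1;
    }

    char actual_hex[2 * SHA256_DIGEST_LENGTH + 1];
    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
        sprintf(actual_hex + 2 * i, "%02x", md[i]);

    puts(strcmp(actual_hex, expected_hex) == 0
             ? "update verified"
             : "hash mismatch: do not install");
    return 0;
}

Of course, such a check is only as trustworthy as the channel the expected digest came
from, which is exactly the problem with the original binary downloaded over plain HTTP.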
So, detecting type 0 malware is undoubtedly an important thing, especially for Jane
Smith and her family, but as it is not related to system compromise detection, I'm
ignoring this problem in my research and leaving it to the A/V industry.
Type I Malware
When we look at the various operating system resources, we can divide them into those
which are (or at least should be) relatively constant (“read-only”) and those which are
changing all the time. Examples of the former include executable files, in-memory code
sections (inside running processes and in the kernel), BIOS code, PCI device expansion
EEPROMs, etc. Examples of the latter are some configuration files, some registry keys,
but most importantly the data sections of running processes and of the kernel.

Malware which modifies those resources which were designed to be constant, like e.g.
the in-memory code sections of the running kernel and/or processes, is what I classify as
type I malware. Consequently, malware which does not modify any of those constant
resources, but only the resources which are dynamic by nature, like e.g. data sections, is
to be classified as type II malware.
The picture below presents an example infection with type I malware:
It should be clear by now, to anybody familiar with assembly language, that there are
virtually infinite ways to create type I malware of any given kind. For example, if we
consider creating a keystroke logger, there are very many ways of doing it by modifying
(hooking) code at many different levels (starting from the keyboard interrupt handler's
code and ending at some high-level functions inside applications) and in many different
ways (from simple JMPs to complicated, obfuscated execution transfers, or even in-place
“code integration”)...
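
As an illustration of the simplest of those techniques, here is a user-mode sketch
(assuming Linux, x86-64 and gcc) of the classic 5-byte inline JMP hook. A kernel-mode
rootkit would plant the same kind of patch in an interrupt handler or in system service
code; patching a function inside our own process is enough to show the idea, and it
assumes the process is allowed to make its own code pages writable.

/* jmp_hook.c -- build e.g.: cc jmp_hook.c -o jmp_hook */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

__attribute__((noinline)) static void original(void)
{
    puts("original code path");
}

__attribute__((noinline)) static void detour(void)
{
    puts("hooked: attacker code runs instead");
}

/* Overwrite the first 5 bytes of `target` with "E9 rel32" (JMP to `hook`). */
static void install_jmp_hook(void *target, void *hook)
{
    long page_size = sysconf(_SC_PAGESIZE);
    uintptr_t page = (uintptr_t)target & ~(uintptr_t)(page_size - 1);

    /* Make the code writable -- exactly the kind of modification that an
       integrity check of "constant" resources is meant to catch. */
    if (mprotect((void *)page, (size_t)page_size * 2,
                 PROT_READ | PROT_WRITE | PROT_EXEC) != 0) {
        perror("mprotect");
        return;
    }

    int32_t rel = (int32_t)((uintptr_t)hook - (uintptr_t)target - 5);
    unsigned char patch[5] = { 0xE9 };          /* JMP rel32 opcode */
    memcpy(patch + 1, &rel, sizeof rel);
    memcpy(target, patch, sizeof patch);
}

int main(void)
{
    /* Call through a volatile pointer so the compiler cannot inline the call. */
    void (*volatile fn)(void) = original;

    fn();                                       /* "original code path"  */
    install_jmp_hook((void *)original, (void *)detour);
    fn();                                       /* detour() now runs     */
    return 0;
}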
So, it should also be clear that approaching type I malware detection using any kind of
"find the bad" approach, like e.g. scanning for known patterns of code subversion, is an
insufficient solution and is prone to an endless arms race.
The detection of type I malware should be based, in my opinion, on verifying the
integrity of all those constant resources. In other words, on verifying that a given
resource, like e.g. a code section in memory, has not been modified in any way. That, of
course, implies that we need some baseline to compare with, and fortunately in many
cases we have such a baseline. For example, all Windows system executable files (EXE,
DLL, SYS, etc.) are digitally signed. This allows us not only to verify file system
integrity, but also to verify that all in-memory code sections of all system processes and
the kernel are intact! So, this allows us to find any kind of code hooking, no matter how
sophisticated the hooking and obfuscation techniques used. This is, in fact, how my
System Virginity Verifier (SVV) works.
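A minimal user-mode sketch of that baseline-comparison idea (this is not SVV's actual
code): hash a code region once while it is known to be clean, then re-hash it later and
flag any difference. A real verifier would derive the baseline from the digitally signed
file on disk rather than from memory, and would walk every code section of every
process and of the kernel; the function name and region size below are purely
illustrative.

/* code_integrity.c -- build e.g.: cc code_integrity.c -lcrypto */
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

#define CODE_REGION_SIZE 64   /* bytes of code we pretend to protect */

__attribute__((noinline)) static void protected_function(void)
{
    puts("security sensitive code");
}

/* SHA-256 over a region of (executable) memory. */
static void hash_code(const void *code, size_t len,
                      unsigned char out[SHA256_DIGEST_LENGTH])
{
    SHA256((const unsigned char *)code, len, out);
}

int main(void)
{
    unsigned char baseline[SHA256_DIGEST_LENGTH];
    unsigned char current[SHA256_DIGEST_LENGTH];

    protected_function();

    /* Baseline taken while the code is known to be clean. */
    hash_code((const void *)protected_function, CODE_REGION_SIZE, baseline);

    /* ... later, after the system has been running for a while ... */
    hash_code((const void *)protected_function, CODE_REGION_SIZE, current);

    if (memcmp(baseline, current, sizeof baseline) != 0)
        puts("code section modified: possible type I infection");
    else
        puts("code section intact");
    return 0;
}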
However, life is not that beautiful, and we sometimes see legitimate programs introducing
modifications into e.g. the code sections of the kernel. Examples of such applications
include some Host IPS products and some personal firewalls (see e.g. my BH Federal
presentation for more details). That prevents us from designing a proper system integrity
verification tool, because such a tool is sometimes unable to distinguish between
malware-like hooking and e.g. HIPS-like hooking, as virtually the same techniques are
sometimes used by A/V vendors as by malware authors! Needless to say, this is very
wrong! Probably the best way to solve this problem is the Patch Guard technology
introduced in the 64-bit versions of Windows, which I wrote about recently.
Also, there are lots of applications which are not digitally signed, so we basically can
never know whether their code has been altered or not. Thus, I think it's crucial to
convince more application developers (at least the developers of security-sensitive
applications) to sign their executables with digital certificates.
Examples of type I malware: Hacker Defender, Apropos, Sony Rootkit, Adore for Linux,
Suckit for Linux, etc...
Type II Malware
In contrast to type I, type II malware does not change any of the constant resources, like
e.g. code sections. Type II malware operates only on dynamic resources, like data
sections, e.g. by modifying some function pointers in some kernel data structures, so that
the attacker's code gets executed instead of the original system or application code.
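The following user-mode sketch illustrates the idea: no code section is touched, only a
function pointer kept in a mutable data structure is redirected. In the kernel the
analogous target would be e.g. an entry in a driver's dispatch table; the structure and
names below are made up for the example.

/* type2_pointer_hook.c -- build e.g.: cc type2_pointer_hook.c */
#include <stdio.h>

/* A dispatch table living in a writable data section. */
struct dispatch_table {
    void (*handle_read)(const char *path);
};

static void legit_read_handler(const char *path)
{
    printf("reading %s\n", path);
}

static void evil_read_handler(const char *path)
{
    printf("reading %s (while hiding the attacker's files)\n", path);
}

static struct dispatch_table table = { legit_read_handler };

int main(void)
{
    table.handle_read("/etc/passwd");      /* original behavior          */

    /* Type II infection: only mutable data changes, all code is intact. */
    table.handle_read = evil_read_handler;

    table.handle_read("/etc/passwd");      /* attacker code now runs     */
    return 0;
}

Note that the code-section integrity check sketched earlier would report nothing wrong
here, since only a data section has changed.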