[olug] example intrusion detection

Adam Haeder adamh at omaha.org
Mon Oct 4 19:55:24 UTC 2004


Recently I had to do some investigation on a server for a company I do some consulting for. The server was showing some odd
behavior: the only initial indication was that some processes that should have been running weren't, and attempting to start
them gave an error ("Socket already in use"). Some initial analysis showed that the system had been compromised and a rootkit
installed. That will warrant a much longer writeup later, but for the time being I thought I'd share the initial steps I took
to determine exactly what was wrong with the system.

The first thing that usually happens in a break-in like this is that the log files get wiped. However, that's not always the
case (especially if you're dealing with a script kiddie), so it's always worthwhile to check there first. `last -a` will show
you the most recent logins and where they came from. Also, if you fear a root compromise, check the .bash_history file in the
/root directory; it records the commands the root user has typed in bash. This ended up helping me out quite a bit, as the
intruder apparently didn't think about this file or didn't know about it.
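
Nothing fancy is needed for either check; a minimal sketch (paths as on this Red Hat box):

# last -a | less                  (recent logins; the originating host/IP is in the last column)
# less /root/.bash_history        (commands root has typed in bash, assuming it hasn't been wiped)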

Anyway, since this system did not have any kind of filesystem integrity tool like Tripwire or AIDE installed, I needed to
determine what (if anything) had been changed on the system. A common occurrence in these cases is for trojaned binaries to be
installed that either hide certain information or provide some sort of back door. Since this was an rpm-based distribution
(Red Hat Linux 9), I used the -V option to rpm to verify all the installed packages on the system:

# rpm -Va
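
For illustration only (this isn't output pasted from the box in question), a tampered binary typically shows up as a line like:

S.5....T    /bin/ps

where each letter flags a mismatch against the rpm database: S = size, 5 = MD5 checksum, T = mtime.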

This will report on any file belonging to an rpm package that has been modified since the package was installed. Modified
can mean a different checksum, a different owner or group, or different permissions. Obviously, some things like configuration
files are going to show up here, but what we're really concerned about is binaries. This check showed that /bin/ps and
/usr/bin/top were different from the original rpm versions. These programs are usually trojaned in order to hide the
existence of certain processes. This system had the 'apt' command installed from www.freshrpms.net, so it was a simple
matter to fix this. First, we find out what rpm package gave us /bin/ps:

# rpm -q --whatprovides /bin/ps
procps-3.2.0-1.1

Then we use the apt command to reinstall procps:
# apt-get install procps --reinstall

Repeat the above steps for /usr/bin/top.
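
For what it's worth, /usr/bin/top should map back to that same procps package, so the reinstall above most likely covered both;
the confirming query has the same shape:

# rpm -q --whatprovides /usr/bin/top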

A side note: initially, this process failed, because /bin/ps could not be overwritten. It had nothing to do with regular
filesystem permissions, because I was root and root owned the file. However, the filesystem is ext3, and there are some
ext2/ext3-specific file attributes that can be set. You can view these attributes with the 'lsattr' command and change them
with the 'chattr' command. Read the man pages on each command to find out what the flags are. In this case, /bin/ps had been
flagged as undeletable. I had to first issue this command:

# chattr -suSiadAc /bin/ps

to turn all of those attributes off. Then I could overwrite it.
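
In practice the check only takes a second; a sketch (the flag letters here are hypothetical, look at lsattr's actual output):

# lsattr /bin/ps          (look for letters such as 'i' or 'u' in the flags column)
# chattr -i /bin/ps       (clear just the immutable flag, if that happens to be the one set)

The -suSiadAc shotgun above simply clears everything at once.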

Now that we have verified all of the rpm packages on the system, it's time to approach the problem from the other direction: we
have verified that all the files we know SHOULD be on the system are good. How about the files on the system that AREN'T part of
an rpm package? In the absence of a filesystem integrity tool, this is the only way to find out what else might have been
installed on the system.

A note on timestamps: some of you may be saying "Just use the find command and search for files with a modification time within
the last week or so." That doesn't work, because most of the rootkits and trojans that get installed carry timestamps backdated
months or years into the past. So you can't trust timestamps.

The '--whatprovides' option to rpm will return "File /bin/whatever is not owned by any package" if that file isn't associated
with an rpm. So we can run this command against every file on the system to determine which files are not owned by an rpm package.
Now, before we start this, we have to know a little bit more about the system. Obviously, if it has a large amount of data on it,
none of those data files are going to be owned by an rpm package, so we're not going to test every single file. We're going to
look in the most likely places and ignore the stuff we know is data. Looking back on this, I really need to make this command
smarter: if we're just looking for binaries that don't belong, we can narrow the search to those and not waste so much of rpm's
time. Anyway, here is the command I used:

# for file in `find /usr -print`; do rpm -q --whatprovides "$file"; done | grep "not owned by any package"

I did this for the directories /usr, /sbin, /bin, /boot, and /lib. It told me what was not associated with any rpm package.
I ended up finding a number of files that weren't supposed to be there, including a completely different ssh server.
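
As an aside, the "smarter" version I keep meaning to write would probably look something like this (an untested sketch; only
regular, executable files are checked, and rpm's exit status does the filtering):

# find /usr /sbin /bin /boot /lib -type f -perm -u+x | \
    while read f; do rpm -qf "$f" >/dev/null 2>&1 || echo "$f: not owned by any package"; done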

Did that find every potential malicious file on the system? Nope. Something nasty could be lurking in /home or /dev or /etc.
Now that I trust /bin/ps, I can examine the process list and ensure that I know exactly what each of the running processes is for.
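
One small trick that helps with that (a sketch; <PID> stands for whatever process you can't account for): ask /proc which
binary a process is actually running, now that ps itself can be trusted:

# ps auxww
# ls -l /proc/<PID>/exe        (the exe symlink points at the binary on disk)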

So I've verified all the rpms, and I've done my best to examine the files on the system that aren't in an rpm package. The final
step? The kernel. `lsmod` didn't show me any modules that I was unsure of, but I'm not familiar with all the various rootkits, so
I can no longer trust this kernel. It's due for an upgrade anyway; it's 2.4.20-8, which has a local root vulnerability (which is
how I'm assuming the attacker got root in the first place). So I used apt-get to pull in the latest kernel for RH9 (because
`apt-get upgrade` will do all packages EXCEPT kernel packages) and then rebooted into the new kernel.
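
Trivial, but worth doing right after the reboot (nothing exotic here, just confirming the box really came up on the new kernel):

# uname -r     (should now report the new kernel version rather than 2.4.20-8)
# lsmod        (re-check the module list under the new kernel)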

Is this good enough? No way. The only option here is to reload the system from scratch. However, these steps at least gave me
some confidence that nothing is currently running on the system that I don't know about.

So exactly what happened? I'm not 100% sure. I know that originally the system was running an old version of apache, which is how
I assume the initial attacker got in. Or just simple password guessing; there are a lot of accounts on this box and ssh was
allowed from everywhere. The logs didn't show anything of this nature, but we already discussed the trustworthiness of the logs.
Once in, a local user could download an exploit for the older kernel and use that to get root.

Steps I took:
- upgraded apache to the latest version
- verified the rpms
- made a stab at searching for non-rpm-owned files
- restricted ssh to their local network and one of my external IPs so I could still log in (see the sketch below)
- upgraded the kernel
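
The ssh restriction was nothing exotic. One way to do it (a sketch with made-up addresses; Red Hat's sshd honors tcp_wrappers)
is hosts.deny/hosts.allow:

# cat /etc/hosts.deny
sshd: ALL
# cat /etc/hosts.allow
sshd: 192.168.1. 203.0.113.45

An iptables rule restricting port 22 to the same addresses would work just as well.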

The system was only down for the time it took to reboot into the new kernel. Unless the attacker is using something other than
ssh to log in, I don't know how else they could get in now, seeing as all packages are up to date. All in all, an interesting
experience. I've scheduled a complete reload of this box for this week.

Anyone else have interesting forensic analysis stories to share?

--
Adam Haeder
Vice President of Information Technology
AIM Institute
adamh at omaha.org
(402) 345-5025 x115
PGP Public key: http://www.haederfamily.org/pgp.html



