Change Detection
Computer Security Lecture, Dr. Lawlor
One of the hardest things about post-intrusion recovery is figuring
out how much access the attackers obtained, and what they might have
changed. For example, on UNIX if I have the current directory
. in my PATH (like PATH=".:$PATH"), and there's an executable named
"ls" in the current directory, running "ls" runs that executable
instead of /bin/ls. It gets particularly confusing if "./ls"
forwards your request on to the real ls, but then redacts the
results to hide itself and anything else it wants to hide:
#!/bin/sh
# Rootkit style file hider.
/bin/ls "$@" | grep -v "ls" | grep -v "redacted"
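To see this attack in action, here's a small demo you can run in a scratch directory (/tmp/pathdemo and the filenames are just placeholders for this sketch):

```shell
# Build a rootkit-style ls wrapper in a scratch directory.
mkdir -p /tmp/pathdemo
cd /tmp/pathdemo
cat > ls <<'EOF'
#!/bin/sh
# Forward to the real ls, but redact anything containing "secret".
/bin/ls "$@" | grep -v secret
EOF
chmod +x ls
touch secret.txt visible.txt
PATH=".:$PATH" ls   # runs ./ls, which hides secret.txt from the listing
```

With "." first in PATH, the shell finds ./ls before /bin/ls, and the victim never sees secret.txt.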
This is of course much worse if the attackers replaced /bin/ls with
a modified version along these lines; in that case the best you can
do to detect this sort of self-hiding rootkit is to look for
inconsistencies, for example in file sizes, or by doing
unconventional operations like "echo *" instead of "ls". (Ken
Thompson's 1984 paper "Reflections on Trusting Trust" gives an
explicit example of how a compiler could be backdoored to not only
backdoor an executable, but backdoor the compiler itself, a sort of
self-propagating backdoor with no sign in the source code.)
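The "echo *" trick works because glob expansion is done by the shell itself, with no external ls binary involved for a trojaned ls to intercept. A quick sketch (using a hypothetical /tmp/demo directory):

```shell
# The shell expands * itself, so a trojaned ls can't redact the result.
mkdir -p /tmp/demo
cd /tmp/demo
touch normal.txt redacted.txt
ls        # a rootkit-style ls wrapper could hide redacted.txt here
echo *    # expanded by the shell: both filenames still appear
```

Comparing the two outputs is exactly the kind of inconsistency that exposes a self-hiding rootkit.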
After an intrusion, the only way to be 100% confident you've truly
fixed the machine is to reinstall the OS and set up services again,
but this means much more downtime than just deleting the bad stuff
from your existing machine. If we could detect changes to
critical OS files, it would be much easier to verify that we're OK
again, although there is the chicken-and-egg trust problem that a
sophisticated attacker could have also subverted the change detector
program so it lies and claims everything is OK.
Windows File Protection
The basic operation of Windows File Protection (WFP) in Windows
2000 / XP is:
- An ancient game installer decides it needs to overwrite your
version of the network access library c:\Windows\System32\wininet.dll
with an "updated" version, which is actually a decade out of
date.
- The installer overwrites this DLL.
- WFP gets a directory change notification on system32, and sees
the new DLL.
- If the installer killed off WFP first, you'll need to run
"sfc /scannow" to force an immediate re-scan.
- WFP tries to automatically overwrite the new DLL with a copy
saved in c:\Windows\System32\dllcache. It should be there
unless the disk was low on space, or Windows was installed over
the network.
- If WFP succeeds, it logs an event to the System event log
where nobody will ever see it (unless they happen to run
"eventvwr")
- If WFP fails, it tries to fetch the DLL from the network,
which won't work because that DLL is how you access the
network. It will then ask you to insert a Windows
install CD, which is often impossible because the machine does
not have a CD drive and machines don't ship with install CDs,
and even if you do have a drive and install media, WFP often
rejects it anyway, a sort of autoimmune reaction.
I personally feel like this design is ridiculous: if Windows
doesn't want you to overwrite a DLL, it should stop you from
overwriting the DLL, instead of letting the write succeed and then
re-overwriting the DLL with the old
version. Evidently the choice was made to break-and-fix so
that ancient installers wouldn't get an error when they tried to
copy DLL files into your system. (Many of the worst features
in Windows are only there for backward compatibility.)
Windows Vista and later use Windows
Resource Protection to prevent critical files from being
overwritten in the first place, by using access control lists,
although these only work on the NTFS filesystem, and since an
Administrator can change those ACLs, it's still not 100% reliable.
Windows 10 supports Device
Guard, which uses a hypervisor-type approach to put at
least kernel memory out of reach of all programs, including admin.
OS X System Integrity Protection
Mac OS X 10.11 and higher use System Integrity
Protection to prevent even root from modifying critical files
in /System, /bin, and /usr. If you want to modify those files,
you either need an official Apple signature on the new files, or
you need to reboot into recovery mode and disable SIP.
Tripwire
Most UNIX systems have no default file protection scheme, but
Tripwire works on most systems, and is satisfyingly paranoid.
There's a good step-by-step on installing
Tripwire at DigitalOcean.
sudo apt install tripwire
We need to edit the /etc/tripwire/twpol.txt policy file (removing
entries for /proc and for nonexistent files under /root). We then
rebuild the signed binary policy file /etc/tripwire/tw.pol using:
sudo twadmin -m P /etc/tripwire/twpol.txt
You then build the baseline database using:
sudo tripwire --init
We can then check if things have changed using:
sudo tripwire --check
Tripwire's results are cryptographically signed to make it harder
for an attacker to fabricate them. It does suffer from the
usual shortcomings of a "detect changes" tool, rather than a
"prevent changes" tool.
Git
My personal favorite tool for easy change detection is the version
control system git.
sudo su
cd /
git init
chmod 700 .git
git add etc usr/bin usr/lib
git commit -m "baseline" .
(That chmod is quite important, since git will make copies of all
the files you commit, so sensitive files like /etc/shadow need to be
protected.)
The features I really like include "git diff /etc/hosts" so I can
see exactly how I broke the hosts file, or "git checkout /etc/hosts"
to restore the original version. git also makes it easy to
push everything up to a remote machine for more reliable checking or
disaster recovery.
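The same workflow can be tried safely in a scratch directory (here /tmp/gitdemo and a fake "hosts" file stand in for / and /etc/hosts):

```shell
# Commit a baseline, tamper with a file, then diff and restore it.
mkdir -p /tmp/gitdemo
cd /tmp/gitdemo
git init -q
echo "127.0.0.1 localhost" > hosts
git add hosts
git -c user.name=demo -c user.email=demo@example.com commit -q -m "baseline"
echo "203.0.113.9 bank.example.com" >> hosts   # simulate tampering
git diff hosts          # shows exactly what changed
git checkout -- hosts   # restore the committed version
```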
git is in some ways not ideal for this: it makes no attempt to
protect .git against manual modification, and it uses the SHA-1 hash
algorithm with known collisions so an undetected file replacement is
possible.
An easy way to prevent changes on ext filesystems is "chattr +i
foo.txt", which marks the file "foo.txt" immutable. No writes,
renames, or deletes will succeed, even as root, until you do a
"chattr -i foo.txt".