
Archive for the ‘Incident Response’ Category

Daily Paranoia

2011/01/06

As a security guy I find my paranoia levels are slightly higher than most; a little something inside me picks up on the things that general users miss, the things that indicate something isn't right. This morning was no exception…

After acquiring coffee, I opened the morning inbox, which presented the following:

Nagios Email Alerts

These are email alerts sent by the monitoring system Nagios, running within InfoSanity's network. The NRPE check_users parameters have been modified from the Debian defaults to be more paranoid: trigger a warning alert when a single user is logged into the server, and go critical if there is more than one. So, from this, someone is logged into the web server, and it isn't me.
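
For reference, the paranoid thresholds live in the NRPE configuration on the monitored host. This is a sketch from memory rather than a copy of my config (Debian paths assumed), and check_users threshold semantics differ slightly between plugin versions, so verify against your own install:

# /etc/nagios/nrpe.cfg -- Debian's default is far more relaxed (roughly -w 5 -c 10)
command[check_users]=/usr/lib/nagios/plugins/check_users -w 0 -c 1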

Feeling the onrush of panic, I log into the server and chuck commands at a shell to see who is violating my system. last, who and /var/log/auth.log all showed that no one had accessed the server at the time of my alerts. Everything good? Not if you're paranoid; I was starting to smell a rootkit causing the system to lie to me.
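
For anyone wanting to replicate the quick checks, they were along these lines (the auth.log path is the Debian default):

last -a                       # login history from wtmp
who                           # sessions currently logged in
grep sshd /var/log/auth.log   # recent authentication events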

There are a couple of anti-rootkit utilities that have served me well in the past: chkrootkit and rkhunter. Wondering which one to run? As we've already established I'm paranoid, so the answer was both. And both gave a clean bill of health to the system. Now I'm really getting paranoid.
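
Both tools are a one-liner when run as root; rkhunter benefits from refreshing its data files first (the flags below are from the versions I've used, check your man pages):

chkrootkit
rkhunter --update        # refresh check data, needs network access
rkhunter --check --sk    # run all checks, --sk skips the keypress prompts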

Wanting to see if I've missed a trend, or if the issue is present with other servers in the environment, I log into the Nagios interface for a more detailed look, and find the answer:

Nagios webUI notifications

Anyone spotted it? Yep, the service alert went critical when the check's socket timed out (a network issue), and then dropped to warning, showing a single user, when connectivity returned; which was correct, as I'd forgotten to log out of the console from my last VMware session. Stepping down from high alert…

Moral of the story?

Don't respond to system alerts before finishing the first coffee of the morning. The events weren't a total loss though: besides getting my heart rate up and blood flowing, it was a good(ish) refresher for incident response (you can't beat the adrenaline rush of responding to an incident, real or imagined), and rkhunter uncovered a potential weakness in the server configuration which has since been corrected (no, I'm not telling you what).

–Andrew Waite

Oh yeah, had I just opened the email, I could have avoided the whole situation:

Nagios Email Details

Categories: Incident Response, InfoSec

Fuzzy hashing, memory carving and malware identification

I've recently been involved in a couple of discussions about different ways of identifying malware. One of the possibilities that has been brought up a couple of times is fuzzy hashing, intended to locate files based on similarities to known files. I must admit that I don't fully understand the maths and logic behind creating fuzzy hash signatures or comparing them. If you're curious, Dustin Hurlbut has released a paper on the subject; Hurlbut's abstract does a better job than I could of explaining the general idea behind fuzzy hashing:

Fuzzy hashing allows the discovery of potentially incriminating documents that may not be located using traditional hashing methods. The use of the fuzzy hash is much like the fuzzy logic search; it is looking for documents that are similar but not exactly the same, called homologous files. Homologous files have identical strings of binary data; however they are not exact duplicates. An example would be two identical word processor documents, with a new paragraph added in the middle of one. To locate homologous files, they must be hashed traditionally in segments to identify the strings of identical data.

I have previously experimented with a tool called ssdeep, which implements the theory behind fuzzy hashing. To use ssdeep to find files similar to known malicious files, you run it against the known samples to generate signature hashes, then run it against the files you are searching, comparing them with the previously generated signatures.
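
As a rough sketch of that workflow (file and directory names are purely illustrative):

# generate fuzzy hashes for the known-bad samples
ssdeep -b known_samples/* > known.ssd

# recursively compare a directory tree against the stored hashes
ssdeep -b -m known.ssd -r /path/to/search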

One scenario I've used ssdeep for in the past is trying to group malware samples collected by honeypot systems based on functionality. In my attempts I haven't found this to be a promising line of research: because different malware typically has the same or similar functionality, most of the samples showed a high match score whether they were actually related or not.

Another scenario I tried was running ssdeep against a clean WinXP install alongside a malicious binary. In the tests I ran I didn't find this to be a useful process either: given the disk capacity available to modern systems, running ssdeep against a large HDD is time consuming, and it can also generate a good number of false positives when run against the OS itself.

After recently reading Leon van der Eijk's post on malware carving I have been mulling over a method for combining techniques to improve fuzzy hashing's ability to identify malicious files, while reducing the number of false positives and the workload required of an investigator. The theory was that, while any unexpected files on a system are undesirable, if they aren't running in memory then they are less threatening than those that are active.

To test the theory I infected an XP SP2 victim with a sample of Blaster that had been harvested by my Dionaea honeypot and dumped the RAM following Leon's methodology. Once the image was dissected by foremost I ran ssdeep against the extracted resources. ssdeep successfully identified the malicious files with a 100% match against the malicious sample. So far so good.
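
The carving and comparison steps boiled down to something like the following; file names are illustrative and Leon's post covers the memory capture itself:

# carve executables out of the memory dump
foremost -t exe -i memdump.img -o carved/

# fuzzy-hash the known Blaster sample and compare the carved files against it
ssdeep -b blaster_sample.bin > blaster.ssd
ssdeep -m blaster.ssd -r carved/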

Given my previous experience with ssdeep I ran a control test, repeating the procedure against the dumped memory of a completely clean install. Unsurprisingly the comparison did not find a similar 100% match; however, it did falsely flag several files and artifacts with a 90%+ match, so there is still a significant risk of false positives.

From the process I have learnt a fair deal (reading and understanding Leon's methodology was nothing compared to putting it into practice), but I don't intend to use the methods and techniques attempted here in real-world scenarios any time soon. Similar, and likely faster, results can be achieved by following Leon's process completely and running the files carved by foremost through an anti-virus scan.

Being able to test scenarios similar to this was the main reason for building up my test and development lab, which I have described previously. In particular, had I run the investigation on physical hardware I would likely not have rebuilt the environment for the control test with a clean system, losing the additional data for comparison; virtualisation snapshots made re-running the scenario trivial.

–Andrew Waite

P.S. Big thanks to Leon for writing up the memory capture and carving process used as a foundation for testing this scenario.

Expert speaker session at Northumbria University

Last week I had the pleasure of being asked to speak at Northumbria University, presenting to students of the Computer Forensics and Ethical Hacking for Computer Security programmes. As I graduated from Northumbria a few years ago it was interesting to come back to see some familiar faces and have a look at how the facilities had developed.

Despite the nerves of having to speak in front of a crowd I really enjoyed the event, especially as the other speakers were excellent and I enjoyed their sessions. The event kicked off with Dave Kennedy, a soon-to-retire member of Durham Police's computer crime unit. Dave talked about his personal experience with a couple of high profile cases, explaining some of the groundwork and behind-the-scenes activity that isn't known to the general public. I found the information interesting, but also disturbing: given the nature of the material handled by Dave and his department, I can safely state that I wouldn't want much experience in the area.

Next up was Phil Byrne, an internal auditor for HM Revenue and Customs (HMRC). For those that don’t know, HMRC were/are at the centre of one of the UK’s largest data loss stories in 2007 after CDs containing approximately 25 million child benefit records were sent, unencrypted, by standard post and did not reach their intended destination (some backstory here). Phil talked openly about the incident, discussing both the incident itself and the changes made in response. One of Phil’s comments has stayed with me (if I’m mis-quoting someone let me know):

If you put people into the process, something will go wrong at some time

Third to the stand was Gary Witts, owner of a managed services company specialising in on-line backups. The talk was very in-depth and had some interesting content, but from my perspective it felt more like a sales pitch than a technical discussion of secure backups' place within a security posture.

I took the fourth and final slot of the day, which left me in the unenviable position of standing between around 100 students and the pub; this didn't help my usual rapid-fire presentation style. My presentation took a different focus from the previous sessions, discussing some of the real-world security incidents that can regularly be encountered, and some advice on handling the incidents in question. I also discussed my findings from honeypot systems, introducing a less common method for monitoring an environment for malicious activity. Assuming the feedback I've received is genuine, the presentation seems to have been well received.

From a student's perspective, Tom was in the audience and has been writing up his take on the event in a series of blog postings. Tom also recorded the talks; for anyone interested, a direct link to my session is available here.

Andrew Waite

AV killing with powershell

A colleague recently introduced me to scripting with PowerShell. After seeing a couple of examples of its strength for handling legitimate administration tasks, my devious side came into play and I started imagining havoc in my head.

As a starting project for getting to grips with PowerShell basics I thought I'd try a proof of concept to replicate Meterpreter's ability to disable AV and other defence mechanisms with its getcountermeasure function. I love Meterpreter, but sometimes you need to work with more primitive native tools, and as PowerShell is starting to be included by default on Windows systems it is now one of those 'primitive' tools. My theory was that this should give me a bit of a challenge, without jumping in at the deep end.

Well, I was wrong; showing the strength of PowerShell, this proved not to be a challenge at all. The code below reads a list of unwanted processes from a text file and kills those processes, all in four lines of code (I'm told this could be shortened at the expense of readability):

# read the list of AV process names to kill (one name per line, without the .exe extension)
$avprocs = Get-Content AVprocs.txt

# kill all unwanted processes
foreach ($procname in $avprocs)
{
    Stop-Process -Name $procname
}
# simples...

The next time you pop a Windows box don't despair, there's more power available than just batch scripts :D

Andrew Waite

P.S. Before anyone shouts about aiding skiddies, the above code could have some great legitimate uses as well; from automatically cleaning up infected systems to aiding productivity by adding doom.exe to the list of processes ;)

The possibilities are endless, both good and bad.

Kon Boot

I'm running behind the curve on this one, but several of my usual sources have suggested Kon-Boot as a useful addition to any security toolkit. The premise of Kon-Boot is simple: by modifying the system kernel (Windows or Linux) on boot, there is no need to know a user's password to access the system.

Kon-Boot is designed to boot via either floppy or CD, but thanks to the work of IronGeek it is relatively painless to get Kon-Boot running from USB.

Unetbootin continues to be a powerful tool; with it you can create a bootable USB drive from the Kon-Boot floppy disk image. Raymond.cc has a great guide for the process, but it ends with the limitation that Kon-Boot won't function from USB; that's where IronGeek steps into the ring with a patch. Simply extract the archive to the root of the USB drive to update chain.c32 and syslinux.cfg and you're good to go.

There are plenty of videos showing Kon-Boot in action, for example this one. I've successfully compromised a Windows 7 host, both local and domain accounts, although it can only compromise domain accounts that have previously logged onto the physical machine. Discussing the issue with a Windows admin, a couple of potential mitigations have been developed, but at this point these have yet to be put to the test.

The Linux compromise seems to be less powerful, as you log in as a new kon-usr user, albeit with UID 0 for superuser privileges. Full authentication doesn't seem to be available, however: kon-usr drops in at the command line, but KDE kicks up an authentication error when trying to start a GUI session.

I still intend to test my Kon-Boot drive against a machine with an encrypted hard drive. I'm not convinced it will work, as my current hypothesis is that the Kon-Boot kernel modifications will be attempted before the drive is decrypted. I'll update once I've been able to put the hypothesis to the test in a lab.

For the time being Kon-Boot is a permanent addition to my tool-kit, as there are plenty of scenarios that make Kon-Boot a legitimate tool for both security and non-security techies alike. Thanks to www.piotrbania.com for development and release.

Andrew Waite

ZeroWine

Zero Wine is:

an open source (GPL v2) research project to dynamically analyze the behavior of malware. Zero wine just runs the malware using WINE in a safe virtual sandbox (in an isolated environment) collecting information about the APIs called by the program.

The output generated by wine (using the debug environment variable WINEDEBUG) are the API calls used by the malware (and the values used by it, of course). With this information, analyzing malware’s behavior turns out to be very easy.

Installation was fairly simple, as ZeroWine is distributed as a QEMU virtual machine image. QEMU can be downloaded here, and ZeroWine here.

To start the ZeroWine image I use the command (change filepaths to suit your install):

>qemu.exe c:\zerowine_vm\zerowine.img -no-kqemu -L . -redir tcp:8000::8000

Once running you can access the service by pointing a browser at localhost:8000 (the '-redir tcp:8000::8000' parameter redirects the ZeroWine image's port to your local system). This provides a simple web interface to upload and analyse your malware sample:

For a test run I uploaded the most recent sample collected by my Nepenthes honeypot, MD5 hash 3c9563dacd9afe8f2dbbe86d5d0d4c5e. The generated report shows the results of ZeroWine's analysis (example below). The first section shows the behavioural analysis of the malware; this should be the most useful aspect of the ZeroWine framework. However, as the ZeroWine page itself states, the output is 'very long and, as so, hard to understand' and is unable to distinguish between system calls made by the malware and those made by the underlying analysis framework. As a result I personally find the information provided by the report less useful than it could be.

There are definitely better sources for generating automated analysis of malware samples, for example VirusTotal or CWSandbox. However, depending on how the malware sample was obtained, legal or business requirements may prevent you from releasing the sample to a third party, and not every hosted service can provide the immediate response of a local system; this means ZeroWine can still be a valid and useful tool in your arsenal.

Taking the concept forward, Jim Clausing recently released an excellent paper on setting up an automated malware analysis environment with open source tools. I haven't had a chance to try out any of Jim's suggestions, but I have read the paper and listened to the related podcast, and the recommendations are definitely on my to-do list for improving my malware analysis toolkit.

Andrew Waite

Denial of Service with Slowloris

2009/06/18

Earlier this week the ha.ckers.org blog announced the release of the Slowloris HTTP DoS tool, primarily coded by RSnake and described as 'The low bandwidth, yet greedy and poisonous HTTP client!'

The attack vector essentially works by initiating an HTTP request but never completing it, causing the handling thread to wait for the end of the request. Slowloris uses multiple threads to rapidly exhaust the web server's available network resources. The attack is effective against several web server platforms, but the most significant in terms of market share and install base are Apache 1.x and 2.x services.
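
Running the script is trivial; invocation is along these lines (option names from memory, check perldoc slowloris.pl for the exact flags in your copy):

perl slowloris.pl -dns www.example.com -port 80 -timeout 30 -num 500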

The HTTP request currently generated by Slowloris is shown below; however, before trying to use this to create IDS signatures, note that the packet contents could easily be modified to evade overly specific signatures (and overly general signatures could generate a high volume of false positives):

GET / HTTP/1.1
Host: 192.168.80.129
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.503l3; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; MSOffice 12)
Content-Length: 42

Highlighting the potential damage this technique could cause, the Internet Storm Center has already posted two diaries discussing the attack since its release. The first introduces the tool and has some rudimentary low-level analysis of the attack vector; the second discusses potential mitigation techniques, although unfortunately these are far from bulletproof at present.

Since its release I've tested the Slowloris script and, unfortunately, it's been wholly effective every time. Whilst the suggested mitigations can improve a web server's resilience to the attack, as the point of the web server is to provide services to the outside world it is near impossible to prevent a malicious attack of this nature without similarly impacting legitimate service provision. Unfortunately the malicious user has the upper hand in this battle, as the level of resources required to deny service is minuscule compared to the potential damage, and to the resources required to avoid the threat.

Even against a server modified with some of the proposed mitigations, the attacker still only required a sustained traffic flow of approximately 45Kbps, easily obtainable from even a single ADSL connection. This also means that the attack may be missed by some traditional techniques such as monitoring for unusual traffic levels, especially as other services on the server will remain responsive. Basically, if you have issues with your web services I'd recommend checking current connections to the Apache service, for example:

#netstat -antp | grep :80

The best response I've found so far is reactive: get hit with the attack, find the source IP and block it at the firewall (perimeter or host based). Not ideal, but at least I haven't yet seen this attack in the wild, so it's hard to justify modifying functional production servers to mitigate a potential attack when those modifications could deny legitimate services themselves.
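
A rough sketch of that reactive approach, building on the netstat check above (the source address below is obviously a placeholder):

# count established connections to port 80, grouped by source IP
netstat -antp | awk '$4 ~ /:80$/ && $6 == "ESTABLISHED" {split($5,a,":"); print a[1]}' | sort | uniq -c | sort -rn | head

# block the worst offender at the host firewall
iptables -I INPUT -s 203.0.113.10 -j DROP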

Life could be about to get interesting…

Andrew Waite

P.S. Twitter plug: whilst this attack vector could be significant, it was publicly available for 48 hours before I noticed any reference even in information-security-focused mainstream media. However, I had a test environment demoing the script within 60 minutes of RSnake (@rsnake) publicising the link on Twitter.

If you’re not already following the industry in real-time give it a go; @Jhaddix has a short guide to using Twitter to follow the security industry at EH-Net.

Lone Gunman & run books

Keeping with today's theme of working through a backlog, I've had two ISC diaries flagged for several months: Dealing with Security Challenges and Making the most of your runbooks.

The first is more a question of how to handle security incidents and requirements with minimal resources. This seems to be a common theme, with lots of in-house security teams complaining that resources are too tight and that priority (rightly or wrongly) is given to other business units. The summary of tips provided is too generic for my liking as most should be obvious or common-sense issues, although I’d definitely like some advice on how to actually achieve some of them. For example:

  • Set priorities; don't waste resources on unimportant or inconsequential tasks.
  • Get buy-in from other business units (especially management)
  • Stay calm
  • Request assistance if workload is too much

The second article provides more concrete advice on planning for, and managing, security incidents, and boils down to three goal markers within an incident handling framework:

  • If you don’t have written procedures or steps for handling an incident, then write some.
  • Centralised, digital procedures are more useful than paper records, allowing easy access and collaboration by multiple team members; the diary specifically recommends using a wiki.
  • Automate as much of your procedures as possible. For example, if one of the procedures is to search web server logs for specific entries, add a button or page to the wiki that automatically searches the relevant logs and returns the information. Fewer steps for responders to remember and work through, and less chance of making a mistake when under pressure (a minimal sketch of such a helper follows this list).
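
As a trivial example of the kind of helper that could sit behind a wiki button (paths and pattern handling are my own illustration, not the diary's):

#!/bin/sh
# return the most recent web server log entries matching a supplied pattern
PATTERN="$1"
grep -h -- "$PATTERN" /var/log/apache2/access.log* | tail -n 200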

Finally, the diary entry makes two points to support the case for why you want these systems in place:

  • Compliance: logs from wiki-based procedures can prove to auditors that the specified and pre-approved actions were taken during incident X.
  • Management: logs and records from past incidents can prove the value of security and incident handling teams within an organisation, or prove SLA compliance with internal and/or external clients.

Andrew Waite

Psst, you are checking the ISC diary daily, aren't you?
