Archive

Archive for March, 2009

RSS Feeds

2009/03/26 Comments off

Something I’ve been meaning to do for a while is document and keep a list of all the RSS feeds I’ve collected over the years, mainly because I can’t remember them all. Initially I had a mild panic as I couldn’t find any of the URLs for the feeds I’ve got configured through Outlook 2007; the usual guesstimate of right-click > Properties failed me. For those needing to do the same, ‘File > Import and Export’ is your friend. Select ‘Export RSS Feeds to an OPML file’ and the rest should be self-explanatory to get all of your RSS info out in XML form.
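As a quick sanity check once the export is done (assuming you move the OPML file somewhere with grep handy; the filename below is just a placeholder), something like the following will pull out the feed URLs:
$ grep -o 'xmlUrl="[^"]*"' feeds.opml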

With panic over I’ve transferred all relevant links to the RSS page over at Infosanity. Hopefully you might find a few unknown gems amongst the list. If you’ve got some good feeds I’m missing please let me know and I’ll update accordingly.

Andrew Waite

Categories: Uncategorized

Dark Reading: DIY security lab

2009/03/24 Comments off

As I’m currently setting up and playing with my home research lab, this article from Dark Reading caught my attention. The article doesn’t provide too much ‘new’ material to those that have researched security labs even in minimal depth, but it does focus on how security labs can provide cheap training to keep your skills sharp during the current economic climate. I don’t want to paraphrase the article as it is all fairly self-explanatory, but for those considering how to use a proposed or existing lab, John Sawyer’s article suggests the following possibilities:
Before everyone signs off on the security testing lab, however, you need to answer several questions to determine the design and purpose of the lab. They include:

  • Is the lab just for testing new security tools and exploits in a controlled environment?
  • Will the lab be home to staged cyberwarfare, where multiple staff members are involved as either attackers or defenders?
  • What about mock incident-response scenarios, where one team member “hacks” a system or pretends to be a disgruntled employee while the others are left trying to put the pieces back together?

The article does go on to suggest different hardware and systems for various flavours of labs, but nothing particularly mind-bending (VMware for the basics, more hardware required for larger labs, etc.).

Andrew Waite

Categories: Lab

Sec610 Reverse Engineering Malware Demo

2009/03/23 1 comment

I spent a very interesting hour with Lenny Zeltser (and others) around a week ago at a live demo of part of Lenny’s Sec610 course. For those interested in taking the course, or in malware in general, I’d suggest that if the demo is a representative sample of the course then you’re likely to really enjoy it. If you’re interested, the webcast session was recorded; I’m not going to provide the link here as I don’t know if it is intended for public consumption, but I’m sure if you contact SANS they’ll be able to hook you up.

I don’t want to give too much away, but the demo session focused on reversing an unfamiliar binary: a dummy MSN application for password harvesting. A lot of the overall tools and theory would have been fairly straightforward for anyone with knowledge in this area; basic RE tools (VMware, OllyDbg, Wireshark, etc.) were covered where relevant. The demo also focused on some more specialised and less well-known (at least to me) tools, mostly system-monitoring utilities and snapshot/status-gathering tools used to get a better feel for what the malware was up to.

The main utilities that caught my attention were fakeDNS and MailPot; these tools are designed to fake standard services so that the malware can communicate with external sources in a safe environment. They come as part of the Malcode Analysis Pack distributed by iDefense. Until this point I have been using fully-blown (virtual) servers to run sandboxed DNS, SMTP, etc. services for malware analysis; I’m hoping these utilities will reduce the implementation time required for a specific analysis, leaving more time and resources available to focus on the malware itself.
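For anyone without the iDefense tools to hand, a very rough approximation of the same idea can be cobbled together with stock utilities: answer every DNS query with your analysis box’s address and listen on the ports the sample expects to talk to. The sketch below assumes the analysis host sits at 10.0.0.211 and has a recent dnsmasq and netcat installed; it is far cruder than fakeDNS/MailPot (the netcat listener doesn’t actually speak SMTP), but it illustrates the concept:
Analysis $ dnsmasq --no-daemon --address=/#/10.0.0.211
Analysis $ nc -lp 25 > smtp-capture.txt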

Andrew Waite

BBC, Botnet, Ethical, Legal?

2009/03/13 1 comment

This story seems to be everywhere at the moment. It appears that the BBC has ‘investigated’ the impact of botnets by hiring a 22,000-strong herd and ‘testing’ on those systems, which are still 22,000 compromised, private machines. The original BBC article is here.

Many sites (The Register and The Guardian among them) have asked the question as to whether this is legal. The BBC article claims that:
‘If this exercise had been done with criminal intent it would be breaking the law.’

Although several places have pointed out that criminal intent is not required for a criminal act (IANAL so please don’t quote me on that).

The ‘ethical’ botnet/virus/trojan/etc. has been debated for many years (discussed in Aggressive Network Self-Defense and debated by the Tipping Point team during their analysis of Kraken). Personally I think it speaks volumes that the technical experts stop short of the actions taken by the BBC, while the journalists blow through without compunction.

Will be interesting to see how this plays out.

Andrew Waite

Categories: Legal, Malware

Example PCAP files

Just a quick one this time around, as it is mostly a reminder to take a closer look once I get some free time….

In a recent post on the SANS forensics blog (well worth a read in its own right), Dave Hull links to a page on the NetworkMiner SourceForge site that contains many links to publicly accessible .pcap files for training, analysis and general packet-fu fun. Direct link here.

This should provide a wealth of real-world packet captures for some realistic training and analysis. If you can spare the time, take a look.
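If you do grab a few of the captures, a quick first pass with tcpdump or tshark (the filename below is just a placeholder) is a handy way to see what you’re dealing with before loading anything heavier:
$ tcpdump -nn -r sample.pcap | head
$ tshark -q -z conv,tcp -r sample.pcap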

Andrew Waite

Categories: InfoSec, Lab

dd, netcat and system recovery

Simple scenario
A Linux server (Debian in this case) has run out of hard disk capacity (4GB) and needs to be migrated to a larger-capacity hard drive (6GB). This should be simple; VMware even provides a nice method to merely expand the virtual hard disk’s capacity. However, I’m doing this for the purposes of practice and training with various tools, so I’m going to assume we’re working with physical hardware and that this option isn’t available (despite the fact the machines being used are virtuals within my lab, I know…).
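For reference, if you did just want to grow the virtual disk, something along the lines of the below run against the powered-off VM’s disk should do it (the .vmdk name is obviously made up):
$ vmware-vdiskmanager -x 6GB debian.vmdk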

(N.B. if following these steps on your own hardware, substitute all IP addresses, devices and file paths as relevant to your environment. This is just the process that worked in my scenario; if working with your own equipment, ensure that you know the impact of each command before running anything, your mileage may vary. If unsure, contact an expert for further advice.)

Servers involved

  • Debian Server: 10.0.0.8
  • Acquisition Server: 10.0.0.211

Data Acquisition
To transfer the data from the original server I used only free and freely available tools. To start, the Debian server was booted from CD with Knoppix (note: nearly any Linux live CD would have been suitable for this task; previously I would have used Helix, but this is unfortunately no longer free).

Once booted, an image was collected via the venerable dd and transferred to the acquisition server via the equally venerable netcat (start the listening side on the acquisition server first, then kick off dd on the Debian box):
Acquisition $ nc -lp 2000 | dd of=/mnt/sda1/debian-sda1.dd
Debian $ dd if=/dev/sda1 | nc 10.0.0.211 2000

I always name my images with the device or mount point that was captured. It is easy to forget further down the line whether you had imaged a partition or an entire block-level device, and this is the best way I have found to document what is contained in the image.

Depending on your connection speeds and hard disk capacities this transfer can take a long time; in my case the initial image capture took a little under 30 minutes. What can be unnerving the first few times through the process (especially if you are working under time constraints) is that the nc/dd combination does not provide any progress status.
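A couple of workarounds I’ve since come across (not used in this particular run): drop pv into the pipeline if the live CD has it, or send dd a USR1 signal from a second console to make it print its progress so far:
Debian $ dd if=/dev/sda1 | pv | nc 10.0.0.211 2000
Debian $ kill -USR1 $(pidof dd)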

One tip I’ll share is to ensure the netcat sessions are initiated via console sessions. In previous scenarios I have been remotely SSH’d to one of the servers and initiated the image transfer, only to have my remote session time out and kill the dd/nc processes 😦 not a nice thing to experience…
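If working remotely really is unavoidable, starting the transfer inside a screen session (assuming screen is available on the box) means a dropped SSH connection won’t take the dd/nc processes with it; detach with Ctrl-a d and re-attach later with screen -r imaging:
$ screen -S imaging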

Once the image transfer completes you can verify the integrity of the acquired image by hashing; as people have started discovering issues with some hashing algorithms, it is worth hashing with several algorithms for complete peace of mind. In this case I used the utilities md5sum and sha1sum, as forensic-level integrity isn’t a concern in this scenario.
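In practice that just means hashing the image on the acquisition server and comparing against hashes taken of the source partition, something along the lines of the below (paths as per this scenario):
Acquisition $ md5sum /mnt/sda1/debian-sda1.dd
Acquisition $ sha1sum /mnt/sda1/debian-sda1.dd
Debian $ dd if=/dev/sda1 | md5sum
Debian $ dd if=/dev/sda1 | sha1sum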

Prepare new drive
At this point I powered down the Debian server, deleted the original hard drive, created a new drive with a larger capacity and rebooted, again with Knoppix. The drive was partitioned with fdisk; I’ll not go into specifics as there are plenty of great tutorials on partitioning available elsewhere, but a rough outline of the session is below.
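For completeness, the fdisk session amounts to little more than the following; the exact keystrokes will depend on your disk layout, so treat it as a sketch rather than a recipe:
$ fdisk /dev/sda
(n to create a new primary partition, accept the defaults to use the whole disk, then w to write the table and exit)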

Replicate Data
The next task is to copy the data back from the acquisition server; this is basically a reverse of the initial dd/nc commands:
Debian $ nc -lp 2000 | dd of=/dev/sda1
Acquisition $ dd if=/mnt/sda1/debian-sda1.dd | nc 10.0.0.8 2000

Configure server to use new hard disk
At this point the re-migrated data can be mounted and utilised, but it is currently not bootable and its filesystem will not take advantage of the additional storage capacity. The latter of these issues is easily dealt with: again from within a Knoppix boot, running the command below will expand the filesystem to its full capacity:
$ resize2fs /dev/sda1
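If resize2fs complains that the filesystem hasn’t been checked recently, a forced fsck beforehand should keep it happy:
$ e2fsck -f /dev/sda1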

To allow the system to boot from the new drive, GRUB was re-installed with the command below:
$ grub-install --root-directory=/mnt/sda1 /dev/sda
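This assumes the new partition is already mounted at /mnt/sda1; if Knoppix hasn’t mounted it for you, mount it first:
$ mount /dev/sda1 /mnt/sda1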
And with that the system boots as before, but with the additional capacity required to continue operating in a useful manner.

Andrew Waite

Categories: Lab, VMware