I was recently pointed towards www.reportspammers.net, which is a good resource for all things spam related and is steadily increasing the quantity and quality of the information available. As much as I like the statistics that can be gathered from honeypot systems, live and real stats are even better, and the data utilised by Report Spammers is taken from the email clusters run by Email Cloud.
One of the first resources released was the global map showing active spam sources (static image below); it is updated hourly and the fully interactive version can be found here.
In addition to the global map, Report Spammers also lists the most recent spamvertised sites seen on its mail clusters. I'm undecided on the 'name and shame' methodology due to the risk of false positives, but if you're looking for examples of spamvertised sites it will prove a good resource (and one I intend to delve deeper into next time I'm bored). Just beware: sites that actively advertise via spam are rarely places that you want to point your home browser at, you have been warned.
If you're after a resource to explain spam and the business model behind it, Report Spammers could be a good starting point. It even manages to explain spam to non-infosec types who still think spam comes in tins. Keep this in mind next time you need to run another information security awareness campaign.
— Andrew Waite
Yesterday I got curious:
My initial count was five, but I realised I'd missed some when the responses came in. In a rough sorting from most common to least across my (admittedly small) sample set, the available contact methods are:
- Email (always multiple accounts per person)
- Instant Messaging (MSN, AOL, etc.) (usually multiple per person)
- Twitter (often multiple accounts)
- Google Wave
Which seems to be a lot, and from the responses I seem to be behind the curve in contactability. All this makes me wonder: in a world where outdated and unpatched client applications are a growing intrusion vector, do we really need all these ways for people and systems to communicate with us?
While you're thinking: are there any of your communication tools that you could do without? If you stopped signing into MSN (for example), would you lose contact with anyone who couldn't reach you via a different communication channel?
I’m not sure if there is a purpose to these thoughts or the very unscientific findings, but I’ve been thinking about this for a while so thought I’d share.
— Andrew Waite
P.S. thanks to those who participated, you know who you are.
As part of a new and improved environment I've just finished building a new Dionaea system. Despite the ease with which I installed my original system, I received a lot of feedback that others had a fair amount of difficulty during the system build. So this time around I decided to pay closer attention to my progress to try and assist others going through the same process.
Unfortunately I'm not sure I'm going to be able to offer as many pearls of wisdom as I originally hoped, as my install went relatively smoothly. The only real problem I hit was that, after following Markus' (good) documentation, my build didn't correctly link to libemu. Bottom line: keep an eye on the output of ./configure when building Dionaea. In my case the parameters passed to the configure script didn't match my system, so they needed to be modified accordingly.
On the off chance that it’s of use to others (or I forget my past failures and need a memory aid) my modified ./configure command is below:
./configure \
    --with-lcfg-include=/opt/dionaea/include/ \
    --with-lcfg-lib=/opt/dionaea/lib/ \
    --with-python=/opt/dionaea/bin/python3.1 \
    --with-cython-dir=/usr/bin \
    --with-udns-include=/opt/dionaea/include/ \
    --with-udns-lib=/opt/dionaea/lib/ \
    --with-emu-include=/opt/dionaea/include \
    --with-emu-lib=/opt/dionaea/lib/ \
    --with-gc-include=/usr/include/gc \
    --with-ev-include=/opt/dionaea/include \
    --with-ev-lib=/opt/dionaea/lib \
    --with-nl-include=/opt/dionaea/include \
    --with-nl-lib=/opt/dionaea/lib/ \
    --with-curl-config=/opt/dionaea/bin/ \
    --with-pcap-include=/opt/dionaea/include \
    --with-pcap-lib=/opt/dionaea/lib/ \
    --with-glib=/opt/dionaea
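Given that my original problem was a silently missing libemu, a quick sanity check after building is to confirm the binary is actually linked against it. A rough sketch (the binary path assumes the /opt/dionaea prefix used in my configure line above; adjust to your own install):

```shell
# Verify the built dionaea binary picked up libemu.
# Path assumes the /opt/dionaea prefix used in the configure line above.
ldd /opt/dionaea/bin/dionaea | grep -i emu
# If grep prints nothing, libemu support didn't make it into the build --
# revisit the --with-emu-include/--with-emu-lib arguments to ./configure.
```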
— Andrew Waite
<update 20100606> A new Dionaea build encountered a problem with libemu; the ./configure above has been edited to reflect the additional changes I required to compile with libemu support. </update>
Last night (2010-01-20) I had the pleasure of attending the launch event for NEBytes.
North East Bytes (NEBytes) is a User Group covering the North East and Cumbrian regions of the United Kingdom. We have technical meetings covering Development and IT Pro topics every month.
The launch event was run in conjunction with the SharePoint User Group UK (SUGUK), so it was no surprise that the first topic of the night covered SharePoint 2010, delivered very enthusiastically by Steve Smith. I've got no experience with SharePoint so can't comment too much on the material, but from the architectural changes I got the impression that 2010 may be more secure than previous versions, as the back-end is becoming more segmented, with different parts of the whole having discrete, dedicated databases. While it might not limit the threat of a vulnerability, it should reduce the exposure in the event of a single breach.
Steve also highlighted that there is some very granular accountability logging, in that every part of the application and every piece of data receives a unique 'Correlation ID'. The scenarios highlighted suggest that this allows for in-depth debugging to determine the exact nature of a crash or system failure; by the same token it should provide some good forensic pointers when investigating a potential compromise or breach.
Again viewing the material from a security standpoint, I was concerned that Steve's walkthrough defaulted to less secure options: NTLM authentication rather than Kerberos, and unencrypted communication rather than SSL. One of Steve's recommendations did concern me: to participate in the Customer Experience Improvement Program. While I've got no evidence to support it, I'm always nervous about passing debugging and troubleshooting information to a third party, as you never know what information might get leaked with it.
The second session of the night was Silverlight, covered by Mike Taulty (and it should be worth pointing out that this session came after a decent quantity of freely provided pizza and sandwiches). As with SharePoint I had no prior experience of Silverlight other than hearing various people complain about it via Twitter, so I found the talk really informative. For those that don't know, Silverlight is designed to be a cross-browser and cross-platform 'unified framework' (providing your browser/platform supports Silverlight…).
From a developer and designer perspective Silverlight must be great; the built-in functionality provides access to capabilities that I could only dream about when I was looking at becoming a dev in a past life. The integration between Visual Studio for coding and Blend for design was equally impressive.
Again I viewed the talk's content from a security perspective. Mike pressed on the fact that Silverlight runs within a tightly controlled sandbox to limit functionality and provide added security. For example, the code can make HTTP[S] connections out from the browsing machine, but is limited to the same origin as the code, or to cross-domain targets provided the target explicitly allows cross-domain access from that origin.
However, Silverlight applications can be installed locally in 'Trusted' mode, which reduces the restrictions in place by the sandbox. Before installing the app, the sandbox will inform the user that the app is to be 'trusted' and warn of the implications. This is great, as we all know users read these things before clicking next when trying to get to the promised videos of cute kitties… I did query this point with Mike after the presentation and he, rightly, pointed out that any application installed locally would have the ability to access all the resources that aren't protected when in trusted mode. I agree with Mike, but I'm concerned that the average Joe User will think 'OK, it's only a browser plugin' (not that this is the case anyway), where they might be more cautious if a website asked them to install a full-blown application. Users have been conditioned to install plugins to provide the web experience they expect (Flash etc.).
The final talk was actually the one I was most interested in at the start of the night, and was presented by James O'Neil. In the end I was disappointed; unlike the other topics I didn't get too much that was new to me from the session, I'm guessing because virtualisation solutions are something I encounter on a regular basis. The only real take-away from the talk was that James gets my Urgh! award for using the phrase 'private cloud infrastructure' without cracking a smile at the same time.
The night was great, so a big thanks to the guys that set up and ran the event (with costs coming out of their own pockets too). The event was free, the topics and speakers were high quality, and to top it off there were some fairly impressive giveaways as well, from the usual stickers and pens to boxed Win7 Ultimate packs.
If you’re a dev or IT professional, I’d definitely recommend getting down to the next event.
— Andrew Waite
Honeyd is a small daemon that creates virtual hosts on a network. The hosts can be configured to run arbitrary services, and their personality can be adapted so that they appear to be running certain operating systems. Honeyd enables a single host to claim multiple addresses – I have tested up to 65536 – on a LAN for network simulation. Honeyd improves cyber security by providing mechanisms for threat detection and assessment. It also deters adversaries by hiding real systems in the middle of virtual systems.
My initial experience getting HoneyD running was frustrating to say the least. Going with Debian to provide a stable OS, the install process should have been as simple as apt-get install honeyd. While keeping up to date with a Debian system can sometimes be difficult, the honeyd package is as current as it gets with version 1.5c.
For reasons that I can't explain, this didn't work first (or second) time, so I reverted to compiling from source. The process could have been worse; the only real stumbling block I hit was a naming clash within Debian's package names. HoneyD requires the 'dumb network' package libdnet, but if you apt-get install libdnet you get Debian's DECnet libraries. On Debian and derivatives you need libdumbnet1.
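For reference, the dependency install on my Debian box looked roughly like the below. Treat the package names as a sketch rather than gospel; they can differ between releases, but the libdumbnet point stands:

```shell
# Build dependencies for compiling honeyd from source on Debian.
# Note libdumbnet-dev, NOT libdnet -- the latter is Debian's DECnet libraries.
sudo apt-get install build-essential libdumbnet-dev libpcap-dev \
    libpcre3-dev libedit-dev libevent-dev
```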
HoneyD's configuration can get very complex depending on what you are looking to achieve. Thankfully a sample configuration is provided that includes examples of some of the most common configuration directives. Once you've got a config sorted (the sample works perfectly for testing), starting honeyd is simple: honeyd -f /path/to/config-file. There are plenty of other runtime options available, but I haven't had time to fully experiment with all of them; check the honeyd man pages for more information.
As well as emulating hosts and network topologies, HoneyD can be configured to run what it terms 'subsystems'. Basically these are scripts that can be used to provide additional functionality on the emulated systems for an attacker/user to interact with. Some basic (and not so basic) subsystems are included with HoneyD, and some additional service emulation scripts that have been contributed to the HoneyD project can be found here. As part of the configuration, HoneyD can also pass specified IPs/ports through to live systems, either a more in-depth/specialised honeypot system or a full 'real' system, combining low- and high-interaction honeypots.
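As a taster, a minimal config along the lines of the provided sample might look something like the below; the IP addresses and script path are placeholders for illustration, not values from my own setup:

```
# Create a template pretending to be a Windows host
create windows
set windows personality "Microsoft Windows XP Professional SP1"
set windows default tcp action reset

# Attach a service emulation script to port 80
add windows tcp port 80 "sh scripts/web.sh"

# Pass RDP straight through to a real (or high-interaction) system
add windows tcp port 3389 proxy 192.168.0.50:3389

# Bind the template to an unused IP on the LAN
bind 192.168.0.201 windows
```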
I'm still barely scratching the surface of what HoneyD is capable of, and haven't yet transferred my system to a live network to generate any statistics, but from my reading, research and experimentation I have high expectations.
— Andrew Waite
It took longer than I had wanted, but I have just finished reading through Virtual Honeypots: From Botnet Tracking to Intrusion Detection. The book is written by Niels Provos, creator of HoneyD (among other things) and Thorsten Holz.
Given the authors I had high expectations when the delivery came through; thankfully it didn't disappoint. Unsurprisingly the first chapter provides an overview of honeypotting in general, covering high- and low-interaction setups over both physical and virtual systems; additionally the chapter introduces some core tools for your toolkit.
The next two chapters cover high- and low-interaction honeypots respectively. I really liked the coverage of hi-int honeypots; it was this idea that drew me towards honeypots in the first place, as the idea of watching an attacker carefully exploit and utilise a dummy system always appealed. The material gives a great foundation for starting with a high-interaction honeypot, along with some best-practice advice for how to do so securely and safely. While I have read many reports and case studies that involved honeypots, I have had difficulty finding in-depth setup information and advice, leaving high-interaction honeypots feeling a bit like black magic. The authors' information cuts through all the mystery, allowing the reader to get a firm understanding of the topic. The discussion of low-interaction honeypots was equally well covered, although as I've spent some time with low-int systems in the past, this chapter was more of a refresher than a source of new information as the hi-int section had been.
Given that Niels is one of the book's authors, it shouldn't be too much of a surprise that HoneyD is covered in depth. For me, this was the most useful section of the book. As honeyd is one of the older publicly available low-int systems, I had mistakenly assumed that one of the newer systems would provide more functionality; after reading through the material and regularly going 'ooh' out loud, honeyd is now firmly at the top of my 'need to implement' list.
The book also covers honeypot systems that are designed for specialised purposes. For malware collection, the authors mainly focus on Nepenthes, but also touch on Honeytrap among others. This was the only section that I found to be slightly dated, as Nepenthes' newly released spiritual successor Dionaea was not covered. But the fundamental material is very well explained, Nepenthes is still a very functional system, and given the inherent similarities between Nepenthes and Dionaea the material remains useful regardless, so the chapter still provides an excellent foundation if you're wanting to start collecting malware.
An interesting chapter covers the idea of hybrid honeypots: using low-int systems to monitor and handle the bulk of traffic, while forwarding anything unknown or unusual to a high-int system for more in-depth analysis of the attack traffic. Unfortunately at this point openly available hybrid systems are limited, with the more functional systems being kept closed by the researchers and companies that build them (though I have just found Honeybrid, which I wasn't aware of, while looking for a good link for hybrid systems. Looks promising…)
The last chapter covering honeypot systems looks at client-side honeypots, designed to detect client-side attacks. As client-side attacks have become more prominent over the last few years this is an evolving area of research, but as the attack vector is newer than traditional attacks, the honeypot systems aren't as mature as their more traditional counterparts. This isn't an area that I'm experienced with, so I can't comment too much on the systems detailed by the authors, but they cover several honeyclient systems in great detail, and I'm intending to use the chapter as a foundation for implementing the systems and techniques proposed.
As well as detailing the use of honeypot systems, the authors also provide a brilliant discussion of ways that attackers (or users) can determine that they are interacting with a honeypot system. While the detailed descriptions of ways to identify a honeypot system are interesting and important from a theoretical standpoint, from previous experience running honeypot systems there are more than enough attackers and automated threats that blindly assume the system is legitimate for honeypots to still provide plenty of benefit to their administrators.
The book finishes up with a fairly detailed discussion of both tracking botnets using the information gathered from honeypot systems (this chapter is available as a sample PDF download thanks to InformIT, here) and analysing the malware sample reports provided by CWSandbox. While both chapters are useful in the context of honeypot systems, I didn't think there was enough room to provide anything beyond a general overview of the topics; if you were interested enough in the topic to purchase the book, you will likely already have a similar level of understanding to the information provided.
There is also a chapter covering case studies of actual incidents that were captured by the book's authors during their research. I've always been a fan of case studies, so I enjoyed this chapter; it definitely helps whet the appetite to implement the technologies covered by the book.
Overall I really enjoyed the book; if you're interested in systems and network monitoring, honeypots or malware, then this book should probably be on your bookshelf.