The reaction most people have when you point out that people are naive enough to post pictures of credit and debit cards online is to laugh; surely no one could be that unaware of the risks. But the practice has become so commonplace that a number of Twitter accounts have been set up to automatically identify and repost the images.
Some, like @CancelThatCard (http://cancelthat.cc/), attempt to show the posters the error of their ways, while others merely highlight the posts and request that people “Please quit posting pictures of your debit cards”.
As an example (and as proof for those that don’t believe me), here is the latest image in the @needadebitcard feed at the time of writing:
As a side note, it looks like Twitter is stamping down on the practice of highlighting these posts; the last message posted by @CancelThatCard on April 14th indicates that the service has essentially been censored. I hope Twitter reverses this: providing security information to end-users is not something that should be prevented.
I’ve been following both accounts for some time; at first my reaction was the one I’ve described above, having a laugh at the expense of those who don’t recognise the security implications of their actions. As time went by I started messaging the accounts posting their cards to further highlight the error; this didn’t have the impact I was expecting. Instead of thanks for providing free advice, it more regularly resulted in insults, abuse and flat denial that there was any risk at all.
Recently I came across an image of a card where the owner had attempted to obscure part of the card number and name; smart. Not so smart was that it was the first 5 digits of the 16-digit card number that were obscured. It’s little known (and wasn’t to me until I started following these cards in more depth) that the first 6 digits don’t identify the account or card holder, but the bank that issued the card. In this case the poster was helpful enough to identify the card as a personalised BarclayCard. A quick Google search led to this page, which, combined with the visible 6th digit, meant the missing digits could only be one of two possibilities; obscuring part of the card reduced the search space from ~10k candidate numbers to just two, effectively posting the entire 16 digits online.
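To put numbers on how little the obscuring achieved, here’s a rough sketch using an entirely made-up card number (every digit below is hypothetical; no real card appears). Before any issuer knowledge is applied at all, the Luhn checksum that card numbers must satisfy already cuts the 100,000 possible hidden prefixes down to ~10k:

```shell
#!/usr/bin/env bash
# Sketch of the entropy reduction described above. Every digit here is
# made up for illustration; no real card number appears.
visible="45678901234"     # digits 6 to 16 of the card, read off the photo

# Luhn check: counting from the right, every second digit is doubled
# (minus 9 if the result exceeds 9) and the total must divide by 10.
# Table of doubled-digit contributions: d -> 2d, or 2d-9 when 2d > 9.
dbl=(0 2 4 6 8 1 3 5 7 9)

# The 11 visible digits never change, so total their contribution once.
# Position i counts from the right-hand end of the card number.
vsum=0
for ((i = 0; i < 11; i++)); do
  d=${visible:10-i:1}
  if (( i % 2 == 1 )); then
    (( vsum += dbl[d] ))
  else
    (( vsum += d ))
  fi
done

# Brute-force all 100,000 possible hidden 5-digit prefixes. Within the
# 16-digit number the prefix sits at positions 11-15 from the right, so
# its 1st, 3rd and 5th digits (left to right) land on doubled positions.
count=0
for ((p = 0; p < 100000; p++)); do
  d5=$(( p % 10 ))        # 5th prefix digit, doubled position
  d4=$(( p / 10 % 10 ))
  d3=$(( p / 100 % 10 ))  # 3rd prefix digit, doubled position
  d2=$(( p / 1000 % 10 ))
  d1=$(( p / 10000 ))     # 1st prefix digit, doubled position
  s=$(( vsum + dbl[d1] + d2 + dbl[d3] + d4 + dbl[d5] ))
  if (( s % 10 == 0 )); then
    count=$(( count + 1 ))
  fi
done
echo "Luhn-valid candidates: ${count} of 100000"
```

From there, checking the two candidate BINs for the identified card type against the visible 6th digit is what collapses the remaining pool to just two full numbers.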
In the above example, which is far from uncommon, when I suggested the owner may want to remove the image and cancel the card the response was one of confusion, with no understanding of the risk. Despite further information and links, the image is still online (I have no way of knowing if the card has been cancelled).
To end I’ll echo the plea from @needadebitcard: Please quit posting pictures of your cards, people.
– Andrew Waite
P.S. I’ve not identified any of the examples directly in this post, but I’ve also not cleared any of the conversations from my Twitter time-line if anyone is interested enough to search. If people post pictures of their account details online, and then don’t remove them once several people highlight the stupidity, then me deleting a couple of Twitter posts isn’t going to improve their security.
This week has been an interesting one for followers of the info-sec arena. On Tuesday Microsoft released a patch and security bulletin, MS12-020, for a critical flaw in the Remote Desktop Protocol, allowing remote code execution without the need to authenticate to the target system first. Since the patch was released the good, the bad and the ugly of infosec have been attempting to reverse engineer it to develop a functional exploit, and over the last 24hrs PoC code has started to become publicly available.
As a result, the SANS Internet Storm Center has raised their InfoCon threat level to Yellow. This is because weaponised versions of functional exploit code are expected over the coming days and weeks, with past experience making it likely that the exploit will be linked to worm capabilities for automated propagation.
So, the sky is falling, right? Not as much as the furore would have you believe. While this does have the potential to become a well known, well exploited and long running bug, it is defensible with solid practices in play:
- Turn it off: If you don’t need RDP (or any port/service for that matter), turn it off. This reduces the attack surface against known or unknown weaknesses in the service.
- Patch it: Microsoft released a patch for the weakness on Tuesday, BEFORE exploit code was widely publicly available. You should be patching systems as standard operations; if you’re not, now would be a good time to catch up and remove the oversight.
- Limit access: If you can’t turn the service off because you need it, does it need to be available to the world? If not, restrict access to trusted source locations only, via either perimeter or host based firewalling (or both). It doesn’t remove the threat completely, but it should severely reduce the risk if you’re not accepting connections from any machine on the internet. Only allowing access to the port via a VPN connection would also reduce the ability of a malicious source to connect to the service.
- (Bonus Point) Logging: Make sure you keep a close eye on your system logs; if you do get compromised, the damage could be limited if you can identify and respond to the breach promptly.
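To sketch the ‘limit access’ point on a Linux firewall (perimeter or host), a couple of iptables rules are enough; the trusted range 203.0.113.0/24 below is a placeholder assumption, swap in your own:

```shell
# Accept RDP (TCP 3389) only from the trusted source range
# (203.0.113.0/24 is a placeholder; substitute your own network).
iptables -A INPUT -p tcp --dport 3389 -s 203.0.113.0/24 -j ACCEPT

# Drop RDP connection attempts from everywhere else.
iptables -A INPUT -p tcp --dport 3389 -j DROP
```

On a perimeter device filtering traffic for hosts behind it, the same pair of rules belongs in the FORWARD chain rather than INPUT.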
I’ve enjoyed watching the action this week, and the fallout has the potential to be more interesting still; but with a few easy and common steps to secure your environment against the threat, you should be able to prevent your systems from becoming part of a large statistic of low-hanging fruit.
I was recently asked about the network configuration I use for my honeyd sensor. I thought I’d already written about this, so initially went to find the article on honeyd configuration; but my memory was wrong and the original post only covered configuring the guest systems, not the honeyd host itself. So, as I now have a pretty(ish) network diagram showing my setup, I may as well correct the earlier omission.
<DISCLAIMER: This may not be the best network design for running honeyd, this is merely how my environment is configured and it works for me as a research platform. As usual, your mileage may vary, especially if your use-case differs from my own>
As can be seen, the design has three distinct network segments:
- Publicly route-able IPs
- Internal network for honeypot hosts
- Virtual network for honeyd guest systems. These IP addresses sit on the loopback interface of the host, with a static route on the firewall to pass all virtual traffic to the honeyd host.
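For the curious, the host-side of that virtual segment boils down to two commands. The addresses below are placeholders (documentation ranges), not my live addressing:

```shell
# On the honeyd host: bind the virtual guest range to the loopback
# interface (192.0.2.0/24 is a placeholder range).
ip addr add 192.0.2.1/24 dev lo

# On the firewall: a static route sending all traffic for the virtual
# range to the honeyd host's internal address (10.0.0.5 is a placeholder).
ip route add 192.0.2.0/24 via 10.0.0.5
```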
Using a perimeter firewall with NAT/PAT capabilities allows easy switching between emulated systems and services if your public IP resources are limited; a large network of guests can be configured in advance and left static, then a quick firewall change is all that is required to expose different systems to the world.
Additionally, as much as honeypot systems are designed to be compromised and collect information on malicious attacks (or perhaps more correctly, because of this), low-interaction systems like honeyd are designed to avoid full compromise. If something goes wrong and the host system gets fully compromised, a (sufficiently configured) perimeter firewall provides some control of outgoing traffic, limiting the attacker’s options for using the honeypot sensor to attack other systems.
Not much to it really; if you use a different setup and/or can suggest ways to improve mine, let me know. I’m always looking to improve my systems where possible.
– Andrew Waite
It’s a while since I’ve found time to add a new tool to my malware environment, so when an ISC post highlighted a new update to Cuckoo Sandbox it served as a good reminder that I hadn’t got around to trying Cuckoo; something that has now changed. For those that don’t know, from its own site:
[...] Cuckoo Sandbox is a malware analysis system.
Its goal is to provide you a way to automatically analyze files and collect comprehensive results describing and outlining what such files do while executed inside an isolated environment.
It’s mostly used to analyze Windows executables, DLL files, PDF documents, Office documents, PHP scripts, Python scripts, Internet URLs and almost anything else you can imagine.
Considering Cuckoo is the combined product of several tools, mostly focused around VirtualBox, I found install and setup largely trouble free, mostly thanks to the detailed installation instructions in the tool’s online documentation. I only encountered a couple of snags; the first showed up as errors at start-up:
[2011-12-29 17:21:56,470] [Core.Init] INFO: Started.
[2011-12-29 17:21:56,686] [VirtualMachine.Check] INFO: Your VirtualBox version is: "4.1.2_Ubuntu", good!
[2011-12-29 17:21:56,688] [Core.Init] INFO: Populating virtual machines pool...
[2011-12-29 17:21:56,703] [VirtualMachine] ERROR: Virtual machine "cuckoo1" not found: 0x80bb0001 (Could not find a registered machine named 'cuckoo1')
[2011-12-29 17:21:56,704] [VirtualMachine.Infos] ERROR: No virtual machine handle.
[2011-12-29 17:21:56,705] [Core.Init] CRITICAL: None of the virtual machines are available. Please review the errors.
The online documentation specifies creating a dedicated user for the cuckoo process. Sound advice, but if you create your virtual guest machines under a different user (like I did, under a standard user account), then the cuckoo process cannot interact with the VirtualBox guests. Either changing ownership of Cuckoo, or specifically creating the guest VMs as the cuckoo user, will solve the issue.
The last problem encountered was Cuckoo’s database: if it doesn’t exist when the process starts, a blank database will be created. Which (obviously, in hindsight) will fail if the running user doesn’t have permission to write to Cuckoo’s base directory.
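For reference, both snags come down to ownership and per-user VM registration. Assuming the install lives in /opt/cuckoo and the dedicated account is called cuckoo (both assumptions from my setup), something like this sorts it out:

```shell
# Hand the install tree to the dedicated user, so the cuckoo process can
# create and write its database under the base directory
# (/opt/cuckoo is an assumed path; adjust to suit).
sudo chown -R cuckoo:cuckoo /opt/cuckoo

# VirtualBox registrations are per-user: create (or re-register) the
# analysis guests as the cuckoo user, then confirm they are visible.
sudo -u cuckoo VBoxManage list vms
```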
With those problems out of the way, Cuckoo runs quite nicely, with three main parts. The cuckoo.py script does the bulk of the heavy lifting and needs to be running before doing anything else. If all is well it should run through some initialisation and wait for further instructions:
/opt/cuckoo $ ./cuckoo.py
____ _ _ ____| | _ ___ ___
/ ___) | | |/ ___) |_/ ) _ \ / _ \
( (___| |_| ( (___| _ ( |_| | |_| |
\____)____/ \____)_| \_)___/ \___/ v0.3.1
Copyright (C) 2010-2011
[2011-12-29 20:27:17,120] [Core.Init] INFO: Started.
[2011-12-29 20:27:17,719] [VirtualMachine.Check] INFO: Your VirtualBox version is: "4.1.2_Ubuntu", good!
[2011-12-29 20:27:17,720] [Core.Init] INFO: Populating virtual machines pool...
[2011-12-29 20:27:17,779] [VirtualMachine.Infos] INFO: Virtual machine "cuckoo1" information:
[2011-12-29 20:27:17,780] [VirtualMachine.Infos] INFO: \_| Name: cuckoo1
[2011-12-29 20:27:17,781] [VirtualMachine.Infos] INFO: | ID: 9a9dddd8-f7d6-40ea-aed3-9a0dc0f30e79
[2011-12-29 20:27:17,782] [VirtualMachine.Infos] INFO: | CPU Count: 1 Core/s
[2011-12-29 20:27:17,783] [VirtualMachine.Infos] INFO: | Memory Size: 512 MB
[2011-12-29 20:27:17,783] [VirtualMachine.Infos] INFO: | VRAM Size: 16 MB
[2011-12-29 20:27:17,784] [VirtualMachine.Infos] INFO: | State: Saved
[2011-12-29 20:27:17,785] [VirtualMachine.Infos] INFO: | Current Snapshot: "cuckoo1_base"
[2011-12-29 20:27:17,785] [VirtualMachine.Infos] INFO: | MAC Address: 08:00:27:BD:9C:4F
[2011-12-29 20:27:17,786] [Core.Init] INFO: 1 virtual machine/s added to pool.
The submit.py script is one of the ways of getting Cuckoo to analyse files:
python submit.py --help
Usage: submit.py [options] filepath
-h, --help show this help message and exit
-t TIMEOUT, --timeout=TIMEOUT Specify analysis execution time limit
-p PACKAGE, --package=PACKAGE Specify custom analysis package name
-r PRIORITY, --priority=PRIORITY Specify an analysis priority expressed in integer
-c CUSTOM, --custom=CUSTOM Specify any custom value to be passed to postprocessing
-d, --download Specify if the target is an URL to be downloaded
-u, --url Specify if the target is an URL to be analyzed
-m MACHINE, --machine=MACHINE Specify a virtual machine you want to specifically use for this analysis
Most of the options above are self-explanatory, just make sure to select the relevant analysis package depending on what you’re working with; possibilities are listed here.
Finally, web.py provides a web interface, bound to localhost:8080, for reviewing the results of all analyses performed by Cuckoo.
I’d like to thank the team that developed, and continues to develop, the Cuckoo Sandbox. I look forward to more automated results, and hopefully to reaching a point where I’m able to give back to the project; until then I’d recommend getting your hands dirty. From my initial experiments, I doubt you’ll be disappointed; but if you won’t take my word for it, watch Cuckoo in action analysing Zeus here.
– Andrew Waite
Written by journalist Kevin Poulsen (of wired.com’s Threat Level blog), Kingpin spans the hacking, cracking and carding underworld over several decades. The narrative covers the life and activities of Max Vision: computer consultant, key member of the carding underworld and ultimately convicted criminal.
Given the timescales involved, Kingpin covers many years, and several of Max’s ‘projects’ made national headlines at the time. Some, like the Pentagon being hacked via a weakness in BIND, were folklore by the time I personally entered the infosec profession; others, like the ongoing wars and takedowns between various carder forums, were more recent and featured heavily in the press at the time.
What I found fascinating throughout was that I was unaware that many of these, on the surface, unconnected stories were linked to the same individual; several more sit on the legal/whitehat side of the community, including tools I had used and experimented with prior to reading Kingpin. It’s usually interesting to get some of the backstory behind the tools in this industry, but that’s especially the case here.
Equally, I found the portrayal of Max’s early years intriguing. Reading Kingpin I had the feeling (rightly or wrongly) that the outcome of the story could have been different had a couple of actions and/or decisions gone the other way, leaving Max as an asset to the infosec community rather than running one of the largest criminal forums on the net. I can’t help wondering if Max could have ended up being a positive force in the infosec community, or if those that are could have gone the same route had circumstances been slightly different.
From the right side of the law, I was fascinated by the details of Special Agent Mularski’s undercover work as Master Splyntr. As with a lot of the book’s content, I was familiar with the impact Splyntr had had within the carding community from several press articles at the time, but hadn’t dug into it in much depth. Knowing more about the time and dedication required by one man, work that ultimately led to many arrests, I’d like to make an offer to Agent Mularski: if we’re ever in the same place, introduce yourself and the drinks are on me (and hopefully the war-stories are on you).
If you’ve got any interest in information security, or crime in general, I’d strongly recommend that you put a few hours aside to read Kingpin. If you’re disappointed after you finish, I’ll be surprised.
Written by Microsoft’s Mark Russinovich, Zero Day focuses on the actions of a security consultant who starts a job for a client whose systems have been infected with unknown malware and taken out of action. With the business losing money and circling the drain whilst its systems are down, the characters rapidly find themselves caught up in a plot far larger than the one they originally signed up for.
The plot starts out slow, then rapidly expands to cover a full gamut of topics, from skiddies in IRC channels and Russian hackers for hire, to corrupt government officials and Al Qaeda terrorist plots (even Bin Laden turns up in person). Despite the Hollywood style plot elements, Russinovich keeps the technical aspects grounded in reality, even to the level that the odd code segment included can be reviewed by a (semi)proficient reader to determine the next plot arc before the characters reach the same conclusions.
The overall story, and the culture the characters operate in, clearly show the difference between an author with a technical background and plenty of real world experience with the subject matter, and a proficient author who has had expert assistance to get the technical aspects of a story to a plausible level; it makes a very welcome change in this growing area of fiction. Russinovich’s experience working with government and industry parties as part of the recent clampdown on botnets is a clear influence on the Zero Day story arc. Despite this being Russinovich’s first novel, I found it surprisingly well written, with believable characters and a plot that I became emotionally invested in (and, without spoilers, I cheered inside when a certain character got what I’d felt from first introduction they deserved).
If you’ve got any interest in information security or computer/network administration, or just good sci-fi, I’d strongly recommend picking up a copy of Zero Day. It may be shorter than I would have liked (only because I want MORE), but I thoroughly enjoyed the time spent in its created scenario. Hopefully it will serve as a warning of what could happen, rather than a premonition of an actual occurrence; unfortunately it’s likely that those with the true power to stop events similar to the book’s plot won’t be interested in the story summary and will miss the warning.
– Andrew Waite
Like most techies I get the job of fixing and maintaining relatives’ PCs. After fixing whatever is broken I go through some common clean-up and install routines, both to help the system run faster and to extend the period before I’m called back, and I’ve used AVG Free as part of this for many years to keep costs down for my users.
During a recent job I came across a new (I’m assuming; I hadn’t noticed it before) feature of AVG Free, the PC Analyzer component. Being the curious sort I hit the go button; the scan ran for around 5 minutes and I was presented with this:
Ouch. I was surprised by the number of errors, as this is a machine I keep a regular eye on, and in some cases use myself (it’s the missus’). Time to panic? Let’s see:
- Registry errors: Errors affect system stability: (125)
That doesn’t sound good. Checking the ‘Details…’ link presented me with a long list of Registry keys, which to a standard end-user would result in turning on BofH’s Dummy Mode. In reality, it found a lot of keys that set the ‘open with’ right-click function depending on file extension. ‘Affect system stability’? Not so much, and I find the links useful enough that I’ve previously researched how to add my own…
- Junk Files: These files take up disk space: (599)
Again checking the details: a long list of randomly named files. In the temporary folder. All ~600 took up a total of less than 300MB, and the machine has more than 200GB free. Something to correct come the next house cleaning session, but not really a problem.
- Fragmentation: Reduces disk access speed
In fairness to the tool, it did come back clean and we know that fragmentation can be an issue. But that’s why every machine I’ve ever used has come with a defrag utility, as standard, for free. (OK, my BBC Micro B didn’t, but then it also had a cassette deck rather than a hard disk).
- Broken Shortcuts: Reduces explorer browsing speed(42)
Ok, so I forgot a folder of shortcuts to junk that came pre-installed with the system. I’d deleted the junk, forgot the shortcuts. Thanks for the reminder; fixed.
Plenty of ‘problems’ highlighted, so time to run out and drop £25 for an annual subscription to the clean-up tool? Nope. Ignoring the fact that many of these issues are system settings that actually aid the end user, the remaining issues won’t have any negative impact that the end-user will notice.
In my own opinion, AVG is taking a leaf out of the fake AV scams’ book and scaring non-techies into parting with their hard earned coin in a bid to keep the computer running and bank details away from the scary hackers that the nice lady on the news keeps talking about. Presenting a list of (to most) meaningless information and saying it’s bad is exactly the tactic I encountered with cold call scammers earlier in the year.
As a final side note, I’ve lost two of my ‘users’ this year to AVG simply because when the AVG free license I’d installed expired, they couldn’t find a link to download the latest free version, only MANY links to the paid version. As my users are nice people (latest ‘victim’ was my grandfather), they decided themselves that it was better for them to pay the small fee than have to call me and interrupt my life.
Can anyone recommend a free AV suite that doesn’t con the unwitting into unnecessary purchases to perform a clean-up that could be done manually with around 5 minutes and half a clue? AVG Free is a great tool, and for free I shouldn’t really complain; but what to make of sales tactics that rely on selling things people don’t need to those that don’t know any better?
After a few weeks running my daily Kippo review script I’ve noticed that, whilst I’m still receiving several logins per day, it’s rare for a connection to actually interact with my emulated system. (For those new here, Kippo is a medium interaction honeypot emulating an SSH daemon; get started here.) So I started trying to investigate what was causing the trend.
One of Kippo’s features is the password database. Basically, once an intruder gains access to the shell, if they try to change the password or add a different account the system adds the new password to the list of those allowed. This then lets later connections log into the shell with the new password. Kippo ships with a small utility script to interact with the password database:
@kippo01:/opt/kippo-svn/utils$ ./passdb.py
Usage: passdb.py <pass.db> <add|remove|list> [password]
My pass.db file contains 26 entries added by malicious ‘users’; I’m still analysing the contents in detail, but it looks like the Bad Guys(tm) are paying attention to user education 101 and using long, complex passwords.
Using the passwords used to log into the system, I’ve found a new (to me) way to link disparate logins. For example, the query below linked connections spanning two months, originating from multiple source IP addresses across three different continents (according to WHOIS records).
Source IPs for same user (based on pass)
SELECT sessions.id AS Session, sessions.ip AS Source, auth.password AS Password, auth.timestamp AS Time FROM sessions, auth WHERE sessions.id = auth.session AND auth.success = 1 AND auth.password = 'mariusbogdan';
Similarly, I looked for a connection between multiple successful logins from the same source IP address. The query below provided a list of repeat offenders:
Successful logins from same source
SELECT COUNT(sessions.ip) AS Num, sessions.ip AS Source FROM sessions, auth WHERE auth.success = 1 AND auth.session = sessions.id GROUP BY sessions.ip ORDER BY COUNT(sessions.ip) desc LIMIT 25;
My summary from this is that Kippo receives a lower level of ‘interesting’ connections the longer the system is operational, as attackers log in to check that they’ve maintained access to an ’0wned’ resource, without actually utilising it. I’m intending to clear my pass.db to remove existing access; hopefully this will bring back more interesting connections, and I’m also curious to see if any of my current tenants return from either the same source location(s) and/or re-using passwords (proving me wrong with my previous comment about user education).
Towards the end of last year I spent a few hours trialling SSH tunnels. I knew how the process worked but hadn’t had much cause to use it in anger; so my lab got some use instead, and a post was written covering the basics: SSH port forwarding 101.
Since I now know how to quickly and successfully implement a tunnel, it turns out that I had plenty of cause to use tunnels in the past; I just didn’t know SSH tunnels were the right tool for the job. A couple of recent conversations have made me realise others don’t always know the flexibility of tunnels either, so I wanted to describe a common scenario to highlight their usefulness.
Above is a fairly common setup. You’ve got an internal resource (for example an intranet wiki for documentation), protected by a firewall that only allows access from trusted locations. Under normal circumstances all staff can access the resource without problems, and any malicious sources (human or automated) can’t access the service.
This works well, until someone needs access and they aren’t at one of the trusted locations (we’re assuming this is an unusual problem and remote access solutions aren’t in place). In a lot of environments SSH is a ‘trusted’ system management solution and is world accessible (and hopefully secured well enough to keep the barbarians from the door, but that’s for different posts).
The answer? SSH tunnels (but you guessed that). Tunnel the server’s HTTP (or whatever) service back to your local system, and then connect locally. Using the syntax I discussed previously, from a ’nix shell you can use this command:
ssh -L 8000:127.0.0.1:80 ssh-server.domain.com
This makes an SSH connection to the server (ssh-server.domain.com), tunnelling the HTTP service running locally on the server on port 80 (127.0.0.1:80) and binding it to your machine’s TCP port 8000. Now you can connect to the service by typing 127.0.0.1:8000 into a browser, thus traversing the firewall’s source IP restrictions.
If you’re living in a Windows world, then the PuTTY equivalent configuration will be:
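If you’d rather script it than click through the GUI, PuTTY’s command-line companion plink accepts the same -L syntax as the OpenSSH client:

```shell
# Same tunnel as above, from a Windows command prompt: bind the server's
# local port 80 to port 8000 on this machine.
plink -L 8000:127.0.0.1:80 ssh-server.domain.com
```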
Next time you’re sat in the coffee shop on a Sunday morning and the boss rings with an ‘emergency’, are you sure that you can’t access the resources you need from where you are? If you can, that coffee (and extra slice of cake) just became expense-able.
Tuesday started fine: a train down to the capital and a chance to meet up with the London work team. So far so good, until a colleague suggested a ‘quiet’ drink after work. It ended up not being too quiet after all.
With Wednesday starting off with ‘why?….’, I found some energy and headed for Security BSides London. As I’d already reconnoitred the location on Tuesday, getting there was a breeze; only to find the door locked. Javvad Malik to the rescue: he arrived at the same time and managed to call one of the organisers to let us in. After brief introductions all round I met Soraya Iggy in person for the first time; absolutely nothing like I was expecting, but great in every way. After receiving my goodie bag (and getting repeated grief from Iggy to change into the con shirt) I enjoyed some good geek chat whilst watching the venue fill.
After the official opening of the event, I headed upstairs to track two, which started with Aaron Finnon discussing DNS tunnelling techniques. I was looking forward to this talk as I’d got half of the information over a drink after Aaron gave his famous SSL talk when OWASP Leeds travelled to Newcastle. My main takeaway was that, with some relatively simple tools, it can be surprisingly easy to bypass most captive wireless portals if they aren’t sufficiently tying down egress traffic. First on my to-do list of ‘I wonder what happens if you try this in my environment?’.
The second session was David Rook and Chris Wysopal, discussing ‘Jedi Mind’ tricks for building security programs. Having watched recordings of both presenters from other events I was looking forward to getting the live experience, and neither disappointed. The presentation was great, and I took a lot away on how to discuss security issues with non-infosec people, and how to talk about the problems in business terms to get buy-in to effect real change in an organisation. I was somewhat surprised, as this started a trend of the event: my favourite presentations were non-technical in nature.
The third session was one that I’d heard a few people dismiss before the event as being a bit lame. I’d already picked it out as my preferred session for this timeslot (it was a tough call; the other track was Justin Clark discussing web app attacks, but it would have overrun into the next talk I wanted to see). I’m glad I didn’t let the naysayers dissuade me: Ellen Moar and Colin McLean did a great job demonstrating just how simple it is for anyone with basic computer knowledge (script kiddie) to cut and paste their way past defensive countermeasures (AV). The content wasn’t anything groundbreaking (which is why I think some weren’t keen), but I think it’s the first time I’ve actually seen someone ‘prove’ what we all accept as gospel. Scary stuff.
The final session of the morning was Xavier Mertens discussing logging and event management. Not the most thrilling of topics I’ll admit, but it’s something that so few organisations seem to get right that I was interested to find out if there were any ‘better’ ways that could improve the process. Not only are there apparently better ways, but there are also free better ways, so I’m going to take a closer look at OSSEC.
After lunch Steve Lord provided an ‘interesting’ look into the different types/levels of pentester and what it means to be in the industry. The talk received a lot of laughs, but in hindsight I wish I’d seen a talk with more technical content. For me, BSides was for education and networking; I’ll leave comedy to the comedians.
The next talk was better: Wicked Clown, expanding on his BruCON lightning talk, showing how to break out of a restricted RDP session. This was a great presentation, and was another attack to add to my ‘what if’ to-do list. More importantly, he also provided a simple fix to prevent the attack vector; considering it’s a single checkbox, and the workaround breaks how most would ‘expect’ the service to behave, I’ll echo his confusion as to why Microsoft doesn’t have the checkbox ticked by default. Perhaps secure out of the box is too much to ask?
David Rook took to the stage again, this time alone, discussing static code analysis with Agnitio. I’d taken a look at Agnitio when David released it, but as I’m not much of a dev (see the utilities I release for proof…) I haven’t been able to try it in anger. If you’re interested, the talk slides are available on the Security Ninja blog. If the tool can reach the stated end of its road map, becoming the ‘Burp Suite of static analysis’, then it should be a fantastic tool.
The next talk I saw was Manuel’s demo of (reverse) engineering DRM within Android applications. I found the talk fascinating, mostly for how quickly Manuel was able to put the pieces of the puzzle together and bypass the protections put in place to prevent exactly what he was attempting. Whilst the presentation was good, it was one of those where you felt your comparative IQ drop as you saw black magic being wielded at the keyboard before your eyes.
The event finished with ‘Security YMCA’; words cannot describe this ‘experience’ so I won’t attempt to, and will leave you with this YouTube video. (WARNING: once seen, cannot be unseen.) Unfortunately trying to hide at the back didn’t help in the end, so I must apologise to Ellen, as I managed to say thanks for an interesting talk by subjecting her to my ‘singing’ attempts. To ensure the guilty aren’t protected, those at the front assaulting your senses are:
we don’t have a solution for the iPhone, as it’s a secure platform why bother?