Cowrie SSH Honeypot – AWS EC2 build script

Happy New Year all!

Whilst eating FAR too much turkey and chocolate over the festive break, I’ve managed to progress a couple of personal projects (between stints on the kids’ Scalextric track, thanks Santa). There are still tasks to do(*), but a working EC2 User Data script to automate deployment of a Cowrie honeypot has reached MVP stage.

# based on
apt -y update
DEBIAN_FRONTEND=noninteractive apt -y upgrade
apt -y install git python-virtualenv libssl-dev libffi-dev build-essential libpython3-dev python3-minimal authbind virtualenv
adduser --disabled-password --gecos "" cowrie
sudo -H -u cowrie /bin/bash -s << EOF >> /home/cowrie/heredoc.out
cd /home/cowrie/
git clone https://github.com/cowrie/cowrie
cd /home/cowrie/cowrie
virtualenv --python=python3 cowrie-env
source cowrie-env/bin/activate
pip install --upgrade pip
pip install --upgrade -r requirements.txt
bin/cowrie start
EOF
# runs with cowrie.cfg.dist - will need tuning to specific usecase
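Speaking of tuning: as a sketch (section and key names from Cowrie’s stock cowrie.cfg.dist; verify against your installed version), having Cowrie answer on the standard SSH port rather than T:2222 is a one-line override, relying on the authbind package installed above:

```ini
# cowrie.cfg - override the default tcp:2222 listener.
# Requires authbind to be configured for port 22, as cowrie runs unprivileged.
[ssh]
listen_endpoints = tcp:22:interface=0.0.0.0
```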

Latest version will be maintained here

*current items on back of beer mat project plan, which may or may not get completed, are:

  • Customise cowrie.cfg, to launch on standard ports rather than default SSH on T:2222 – Completed
  • Fix apt upgrade issue – Fixed courtesy of @ajhdock
  • Mount Cowrie logging, output, and downloads to EFS for persistence – configure Cowrie’s native S3 output module
  • Expand instance to Spot instance pool to lower costs and/or increase instance count
  • Ingest activity logs into $something for further analysis

Andrew Waite

[Project] AWS-Card-Spotter – Terraform deployment

tl;dr – this project can now be deployed automatically with a Terraform script

Last project update, I introduced my project to leverage AWS resources to identify if pictures uploaded to an S3 bucket might contain images of credit cards, and in turn need special handling under an organisation’s PCI DSS processes. And it worked!

But cranking serverless environments by hand limits the full power of cloud (so I’m told), so I looked at ways to automate the process; seemingly there are plenty of options, all with various pros and cons:

  • CloudFormation (CFN):
    • AWS’ own native platform for defining Infrastructure as Code (IaC). CFN is one of AWS’ ‘free’ feature sets (be wary, you still pay as usual for the services deployed BY CFN, but CFN itself isn’t chargeable).
  • Serverless Framework:
    • Well recommended, but looking at the pricing model, access to AWS resources such as SNS (used in this project’s architecture) crossed the threshold into the paid-for Pro version.
  • Terraform:
    • Fully featured 3rd party offering, able to target multiple cloud providers.
  • AWS Serverless Application Model (SAM):
    • AWS’ latest framework for deploying serverless architecture. Being honest, this is probably exactly what I needed for this project, but I’d finished the project before coming across SAM in my research (maybe a followup article required…..)

As you’ve probably guessed if you read the title or tl;dr of this post: for this project, Terraform nosed over the line; largely for the unscientific reason that the DevOps people I know use Terraform for their IaC projects – so I could call in support and troubleshooting if needed.

Terraform Build Script

At its heart, Terraform’s syntax provides a framework for defining what resources you want deployed to your given environment. Resource configuration can get complex if you need lots of customisation, but a basic resource definition follows a simple format:

  resource "$type" "$name" {
    parameter = "$value"
  }

For example, the definition for a new S3 bucket to upload images to for this project was:

resource "aws_s3_bucket" "bucket" {
  bucket = "infosanity-aws-card-spotter-tfbuild"
}

As you begin to build up more complex environments you’ll need to reference previously defined resources; this is achieved with the format:

  "${resource_type.resource_name.attribute}"

For example, when creating an S3 Bucket Notification resource to trigger the lambda code when a new file is uploaded, the ID of the bucket created above is required, like so:

resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = "${aws_s3_bucket.bucket.id}"
}
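For completeness, a sketch of how the full wiring might look (resource names here, such as the “card_spotter” lambda, are illustrative rather than taken from the project’s actual script): S3 needs permission to invoke the function, and the notification names the function to trigger:

```hcl
# Sketch - hypothetical resource names. S3 must be granted permission to
# invoke the Lambda before the bucket notification will function.
resource "aws_lambda_permission" "allow_bucket" {
  statement_id  = "AllowExecutionFromS3Bucket"
  action        = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.card_spotter.arn}"
  principal     = "s3.amazonaws.com"
  source_arn    = "${aws_s3_bucket.bucket.arn}"
}

resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = "${aws_s3_bucket.bucket.id}"

  lambda_function {
    lambda_function_arn = "${aws_lambda_function.card_spotter.arn}"
    events              = ["s3:ObjectCreated:*"]
  }
}
```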

Deployment Process

Terraform has a lot of functionality, much of which I’m yet to explore, but for basic usage I’ve found only 4 commands are required.

terraform init

I’m sure init does all manner of important activity in the background; for now I just know you need to initialise a Terraform project for the first time before doing anything else. Once that’s done, you can begin to build the infrastructure you’ve defined.

terraform plan

terraform plan essentially performs a dry-run of the build process, interpreting the required resources, determining the links between resources (which dictate the order in which they must be created), and ultimately what changes will be made to your environment. It’s a good idea to review the output, both to find any errors in your terraform files, and to ensure that terraform is going to make the changes you expect. For example, running plan against this project’s terraform file produces:

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
Terraform will perform the following actions:
  # aws_iam_role.iam_for_lambda will be created
  + resource "aws_iam_role" "iam_for_lambda" {
      + arn                   = (known after apply)
      + assume_role_policy    = jsonencode(
              + Statement = [
                  + {

N.B. Whilst I’ve found plan great for validating changes to a script, some errors, mistakes and omissions will only be caught once the changes are applied. Which leads me to…

terraform apply

As is probably expected, apply does exactly what it says on the tin: it applies the defined infrastructure to your given cloud environment(s). Running this command will change your cloud environment, with all the potential problems you could have caused manually; from breaking production systems, to unexpected vendor costs – but we’re all experts, so that’s not a concern, right?…

Multiple applies can be used to edit live environments, and terraform will (with both plan and apply) determine exactly what set of changes is required. This can be highly beneficial when iteratively troubleshooting a misconfiguration, or adding a new feature into an existing environment.

terraform destroy

destroy’s functionality is ultimately why I was originally interested in adding IaC to my toolkit whilst working within AWS. Like apply, destroy does what it says on the tin: it destroys all the resources terraform previously instantiated.

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  # aws_iam_role.iam_for_lambda will be destroyed
  - resource "aws_iam_role" "iam_for_lambda" {
      - arn                   = "arn:aws:iam::<REDACTED>:role/iam_for_lambda" -> null
      - assume_role_policy    = jsonencode(
              - Statement = [
                  - {

From a housekeeping perspective, this removes the potential for unneeded legacy resources being left around forever because no-one’s quite sure what they do, or what they’re connected to: a project environment built and maintained by terraform is gone in one command. From a personal perspective, this removes (reduces?) the potential for leaving expensive resources running in a personal account: finished a development session? terraform destroy and everything’s(*) gone.

* Ok, there’s some exceptions: For example, if there’s any uploaded files left in the created S3 bucket after testing, terraform destroy won’t delete the bucket until the contents are manually removed. Which I can definitely see is a sensible precaution to help avoid accidental data loss.
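For a disposable test environment where that precaution isn’t wanted, the aws_s3_bucket resource does support a flag for this (a sketch; double-check the behaviour against your provider version before relying on it):

```hcl
resource "aws_s3_bucket" "bucket" {
  bucket = "infosanity-aws-card-spotter-tfbuild"

  # Allow terraform destroy to delete the bucket even if objects remain in it.
  force_destroy = true
}
```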


Getting my hands dirty, I found a couple of issues which were initially difficult to overcome.

I’d initially naively assumed a little microservice built around a single Lambda would be a simple use-case for a learning project. Turns out, Lambda is one service that doesn’t work wonderfully well with terraform, thanks to a circular reference: the lambda code needs writing (and packaging into a zip archive) prior to deployment of the infrastructure, but the lambda code will probably need to reference other cloud resources which aren’t built/known until after deployment.

This was resolved by having the lambda code (Python, in this case) retrieve the resource references from runtime environment variables, which are populated by terraform at build time. Simple enough once worked out, but it meant that my simple project took longer than initially expected.
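The pattern is simple on both sides; as a minimal sketch (the variable and resource names below are hypothetical, not from the project code), terraform sets an environment variable on the lambda resource, and the Python handler reads it at runtime:

```python
import os

# Terraform side (sketch, hypothetical names):
#   resource "aws_lambda_function" "card_spotter" {
#     ...
#     environment {
#       variables = { SNS_TOPIC_ARN = "${aws_sns_topic.results.arn}" }
#     }
#   }

def lambda_handler(event, context):
    # Resolve the resource reference injected by terraform at build time,
    # breaking the circular dependency between code and infrastructure.
    topic_arn = os.environ.get("SNS_TOPIC_ARN", "arn-not-configured")
    # ... analyse the image, then publish the results to topic_arn via SNS ...
    return {"topic": topic_arn}
```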

Secondly, again for ease of proof of concept, the SNS Topic for this project was intended to be just a simple email trigger with the output. Because of AWS’ (sensible) requirement for an email subscription to be verified before sending (to prevent abuse, spam, etc.), email endpoints aren’t supported by terraform:

These are unsupported because the endpoint needs to be authorized and does not generate an ARN until the target email address has been validated. This breaks the Terraform model and as a result are not currently supported.

This isn’t the greatest limitation; it just requires a manual step of subscribing to the created SNS Topic in the AWS console before the deployed pipeline is fully functional. And I’d expect that in a real-world example, the process output will likely trigger further processes, rather than just filling another inbox.


I’m aware that this has been a very superficial introduction to the functionality terraform (and other IaC platforms) can provide. I’m glad I took the dive to add it to my AWS toolbox; I can understand why the ability to quickly define, build, and tear down infrastructure is going to be an important foundation for the other projects I’ve got rattling around my skull. Watch this space….


A Northern Geeks trip, well, home(ish)

Back in the annals of time (2011) I wrote about my first experiences at a security conference; the first UK BSides in London. To say that that con had a big impact on my career is an understatement, but that’s a story for another day. That experience was exactly why, when catching up with an old colleague just two-and-a-bit months ago and Ben said “I’m thinking of doing BSides in Newcastle, do you want to help out?”, I immediately and without thinking said “YES!”.

When I read a message later that week that said “I’ve got a venue sorted, we’re scheduled for 9 weeks time”, I immediately said “FUUuuu……….”

Fast forward 9 weeks, and the quickest ever organised BSides(*) is complete, the self-named #WeirdestEverBSides living up to its name (more on that later).

(* I believe that claim is accurate, but with BSides spinning up all over the globe, I’ll happily be corrected if/when someone else claims the crown).


Let’s get these out of the way first; no event organised in 9 weeks, spearheaded by people who have never organised an event before, was ever going to be perfect. The merch order wasn’t completed by the time of the con, the planned talk recordings/streaming weren’t active, the venue (by its nature, more later) was cold, and we were still packing the last of the attendee bags whilst the first batch were being handed out.

I’m sure some attendees have complaints and feedback I’m unaware of; please get in touch and provide any and all feedback – we can’t improve or fix problems we’re unaware of.


Where do I start? I was a broken man after the event (especially after only just fully recovering from an illness that I feared may have forced me to miss the big day), but overhearing positive feedback throughout the event kept me going, and reading all the feedback from attendees on social media channels since the conference closed has me immensely energised and insanely proud of playing my small part in planning and on-day helper-monkey work.

One advantage of the merch delay was that we had no way to differentiate any attendee, which was a great leveller: everyone was equal, and I could be confident that all the positive comments I heard weren’t just made to be polite to the crew member within earshot.

Now, if you’ve read previous write-ups of my various conference travels, this is the point where I usually attempt to summarise each talk I saw, distilling the copious notes I took and attempting to get key points across to anyone that missed a given talk but was interested in the topic. But this was the first con where I was part of the crew, so I barely got to any sessions myself, and DEFINITELY didn’t get to take any notes; so the rest of this post is likely to be a brain dump of my memories from the day.

Venue – Dynamix

What can I say? University lecture rooms? Hotel conference suites? All been done before; try a skatepark. Scratch that, we can go further – let’s have track 1 IN the halfpipe!

I felt the choice of venue was inspired from the first pre-con visit I made to get set up for the big day; perfectly setting the tone of the event, and providing a unique and memorable backdrop to the con.

The Dynamix team who hosted 100+ geeks and hackers? They were brilliantly supportive and helpful, many thanks to the whole team. If you’ve any interest in doing insane <redacted> propelled on little wheels, get yourself down; the skills on show by the regulars whilst we were setting up the night before were amazing. Me? Did I rekindle my youth (I used to be a blader, you know)? I considered it; then I bailed whilst walking down the steps, spread all over the concrete without the aid of wheels, so those days may be behind me.

Talks – tracks 1 & 2

As mentioned above, I’m disappointed that I missed almost all the talks in their entirety, so I’ll leave the talk summaries to others. What I will say is that the odd couple of minutes I managed to snag, hidden at the back whilst running between this and that task, were excellent. Sam’s journey through early-days home computing as a child felt strangely familiar, Rick’s journey through the evolution of cyberpunk made me feel OLD, and Ben philosophising on the methodology known as the “F’#%k it!” approach was both entertaining and provided an insight into how a con was able to go from idea to delivered in ~9 weeks.

I missed all of Jenny Radcliffe‘s keynote, but was left in stitches when I noticed the message left on the back of her hoody:

As I was running around like a mad thing at the time I read this, still unsure if we’d be able to pull the con off (despite the fact it had already started at that point), this definitely seemed like advice I wish I’d taken. (And I still somewhat blame my naive optimism for running the event on Jenny and her team, for making BSidesLiverpool’s inaugural event look so effortless.)


If a given pair of talk topics didn’t take your fancy, there was plenty to keep you occupied:

  • Physical Security? Try your hand at lockpicking (and safe cracking) courtesy of Moon on a Stick
  • Looking for your foot on, or next step up, the security career ladder? Try the careers village, with great thanks to Harvey Nash and Sharpe Recruitment
  • Already on the infosec career travellator and need help dealing with the stress and burnout discussed as part of several talks on the day? Try the all (most?) important Mental Health village
  • Your kit getting old and dated? Try the charity sticker collection.

Thanks to everyone that got involved in the last activity; almost £100 was raised for the Great North Air Ambulance, who do crucial work, and it was great to be able to support them in a small way.

As the below shot of my previously naked laptop shows, I had to be pulled away from the stand before I spent my kids’ inheritance.

Just 24hrs ago, this machine was ready for respectable business meetings. Now it’s ready to CRUSH those meetings 🙂


I’m gutted I didn’t once make it upstairs to the CTF (and not just because it was one of the only areas with warmth). Everything I heard during the event, and following up on social media afterwards, suggests I definitely missed a great contest. So I must say a big thanks to the PwnDefend crew for designing and running the CTF; I must make a better effort next year.

Lunch Break

Lunch started conventionally enough, with pizza provided by the Log Fire Pizza Co. They did an excellent job of refueling attendees and crew alike.

Entertainment during the lunch break was a bit more of a curve ball. You know what totally fits with an infosec conference? Wrestling! Well, maybe not totally, but we had to take advantage of the fact that Battle Ready, featuring none other than WWE NXT’s own Primate, were training in the far corner of the venue. They agreed to put on a show for attendees during the break, for the small price of a pizza each from the LogFire van. Odd combo, but most attendees appeared to appreciate another bulletpoint on the journey to the weirdest ever BSides.

Curtain falls

Ian, aka Phat Hobbit, took centre stage for the closing keynote. Delivered in his usual bombastic style, Ian took the audience through his review of InfoSec during 2019, and crucially provided his insight and wisdom on what will be needed for the years and decades ahead as we as an industry approach the turn of a new decade. Ian had the audience’s undivided attention; the only time you couldn’t hear a pin drop was when Ian had the audience roaring with laughter – sometimes with cracking wit, and sometimes with a hard truth delivered too close to home, generating a nervous, knowing chuckle.
It made me think about my own 2019 (I’m still thinking about it), but at the start of the year I definitely didn’t expect to be doing that contemplating whilst listening to the keynote speech of a conference I’d played a small part in organising, overhanging the lip of a half-pipe:

The memorable quote I took away was this:

You won’t be able to beat cyber criminals

We will beat cyber criminals

(Paraphrased as I wasn’t fast enough to take a sneaky pic of the slides before they changed. Ian, happy to be corrected if I’ve misquoted.)

So the conference which emphasised community and togetherness at the heart of an industry closed with the same message. A request was made for anyone – speaker, official volunteer or attendee – to raise their hand if they had helped out in any way, even down to the small act of re-positioning a chair in the venue; almost every single hand was in the air.

Which leaves me with the tl;dr version of a post that is FAR longer than I originally conceived:

For an industry with an annoying reputation for drama, together, we can, do and will achieve amazing things.

And goals which at the outset may have seemed impossible, improbable and just flat-out crazy, will be achieved.

My thanks to everyone who had enough faith in the event to give up their precious free time to join a bunch of overly naive and optimistic geeks and hackers who dared to believe that a security conference, organised in ~9 weeks, in a cold warehouse in Newcastle (yes, Gateshead 🙂 ), could possibly be anything other than a disaster. I’m obviously very biased, but I believe we achieved, at least, the level of ‘not a complete disaster’.

See you next year? Maybe?


[Project] AWS-Card-Spotter

I’ve been (very) quiet recently for a number of reasons which I’ll not bore everyone with; but I have recently started to get my hands dirty in the new (to me) world of AWS. As an ex-physical-datacentre hosting monkey, this takes a bit of getting used to, as I’m still seeing things through the prism of physical kit. Having an actual project to work on has always been my preferred method of learning, even if the outcome may not ultimately produce anything of operational value.

To that end (and having spent too much time with QSAs at the time of coming up with a workable scenario), I took a look at how/if some of AWS’ features could be leveraged to identify whether an uploaded image contains payment card data, which could then be used to trigger an organisation’s PCI handling processes.

Version 1 – CLI tool

I’m still a commandline junkie at heart, and still writing (very poor) Python code when the need arises, so the first proof of concept was a CLI tool using AWS’ Python SDK, Boto3. Of the services able to achieve the project’s aim, Rekognition hit the top of the research pile. Amongst some fancy video analysis capabilities I need to investigate separately, AWS’ Rekognition service appeared to do exactly what was needed:

Amazon Rekognition makes it easy to add image and video analysis to your applications. You just provide an image or video to the Amazon Rekognition API, and the service can identify objects, people, text, scenes, and activities.

I went into the project expecting some form of OCR to extract text from an image, then needing to hunt for regexes matching 16-digit card numbers, sort codes, account numbers, expiry dates etc. that may be indicative of a card. From an initial reading of Rekognition’s documentation, it is indeed highly capable of competently extracting text from an analysed image.

Thankfully however, whilst reading my way through the docs and SDK, I spotted something that made my life easier; and, to everyone’s benefit, avoided the need for me to fight with REGEX strings. As usual, someone (AWS in this case) had got to the problem before me, in the form of the DetectLabels function call. DetectLabels does what you might expect from the name: it detects things in a given image, and labels them with what Rekognition believes each thing to be; and in this case, one of the classes of things which Rekognition can detect is (you guessed it) payment cards.

With the above in hand, my initial use-case for working with AWS produced the AWS-Card-Spotter POC:

"""Testing Rekognition's ability to identify credit cards."""
import boto3

import config  # local settings module providing the bucket name and image list

rekog = boto3.client("rekognition", "eu-west-1")

for image in config.images:
    # Ask Rekognition to label the object held in S3
    response = rekog.detect_labels(
        Image={
            "S3Object": {
                "Bucket": config.bucket,
                "Name": image,
            }
        }
    )

    for label in response['Labels']:
        if label['Name'] == "Credit Card":
            print("[%s] Credit Card Identified in %s: %i Confidence" % (config.bucket, image, int(label['Confidence'])))

It’s admittedly not much(*), but it provides the pipe through which to pass images to Rekognition, and displays the analysis; in the case of my test images:

  • [Your S3 Bucket Here] Credit Card Identified in Black-Credit-Card-Mockup.jpg: 86 Confidence
  • [Your S3 Bucket Here] Credit Card Identified in CreditCard.png: 91 Confidence
  • [Your S3 Bucket Here] Credit Card Identified in credit-card-perspective.jpg: 93 Confidence

(* In my defense: “not much” is precisely the power of cloud-first solutions. The ability for a novice scripter to achieve a non-trivial goal with a few lines of code, a couple of function calls and very little (no) capex is exactly why I’m currently finding this world so interesting)

Version 2 – Serverless

With the above proving my premise was workable, I next looked at turning a commandline tool on my local machine into a consumable and automate-able, cloud-native service (is that enough buzzwords for my VC elevator pitch?).

For the more experienced amongst you reading this, what came next is likely very obvious at this point:

  • Image uploaded to S3 bucket
  • Triggering a Lambda function (essentially a refactored version of CLI code above)
  • Lambda calls Rekognition
  • Results are output to an SNS Topic for consumption (in my test, an email with the results)
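The steps above meet in the Lambda entry point. As a minimal sketch (the function and handler names are my own illustration; only the S3 event structure is standard, and the Rekognition/SNS calls from the CLI version slot in where indicated):

```python
def extract_uploads(event):
    """Pull (bucket, key) pairs out of a standard S3 event notification payload."""
    return [
        (rec["s3"]["bucket"]["name"], rec["s3"]["object"]["key"])
        for rec in event.get("Records", [])
    ]

def lambda_handler(event, context):
    for bucket, key in extract_uploads(event):
        # Refactored CLI logic goes here: call Rekognition's DetectLabels on
        # the uploaded object, then publish any "Credit Card" hits to SNS.
        pass
    return {"processed": len(extract_uploads(event))}
```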

And, to my surprise as much as everyone else’s…… It WORKED!

Version 3 – SSsssh!

That’s a work in progress, watch this space, and you never know….. <update: now you do>

Andrew Waite

A Northern Geek's trip South West

June has been a busy month, hot on the heels from BSides London (review here), I again found myself on a train BSides-bound, this time heading for Liverpool.

Before getting to the tech, I’ll point out that this was my first time in Liverpool. After a very brief visit I found the city to be beautiful, and the conference location in the docklands certainly didn’t hurt; I intend to make a return visit to hit the tourist spots as soon as I can manage it.

As I’m currently being more prompt than I was with my London wrap-up, I’m not yet able to link to the talks’ recordings. But after watching Cooper and team run around diligently manning cameras and audio equipment, I’m sure they’ll be available shortly, and I’ll endeavour to update once they are.

The day got off to a bang courtesy of the welcome address, without repeating verbatim, it was an excellent sermon reading from the (un?)Holy Book of Cyber….

From there, I was fortunate enough to attend the (mostly) excellent talks below.

Key Note – Omri Segev Moyal

Reading Omri‘s talk abstract prior to the event, I was unsure I was going to agree with the premise “Focus on malware, not Infrastructure”. Thankfully it seemed I’d gotten the wrong impression, and instead of focusing on corporate infrastructure (as I’d expected), Omri covered malware analysis without focusing on the infrastructure required to do so.

Any long time reader may be aware that malware analysis was the initial goal that kicked off this humble blog (though I got distracted along the way); and those readers may also tie a link between the drop in post volume and me losing access to a datacentre. Migrating to alternative models is something I’ve been working on in the background – but oh boy did Omri provide a firehose-laden crash course to jumpstart that journey.

I’ll not go too deep into technical detail of material covered, largely as I hope to implement some of the ideas in the coming weeks, covering in more detail once I’ve actually gotten my hands dirty myself. I will just say that the demo quickly spinning up a DNS sinkhole without (your own) infrastructure got the creative juices flowing – and was very in keeping with other talks of the day (but I’ll get to that later).

<update> Omri’s presentation deck can now be found here, with some associated code examples on GitHub </update>

Martin King – This is not the career you are looking for

It pains me to say it, as I’m not sure I can trust anyone who doesn’t like cheese, but Martin dropped plenty of wisdom and advice for those contemplating a career in infosec – advice that I wish I’d had (and paid attention to) when I was starting out. I’m paraphrasing as my notes from the talk aren’t the best (Martin, please correct any point that’s been misquoted), but Martin’s top 10 tips:

  1. Today, Every company is an IT company.
  2. Never stop learning, and always be eager for more knowledge.
  3. You are the asset; your brain is more important than your muscles’ ability to mechanically tick boxes without impact.
  4. There’s MANY great free resources available, leaving no excuse for point 2.
  5. Learn to Google; knowing the answer is less important than always being able to find the answer.
  6. Don’t be the stereotypical infosec tech that hates people. People skills are more important than technical skills when it comes to being able to make an impact in an organisation.
  7. “Failure is the best teacher”
  8. Question everything; and automate everything else
  9. There’s as many paths into an infosec career as there are people with infosec careers: Being you is the best option.
  10. The industry is INCREDIBLE. Ask for support and you’ll (likely) get it.

Sean Lord – Deception Technology

With the topic being deception technology I was understandably looking forward to this talk. As Sean stated at the very beginning of the talk “this is not a vendor pitch”…..

Andrew Costis – LOL-Bins

For those unaware, LOL-Bins are nothing to laugh at: Living Off the Land Binaries are those tools that come (mostly) pre-installed on targeted operating systems, which a hacker can leverage to achieve their goals without requiring additional software (which may trigger AV alerts).

Andrew did a good job of explaining the core concepts, the LOLBAS Project, Mitre ATT&CK framework, and most importantly; how it can all be brought together to strengthen resilience against intrusions.

Panel – How to submit a CFP

Takeaway from this session was simple, and invoked a certain brand: JUST DO IT!

Peter Blecksley – On the hunt

Yes, that Peter Blecksley. This was the first talk that I was disappointed wasn’t recorded; but given the content of the session it’s not too surprising. Peter was an EXCELLENT speaker, detailing some of his former life undercover with Scotland Yard, his time in witness protection as a result, the Hunted TV show and, most importantly, the particulars of his current man-hunt for “Britain’s most wanted fugitive” (head here to see if you can help).

Kashish Mittal – One Man Person Army

Kashish discussed his experiences building up several SOC teams, and the tips he’s learnt along the way.

One of the key pointers I took from the talk was the importance of making an impact early, and building a reputation for getting results. Starting a new function within an organisation can be daunting, primarily because a complete version of that function has a laundry list of capabilities you eventually need to be able to perform; but prioritise your goals and:

Secure > Document > Repeat

Ian Murphy – The logs don’t work

Like Omri’s keynote, I was dubious of Ian’s premise; but I found the talk far less provocative than the abstract suggested, and I found myself agreeing with all (most?) points made. Briefly:

  • Alert fatigue eventually means even critical alerts end up being ignored. If an alert isn’t actionable, why are you alerting on it?
  • There’s not enough innovation in InfoSec. When Gartner claimed “IDS is Dead”, as an industry we changed the D to a P, and moved the same device in-line.
  • Assume breach; both that you already have been, and that you will be in the future.
  • Humans are always the weakest link.
  • Unless you’re a LARGE company, attempting to build a dedicated, fully functional SOC is nothing more than “a CISO’s ego-trip”. Leverage the skillsets of specialists.

Jamie Hankins – WannaCry

I must start with a confession: Prior to this talk I don’t think I was aware of Jamie, or his proximity to the events of the WannaCry/NHS saga. That was a failing on my part, and one I’ll attempt to redress in the future.

I was also sat in the room early before the session, and was aware of Jamie’s immense nervousness prior to his talk, this being his first time; I was genuinely worried that Jamie might truly bottle the session and run.

So, with all that said, what was the outcome when Jamie started? Best. Session. Of. The. Day. Seriously, I’ve no idea why Jamie was nervous, and judging by the reaction, the rest of the audience shares my opinion.

Unfortunately, the session wasn’t recorded; for reasons that make sense when you consider the current ‘experiences’ of Jamie’s partner in (not) crime after getting some media attention.

Keeping with the above, and honouring the request for no pictures (which was brilliantly ignored by an attendee in the front row, despite the bouncing “no photos” screensaver projected on stage), I’ll refrain from covering most of the talk, but will share a couple of notes covering the wider context.

  • NCSC’s CiSP platform and team are amazing – As a user of the platform during the incident in question I must concur. Seeing the industry come together and collaborate during an incident is ALWAYS amazing.
  • Doesn’t matter what is going on, everything gets dropped 12mins before Starbucks closes
  • The effort to prevent damage from Wannacry infections is continuing long after the media circus has subsided.

Beer Farmers

What can you say about a Beer Farmers’ talk? It was entertaining, engaging, and spoke a LOT of truth. But I wonder at the value of such a talk as it’s mostly preaching to the converted; and given the delivery style, I doubt it would be overly well received outside of the echo chamber.

Finux – Machiavelli’s guide to InfoSec!

Arron has come a long way since I was fortunate enough to listen to him speak nearly 10 years ago at an OWASP meet; but one thing that hasn’t changed is Finux’s enthusiasm for telling a story, getting a point across, and making an audience want to listen.

When the audience was asked to raise their hands if they’d read Machiavelli’s work, mine remained down. So I was a little surprised to discover how well some of the teachings could be transcribed to the modern world, and InfoSec in particular. Especially as it would give speakers someone to quote other than Sun Tzu, I wonder if Arron will start a trend after pointing out the options.

Summing Up

Many, many thanks to BSidesLiverpool organisers, crew, goons, speakers and attendees. I wish I could have spent more time with all of you, thoroughly enjoyed the time we did share, and I hope to do it all again soon.


A Northern Geek's Trip South – 2019 edition

How time flies; and with it, another BSides London is now a long-distant memory.
My itinerary for the pilgrimage South was familiar, mostly following a well-worn pattern:

  • InfoSec Europe Tuesday
  • BSides itself Wednesday
  • Thursday? Recovery time in the capital, before heading for the train back to (my) civilised society.

And throughout: a generous smattering of catching up with ex-colleagues as the whole industry descends on the capital. I’ll not embarrass (or incriminate) those by name, but you know who you are; it was good to see you all, and we must do it all again soon.
Tuesday – InfoSec Europe
InfoSec is what it is; a good excuse to meet contacts at various vendors and partners for the first time, and to catch up with some old contacts.
The conference hall felt like it had been hit by austerity: less crowded than previous years, fewer ‘booth babes’ (not a bad thing, maybe vendors are finally getting the message), and vendor swag? Still available, but the good stuff seemed to be under the table, given out at discretion rather than just a free-for-all grab as attendees did the rounds.
Wednesday – BSides London
What’s not to like? This year’s topics were as varied as ever, with all sessions I attended being top-drawer. Very briefly:

PowerGrid Insecurities
For reasons that make sense if you were there, this talk wasn’t recorded but WAS very informative. I now know to be more wary of squirrels than terrorists when it comes to outages on the power grid. And I may, unfortunately, now be able to explain the random tape from old-school cassettes I found around the local substation…
A Safer Way to Pay – Card Payment Infrastructure
Chester provided a great overview of both the current, and future, state of card payment infrastructure. If you’re involved in financial transactions, PCI audits or similar this talk covered some of the background tech and networks involved.
Fixing the Internet’s Auto-Immune Problem – BugBountys and Responsible Disclosure
Debates and topics around disclosure, responsible or otherwise, are always interesting. Chloe’s take on the current legalities, and more importantly what will be needed in the future to provide a safe and stable foundation for non-contracted testers, definitely did a good job of expressing the views of one side of the debate and kickstarting some interesting conversations in LobbyCon.
When the Magic wears off – ML
Firstly, an admission: I ended up in this talk by accident after getting my track numbers confused. That said, the talk was interesting; but it confirmed my reasoning for not originally having it on my agenda – I simply didn’t have enough background knowledge in ML to fully understand the content. It was interesting to follow along with, but you’ll need someone from that world to fully explain it to you.
Build to Hack, Hack to Build – Docker (in)security
Docker (and Kubernetes) isn’t something I’ve had much real-world exposure to (yet: as with everything, it’s on a growing list of side projects I’ve not found time for). The session was a great introduction to the world of container (in)security, and I left with some frameworks and tooling to help bootstrap my future efforts in the area – watch this space.
They are the Champions – Security Champions
There are always more security projects than InfoSec resources in any org, so tips for leveraging the wider business never hurt. Jess always provides a thorough, professional and powerful presentation, but personally I think this was almost to its detriment this year, feeling too polished and sales-pitchy for a BSides. Not necessarily a criticism, but I’d prefer a return to singing in Klingon for a memorable talk.
Closed for Business – Taking down Dark Markets
I’ve always found the real-life war stories of LEAs taking on various dark marketplaces fascinating, so getting the chance to hear some modern examples in person was definitely high up on my priority list for this year’s sessions. John didn’t disappoint; if you’ve got an hour to kill, be prepared for an interesting journey.
Inside MageCart – Web skimming tactics revealed
This session was one of those talks that manage to bridge the gap between fascinating to me personally, and relevant professionally (helping to convince $employer to fund the trip). Left the talk with a better understanding of the techniques and incidents behind the headlines, as well as some interesting tid-bits around what could be the next evolution of the campaigns. Hopefully enough so to stay one-step ahead of the curve, and avoid being front-page news myself.
CyberRange – OpenSource Offensive Security Lab in AWS
This talk introduced a newly released toolkit for rapidly spinning up, and tearing down, offensive, defensive and vulnerable lab environments in AWS. And who doesn’t like having a packed toolkit of toys to play with, and a safe environment to use them on? – project here
Closing Remarks
This year’s closing remarks were bittersweet: capping off a great and successful day is always good, but they came with a new (to me) announcement of a changing of the guard for the team behind BSidesLDN. This inevitably resulted in reminiscing about events gone by, and as one of the handful at the first BSides London, it is remarkable to see how far the event and the community around it have come since that first event in the Skills Exchange.
Thursday – recovery^W PCI Council
I’ve already said my usual itinerary uses Thursday as recovery (I love BSides, but it’s one intense day), whilst catching some of the tourist spots on a meander back to KingsX. This year? “Your trip to London? You said Thursday was free?” I did… Off to a half day with the PCI Acquirers group it is.
I’ll admit I wasn’t looking forward to this (the terms PCI, QSAs and auditors trigger my PTSD…), and arriving at the (very fancy) venue in jeans, a conference tee-shirt and a backpack stuffed for the full week’s trip, I felt out of place with every other attendee suited and booted. That said, I was pleasantly surprised. All sessions (bar one; I’ll mention no names, but I think the hostess wanted a shepherd’s crook to hoist the overrunning speaker off stage) were excellent. So much so, I came back to the office with the suggestion that we send colleagues to future events whenever we’re able.
Highlight of the event for me was John Elliot discussing MageCart. As I’d been in a BSides session covering the topic the day before, comparing the perspective of industry with that of those closer to the internals of PCI itself was fascinating. Unfortunately, unlike BSides, the event wasn’t recorded for later consumption; but as luck would have it, John had provided the same talk (in longer form) for a webinar session the week prior, which was recorded – enjoy
Another BSides in the can, until next year

Sanitising WSA export dates

As AV solutions go, Webroot’s Secure Anywhere (WSA) does a decent enough job of protecting against known and unknown threats, but I’ve always had disagreements with the administrative web interface for device management. As a workaround, whenever I’ve needed to analyse the endpoints in any depth I’ve typically exported the data from the interface and manipulated it with the usual toolkits (grep/Excel/etc.).
There’s still a problem with the exported data in terms of easy manipulation, namely the chosen date format, which is frankly bizarre given it’s generated by a digital platform in the first place – example: November 30 2015 16:25. Anyone who has spent any time sorting data sets by date will immediately see the problems with this format.
The script, released today, simply reads the standard WSA “export to CSV” file, modifies the date format of the relevant fields and creates a new *-Sanitised.csv file. The dates are then easily machine-sortable, in the format YYYY-MM-DD HH:MM.

user@waitean-asus:~/Webroot# ./
Script sanitises the date format from Webroot Secure Anywhere’s “Export to CSV” output
script expects a single parameter, the filename of the original .csv file
script will create a single csv file with more sensible date format
./ exportToCSV.csv
user@waitean-asus:~/Webroot# ./ WebrootExampleExport.csv
[*] Opening file: WebrootExampleExport.csv
[*] Updating date fields….
100 records processed…
200 records processed…
300 records processed…
400 records processed…
500 records processed…
[*] Processing complete. 510 corrected and written to WebrootExampleExport-Sanitised.csv

The tool is basic enough, but if you regularly encounter WSA and haven’t already created a similar tool to work with the data, this script may (hopefully) prevent you from pulling your hair out.
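For anyone wanting to build something similar themselves, the approach can be sketched in a few lines of Python. This is a minimal sketch, not the released script itself: the helper names (`sanitise_value`, `sanitise_file`) and the assumption that every date-bearing field stands alone in its own CSV cell are mine.

```python
#!/usr/bin/env python3
"""Sketch of a WSA export date sanitiser.

Converts WSA-style dates ("November 30 2015 16:25") to a sortable
"YYYY-MM-DD HH:MM" format, writing a new *-Sanitised.csv file.
"""
import csv
import sys
from datetime import datetime

WSA_FORMAT = "%B %d %Y %H:%M"   # e.g. "November 30 2015 16:25"
ISO_FORMAT = "%Y-%m-%d %H:%M"   # machine-sortable output


def sanitise_value(value: str) -> str:
    """Rewrite a WSA-style date field; pass any other field through unchanged."""
    try:
        return datetime.strptime(value.strip(), WSA_FORMAT).strftime(ISO_FORMAT)
    except ValueError:
        return value


def sanitise_file(in_path: str) -> str:
    """Read a WSA CSV export and write a *-Sanitised.csv copy with fixed dates."""
    out_path = in_path.rsplit(".csv", 1)[0] + "-Sanitised.csv"
    with open(in_path, newline="") as fin, open(out_path, "w", newline="") as fout:
        writer = csv.writer(fout)
        for row in csv.reader(fin):
            writer.writerow([sanitise_value(field) for field in row])
    return out_path


if __name__ == "__main__" and len(sys.argv) > 1:
    print("[*] Written:", sanitise_file(sys.argv[1]))
```

The try/except keeps the logic simple: anything that doesn’t parse as a WSA date (hostnames, policy names, headers) falls through untouched, so there’s no need to hard-code which columns hold dates.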
–Andrew Waite
P.S. if you’re a developer, please take the time to review ISO 8601 to stop these tools being needed in the future.

Google Glass: New threat or business as usual?

Woke this morning to find several articles covering the release of a short script designed to locate and ultimately block wearers of Google Glass from accessing a wireless network. This was apparently released in response to the author’s discomfort at knowing there was a Google Glass wearer in an audience, mostly due to the device’s recording/streaming capabilities.
My immediate thoughts are three-fold:

  1. Like it or not, wearable tech will become more common; control and guide rather than trying to hold back the tide.
  2. Blocking from the wireless won’t necessarily stop the recording or streaming. (I’m assuming) a wearer could connect to a 3/4G AP (using a mobile) and stream over a private network.
  3. Why is this news worthy? Shouldn’t all network owners and admins be monitoring and restricting unauthorised/undesired devices from connecting to their network in the first place?

I think we’ll see similar stories in the future as the move to wearable tech becomes more widespread.

Tales from the Honeypot: Bitcoin miner

My Kippo farm has been largely retired as most of the captured sessions were becoming stale and ‘samey’. Thankfully, however, I’ve still been getting daily reports thanks to this script (now available in the BitBucket repo), and this morning something new caught my attention – a ‘guest’ attempted to turn the compromised machine into a Bitcoin miner.
For anyone living under a rock for the last few months, Bitcoin is the first of a new breed of ‘crypto-currency’: essentially a decentralised monetary format with no geographical (or regulatory) boundaries. A good basic guide is here if you need a refresher.
Our guest connected from an IP address that hasn’t appeared in the honeypot logs previously; whilst the password on the root account is (intentionally) weak, I still find it unlikely that our guest got lucky on the very first attempt. My suspicion is that the compromised machine was identified as part of a previous compromise: anyone who has run an SSH honeypot for any length of time will be aware that attackers frequently use compromised machines to scan for other vulnerable victims, and that successful rogue log-ins often disconnect immediately. My assumption has always been that this is nothing more than automated scanners identifying and confirming valid credentials before reporting the system details back to their master for manual follow-up. It is also possible that this particular guest acquired a list of pre-identified vulnerable systems as a foundation for future activities.
How our guest found their way to the system is, unfortunately, pure speculation, and for the purposes of this analysis largely irrelevant; what is more interesting is what they chose to do once access was gained. After (very) briefly looking around, and failing to detect the presence of the honeypot, a 64-bit Bitcoin miner was downloaded. Details, for those who want to play along from home:

  • Location (live at time of writing, browser beware) –
  • MD5sum – 007471071fb57f52e60c57cb7ecca6c9 (VirusTotal)

Once downloaded, the guest attempts to run the binary with the following parameters:

  • -a sha256d --url=stratum+tcp:// --userpass=orfeousb.vps:qwertz1234chmod +x minerd64

It appears that the guest has little experience with falling foul of a honeypot; when running the binary fails he (or she) downloads the same file, from the same location, and attempts to execute the miner a second time. When this fails, the guest simply exits the system (after being briefly fooled by Kippo’s “localhost” trick on exit).
Those paying attention will notice the link between both the domain and the mining-pool username; this leads me to believe that the miner was downloaded from the attacker’s own system, not a compromised system subverted for this purpose. Whois records indicate that the domain was first registered in July 2013 by a private registrant, including both name and address (redacted until verified).
Given the £-value involved with crypto-currency at present it should be no surprise that enterprising criminals are attempting to cash in on the bandwagon; with hindsight I’d be more surprised if they didn’t seek to use compromised systems to add to their own mining pool(s; the username ‘orfeousb‘ suggests the potential for multiple accounts). I’m somewhat surprised it has taken until now to be noticed. Brief research (ok, Google-fu) tonight indicates that the minerd64 binary has been present in active attacks since at least the turn of this year, albeit relying on a different compromise vector (a Zimbra compromise), and VirusTotal shows that this exact binary has been seen in the wild since at least March 2014.
The change in attack scenario appears to be part of a wider campaign: as well as this session, I’m aware of a similar session taking place on another Kippo honeypot within the last 48 hours, again with connections to .hu systems.
How much this campaign has netted the pool owner(s) to this point is anyone’s guess, where there is profit there will be criminals so I doubt this will be the last we see of similar attack patterns.
Until next time, happy honeypotting.

P.S. For the curious, all shell interaction during the compromise:

ls -l
ls -l /home
cd /home/<redacted>
ls -l
cd ..
cd ..
uname -a
ls -l
./minerd64 -a sha256d --url=stratum+tcp:// --userpass=orfeousb.vps:qwertz1234chmod +x minerd64
ls -l
chmod +x minerd64
ls -l
cd /root
chmod +x minerd64
ls -l
ls -l
chmod minerd64
ls -l
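Sessions like the one above are easy enough to triage by eye, but when a farm produces dozens per day it helps to flag the interesting ones automatically. A minimal sketch of the idea, assuming a plain-text log with one command per line; the indicator list and function names are my own, not part of the reporting script mentioned earlier:

```python
#!/usr/bin/env python3
"""Flag suspicious commands in a captured honeypot shell session.

Sketch only: the indicator substrings and the one-command-per-line
log format are assumptions, not any specific Kippo log format.
"""
import sys

# Substrings commonly seen when a guest bootstraps a dropper or miner
INDICATORS = ("wget", "curl", "chmod +x", "minerd", "stratum+tcp")


def suspicious_commands(lines):
    """Return (line_number, command) pairs matching any indicator."""
    hits = []
    for n, line in enumerate(lines, start=1):
        cmd = line.strip()
        if any(ind in cmd for ind in INDICATORS):
            hits.append((n, cmd))
    return hits


if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1]) as fh:
        for n, cmd in suspicious_commands(fh):
            print(f"[!] line {n}: {cmd}")
```

Run against the session above, this would flag the `chmod +x minerd64` and `./minerd64 … stratum+tcp` lines while ignoring the reconnaissance noise (`ls`, `cd`, `uname`).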

Ranting at the youth

Since graduating back in 2006 I’ve been honoured by Northumbria University with invitations to return and speak with their students, with the hindsight of having spent time out in industry; I covered my last trip here. So when I got an email at the tail end of last year I didn’t think twice in agreeing; though in hindsight I should have asked more questions. Previous sessions have been 15-minute slots; this time around I was booked in for 2 HOURS, something I only discovered after I’d already agreed. Think I nearly fainted at that point.

Thankfully, one thing I’ve never had a problem with is telling war stories, anecdotes and lessons learned. As the Uni were looking for real-world experience this seemed ideal, so I based the presentation around incidents I’ve encountered, to (hopefully) help others learn from my experiences. For anyone willing to follow along at home, the slide deck can be found here, though I doubt it’s particularly useful, as the slides were more memory-jogs for me than actually useful information.

As I was unsure how long I’d be able to talk for (anyone who has seen me talk previously will know I can get rather, speedy, as I get excited), I set up a lab environment to demo some of the technologies discussed – honeypots, no surprises there. The plan was that the lab could expand to fill whatever time was left in the session after I ran out of slides. At least, that was the plan; as it happens, the content generated sufficient levels of debate, interest and questions that I managed to fill the whole slot, and even overran slightly with some Q&A after the event.

Remembering my experience on the other side of the divide, bored stiff listening to those in the ‘real world’ whilst at Uni, I’d spent the last couple of weeks with plenty of trepidation that I’d be wasting everyone’s time. So I was delighted to (nervously) check my Twitter feed after the session closed and find several messages of positive feedback in my timeline; taking a leap that the students weren’t all just being polite, the session seems to have been a success and of some benefit. Adding this to the usual buzz I gain after public speaking in general, I’m currently a very happy geek.

Many thanks to Northumbria University for extending the invitation in the first place, and for Onyx Group’s continued understanding and flexibility to enable me the time to get involved with this and similar activities – not all profit is commercial.

Andrew Waite