AWS Cloud Development Kit

After posting previously about dipping my toe in the Infrastructure as Code waters with Terraform, a kind individual (who requested staying nameless) asked if I’d encountered AWS’ native Cloud Development Kit (CDK). I vaguely remember seeing a Beta announcement some time back when the toolkit was first announced, but had discounted it at the time as it wasn’t stable for production workloads. Reviewing again, I’d missed the announcement that the CDK had graduated to general availability in July last year.

Whilst I was quite happy with Terraform’s performance where I’ve used it, my mantra towards cloud-based platforms is: (where possible) stay native. So I took a look at what the AWS-CDK offered, and immediately came across some excellent resources to get me rapidly up to speed.

  • The CDK Workshop provides great setup guides, and without being a dev I was able to get CDK functional in my environment with minimal fuss. The workshop is helpfully split and duplicated into your own (supported) language of choice (more on this benefit later). If you’ve been here before, it should be no surprise that I took advantage of the Python modules.
    N.B. The workshop will set you up with AWS Access Keys with Administrator privileges. This may have been the trigger for yesterday’s post on protecting your keys with MFA. You may want to consider the same….
  • The AWS-Samples repo should be a required bookmark for anyone working with AWS. The AWS-CDK-Examples repo is no exception, providing great real-world use cases to suggest potential architecture design patterns. As with the workshop above, the examples repo has examples across all supported coding languages.
  • Buried in the above examples repo was a link to a recorded demo from two of the CDK’s lead developers, Elad Ben-Israel and Jason Fulghum. I’d definitely recommend taking an hour to watch the devs leverage the power of the CDK to live-code a solution to a real-world problem; it greatly helped me get started.

(Almost) Language agnostic

In addition to being native from AWS themselves, one of the immediate values to me is that (unlike Terraform, CloudFormation, or similar) the CDK maps into common programming languages as just another set of libraries/modules. From my perspective this means I can leverage the power of IaC whilst staying within my preferred Python, and not needing to learn an additional language/syntax.

For the curious, this is enabled by JSII, which maps between the toolkit and a given language’s syntax and structure; but I’ll admit this aspect gets me well outside of my coding comfort zone, so I’ll just appreciate that it works. In practice this means that if Python isn’t your language of choice, you’ve plenty of other popular options.

Getting Started

If I’ve whetted your appetite, I’d recommend you stop reading the blog of someone who is just getting to grips with the CDK himself and jump into the resources above.

If you’re still here (why? seriously, check out the links above from those that know what they’re talking about), I’ve essentially mapped the primary commands/features I’d leveraged from Terraform into their CDK equivalents.

terraform init <==> cdk init
cdk init will do what it says on the tin: initialise your current working directory with the basic building blocks needed to start defining your architecture and pushing to the cloud.

Warning: if you’ve an existing codebase you’re planning to CDK-ify, I’d strongly recommend init’ing in a blank directory first, so you can review the changes the command will make to your existing workspace.

terraform plan <==> cdk synth
cdk synth takes the code you’ve defined in your language of choice and creates a CloudFormation template to deploy the defined architecture, assets and configuration.

terraform plan <==> cdk diff
Diverging from Terraform’s workflow, cdk diff provides a view of your environment from the perspective of what changes are going to be made: going from your environment as it’s currently running to the future state that deploying your current CDK stack will create. Leading us to…

terraform apply <==> cdk deploy
cdk deploy is the first command that will actually make changes to your running AWS environment. All being well, it takes the CloudFormation template developed by cdk synth and actually runs the template under CloudFormation to build, modify or remove your infrastructure as required.

terraform destroy <==> cdk destroy
Again, as it says on the tin – cdk destroy will destroy all(*) resources created and managed by its defined stack(s), removing them from AWS. As I stated when discussing Terraform and IaC originally, this is the key value for my interest in IaC toolkits: confidence that at the end of a session I can tear down the infrastructure I’ve been using and (hopefully) not get hit with a nasty AWS bill if I forget about service $x and leave it running for a few weeks.

(* As I found with Terraform, S3 buckets (and I suspect other data services) don’t get removed by the various destroy commands if they’ve been populated with any data after being instantiated – you have been warned…..)

First impressions…

…very good. As I did with Terraform, I built a proof of concept using CDK to deploy my little microservice for automatically spotting credit cards in images. This was something I was able to achieve with a few hours of learning, research, and trial and error. Will likely cover this separately shortly….

With Terraform I found the use-case of a single Lambda function and peripheral services harder than expected: the code for the Lambda needed to be packaged manually as an archive prior to diving into the various *.tf configuration files. Admittedly not the end of the world once the process flow was known, but I did find it broke workflow and felt a little cumbersome.

With CDK this pain point was completely removed. When defining the Lambda resource in the CDK’s stack configuration, you simply pass the code parameter the directory containing your lambda function (and required libraries, if necessary), and CDK will do the rest on a deploy. For example, literally just:

example_function = aws_lambda.Function(self, "cardspotterLambda", code=aws_lambda.Code.asset("./lambda/"), some_other_params....)
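To put that one-liner in context, a minimal stack ends up looking something like the sketch below (based on the CDK v1 Python modules; the construct name, handler and runtime values are illustrative and will depend on your own function):

from aws_cdk import core, aws_lambda

class CardSpotterStack(core.Stack):
    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id, **kwargs)
        # Point the function at a local directory; CDK zips and uploads it on deploy
        aws_lambda.Function(
            self, "cardspotterLambda",
            code=aws_lambda.Code.asset("./lambda/"),
            handler="cardspotter.handler",  # illustrative module.function within ./lambda/
            runtime=aws_lambda.Runtime.PYTHON_3_7,
        )

On cdk deploy the toolkit packages the ./lambda/ directory as an asset, uploads it, and references it from the generated CloudFormation template – no manual zipping required.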

Teaser for things to come…..


Andrew

AWS CLI – Forcing MFA

If you’re planning on using AWS efficiently, you’re going to want to automate with the CLI, various SDKs and/or the relatively newly released Cloud Development Kit (AWS-CDK). This typically requires an access key pair, which provides access to your account and needs to be secured against abuse. Adding MFA capabilities to the account reduces a lot of risk and works seamlessly with the web console UI, but can cause some confusion when dealing with CLI access.

For succinctness I’m going to assume that you have an existing user, and associated access keys already. Your ~/.aws/credentials will look like the below (relevant keys will be removed by the time this is posted):

[mfa-demo]
aws_access_key_id = AKIAQOEN7NXFSJGCSB24
aws_secret_access_key = NzPMANXBqFDfh7YwkYpIPBgbET94QFg75eswzG7l
region = eu-west-1

Permissions

For the sake of this demo, we’re going to duplicate the Admin policy (below), granting near God-like access to your AWS account. Note: you probably don’t want to do this in the real world, as it clearly makes a mockery of the principle of Least Privilege, but it works for a demo as this is clearly a privileged account that we’d want to do our utmost to lock down and keep out of hostile hands.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "GodLikePermissions",
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*"
        }
    ]
}

AWS CLI

Using the AWS CLI tool, you can now do anything your user has permission to do; which, with the permissions above, is just about anything. For example:

awaite@Armitage:/tmp$ aws --profile mfa-demo s3 ls
2019-09-28 20:55:13 <redacted>
2019-09-28 20:56:03 <redacted>
[--snip--]

Adding MFA token

Adding an MFA token is handled from the user’s security credentials page. If you’ve ever used virtual MFA tokens for literally any other service, then the process should be self-explanatory.

Job done?

Unfortunately not: the access key pair above will still provide access just as it did before.

Deny access without MFA

IAM policy evaluation logic causes an explicit Deny to take priority over any competing policy statement. With this, we can add a policy statement to deny any action requested without MFA. Expanding the sample policy statement above produces:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Permissions",
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*"
        },
        {
            "Sid": "DenyNonMFA",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "BoolIfExists": {
                    "aws:MultiFactorAuthPresent": "false"
                }
            }
        }
    ]
}

And running the same CLI command now throws an error:

awaite@Armitage:/tmp$ aws --profile mfa-demo s3 ls

An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied

Requesting session token with MFA

To recap, we now have a user with the same permissions as before, but unable to utilise them without verifying ownership with the matching MFA token. This is achieved with AWS’s Security Token Service, specifically the get-session-token function. From the commandline:

  • --serial-number: ARN of the MFA token assigned to your user.
  • --token-code: the current MFA token code.
  • [optional] --duration-seconds: lifetime for the token to remain valid, in seconds (the default is 12 hours).
awaite@Armitage:/tmp$ aws --profile mfa-demo sts get-session-token --serial-number arn:aws:iam::<account_number>:mfa/mfa_demo --token-code 987654
{
    "Credentials": {
        "SessionToken": "FwoGZXIvYXdzEPb//////////wEaDEioLK19BAZ+rPCosiKGAa2cfjZK99HUj8e9w9ZowKuz5ccWo8t3oSBaSiTv70Km0uYigFWXEa1EVjzcf2PD8LYR4paAeaJrLY+8q4MVmWVslYMskVPh22TdLxF24yEaELq/MBlbBnvBwDH37tTvd8nQlD/jXsmI00ludQh4XRUbhzV+76dUgZG9BcLRB47/ClThsp47KPjYsvEFMijGo2SOHNI8xh16TFJnLIZyx4qZ9Y0A65eugu0CnclDT01KoWnLIC1x",
        "Expiration": "2020-01-26T09:00:40Z",
        "AccessKeyId": "ASIAQOEN7NXF7VSXV45K",
        "SecretAccessKey": "kfr9bELctF7okIUSSylmwepLI9jJDEH9gcaNNbML"
    }
}

Once obtained, the credentials need adding into your ~/.aws/credentials file; note the additional aws_session_token variable:

[mfa-session]
aws_access_key_id = ASIAQOEN7NXF7VSXV45K
aws_secret_access_key = kfr9bELctF7okIUSSylmwepLI9jJDEH9gcaNNbML
aws_session_token = FwoGZXIvYXdzEPb//////////wEaDEioLK19BAZ+rPCosiKGAa2cfjZK99HUj8e9w9ZowKuz5ccWo8t3oSBaSiTv70Km0uYigFWXEa1EVjzcf2PD8LYR4paAeaJrLY+8q4MVmWVslYMskVPh22TdLxF24yEaELq/MBlbBnvBwDH37tTvd8nQlD/jXsmI00ludQh4XRUbhzV+76dUgZG9BcLR
region = eu-west-1

And with that, we’re back to being able to work with the CLI, more confident that we’re the only ones using this key:

awaite@Armitage:/tmp$ aws --profile mfa-session s3 ls
2019-09-28 20:55:13 <redacted>
2019-09-28 20:56:03 <redacted>
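If you’d rather script the token request than remember the CLI incantation each morning, the same call is available via boto3’s STS client; a rough sketch (the profile name and MFA serial below are placeholders for your own):

import boto3

# Use the long-lived keys to request temporary, MFA-backed credentials
session = boto3.Session(profile_name="mfa-demo")
sts = session.client("sts")

response = sts.get_session_token(
    SerialNumber="arn:aws:iam::<account_number>:mfa/mfa_demo",
    TokenCode=input("MFA code: "),
    DurationSeconds=3600,
)

# Print in ~/.aws/credentials format, ready to paste into an [mfa-session] profile
creds = response["Credentials"]
print("aws_access_key_id = %s" % creds["AccessKeyId"])
print("aws_secret_access_key = %s" % creds["SecretAccessKey"])
print("aws_session_token = %s" % creds["SessionToken"])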

For more information, a couple of AWS Knowledge Base and documentation articles proved useful in getting all the pieces lined up correctly.


Andrew

Cowrie SSH Honeypot – AWS EC2 build script

Happy New Year all!

Whilst eating FAR too much turkey and chocolates over the festive break, I’ve managed to progress a couple of personal projects (between stints on the kids’ Scalextric track, thanks Santa). Still tasks to do(*), but a working EC2 User-Data script to automate deployment of the Cowrie honeypot has reached MVP stage.

#!/bin/bash
# based on https://cowrie.readthedocs.io/en/latest/INSTALL.html
apt -y update 
DEBIAN_FRONTEND=noninteractive apt -y upgrade 
apt -y install git python-virtualenv libssl-dev libffi-dev build-essential libpython3-dev python3-minimal authbind virtualenv
adduser --disabled-password --gecos "" cowrie
sudo -H -u cowrie /bin/bash -s << EOF >> /home/cowrie/heredoc.out
cd /home/cowrie/
git clone https://github.com/cowrie/cowrie
cd /home/cowrie/cowrie
virtualenv --python=python3 cowrie-env
source cowrie-env/bin/activate
pip install --upgrade pip
pip install --upgrade -r requirements.txt
bin/cowrie start
EOF
# runs with cowrie.cfg.dist - will need tuning to specific usecase

Latest version will be maintained here
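For reference, launching an instance with the above as user-data can itself be scripted; a quick boto3 sketch (the AMI ID, instance type, key pair and filename are all placeholders to suit your own region and account):

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Read the build script above and pass it as user-data
# (boto3 base64-encodes the user-data for run_instances)
with open("cowrie-userdata.sh") as userdata_file:
    user_data = userdata_file.read()

ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",  # placeholder: an Ubuntu AMI for your region
    InstanceType="t3.micro",          # placeholder
    KeyName="your-keypair",           # placeholder
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)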

*current items on back of beer mat project plan, which may or may not get completed, are:

  • Customise cowrie.cfg to launch on the standard SSH port rather than the default of TCP/2222 – Completed
  • Fix apt upgrade issue – Fixed courtesy of @ajhdock
  • Mount Cowrie logging, output, and downloads to EFS for persistence – configure Cowrie’s native S3 output module
  • Expand instance to Spot instance pool to lower costs and/or increase instance count
  • Ingest activity logs into $something for further analysis


Andrew Waite

[Project] AWS-Card-Spotter – Terraform deployment

tl;dr – this project can now be deployed automatically with a Terraform script

Last project update, I introduced my project to leverage AWS resources to identify if pictures uploaded to an S3 bucket might contain images of credit cards, and in turn need special handling under an organisation’s PCI DSS processes. And it worked!

But hand-cranking serverless environments limits the full power of cloud (so I’m told), so I looked at ways to automate the process; seemingly there’s plenty of options, all with various pros and cons:

  • CloudFormation (CFN):
    • AWS own native platform for defining Infrastructure as Code (IaC). CFN is one of AWS ‘free’ feature sets (be wary, you still pay as usual for the services deployed BY CFN, but CFN itself isn’t chargeable).
  • Serverless Framework:
    • Well recommended, but looking at the pricing model, access to AWS resources such as the SNS topic used in this project’s architecture crossed the threshold into the paid-for Pro version.
  • Terraform:
    • Fully featured 3rd-party offering, with support across multiple cloud platforms.
  • AWS Serverless Application Model (SAM):
    • AWS’ latest framework for deploying serverless architecture. Being honest, this is probably exactly what I needed for this project, but I’d finished the project before coming across SAM in my research (maybe a followup article required…..)

As you’ve probably guessed if you read the title or tl;dr of this post: For this project, Terraform nosed over the line, largely for the unscientific reason that the DevOps people I know use Terraform for their IaC projects – so I could call in support and troubleshooting if needed.

Terraform Build Script

At its heart, Terraform’s syntax provides a framework for defining what resources you want deployed to your given environment. Resource configuration can get complex if you need lots of customisation, but a basic resource definition follows a simple format:

resource "$RESOURCE_TYPE" "$INTERNAL_RESOURCE_NAME" {
  parameter = "$value"
}

For example, the definition for a new S3 bucket to upload images to for this project was:

resource "aws_s3_bucket" "bucket" {
  bucket = "infosanity-aws-card-spotter-tfbuild"
}

As you begin to build up more complex environments you’ll need to reference defined resources; this is achieved with the format:

${RESOURCE_TYPE.INTERNAL_RESOURCE_NAME.Property}

For example, when creating an S3 Bucket Notification resource to trigger the Lambda code when a new file is uploaded, the ID of the bucket created above is required, like so:

resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = "${aws_s3_bucket.bucket.id}"
  <..snip..>

Deployment Process

Terraform has a lot of functionality, much of which I’ve yet to begin to explore, but for basic usage I’ve found only 4 commands are required.

terraform init

I’m sure init does all manner of important activity in the background; for now I just know you need to initialise a Terraform project for the first time before doing anything else. Once done, however, you can begin to build the infrastructure you’ve defined.

terraform plan

terraform plan essentially performs a dry-run of the build process, interpreting the required resources, determining links between resources which dictate the order in which resources must be created, and ultimately what changes will be made to your environment. It’s a good idea to review the output both to find any errors in terraform files, and to ensure that terraform is going to make the changes expected. For example, running plan against this project’s terraform file produces:

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
Terraform will perform the following actions:
  # aws_iam_role.iam_for_lambda will be created
  + resource "aws_iam_role" "iam_for_lambda" {
      + arn                   = (known after apply)
      + assume_role_policy    = jsonencode(
            {
              + Statement = [
                  + {
<..SNIP..>

N.B. Whilst I’ve found plan great for validating changes to a script, some errors, mistakes and omissions will only be caught once the changes are applied. Which leads me to…

terraform apply

As is probably expected, apply does exactly what it says on the tin: Applies the defined infrastructure to your given cloud environment(s). Running this command will change your cloud environment, with all the potential problems you could have caused manually; from breaking production systems, to unexpected vendor costs – but we’re all experts, so that’s not a concern, right?…

Multiple applies can be used to edit live environments, and terraform will (with both plan and apply) determine exactly what set of changes are required. This can be highly beneficial when iteratively either troubleshooting a misconfiguration, or adding a new feature into an existing environment.

terraform destroy

destroy‘s functionality is ultimately exactly why I was originally interested in adding IaC to my toolkit whilst working within AWS. Like apply, destroy does what it says on the tin: destroying all the resources it’s previously instantiated.

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  # aws_iam_role.iam_for_lambda will be destroyed
  - resource "aws_iam_role" "iam_for_lambda" {
      - arn                   = "arn:aws:iam::<REDACTED>:role/iam_for_lambda" -> null
      - assume_role_policy    = jsonencode(
            {
              - Statement = [
                  - {

From a housekeeping perspective, this removes the potential for unneeded legacy resources being left around forever because no-one’s quite sure what they do, or what they’re connected to: a project environment built and maintained by terraform is gone in one command. From a personal perspective, this removes (reduces?) the potential for leaving expensive resources running in a personal account: finished a development session? terraform destroy and everything’s(*) gone.

* Ok, there’s some exceptions: For example, if there’s any uploaded files left in the created S3 bucket after testing, terraform destroy won’t delete the bucket until the contents are manually removed. Which I can definitely see is a sensible precaution to help avoid accidental data loss.

Caveats

Getting my hands dirty, I found a couple of issues which were difficult to initially overcome.

I’d initially naively assumed a little microservice built around a single Lambda would be a simple use-case to use as a learning project. Turns out, Lambda is one service that doesn’t work wonderfully well with terraform thanks to a circular reference: the lambda code needs writing (and packaging as a zip archive) prior to deployment of infrastructure, but the lambda code will probably need to reference other cloud resources which aren’t built/known until after deployment.

This was resolved by having the lambda code (Python, in this case) retrieve the resource references from runtime environment variables, which are populated by terraform at build time. Simple enough once worked out, but it meant that my simple project took longer than initially expected.
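In code terms that’s nothing more exotic than an os.environ lookup at the top of the handler; a tiny sketch (the variable name is simply whatever you choose to set in the aws_lambda_function resource’s environment block):

import os

# Populated by terraform at deploy time via the aws_lambda_function
# resource's environment block; the variable name here is illustrative
SNS_TOPIC_ARN = os.environ["SNS_TOPIC_ARN"]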

Secondly, again for ease of proof of concept, the SNS Topic for this project was intended to be just a simple email notification carrying the output. Because of AWS’ (sensible) requirement for an email subscription to be verified before sending (to prevent abuse, spam, etc.), email endpoints aren’t supported by terraform:

These are unsupported because the endpoint needs to be authorized and does not generate an ARN until the target email address has been validated. This breaks the Terraform model and as a result are not currently supported.

https://www.terraform.io/docs/providers/aws/r/sns_topic_subscription.html

This isn’t the greatest limitation; it just requires a manual step of subscribing to the created SNS Topic in the AWS console before the deployed pipeline is fully functional. And I’d expect that in a real-world example, the process output will likely trigger further processes, rather than just filling another inbox.

Summary

I’m aware that this has been a very superficial introduction to the functionality terraform (and other IaC platforms) can provide. I’m glad I took the dive to add it to my AWS toolbox; I can understand why the ability to quickly define, build, and tear down infrastructure is going to be an important foundation for the other projects I’ve got rattling around my skull. Watch this space….


Andrew

A Northern Geeks trip, well, home(ish)

Back in the annals of time (2011) I wrote about my first experiences at a security conference; the first UK BSides in London. To say that that con had a big impact on my career is an understatement, but that’s a story for another day. That experience was exactly why, when I was catching up with an old colleague just two and a bit months ago and Ben said “I’m thinking of doing BSides in Newcastle, do you want to help out?”, I immediately and without thinking said “YES!”.

When I read a message later that week that said “I’ve got a venue sorted, we’re scheduled for 9 weeks time”, I immediately said “FUUuuu……….”

Fast forward 9 weeks, and the quickest ever organised BSides(*) is complete, the self-named #WeirdestEverBSides living up to its name (more on that later).

(* I believe that claim is accurate, but with BSides spinning up all over the globe, I’ll happily be corrected if/when someone else claims the crown).

Negatives

Let’s get these out of the way first: no event organised in 9 weeks, spearheaded by people that have never organised an event before, was ever going to be perfect. The merch order wasn’t completed by the time of the con, the planned talk recordings/streaming weren’t active, the venue (by its nature, more later) was cold, and we were still packing the last of the attendee bags whilst the first batch were being handed out.

I’m sure some attendees have complaints and feedback I’m unaware of, please, get in touch and provide any and all feedback, we can’t improve or fix problems we’re unaware of.

Positives

Where do I start? I was a broken man after the event (especially after only just fully recovering from an illness that I feared may have forced me to miss the big day), but overhearing positive feedback throughout the event kept me going, and reading all the feedback from attendees on social media channels since the conference closed has me immensely energised and insanely proud of playing my small part in planning and on-day helper-monkey work.

One advantage of the merch delay was we had no way to differentiate any attendee, which was a great leveller: everyone was equal, and I could be confident that all the positive comments I heard weren’t just made to be polite to the crew member within earshot.

Now, if you’ve read previous write-ups of my various conference travels, this is the point I usually attempt to summarise each talk I saw, distilling the copious notes I took and attempting to get key points across to anyone that missed a given talk but was interested in the topic. But this was the first con where I was part of the crew, so I barely got to any sessions myself, and DEFINITELY didn’t get to take any notes; so the rest of this post is likely to be a brain dump of my memory from the day.

Venue – Dynamix

What can I say? University lecture rooms? Hotel conference suites? All been done before; try a skatepark. Scratch that, we can go further – let’s have track 1 IN the halfpipe!

I felt the choice of venue was inspired from the first pre-con visit I made to get set up for the big day; it perfectly set the tone of the event, and provided a unique and memorable backdrop to the con.

The Dynamix team who hosted 100+ geeks and hackers? They were brilliantly supportive and helpful; many thanks to the whole team. If you’ve any interest in doing insane <redacted> propelled on little wheels, get yourself down; the skills on show by the regulars whilst we were setting up the night before were amazing. Me? Did I rekindle my youth (I used to be a blader, you know?…)? Considered it; then bailed whilst walking down the steps, spread all over the concrete without the aid of wheels, so those days may be behind me.

Talks – tracks 1 & 2

As mentioned above, I’m disappointed that I missed almost all the talks in their entirety, so I’ll leave the talk summaries to others. What I will say is that the odd couple of minutes I managed to snag hidden at the back whilst running between this and that task were excellent. Sam’s journey through early days home computing as a child felt strangely familiar, Rick’s journey through the evolution of cyberpunk made me feel OLD, and Ben philosophising the methodology known as the “F’#%k it!” approach was both entertaining and provided an insight into how a con was able to go from idea to delivered in ~9 weeks.

I missed all of Jenny Radcliffe‘s keynote, but was left in stitches when I noticed the message left on the back of her hoody:

As I was running around like a mad thing at the time I read this, still unsure if we’d be able to pull the con off (despite the fact it had started at this point), this definitely seemed like advice I wish I’d taken. (And I still somewhat blame my naive optimism for running the event on Jenny and her team, for making BSidesLiverpool’s inaugural event look so effortless.)

Villages

If a given pair of talk topics didn’t take your fancy, there was plenty to keep you occupied:

  • Physical Security? Try your hand at lockpicking (and safe cracking) courtesy of Moon on a Stick
  • Looking for your foot on, or next step up, the security career ladder? Try the careers village, with great thanks to Harvey Nash and Sharpe Recruitment
  • Already on the infosec career travellator and need help dealing with the stress and burnout discussed as part of several talks on the day? Try the all (most?) important Mental Health village
  • Your kit getting old and dated? Try the charity sticker collection.

Thanks to everyone that got involved in the last activity: almost £100 was raised for the Great North Air Ambulance, who do crucial work, and it was great to be able to support them in a small way.

As the below shot of my previously naked laptop shows, I had to be pulled away from the stand before I spent my kids’ inheritance.

Just 24hrs ago, this machine was ready for respectable business meetings. Now it’s ready to CRUSH those meetings 🙂

CTF

I’m gutted I didn’t once make it upstairs to the CTF (and not just because it was one of the only areas with warmth). Everything I heard during the event, and following up on social media afterwards, suggests I definitely missed a great event. So I must say a big thanks to the PwnDefend crew for designing and running the CTF; I must make a better effort next year.

Lunch Break

Lunch started conventionally enough, with pizza provided by the Log Fire Pizza Co. They did an excellent job of refueling attendees and crew alike.

Entertainment during the lunch break was a bit more of a curve ball. You know what totally fits with an infosec conference? Wrestling! Well, maybe not totally, but we had to take advantage of the fact that Battle Ready, featuring none other than WWE NXT’s own Primate, were training in the far corner of the venue. They agreed to put on a show for attendees during the break, for the small price of a pizza each from the LogFire van. Odd combo, but most attendees appeared to appreciate another bulletpoint on the journey to the weirdest ever BSides.

Curtain falls

Ian, aka Phat Hobbit, took centre stage for the closing keynote. Delivered in his usual bombastic style, Ian took the audience through his review of InfoSec during 2019, and crucially provided his insight and wisdom for what will be needed in the years and decades ahead as we as an industry approach the turn of a new decade. Ian had the audience’s undivided attention; the only time you couldn’t hear a pin drop was when Ian had the audience roaring with laughter – sometimes with cracking wit, and sometimes just a hard truth, delivered too close to home, generating a nervous, knowing chuckle.
It made me think about my 2019; I’m still thinking about that, but at the start of the year I definitely didn’t expect to be doing that contemplating whilst listening to the keynote speech of a conference I played a small part in organising, overhanging the lip of a half-pipe.

The memorable quote I took away was this:

You won’t be able to beat cyber criminals

We will beat cyber criminals

(Paraphrased as I wasn’t fast enough to take a sneaky pic of the slides before they changed. Ian, happy to be corrected if I’ve misquoted.)

So the conference which emphasised community and togetherness at the heart of an industry closed with the same message. A request was made for anyone – speaker, official volunteer or attendee – to raise their hand if they had helped out in any way, even down to the small act of re-positioning a chair in the venue; almost every single hand was in the air.

Which leaves me with the tl;dr version of a post that is FAR longer than I originally conceived:

For an industry with an annoying reputation for drama, together, we can, do and will achieve amazing things.

And goals which at the outset may have seemed impossible, improbable and just flat-out cray, will be achieved

My thanks to everyone who had enough faith in the event to give up their precious free time to join a bunch of overly naive and optimistic geeks and hackers who dared to believe that a security conference, organised in ~9 weeks, in a cold warehouse in Newcastle (yes, Gateshead 🙂 ) could possibly be anything other than a disaster. I’m obviously very biased, but I believe we achieved, at least, the level of ‘not a complete disaster’.

See you next year? Maybe?


Andrew

[Project] AWS-Card-Spotter

I’ve been (very) quiet recently for a number of reasons which I’ll not bore everyone with; but I have recently started to get my hands dirty in the new (to me) world of AWS. As an ex-physical-datacentre hosting monkey, this takes a bit of getting used to, as I’m still seeing things through the prism of physical kit. Having an actual project to work on has always been my preferred method of learning, even if the outcome may not ultimately produce anything of operational value.

To that end (and having spent too much time with QSAs at the time of coming up with a workable scenario), I took a look at how/if some of AWS’ features could be leveraged to identify if an uploaded image contained payment card data, which could then be used to trigger an organisation’s PCI handling processes.

Version 1 – CLI tool

I’m still a commandline junkie at heart, and still writing (very poor) Python code when the need arises, so the first proof of concept was a CLI tool using AWS’ Python SDK, Boto3. Of the services available to achieve the project’s aim, Rekognition hit the top of the research pile. Amongst some fancy video analysis capabilities I need to investigate separately, AWS’ Rekognition service appeared to do exactly what was needed:

Amazon Rekognition makes it easy to add image and video analysis to your applications. You just provide an image or video to the Amazon Rekognition API, and the service can identify objects, people, text, scenes, and activities.

https://docs.aws.amazon.com/rekognition/latest/dg/what-is.html

I went into the project expecting some form of OCR to extract text from an image, then needing to hunt for regexes matching 16-digit card numbers, sort codes, account numbers, expiry dates etc. that may be indicative of a card. From initially reading Rekognition’s documentation, it is highly capable of exactly that: competently extracting text from an analysed image.

Thankfully however, whilst reading my way through the docs and SDK, I spotted something that made my life easier; and to everyone’s benefit, avoided the need for me to fight with REGEX strings. As usual, someone (AWS in this case) had got to the problem before me in the form of the DetectLabels function call. DetectLabels, does what you might expect from the name: detects things in a given image, and labels them with what Rekognition believes the thing to be; and in this case, one of the classes of things which Rekognition can detect is (you guessed it) payment cards.

With the above in hand, my initial use-case for working with AWS produced the AWS-Card-Spotter POC:

"""Testing Rekognition's ability to identify credit cards."""
    rekog = boto3.client("rekognition", "eu-west-1")

    for image in config.images:
        response = rekog.detect_labels(
                    Image={
                        "S3Object":{
                            "Bucket": config.bucket,
                            "Name": image
                            }
                    }
        )

        for label in response['Labels']:
            if label['Name'] =="Credit Card":
                print("[%s] Credit Card Identified in %s: %i Confidence" % (config.bucket, image, int(label['Confidence'])))

It’s admittedly not much(*): it provides the pipe through which to pass images to Rekognition, and displays the analysis; in the case of my test images:

  • [Your S3 Bucket Here] Credit Card Identified in Black-Credit-Card-Mockup.jpg: 86 Confidence
  • [Your S3 Bucket Here] Credit Card Identified in CreditCard.png: 91 Confidence
  • [Your S3 Bucket Here] Credit Card Identified in credit-card-perspective.jpg: 93 Confidence

(* In my defense: “not much” is precisely the power of cloud-first solutions. The ability for a novice scripter to achieve a non-trivial goal with a few lines of code, a couple of function calls and very little (no) capex is exactly why I’m currently finding this world so interesting)

Version 2 – Serverless

With the above proving my premise was workable, I next looked at turning a commandline tool on my local machine into a consumable and automate-able, cloud-native service (is that enough buzzwords for my VC elevator pitch?).

For the more experienced amongst you reading this, what came next is likely very obvious at this point:

  • Image uploaded to S3 bucket
  • Triggering a Lambda function (essentially a refactored version of CLI code above)
  • Lambda calls Rekognition
  • Results are output to an SNS Topic for consumption (in my test, an email with the results)
(Architecture diagram designed via cloudcraft.co)

And, to my surprise as much as everyone else’s… it WORKED!
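For anyone wanting to picture the refactor, the Lambda handler ends up being little more than the CLI loop from the previous version wrapped around the S3 event; a rough sketch (event parsing simplified, and the SNS topic ARN assumed to arrive via an environment variable):

import os
import boto3

rekog = boto3.client("rekognition")
sns = boto3.client("sns")

def handler(event, context):
    # Triggered by the S3 upload notification; check the new object for payment cards
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    response = rekog.detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}}
    )

    for label in response["Labels"]:
        if label["Name"] == "Credit Card":
            sns.publish(
                TopicArn=os.environ["SNS_TOPIC_ARN"],  # illustrative variable name
                Message="[%s] Credit Card Identified in %s: %i Confidence"
                        % (bucket, key, int(label["Confidence"])),
            )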

Version 3 – SSsssh!

That’s a work in progress, watch this space, and you never know….. <update: now you do>


Andrew Waite

A Northern Geek's trip South West

June has been a busy month, hot on the heels from BSides London (review here), I again found myself on a train BSides-bound, this time heading for Liverpool.

Before getting to the tech, I’ll point out that this was my first time in Liverpool. After a very brief visit I found the city to be beautiful, and the conference location in the docklands certainly didn’t hurt; I’ll be intending a return visit to hit the tourist spots as soon as I can manage it.

As I’m currently more prompt than I was with my London wrap, I’m not yet able to link to the talks’ recordings. But after watching Cooper and team run around diligently manning cameras and audio equipment, I’m sure that they’ll be available shortly, and I’ll endeavour to update once they are.

The day got off to a bang courtesy of the welcome address; without repeating it verbatim, it was an excellent sermon reading from the (un?)Holy Book of Cyber….

From there, I was fortunate enough to attend the (mostly) excellent talks below.

Key Note – Omri Segev Moyal

Reading Omri‘s talk abstract prior to the event, I was unsure I was going to agree with the premise “Focus on malware, not Infrastructure”. Thankfully it seemed I’d gotten the wrong impression, and instead of focusing on corporate infrastructure (as I’d expected), Omri covered malware analysis without focusing on the infrastructure required to do so.

Any long-time reader may be aware that malware analysis was the initial goal that kicked off this humble blog (though I got distracted along the way); and those readers may also tie a link between the drop in post volume and me losing access to a datacentre. Migrating to alternative models is something I’ve been working on in the background – but oh boy did Omri provide a firehose-laden crash course to jumpstart that journey.

I’ll not go too deep into technical detail of material covered, largely as I hope to implement some of the ideas in the coming weeks, covering in more detail once I’ve actually gotten my hands dirty myself. I will just say that the demo quickly spinning up a DNS sinkhole without (your own) infrastructure got the creative juices flowing – and was very in keeping with other talks of the day (but I’ll get to that later).

<update> Omri’s presentation deck can now be found here, with some associated code examples on GitHub </update>

Martin King – This is not the career you are looking for

It pains me to say it, as I’m not sure I can trust anyone who doesn’t like cheese; Martin dropped plenty of wisdom and advice for those contemplating a career in infosec, advice that I wish I’d had (and paid attention to) when I was starting out. I’m paraphrasing as my notes from the talk aren’t the best (Martin, please correct any point that’s been misquoted), but Martin’s top 10 tips:

  1. Today, Every company is an IT company.
  2. Never stop learning, and always be eager for more knowledge.
  3. You are the asset; your brain is more important than your muscles’ ability to mechanically tick boxes without impact.
  4. There’s MANY great free resources available, leaving no excuse for point 2.
  5. Learn to Google; knowing the answer is less important than always being able to find the answer.
  6. Don’t be the stereotypical infosec tech that hates people. People skills are more important than technical skills when it comes to being able to make an impact in an organisation.
  7. “Failure is the best teacher”
  8. Question everything; and automate everything else
  9. There’s as many paths into an infosec career as there are people with infosec careers: Being you is the best option.
  10. The industry is INCREDIBLE. Ask for support and you’ll (likely) get it.

Sean Lord – Deception Technology

With the topic being deception technology I was understandably looking forward to this talk. As Sean stated at the very beginning of the talk “this is not a vendor pitch”…..

Andrew Costis – LOL-Bins

For those unaware, LOL-Bins are nothing to laugh at: Living Off the Land Binaries are those tools that come (mostly) pre-installed on targeted operating systems that a hacker can leverage to achieve their goals without requiring additional software (which may trigger AV alerts).

Andrew did a good job of explaining the core concepts, the LOLBAS Project, Mitre ATT&CK framework, and most importantly; how it can all be brought together to strengthen resilience against intrusions.

Panel – How to submit a CFP

Takeaway from this session was simple, and invoked a certain brand: JUST DO IT!

Peter Blecksley – On the hunt

Yes, that Peter Blecksley. This was the first talk that I was disappointed wasn’t recorded; but given the content of the session it’s not too surprising. Peter was an EXCELLENT speaker, detailing some of his former life undercover with Scotland Yard, his time in witness protection as a result, the Hunted TV show and, most importantly, the particulars of his current man-hunt for “Britain’s most wanted fugitive” (head here to see if you can help).

Kashish Mittal – One Man Person Army

Kashish discussed his experiences building up several SOC teams, and the tips he’s learnt along the way.

One of the key pointers I took from the talk was the importance of making an impact early, and building a reputation for getting results. Starting a new function within an organisation can be daunting, primarily because a complete version of that function has a laundry list of capabilities you eventually need to be able to perform, but prioritise your goals and:

Secure > Document > Repeat

Ian Murphy – The logs don’t work

As with Omri’s keynote, I was dubious of Ian’s premise; but I found the talk far less provocative than the abstract suggested, and I found myself agreeing with all (most?) points made. Briefly:

  • Alert fatigue eventually means even critical alerts end up being ignored. If an alert isn’t actionable, why are you alerting on it?
  • There’s not enough innovation in InfoSec. When Gartner claimed “IDS is Dead”, as an industry we changed the D to a P, and moved the same device in-line.
  • Assume breach; both that you already have been, and that you will be in the future.
  • Humans are always the weakest link.
  • Unless you’re a LARGE company, attempting to build a dedicated, fully functional SOC is nothing more than “a CISO’s ego-trip”. Leverage the skillsets of specialists.

Jamie Hankins – WannaCry

I must start with a confession: Prior to this talk I don’t think I was aware of Jamie, or his proximity to the events of the WannaCry/NHS saga. That was a failing on my part, and one I’ll attempt to redress in the future.

I was also sat in the room early before the session, and was aware of Jamie’s immense nervousness prior to his talk, this being his first time; I was genuinely worried that Jamie might truly bottle the session and run.

So, with all that said, what was the outcome when Jamie started? Best. Session. Of. The. Day. Seriously, I’ve no idea why Jamie was nervous, and judging by the reaction, the rest of the audience shares my opinion.

Unfortunately, the session wasn’t recorded; for reasons that make sense when you consider the current ‘experiences’ of Jamie’s partner in (not) crime after getting some media attention.

Keeping with the above, and honouring the request for no pictures (which was brilliantly ignored by an attendee in the front row, despite the bouncing “no photos” screensaver projected on stage); I’ll refrain from covering most of the talk, but will share a couple of notes covering the wider points.

  • NCSC’s CiSP platform and team are amazing – As a user of the platform during the incident in question I must concur. Seeing the industry come together and collaborate during an incident is ALWAYS amazing.
  • Doesn’t matter what is going on, everything gets dropped 12mins before Starbucks closes
  • The effort to prevent damage from Wannacry infections is continuing long after the media circus has subsided.

Beer Farmers

What can you say about a Beer Farmers’ talk? It was entertaining, engaging, and spoke a LOT of truth. But I wonder at the value of such a talk as it’s mostly preaching to the converted; and given the delivery style, I doubt it would be overly well received outside of the echo chamber.

Finux – Machiavelli’s guide to InfoSec!

Arron has come a long way since I was fortunate enough to listen to him speak nearly 10 years ago at an OWASP meet; but one thing that hasn’t changed is Finux’s enthusiasm for telling a story, getting a point across, and making an audience want to listen.

When the audience was asked to raise their hands if they’d read Machiavelli’s work, mine remained down. So I was a little surprised to discover how well some of the teachings could be transposed to the modern world, and InfoSec in particular. Especially as it would give speakers someone to quote other than Sun Tzu, I wonder if Arron will start a trend after pointing out the options.

Summing Up

Many, many thanks to BSidesLiverpool organisers, crew, goons, speakers and attendees. I wish I could have spent more time with all of you, thoroughly enjoyed the time we did share, and I hope to do it all again soon.


Andrew

A Northern Geek's Trip South – 2019 edition

How time flies; and with it, another BSides London is a long distant memory.
My itinerary for the pilgrimage South was familiar, mostly following a well-worn pattern:

  • InfoSec Europe Tuesday
  • BSides itself Wednesday
  • Thursday? Recovery time in the capital, before heading for the train back to (my) civilised society.

And throughout: a generous smattering of catching up with ex-colleagues as the whole industry descends on the capital. I’ll not embarrass (or incriminate) those by name, but you know who you are; it was good to see you all, and we must do it all again soon.
Tuesday – InfoSec Europe
InfoSec is what it is; it was a good excuse to meet contacts at various vendors and partners for the first time, and to catch up with some old contacts.
The conference hall felt like it had been hit by austerity; less crowded than previous years, fewer ‘booth babes’ (not a bad thing, maybe vendors are finally getting the message), and vendor swag? Still available, but the good stuff seemed to be under the table, given out at discretion rather than just a free-for-all grab as attendees did the rounds.
Wednesday – BSides London
What’s not to like? This year topics were as varied as ever, with all sessions I attended being top-drawer. Very briefly:

PowerGrid Insecurities
For reasons that make sense if you were there, this talk wasn’t recorded but WAS very informative. I now know to be more wary of squirrels than terrorists when it comes to outages on the power grid. And I may, unfortunately, now be able to explain the random tape from old-school cassettes I found around the local substation…..
A Safer Way to Pay – Card Payment Infrastructure
Chester provided a great overview of both the current, and future, state of card payment infrastructure. If you’re involved in financial transactions, PCI audits or similar this talk covered some of the background tech and networks involved.
Fixing the Internet’s Auto-Immune Problem – BugBountys and Responsible Disclosure
Debates and topics around disclosure, responsible or otherwise; are always interesting. Chloe’s take on the current legalities, and more importantly what is going to be needed in the future to provide a safe and stable foundation for non-contracted testers definitely did a good job of expressing the views of one side of the debate, and kickstarting some interesting conversations in LobbyCon.
When the Magic wears off – ML
Firstly, an admission: I ended up in this talk by accident after getting my track numbers confused. That said, the talk was interesting; but it confirmed my reasoning for not originally having it on my agenda – I simply didn’t have enough background knowledge in ML to fully understand the content, which was interesting to follow along to, but you’re going to need someone from this world to fully explain it to you.
Build to Hack, Hack to Build – Docker (in)security
Docker (and Kubernetes) isn’t something I’ve much real-world exposure with (yet: as with everything, it’s on a growing list of side projects I’ve not found time for). The session was a great introduction into the world of container (in)security, and I left with some frameworks and tooling to help bootstrap my future efforts in the area – watch this space.
They are the Champions – Security Champions
There’s always more security projects than InfoSec resources in any org, so tips for leveraging the wider business never hurt. Jess always provides a thorough, professional and powerful presentation, but personally I think this was almost to its detriment this year, feeling too polished and sales-pitchy for a BSides. Not necessarily a criticism, but I’d prefer a return to singing in Klingon for a memorable talk.
Closed for Business – Taking down Dark Markets
I’ve always found the real-life war stories of LEAs taking on various dark marketplaces fascinating, so getting the chance to hear some modern examples in person was definitely high up on my priority list for this year’s sessions. John didn’t disappoint; if you’ve got an hour to kill, be prepared for an interesting journey.
Inside MageCart – Web skimming tactics revealed
This session was one of those talks that manage to bridge the gap between fascinating to me personally, and relevant professionally (helping to convince $employer to fund the trip). Left the talk with a better understanding of the techniques and incidents behind the headlines, as well as some interesting tid-bits around what could be the next evolution of the campaigns. Hopefully enough so to stay one-step ahead of the curve, and avoid being front-page news myself.
CyberRange – OpenSource Offensive Security Lab in AWS
This talk introduced a newly released toolkit for rapidly spinning up, and tearing down, offensive, defensive and vulnerable lab environments in AWS. And who doesn’t like having a packed toolkit of toys to play with, and a safe environment to use them on? – project here
Closing Remarks
This year’s closing remarks were bitter-sweet: capping off a great and successful day is always good, but came with a (to me) new announcement of a changing of the guard for the team behind BSidesLDN. This inevitably resulted in reminiscing back to events gone by, and as one of the handful at the first BSides London, it is remarkable to see how far the event and community around it has come since the first event in the Skills Exchange.
Thursday – recovery^W PCI Council
I’ve already said my usual itinerary uses Thursday as recovery (I love BSides but it’s one intense day), whilst catching some of the tourist spots on a meander back to KingsX. This year? “your trip to London? You said Thursday was free?” I did…. Off to a half day with the PCI Acquirers group it is.
Will admit I wasn’t looking forward to this (the terms PCI, QSAs and auditors trigger my PTSD….), and getting to the (very fancy) venue in jeans, a conference tee-shirt and a backpack stuffed for the full week’s trip, I was feeling out of place with every other attendee suited and booted. That said, I was pleasantly surprised. All sessions (bar one; will mention no names, but I think the hostess wanted a shepherd’s crook to hoist the overrunning speaker off stage) were excellent. So much so, I came back to the office with the suggestion that we send colleagues to future events whenever we’re able.
Highlight of the event for me was John Elliot discussing MageCart. As I’d been in a BSides session covering the topic the day before, comparing the perspective of industry with that of those closer to the internals of PCI itself was fascinating. Unfortunately, unlike BSides, the event wasn’t recorded for later consumption; but as luck would have it, John had provided the same talk (in longer form) for a webinar session the week prior, which was recorded – enjoy.
Another BSides in the can, until next year
Andrew

Sanitising WSA export dates

As AV solutions go, Webroot’s Secure Anywhere (WSA) does a decent enough job of protecting against known and unknown threats; but I’ve always had disagreements with the administrative web interface for device management. As a workaround, if I’ve needed to extensively analyse the endpoints in any way, I’ve typically exported the data from the interface and manipulated it using typical toolkits (grep/Excel/etc.).
There’s still a problem with the exported data in terms of easy manipulation, namely the chosen date format, which is frankly bizarre given it’s generated by a digital platform in the first place – for example: November 30 2015 16:25. Anyone that has spent any time sorting data sets by date will immediately see problems with this format.
Released today, sanitiseWebroot.py simply reads the standard WSA “export to CSV” file, modifies the date format of the relevant fields and creates a new *-sanitised.csv file. The dates are more easily machine sortable, in the format YYYY-MM-DD HH:MM.

user@waitean-asus:~/Webroot# ./sanitiseWebroot.py
Script sanitises the date format from Webroot Secure Anywhere’s “Export to CSV” output
script expects a single parameter, the filename of the original .csv file
script will create a single csv file with more sensible date format
USAGE:
./sanitiseWebroot.py exportToCSV.csv
user@waitean-asus:~/Webroot# ./sanitiseWebroot.py WebrootExampleExport.csv
[*] Opening file: WebrootExampleExport.csv
[*] Updating date fields….
100 records processed…
200 records processed…
300 records processed…
400 records processed…
500 records processed…
[*] Processing complete. 510 corrected and written to WebrootExampleExport-Sanitised.csv
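If you’re curious what the script is doing under the hood, the heavy lifting is just a datetime round-trip; a minimal sketch of the conversion (the real script also handles the CSV plumbing):

from datetime import datetime

def sanitise_date(wsa_date):
    """Convert WSA's 'November 30 2015 16:25' into sortable '2015-11-30 16:25'."""
    return datetime.strptime(wsa_date, "%B %d %Y %H:%M").strftime("%Y-%m-%d %H:%M")

print(sanitise_date("November 30 2015 16:25"))  # prints: 2015-11-30 16:25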

The tool is basic enough, but if you regularly encounter WSA and haven’t already created a similar tool to work with the data, this script may (hopefully) prevent you from pulling your hair out.
–Andrew Waite
P.S. if you’re a developer, please take the time to review ISO 8601 to stop these tools being needed in the future.

Google Glass: New threat or business as usual?

Woke this morning to find several articles covering the release of a short script designed to locate and ultimately block wearers of Google Glass from accessing a wireless network. This was apparently released in response to someone else’s discomfort at knowing there was a wearer of Google Glass in an audience, mostly due to the recording/streaming capabilities.
My immediate thoughts are three-fold:

  1. Like it or not, wearable tech will become more common; control and guide rather than trying to hold back the tide.
  2. Blocking from the wireless won’t necessarily stop the recording or streaming. (I’m assuming) a wearer could connect to a 3/4G AP (using a mobile) and stream over a private network.
  3. Why is this news worthy? Shouldn’t all network owners and admins be monitoring and restricting unauthorised/undesired devices from connecting to their network in the first place?

I think we’ll see similar stories in the future as the move to wearable tech becomes more widespread.
–Andrew