Bad workmen – a Terraform Lambda deployment story

You know the old adage of “a bad workman blames their tools”? Well, guilty as charged…

When I built my AWS-card-spotter project with Terraform, the main goal was to learn Terraform, which I had little to no experience with at the time. Looking back at that initial deployment (it’s awful, please don’t judge me, or use it as a reference point for anything in production workloads…), one of the criticisms I’d levelled against Terraform was the need for a separate build step: bundling the Lambda’s code into a zip archive prior to deploying with Terraform. As you might have guessed from the title and opening of this post, I was wrong.

Introducing the all powerful (well, useful) archive_file data block:

data "archive_file" "spotter_lambda_archive" {
  type        = "zip"
  source_dir  = "../AWS-card-spotter/Lambda/"
  output_path = "${path.module}/spotter_lambda.zip" # example path for the generated archive
}

Simple as that.
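For completeness, the generated archive can then be wired straight into the Lambda resource by referencing the data block's attributes. A rough sketch (the function name, handler, runtime and role below are illustrative placeholders, not from my actual deployment):

```hcl
resource "aws_lambda_function" "spotter_lambda" {
  function_name    = "spotter-lambda"                    # placeholder name
  filename         = data.archive_file.spotter_lambda_archive.output_path
  source_code_hash = data.archive_file.spotter_lambda_archive.output_base64sha256
  handler          = "lambda_function.lambda_handler"    # placeholder handler
  runtime          = "python3.8"
  role             = aws_iam_role.spotter_lambda_role.arn # assumes an IAM role resource exists
}
```

The source_code_hash line means Terraform will notice whenever the zipped code changes and redeploy the function accordingly.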


AWS CLI – MFA with aws-vault – making it seamless

Oooof! That’s a long title, but I realised after the last post (did you miss the last episode? Catch up here) that whilst it covered all the technical requirements for getting aws-vault operational, it missed some steps to truly integrate with your current workflows without introducing additional cycles. So without further pre-amble, introducing… credential_process.


As its name may imply, the credential_process directive is added to the standard .aws/credentials file to tell AWS clients/SDKs how to generate the requested credentials on the fly, rather than hardcoding key pairs into the .aws/credentials file (remember: hardcoded, clear-text credentials == BAD).

What does this look like in practice? If you’ve been following along, you’ll have seen that I have a demo environment with a profile infosanity-demo, and that to access the temporary credentials generated by aws-vault, I’d shown passing the command I wanted to run as a given user/role to the aws-vault binary. Like so:

>aws-vault exec infosanity-demo aws sts get-caller-identity
Enter token for arn:aws:iam::<redacted>:mfa/infosanity_demo: 858178
{
    "UserId": "AIDA<redacted>",
    "Account": "<redacted>",
    "Arn": "arn:aws:iam::<redacted>:user/infosanity_demo"
}

This works, but I promised integration with your existing, non-aws-vault-aware workflows; you don’t really want the extra finger workout of typing aws-vault exec $profile in front of every command requiring AWS credentials. [Profile] blocks are common components of the .aws/credentials file, and if you’ve already got hardcoded key pairs, this is where you’d find them, with aws_secret_access_key config lines. If you’ve confirmed that aws-vault is configured and working (the above sts get-caller-identity call is perfect for testing), then replace your current hardcoded keys with something similar to:

[infosanity-demo]
credential_process = aws-vault exec infosanity-demo --json

It really is that simple. Now, the next time an AWS client/SDK/etc. attempts to use your profile, it will trigger aws-vault in the background, providing the particular script/runtime/etc. with temporary keys, without the tool needing to change for, or even be aware of, the integration with aws-vault; keeping your actual AWS key pairs much safer (or non-existent, if you’re using role assumption, which I’d recommend).

N.B. this isn’t just for aws-vault; if you manage your AWS secrets via a different method, credential_process will still provide the capabilities described with any utility able to return AWS credentials in the expected format. Truly powerful.
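To illustrate that “expected format”: a credential_process helper just has to print a small JSON document to stdout. A minimal Python sketch of such a helper (the function name and dummy values are mine, purely illustrative; never hardcode real keys):

```python
import json
from datetime import datetime, timedelta, timezone


def format_credential_process_output(access_key_id, secret_access_key,
                                     session_token, expiration):
    """Build the JSON document AWS SDKs expect back from a
    credential_process helper (Version 1 of the schema)."""
    return json.dumps({
        "Version": 1,
        "AccessKeyId": access_key_id,
        "SecretAccessKey": secret_access_key,
        "SessionToken": session_token,
        "Expiration": expiration.isoformat(),
    })


if __name__ == "__main__":
    # Dummy values for illustration only.
    expiry = datetime.now(timezone.utc) + timedelta(hours=1)
    print(format_credential_process_output(
        "ASIAEXAMPLE", "secretExample", "tokenExample", expiry))
```

Point credential_process at any script emitting this document and the SDKs will treat it exactly as they do aws-vault.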

Now, reverting back to the native cli tool works flawlessly….

> aws --profile infosanity-demo sts get-caller-identity
Enter MFA code for arn:aws:iam::<redacted>:mfa/infosanity_demo: 
{
    "UserId": "AROA<redacted>:botocore-session-1606155746",
    "Account": "<redacted>",
    "Arn": "arn:aws:sts::<redacted>:assumed-role/infosanity-demo-test-role/botocore-session-1606155746"
}

No more (well, less) excuses for hardcoding AWS keypairs and (potentially) leaving them in backups, git commits, etc. – because that would be embarrassing….


AWS CLI – MFA with aws-vault

Previously I’ve covered why it’s important to protect AWS key pairs, how to enforce MFA to aid that protection, and how to continue working with the key pairs once MFA is required. If you missed the initial article, it’s all available here.

Everything in that article works, but as with a lot of security it’s a bit of a trade-off between working securely and working efficiently. It’s certainly more secure than cleartext keys being the only defence between an adversary and your cloud environment, but from a dev/ops perspective? It’s additional steps before anyone can do anything productive. Who wants a workflow that requires a couple of minutes of unproductive work before we can do actual work? How can we improve?

Automate all the things

Once understood, the workflow for generating temporary keys is relatively simple; it just requires a fair amount of copy/pasting, which is tedious for anyone. Surely it can be automated? Jumping into my favoured Python, with Boto imported, it can be.

The guts of the requirement is a single get_session_token call to AWS’ Security Token Service (STS), in this case using AWS’ Boto3 library for Python to handle the creation of an API client.

    # conf["tokenSerial"] holds the MFA device ARN; token is the current MFA code
    client = boto3.client("sts")
    temp_session = client.get_session_token(
        SerialNumber = conf["tokenSerial"],
        TokenCode = token
    )

Once we have our temporary credentials, a handful of quick print statements will re-purpose the received credentials ready for inclusion in the ~/.aws/credentials file:

    print("aws_secret_access_key = %s" %(temp_session["Credentials"]["SecretAccessKey"]))
    print("aws_access_key_id = %s" %(temp_session["Credentials"]["AccessKeyId"]))
    print("aws_session_token = %s" %(temp_session["Credentials"]["SessionToken"]))
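Those prints could equally be wrapped into a small helper that renders a ready-to-paste profile block; a quick sketch (the function and profile names are mine, not from the Gist; the dict keys mirror the STS get_session_token response):

```python
def format_credentials_profile(profile_name, credentials):
    """Render an STS credentials dict (the "Credentials" element of a
    get_session_token response) as a ~/.aws/credentials profile block."""
    return "\n".join([
        f"[{profile_name}]",
        f"aws_access_key_id = {credentials['AccessKeyId']}",
        f"aws_secret_access_key = {credentials['SecretAccessKey']}",
        f"aws_session_token = {credentials['SessionToken']}",
    ])
```

For example, `format_credentials_profile("mfa-session", temp_session["Credentials"])` produces the whole block in one go, ready for pasting.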

A fully working CLI script is available in Gist form here.

Enter AWS-Vault

The quick script above serves our needs, and is an improvement over manually setting serial-token ARNs etc. every time we want to do some work. But now that we know how to leverage the available SDKs and understand how the underlying process works, let’s stop naively assuming we’re reinventing the wheel and review some existing utilities.

AWS-Vault as a project has been around for roughly 5 years, and is still evolving and improving today. Among many other capabilities, it handles the use case outlined above (without relying on a script written in an evening), and is cross-platform, covering Windows, OSX and Linux (although there’s no prepackaged bundle for dpkg-based platforms, my ‘nix of choice).

Once installed, aws-vault is aware of your existing .aws/config file, limiting the configuration steps required to get up and running (no need to duplicate serial-token ARNs into conf[] as with my quick script). Just be aware that, as aws-vault is .aws/config aware, it will also modify that same config file as needed whilst you interact with the vault; just in case, backups are (as always) recommended.

The first thing you need to do is add base credentials for your source user, in my case:

>aws-vault add infosanity-demo
Enter Access Key ID: AKIAsomekeyhere
Enter Secret Access Key: 
Added credentials to profile "infosanity-demo" in vault

Once stored within the utility (aws-vault integrates with the OS’ native secure key store, such as KeyChain or Windows Credential Manager; no more keys in plaintext files littering your HDD), you can see the credentials you have available:

>aws-vault list
Profile                  Credentials              Sessions
=======                  ===========              ========
infosanity-demo          infosanity-demo          -

And start working with AWS, letting aws-vault handle management of temporary access keys in the background. For example, using aws-vault’s exec to wrap your usual command; in this case the aws cli client itself to verify our identity, and then confirming aws-vault’s session state.

>aws-vault exec infosanity-demo aws sts get-caller-identity
Enter token for arn:aws:iam::<redacted>:mfa/infosanity_demo: 858178
{
    "UserId": "AIDA<redacted>",
    "Account": "<redacted>",
    "Arn": "arn:aws:iam::<redacted>:user/infosanity_demo"
}

>aws-vault list   
Profile                  Credentials              Sessions
=======                  ===========              ========
infosanity-demo          infosanity-demo          sts.GetSessionToken:59m53s

With aws-vault up and running, you’re ready to leverage all the power of AWS’ APIs and associated IaC frameworks (such as my favoured CDK), safe in the knowledge that your access credentials are securely managed in the background, and (hopefully) reducing both the likelihood and impact of access keys accidentally sneaking into a source code commit, or accidental(?) tweet…


DC44191 – AWS Security Ramblings

In the last week of August, in the middle of Summer vacation, I had the honour of being asked to give a presentation at the second meeting of the newly formed DC44191 in (virtual, for now) Newcastle. Local DefCon groups are an offshoot of the long-running DefCon conference (usually) hosted annually in Las Vegas. If you’re not aware of the great history of DefCon, you can get a jumpstart here.

Whilst honoured to be asked, and keen to do anything necessary to support the rise of another local infosec group; I was nervous. DefCon is all about teh hackz, and whilst I’ve spent some time as a roadwarrior-ing red-teamer, that’s not been my day to day activity for the last few years. Would those interested in attending DC groups be as interested in BlueTeam topics? I needn’t have worried, I pitched my idea of “Things I wish I’d known about AWS, when I started working with AWS” to Callum, and the session was greenlit.

The talk is born

Most cloud security discussions, whilst useful, tend to begin from one of three premises:

  • Building ideal infrastructure on a greenfield site: this is what we’re going to build
  • Pre-built ideal infrastructure: this is how we built it
  • Or a deep dive into specific technologies and services, in isolation from the wider picture.

What I wanted to discuss, and hopefully I managed, was what I would do if parachuted into an existing AWS account and suddenly made responsible for securing whatever is there. Whether starting a new role and needing to get up to speed rapidly, or upon discovery of more of the ever-popular ShadowIT. With that in mind, what was covered in this whirlwind tour of day one(ish) with a new-to-you AWS environment?

Stop using your root account!

Seriously, if you only take one thing from this, STOP USING THE ROOT ACCOUNT, and whilst you keep it nice and idle make sure to delete any access keys and enable MFA for extra protection. You do not want someone unauthorised helpfully spinning up infrastructure on your behalf (and at your expense….).

If you’re using the root AWS account, nothing I’m going to discuss below is as important; stop reading now and go deal with that first. I’ll still be here once you’re done.

Don’t know if the root account is in use or suitably secured? Read on, we’ve some services that might help you later…

Logs! Enable CloudTrail

In the ephemeral cloud, observability is key. If you can’t see it, you can’t secure it; and CloudTrail is the cornerstone of almost all the other key security features below. Turn it on, make sure it can see EVERYTHING (yes, even that region you’ve no intention of ever using). Get it turned on and capturing the data you’ll need later.

Automated config control and asset management sound good?

Then AWS:Config is where you want to be.

Easily set up with a couple of clicks, Config will begin to track the configuration of your AWS resources (at the AWS level, not the internal configuration of EC2 server instances etc.), creating a searchable asset inventory and audit history of configuration changes. Want to know who made the supersecret S3 bucket public 30mins after a security audit checked it was suitably restricted? Config can answer your who-did-what-to-what questions.

But wait, there’s more. Take the above example: knowing who did what is good, but that’s horse-bolted-stable-door territory. Wouldn’t it be good if we could have a business rule that states “S3 buckets will not expose data directly to the entire Internet”, and the infrastructure could self-enforce? Enter rules and remediation actions. Define the rules, and what actions to take for items that fail. With Lambda routines, ANY (*almost) requirement you have can be automated. But I’m going too far down a rabbit hole for this overview, maybe there’s a followup post or three there; for now just know that Config rules can help tell you, for example, if the root user account has active access keys in use (see, told you above we’d help with that).
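As a flavour of the sort of logic a custom Config rule’s Lambda might run, here’s a toy, self-contained sketch of the root-access-keys check (the field names mirror IAM’s credential report; this is illustrative, not a deployable Config rule):

```python
def evaluate_root_access_keys(credential_report_row):
    """Toy compliance check in the spirit of an AWS Config custom rule:
    flag the root user if it has any active access keys.
    `credential_report_row` mimics one row of IAM's credential report."""
    is_root = credential_report_row.get("user") == "<root_account>"
    has_active_key = (
        credential_report_row.get("access_key_1_active") == "true"
        or credential_report_row.get("access_key_2_active") == "true"
    )
    if is_root and has_active_key:
        return "NON_COMPLIANT"
    return "COMPLIANT"
```

A real Config rule would wrap logic like this in a Lambda handler and report the verdict back via the put_evaluations API, but the compliance decision itself is this simple.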

In summary, AWS:Config verifies sensible configuration from the people with legitimate access to your cloud, and makes proving that to auditors really easy. Change Control Auditing as a Service.

What about attackers?

AWS also has you covered with GuardDuty, essentially an in-cloud IDS monitoring for all manner of nefarious activity. If you are dropped into an existing AWS workload, either review GuardDuty logs for an overview of what threats are bombarding the systems; or enable it (it’s quick and relatively inexpensive) and then review the findings for the same overview. It’s one of those services that you hope never troubles your inbox, or commands too much of your time, but you can rest easy(ier) in the knowledge that it’s monitoring for threat activity 24/7 so you don’t have to.

Central Control?

We’re still only in our hypothetical day one of being responsible for a new AWS environment, and we’re already managing a good number of systems. Time to call our favourite reseller for one of those “single pane of glass” platforms? Nope, AWS again has you covered, in this case with SecurityHub: released recently(-ish, 2018) to combine security findings and metrics from a number of different sources (including those above, funny how that happens) and present them all from one service.

And as an added bonus, if you got a bit overwhelmed by the quantity of potential rules you can employ within AWS:Config (before we even get to custom rules), you can stand on the shoulders of giants (CIS, the PCI Council, or AWS themselves) and enable a steadily growing list of security standards, which will enable and report back on a multitude of checks and metrics at the touch of an Enable button. Does this workload handle payment data and fall under scope for PCI DSS controls? Enable the PCI-DSS standard, and the relevant checks will spring to life, helpfully mapped to the specific control requirements for easy auditing purposes.

The End….

With that, my whirlwind tour of where I’d start reviewing and securing an AWS account that I was newly responsible for was over. I’ve not quite put it into practice to see if you could actually achieve all that in day one, but it’s good to have goals. If anyone puts it to the test, I’d be curious to know the outcome when theory meets reality.

Unlike the inaugural DC44191, kicked off by the inimitable Mike Thompson discussing 3rd party web apps, this meeting wasn’t recorded for posterity; which, as I have a face for radio, not live streaming, is probably a good thing. Despite that, I’ve since regretted the decision not to record, as I’ve had a few questions from people after the event that would have been easier to answer with a recording of the talk to point to. The session’s Q&A was also lively and covered some more advanced topics I’d considered but left out of the presentation; it would have been good to have those interactions recorded too, especially as some audience members more knowledgeable than I jumped in to expand some answers as well as ask perceptive questions.

  • If you were in attendance for the live DC44191 presentation, I hope it was worth your time.
  • Similarly, if you’ve got this far, thank you for reading and I hope it’s sparked some ideas which can help your current and future challenges securing AWS workloads.
  • If you’ve any questions, or would like to discuss any aspect in more depth; please leave a comment below, or find me on twitter

Andrew Waite

A Northern Geeks trip, well, nowhere

It’s hard to judge time given current non-technical ongoings, but it’s (about) a year since the “A Northern Geeks trip…” series stayed close to home. That was the inaugural BSides Newcastle, and somehow it came time for the 2020 edition, which brought about some changes. Firstly, C-19 forced the organising team to abandon some amazing plans as this year’s event went virtual (trust me, some of the plans would have been amazing; no spoilers, as I hope they get resurrected for next year, in the hope we can once again share a meatspace location). Secondly, on a personal level I stepped back from being involved with the organising team this year; the strain of C-19 meant I couldn’t take on any additional demands, instead focusing on young family, paid employment, and mental health. Thankfully, and entirely predictably, my absence had zero negative impact on a great conference, which I was able to selfishly enjoy risk-free as a participant.

My first thought: virtual conferences still feel weird to me. As I posted at the time, livestreaming to my living room a conference I’d normally be travelling to in person just felt off. I suspect this may be the delivery method for the foreseeable future, so I hope I can get used to them quickly.

A negative of being in home mode, rather than conference mode, is that my note-taking during talks was dire, so I can’t provide my usual long-form review of the sessions I attended. But there were a few talks that stood out that I’d like to mention:

  • Sam Hogy’s talk on Friday covering securing CD/CI pipelines was excellent, and definitely had some content that I want to review again later.
  • Avinash Jain covered a similar topic with discussion of moving security to earlier steps of the DevSecOps pipeline
  • It’s hard to make topics that include the word compliance interesting, even to those of us that work within the various frameworks. But Bugcat did a great job of walking through methods of leveraging SIEM logs and capabilities to drive and prove PCI-DSS compliance. My only complaint was that the resolution of the demos/screenshares made some of the exact content shown hard to make out.

Looking at the talks that really stood out, I found it interesting that my preference in conference material has shifted along with my professional change from red team to blue over the last few years. Whilst they were good talks, Gabriel Ryan generating obfuscated malware payloads on the fly with the introduction of DropEngine, or Mauro and Luis weaponising USB powerbanks didn’t pique my interest the way similar topics have in previous years.

That’s the talks covered, but BSidesNewcastle wouldn’t be living up to its tagline of #WhereTheWeirdThingsGrow (emphasis mine) without some weird. The remote nature of this year’s con meant that we weren’t all huddled in a skate park, or watching a wrestling display whilst enjoying fresh stone-baked pizza, but the team did not disappoint on the weird front. From the Antaganostics waving socks containing bricks at swordsmen to settle disputes (don’t ask) to a tin-foil-hat-making competition, there was plenty of fun to be had, and memories to be treasured.

So whilst I may personally struggle with the context shift to virtual cons, in a year with physical cons (rightly) cancelled left, right, and centre, I’d like to extend my deep appreciation to all of those involved in making the event a great success against all the difficulties this year has presented. This equally goes for the corporate sponsors who helped provide the resources that make any conference possible; the move away from physical conferences must have made sponsorship a risky ROI discussion, and I hope the faith in the BSidesNewcastle team and community was well rewarded (and I’ll try not to take it personally that my own corporate overlords sponsored this year’s event, but were deemed too risky when I was personally involved in running last year’s proceedings. #itHurts….. 😀 )

Until next year, hopefully we can all safely return to meet, hack and be merry in person.


AWS Cloud Development Kit

After posting previously about dipping my toe into the Infrastructure as Code waters with Terraform, a kind individual (who requested to stay nameless) asked if I’d encountered AWS’ native Cloud Development Kit (CDK). I vaguely remember seeing a Beta announcement some time back when the toolkit was first announced, but had discounted it at the time as it wasn’t stable for production workloads. Reviewing again, I’d missed the announcement that the CDK had graduated to general availability in July last year.

Whilst I was quite happy with Terraform’s performance where I’ve used it, my mantra towards cloud-based platforms is: (where possible) stay native. So I took a look at what the AWS-CDK offered, and immediately came across some excellent resources to get me rapidly up to speed.

  • The CDK Workshop provides great setup guides, and without being a dev I was able to get the CDK functional in my environment with minimal fuss. The workshop is helpfully split and duplicated into your own (supported) language of choice (more on this benefit later). If you’ve been here before, it should be no surprise that I took advantage of the Python modules.
    N.B. The workshop will set you up with AWS access keys with Administrator privileges. This may have been the trigger for yesterday’s post on protecting your keys with MFA. You may want to consider the same…
  • The AWS-Samples repo should be a required bookmark for anyone working with AWS. The AWS-CDK-Examples repo is no exception, providing great real-world use cases to suggest potential architecture design patterns. As with the workshop above, the examples repo has examples across all supported coding languages.
  • Buried in the above examples repo was a link to a recorded demo from two of the CDK’s lead developers, Elad Ben-Israel and Jason Fulghum. I’d definitely recommend taking an hour to watch the devs leverage the power of the CDK to live-code a solution to a real-world problem; it greatly helped me get started.

(Almost) Language agnostic

In addition to being native from AWS themselves, one of the immediate values to me is that (unlike Terraform, CloudFormation, or similar) the CDK maps into common programming languages as just another set of libraries/modules. From my perspective this means I can leverage the power of IaC whilst staying within my preferred Python, and not need to learn an additional language/syntax.

For the curious, this is enabled by JSII, which maps between the toolkit and a given language’s syntax and structure; I’ll admit this aspect gets me well outside of my coding comfort zone, so I’ll just appreciate that it works. In practice this means that if Python isn’t your language of choice, you’ve plenty of other popular options, including TypeScript, JavaScript, Java and C#.

Getting Started

If I’ve whetted your appetite, I’d recommend you stop reading the blog of someone who is just getting to grips with the CDK himself and jump into the resources above.

If you’re still here (why? Seriously, check out the links above from those who know what they’re talking about), I’ve essentially mapped the primary commands/features I’d leveraged from Terraform into their CDK equivalents.

terraform init <==> cdk init
cdk init will do what it says on the tin: initialise your current working directory with the basic building blocks needed to start defining your architecture and pushing to the cloud.

Warning: if you’ve an existing codebase you’re planning to CDK-ify, I’d strongly recommend init’ing in a blank directory first, so you can review the changes the command will make to your existing workspace.

terraform plan <==> cdk synth
cdk synth takes the code you’ve defined in your language of choice and creates a CloudFormation template to deploy the defined architecture, assets and configuration.

terraform plan <==> cdk diff
Diverging from Terraform’s workflow, cdk diff provides a view of your environment from the perspective of what changes are going to be made: going from your environment as it’s currently running to the future state that deploying your current CDK stack will create. Leading us to…

terraform apply <==> cdk deploy
cdk deploy is the first command that will actually make changes to your running AWS environment. All being well, it takes the CloudFormation template produced by cdk synth and runs it under CloudFormation to build, modify or remove your infrastructure as required.

terraform destroy <==> cdk destroy
Again, as it says on the tin: cdk destroy will destroy all(*) resources created and managed by its defined stack(s), removing them from AWS. As I stated when discussing Terraform and IaC originally, this is the key value for my interest in IaC toolkits: confidence that at the end of a session I can tear down the infrastructure I’ve been using and (hopefully) not get hit with a nasty AWS bill if I forget about service $x and leave it running for a few weeks.

(* As I found with Terraform, S3 buckets (and I suspect other data services) don’t get removed by the various destroy commands if they’ve been populated with any data after being instantiated – you have been warned…)

First impressions…

…very good. As I did with Terraform, I built a proof of concept using the CDK to deploy my little microservice for automatically spotting credit cards in images. This was something I was able to achieve within a few hours of learning, research, and trial and error. I will likely cover this separately shortly…

With Terraform I found the use case of a single Lambda function and peripheral services harder than expected; the code for the Lambda needed to be packaged manually as an archive prior to diving into the various *.tf configuration files. Admittedly not the end of the world once the process flow was known, but I did find it broke workflow and felt a little cumbersome.

With the CDK this pain point was completely removed. When defining the Lambda resource in the CDK’s stack configuration, you simply pass the code parameter the directory containing your Lambda function (and required libraries, if necessary), and the CDK will do the rest on a deploy. For example, literally just:

example_function = aws_lambda.Function(
    self, "cardspotterLambda",
    code=aws_lambda.Code.asset("./lambda/"),
    some_other_params....
)

Teaser for things to come…..


AWS CLI – Forcing MFA

If you’re planning on using AWS efficiently, you’re going to want to automate with the CLI, various SDKs and/or the relatively newly released Cloud Development Kit (AWS-CDK). This typically requires an access key pair, providing access to your account and in need of being secured against abuse. Adding MFA capabilities to the account reduces a lot of risk; it works seamlessly with the web console UI, but can cause some confusion when dealing with CLI access.

For succinctness I’m going to assume that you have an existing user and associated access keys already. Your ~/.aws/credentials will look like the below (the relevant keys will be removed by the time this is posted):

[mfa-demo]
aws_access_key_id = AKIAQOEN7NXFSJGCSB24
aws_secret_access_key = NzPMANXBqFDfh7YwkYpIPBgbET94QFg75eswzG7l
region = eu-west-1


For the sake of this demo, we’re going to duplicate the Admin policy (below), granting near God-like access to your AWS account. Note, you probably don’t want to do this in the real world, as it clearly makes a mockery of the principle of Least Privilege, but it works for a demo as it is clearly a privileged account that we’d want to do our utmost to lock down and keep out of hostile hands.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "GodLikePermissions",
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*"
        }
    ]
}

Using the AWS CLI tool, you can now do anything your user has permission to do; which with the permissions above, is just about anything. For example:

awaite@Armitage:/tmp$ aws --profile mfa-demo s3 ls
2019-09-28 20:55:13 <redacted>
2019-09-28 20:56:03 <redacted>

Adding MFA token

Adding an MFA token is handled from the user’s security credentials page, below. If you’ve ever used virtual MFA tokens for literally any other service, then the process should be self-explanatory.

Job done?

Unfortunately not; the access key pair above will still provide access just as it did before.

Deny access without MFA

IAM policy evaluation logic causes an explicit deny to take priority over any competing policy statement. With this, we can add a policy statement to deny any action requested without MFA. Expanding the sample policy statement above produces:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Permissions",
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*"
        },
        {
            "Sid": "DenyNonMFA",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "BoolIfExists": {
                    "aws:MultiFactorAuthPresent": "false"
                }
            }
        }
    ]
}
And running the same CLI command now throws an error:

awaite@Armitage:/tmp$ aws --profile mfa-demo s3 ls

An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied
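That explicit-deny-wins behaviour can be approximated in a few lines of Python. This is a grossly simplified model of IAM evaluation (real IAM considers principals, actions, resources and far more context), just to show why the DenyNonMFA statement overrides the Allow:

```python
def evaluate_policy(statements, mfa_present):
    """Grossly simplified IAM evaluation: default (implicit) deny,
    any applicable Allow grants access, but a single applicable
    explicit Deny always wins."""
    decision = "ImplicitDeny"
    for stmt in statements:
        condition = stmt.get("Condition", {}).get("BoolIfExists", {})
        wants_no_mfa = condition.get("aws:MultiFactorAuthPresent") == "false"
        # A statement conditioned on MFA being absent does not apply
        # when MFA is in fact present.
        if wants_no_mfa and mfa_present:
            continue
        if stmt["Effect"] == "Deny":
            return "Deny"  # explicit deny short-circuits everything
        decision = "Allow"
    return decision
```

Running the two-statement policy above through this model returns Allow when an MFA token was used and Deny otherwise, which is exactly the behaviour the CLI error demonstrates.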

Requesting session token with MFA

To recap, we now have a user with the same permissions as before, but unable to utilise them without verifying ownership of the matching MFA token. This is achieved with AWS’ Security Token Service, specifically the get-session-token function. From the command line:

  • --serial-number: ARN of the MFA token assigned to your user.
  • --token-code: current MFA token code
  • [optional] --duration-seconds: lifetime for the token to remain valid, in seconds (default is 12hrs)
awaite@Armitage:/tmp$ aws --profile mfa-demo sts get-session-token --serial-number arn:aws:iam::<account_number>:mfa/mfa_demo --token-code 987654
{
    "Credentials": {
        "SessionToken": "FwoGZXIvYXdzEPb//////////wEaDEioLK19BAZ+rPCosiKGAa2cfjZK99HUj8e9w9ZowKuz5ccWo8t3oSBaSiTv70Km0uYigFWXEa1EVjzcf2PD8LYR4paAeaJrLY+8q4MVmWVslYMskVPh22TdLxF24yEaELq/MBlbBnvBwDH37tTvd8nQlD/jXsmI00ludQh4XRUbhzV+76dUgZG9BcLRB47/ClThsp47KPjYsvEFMijGo2SOHNI8xh16TFJnLIZyx4qZ9Y0A65eugu0CnclDT01KoWnLIC1x",
        "Expiration": "2020-01-26T09:00:40Z",
        "AccessKeyId": "ASIAQOEN7NXF7VSXV45K",
        "SecretAccessKey": "kfr9bELctF7okIUSSylmwepLI9jJDEH9gcaNNbML"
    }
}
Once obtained, the credentials need adding to your ~/.aws/credentials file; note the additional aws_session_token variable:

[mfa-session]
aws_access_key_id = ASIAQOEN7NXF7VSXV45K
aws_secret_access_key = kfr9bELctF7okIUSSylmwepLI9jJDEH9gcaNNbML
aws_session_token = FwoGZXIvYXdzEPb//////////wEaDEioLK19BAZ+rPCosiKGAa2cfjZK99HUj8e9w9ZowKuz5ccWo8t3oSBaSiTv70Km0uYigFWXEa1EVjzcf2PD8LYR4paAeaJrLY+8q4MVmWVslYMskVPh22TdLxF24yEaELq/MBlbBnvBwDH37tTvd8nQlD/jXsmI00ludQh4XRUbhzV+76dUgZG9BcLR
region = eu-west-1

And with that, we’re back to being able to work with the CLI, more confident that we’re the only ones using this key:

awaite@Armitage:/tmp$ aws --profile mfa-session s3 ls
2019-09-28 20:55:13 <redacted>
2019-09-28 20:56:03 <redacted>

For more information, a couple of AWS Knowledge Base and documentation articles proved useful in getting all the pieces lined up correctly:


Cowrie SSH Honeypot – AWS EC2 build script

Happy New Year all!

Whilst eating FAR too much turkey and chocolate over the festive break, I’ve managed to progress a couple of personal projects (between stints on the kids’ Scalextric track, thanks Santa). Still tasks to do(*), but a working EC2 user-data script to automate deployment of the Cowrie honeypot has reached MVP stage.

#!/bin/bash
# based on Cowrie's installation documentation
apt -y update 
DEBIAN_FRONTEND=noninteractive apt -y upgrade 
apt -y install git python-virtualenv libssl-dev libffi-dev build-essential libpython3-dev python3-minimal authbind virtualenv
adduser --disabled-password --gecos "" cowrie
sudo -H -u cowrie /bin/bash -s << EOF >> /home/cowrie/heredoc.out
cd /home/cowrie/
git clone https://github.com/cowrie/cowrie
cd /home/cowrie/cowrie
virtualenv --python=python3 cowrie-env
source cowrie-env/bin/activate
pip install --upgrade pip
pip install --upgrade -r requirements.txt
bin/cowrie start
# runs with cowrie.cfg.dist - will need tuning to specific usecase
EOF

Latest version will be maintained here

*current items on back of beer mat project plan, which may or may not get completed, are:

  • Customise cowrie.cfg, to launch on standard ports rather than default SSH on T:2222 – Completed
  • Fix apt upgrade issue – Fixed courtesy of @ajhdock
  • Mount Cowrie logging, output, and downloads to EFS for persistence – configure Cowrie’s native S3 output module
  • Expand instance to Spot instance pool to lower costs and/or increase instance count
  • Ingest activity logs into $something for further analysis
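For reference, the first (completed) item above comes down to a small cowrie.cfg change; a sketch of the relevant stanza (the exact option layout may vary between Cowrie versions, and binding below port 1024 as the cowrie user relies on authbind being enabled):

```ini
# cowrie.cfg – listen on the real SSH port rather than the default 2222
[ssh]
listen_endpoints = tcp:22:interface=0.0.0.0
```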

Andrew Waite

[Project] AWS-Card-Spotter – Terraform deployment

tl;dr – this project can now be deployed automatically with a Terraform script

Last project update, I introduced my project to leverage AWS resources to identify whether pictures uploaded to an S3 bucket might contain images of credit cards, and in turn need special handling under an organisation’s PCI DSS processes. And it worked!

But cranking serverless environments by hand limits the full power of cloud (so I’m told), so I looked at ways to automate the process; seemingly there’s plenty of options, all with various pros and cons:

  • CloudFormation (CFN):
    • AWS’ own native platform for defining Infrastructure as Code (IaC). CFN is one of AWS’ ‘free’ feature sets (be wary, you still pay as usual for the services deployed BY CFN, but CFN itself isn’t chargeable).
  • Serverless Framework:
    • Well recommended, but under its pricing model, access to AWS resources such as the SNS topic used in this project’s architecture crossed the threshold into the paid-for Pro version.
  • Terraform:
    • Fully featured 3rd-party offering, able to manage resources across multiple cloud providers.
  • AWS Serverless Application Model (SAM):
    • AWS’ latest framework for deploying serverless architecture. Being honest, this is probably exactly what I needed for this project, but I’d finished the project before coming across SAM in my research (maybe a followup article required…..)

As you’ve probably guessed if you read the title or tl;dr of this post: For this project, Terraform nosed over the line; largely for the unscientific reasoning that the DevOps people I know use Terraform for their IaC projects – so I could call in support and troubleshooting if needed.

Terraform Build Script

At its heart, Terraform’s syntax provides a framework for defining what resources you want deploying to your given environment. Resource configuration can get complex if you need lots of customisation, but a basic resource definition follows a simple format:

resource "resource_type" "resource_name" {
  parameter = "value"
}

For example, the definition for a new S3 bucket to upload images to for this project was:

resource "aws_s3_bucket" "bucket" {
  bucket = "infosanity-aws-card-spotter-tfbuild"
}

As you begin to build up more complex environments, you’ll need to reference previously defined resources; this is achieved with the format:

${resource_type.resource_name.attribute}

For example, when creating an S3 Bucket Notification resource, to trigger the lambda code when a new file is uploaded, the ID of the bucket created above is required, like so:

resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = "${aws_s3_bucket.bucket.id}"
}

Deployment Process

Terraform has a lot of functionality, much of which I’ve yet to explore, but for basic usage I’ve found only four commands are required.

terraform init

I’m sure init does all manner of important activity in the background; for now I just know you need to initialise a Terraform project before doing anything else. Once done, however, you can begin to build the infrastructure you’ve defined.

terraform plan

terraform plan essentially performs a dry-run of the build process, interpreting the required resources, determining the links between resources which dictate the order in which they must be created, and ultimately what changes will be made to your environment. It’s a good idea to review the output both to find any errors in terraform files, and to ensure that terraform is going to make the changes expected. For example, running plan against this project’s terraform file produces:

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
Terraform will perform the following actions:
  # aws_iam_role.iam_for_lambda will be created
  + resource "aws_iam_role" "iam_for_lambda" {
      + arn                   = (known after apply)
      + assume_role_policy    = jsonencode(
              + Statement = [
                  + {

N.B. Whilst I’ve found plan great for validating changes to a script, some errors, mistakes and omissions will only be caught once the changes are applied. Which leads me to…

terraform apply

As is probably expected, apply does exactly what it says on the tin: Applies the defined infrastructure to your given cloud environment(s). Running this command will change your cloud environment, with all the potential problems you could have caused manually; from breaking production systems, to unexpected vendor costs – but we’re all experts, so that’s not a concern? right?…..

Multiple applies can be used to edit live environments, and terraform will (with both plan and apply) determine exactly what set of changes are required. This can be highly beneficial when iteratively either troubleshooting a misconfiguration, or adding a new feature into an existing environment.

terraform destroy

destroy’s functionality is ultimately exactly why I was originally interested in adding IaC to my toolkit whilst working within AWS. Like apply, destroy does what it says on the tin: destroying all the resources it’s previously instantiated.

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  # aws_iam_role.iam_for_lambda will be destroyed
  - resource "aws_iam_role" "iam_for_lambda" {
      - arn                   = "arn:aws:iam::<REDACTED>:role/iam_for_lambda" -> null
      - assume_role_policy    = jsonencode(
              - Statement = [
                  - {

From a housekeeping perspective, this removes the potential for unneeded legacy resources being left around forever because no-one’s quite sure what they do, or what they’re connected to: a project environment built and maintained by terraform is gone in one command. From a personal perspective, this removes (reduces?) the potential for leaving expensive resources running in a personal account: finished a development session? terraform destroy and everything’s(*) gone.

* Ok, there’s some exceptions: For example, if there’s any uploaded files left in the created S3 bucket after testing, terraform destroy won’t delete the bucket until the contents are manually removed. Which I can definitely see is a sensible precaution to help avoid accidental data loss.
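For what it’s worth, Terraform does provide an opt-out for throwaway environments: the aws_s3_bucket resource accepts a force_destroy argument, which empties the bucket as part of the destroy. A sketch (this assumes you’re happy to lose the bucket’s contents – definitely not for anything resembling production):

```hcl
resource "aws_s3_bucket" "bucket" {
  bucket        = "infosanity-aws-card-spotter-tfbuild"
  # Allow `terraform destroy` to delete the bucket even if objects remain.
  force_destroy = true
}
```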


Getting my hands dirty, I found a couple of issues which were initially difficult to overcome.

I’d initially naively assumed a little microservice built around a single Lambda would be a simple use-case to use as a learning project. Turns out, Lambda is one service that doesn’t work wonderfully well with terraform thanks to a circular reference: the lambda code needs writing (and packaging, zip archive) prior to deployment of infrastructure, but the lambda code will probably need to reference other cloud resources which aren’t built/known until after deployment.

This was resolved by having the lambda code (Python, in this case) retrieve the resource references from runtime environment variables, which are populated by terraform at build time. Simple enough once worked out, but it meant that my simple project took longer than initially expected.
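On the Terraform side, this pattern maps to the aws_lambda_function resource’s environment block. A minimal sketch (the resource and variable names here are illustrative, not the project’s actual ones):

```hcl
resource "aws_lambda_function" "card_spotter" {
  function_name = "aws-card-spotter"
  filename      = "lambda.zip"
  handler       = "main.handler"
  runtime       = "python3.7"
  role          = "${aws_iam_role.iam_for_lambda.arn}"

  environment {
    variables = {
      # The Python code reads this at runtime (os.environ["SNS_TOPIC_ARN"]),
      # so the real ARN doesn't need to exist until after deployment.
      SNS_TOPIC_ARN = "${aws_sns_topic.results.arn}"
    }
  }
}
```

The lambda code stays free of hardcoded ARNs, and terraform fills in the real values at apply time.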

Secondly, again for ease of proof of concept, the SNS Topic for this project was intended to just be a simple email trigger with the output. Because of AWS’ (sensible) requirement for an email subscription to be verified before sending (to prevent abuse, spam, etc.) email endpoints aren’t supported by terraform:

These are unsupported because the endpoint needs to be authorized and does not generate an ARN until the target email address has been validated. This breaks the Terraform model and as a result are not currently supported.

This isn’t the greatest limitation; it just requires a manual step of subscribing to the created SNS Topic in the AWS console before the deployed pipeline is fully functional. And I’d expect that in a real-world example, the process output will likely trigger further processes, rather than just filling another inbox.


I’m aware that this has been a very superficial introduction into the functionality terraform (and other IaC platforms) can provide. Glad I took the dive to add to my AWS toolbox; I can understand why the ability to quickly define, build, and tear-down infrastructure is going to be an important foundational ability with the other projects I’ve got rattling around my skull. Watch this space….


A Northern Geeks trip, well, home(ish)

Back in the annals of time (2011) I wrote about my first experiences at a security conference; the first UK BSides in London. To say that that con had a big impact on my career is an understatement, but that’s a story for another day. That experience was exactly why, when catching up with an old colleague just two and a bit months ago and Ben said “I’m thinking of doing BSides in Newcastle, do you want to help out?”, I immediately and without thinking said “YES!”.

When I read a message later that week that said “I’ve got a venue sorted, we’re scheduled for 9 weeks time”, I immediately said “FUUuuu……….”

Fast forward 9 weeks, and the quickest ever organised BSides(*) is complete, the self-named #WeirdestEverBSides living up to its name (more on that later).

(* I believe that claim is accurate, but with BSides spinning up all over the globe, I’ll happily be corrected if/when someone else claims the crown).


Let’s get these out of the way first; no event organised in 9 weeks, spearheaded by people that have never organised an event before, was ever going to be perfect. The merch order wasn’t completed by the time of the con, the planned talk recordings/streaming weren’t active, the venue (by its nature, more later) was cold, and we were still packing the last of the attendee bags whilst the first batch were being handed out.

I’m sure some attendees have complaints and feedback I’m unaware of, please, get in touch and provide any and all feedback, we can’t improve or fix problems we’re unaware of.


Where do I start? I was a broken man after the event (especially after only just fully recovering from an illness that I feared may have forced me to miss the big day), but overhearing positive feedback throughout the event kept me going, and reading all the feedback from attendees on social media channels since the conference closed has me immensely energised and insanely proud of playing my small part in planning and on-day helper-monkey work.

One advantage of the merch delay was we had no way to differentiate any attendee; which was a great leveller, everyone was equal and I could be confident that all the positive comments I heard weren’t just made to be polite to the crew member within earshot.

Now, if you’ve read previous write-ups of my various conference travels, this is the point I usually attempt to summarise each talk I saw, distilling the copious notes I took and attempt to get key points across to anyone that missed a given talk, but was interested in the topic. But, this was the first con I was involved in the crew, so barely got to any sessions myself, and DEFINITELY didn’t get to take any notes; so the rest of this post is likely to be a brain dump of my memory from the day.

Venue – Dynamix

What can I say? University lecture rooms? Hotel conference suites? All been done before, try a skatepark; scratch that, we can go further – let’s have track 1 IN the halfpipe!

I felt the choice of venue was inspired from the first pre-con visit I had to get setup for the big day; perfectly setting up the tone of the event, and providing a unique and memorable venue as the backdrop to the con.

The Dynamix team who hosted 100+ geeks and hackers? They were brilliantly supportive and helpful, many thanks to the whole team. If you’ve any interest in doing insane <redacted> propelled on little wheels, get yourself down; the skills on show by the regulars whilst we were setting up the night before were amazing. Me? Did I rekindle my youth (I used to be a blade-r, you know?…..)? I considered it; then bailed whilst walking down the steps, spread all over the concrete without the aid of wheels, so those days may be behind me.

Talks – tracks 1 & 2

As mentioned above, I’m disappointed that I missed almost all the talks in their entirety, so I’ll leave the talk summaries to others. What I will say is that the odd couple of minutes I managed to snag hidden at the back whilst running between this and that task were excellent. Sam’s journey through early days home computing as a child felt strangely familiar, Rick’s journey through the evolution of cyberpunk made me feel OLD, and Ben philosophising the methodology known as the “F’#%k it!” approach was both entertaining and provided an insight into how a con was able to go from idea to delivered in ~9 weeks.

I missed all of Jenny Radcliffe‘s keynote, but was left in stitches when I noticed the message left on the back of her hoody:

As I was running around like a mad-thing at the time I read this, still unsure if we’d be able to pull the con off (despite it having started at this point), this definitely seemed like advice I wish I’d taken. (And I still somewhat blame my naive optimism for running the event on Jenny and her team, for making BSidesLiverpool’s inaugural event look so effortless.)


If a given pair of talk topics didn’t take your fancy, there was plenty to keep you occupied:

  • Physical Security? Try your hand at lockpicking (and safe cracking) courtesy of Moon on a Stick
  • Looking for your foot on, or next step up, the security career ladder? Try the careers village, with great thanks to Harvey Nash and Sharpe Recruitment
  • Already on the infosec career travellator and need help dealing with the stress and burnout discussed as part of several talks on the day? Try the all (most?) important Mental Health village
  • Your kit getting old and dated? Try the charity sticker collection.

Thanks to everyone that got involved in the last activity, almost £100 raised for Great North Air Ambulance, who do crucial work and it was great to be able to support them in a small way.

As the below shot of my previously naked laptop shows, I had to be pulled away from the stand before I spent my kids’ inheritance.

Just 24hrs ago, this machine was ready for respectable business meetings. Now it’s ready to CRUSH those meetings 🙂


I’m gutted I didn’t once make it upstairs to the CTF (and not just because it was one of the only areas with warmth). Everything I heard during the event, and following up on social media afterwards, suggests I definitely missed a great competition. So I must say a big thanks to the PwnDefend crew for designing and running the CTF; I must make a better effort next year.

Lunch Break

Lunch started conventionally enough, with pizza provided by the Log Fire Pizza Co. They did an excellent job of refueling attendees and crew alike.

Entertainment during the lunch break was a bit more of a curve ball. You know what totally fits with an infosec conference? Wrestling! Well, maybe not totally, but we had to take advantage of the fact that Battle Ready, featuring none other than WWE NXT’s own Primate, were training in the far corner of the venue. They agreed to put on a show for attendees during the break, for the small price of a pizza each from the LogFire van. Odd combo, but most attendees appeared to appreciate another bulletpoint on the journey to weirdest ever BSides.

Curtain falls

Ian, aka Phat Hobbit, took centre stage for the closing keynote. Delivered in his usual bombastic style, Ian took the audience through his review of InfoSec during 2019, and crucially provided his insight and wisdom for what will be needed in the years and decades ahead as we as an industry approach the turn of a new decade. Ian had the audience’s undivided attention; the only time you couldn’t hear a pin drop was when Ian had the audience roaring with laughter – sometimes with cracking wit, and sometimes with a hard truth delivered too close to home, generating a nervous, knowing chuckle.
It made me think about my own 2019. I’m still thinking about that, but at the start of the year I definitely didn’t expect to be doing so whilst overhanging the lip of a half-pipe, listening to the keynote of a conference I’d played a small part in organising:

The memorable quote I took away was this:

You won’t be able to beat cyber criminals

We will beat cyber criminals

(Paraphrased as I wasn’t fast enough to take a sneaky pic of the slides before they changed. Ian, happy to be corrected if I’ve misquoted.)

So the conference which emphasised community and togetherness at the heart of an industry closed with the same message. A request was made for anyone – speaker, official volunteer or attendee – to raise their hand if they had helped out in any way, even down to the small act of re-positioning a chair in the venue; almost every single hand was in the air.

Which leaves me with the tl;dr version of a post that is FAR longer than I originally conceived:

For an industry with an annoying reputation for drama, together, we can, do and will achieve amazing things.

And goals which at the outset may have seemed impossible, improbable and just flat-out crazy, will be achieved.

My thanks to everyone who had enough faith in the event to give up their precious free time to join a bunch of overly naive and optimistic geeks and hackers who dared to believe that a security conference, organised in ~9 weeks, in a cold warehouse in Newcastle (yes, Gateshead 🙂 ) could possibly be anything other than a disaster. I’m obviously very biased, but I believe we achieved, at least, the level of ‘not a complete disaster’.

See you next year? Maybe?