Session Manager via VPC EndPoint

Session Manager

For some time I’ve relied on AWS’s Session Manager for remote administration of my EC2 instances. The ability to drop into an admin shell with nothing other than a browser is too handy to pass up, especially when it lets you remove ingress points that could otherwise be abused; reducing attack surface is always a laudable goal.

But a recent project hit a use-case I’d not encountered: a truly private subnet. Not just ingress blocked by a Security Group, not just private IP addresses behind a NAT gateway; a subnet with no Internet connectivity at all.

Should still work…

Admittedly, my first instinct was that this should still work: I was still connecting to AWS via the web browser, and the instance was still within AWS’s infrastructure. I set everything up as normal, yet no remote connection. Investigation ensued and eventually (with some assistance) I came across the explanation: instances require access to various AWS services, which were unavailable in the configuration described above, and those services can be plumbed directly into an otherwise private subnet via VPC Endpoints.

To Terraform!

A couple of hours of trial and error later, deploying a working demo environment with Terraform proved surprisingly easy.

Basic requirements are:

  • EC2 instance with the Session Manager agent installed and active –
    • Amazon Linux 2 AMIs have the agent pre-installed
  • IAM instance role with permissions to access the ec2messages, ssm and ssmmessages APIs
    • The AWS-managed IAM policy AmazonSSMManagedInstanceCore works nicely
  • VPC Endpoints for the same APIs.

Example VPC configuration:

resource "aws_vpc_endpoint" "ec2messages" {
  vpc_id              = aws_vpc.private.id
  service_name        = "com.amazonaws.eu-west-1.ec2messages"
  vpc_endpoint_type   = "Interface"
  security_group_ids  = [aws_security_group.vpc_ep_sg.id]
  subnet_ids          = [aws_subnet.private.id]
  private_dns_enabled = true
  tags = {
    "Name" = "ec2Messages"
  }
}
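The ec2messages endpoint above needs matching Interface endpoints for the ssm and ssmmessages services. A sketch following the same pattern (the VPC, subnet and security group references assume the resource names used in the example above):

```terraform
resource "aws_vpc_endpoint" "ssm" {
  vpc_id              = aws_vpc.private.id
  service_name        = "com.amazonaws.eu-west-1.ssm"
  vpc_endpoint_type   = "Interface"
  security_group_ids  = [aws_security_group.vpc_ep_sg.id]
  subnet_ids          = [aws_subnet.private.id]
  private_dns_enabled = true
}

resource "aws_vpc_endpoint" "ssmmessages" {
  vpc_id              = aws_vpc.private.id
  service_name        = "com.amazonaws.eu-west-1.ssmmessages"
  vpc_endpoint_type   = "Interface"
  security_group_ids  = [aws_security_group.vpc_ep_sg.id]
  subnet_ids          = [aws_subnet.private.id]
  private_dns_enabled = true
}
```

Note the security group attached to the endpoints must allow HTTPS (443) inbound from the instance’s subnet, or the agent still won’t be able to reach the services.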

If you’d like to play along yourself, my demo deployment is available here, albeit with a caveat…

Deploying the project will incur (minimal) costs, and it was developed rapidly from the sofa, in a single evening, specifically as a quick proof of concept. It works, but I’d definitely recommend further review before deploying into a production environment; do so at your own risk.


Andrew

Automating infrastructure code audits with tfsec

Unless you’ve been living under a rock for the last few years, you’ll know a few things about the Cloud:

  • Functionality and capabilities released by Cloud vendors are expanding at an exponential rate.
  • The DevOps paradigm is (seemingly) here to stay – the cold days of building physical hardware, sat on the floor of a datacentre where I started my career, are now a rarity.
  • Infrastructure as Code makes it ridiculously easy to deploy a vast inventory of resources quickly.

All of the above are excellent, and each re:Invent, Ignite, etc. gets me as excited as my little kids in the run-up to Christmas, just thinking about all those potential new goodies to play with. But even for an overly excited security monkey, keeping track of ALL that functionality is near impossible, and ensuring that you don’t introduce a weakness into your environment when leveraging the latest and greatest tech is fraught with danger.

There are many platforms to help identify configuration weaknesses; within AWS I’ve been a great fan of SecurityHub since its announcement in 2018, especially after the release of the Foundational Security Best Practices standard earlier this year. But as useful as SecHub is, it’s a bit stable-door-y, in that it identifies issues after they’re already in your environment. What if there was a better way?…

TFSec

With the advent of Infrastructure as Code (IaC) referenced above, I’ve been building a lot of cloud architecture as code; and the environments I’ve been working within have tended to leverage Terraform for this purpose. tfsec is a tool to identify potential security issues with a project’s planned resources before you commit to deploying them into your production environment. Taken a step further, tfsec can be integrated into your existing CI/CD pipeline (either passively or blocking), allowing common security issues to be identified automatically during the standard deployment process, without needing any specific input from the security team.

Sound good? Let’s take a closer look….

Install

I’ll not steal any thunder from the tfsec documentation, but I found installation surprisingly straightforward: a one-liner with your favourite package manager was all it took to get the binary onto my system.

choco install tfsec

Running

Running the tool is equally straightforward from your local system. Simply navigate to the directory containing the project you have concerns about, and:

tfsec

Yes, it really is that straightforward.

Example/demo time…

Using the AWS Access Key Honeypot hobby project I’ve been working on recently as an example, tfsec ran a quick audit and identified that the SNS Topic used for notifications, whilst working perfectly for the happy-path use-case, wasn’t encrypting its messages at rest.

code\Access-Key-Pot>tfsec     

Problem 1

  [AWS016][ERROR] Resource 'aws_sns_topic.honeypot-notifications' defines an unencrypted SNS topic.
  C:\Users\awaite\Documents\code\Access-Key-Pot\cloudwatch.tf:9-31

       6 |   name = var.cloudtrail_logs
       7 | }
       8 |
       9 | resource "aws_sns_topic" "honeypot-notifications" {
      10 |
      11 |   name            = "honeypotAlarmsTopic"
      12 |   delivery_policy = <<EOF
      13 | {
      14 |   "http": {
      15 |     "defaultHealthyRetryPolicy": {
      16 |       "minDelayTarget": 20,
      17 |       "maxDelayTarget": 20,
      18 |       "numRetries": 3,
      19 |       "numMaxDelayRetries": 0,
      20 |       "numNoDelayRetries": 0,
      21 |       "numMinDelayRetries": 0,
      22 |       "backoffFunction": "linear"
      23 |     },
      24 |     "disableSubscriptionOverrides": false,
      25 |     "defaultThrottlePolicy": {
      26 |       "maxReceivesPerSecond": 1
      27 |     }
      28 |   }
      29 | }
      30 | EOF
      31 | }
      32 |
      33 | resource "aws_cloudwatch_log_metric_filter" "honeyuser-metric" {
      34 |   name           = "HoneyUser_Activity"

  See https://tfsec.dev/docs/aws/AWS016/ for more information.

  disk i/o             9.4633ms
  parsing HCL          0s
  evaluating values    3.6542ms
  running checks       1.8473ms
  files loaded         4

1 potential problems detected.

A feature that I found really helpful when starting to work with tfsec is that it maintains specific documentation for each configuration issue it checks, including an explanation of why the finding is a problem, as well as a reference example of what a secure/resolved configuration looks like. In the case of the above, all details can be found at: https://www.tfsec.dev/docs/aws/AWS016/

In my case, the message content expected via the SNS Topic isn’t overly sensitive, and key management configuration is outside the scope of a demo project designed as a starting point for people to implement the same in their own environments (as KMS key configurations, policies etc. will differ). I may deal with this issue later, but for now I was happy to accept the single finding. But how to inform tfsec of this acceptance?

All findings/checks can be ignored within your code base. Sticking with the above example, I added the below line to the SNS Topic’s resource block, both telling tfsec to ignore the finding on output and providing documentation of the decision, all trackable in source control as standard.

#tfsec:ignore:AWS016 - SNS topic encryption beyond scope of demo deployment
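In context, the ignore comment sits immediately alongside the offending resource block (resource and attribute names taken from the finding above; the body is truncated here for brevity):

```terraform
#tfsec:ignore:AWS016 - SNS topic encryption beyond scope of demo deployment
resource "aws_sns_topic" "honeypot-notifications" {
  name = "honeypotAlarmsTopic"
  # ... delivery_policy etc. as before ...
}
```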

And with that, analysis of the updated project returns clean:

code\Access-Key-Pot>tfsec

  disk i/o             9.4636ms
  parsing HCL          0s
  evaluating values    500.5µs
  running checks       499.1µs
  files loaded         4

No problems detected!

Additional integrations

Whilst the above use case of manually checking your projects has a lot of merit, I believe the true potential value of tfsec (and similar tooling) comes from integration with the automated deployment pipelines leveraged by modern dev teams. Triggering a security audit of planned infrastructure on every change, and feeding the findings back into the project’s issues via (for example) GitHub Actions, should make continuous security auditing painless, seamless, and all within platforms already familiar to your dev and infrastructure colleagues. Example (‘borrowed’ from tfsec’s documentation) below.

github security alerts

Cloud Agnostic

The other advantage tfsec provides over my previous example of AWS SecurityHub (or Azure’s Sentinel) is its ability to identify security issues across multiple IaaS vendors, currently covering the big 3 of AWS, Azure and GCP. If you’re concerned about being tied to one vendor, tfsec can help keep your toolset and skills transferable if (when?) your business unit changes direction to a different vendor, you change employer entirely, or you simply find that ShadowIT platform Bob from finance spun up over the weekend that needs an emergency review and lockdown.

Summary

So, go ahead: add tfsec into your deployment pipelines, or throw the binary locally at the project you’re currently working on (or that ‘legacy’ platform that Sharon built years ago, which no one has had the confidence (/naivety?) to touch since she left…).


Andrew

AWS HoneyUsers

Deception technology and techniques are having a resurgence, expanding beyond the ‘traditional’ high/low-interaction honeypots into honeyfiles, honeytokens and (as you may have guessed from the title) honeyusers. Today is the culmination of a “what if?” idea I’d been mulling over for years; I actually started working on it earlier in the year (but then 2020 happened), and it is now released. For those that want to skip ahead, GitHub repo here.

As the name suggests, the project deploys dummy users into an AWS account, which can then be lost, embedded or leaked in places an adversary may come across them. Created users have (*almost) no permissions to do anything, but any attempted activity can be monitored and alerted on, giving 100% true-positive threat intel of unauthorised activity within your environment.

(*almost no permissions: given AWS’ API model, the STS call get-caller-identity will succeed; but this is a read-only function providing metadata about the honeyuser itself.
N.B. if any cloud red-teamers are aware of an exploit that can chain from just the information leaked by this call, I’d love to know more.)

Full information is available in the project’s ReadMe, but the high-level architecture of the components deployed by Terraform is shown below:

Summary Architecture

Once created, any activity from the dummy user will trigger a notification, which can be consumed by whatever alerting/monitoring/threat-intel platform you’re leveraging (or just a simple email).

Example notification

If you don’t want to run some infrastructure as code scripts from a random InfoSecMonkey on the Internet (I won’t hold it against you), take a look at Thinkst’s Canary project, where you can implement similar tokens (and MANY more – seriously, Thinkst deserve massive props, or at least a follow) without needing to get your hands dirty directly, all via a nice webUI.

Finally, I’ll end with a request: if you’re sprinkling deception users across your estate with this project (or any other method), I’d love to hear some war-stories, if they’re not locked behind NDAs.


Andrew (@infosanity)

Bad workmen – a Terraform Lambda deployment story

You know the old adage of “a bad workman blames their tools”? Well, guilty as charged…

When I built my AWS-cardspotter project with Terraform, the main goal was to learn Terraform, which I had no/limited/not-enough experience with at the time. Looking back at that initial deployment (it’s awful, please don’t judge me, or use it as a reference point for anything in production workloads…), one of the criticisms I’d laid against Terraform was the need for a separate build step, bundling the Lambda’s code into a zip archive prior to deploying with Terraform. As you might have guessed from the title and opening of this post, I was wrong.

Introducing the all powerful (well, useful) archive_file data block:

data "archive_file" "spotter_lambda_archive" {
  type        = "zip"
  source_dir  = "../AWS-card-spotter/Lambda/"
  output_path = "AWS-card-spotter_LAMBDA.zip"
}
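The resulting archive can then be consumed directly by the Lambda resource. A sketch of the wiring (the function name, handler, runtime and IAM role here are illustrative, not taken from the original project):

```terraform
resource "aws_lambda_function" "spotter" {
  function_name = "aws-card-spotter" # illustrative name

  # Consume the zip produced by the archive_file data block above
  filename = data.archive_file.spotter_lambda_archive.output_path

  # Hash of the bundled source; Terraform only re-deploys the function
  # when the archive contents actually change
  source_code_hash = data.archive_file.spotter_lambda_archive.output_base64sha256

  handler = "lambda_function.lambda_handler"          # assumed Python entrypoint
  runtime = "python3.8"                               # assumed runtime
  role    = aws_iam_role.spotter_lambda_role.arn      # assumed IAM role resource
}
```

With that in place, a single terraform apply bundles and deploys in one step; no separate build stage required.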

Simple as that.


Andrew

AWS CLI – MFA with aws-vault – making it seamless

Oooof! That’s a long title. But I realised after the last post (did you miss the last episode? Catch up here) that whilst it covered all the technical requirements for getting aws-vault operational, it missed some steps needed to truly integrate with your current workflows without introducing additional cycles. So, without further preamble, introducing……

credential_process=….

As its name may imply, the credential_process directive is added to the standard .aws/credentials file to tell AWS clients/SDKs how to generate the requested credentials on the fly, rather than hardcoding key pairs into the .aws/credentials file (remember: hardcoded, clear-text credentials == BAD).

What does this look like in practice? If you’ve been following along, you’ll have seen that I have a demo environment with a profile infosanity-demo, and that to access the temporary credentials generated by aws-vault, I’d shown passing the command I wanted to run as a given user/role to the aws-vault binary. Like so:

>aws-vault exec infosanity-demo aws sts get-caller-identity
Enter token for arn:aws:iam::<redacted>:mfa/infosanity_demo: 858178
{
    "UserId": "AIDA<redacted>",
    "Account": "<redacted>",
    "Arn": "arn:aws:iam::<redacted>:user/infosanity_demo"
}

This works, but I promised integration with your existing, non-aws-vault-aware workflows; you don’t really want the extra finger workout of typing aws-vault exec $profile in front of every command requiring AWS credentials. [Profile] blocks are common components of the .aws/credentials file, and if you’ve already got hardcoded key pairs, this is where you’d find them, alongside aws_secret_access_key config lines. If you’ve confirmed that aws-vault is configured and working (the above sts get-caller-identity call is perfect for testing), then replace your current hardcoded keys with something similar to:

[infosanity-demo]
credential_process = aws-vault exec infosanity-demo --json

It really is that simple. Now, the next time an AWS client/SDK/etc. attempts to use your profile, it will trigger aws-vault in the background, providing the particular script/runtime/etc. with temporary keys, without the tool needing to change for, or even be aware of, the integration – keeping your actual AWS key pairs much safer (or non-existent if you’re using role assumption, which I’d recommend).

N.B. this isn’t just for aws-vault; if you manage your AWS secrets via a different method, credential_process will still provide the capabilities described, with any utility able to emit AWS credentials in the expected format. Truly powerful.
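That expected format is a single JSON document printed to stdout, following the AWS CLI’s documented credential_process contract (Version, AccessKeyId, SecretAccessKey, SessionToken, Expiration). A minimal sketch of a custom credential helper; the static values are obviously placeholders, a real helper would fetch temporary credentials from STS or a vault:

```python
import json

def build_credential_output(access_key_id, secret_access_key, session_token, expiration):
    """Build the JSON document the AWS CLI expects from a credential_process helper."""
    return json.dumps({
        "Version": 1,  # currently the only supported version
        "AccessKeyId": access_key_id,
        "SecretAccessKey": secret_access_key,
        "SessionToken": session_token,
        "Expiration": expiration,  # ISO8601 timestamp; credentials refresh after this
    })

if __name__ == "__main__":
    # Placeholder values, purely to demonstrate the output shape
    print(build_credential_output(
        "ASIAEXAMPLEKEY",
        "exampleSecretKey",
        "exampleSessionToken",
        "2020-11-23T18:30:00Z",
    ))
```

Point a credential_process line at any script emitting that shape, and the AWS tooling will consume it exactly as it does aws-vault’s output.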

Now, reverting back to the native cli tool works flawlessly….

> aws --profile infosanity-demo sts get-caller-identity
Enter MFA code for arn:aws:iam::<redacted>:mfa/infosanity_demo: 
{
    "UserId": "AROA<redacted>:botocore-session-1606155746",
    "Account": "<redacted>",
    "Arn": "arn:aws:sts::<redacted>:assumed-role/infosanity-demo-test-role/botocore-session-1606155746"
}

No more (well, less) excuses for hardcoding AWS keypairs and (potentially) leaving them in backups, git commits, etc. – because that would be embarrassing….


Andrew

AWS CLI – MFA with aws-vault

Previously I’ve covered why it’s important to protect AWS key pairs, how to enforce MFA to aid that protection, and how to continue working with the key pairs once MFA is required. If you missed the initial post, it’s all available here.

Everything in that article works, but as with a lot of security it’s a bit of a trade-off between working securely and working efficiently. It’s certainly more secure than cleartext keys being the only defence between an adversary and your cloud environment, but from a dev/ops perspective? It adds extra steps before anyone can do anything productive. Who wants a workflow that requires a couple of minutes of unproductive work before the actual work can start? How can we improve?

Automate all the things

Once understood, the workflow for generating temporary keys is relatively simple; it just requires a fair amount of copy/pasting, which is tedious for anyone. Surely it can be automated? Jumping into my favoured Python, with Boto3 imported, it can be.

The guts of the requirement is a single get_session_token call to AWS’ Security Token Service (STS), in this case using AWS’ Boto3 library for Python to handle the creation of an API client.

    temp_session = client.get_session_token(
        SerialNumber = conf["tokenSerial"],
        TokenCode = token
    )

Once we have our temporary credentials, a handful of quick print statements will re-purpose received credentials ready for inclusion into ~/.aws/credentials file:

    print("[TemporaryProfile]")
    print("aws_secret_access_key = %s" %(temp_session["Credentials"]["SecretAccessKey"]))
    print("aws_access_key_id = %s" %(temp_session["Credentials"]["AccessKeyId"]))
    print("aws_session_token = %s" %(temp_session["Credentials"]["SessionToken"]))

A fully working CLI script is available in Gist form here.
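Rather than copy/pasting the printed output, the temporary profile can also be written into the credentials file programmatically. A quick sketch using the standard library’s configparser (the profile name and file path are illustrative; the AWS credentials file is plain INI, so this works without any AWS-specific libraries):

```python
import configparser

def write_temp_profile(credentials_path, profile_name, creds):
    """Write/overwrite a profile in an AWS credentials file with temporary keys.

    creds is the Credentials dict returned by STS get_session_token.
    """
    config = configparser.ConfigParser()
    config.read(credentials_path)  # keep any existing profiles intact
    config[profile_name] = {
        "aws_access_key_id": creds["AccessKeyId"],
        "aws_secret_access_key": creds["SecretAccessKey"],
        "aws_session_token": creds["SessionToken"],
    }
    with open(credentials_path, "w") as f:
        config.write(f)
```

Calling write_temp_profile(os.path.expanduser("~/.aws/credentials"), "TemporaryProfile", temp_session["Credentials"]) would then replace the manual copy/paste step entirely.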

Enter AWS-Vault

The quick script above serves our needs, and is an improvement over manually setting serial-token ARNs etc. every time we want to do some work. But now that we know how to leverage the available SDKs and understand how the underlying process works, let’s stop naively assuming we’re inventing the wheel for the first time and review some existing utilities.

AWS-Vault as a project has been around for roughly 5 years, and is still evolving and improving today. Among many other capabilities, it handles the use-case outlined above (without relying on a script written in an evening), and is cross-platform, covering Windows, OSX and Linux (although there’s no prepackaged bundle for dpkg-based platforms, my ’nix of choice).

Once installed, aws-vault is aware of your existing .aws/config file, limiting the configuration steps required to get up and running (no need to duplicate serial-token ARNs into conf[] as with my quick script). Just be aware that, as aws-vault is .aws/config-aware, it will also modify the same config file as needed whilst you interact with the vault; just in case, backups are (as always) recommended.

The first thing you need to do is add base credentials for your source user, in my case:

>aws-vault add infosanity-demo
Enter Access Key ID: AKIAsomekeyhere
Enter Secret Access Key: 
Added credentials to profile "infosanity-demo" in vault

Once stored securely within the utility (aws-vault integrates with the OS’ native secure key store, such as Keychain or Windows Credential Manager – no more keys in plaintext files littering your HDD), you can see the credentials you have available:

>aws-vault list
Profile                  Credentials              Sessions
=======                  ===========              ========
infosanity-demo          infosanity-demo          -

And start working with AWS, letting aws-vault handle management of temporary access keys in the background. For example, using aws-vault’s exec to wrap your usual command – in this case the aws cli client itself to verify our identity – and then confirming aws-vault’s session state:

>aws-vault exec infosanity-demo aws sts get-caller-identity
Enter token for arn:aws:iam::<redacted>:mfa/infosanity_demo: 858178
{
    "UserId": "AIDA<redacted>",
    "Account": "<redacted>",
    "Arn": "arn:aws:iam::<redacted>:user/infosanity_demo"
}

>aws-vault list   
Profile                  Credentials              Sessions
=======                  ===========              ========
infosanity-demo          infosanity-demo          sts.GetSessionToken:59m53s

With aws-vault up and running, you’re ready to leverage all the power of AWS’ APIs and associated IaC frameworks (such as my favoured CDK), safe in the knowledge that your access credentials are securely managed in the background, (hopefully) reducing both the likelihood and impact of access keys accidentally sneaking into a source code commit, or an accidental(?) tweet….


Andrew

DC44191 – AWS Security Ramblings

In the last week of August, in the middle of summer vacation, I had the honour of being asked to give a presentation at the second meeting of the newly formed DC44191 in (virtual, for now) Newcastle. Local DefCon groups are an offshoot of the long-running DefCon conference, (usually) hosted annually in Las Vegas. If you’re not aware of the great history of DefCon, you can get a jumpstart here.

Whilst honoured to be asked, and keen to do anything necessary to support the rise of another local infosec group, I was nervous. DefCon is all about teh hackz, and whilst I’ve spent some time as a road-warrioring red-teamer, that’s not been my day-to-day activity for the last few years. Would those interested in attending DC groups be as interested in BlueTeam topics? I needn’t have worried: I pitched my idea of “Things I wish I’d known about AWS, when I started working with AWS” to Callum, and the session was greenlit.

The talk is born

Most cloud security discussions, whilst useful, tend to begin from one of three premises:

  • Building ideal infrastructure on a greenfield site: this is what we’re going to build
  • Pre-built ideal infrastructure: this is how we built it
  • Or a deep dive into specific technologies and services, in isolation from the wider picture.

What I wanted to discuss, and hopefully managed to, was what I would do if parachuted into an existing AWS account and suddenly made responsible for securing whatever is there – whether starting a new role and needing to get up to speed rapidly, or upon discovery of more of the ever-popular ShadowIT. With that in mind, what was covered in this whirlwind tour of day one(ish) with a new-to-you AWS environment?

Stop using your root account!

Seriously, if you only take one thing from this, STOP USING THE ROOT ACCOUNT, and whilst you keep it nice and idle make sure to delete any access keys and enable MFA for extra protection. You do not want someone unauthorised helpfully spinning up infrastructure on your behalf (and at your expense….).

If you’re using the root AWS account, nothing I’m going to discuss below is as important; stop reading now and go deal with that first. I’ll still be here once you’re done.

Don’t know if the root account is in use or suitably secured? Read on, we’ve some services that might help you later…

Logs! Enable CloudTrail

In the ephemeral Cloud, observability is key. If you can’t see it, you can’t secure it; and CloudTrail is the cornerstone of almost all the other key security features below. Turn it on, make sure it can see EVERYTHING (yes, even that region you’ve no intention of ever using), and get it capturing the data you’ll need later.

Automated config control and asset management sound good?

Then AWS:Config is where you want to be.

Easily set up with a couple of clicks, Config will begin to track the configuration of your AWS resources (at the AWS level, not the internal configuration of EC2 server instances etc.), creating a searchable asset inventory and audit history of configuration changes. Want to know who made the supersecret S3 bucket public 30mins after a security audit checked it was suitably restricted? Config can answer your who-did-what-to-what questions.

But wait, there’s more. Take the above example: knowing who did what is good, but that’s horse-bolted-stable-door territory. Wouldn’t it be better if we could have a business rule that states “S3 buckets will not expose data directly to the entire Internet”, and the infrastructure could self-enforce? Enter rules and remediation actions: define the rules, and what actions to take for items that fail. With Lambda routines, (*almost) ANY requirement you have can be automated. But I’m going too far down a rabbithole for this overview – maybe there’s a followup post or three there. For now, just know that Config rules can tell you, for example, if the root user account has active access keys in use (see, told you above we’d help with that).
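To illustrate the flavour of a Lambda-backed custom rule, here’s a minimal sketch of the evaluation logic only. The event shape below is a simplified stand-in, not the exact Config schema; a real custom rule receives a full invocation event and reports its verdict back to Config via the put_evaluations API:

```python
def evaluate_bucket_acl(configuration_item):
    """Toy compliance check: flag S3 buckets whose ACL grants access to AllUsers.

    configuration_item is a simplified stand-in for the resource snapshot an
    AWS Config custom rule receives; only the fields used here are modelled.
    """
    public_grantee = "http://acs.amazonaws.com/groups/global/AllUsers"
    for grant in configuration_item.get("acl", {}).get("grantList", []):
        if grant.get("grantee") == public_grantee:
            return "NON_COMPLIANT"  # bucket exposes data to the entire Internet
    return "COMPLIANT"
```

Wire logic like this into a Config rule with an auto-remediation action (e.g. one that re-applies a public access block) and the “S3 buckets will not expose data directly to the Internet” business rule starts to enforce itself.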

In summary, AWS:Config verifies sensible configuration by the people with legitimate access to your cloud, and makes proving that to auditors really easy. Change Control Auditing as a Service.

What about attackers?

AWS also has you covered with GuardDuty, essentially an in-cloud IDS monitoring for all manner of nefarious activity. If you are dropped into an existing AWS workload, either review GuardDuty findings for an overview of what threats are bombarding the systems; or enable it (it’s quick and relatively inexpensive) and then review the findings. It’s one of those services that you hope never troubles your inbox or commands too much of your time, but you can rest easy(-ier) in the knowledge that it’s monitoring for threat activity 24/7 so you don’t have to.

Central Control?

We’re still only in our hypothetical day one of being responsible for a new AWS environment, and we’re already managing a good number of systems. Time to call our favourite reseller for one of those “single pane of glass” platforms? Nope, AWS again has you covered, in this case with SecurityHub: recently(-ish, 2018) released, it combines security findings and metrics from a number of different sources (including those above, funny how that happens) and presents them all from one service.

And as an added bonus, if you got a bit overwhelmed by the quantity of potential rules you can employ within AWS:Config (before we even get to custom rules), you can stand on the shoulders of giants (CIS, the PCI Council, or AWS themselves) and enable a steadily growing list of security standards, which will enable and report back on a multitude of checks and metrics at the touch of an Enable button. Does this workload handle payment data and fall under scope for PCI DSS controls? Enable the PCI-DSS standard, and the relevant checks will spring to life, helpfully mapped to the specific control requirements for easy auditing.

The End….

With that, my whirlwind tour of where I’d start reviewing and securing an AWS account I was newly responsible for was over. I’ve not quite put it into practice to see if you could actually achieve all of that on day one, but it’s good to have goals. If anyone puts it to the test, I’d be curious to know the outcome when theory meets reality.

Unlike the inaugural DC44191, kicked off by the inimitable Mike Thompson discussing 3rd party web apps, this meeting wasn’t recorded for posterity; which, as I have a face for radio, not live streaming, is probably a good thing. Despite that, I’ve since regretted the decision not to record, as I’ve had a few questions from people after the event that would have been easier to answer with a recording of the talk to point to. The session’s Q&A was also lively and covered some more advanced topics I’d considered but left out of the presentation; it would have been good to have those interactions recorded too, especially as some audience members more knowledgeable than I jumped in to expand on answers as well as ask perceptive questions.

  • If you were in attendance for the live DC44191 presentation, I hope it was worth your time.
  • Similarly, if you’ve got this far, thank you for reading and I hope it’s sparked some ideas which can help your current and future challenges securing AWS workloads.
  • If you’ve any questions, or would like to discuss any aspect in more depth; please leave a comment below, or find me on twitter


Andrew Waite

A Northern Geeks trip, well, nowhere

It’s hard to judge time given current non-technical goings-on, but it’s (about) a year since the “A Northern Geeks trip…..” series stayed close to home. That was the inaugural BSides Newcastle, and somehow it came time for the 2020 edition. This brought some changes. Firstly, C-19 forced the organising team to abandon some amazing plans as this year’s event went virtual (trust me, some of the plans would have been amazing; no spoilers, as I hope they get resurrected for next year, when hopefully we can once again share a meatspace location). Secondly, on a personal level I stepped back from the organising team this year; the strain of C-19 meant I couldn’t take on any additional demands, instead focusing on young family, paid employment, and mental health. Thankfully, and entirely predictably, my absence had zero negative impact on a great conference, which I was able to selfishly enjoy risk-free as a participant.

My first thought: virtual conferences still feel weird to me. As I posted at the time, livestreaming into my living room a conference I would normally be travelling to in person just felt off. I suspect this may be the delivery method for the foreseeable future, so I hope I can get used to it quickly.

A negative of being in home mode, rather than conference mode, is that my note taking during talks was dire, so I can’t provide my usual long-form review of the sessions I attended. But there were a few talks that stood out that I’d like to mention:

  • Sam Hogy‘s talk on Friday covering securing CI/CD pipelines was excellent, and definitely had some content that I want to review again later.
  • Avinash Jain covered a similar topic with a discussion of moving security to earlier steps of the DevSecOps pipeline.
  • It’s hard to make topics that include the word compliance interesting, even to those of us that work within the various frameworks. But Bugcat did a great job of walking through methods of leveraging SIEM logs and capabilities to drive and prove PCI-DSS compliance. My only complaint was that the resolution of the demos/screenshares made some of the exact content hard to make out.

Looking at the talks that really stood out, I found it interesting that my preference in conference material has shifted along with my professional change from red team to blue over the last few years. Whilst they were good talks, Gabriel Ryan generating obfuscated malware payloads on the fly with the introduction of DropEngine, or Mauro and Luis weaponising USB powerbanks didn’t pique my interest the way similar topics have in previous years.

That’s the talks covered, but BSidesNewcastle wouldn’t be living up to its tagline of #WhereTheWeirdThingsGrow (emphasis mine) without some weird. The remote nature of this year’s con meant we weren’t all huddled in a skate park, or watching a wrestling display whilst enjoying fresh stone-baked pizza, but the team did not disappoint on the weird front. From the Antaganostics waving socks containing bricks at swordsmen to settle disputes (don’t ask) to a tin-foil-hat making competition, there was plenty of fun to be had, and memories to be treasured.

So whilst I may personally struggle with the context shift to virtual cons, in a year with physical cons (rightly) cancelled left, right, and centre, I’d like to extend my deep appreciation to all of those involved in making the event a great success against all the difficulties this year has presented. This equally goes for the corporate sponsors who helped provide the resources that make any conference possible; the move away from physical conferences must have made sponsorship a risky ROI discussion, and I hope the faith in the BSidesNewcastle team and community was well rewarded (and I’ll try not to take it personally that my own corporate overlords sponsored this year’s event, but were deemed too risky when I was personally involved in running last year’s proceedings. #itHurts….. 😀 )

Until next year, hopefully we can all safely return to meet, hack and be merry in person.


Andrew

AWS Cloud Development Kit

After posting previously about dipping my toe in the Infrastructure as Code waters with Terraform, a kind individual (who requested to stay nameless) asked if I’d encountered AWS’ native Cloud Development Kit (CDK). I vaguely remember seeing a Beta announcement back when the toolkit was first announced, but had discounted it at the time as it wasn’t stable for production workloads. Reviewing again, I’d missed the announcement that the CDK had graduated to general availability in July last year.

Whilst I was quite happy with Terraform’s performance where I’ve used it, my mantra towards cloud-based platforms is: (where possible) stay native. So I took a look at what the AWS-CDK offered, and immediately came across some excellent resources to get me rapidly up to speed.

  • The CDK Workshop provides great setup guides, and without being a dev I was able to get the CDK functional in my environment with minimal fuss. The workshop is helpfully split and duplicated into your (supported) language of choice (more on this benefit later). If you’ve been here before, it should be no surprise that I took advantage of the Python modules.
    N.B. The workshop will set you up with AWS access keys carrying Administrator privileges. This may have been the trigger for yesterday’s post on protecting your keys with MFA. You may want to consider the same….
  • The AWS-Samples repo should be a required bookmark for anyone working with AWS. The AWS-CDK-Examples repo is no exception, providing great real-world use cases to suggest potential architecture design patterns. As with the workshop above, the examples repo has examples across all supported coding languages.
  • Buried in the above examples repo was a link to a recorded demo from two of the CDK’s lead developers, Elad Ben-Israel and Jason Fulghum. I’d definitely recommend taking an hour to watch the devs leverage the power of the CDK to live-code a solution to a real-world problem; it greatly helped me get started.

(Almost) Language agnostic

In addition to being native from AWS themselves, one of the immediate values to me is that (unlike Terraform, CloudFormation, or similar) the CDK maps into common programming languages as just another set of libraries/modules. From my perspective this means I can leverage the power of IaC whilst staying within my preferred Python, without needing to learn an additional language/syntax.

For the curious, this is enabled by the JSII, which maps between the toolkit and a given language’s syntax and structure, but I’ll admit this aspect gets me well outside of my coding comfort zone; I’ll just appreciate that it works. In practice this means that if Python isn’t your language of choice, you’ve plenty of other popular options.

Getting Started

If I’ve whet your appetite, I’d recommend you stop reading the blog of someone who is just getting to grips with the CDK himself and jump into the resources above.

If you’re still here (why? seriously, check out the links above from those who know what they’re talking about), I’ve essentially mapped the primary commands/features I’d leveraged in Terraform to their CDK equivalents.

terraform init <==> cdk init
cdk init will do what it says on the tin: initialise your current working directory with the basic building blocks needed to start defining your architecture and pushing to the cloud.

Warning: if you’ve an existing codebase you’re planning to CDK-ify, I’d strongly recommend init’ing in a blank directory first, so you can review the changes the command will make to your existing workspace.

terraform plan <==> cdk synth
cdk synth takes the code you’ve defined in your language of choice and creates a CloudFormation stack to deploy the defined architecture, assets and configuration.

terraform plan <==> cdk diff
Diverging from Terraform’s workflow, cdk diff provides a view of your environment from the perspective of what changes are going to be made: going from your environment as it’s currently running to the future state that deploying your current CDK stack will create. Leading us to…..

terraform apply <==> cdk deploy
cdk deploy is the first command that will actually make changes to your running AWS environment. All being well, it takes the CloudFormation template developed by cdk synth and actually runs the template under CloudFormation to build, modify or remove your infrastructure as required.

terraform destroy <==> cdk destroy
Again, as it says on the tin – cdk destroy will destroy all(*) resources created and managed by its defined stack(s), removing them from AWS. As I stated when discussing Terraform and IaC originally, this is the key value for my interest in IaC toolkits: confidence that at the end of a session I can tear down the infrastructure I’ve been using and (hopefully) not get hit with a nasty AWS bill if I forget about service $x and leave it running for a few weeks.

(* As I found with Terraform, S3 buckets (and I suspect other data services) don’t get removed by the various destroy commands if they’ve been populated with any data after being instantiated – you have been warned…..)

First impressions…

…very good. As I did with Terraform, I put together a proof of concept using the CDK to deploy my little microservice for automatically spotting credit cards in images. This was something I was able to achieve with a few hours of learning, research, and trial and error. I’ll likely cover this separately shortly….

With Terraform I found the use-case of a single Lambda function and peripheral services harder than expected; the code for the Lambda needed to be packaged manually as an archive prior to diving into the various *.tf configuration files. Admittedly not the end of the world once the process flow was known, but I did find it broke workflow and felt a little cumbersome.
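
For illustration, that manual Terraform packaging step amounted to something like the following sketch (the "./lambda" source directory and archive name here are stand-ins, fabricated in a temp directory so the snippet is self-contained):

```python
import os
import shutil
import tempfile

# Fabricate a stand-in for a real Lambda source tree ("./lambda" in the post),
# so this sketch runs anywhere.
workdir = tempfile.mkdtemp()
src_dir = os.path.join(workdir, "lambda")
os.makedirs(src_dir)
with open(os.path.join(src_dir, "handler.py"), "w") as f:
    f.write("def handler(event, context):\n    return 'ok'\n")

# The manual step: zip the source directory into lambda.zip, which the
# Terraform aws_lambda_function resource then references as its artefact.
archive_path = shutil.make_archive(os.path.join(workdir, "lambda"), "zip", root_dir=src_dir)
```

Not difficult, but it’s an extra step sitting outside the *.tf files themselves.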

With CDK this pain point was completely removed. When defining the Lambda resource in the CDK’s stack configuration, you simply pass the code parameter the directory containing your lambda function (and required libraries, if necessary), and CDK will do the rest on a deploy. For example, literally just:

example_function = _lambda.Function(self, "cardspotterLambda",
                                    code=_lambda.Code.asset("./lambda/"),
                                    ...)  # plus runtime, handler, etc.

Teaser for things to come…..


Andrew

AWS CLI – Forcing MFA

If you’re planning on using AWS efficiently, you’re going to want to automate with the CLI, the various SDKs, and/or the relatively newly released Cloud Development Kit (AWS-CDK). This typically requires an access key pair, which provides access to your account and needs securing against abuse. Adding MFA capabilities to the account reduces a lot of risk, and it works seamlessly with the web console UI, but can cause some confusion when dealing with CLI access.

For succinctness I’m going to assume that you have an existing user, and associated access keys already. Your ~/.aws/credentials will look like the below (relevant keys will be removed by the time this is posted):

[mfa-demo]
aws_access_key_id = AKIAQOEN7NXFSJGCSB24
aws_secret_access_key = NzPMANXBqFDfh7YwkYpIPBgbET94QFg75eswzG7l
region = eu-west-1

Permissions

For the sake of this demo, we’re going to duplicate the Admin policy (below), granting near God-like access to your AWS account. Note, you probably don’t want to do this in the real world as it clearly makes a mockery of the principle of Least Privilege, but it works for a demo as it is clearly a privileged account that we’d want to do our utmost to lock down and keep out of hostile hands.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "GodLikePermissions",
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*"
        }
    ]
}

AWS CLI

Using the AWS CLI tool, you can now do anything your user has permission to do; which with the permissions above, is just about anything. For example:

awaite@Armitage:/tmp$ aws --profile mfa-demo s3 ls
2019-09-28 20:55:13 <redacted>
2019-09-28 20:56:03 <redacted>
[--snip--]

Adding MFA token

Adding an MFA token is handled from the user’s security credentials page. If you’ve ever used virtual MFA tokens for literally any other service, then the process should be self-explanatory.

Job done?

Unfortunately not; the access key pair above will still provide access just as before.

Deny access without MFA

IAM policy evaluation logic causes an explicit deny to take priority over any competing policy statement. With this, we can add a policy statement to deny any action requested without MFA. Expanding the sample policy statement above produces:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Permissions",
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*"
        },
        {
            "Sid": "DenyNonMFA",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "BoolIfExists": {
                    "aws:MultiFactorAuthPresent": "false"
                }
            }
        }
    ]
}
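
As an illustrative sketch (a toy model, not AWS’s actual policy engine), the evaluation of the policy above can be thought of as: BoolIfExists "false" matches when the context key is absent or false, and an explicit Deny always beats an Allow:

```python
from typing import Optional

def request_allowed(action_allowed: bool, mfa_present: Optional[bool]) -> bool:
    """Toy model of evaluating the Allow + DenyNonMFA policy above.

    action_allowed: True if an Allow statement matches the request.
    mfa_present: the aws:MultiFactorAuthPresent context key, or None
    when the key is absent entirely (e.g. plain long-lived access keys).
    """
    # BoolIfExists "false" matches when the key is absent OR false...
    deny_matches = mfa_present is None or mfa_present is False
    # ...and an explicit Deny overrides any matching Allow.
    if deny_matches:
        return False
    return action_allowed
```

This is why plain access keys (where the MFA context key is absent) get denied, while session credentials obtained with an MFA token sail through.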

And running the same CLI command now throws an error:

awaite@Armitage:/tmp$ aws --profile mfa-demo s3 ls

An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied

Requesting session token with MFA

To recap, we now have a user with the same permissions as before, but unable to utilise them without verifying ownership of the matching MFA token. This is achieved with AWS’s Security Token Service, specifically the get-session-token function. From the command line:

  • --serial-number: ARN of the MFA token assigned to your user.
  • --token-code: the current MFA token code
  • [optional] --duration-seconds: lifetime for the token to remain valid, in seconds (defaults to 12 hours for IAM users)
awaite@Armitage:/tmp$ aws --profile mfa-demo sts get-session-token --serial-number arn:aws:iam::<account_number>:mfa/mfa_demo --token-code 987654
{
    "Credentials": {
        "SessionToken": "FwoGZXIvYXdzEPb//////////wEaDEioLK19BAZ+rPCosiKGAa2cfjZK99HUj8e9w9ZowKuz5ccWo8t3oSBaSiTv70Km0uYigFWXEa1EVjzcf2PD8LYR4paAeaJrLY+8q4MVmWVslYMskVPh22TdLxF24yEaELq/MBlbBnvBwDH37tTvd8nQlD/jXsmI00ludQh4XRUbhzV+76dUgZG9BcLRB47/ClThsp47KPjYsvEFMijGo2SOHNI8xh16TFJnLIZyx4qZ9Y0A65eugu0CnclDT01KoWnLIC1x",
        "Expiration": "2020-01-26T09:00:40Z",
        "AccessKeyId": "ASIAQOEN7NXF7VSXV45K",
        "SecretAccessKey": "kfr9bELctF7okIUSSylmwepLI9jJDEH9gcaNNbML"
    }
}

Once obtained, the credentials need adding into your ~/.aws/credentials file; note the additional aws_session_token variable:

[mfa-session]
aws_access_key_id = ASIAQOEN7NXF7VSXV45K
aws_secret_access_key = kfr9bELctF7okIUSSylmwepLI9jJDEH9gcaNNbML
aws_session_token = FwoGZXIvYXdzEPb//////////wEaDEioLK19BAZ+rPCosiKGAa2cfjZK99HUj8e9w9ZowKuz5ccWo8t3oSBaSiTv70Km0uYigFWXEa1EVjzcf2PD8LYR4paAeaJrLY+8q4MVmWVslYMskVPh22TdLxF24yEaELq/MBlbBnvBwDH37tTvd8nQlD/jXsmI00ludQh4XRUbhzV+76dUgZG9BcLRB47/ClThsp47KPjYsvEFMijGo2SOHNI8xh16TFJnLIZyx4qZ9Y0A65eugu0CnclDT01KoWnLIC1x
region = eu-west-1

And with that, we’re back to being able to work with the CLI, more confident that we’re the only ones using this key:

awaite@Armitage:/tmp$ aws --profile mfa-session s3 ls
2019-09-28 20:55:13 <redacted>
2019-09-28 20:56:03 <redacted>

For more information, a couple of AWS Knowledge Base and documentation articles proved useful in getting all the pieces lined up correctly:


Andrew