You’ve gotten this far. You’ve made one bucket. Now, what if you need to do the same thing for a bunch of other accounts? Infrastructures can grow incredibly quickly in the cloud and they become a pain to manage from the console.

Configuring one bucket is fine, but have you tried fifty? What if, of those fifty, only one has been misconfigured by a member of your team? Don’t waste time checking each and every one through the console. There are better ways to do it.

Preventive measures mean granting access to your S3 bucket only to the users and resources that absolutely need it. Not everyone needs write access, and “delete” should probably be reserved for your most senior team members. Additionally, infrastructure should be written out as code, versioned, and tested in multiple environments before being released to production.

Detective measures mean improving visibility into your infrastructure. At scale, this means implementing procedures to be alerted, ideally in real time, when new security risks, wasteful resources, or performance and reliability issues are discovered. Essentially, large infrastructures require consistent and frequent auditing.

Corrective measures mean going one step further with your best practice compliance. Imagine a scenario where a junior team member makes a negligent change to your infrastructure that exposes your internal data to unauthorized users. Even one hour of exposure can be too much, and that’s just the time it’ll take for a senior team member to be alerted and fix the issue. In an ideal world, infrastructure corrects itself.

Before getting there, let’s start with the basics.

Infrastructure as Code (IaC)

Using the AWS console can get tedious, especially if you’re managing multiple accounts and environments for your organisation. The following three ways of configuring your infrastructure are all much better options than the console in these circumstances. The first one, especially, is a must-learn for anyone learning to develop for the cloud.

1) Command Line Interface (CLI)

Repeating a task manually will inevitably lead to human errors. Using the command line will significantly reduce that risk. A poorly written command will probably not execute, while a console misconfiguration can go unnoticed until it’s too late.

AWS Command Line Interface

I’d recommend having a quick read over the official docs to learn how to install it and run basic commands.

![](https://lh6.googleusercontent.com/q0DiAy8WQYvWUXDaPMhTxN6AsxPoll3RSnoPrRsPYkTYRqokSS0QoqLUNPryoD-a4Ut7dNhstWiiMrlvhiMNw4mRGfWGFk68HH-IZT_xXP5pKoLWxlqcRrDeSb8tayrKTfz61bnS “Step by step CLI guides in the Knowledge Base” =341x425)

Once you’ve done that, check out the Cloud Conformity S3 Knowledge Base. There are 17 step-by-step guides on implementing S3 best practices through the CLI, and over 350 guides across the different services.
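If you’d rather script these checks than type them out by hand, the same calls are available through the AWS SDKs. Here’s a minimal Python (boto3) sketch, a rough equivalent of the s3api commands used in those guides, that lists every bucket in an account and flags any whose ACL grants access to all users. It’s an illustration only: the grantee URI is the standard AllUsers group, and credentials are picked up the same way the CLI finds them.

```python
import boto3

# The default session picks up credentials the same way the AWS CLI does
# (environment variables, ~/.aws/credentials, or an instance role).
s3 = boto3.client("s3")

# Grantee URI that identifies "everyone" in an S3 ACL.
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    public = [
        grant["Permission"]
        for grant in acl["Grants"]
        if grant["Grantee"].get("URI") == ALL_USERS
    ]
    if public:
        print(f"{name}: publicly accessible via ACL ({', '.join(public)})")
    else:
        print(f"{name}: no public ACL grants")
```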

Of course, the CLI has its limitations. It’s still extremely tedious and error-prone to recreate a development environment in production using only the CLI. This is why configurations are written out as code that provisions entire infrastructures for you.

2) CloudFormation

3 minute intro video from AWS

CloudFormation is great. We use it here at Cloud Conformity to manage our infrastructure; however, it is a managed service exclusive to AWS. For organisations using more than one cloud, Terraform is a popular open-source option.
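To make the idea concrete, here’s a small sketch of driving CloudFormation from Python with boto3: it defines a one-bucket template, with versioning on and public access blocked, and creates a stack from it. The stack and resource names are placeholders, not anything we actually use at Cloud Conformity.

```python
import json
import boto3

# A minimal template: one S3 bucket with versioning enabled and all
# public access blocked. Resource and stack names are placeholders.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ExampleBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "VersioningConfiguration": {"Status": "Enabled"},
                "PublicAccessBlockConfiguration": {
                    "BlockPublicAcls": True,
                    "BlockPublicPolicy": True,
                    "IgnorePublicAcls": True,
                    "RestrictPublicBuckets": True,
                },
            },
        }
    },
}

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(
    StackName="example-s3-stack",
    TemplateBody=json.dumps(template),
)
```

The same template checked into version control is what lets you recreate a development environment in production without the tedium described above.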

3) Terraform

From the official Terraform docs:

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

These tools will help you version and better monitor your buckets, but you might still need help writing your S3 bucket policies. This is where the AWS Policy Generator can help out.

4) AWS Policy Generator

AWS policy generator

S3 buckets can be difficult for developers to work with. Other resources and processes often depend on reliable access to data stored in S3, so it can be tempting to give every resource access to every action: things will just work, and you will never see a permission-denied error. As you may have already guessed, this is not best practice. The Principle of Least Privilege is your golden rule when writing a bucket policy.

There are three basic types of actions other resources can take on objects within S3 buckets: GetObject, PutObject, and DeleteObject.

By default, resources should be granted none of these actions. Resources should only get the permissions they absolutely need to fulfill their purpose. This will greatly reduce the number of ways hackers can access your data.
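As an example of what least privilege looks like in practice, the sketch below applies a bucket policy that grants a single, hypothetical application role s3:GetObject and nothing else; the bucket name and role ARN are placeholders. You could generate the same JSON with the AWS Policy Generator and attach it from the console instead.

```python
import json
import boto3

BUCKET = "example-bucket"  # placeholder bucket name

# Least-privilege policy: one hypothetical application role may read
# objects, and nothing else is granted. PutObject and DeleteObject are
# simply absent rather than explicitly allowed.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadOnlyFromAppRole",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/example-app-role"},
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```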

Monitoring — From Part-Time to Real-Time

Now that your infrastructure has been written out as code, it needs to be properly tested. Security risks need to be discovered as they are introduced, and fixed before they reach production environments.

Enabling Amazon Macie is a good first step. This is an AWS managed service that will give you visibility into how your most critical data is being accessed and used. Check out the relevant rule page for more details and instructions on how to enable the service.

Beyond this, you will want the ability to detect bad practices in your account, ideally in real time. You can do this via CloudWatch and CloudWatch Event Rules.

![](https://lh4.googleusercontent.com/AZxMjv94T__x5K4IQA1MjCgC3Xxx_vTgMzJ7xwAifk8oBjrcDAUHGsqK_WfXdDC83h0RbAxbLmxZH63kuvFENSGxmQHxTmyUKLMTTDxYZY-9xWst2nMwjRttRpIbhlCPu75p137o “A map of how an automated infrastructure monitoring system can be set up.” =602x392)

Using these, you can specify which actions you’re interested in, for example, a bucket policy being changed. If an action taken on your infrastructure matches a rule you’ve defined in CloudWatch, you can tell CloudWatch to trigger a Lambda function of your choice. This Lambda function should send metadata to a separate AWS account that you use to manage security and DevOps.
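Here’s a rough boto3 sketch of that wiring: a CloudWatch Events rule that matches bucket policy and ACL changes recorded by CloudTrail and forwards them to a handler Lambda function. The rule name and function ARN are placeholders, and the Lambda function also needs a resource policy allowing CloudWatch Events to invoke it, which is omitted here for brevity.

```python
import json
import boto3

events = boto3.client("events")

# Match CloudTrail records for bucket-level policy or ACL changes.
# Bucket-level S3 API calls only reach CloudWatch Events when
# CloudTrail is enabled in the account.
pattern = {
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["PutBucketPolicy", "PutBucketAcl"],
    },
}

events.put_rule(
    Name="s3-policy-change-rule",  # placeholder rule name
    EventPattern=json.dumps(pattern),
    State="ENABLED",
)

# Point the rule at whichever Lambda function handles the event;
# the ARN below is a placeholder.
events.put_targets(
    Rule="s3-policy-change-rule",
    Targets=[{
        "Id": "notify-handler",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:example-handler",
    }],
)
```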

This is an extremely flexible system. For example, at Cloud Conformity, our handler Lambda functions publish SNS notifications that reach our development team leads with actionable information about newly introduced security risks.

The Future: Self-Healing & Auto-Remediation

Finding best practice failures and notifying staff is not the last step in managing cloud infrastructures at scale. Ideally, an infrastructure should correct itself. This is the cutting edge of DevOps and cloud management.

CloudWatch and Lambda can be used to do more than just infrastructure detective work. Lambda functions can be used to automatically correct AWS resources that are non-compliant with your organisation’s security practices. Here’s a scenario:

  1. A user makes an S3 bucket publicly readable via S3 Access Control Lists (ACLs)
  2. The detective system identifies the risk in real-time
  3. It publishes a message to the specified SNS Topic
  4. The SNS topic triggers the orchestrator Lambda function, which in turn calls the S3 bucket auto-remediate function
  5. The S3 BucketPublicReadAccess auto-remediate function updates the S3 bucket ACL and closes the security gap (a minimal sketch of such a function follows below)
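Our open-source project (linked below) has its own implementation, but as a rough illustration, a remediation Lambda function for step 5 can be as small as this. It assumes the incoming message carries the bucket name under a bucketName key, which is a simplification of the real payload.

```python
import boto3

# Grantee URI that identifies "everyone" in an S3 ACL.
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"


def handler(event, context):
    """Minimal remediation: strip public grants from a bucket's ACL.

    Assumes the triggering message carries the bucket name under
    event["bucketName"]; the real project may use a different shape.
    """
    bucket = event["bucketName"]
    s3 = boto3.client("s3")

    acl = s3.get_bucket_acl(Bucket=bucket)
    grants = [
        g for g in acl["Grants"]
        if g["Grantee"].get("URI") != ALL_USERS
    ]

    # Re-apply the ACL without any AllUsers grants.
    s3.put_bucket_acl(
        Bucket=bucket,
        AccessControlPolicy={"Owner": acl["Owner"], "Grants": grants},
    )
    return {"bucket": bucket, "removedGrants": len(acl["Grants"]) - len(grants)}
```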

Although you can definitely set up a system like this on your own, we’ve started an open-source auto-remediation project. Please feel free to contribute to, take inspiration from, or comment on our public GitHub repo.

The following are additional best practices and tips that didn’t make it into this article but are worth mentioning.

  • AWS CloudFront is a service that integrates extremely well with S3. It can be used to increase the speed of your data transfers if your users are far away from where your S3 buckets are located. Check the speed comparison tool to decide whether it’s worth enabling, as additional costs apply.
  • Storage Classes are worth studying if S3 costs are soaring. Changing the storage class can reduce your bill by a tremendous amount; the tradeoff is slower access to your data. Objects in Amazon S3 Glacier, for example, can take hours to retrieve, compared to the near-instant retrieval of the standard storage class. A minimal lifecycle-rule sketch follows this list.
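As promised above, here’s a small boto3 sketch of a lifecycle rule that transitions objects to Glacier after a set period; the bucket name, prefix, and 90-day threshold are placeholders you’d tune to your own access patterns.

```python
import boto3

s3 = boto3.client("s3")

# Example lifecycle rule: after 90 days, move objects under "logs/"
# to Glacier. Bucket name, prefix, and the 90-day threshold are all
# placeholders to adapt to your own data-access patterns.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```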

Many thanks to Mike Rahmati (our CTO) for the technical guidance in writing this article.