Do you have leaky buckets? Find out if you should be worried and what you should do about it.

An extremely secure storage facility.

Amazon Simple Storage Service (S3) is the most ubiquitous of AWS’s services. It’s been around since the beginning of AWS and integrates extremely well with most of the other services.

You’ve probably used something like it before. Dropbox, which used S3 for 8 years, works in a pretty similar way. Upload your files and download them, from anywhere, at any time.

Advantages

Many organisations also use S3 to serve HTML files.

If you need to host a static website, you could go with an Apache server. However, you’re likely to face issues of scale, availability, and durability.

These are non-issues when using S3.

  • Scaling is automatic, and there are essentially no practical limits on storage capacity.
  • Availability is designed for 99.99% uptime and backed by a Service Level Agreement, one of the strongest availability commitments of any AWS service.
  • Durability is designed for 99.999999999% (eleven nines) of objects over a given year. Practically, this means that if you store 10,000,000 objects, you should expect to lose a single object once every 10,000 years on average (the quick calculation below shows why). This has never happened in the history of S3. You can say with almost certainty that the files will be there forever.
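
For the curious, the arithmetic behind that figure fits in a few lines of Python (the object count is just an illustrative assumption):

```python
objects_stored = 10_000_000            # illustrative assumption
durability = 0.99999999999             # eleven nines, S3's design target
expected_losses_per_year = objects_stored * (1 - durability)   # = 0.0001 objects
years_per_lost_object = 1 / expected_losses_per_year           # = 10,000 years
print(years_per_lost_object)
```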

Pricing

Like many other AWS services, S3 is priced based on usage. Uploading to S3 is virtually free, but storage, downloading, and other requests to S3 will increase your monthly cost. Be sure to create your bucket in a region close to your end users, because data transfer costs between regions can add up very quickly.

Problems

So now that you know what it is, what it’s used for, and how you’ll be charged, how do you use it?

The simplest way is to log in to the AWS Console and go through the ‘create a new S3 Bucket’ wizard. This is normally where most developers’ problems start.

Sometimes, for convenience, developers will change S3 bucket configurations so that files are a bit easier to access and work with, without having to worry about permissions or IP address restrictions. However, not all S3 buckets have the same value. Some could contain logs meaningless to anyone without context, whereas others might host very sensitive info: personally identifiable information (PII) such as user email addresses, phone numbers, physical addresses, names, or even activity logs.

It’s extremely important to make sure that none of your S3 buckets are exposed to users that should not have access.

Imagine you have a hard drive sitting in your computer. You could, without realizing it, share this drive with your local network simply by ticking or unticking a box that you shouldn’t have.

On a much larger scale, S3 is similar. An S3 bucket is an internet directory anyone can read from or write to if its configuration has been set to public. You could expose your data and put your business at risk of appearing on national news.

At Cloud Conformity, we use S3 as temporary storage. If our buckets were misconfigured, we would expose our customers’ data to the public. Even then, anyone downloading files from our buckets would only get encrypted information, which would be unusable to them, but we still have to be paranoid about our data security. Fortunately, we’re AWS pros 💪, here to share best practices and make the internet a safer place.

How to configure an S3 bucket like a pro

The S3 bucket creation wizard

I’ll walk you through the creation of an S3 bucket using the AWS console’s wizard. This is the most basic way to configure your bucket. Not all options will be found here, so if your specific environment needs something custom, make sure to read the sections following this one.

Name — Put something unique and meaningful to you. What are you going to store in this? The name will be used in the bucket’s URL, so dasherize your bucket name if it has more than one word.

Region — Choose somewhere close to your end users. Don’t choose Mumbai as a region if you’re serving the USA. Not only will your infrastructure be slow, but your bill will be a lot higher than it needs to be. See AWS’ pricing page for more info.

Copy settings — Ignore this.

Versioning — Part of the reliability pillar. You’ll want to enable this as an extra layer of data protection and/or data retention. See the relevant rule page for more information.
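
If you would rather script this than click through the wizard, a minimal sketch with Python and boto3 might look like this (the bucket name is hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Turn on versioning so overwritten or deleted objects can be recovered
s3.put_bucket_versioning(
    Bucket="my-data-bucket",  # hypothetical bucket name
    VersioningConfiguration={"Status": "Enabled"},
)
```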

Server access logging — Part of the security pillar. Use a dedicated log bucket, with “S3” as the prefix. That makes it easy to tell which service each log belongs to, since the folders in your log bucket will match your chosen prefixes.
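
In code, that configuration looks roughly like the sketch below (boto3; both bucket names are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# The target log bucket must grant the S3 Log Delivery group write access,
# e.g. by creating it with the "log-delivery-write" canned ACL.
s3.put_bucket_logging(
    Bucket="my-data-bucket",              # hypothetical source bucket
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-log-bucket",  # hypothetical dedicated log bucket
            "TargetPrefix": "S3/",            # prefix identifying the service
        }
    },
)
```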

Log management & monitoring in AWS is a huge rabbit hole that is out of scope for this post. Suffice to say that best practice is to set up a completely new AWS account dedicated to logs, where you lock the account’s metaphorical door and throw away its metaphorical key.

Check out the relevant rule page for more information.

Tags — Part of the operational excellence pillar. Best practice suggests that you should tag your AWS resources with four different tags:

  • Name: used to identify individual resources.
  • Role: used to describe the function of a specific resource (e.g. web tier, database tier).
  • Environment: used to distinguish between different stages (e.g. development, production).
  • Owner: used to identify the person responsible for the resource.

See the resource tags rule page for additional information.
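
Applying those four tags from code could look like the sketch below (boto3; every value is a placeholder):

```python
import boto3

s3 = boto3.client("s3")

# Note: put_bucket_tagging replaces the bucket's entire tag set
s3.put_bucket_tagging(
    Bucket="my-data-bucket",  # hypothetical bucket name
    Tagging={
        "TagSet": [
            {"Key": "Name", "Value": "my-data-bucket"},
            {"Key": "Role", "Value": "web tier"},
            {"Key": "Environment", "Value": "production"},
            {"Key": "Owner", "Value": "jane.doe"},
        ]
    },
)
```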

Object-level logging — Part of the security pillar.

For particularly sensitive buckets where you want to track who has accessed what data, you may want to consider enabling this feature.

From the official docs:

Data events are object-level API operations that access Amazon S3 buckets, such as GetObject, DeleteObject, and PutObject. By default, trails don’t log data events, but you can configure trails to log data events for S3 objects that you specify, or to log data events for all Amazon S3 buckets in your AWS account.

For more information, check out this official AWS tutorial.
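
If you prefer to script it, the general idea looks like the following sketch with boto3; it assumes you already have a CloudTrail trail, and the trail and bucket names are hypothetical:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Log object-level (data) events for a single sensitive bucket
cloudtrail.put_event_selectors(
    TrailName="my-trail",  # hypothetical existing trail
    EventSelectors=[
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    # Trailing slash means "all objects in this bucket"
                    "Values": ["arn:aws:s3:::my-sensitive-bucket/"],
                }
            ],
        }
    ],
)
```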

Default encryption — Part of the security pillar. Enabling Server-side encryption will protect your data at the object level.

There are two options to choose from in the wizard.

  1. Amazon S3-Managed Keys (SSE-S3) represents Model B in Figure 1, below. S3 encrypts each object with AES-256 and manages the keys entirely for you, so objects are decrypted transparently whenever an authorised user downloads them through the API.
  2. AWS KMS-Managed Keys (SSE-KMS) represents Model C in Figure 1. Keys are managed through the AWS Key Management Service, which gives you an audit trail of key usage and requires callers to have permission to use the key before S3 will decrypt your data for them.

There is also a third option, Model A, where the customer provides their own encryption keys. AWS will not store the keys, meaning that if you lose them, you lose your data. It also means that there is no way for AWS to access your data.

In cases where your data is extremely valuable and confidential, say, a list of eleven herbs and spices, you may want to consider using this. You can read more about it here.
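
Back to the wizard options: enabling default encryption from code looks roughly like this sketch (boto3; the bucket name and KMS key alias are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Default to SSE-S3 (AES-256) for every new object in the bucket
s3.put_bucket_encryption(
    Bucket="my-data-bucket",  # hypothetical bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
            # For SSE-KMS instead, use:
            # {"ApplyServerSideEncryptionByDefault": {
            #     "SSEAlgorithm": "aws:kms",
            #     "KMSMasterKeyID": "alias/my-key"}}   # hypothetical key alias
        ]
    },
)
```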

Manage Users — Leave this as is for now. Similar to log management, user management is another AWS rabbit hole. Identity and Access Management will feature its own in-depth guide.

Access for other AWS account — Part of the security pillar.

In cases where your infrastructure is on multiple different AWS accounts, granting access for other accounts is necessary.

The master billing bucket, where detailed AWS billing info is consolidated, is one such resource where access for other AWS accounts needs to be granted.

See the dedicated S3 cross account access rule page for additional information.
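
As an illustration, a bucket policy that grants a second account read access might be sketched like this (boto3; the account ID and bucket name are placeholders):

```python
import json
import boto3

s3 = boto3.client("s3")

# Allow a second AWS account to read objects from this bucket
cross_account_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CrossAccountRead",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # placeholder account
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-data-bucket/*",  # placeholder bucket
        }
    ],
}
s3.put_bucket_policy(Bucket="my-data-bucket", Policy=json.dumps(cross_account_policy))
```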

Manage public permissions — Leave the default, which does not grant public read access. There are more granular options outside of the wizard; if you genuinely need public access, configure it deliberately there.

Manage system permissions — Is this bucket the target for server access logs? You should only have one of those, and it needs to grant the Amazon S3 Log Delivery group write access. In most cases, you’ll want to choose the “Do not grant” option.


That’s all there is to it! In the next part, I’ll be going over how to configure your S3 buckets in a more granular, reliable, and scalable way: with code.

Further Configurations and Scale Considerations

You’ve gotten this far. You’ve made one bucket. Now, what if you need to do the same thing for a bunch of other accounts? Infrastructures can grow incredibly quickly in the cloud. They become a pain to manage from the console.

Configuring one bucket is fine, but have you tried fifty? What if, of those fifty, only one has been misconfigured by a member of your team? Don’t waste time checking each and every one through the console. There are better ways to do it.
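
One better way is a small audit script. Here is a minimal sketch with boto3 that flags buckets whose ACLs grant access to everyone (it only checks ACLs, not bucket policies):

```python
import boto3

s3 = boto3.client("s3")

# ACL grantees that make a bucket readable or writable by the world
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    public = [g["Permission"] for g in acl["Grants"]
              if g["Grantee"].get("URI") in PUBLIC_GRANTEES]
    if public:
        print(f"{bucket['Name']} has public grants: {public}")
```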

Preventive measures mean giving access to your S3 bucket only to those users and resources that absolutely need it. Not everyone needs write access, and delete should probably be reserved for your most senior team members. Additionally, infrastructure should be written out as code, versioned, and tested in multiple environments before being released to production.

Detective measures mean improving visibility into your infrastructure. At scale, this can mean implementing procedures to be alerted, ideally in real time, when new security risks, wasteful resources, or performance and reliability issues are discovered. Essentially, large infrastructures require consistent and frequent auditing.

Corrective measures mean going one step further with your best practice compliance. Imagine a scenario where a junior team member makes a negligent change to your infrastructure that exposes your internal data to unauthorized users. Even one hour of exposure can be too much, and that’s just the time it’ll take for a senior team member to be alerted and fix the issue. In an ideal world, infrastructure corrects itself.

Before getting there, let’s start with the basics.

Infrastructure as Code

Using the AWS console can get tedious, especially if you’re managing multiple different accounts and environments for your organisation.

The following three ways of configuring your infrastructure are all much better options than the console. The first one, especially, is a must-learn for anyone learning to develop for the cloud.

Command Line Interface

Repeating a task manually will inevitably lead to human errors. Using the command line will significantly reduce that risk. A poorly written command will probably not execute, while a console misconfiguration can go unnoticed until it’s too late.

AWS Command Line Interface

I’d recommend having a quick read over the official docs to learn how to install it, and run basic commands.

Step by step CLI guides in the Knowledge Base

Once you’ve done that, check out the Cloud Conformity S3 Knowledge Base. There are 17 step by step guides on implementing S3 best practices through the CLI, and over 350 guides across the different services.

Of course, the CLI has its limitations. It’s still extremely tedious and error-prone to recreate a development environment in production using only the CLI. This is why configurations are written out as code that provisions entire infrastructures for you.

CloudFormation

3 minute intro video from AWS

CloudFormation is great. We use it here at Cloud Conformity to manage our infrastructure. However, it is a managed service exclusive to AWS.

For organisations using more than one cloud, Terraform is a popular open source option.

Terraform

From the official Terraform docs:

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

These tools will help you version and better monitor your buckets, but you might still need help making your S3 configuration template. This is where the AWS Policy Generator can help out.

AWS Policy Generator

AWS policy generator

AWS S3 buckets can be difficult for developers to work with. Other resources and processes often depend on reliable access to data stored on S3, so it may be tempting to give all resources access to all actions. Things will just work, and you will never see a permission-denied error. As you may have already guessed, this is not best practice.

The principle of least privilege is your golden rule when writing bucket policies.

There are three basic actions other resources can take on objects within S3 buckets: GetObject, PutObject, and DeleteObject.

By default, resources should get privileges for none of these actions. Resources should only get the permissions they absolutely need to fulfill their purpose. This will greatly reduce the number of ways hackers can access your data.
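
For example, a least-privilege bucket policy that grants a single application role read-only access could be sketched as follows (boto3; the role ARN and bucket name are hypothetical):

```python
import json
import boto3

s3 = boto3.client("s3")

# Only the application role may read objects; nothing else is granted
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AppRoleReadOnly",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/app-server"},  # hypothetical role
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-data-bucket/*",  # hypothetical bucket
        }
    ],
}
s3.put_bucket_policy(Bucket="my-data-bucket", Policy=json.dumps(read_only_policy))
```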

Monitoring — From Part-Time to Real-Time

Now that your infrastructure has been written out as code, it needs to be properly tested. Security risks need to be discovered as they are introduced, and fixed before they reach production environments.

Enabling Amazon Macie is a good first step. This is an AWS managed service that will give you visibility into how your most critical data is being accessed and used. Check out the relevant rule page for more details and instructions on how to enable the service.

Beyond this, you will want the ability to detect bad practices in your account, ideally in real time. You can do this via CloudWatch and CloudWatch Events rules.

A map of how an automated infrastructure monitoring system can be set up.

You can specify which actions you’re interested in; for example, a bucket policy being changed. If the action taken on your infrastructure matches a rule you’ve defined in CloudWatch, you can tell CloudWatch to trigger a Lambda function of your choice. This Lambda function should send metadata to a separate AWS account that you use to manage security and DevOps.
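
Here is a rough sketch of such a rule using boto3. It assumes a CloudTrail trail is already recording S3 bucket-level API calls, the Lambda ARN is a placeholder, and granting CloudWatch Events permission to invoke the function is omitted:

```python
import json
import boto3

events = boto3.client("events")

# Fire whenever a bucket ACL or bucket policy is changed
event_pattern = {
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["PutBucketAcl", "PutBucketPolicy"],
    },
}

events.put_rule(Name="s3-config-change", EventPattern=json.dumps(event_pattern))
events.put_targets(
    Rule="s3-config-change",
    Targets=[{
        "Id": "security-handler",
        # placeholder Lambda ARN
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:security-handler",
    }],
)
```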

This is an extremely flexible system. For example, at Cloud Conformity, our handler Lambda functions publish SNS notifications that reach our development team leads with actionable information about newly introduced security risks.

The Future: Self-Healing & Auto-Remediation

Finding best practice failures and notifying staff is not the last step in managing cloud infrastructures at scale. Ideally, an infrastructure should correct itself.

This is the cutting edge of DevOps and cloud management. CloudWatch and Lambda can be used to do more than just infrastructure detective work. Lambda functions can be used to automatically correct AWS resources that are non-compliant with your organisation’s security practices.

Here’s a scenario:

  1. A user makes an S3 bucket publicly readable via S3 Access Control Lists (ACLs)
  2. The detective system identifies the risk in real-time
  3. It publishes a message to the specified SNS Topic
  4. The SNS topic triggers the orchestrator Lambda function, which in turn calls the S3 bucket auto-remediate function
  5. The S3 BucketPublicReadAccess auto-remediate function updates the S3 bucket ACL and closes the security gap (a sketch of this step follows below)
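
Step 5 can be surprisingly small. Below is a minimal Lambda handler sketch in Python with boto3; it assumes the CloudTrail event is passed through to the function unchanged, which may differ from how the open-source project linked below structures its messages:

```python
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Reset a bucket that was just made public back to a private ACL."""
    # Bucket name as recorded by CloudTrail for PutBucketAcl calls
    bucket = event["detail"]["requestParameters"]["bucketName"]
    s3.put_bucket_acl(Bucket=bucket, ACL="private")
    return {"remediated": bucket}
```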

Although you can definitely set up a system like this on your own, we’ve started an open-source auto-remediation project. Please feel free to contribute to, take inspiration from, or comment in our public GitHub repo.

cloudconformity/auto-remediate

The following are additional best practices and tips that didn’t make it into the article, but are worth mentioning.

  • CloudFront is a service that integrates extremely well with S3. It can be used to speed up your data transfers if your users are far away from where your S3 buckets are located. See the speed comparison tool to check whether it’s worth enabling, as additional costs apply.
  • Storage Classes are worth studying if S3 costs are soaring. Changing the storage class can reduce your bill by a tremendous amount. The tradeoff is reduced access to your data: data in AWS Glacier, for example, can take hours to retrieve, instead of the near-instant retrieval of the standard storage class (a lifecycle sketch follows below).
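
As a sketch of that second point, a lifecycle rule that moves older log objects to Glacier could look like this (boto3; the bucket and prefix are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Move objects under the S3/ prefix to Glacier after 90 days
s3.put_bucket_lifecycle_configuration(
    Bucket="my-log-bucket",  # hypothetical log bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "S3/"},
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```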

Many thanks to Mike Rahmati (our CTO) for the technical guidance in writing this article.