I’ll walk you through the creation of an S3 bucket using the AWS console’s wizard. This is the most basic way to configure your bucket. Not all options will be found here, so if your specific environment needs something custom, make sure to read the sections following this one.

Name — Bucket names are globally unique across AWS, so pick something unique and meaningful to you. What are you going to store in it? The name will be used in the bucket’s URL, so dasherize your bucket name if it has more than one word (bucket names can’t contain spaces or uppercase letters).
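The naming rules can be sketched as a simple check. This is a rough approximation, not an official AWS validator — the real rules have a few extra edge cases (for example, names can’t look like IP addresses):

```python
import re

# Approximate S3 naming rules: 3-63 characters; lowercase letters,
# digits, dots, and dashes; must start and end with a letter or digit.
BUCKET_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    return bool(BUCKET_NAME_RE.match(name))

print(is_valid_bucket_name("my-photo-archive"))   # True
print(is_valid_bucket_name("My Photo Archive"))   # False: spaces and uppercase
```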

Region — Choose somewhere close to your end users. Don’t choose Mumbai as a region if you’re serving the USA. Not only will your infrastructure be slow, but your bill will be a lot higher than it needs to be. See AWS’ pricing page for more info.
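Behind the scenes, the wizard’s name and region choices become a single create-bucket request. A sketch of the parameters you’d pass to boto3’s `s3.create_bucket` (bucket and region names here are made-up examples):

```python
def create_bucket_request(name: str, region: str) -> dict:
    """Build create-bucket parameters, roughly what the console
    wizard submits on your behalf."""
    params = {"Bucket": name}
    # Quirk: us-east-1 is the default region and must NOT be sent
    # as a LocationConstraint.
    if region != "us-east-1":
        params["CreateBucketConfiguration"] = {"LocationConstraint": region}
    return params

print(create_bucket_request("my-photo-archive", "us-east-2"))
```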

Copy settings — Ignore this.

Versioning — Part of the Reliability Pillar from the Well-Architected Framework. You’ll want to enable this as an extra layer of data protection and/or data retention. See the relevant rule page for more information.
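Enabling versioning after the fact is a one-liner. A sketch of the parameters for boto3’s `s3.put_bucket_versioning`:

```python
def versioning_request(bucket: str, enabled: bool = True) -> dict:
    # Versioning can only be enabled or suspended once turned on;
    # it can never be fully removed from a bucket.
    status = "Enabled" if enabled else "Suspended"
    return {
        "Bucket": bucket,
        "VersioningConfiguration": {"Status": status},
    }

print(versioning_request("my-photo-archive"))
```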

Server access logging — Part of the Security Pillar. Use a dedicated log bucket, and “S3/” as the prefix. That makes it easy to tell which service each log belongs to, and the folders in your log bucket will mirror your chosen prefixes.
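The same setting expressed as parameters for boto3’s `s3.put_bucket_logging` (the bucket names are illustrative):

```python
def logging_request(source_bucket: str, log_bucket: str,
                    prefix: str = "S3/") -> dict:
    # Point access logs for source_bucket at a dedicated log bucket,
    # with a per-service prefix so logs from different services
    # land in different "folders".
    return {
        "Bucket": source_bucket,
        "BucketLoggingStatus": {
            "LoggingEnabled": {
                "TargetBucket": log_bucket,
                "TargetPrefix": prefix,
            }
        },
    }

print(logging_request("my-photo-archive", "my-log-bucket"))
```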

Log management & monitoring in AWS is a huge rabbit hole that is out of scope for this post. Suffice to say that best practice is to set up a completely new AWS account dedicated to logs, where you lock the account’s metaphorical door and throw away its metaphorical key.

Check out the relevant rule page for more information.

Tags — Part of the Operational Excellence Pillar. Best practice suggests that you should tag your AWS resources with four different tags:

  • Name: used to identify individual resources.
  • Role: used to describe the function of a specific resource (e.g. web tier, database tier).
  • Environment: used to distinguish between different stages (e.g. development, production).
  • Owner: used to identify the person responsible for the resource.

See the resource tags rule page for additional information.
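The four suggested tags map directly onto the parameters for boto3’s `s3.put_bucket_tagging`. A sketch (tag values are examples):

```python
def tagging_request(bucket: str, name: str, role: str,
                    environment: str, owner: str) -> dict:
    # The four best-practice tags described above, in the
    # Key/Value shape the S3 API expects.
    tags = {
        "Name": name,
        "Role": role,
        "Environment": environment,
        "Owner": owner,
    }
    return {
        "Bucket": bucket,
        "Tagging": {
            "TagSet": [{"Key": k, "Value": v} for k, v in tags.items()]
        },
    }

print(tagging_request("my-photo-archive", "photo-archive",
                      "web tier", "production", "jane.doe"))
```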

Object-level logging — Part of the Security Pillar. For particularly sensitive buckets where you want to track who has accessed what data, you may want to consider enabling this feature.

From the official docs:

Data events are object-level API operations that access Amazon S3 buckets, such as GetObject, DeleteObject, and PutObject. By default, trails don’t log data events, but you can configure trails to log data events for S3 objects that you specify, or to log data events for all Amazon S3 buckets in your AWS account.

For more information, check out this official AWS tutorial.

Default encryption — Part of the Security Pillar. Enabling Server-Side Encryption will protect your data at the object level. There are two options to choose from in the wizard:

  1. Amazon S3-Managed Keys (SSE-S3) represents Model B in Figure 1, below. S3 encrypts each object with AES-256 and manages the keys entirely for you; objects are decrypted transparently whenever an authorized user downloads them.
  2. AWS KMS-Managed Keys (SSE-KMS) represents Model C in Figure 1. It also uses AES-256 under the hood, but the keys live in AWS KMS, which adds per-key access control and a CloudTrail audit trail of every key use, at a small extra cost per request.

From the AWS Encrypting Data at Rest Whitepaper

The figure also shows a third option, Model A, where the customer provides and manages their own encryption keys. AWS never stores these keys, meaning that if you lose them, you lose your data. It also means that AWS has no standing access to your data.

In cases where your data is extremely valuable and confidential, say, a list of eleven herbs and spices, you may want to consider using this. You can read more about it here.
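The wizard’s choice comes down to a one-line difference in the bucket’s encryption configuration. A sketch of the parameters for boto3’s `s3.put_bucket_encryption`, assuming SSE-S3 (Model B) when no key is given and SSE-KMS (Model C) otherwise:

```python
from typing import Optional

def encryption_request(bucket: str, kms_key_id: Optional[str] = None) -> dict:
    # No key id -> SSE-S3 (S3-managed AES-256 keys);
    # a key id  -> SSE-KMS (keys held in AWS KMS).
    if kms_key_id is None:
        default = {"SSEAlgorithm": "AES256"}
    else:
        default = {"SSEAlgorithm": "aws:kms", "KMSMasterKeyID": kms_key_id}
    return {
        "Bucket": bucket,
        "ServerSideEncryptionConfiguration": {
            "Rules": [{"ApplyServerSideEncryptionByDefault": default}]
        },
    }

print(encryption_request("my-photo-archive"))
```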

Manage Users — Leave this as is for now. Similar to log management, user management is another AWS rabbit hole; Identity and Access Management will get its own in-depth guide.

Access for other AWS account — Part of the Security Pillar. In cases where your infrastructure spans multiple AWS accounts, granting access to other accounts is necessary.

The master billing bucket, where detailed AWS billing info is consolidated, is one such resource where access for other AWS accounts needs to be granted.

See the dedicated S3 cross account access rule page for additional information.
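Cross-account access is usually granted through a bucket policy. A minimal sketch granting another account read access — the account id and actions here are illustrative, not a recommendation:

```python
import json

def cross_account_policy(bucket: str, account_id: str) -> str:
    # Allow the other account's root (and, by delegation, its IAM
    # principals) to list the bucket and read its objects.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "CrossAccountRead",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{account_id}:root"},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            # ListBucket applies to the bucket ARN, GetObject to objects.
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
        }],
    }
    return json.dumps(policy, indent=2)

print(cross_account_policy("my-photo-archive", "123456789012"))
```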

Manage public permissions — The wizard only exposes a subset of the public-access options; leave the default here and manage public access from the bucket’s permissions settings after creation.
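If you do manage public access with code later, the lock-everything-down configuration is a sketch of the parameters for boto3’s `s3.put_public_access_block`:

```python
def public_access_block_request(bucket: str) -> dict:
    # The most restrictive settings: block and ignore public ACLs,
    # and block public bucket policies.
    return {
        "Bucket": bucket,
        "PublicAccessBlockConfiguration": {
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    }

print(public_access_block_request("my-photo-archive"))
```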

Manage system permissions — Is this bucket used for logging? You should only have one log bucket. In most cases, you’ll want to set this to “Do not”.


That’s all there is to it! In the next part, I’ll be going over how to configure your S3 buckets in a more granular, reliable, and scalable way: with code.