
Redshift Cluster In VPC


Risk level: Medium (should be achieved)

Ensure that your Redshift clusters are provisioned on the AWS EC2-VPC platform instead of the outdated EC2-Classic platform, for better flexibility and control over cluster security, traffic routing, availability, and more.

This rule resolution is part of the Cloud Conformity Base Auditing Package

Creating and managing Amazon Redshift clusters on the EC2-VPC platform instead of EC2-Classic brings multiple advantages, such as better networking infrastructure (network isolation, cluster subnet groups and Elastic IP addresses), more flexible control over access security (network ACLs, VPC security group outbound traffic filtering) and access to newer, more powerful node types (e.g. DS2).

Audit

To determine the platform (EC2-Classic or EC2-VPC) used to launch your Amazon Redshift clusters, perform the following:

Using AWS Console

01 Log in to the AWS Management Console.

02 Navigate to Redshift dashboard at https://console.aws.amazon.com/redshift/.

03 In the left navigation panel, under Redshift Dashboard, click Clusters.

04 Choose the Redshift cluster that you want to examine, then click on its identifier (name) link listed in the Cluster column.

05 On the selected cluster settings page, select the Configuration tab and check for the VPC ID property in the Cluster Properties section. If the VPC ID property is not listed there, the selected Redshift cluster is not running within an AWS Virtual Private Cloud (EC2-VPC platform); instead it is using the outdated EC2-Classic platform.

06 Repeat steps no. 3 - 5 to verify the launch platform type used by other clusters provisioned in the current region.

07 Change the AWS region from the navigation bar and repeat the entire audit process for other regions.

Using AWS CLI

01 Run the describe-clusters command (OSX/Linux/UNIX) using custom query filters to list the identifiers of all Redshift clusters currently available in the selected region:

aws redshift describe-clusters \
	--region us-east-1 \
	--output table \
	--query 'Clusters[*].ClusterIdentifier'

02 The command output should return a table with the requested cluster names:

----------------------
|  DescribeClusters  |
+--------------------+
|  cc-cluster        |
|  sandbox-cluster   |
|  bigdata-cluster   |
+--------------------+

03 Run the describe-clusters command (OSX/Linux/UNIX) again, using the name of the cluster that you want to examine as the identifier parameter and the necessary query filters to expose the ID of the Virtual Private Cloud (VPC) where the selected cluster is currently running:

aws redshift describe-clusters \
	--region us-east-1 \
	--cluster-identifier cc-cluster \
	--query 'Clusters[*].VpcId'

04 The command output should return the AWS VPC ID, or an empty array if the cluster is not running within a VPC:

[
    []
]

If the command output returns an empty array, i.e. [ ], the selected Redshift cluster is not running within an AWS Virtual Private Cloud (EC2-VPC platform); instead it is using the outdated EC2-Classic platform, where clusters run inside a single, flat network that is shared with other AWS customers.

05 Repeat steps no. 3 and 4 to verify the launch platform type used by other clusters provisioned in the current region.

06 Change the AWS region by updating the --region command parameter value and repeat steps no. 1 - 5 to perform the audit process for other regions.
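The audit steps above can be combined into a single script. The sketch below loops over every Redshift cluster in a region and reports its launch platform; it assumes the AWS CLI is installed with credentials configured, and the platform_for_vpc_id helper name is our own illustrative choice (in text output, the CLI prints None when a cluster has no VpcId):

```shell
#!/usr/bin/env bash
# Sketch: report the launch platform of every Redshift cluster in a region.
set -euo pipefail

# Classify a VpcId value: the CLI's text output prints "None"
# when the cluster has no VPC, i.e. it uses EC2-Classic.
platform_for_vpc_id() {
  local vpc_id="$1"
  if [ -z "$vpc_id" ] || [ "$vpc_id" = "None" ]; then
    echo "EC2-Classic"
  else
    echo "EC2-VPC"
  fi
}

# Guard the live AWS calls so the helper above can be reused on its own.
if [ "${1:-}" = "--run" ]; then
  region="${2:-us-east-1}"
  for cluster in $(aws redshift describe-clusters \
      --region "$region" \
      --query 'Clusters[*].ClusterIdentifier' \
      --output text); do
    vpc_id=$(aws redshift describe-clusters \
        --region "$region" \
        --cluster-identifier "$cluster" \
        --query 'Clusters[0].VpcId' \
        --output text)
    echo "$cluster: $(platform_for_vpc_id "$vpc_id")"
  done
fi
```

Invoked as `./audit-redshift-vpc.sh --run us-east-1`, any cluster reported as EC2-Classic is a candidate for the remediation below.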

Remediation / Resolution

To migrate your Redshift clusters from the EC2-Classic platform to the EC2-VPC platform, you must relaunch the clusters within a VPC environment, unload the data from the EC2-Classic clusters to Amazon S3, then load the data into the newly created EC2-VPC clusters. To launch the new EC2-VPC Redshift clusters and move the existing data between platforms, perform the following:

Using AWS Console

01 Log in to the AWS Management Console.

02 Navigate to Redshift dashboard at https://console.aws.amazon.com/redshift/.

03 In the left navigation panel, under Redshift Dashboard, click Clusters.

04 Click the Launch Cluster button from the dashboard top menu to start the cluster setup process.

05 On the Cluster Details configuration page, enter a unique name for your new cluster in the Cluster Identifier field and fill out the rest of the fields available on this page with the information taken from the existing cluster, launched with the EC2-Classic platform.

06 Click the Continue button to continue the setup process.

07 On the Node Configuration page, select the appropriate node type for the new cluster from the Node Type dropdown list and configure the number of nodes used to match the existing (EC2-Classic) cluster configuration.

08 Click Continue to load the next page.

09 On the Additional Configuration page, perform the following actions:

  1. Within the first configuration section, select the parameter group to associate with the cluster from the Cluster Parameter Group dropdown list and make sure that the cluster database encryption configuration matches the existing EC2-Classic cluster configuration.
  2. Within the Configure Networking Options section, provide the following information:
    • Select the name of the Virtual Private Cloud in which you want to launch the cluster from the Choose a VPC dropdown list.
    • Select the name of the subnet group that you want to assign to your cluster from the Cluster Subnet Group dropdown list. Choose default to use the default subnet group created automatically for your EC2-VPC Redshift clusters.
    • For Publicly Accessible, choose whether or not you want the cluster to be publicly accessible on the Internet. If you select Yes, you can also choose to attach an elastic IP (EIP) using the Choose a Public IP Address setting.
    • For Enhanced VPC Routing, you can choose whether or not to enable the Enhanced VPC Routing feature that provides the capability to force all COPY/UNLOAD traffic between the cluster and your data repository through the VPC network selected above.
    • Select the name of the availability zone in which you want to launch the cluster from the Availability Zone dropdown list.
  3. Select the appropriate security group(s) to associate with your new cluster from the VPC Security Groups list.
  4. (Optional) For Create CloudWatch Alarm, choose whether or not you want to create an AWS CloudWatch alarm to monitor the cluster disk usage.
  5. (Optional) Select an existing role from the Available Roles dropdown list if you need to associate an IAM role with your Redshift cluster.

10 Click Continue to load the next page.

11 On the Review page, review the new cluster properties, its database details and the VPC environment configuration details where it will be provisioned, then click Launch Cluster to launch the cluster.

12 On the confirmation page, click Close to return to the Redshift dashboard. Once the Cluster Status value changes to available and the DB Health status changes to healthy, the new cluster can be used to load the existing data from the one created with the EC2-Classic platform.

13 Unload your data from the EC2-Classic Redshift cluster and reload it into the newly created cluster using the Amazon Redshift Unload/Copy utility. With this utility you can unload (export) your data from the unencrypted cluster (source) to an AWS S3 bucket, then import it into your new cluster (destination) and clean up the S3 bucket used. All the necessary instructions to install, configure and use the Amazon Redshift Unload/Copy tool can be found at this URL.

14 As soon as the migration process is completed and all the data is loaded into the new Redshift cluster launched within your Virtual Private Cloud, you can update your application configuration to refer to the new cluster endpoint.

15 Once the Redshift cluster endpoint is changed within your application configuration, you can remove the EC2-Classic cluster from your AWS account by performing the following actions:

  1. In the navigation panel, under Redshift Dashboard, click Clusters.
  2. Choose the Redshift cluster that you want to remove then click on its identifier link available in the Cluster column.
  3. On the selected cluster Configuration tab, click the Cluster dropdown button from the dashboard main menu then select Delete from the dropdown list.

16 Repeat steps no. 4 - 15 to migrate other Redshift clusters launched with the EC2-Classic platform to a Virtual Private Cloud within the current region.

17 Change the AWS region from the navigation bar and repeat the entire process for other regions.

Using AWS CLI

01 Run the describe-clusters command (OSX/Linux/UNIX) to describe the configuration metadata available for the selected EC2-Classic Redshift cluster:

aws redshift describe-clusters \
	--region us-east-1 \
	--cluster-identifier cc-cluster

02 The command output should return the requested configuration metadata, which will be useful later, when the cluster is recreated:

{
    "Clusters": [
        {
            "PubliclyAccessible": true,
            "NumberOfNodes": 1,
            "MasterUsername": "ccclusteruser",
            "DBName": "ccclusterdb",
            "ClusterVersion": "1.0",
            "Tags": [],

            ...

            "AutomatedSnapshotRetentionPeriod": 1,
            "NodeType": "dc1.large",
            "Encrypted": false,
            "ClusterRevisionNumber": "1022",
            "ClusterStatus": "available"
        }
    ]
}
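One sizing detail worth noting before recreating the cluster: create-cluster expects --cluster-type single-node for a one-node cluster, but --cluster-type multi-node together with --number-of-nodes otherwise. The sketch below derives the right flags from the NumberOfNodes value in the metadata above; the cluster_type_args helper name is our own illustrative choice:

```shell
#!/usr/bin/env bash
# Sketch: derive create-cluster sizing flags from the old cluster's metadata.
set -euo pipefail

# create-cluster takes "--cluster-type single-node" for one node,
# and "--cluster-type multi-node --number-of-nodes N" otherwise.
cluster_type_args() {
  local nodes="$1"
  if [ "$nodes" -eq 1 ]; then
    echo "--cluster-type single-node"
  else
    echo "--cluster-type multi-node --number-of-nodes $nodes"
  fi
}

# Guard the live AWS call so the helper can run without credentials.
if [ "${1:-}" = "--run" ]; then
  nodes=$(aws redshift describe-clusters \
      --region us-east-1 \
      --cluster-identifier cc-cluster \
      --query 'Clusters[0].NumberOfNodes' \
      --output text)
  echo "Sizing flags for the new cluster: $(cluster_type_args "$nodes")"
fi
```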

03 Run the create-cluster command (OSX/Linux/UNIX) using the existing (EC2-Classic) cluster configuration details returned at the previous step to launch a new Amazon Redshift cluster within a Virtual Private Cloud available in your AWS account:

aws redshift create-cluster \
	--region us-east-1 \
	--cluster-identifier cc-vpc-cluster \
	--cluster-type single-node \
	--node-type dc1.large \
	--db-name ccclusterdb \
	--master-username ccclusteruser \
	--master-user-password DATAclusterpwd5 \
	--vpc-security-group-ids sg-45dca012 \
	--availability-zone us-east-1a \
	--port 5439 \
	--cluster-subnet-group-name default \
	--cluster-parameter-group-name default.redshift-1.0 \
	--publicly-accessible \
	--allow-version-upgrade

04 The command output should return the new cluster configuration metadata:

{
    "Cluster": {
        "PubliclyAccessible": true,
        "MasterUsername": "ccclusteruser",
        "VpcSecurityGroups": [
            {
                "Status": "active",
                "VpcSecurityGroupId": "sg-45dca012"
            }
        ],
        "NumberOfNodes": 1,
        "PendingModifiedValues": {
            "MasterUserPassword": "****"
        },
        "VpcId": "vpc-2fb56548",
        "ClusterVersion": "1.0",
        "Tags": [],
        "AutomatedSnapshotRetentionPeriod": 1,
        "ClusterParameterGroups": [
            {
                "ParameterGroupName": "default.redshift-1.0",
                "ParameterApplyStatus": "in-sync"
            }
        ],
        "DBName": "ccclusterdb",
        "PreferredMaintenanceWindow": "fri:06:00-fri:06:30",
        "IamRoles": [],
        "AllowVersionUpgrade": true,
        "ClusterSubnetGroupName": "default",
        "ClusterSecurityGroups": [],
        "ClusterIdentifier": "cc-vpc-cluster",
        "AvailabilityZone": "us-east-1a",
        "NodeType": "dc1.large",
        "Encrypted": false,
        "ClusterStatus": "creating"
    }
}
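Note that create-cluster returns while the cluster is still provisioning (ClusterStatus "creating" above). The AWS CLI ships a built-in waiter that blocks until the cluster is available, which you can run before querying its endpoint; a sketch, using the same cluster name as the example above:

```shell
# Block until the new cluster reports "available"; the waiter polls
# describe-clusters internally and fails after a bounded number of polls.
aws redshift wait cluster-available \
	--region us-east-1 \
	--cluster-identifier cc-vpc-cluster
```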

05 Run the describe-clusters command (OSX/Linux/UNIX) again, using the appropriate query filters to expose the new Redshift cluster endpoint:

aws redshift describe-clusters \
	--region us-east-1 \
	--cluster-identifier cc-vpc-cluster \
	--query 'Clusters[*].Endpoint.Address'

06 The command output should return the new cluster endpoint URL:

[
    "cc-vpc-cluster.cmfpsgvyjhfo.us-east-1.redshift.amazonaws.com"
]

07 Unload your data from the EC2-Classic Redshift cluster and reload it into the newly created cluster using the Amazon Redshift Unload/Copy utility. With this utility you can unload (export) your data from the unencrypted cluster (source) to an AWS S3 bucket, then import it into your new cluster (destination) and clean up the S3 bucket used. The necessary instructions to install, configure and use the Amazon Redshift Unload/Copy tool can be found on this page.
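Under the hood, the Unload/Copy utility issues Redshift UNLOAD and COPY SQL statements. A minimal manual sketch for a single table is shown below; it assumes psql can reach both cluster endpoints, that the table schema already exists on the new cluster, and the bucket, IAM role and table names are placeholders:

```shell
# Sketch: export one table from the old cluster to S3, then import it
# into the new cluster. Hosts match the examples in this section; the
# S3 bucket, IAM role ARN and table name are hypothetical.
OLD_HOST="cc-cluster.cmfpsgvyjhfo.us-east-1.redshift.amazonaws.com"
NEW_HOST="cc-vpc-cluster.cmfpsgvyjhfo.us-east-1.redshift.amazonaws.com"

psql "host=$OLD_HOST port=5439 dbname=ccclusterdb user=ccclusteruser" <<'SQL'
UNLOAD ('SELECT * FROM my_table')
TO 's3://my-migration-bucket/my_table/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftS3Access';
SQL

psql "host=$NEW_HOST port=5439 dbname=ccclusterdb user=ccclusteruser" <<'SQL'
COPY my_table
FROM 's3://my-migration-bucket/my_table/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftS3Access';
SQL
```

The utility automates this per-table loop, plus S3 cleanup, which is why the document recommends it over running the statements by hand.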

08 As soon as the migration process is completed and all the data is loaded into your new Redshift cluster, you can update your application configuration to refer to the new cluster endpoint address returned at step no. 6.

09 Once the Redshift cluster endpoint is changed within your application configuration, run the delete-cluster command (OSX/Linux/UNIX) to remove the EC2-Classic cluster from your AWS account:

aws redshift delete-cluster \
	--region us-east-1 \
	--cluster-identifier cc-cluster \
	--final-cluster-snapshot-identifier cc-ec2classic-cluster-finalsnapshot

10 The command output should return the metadata of the cluster selected for deletion:

{
    "Cluster": {
        "PubliclyAccessible": true,
        "MasterUsername": "ccclusteruser",
        "DBName": "ccclusterdb",
        "NumberOfNodes": 1,
        "PendingModifiedValues": {},
        "Tags": [],

        ...

        "AutomatedSnapshotRetentionPeriod": 1,
        "ClusterIdentifier": "cc-cluster",
        "AvailabilityZone": "us-east-1a",
        "NodeType": "dc1.large",
        "Encrypted": false,
        "ClusterStatus": "final-snapshot"
    }
}

11 Repeat steps no. 1 - 10 to migrate other Redshift clusters launched with the EC2-Classic platform to a Virtual Private Cloud within the current region.

12 Change the AWS region by updating the --region command parameter value and repeat steps no. 1 - 11 for other regions.

Publication date Oct 10, 2016