Enable Secure Boot for GKE Cluster Nodes

Risk level: Medium (should be achieved)

Ensure that the Secure Boot security feature is enabled for your GKE cluster nodes in order to protect them against malware and rootkits. Secure Boot helps ensure that the system runs only authentic software by verifying the digital signatures of all boot components and halting the boot process if signature verification fails.

Security

Secure Boot is disabled by default because third-party unsigned kernel modules cannot be loaded when the feature is enabled. If you don't use third-party unsigned kernel modules, it is highly recommended that you enable Secure Boot for all your GKE cluster nodes. Enabling this security feature helps protect your GKE workloads from boot-level and kernel-level malware and rootkits.
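
For new deployments, you can avoid having to re-create node pools later by enabling Secure Boot at creation time. Below is a minimal sketch, using illustrative cluster, pool, and region names; the --shielded-secure-boot flag is the same one used in the Remediation / Resolution section, and depending on your gcloud version it may require the beta command track (gcloud beta container node-pools create):

# Create a new node pool with Secure Boot enabled from the start
# (cluster, pool, and region names are illustrative)
gcloud container node-pools create cc-gke-secure-pool-001 \
    --cluster=cc-gke-operations-cluster \
    --region=us-central1 \
    --shielded-secure-boot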


Audit

To determine if your Google Kubernetes Engine (GKE) cluster nodes are protected with Secure Boot, perform the following actions:

Using GCP Console

01 Sign in to Google Cloud Management Console.

02 Select the Google Cloud Platform (GCP) project that you want to access from the console top navigation bar.

03 Navigate to Google Kubernetes Engine (GKE) console at https://console.cloud.google.com/kubernetes.

04 In the navigation panel, select Clusters to access the list of the GKE clusters deployed within the selected project.

05 Click on the name of the GKE cluster that you want to examine and select the Details tab to access the cluster configuration information.

06 Under Node pools, click on the name of the cluster node pool that you want to examine.

07 In the Security section, check the Secure boot configuration setting status. If the setting status is set to Disabled, the Secure Boot feature is not enabled for the nodes created within the selected Google Kubernetes Engine (GKE) cluster node pool.

08 Repeat steps no. 6 and 7 for each node pool provisioned for the selected GKE cluster.

09 Repeat steps no. 5 – 8 for each GKE cluster created for the selected GCP project.

10 Repeat steps no. 2 – 9 for each project deployed within your Google Cloud account.

Using GCP CLI

01 Run projects list command (Windows/macOS/Linux) using custom query filters to list the IDs of all the Google Cloud Platform (GCP) projects available in your cloud account:

gcloud projects list
    --format="table(projectId)"

02 The command output should return the requested GCP project identifiers:

PROJECT_ID
cc-bigdata-project-123123
cc-application-project-112233
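
The node pool commands used later in this section do not pass the --project flag explicitly, so they run against the currently configured project. Below is a minimal sketch, assuming one of the project IDs listed above, for switching the active project before continuing:

# Set the active project used by subsequent gcloud commands (project ID is illustrative)
gcloud config set project cc-bigdata-project-123123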

03 Run container clusters list command (Windows/macOS/Linux) using custom query filters to describe the name and the region of each GKE cluster provisioned for the selected Google Cloud project:

gcloud container clusters list
    --project cc-bigdata-project-123123
    --format="(NAME,LOCATION)"

04 The command output should return the requested GKE cluster names and their regions:

NAME                       LOCATION
cc-gke-operations-cluster  us-central1
cc-gke-analytics-cluster   us-central1

05 Run container node-pools list command (Windows/macOS/Linux) using the name of the Google Cloud GKE cluster that you want to examine as identifier parameter and custom query filters to describe the name of each node pool provisioned for the selected cluster:

gcloud container node-pools list
    --cluster=cc-gke-operations-cluster
    --region=us-central1
    --format="(NAME)"

06 The command output should return the requested cluster node pool name(s):

NAME
cc-gke-ops-pool-001
cc-gke-ops-pool-002

07 Run container node-pools describe command (Windows/macOS/Linux) using the name of the cluster node pool that you want to examine as identifier parameter and custom output filtering to describe the Secure Boot feature configuration status for the selected node pool:

gcloud container node-pools describe cc-gke-ops-pool-001
    --cluster=cc-gke-operations-cluster
    --region=us-central1
    --format="yaml(config.shieldedInstanceConfig.enableSecureBoot)"

08 The command output should return the requested feature configuration status:

config:
  shieldedInstanceConfig: {}

If the container node-pools describe command output returns null, or an empty object for the config.shieldedInstanceConfig property, as shown in the example above, the Secure Boot security feature is not enabled for the nodes running within the selected Google Kubernetes Engine (GKE) cluster node pool.
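
For comparison, when Secure Boot is enabled for the node pool, the same container node-pools describe command should return the enableSecureBoot property set to true (illustrative output):

config:
  shieldedInstanceConfig:
    enableSecureBoot: true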

09 Repeat steps no. 7 and 8 for each node pool provisioned for the selected GKE cluster.

10 Repeat steps no. 5 – 9 for each GKE cluster created for the selected GCP project.

11 Repeat steps no. 3 – 10 for each GCP project deployed in your Google Cloud account.
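
To speed up the manual checks above, you can loop over all node pools in a cluster and flag the ones where Secure Boot is not enabled. Below is a minimal bash sketch, assuming an authenticated gcloud CLI and the illustrative project, cluster, and region names used in this article:

# Audit sketch: flag node pools whose Secure Boot setting is not enabled.
# Project, cluster, and region names below are illustrative.
PROJECT="cc-bigdata-project-123123"
CLUSTER="cc-gke-operations-cluster"
REGION="us-central1"

for POOL in $(gcloud container node-pools list \
      --project "$PROJECT" --cluster "$CLUSTER" --region "$REGION" \
      --format="value(name)"); do
  # An empty value means config.shieldedInstanceConfig.enableSecureBoot is unset (Secure Boot disabled).
  ENABLED=$(gcloud container node-pools describe "$POOL" \
      --project "$PROJECT" --cluster "$CLUSTER" --region "$REGION" \
      --format="value(config.shieldedInstanceConfig.enableSecureBoot)")
  if [ -z "$ENABLED" ] || [ "$ENABLED" = "False" ]; then
    echo "Secure Boot is NOT enabled for node pool: $POOL"
  fi
done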

Remediation / Resolution

To enable the Secure Boot feature for your Google Kubernetes Engine (GKE) cluster nodes, you have to re-create the existing GKE cluster node pools with the appropriate security configuration by performing the following actions:

Note: Secure Boot should not be used if you need third-party unsigned kernel modules for your GKE cluster nodes.

Using GCP Console

01 Sign in to Google Cloud Management Console.

02 Select the GCP project that you want to access from the console top navigation bar.

03 Navigate to Google Kubernetes Engine (GKE) console at https://console.cloud.google.com/kubernetes.

04 In the navigation panel, select Clusters to access the list of the GKE clusters available within the selected project.

05 Click on the name of the GKE cluster that you want to reconfigure and select the Details tab to access the cluster configuration information.

06 Under Node pools, click on the name of the cluster node pool that you want to re-create and collect all the configuration information available for the selected resource.

07 Go back to the cluster configuration page and click on the ADD NODE POOL button from the console top menu to initiate the node pool setup process:

  1. On the Node pool details panel, provide a unique name for the new node pool in the Name box, choose the GKE version from the Node version dropdown list, and select the number of nodes for the new pool from the Size dropdown list. Configure the rest of the node pool settings based on the configuration information collected at step no. 6.
  2. On the Nodes panel, configure the hardware and network settings for the new node pool based on the configuration information collected at step no. 6. Ensure that the new node pool has the same network, compute, and storage configuration as the source pool. If the source pool uses encryption at rest with a Customer-Managed Key (CMK), check the Enable customer-managed encryption for boot disk checkbox to match that configuration.
  3. On the Security panel, select the appropriate service account from the Service account dropdown list, select the type and level of API access to grant the nodes from the Access scopes section, and check the Enable secure boot setting checkbox to enable the Secure Boot feature for the nodes provisioned within the selected cluster node pool.
  4. On the Metadata panel, configure the metadata settings, such as GCE instance metadata, based on the configuration information collected from the source node pool at step no. 6.
  5. Click CREATE to launch your new Google Kubernetes Engine (GKE) cluster node pool.

08 Once the new cluster node pool is operating successfully, you can remove the source node pool in order to stop adding charges to your Google Cloud bill. Click on the name of the node pool that you want to delete (see Audit section part I to identify the source pool) and perform the following:

  1. Click on the DELETE button from the console top menu to initiate the removal process.
  2. Within the Are you sure you want to delete <pool-name>? dialog box, click DELETE to confirm the node pool deletion.

09 Repeat steps no. 6 – 8 to enable the Secure Boot feature for other node pools created for the selected GKE cluster.

10 Repeat steps no. 5 – 9 to reconfigure other GKE clusters deployed for the selected GCP project.

11 Repeat steps no. 2 – 10 for each GCP project available in your Google Cloud account.

Using GCP CLI

01 Run container node-pools describe command (Windows/macOS/Linux) using the name of the cluster node pool that you want to re-create as identifier parameter and custom output filtering to describe the configuration information available for the selected node pool:

gcloud container node-pools describe cc-gke-ops-pool-001
    --cluster=cc-gke-operations-cluster
    --region=us-central1
    --format=json

02 The command output should return the requested configuration metadata:

{
  "config": {
    "diskSizeGb": 100,
    "diskType": "pd-standard",
    "imageType": "COS",
    "metadata": {
      "disable-legacy-endpoints": "true"
    },
    "serviceAccount": "default",
    "shieldedInstanceConfig": {
      "enableIntegrityMonitoring": true
    }
  },
  "locations": [
    "us-central1-b",
    "us-central1-c"
  ],

  ...

  "management": {
    "autoRepair": true,
    "autoUpgrade": true
  },
  "maxPodsConstraint": {
    "maxPodsPerNode": "110"
  },
  "name": "cc-gke-ops-pool-001",
  "podIpv4CidrSize": 24,
  "status": "RUNNING",
  "upgradeSettings": {
    "maxSurge": 1
  },
  "version": "1.15.12-gke.2"
}
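
If you only want the values that map directly to node pool creation flags, you can narrow the describe output with a projection instead of reading the full JSON. Below is a minimal sketch; the field paths follow the example output above, and config.machineType is assumed to be present even though it is elided (...) in that output:

gcloud container node-pools describe cc-gke-ops-pool-001 \
    --cluster=cc-gke-operations-cluster \
    --region=us-central1 \
    --format="yaml(config.machineType, config.diskType, config.diskSizeGb, config.imageType, locations)"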

03 Run container node-pools create command (Windows/macOS/Linux) using the information returned at the previous step as configuration data for the command parameters, to create a new GKE cluster node pool and enable the Secure Boot feature for the new resource by including the --shielded-secure-boot parameter in the command request:

gcloud beta container node-pools create cc-gke-ops-secure-pool-001
    --cluster=cc-gke-operations-cluster
    --region=us-central1
    --node-locations=us-central1-b,us-central1-c
    --machine-type=e2-medium
    --disk-type=pd-standard
    --disk-size=100
    --shielded-secure-boot

04 The command output should return the URL of the newly created GKE cluster node pool:

Created [https://container.googleapis.com/v1/projects/cc-bigdata-project-123123/zones/us-central1/clusters/cc-gke-operations-cluster/nodePools/cc-gke-ops-secure-pool-001].
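
Before removing the source node pool, you may want to confirm that Secure Boot is actually active on the new pool. A minimal sketch, reusing the describe command from the Audit section against the newly created node pool:

gcloud container node-pools describe cc-gke-ops-secure-pool-001 \
    --cluster=cc-gke-operations-cluster \
    --region=us-central1 \
    --format="yaml(config.shieldedInstanceConfig.enableSecureBoot)"

# Expected output for a compliant node pool:
# config:
#   shieldedInstanceConfig:
#     enableSecureBoot: true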

05 Once the new cluster node pool is operating successfully, you can remove the source node pool in order to stop adding charges to your Google Cloud bill. Run container node-pools delete command (Windows/macOS/Linux) using the name of the resource that you want to remove as identifier parameter (see Audit section part II to identify the right resource), to delete the specified GKE cluster node pool:

gcloud container node-pools delete cc-gke-ops-pool-001
    --cluster=cc-gke-operations-cluster
    --region=us-central1

06 Type Y to confirm the Google Cloud GKE resource removal:

The following node pool will be deleted.
[cc-gke-ops-pool-001] in cluster [cc-gke-operations-cluster] in [us-central1]
Do you want to continue (Y/n)?  Y

07 The output should return the container node-pools delete command request status:

Deleting node pool cc-gke-ops-pool-001...done.
Deleted [https://container.googleapis.com/v1/projects/cc-bigdata-project-123123/zones/us-central1/clusters/cc-gke-operations-cluster/nodePools/cc-gke-ops-pool-001].

08 Repeat steps no. 1 – 7 to enable the Secure Boot feature for other node pools provisioned for the selected GKE cluster.

09 Repeat steps no. 1 – 8 to reconfigure other GKE clusters created for the selected GCP project.

10 Repeat steps no. 1 – 9 for each GCP project deployed in your Google Cloud account.
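
In addition to the node pool configuration, you can verify Secure Boot on the underlying Compute Engine instances that back the cluster nodes. Below is a minimal sketch; the name filter is illustrative, since GKE node VM names are derived from (possibly truncated) cluster and node pool names:

# List the Shielded VM Secure Boot setting of the instances backing the GKE nodes
# (the name prefix filter is illustrative)
gcloud compute instances list \
    --filter="name~^gke-cc-gke-operations-cluster" \
    --format="table(name, shieldedInstanceConfig.enableSecureBoot)"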

Publication date May 10, 2021
