Enable Integrity Monitoring for GKE Cluster Nodes

Risk level: Medium (should be achieved)

Ensure that the Integrity Monitoring feature is enabled for your Google Kubernetes Engine (GKE) cluster nodes in order to monitor and automatically check the runtime boot integrity of your shielded cluster nodes using the Cloud Monitoring service.

Security

Integrity Monitoring enables monitoring and attestation of the boot integrity of your GKE cluster nodes. The attestation is performed against an integrity policy baseline, which is initially derived from the implicitly trusted boot image when the cluster node is created. To protect your application data and ensure that the boot loader on your Google Kubernetes Engine (GKE) cluster nodes has not been tampered with, it is strongly recommended to enable Integrity Monitoring for all GKE cluster nodes.
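
Shielded node options can also be turned on at cluster creation time, so that the default node pool starts out compliant. Below is a minimal sketch using illustrative cluster and region names (Secure Boot is included here as well, since the two Shielded Nodes options are commonly paired):

gcloud container clusters create cc-gke-example-cluster \
    --region=us-central1 \
    --shielded-secure-boot \
    --shielded-integrity-monitoring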


Audit

To determine if the Integrity Monitoring feature is enabled for all your GKE cluster nodes, perform the following operations:

Using GCP Console

01 Sign in to Google Cloud Management Console.

02 Select the Google Cloud Platform (GCP) project that you want to access from the console top navigation bar.

03 Navigate to Google Kubernetes Engine (GKE) console at https://console.cloud.google.com/kubernetes.

04 In the navigation panel, select Clusters to access the list of the GKE clusters deployed within the selected project.

05 Click on the name of the GKE cluster that you want to examine and select the Details tab to access the cluster configuration information.

06 Under Node pools, click on the name of the cluster node pool that you want to examine.

07 In the Security section, check the Integrity monitoring configuration setting status. If the setting status is set to Disabled, the Integrity Monitoring feature is not enabled for the nodes created within the selected Google Kubernetes Engine (GKE) cluster node pool.

08 Repeat steps no. 6 and 7 for each node pool created for the selected GKE cluster.

09 Repeat steps no. 5 – 8 for each GKE cluster available within the selected GCP project.

10 Repeat steps no. 2 – 9 for each project deployed in your Google Cloud account.

Using GCP CLI

01 Run the projects list command (Windows/macOS/Linux) using custom query filters to list the IDs of all the Google Cloud Platform (GCP) projects available in your cloud account:

gcloud projects list \
    --format="table(projectId)"

02 The command output should return the requested GCP project identifiers:

PROJECT_ID
cc-bigdata-project-123123
cc-appdata-project-112233

03 Run the container clusters list command (Windows/macOS/Linux) using custom query filters to list the name and the region of each GKE cluster provisioned within the selected Google Cloud project:

gcloud container clusters list \
    --project cc-bigdata-project-123123 \
    --format="table(name,location)"

04 The command output should return the requested GKE cluster names and their regions:

NAME                       LOCATION
cc-gke-analytics-cluster   us-central1
cc-gke-operations-cluster  us-central1

05 Run the container node-pools list command (Windows/macOS/Linux) using the name of the GKE cluster that you want to examine as the identifier parameter and custom query filters to list the name of each node pool provisioned for the selected cluster:

gcloud container node-pools list \
    --cluster=cc-gke-analytics-cluster \
    --region=us-central1 \
    --format="table(name)"

06 The command output should return the requested cluster node pool name(s):

NAME
cc-gke-dev-pool-001
cc-gke-dev-pool-002

07 Run the container node-pools describe command (Windows/macOS/Linux) using the name of the cluster node pool that you want to examine as the identifier parameter and custom output filtering to describe the Integrity Monitoring feature configuration status:

gcloud container node-pools describe cc-gke-dev-pool-001 \
    --cluster=cc-gke-analytics-cluster \
    --region=us-central1 \
    --format="yaml(config.shieldedInstanceConfig.enableIntegrityMonitoring)"

08 The command output should return the requested feature configuration status:

config:
  shieldedInstanceConfig: {}

If the container node-pools describe command output returns null or an empty object for the config.shieldedInstanceConfig property, as shown in the example above, the Integrity Monitoring feature is not enabled for the nodes running within the selected Google Kubernetes Engine (GKE) cluster node pool.
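
For scripted checks, the same status can be retrieved as a single value with gcloud's value() formatter; a minimal sketch using the example names from above (an empty result means the feature is disabled, True means it is enabled):

gcloud container node-pools describe cc-gke-dev-pool-001 \
    --cluster=cc-gke-analytics-cluster \
    --region=us-central1 \
    --format="value(config.shieldedInstanceConfig.enableIntegrityMonitoring)"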

09 Repeat steps no. 7 and 8 for each node pool created for the selected GKE cluster.

10 Repeat steps no. 5 – 9 for each GKE cluster provisioned within the selected GCP project.

11 Repeat steps no. 3 – 10 for each GCP project deployed in your Google Cloud account.
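
The audit steps above can also be combined into a single loop. The following is a sketch, not an official Conformity script: it assumes gcloud is authenticated, that you can list all projects, and that the clusters are regional (replace --region with --zone for zonal clusters):

#!/usr/bin/env bash
# Walk every accessible project, cluster, and node pool, and print the
# Integrity Monitoring status for each pool.
for project in $(gcloud projects list --format="value(projectId)"); do
  gcloud container clusters list --project "$project" \
      --format="csv[no-heading](name,location)" |
  while IFS=',' read -r cluster location; do
    for pool in $(gcloud container node-pools list --project "$project" \
        --cluster "$cluster" --region "$location" --format="value(name)"); do
      status=$(gcloud container node-pools describe "$pool" \
          --project "$project" --cluster "$cluster" --region "$location" \
          --format="value(config.shieldedInstanceConfig.enableIntegrityMonitoring)")
      echo "$project/$cluster/$pool: integrityMonitoring=${status:-disabled}"
    done
  done
done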

Remediation / Resolution

To enable the Integrity Monitoring feature for your Google Kubernetes Engine (GKE) cluster nodes, you have to re-create the existing GKE cluster node pools with the appropriate monitoring configuration by performing the following operations:

Using GCP Console

01 Sign in to Google Cloud Management Console.

02 Select the GCP project that you want to access from the console top navigation bar.

03 Navigate to Google Kubernetes Engine (GKE) console at https://console.cloud.google.com/kubernetes.

04 In the navigation panel, select Clusters to access the list of the GKE clusters available within the selected project.

05 Click on the name of the GKE cluster that you want to reconfigure and select the Details tab to access the cluster configuration information.

06 Under Node pools, click on the name of the cluster node pool that you want to re-create and collect all the configuration information available for the selected GKE resource.

07 Go back to the cluster configuration page and click on the ADD NODE POOL button from the console top menu to initiate the node pool setup process:

  1. On the Node pool details panel, provide a unique name for the new node pool in the Name box, choose the GKE version from the Node version dropdown list, and select the number of nodes for the new pool from the Size dropdown list. Configure the rest of the node pool settings based on the configuration information collected at step no. 6.
  2. On the Nodes panel, configure the hardware and network settings for the new node pool based on the configuration information taken from the source pool at step no. 6. Ensure that the new node pool has the same network, compute, and storage configuration as the source pool.
  3. On the Security panel, select the appropriate service account from the Service account dropdown list, choose the type and the level of API access to grant the nodes from the Access scopes section, and select the Enable integrity monitoring checkbox to enable the Integrity Monitoring feature for all the nodes created within the new GKE cluster node pool.
  4. On the Metadata panel, configure the metadata settings such as GCE instance metadata based on the configuration information collected from the source node pool at step no. 6.
  5. Click CREATE to launch your new Google Kubernetes Engine (GKE) cluster node pool.

08 Once the new cluster node pool is operational, you can remove the source node pool in order to stop adding charges to your Google Cloud bill. Click on the name of the node pool that you want to delete (see Audit section part I to identify the source pool) and perform the following:

  1. Click on the DELETE button from the console top menu to initiate the removal process.
  2. Within Are you sure you want to delete <pool-name>? dialog box, click DELETE to confirm the node pool deletion.

09 Repeat steps no. 6 – 8 to enable the Integrity Monitoring feature for other node pools created for the selected GKE cluster.

10 Repeat steps no. 5 – 9 to reconfigure other GKE clusters deployed for the selected GCP project.

11 Repeat steps no. 2 – 10 for each GCP project available in your Google Cloud account.

Using GCP CLI

01 Run the container node-pools describe command (Windows/macOS/Linux) using the name of the cluster node pool that you want to re-create as the identifier parameter and custom output filtering to describe the configuration information available for the selected node pool:

gcloud container node-pools describe cc-gke-dev-pool-001 \
    --cluster=cc-gke-analytics-cluster \
    --region=us-central1 \
    --format=json

02 The command output should return the requested configuration metadata:

{
  "config": {
    "diskSizeGb": 150,
    "diskType": "pd-standard",
    "imageType": "COS",
    "metadata": {
      "disable-legacy-endpoints": "true"
    },
    "serviceAccount": "default",
    "shieldedInstanceConfig": {
      "enableSecureBoot": true
    }
  },
  "locations": [
    "us-central1-b",
    "us-central1-c"
  ],

  ...

  "management": {
    "autoRepair": true,
    "autoUpgrade": true
  },
  "maxPodsConstraint": {
    "maxPodsPerNode": "110"
  },
  "name": "cc-gke-dev-pool-001",
  "podIpv4CidrSize": 24,
  "status": "RUNNING",
  "upgradeSettings": {
    "maxSurge": 1
  },
  "version": "1.15.12-gke.2"
}
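
Since this output is the only record of the source pool's configuration once the pool is deleted, you may want to save it to a file for reference while building the replacement pool; a simple sketch (the file name is illustrative):

gcloud container node-pools describe cc-gke-dev-pool-001 \
    --cluster=cc-gke-analytics-cluster \
    --region=us-central1 \
    --format=json > cc-gke-dev-pool-001-config.json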

03 Run the container node-pools create command (Windows/macOS/Linux) using the information returned at the previous step as configuration data for the command parameters to create a new GKE cluster node pool, and enable the Integrity Monitoring feature for the new resource by including the --shielded-integrity-monitoring flag in the command request:

gcloud beta container node-pools create cc-gke-new-dev-pool-001 \
    --cluster=cc-gke-analytics-cluster \
    --region=us-central1 \
    --node-locations=us-central1-b,us-central1-c \
    --machine-type=e2-standard-2 \
    --disk-type=pd-standard \
    --disk-size=150 \
    --shielded-integrity-monitoring
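
Note that the example above carries over only the machine, disk, and location settings. The source pool described at step no. 2 also has Secure Boot enabled (enableSecureBoot: true), plus image type, metadata, pod-density, and auto-management settings, so a more faithful re-creation would pass those through as well. A hedged sketch follows; verify each flag against the configuration you actually collected (the default service account applies when --service-account is omitted, and --max-pods-per-node requires a VPC-native cluster):

gcloud beta container node-pools create cc-gke-new-dev-pool-001 \
    --cluster=cc-gke-analytics-cluster \
    --region=us-central1 \
    --node-locations=us-central1-b,us-central1-c \
    --machine-type=e2-standard-2 \
    --disk-type=pd-standard \
    --disk-size=150 \
    --image-type=COS \
    --metadata=disable-legacy-endpoints=true \
    --max-pods-per-node=110 \
    --enable-autorepair \
    --enable-autoupgrade \
    --shielded-secure-boot \
    --shielded-integrity-monitoring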

04 The command output should return the URL of the newly created GKE cluster node pool:

Created [https://container.googleapis.com/v1/projects/cc-bigdata-project-123123/zones/us-central1/clusters/cc-gke-analytics-cluster/nodePools/cc-gke-new-dev-pool-001].

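Before removing the source pool, you may want to cordon and drain its nodes so that running workloads reschedule onto the new pool. A minimal sketch, assuming kubectl is authenticated against the cluster and a recent kubectl release (older versions use --delete-local-data instead of --delete-emptydir-data):

# Cordon, then drain, every node that belongs to the source node pool.
for node in $(kubectl get nodes \
    -l cloud.google.com/gke-nodepool=cc-gke-dev-pool-001 -o name); do
  kubectl cordon "$node"
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
done
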
05 Once the new cluster node pool is fully operational, you can remove the source node pool in order to stop adding charges to your Google Cloud bill. Run the container node-pools delete command (Windows/macOS/Linux) using the name of the resource that you want to remove as the identifier parameter (see Audit section part II to identify the right resource) to delete the specified GKE cluster node pool:

gcloud container node-pools delete cc-gke-dev-pool-001 \
    --cluster=cc-gke-analytics-cluster \
    --region=us-central1

06 Type Y to confirm the Google Cloud GKE resource removal:

The following node pool will be deleted.
[cc-gke-dev-pool-001] in cluster [cc-gke-analytics-cluster] in [us-central1]
Do you want to continue (Y/n)?  Y

07 The output should return the container node-pools delete command request status:

Deleting node pool cc-gke-dev-pool-001...done.
Deleted [https://container.googleapis.com/v1/projects/cc-bigdata-project-123123/zones/us-central1/clusters/cc-gke-analytics-cluster/nodePools/cc-gke-dev-pool-001].
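
For unattended cleanup, the confirmation prompt at step no. 6 can be suppressed with gcloud's global --quiet flag; a sketch:

gcloud container node-pools delete cc-gke-dev-pool-001 \
    --cluster=cc-gke-analytics-cluster \
    --region=us-central1 \
    --quiet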

08 Repeat steps no. 1 – 7 to enable the Integrity Monitoring feature for other node pools available in the selected GKE cluster.

09 Repeat steps no. 1 – 8 to reconfigure other GKE clusters created for the selected GCP project.

10 Repeat steps no. 1 – 9 for each GCP project deployed in your Google Cloud account.

Publication date May 10, 2021
