Table: gcp_kubernetes_node_pool - Query GCP Kubernetes Node Pools using SQL
A Kubernetes node pool is a group of nodes within a Google Kubernetes Engine (GKE) cluster that all have the same configuration, defined by a NodeConfig specification. Each node pool is backed by a group of Google Compute Engine VM instances that serve as worker nodes for the applications running on your GKE cluster.
Table Usage Guide
The gcp_kubernetes_node_pool table provides insights into the node pools within GCP Kubernetes Engine. As a DevOps or cloud engineer, explore node pool-specific details through this table, including node configurations, statuses, and associated metadata. Utilize it to uncover information about node pools, such as their configurations, the statuses of the nodes, and the details of the instances running the nodes.
Examples
Basic info
Explore the status and details of your Google Cloud Platform's Kubernetes node pools, such as the initial node count, version, and location. This can help you manage your resources effectively and understand the configuration of your clusters better.
select
  name,
  cluster_name,
  initial_node_count,
  version,
  status,
  location
from
  gcp_kubernetes_node_pool;
List configuration info of each node
Explore the configuration details of each node within a Kubernetes cluster. This is useful to assess and manage resource allocation, such as disk size, machine type, and image type, and to review specific configurations like legacy endpoint usage and integrity monitoring settings.
PostgreSQL:

select
  name,
  cluster_name,
  config ->> 'diskSizeGb' as disk_size_gb,
  config ->> 'diskType' as disk_type,
  config ->> 'imageType' as image_type,
  config ->> 'machineType' as machine_type,
  config -> 'metadata' ->> 'disable-legacy-endpoints' as disable_legacy_endpoints,
  config ->> 'serviceAccount' as service_account,
  config -> 'shieldedInstanceConfig' ->> 'enableIntegrityMonitoring' as enable_integrity_monitoring
from
  gcp_kubernetes_node_pool;

SQLite:

select
  name,
  cluster_name,
  json_extract(config, '$.diskSizeGb') as disk_size_gb,
  json_extract(config, '$.diskType') as disk_type,
  json_extract(config, '$.imageType') as image_type,
  json_extract(config, '$.machineType') as machine_type,
  json_extract(config, '$.metadata.disable-legacy-endpoints') as disable_legacy_endpoints,
  json_extract(config, '$.serviceAccount') as service_account,
  json_extract(config, '$.shieldedInstanceConfig.enableIntegrityMonitoring') as enable_integrity_monitoring
from
  gcp_kubernetes_node_pool;
List maximum pods for each node
Determine the capacity of each node in your Kubernetes cluster by identifying the maximum number of pods each node can run. This helps in efficient resource allocation and load balancing within the cluster.
PostgreSQL:

select
  name,
  cluster_name,
  max_pods_constraint ->> 'maxPodsPerNode' as max_pods_per_node
from
  gcp_kubernetes_node_pool;

SQLite:

select
  name,
  cluster_name,
  json_extract(max_pods_constraint, '$.maxPodsPerNode') as max_pods_per_node
from
  gcp_kubernetes_node_pool;
List of all zonal node pools
Explore which node pools in your Google Cloud Platform Kubernetes service are zonal. This can help you manage and optimize your resources, as zonal node pools can offer different benefits and limitations compared to regional ones.
select
  name,
  location_type
from
  gcp_kubernetes_node_pool
where
  location_type = 'ZONAL';
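Since the schema lists 'REGIONAL' as the other possible location_type value, the same pattern can be flipped to list regional node pools. This variant is a sketch that simply changes the filter value:

select
  name,
  location_type
from
  gcp_kubernetes_node_pool
where
  location_type = 'REGIONAL';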
Schema for gcp_kubernetes_node_pool
Name | Type | Operators | Description |
---|---|---|---|
_ctx | jsonb | | Steampipe context in JSON form. |
akas | jsonb | | Array of globally unique identifier strings (also known as) for the resource. |
autoscaling | jsonb | | Autoscaler configuration for this node pool. |
cluster_name | text | = | Cluster in which the Node pool is located. |
conditions | jsonb | | Which conditions caused the current node pool state. |
config | jsonb | | The node configuration of the pool. |
initial_node_count | bigint | | The initial node count for the pool. |
instance_group_urls | jsonb | | The resource URLs of the managed instance groups associated with this node pool. |
location | text | = | The GCP multi-region, region, or zone in which the resource is located. |
location_type | text | | Location type of the cluster. Possible values are: 'REGIONAL', 'ZONAL'. |
locations | jsonb | | The list of Google Compute Engine zones. |
management | jsonb | | Node management configuration for this node pool. |
max_pods_constraint | jsonb | | The constraint on the maximum number of pods that can be run simultaneously on a node in the node pool. |
name | text | = | The name of the node pool. |
pod_ipv4_cidr_size | bigint | | The pod CIDR block size per node in this node pool. |
project | text | =, !=, ~~, ~~*, !~~, !~~* | The GCP Project in which the resource is located. |
self_link | text | | Server-defined URL for the resource. |
sp_connection_name | text | =, !=, ~~, ~~*, !~~, !~~* | Steampipe connection name. |
sp_ctx | jsonb | | Steampipe context in JSON form. |
status | text | | The status of the nodes in this pool instance. |
title | text | | Title of the resource. |
upgrade_settings | jsonb | | Upgrade settings control disruption and speed of the upgrade. |
version | text | | The Kubernetes version of this node. |
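The columns that list operators (such as cluster_name, location, name, and project) can be used as qualifiers to scope a query to a specific cluster or location. The sketch below assumes hypothetical values for the cluster name and location, and reads the autoscaling column based on the GKE NodePoolAutoscaling JSON shape:

select
  name,
  version,
  status,
  autoscaling ->> 'enabled' as autoscaling_enabled
from
  gcp_kubernetes_node_pool
where
  cluster_name = 'demo-cluster'
  and location = 'us-central1';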
Export
This table is available as a standalone Exporter CLI. Steampipe exporters are stand-alone binaries that allow you to extract data using Steampipe plugins without a database.
You can download the tarball for your platform from the Releases page, but it is simplest to install them with the steampipe_export_installer.sh script:
/bin/sh -c "$(curl -fsSL https://steampipe.io/install/export.sh)" -- gcp
You can pass the configuration to the command with the --config argument:
steampipe_export_gcp --config '<your_config>' gcp_kubernetes_node_pool
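As an illustrative sketch, the configuration string takes the same arguments as a GCP plugin connection; the project ID below is a hypothetical placeholder:

steampipe_export_gcp --config 'project = "my-gcp-project"' gcp_kubernetes_node_pool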