Table: gcp_compute_disk_metric_write_ops_daily - Query GCP Compute Engine Disk Metrics using SQL
Google Cloud's Compute Engine Disk is a block storage system for Compute Engine virtual machines. Compute Engine disks provide persistent disk storage for instances in any zone or region and are integrated with Google's infrastructure for data durability and security.
Table Usage Guide
The gcp_compute_disk_metric_write_ops_daily table provides insights into Compute Engine disk metrics within Google Cloud Platform (GCP). As a system administrator, you can explore disk-specific details through this table, including the daily write operations count. Use it to uncover information about disk usage, such as disks with a high number of write operations, and to identify potential areas for efficiency improvement.
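For instance, here is a minimal sketch that ranks disks by their highest daily average write operations, using only columns from this table:

```sql
-- Rank disks by their highest daily average write ops across the retained history
select
  name,
  max(average) as peak_daily_avg_write_ops
from
  gcp_compute_disk_metric_write_ops_daily
group by
  name
order by
  peak_daily_avg_write_ops desc
limit 10;
```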
GCP Monitoring metrics provide data about the performance of your systems. The gcp_compute_disk_metric_write_ops_daily table provides metric statistics at 24-hour intervals for the last year.
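Because each row is a single 24-hour data point, you can restrict queries to a recent window by filtering on the timestamp column. A minimal sketch, assuming a Postgres-backed Steampipe database (the interval syntax differs in SQLite):

```sql
-- Daily write-op averages for the last 30 days
select
  timestamp,
  name,
  average
from
  gcp_compute_disk_metric_write_ops_daily
where
  timestamp >= now() - interval '30 days'
order by
  timestamp desc,
  name;
```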
Examples
Basic info
Explore which Google Cloud Platform compute disks have the highest and lowest average daily write operations. This can help identify disks that are under- or over-utilized, allowing for better resource allocation and performance optimization.
```sql
select
  name,
  minimum,
  maximum,
  average,
  sample_count
from
  gcp_compute_disk_metric_write_ops_daily
order by
  name;
```
Intervals averaging over 100 write ops
Identify disks that average more than 100 write operations per day. This can be useful for spotting potential bottlenecks or performance issues in the system.
Postgres:

```sql
select
  name,
  round(minimum::numeric, 2) as min_write_ops,
  round(maximum::numeric, 2) as max_write_ops,
  round(average::numeric, 2) as avg_write_ops,
  sample_count
from
  gcp_compute_disk_metric_write_ops_daily
where
  average > 100
order by
  name;
```

SQLite:

```sql
select
  name,
  round(minimum, 2) as min_write_ops,
  round(maximum, 2) as max_write_ops,
  round(average, 2) as avg_write_ops,
  sample_count
from
  gcp_compute_disk_metric_write_ops_daily
where
  average > 100
order by
  name;
```
Intervals averaging fewer than 1 write op
Analyze the usage patterns of your Google Cloud Platform compute disks by identifying intervals where the average daily write operations fall below 1. This can help optimize resource allocation by pinpointing under-utilized disks.
Postgres:

```sql
select
  name,
  round(minimum::numeric, 2) as min_write_ops,
  round(maximum::numeric, 2) as max_write_ops,
  round(average::numeric, 2) as avg_write_ops,
  sample_count
from
  gcp_compute_disk_metric_write_ops_daily
where
  average < 1
order by
  name;
```

SQLite:

```sql
select
  name,
  round(minimum, 2) as min_write_ops,
  round(maximum, 2) as max_write_ops,
  round(average, 2) as avg_write_ops,
  sample_count
from
  gcp_compute_disk_metric_write_ops_daily
where
  average < 1
order by
  name;
```
Schema for gcp_compute_disk_metric_write_ops_daily
Name | Type | Operators | Description |
---|---|---|---|
_ctx | jsonb | | Steampipe context in JSON form. |
average | double precision | | The average of the metric values that correspond to the data point. |
location | text | | The GCP multi-region, region, or zone in which the resource is located. |
maximum | double precision | | The maximum metric value for the data point. |
metadata | jsonb | | The associated monitored resource metadata. |
metric_kind | text | | The metric type. |
metric_labels | jsonb | | The set of label values that uniquely identify this metric. |
metric_type | text | | The associated metric. A fully-specified metric used to identify the time series. |
minimum | double precision | | The minimum metric value for the data point. |
name | text | = | The name of the disk. |
project | text | =, !=, ~~, ~~*, !~~, !~~* | The GCP Project in which the resource is located. |
resource | jsonb | | The associated monitored resource. |
sample_count | double precision | | The number of metric values that contributed to the aggregate value of this data point. |
sp_connection_name | text | =, !=, ~~, ~~*, !~~, !~~* | Steampipe connection name. |
sp_ctx | jsonb | | Steampipe context in JSON form. |
sum | double precision | | The sum of the metric values for the data point. |
timestamp | timestamp with time zone | | The time stamp used for the data point. |
unit | text | | The unit in which the metric value is reported. |
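The Operators column lists the qualifiers that can be used against each column in a where clause; for example, name supports equality and project additionally supports inequality and pattern matching. A small sketch (the project ID and disk name below are placeholders):

```sql
-- Daily write-op metrics for a single disk in a specific project
select
  timestamp,
  minimum,
  maximum,
  average
from
  gcp_compute_disk_metric_write_ops_daily
where
  project = 'my-gcp-project'
  and name = 'my-disk'
order by
  timestamp desc;
```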
Export
This table is available as a standalone Exporter CLI. Steampipe exporters are stand-alone binaries that allow you to extract data using Steampipe plugins without a database.
You can download the tarball for your platform from the Releases page, but it is simplest to install them with the steampipe_export_installer.sh script:
```sh
/bin/sh -c "$(curl -fsSL https://steampipe.io/install/export.sh)" -- gcp
```
You can pass the configuration to the command with the --config argument:
```sh
steampipe_export_gcp --config '<your_config>' gcp_compute_disk_metric_write_ops_daily
```