Table: gcp_compute_disk_metric_read_ops_daily - Query GCP Compute Disk Metrics using SQL
Google Cloud Compute Disks are persistent, high-performance block storage for Google Cloud's virtual machines (VMs). These disks are designed to offer reliable and efficient storage for your VMs, with the added benefit of easy integration with Google Cloud's suite of data management tools. They are suitable for both boot and non-boot purposes, and come in a variety of types to suit different needs.
Table Usage Guide
The gcp_compute_disk_metric_read_ops_daily table provides insights into Compute Disk metrics within Google Cloud Platform. As a system administrator, explore disk-specific details through this table, such as the daily read operations count, to monitor disk usage patterns and identify potential anomalies. Use it to uncover information about disks, such as those with high read operations, which suggests heavy disk usage and a potential need for resource optimization.
GCP Monitoring metrics provide data about the performance of your systems. The gcp_compute_disk_metric_read_ops_daily table provides metric statistics at 24-hour intervals for the last year.
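For instance, to see how these daily data points are laid out for a single disk, you can order by the timestamp column; a minimal sketch (the disk name is a placeholder):
select
  name,
  timestamp,
  average,
  maximum
from
  gcp_compute_disk_metric_read_ops_daily
where
  name = 'my-example-disk'
order by
  timestamp desc
limit 7;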
Examples
Basic info
Explore the daily read operations of your Google Cloud Platform compute disk. This query helps you understand the operational range and average count, allowing you to optimize disk usage and performance.
select
  name,
  minimum,
  maximum,
  average,
  sample_count
from
  gcp_compute_disk_metric_read_ops_daily
order by
  name;
Intervals averaging over 100 read ops
Identify which disks in your GCP Compute environment are averaging over 100 read operations per day, allowing you to spot potential bottlenecks and optimize for better performance. This can be particularly useful for identifying high-usage disks that may need additional resources or configuration changes.
-- PostgreSQL
select
  name,
  round(minimum :: numeric, 2) as min_read_ops,
  round(maximum :: numeric, 2) as max_read_ops,
  round(average :: numeric, 2) as avg_read_ops,
  sample_count
from
  gcp_compute_disk_metric_read_ops_daily
where
  average > 100
order by
  name;

-- SQLite
select
  name,
  round(minimum, 2) as min_read_ops,
  round(maximum, 2) as max_read_ops,
  round(average, 2) as avg_read_ops,
  sample_count
from
  gcp_compute_disk_metric_read_ops_daily
where
  average > 100
order by
  name;
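As a follow-up, you can include the timestamp column to pinpoint the days on which a disk exceeded the threshold; a minimal sketch (PostgreSQL syntax):
select
  name,
  timestamp,
  round(average :: numeric, 2) as avg_read_ops
from
  gcp_compute_disk_metric_read_ops_daily
where
  average > 100
order by
  name,
  timestamp desc;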
Intervals averaging fewer than 10 read ops
Review the performance of your GCP Compute disks that average fewer than 10 read operations per day. This can help in identifying underutilized resources and optimizing your storage configuration.
-- PostgreSQL
select
  name,
  round(minimum :: numeric, 2) as min_read_ops,
  round(maximum :: numeric, 2) as max_read_ops,
  round(average :: numeric, 2) as avg_read_ops,
  sample_count
from
  gcp_compute_disk_metric_read_ops_daily
where
  average < 10
order by
  name;

-- SQLite
select
  name,
  round(minimum, 2) as min_read_ops,
  round(maximum, 2) as max_read_ops,
  round(average, 2) as avg_read_ops,
  sample_count
from
  gcp_compute_disk_metric_read_ops_daily
where
  average < 10
order by
  name;
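To judge whether a quiet disk is worth resizing or removing, you can enrich the metric data with disk details; a sketch, assuming the gcp_compute_disk table from the same plugin (which exposes a size_gb column) and PostgreSQL syntax:
select
  m.name,
  d.size_gb,
  round(m.average :: numeric, 2) as avg_read_ops
from
  gcp_compute_disk_metric_read_ops_daily as m
  join gcp_compute_disk as d on m.name = d.name
where
  m.average < 10
order by
  m.name;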
Schema for gcp_compute_disk_metric_read_ops_daily
| Name | Type | Operators | Description |
|---|---|---|---|
| _ctx | jsonb | | Steampipe context in JSON form. |
| average | double precision | | The average of the metric values that correspond to the data point. |
| location | text | | The GCP multi-region, region, or zone in which the resource is located. |
| maximum | double precision | | The maximum metric value for the data point. |
| metadata | jsonb | | The associated monitored resource metadata. |
| metric_kind | text | | The metric type. |
| metric_labels | jsonb | | The set of label values that uniquely identify this metric. |
| metric_type | text | | The associated metric. A fully-specified metric used to identify the time series. |
| minimum | double precision | | The minimum metric value for the data point. |
| name | text | = | The name of the disk. |
| project | text | =, !=, ~~, ~~*, !~~, !~~* | The GCP Project in which the resource is located. |
| resource | jsonb | | The associated monitored resource. |
| sample_count | double precision | | The number of metric values that contributed to the aggregate value of this data point. |
| sp_connection_name | text | =, !=, ~~, ~~*, !~~, !~~* | Steampipe connection name. |
| sp_ctx | jsonb | | Steampipe context in JSON form. |
| sum | double precision | | The sum of the metric values for the data point. |
| timestamp | timestamp with time zone | | The time stamp used for the data point. |
| unit | text | | The unit in which the metric value is reported. |
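Because the project column supports the = operator (see the table above), you can scope results to a single project directly in the query; a minimal sketch (the project ID is a placeholder):
select
  name,
  timestamp,
  average
from
  gcp_compute_disk_metric_read_ops_daily
where
  project = 'my-gcp-project'
order by
  timestamp desc;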
Export
This table is available as a standalone Exporter CLI. Steampipe exporters are stand-alone binaries that allow you to extract data using Steampipe plugins without a database.
You can download the tarball for your platform from the Releases page, but it is simplest to install them with the steampipe_export_installer.sh script:
/bin/sh -c "$(curl -fsSL https://steampipe.io/install/export.sh)" -- gcp
You can pass the configuration to the command with the --config argument:
steampipe_export_gcp --config '<your_config>' gcp_compute_disk_metric_read_ops_daily
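For example, assuming a connection configured with the GCP plugin's project argument (the project ID below is a placeholder), the invocation might look like this:
steampipe_export_gcp --config 'project="my-gcp-project"' gcp_compute_disk_metric_read_ops_daily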