Table: gcp_compute_disk_metric_read_ops_hourly - Query Google Cloud Platform Compute Engine Disks using SQL

To use this table, first install the GCP plugin:

steampipe plugin install gcp

Google Cloud Compute Engine Disks are persistent, high-performance block storage for Google Cloud Platform virtual machines. They are designed to offer reliable and efficient storage for your workloads, with features such as automatic encryption, snapshot capabilities, and seamless integration with Google Cloud Platform services. Compute Engine Disks provide the flexibility to balance cost and performance for your storage needs.

Table Usage Guide

The gcp_compute_disk_metric_read_ops_hourly table provides insights into Compute Engine Disks within Google Cloud Platform. As a DevOps engineer, explore disk-specific details through this table, including hourly read operations metrics. Utilize it to uncover information about disk usage patterns, such as high read operations, which could indicate potential performance issues.

GCP Monitoring metrics provide data about the performance of your systems. The gcp_compute_disk_metric_read_ops_hourly table provides metric statistics at 1-hour intervals for the most recent 60 days.
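Because each row is a one-hour data point, you can filter on the timestamp column to narrow the window. A minimal sketch (PostgreSQL syntax; the seven-day window is illustrative, and the filter is applied after the plugin returns the hourly series):

select
  name,
  timestamp,
  average
from
  gcp_compute_disk_metric_read_ops_hourly
where
  timestamp >= now() - interval '7 days'
order by
  timestamp desc;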

Examples

Basic info

Explore the range and average of read operations on your Google Cloud Platform compute disks over an hourly period. This can help you understand disk usage patterns and identify potential areas for performance optimization.

select
  name,
  minimum,
  maximum,
  average,
  sample_count
from
  gcp_compute_disk_metric_read_ops_hourly
order by
  name;

Intervals averaging over 100 read operations

Identify disks whose hourly read operations average more than 100. Sustained high read activity can point to hot spots that need optimization or additional capacity. Note that the PostgreSQL version casts to numeric because round(double precision, integer) is not defined in PostgreSQL; SQLite's round accepts floating-point values directly.

PostgreSQL:

select
  name,
  round(minimum :: numeric, 2) as min_read_ops,
  round(maximum :: numeric, 2) as max_read_ops,
  round(average :: numeric, 2) as avg_read_ops,
  sample_count
from
  gcp_compute_disk_metric_read_ops_hourly
where
  average > 100
order by
  name;

SQLite:

select
  name,
  round(minimum, 2) as min_read_ops,
  round(maximum, 2) as max_read_ops,
  round(average, 2) as avg_read_ops,
  sample_count
from
  gcp_compute_disk_metric_read_ops_hourly
where
  average > 100
order by
  name;

Intervals averaging fewer than 10 read operations

Identify underutilized disks by finding intervals where read operations average fewer than ten. This can help you right-size resources and manage costs effectively.

PostgreSQL:

select
  name,
  round(minimum :: numeric, 2) as min_read_ops,
  round(maximum :: numeric, 2) as max_read_ops,
  round(average :: numeric, 2) as avg_read_ops,
  sample_count
from
  gcp_compute_disk_metric_read_ops_hourly
where
  average < 10
order by
  name;

SQLite:

select
  name,
  round(minimum, 2) as min_read_ops,
  round(maximum, 2) as max_read_ops,
  round(average, 2) as avg_read_ops,
  sample_count
from
  gcp_compute_disk_metric_read_ops_hourly
where
  average < 10
order by
  name;
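To put read activity in context, you can join this table with the plugin's gcp_compute_disk table on the disk name. A sketch, assuming both tables are queried through the same connection (size_gb is a column of gcp_compute_disk):

select
  d.name,
  d.size_gb,
  round(avg(m.average) :: numeric, 2) as avg_read_ops
from
  gcp_compute_disk as d
  join gcp_compute_disk_metric_read_ops_hourly as m on d.name = m.name
group by
  d.name,
  d.size_gb
order by
  avg_read_ops desc;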

Schema for gcp_compute_disk_metric_read_ops_hourly

| Name | Type | Operators | Description |
| --- | --- | --- | --- |
| _ctx | jsonb | | Steampipe context in JSON form. |
| average | double precision | | The average of the metric values that correspond to the data point. |
| location | text | | The GCP multi-region, region, or zone in which the resource is located. |
| maximum | double precision | | The maximum metric value for the data point. |
| metadata | jsonb | | The associated monitored resource metadata. |
| metric_kind | text | | The metric type. |
| metric_labels | jsonb | | The set of label values that uniquely identify this metric. |
| metric_type | text | | The associated metric. A fully-specified metric used to identify the time series. |
| minimum | double precision | | The minimum metric value for the data point. |
| name | text | = | The name of the disk. |
| project | text | =, !=, ~~, ~~*, !~~, !~~* | The GCP Project in which the resource is located. |
| resource | jsonb | | The associated monitored resource. |
| sample_count | double precision | | The number of metric values that contributed to the aggregate value of this data point. |
| sp_connection_name | text | =, !=, ~~, ~~*, !~~, !~~* | Steampipe connection name. |
| sp_ctx | jsonb | | Steampipe context in JSON form. |
| sum | double precision | | The sum of the metric values for the data point. |
| timestamp | timestamp with time zone | | The time stamp used for the data point. |
| unit | text | | The unit in which the metric value is reported. |
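The Operators column marks key columns that the plugin can use to scope its calls to the Monitoring API. For example, you can qualify a query on project and name (the identifiers below are placeholders):

select
  name,
  timestamp,
  average
from
  gcp_compute_disk_metric_read_ops_hourly
where
  project = 'my-gcp-project'
  and name = 'my-disk'
order by
  timestamp desc;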

Export

This table is available as a standalone Exporter CLI. Steampipe exporters are stand-alone binaries that allow you to extract data using Steampipe plugins without a database.

You can download the tarball for your platform from the Releases page, but the simplest way to install the exporter is with the steampipe_export_installer.sh script:

/bin/sh -c "$(curl -fsSL https://steampipe.io/install/export.sh)" -- gcp

You can pass the configuration to the command with the --config argument:

steampipe_export_gcp --config '<your_config>' gcp_compute_disk_metric_read_ops_hourly
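For example, with a minimal connection config (a sketch; my-gcp-project is a placeholder, and project is the GCP plugin's connection argument):

steampipe_export_gcp --config 'project = "my-gcp-project"' gcp_compute_disk_metric_read_ops_hourly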