# Table: aws_ebs_volume_metric_write_ops_hourly - Query AWS EBS Volume Metrics using SQL
AWS EBS (Elastic Block Store) volume metrics allow you to monitor the performance of your EBS volumes for analysis and troubleshooting. The `write_ops` metric tracks the number of write operations performed on a specified EBS volume per hour. This data can be queried using SQL, giving you an accessible way to monitor and manage the performance of your EBS volumes.
## Table Usage Guide
The `aws_ebs_volume_metric_write_ops_hourly` table in Steampipe provides you with information about the hourly write operations metrics of AWS Elastic Block Store (EBS) volumes. This table allows you, as a cloud engineer, a member of a DevOps team, or a data analyst, to query and analyze the hourly write operation details of EBS volumes, including the number of write operations and the timestamp of the data points. You can utilize this table to track write operations, monitor EBS performance, and plan capacity. The schema outlines the various attributes of the EBS volume metrics for you, including the volume ID, timestamp, and the number of write operations.

The `aws_ebs_volume_metric_write_ops_hourly` table provides you with metric statistics at 1 hour intervals for the most recent 60 days.
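Because the table only retains hourly data points for 60 days, a common pattern for capacity planning is to roll the hourly statistics up to daily totals per volume. The query below is a minimal sketch of that aggregation, assuming PostgreSQL's `date_trunc` function as available in Steampipe's engine:

```sql
-- Roll hourly write-ops data points up to daily totals and peaks per volume.
select
  volume_id,
  date_trunc('day', timestamp) as day,
  sum(sum) as total_write_ops,
  max(maximum) as peak_write_ops
from
  aws_ebs_volume_metric_write_ops_hourly
group by
  volume_id,
  date_trunc('day', timestamp)
order by
  volume_id,
  day;
```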
## Examples

### Basic info
Gain insights into the performance of your AWS EBS volumes by analyzing write operations over time. This can assist in identifying potential issues, optimizing resource usage, and planning capacity.
```sql
select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_write_ops_hourly
order by
  volume_id,
  timestamp;
```
### Intervals where volumes exceed 1000 average write ops
Discover the instances where the average write operations on your AWS EBS volumes exceed 1000 per hour. This can be useful to identify potential performance issues or unusual activity on your volumes.
```sql
select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_write_ops_hourly
where
  average > 1000
order by
  volume_id,
  timestamp;
```
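To rank volumes by how often they breach this threshold, rather than listing every interval, you can aggregate over the same predicate. This is a sketch that keeps the 1000-ops threshold from the example above:

```sql
-- Count, per volume, the hourly intervals whose average write ops exceeded 1000.
select
  volume_id,
  count(*) as hours_above_threshold
from
  aws_ebs_volume_metric_write_ops_hourly
where
  average > 1000
group by
  volume_id
order by
  hours_above_threshold desc;
```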
### Intervals where volumes exceed 8000 max write ops
Identify instances where the maximum write operations on AWS EBS volumes exceed 8000 within an hour. This can help monitor and manage storage performance, ensuring optimal operation and preventing potential issues.
```sql
select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_write_ops_hourly
where
  maximum > 8000
order by
  volume_id,
  timestamp;
```
### Intervals where volume average IOPS exceeds provisioned IOPS
Identify instances where the average input/output operations per second (IOPS) surpasses the provisioned IOPS on your AWS EBS volumes. This is crucial for optimizing your storage performance and preventing any potential bottlenecks.
```sql
select
  r.volume_id,
  r.timestamp,
  v.iops as provisioned_iops,
  round(r.average) + round(w.average) as iops_avg,
  round(r.average) as read_ops_avg,
  round(w.average) as write_ops_avg
from
  aws_ebs_volume_metric_read_ops_hourly as r,
  aws_ebs_volume_metric_write_ops_hourly as w,
  aws_ebs_volume as v
where
  r.volume_id = w.volume_id
  and r.timestamp = w.timestamp
  and v.volume_id = r.volume_id
  and r.average + w.average > v.iops
order by
  r.volume_id,
  r.timestamp;
```
### Read, Write, and Total IOPS
Analyze the average, maximum, and minimum input/output operations per second (IOPS) for both read and write operations on AWS EBS volumes. This helps in assessing the performance and identifying any potential bottlenecks in data transfer.
```sql
select
  r.volume_id,
  r.timestamp,
  round(r.average) + round(w.average) as iops_avg,
  round(r.average) as read_ops_avg,
  round(w.average) as write_ops_avg,
  round(r.maximum) + round(w.maximum) as iops_max,
  round(r.maximum) as read_ops_max,
  round(w.maximum) as write_ops_max,
  round(r.minimum) + round(w.minimum) as iops_min,
  round(r.minimum) as read_ops_min,
  round(w.minimum) as write_ops_min
from
  aws_ebs_volume_metric_read_ops_hourly as r,
  aws_ebs_volume_metric_write_ops_hourly as w
where
  r.volume_id = w.volume_id
  and r.timestamp = w.timestamp
order by
  r.volume_id,
  r.timestamp;
```
## Schema for aws_ebs_volume_metric_write_ops_hourly
| Name | Type | Operators | Description |
|------|------|-----------|-------------|
| _ctx | jsonb | | Steampipe context in JSON form. |
| account_id | text | =, !=, ~~, ~~*, !~~, !~~* | The AWS Account ID in which the resource is located. |
| average | double precision | | The average of the metric values that correspond to the data point. |
| maximum | double precision | | The maximum metric value for the data point. |
| metric_name | text | | The name of the metric. |
| minimum | double precision | | The minimum metric value for the data point. |
| namespace | text | | The metric namespace. |
| partition | text | | The AWS partition in which the resource is located (aws, aws-cn, or aws-us-gov). |
| region | text | | The AWS Region in which the resource is located. |
| sample_count | double precision | | The number of metric values that contributed to the aggregate value of this data point. |
| sp_connection_name | text | =, !=, ~~, ~~*, !~~, !~~* | Steampipe connection name. |
| sp_ctx | jsonb | | Steampipe context in JSON form. |
| sum | double precision | | The sum of the metric values for the data point. |
| timestamp | timestamp with time zone | | The time stamp used for the data point. |
| unit | text | | The standard unit for the data point. |
| volume_id | text | | The EBS Volume ID. |
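Since `account_id` supports the equality and pattern-matching operators listed above, you can scope a query to a single account. A minimal sketch, where the account ID is a hypothetical placeholder:

```sql
select
  volume_id,
  timestamp,
  average
from
  aws_ebs_volume_metric_write_ops_hourly
where
  account_id = '123456789012' -- hypothetical placeholder account ID
order by
  timestamp desc;
```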
## Export
This table is available as a standalone exporter CLI. Steampipe exporters are standalone binaries that allow you to extract data using Steampipe plugins without a database.
You can download the tarball for your platform from the Releases page, but it is simplest to install them with the `steampipe_export_installer.sh` script:
```sh
/bin/sh -c "$(curl -fsSL https://steampipe.io/install/export.sh)" -- aws
```
You can pass the configuration to the command with the `--config` argument:
```sh
steampipe_export_aws --config '<your_config>' aws_ebs_volume_metric_write_ops_hourly
```
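As a usage sketch, the exporter CLI also accepts flags such as `--select`, `--where`, and `--limit` to shape the output; run `steampipe_export_aws --help` to confirm the flags available in your version:

```sh
# Export only high-write intervals with a few columns (flags per the exporter's --help).
steampipe_export_aws --config '<your_config>' \
  --select "volume_id,timestamp,average" \
  --where "average > 1000" \
  aws_ebs_volume_metric_write_ops_hourly
```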