# Table: aws_ebs_volume_metric_write_ops - Query AWS Elastic Block Store (EBS) using SQL

Amazon Elastic Block Store (EBS) is a high-performance block storage service designed for use with Amazon EC2 for both throughput- and transaction-intensive workloads at any scale. It provides persistent, block-level storage volumes for use with EC2 instances. The `write_ops` metric represents the number of write operations performed on an EBS volume.
## Table Usage Guide

The `aws_ebs_volume_metric_write_ops` table in Steampipe provides you with information about the write operations metrics of EBS volumes within AWS Elastic Block Store (EBS). This table allows you, as a DevOps engineer, to query volume-specific details, including the number of write operations, the timestamp of the data point, and the statistical value of the data point. You can utilize this table to gather insights on EBS volumes, such as volume performance, write load, and more. The schema outlines the various attributes of the EBS volume write operations metrics for you, including the volume ID, timestamp, and statistical values.
The `aws_ebs_volume_metric_write_ops` table provides you with metric statistics at 5-minute intervals for the most recent 5 days.
## Examples

### Basic info
Gain insights into the performance of your AWS EBS volumes over time. This query helps in monitoring the write operations, which aids in identifying potential bottlenecks or performance issues.
```sql
select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_write_ops
order by
  volume_id,
  timestamp;
```
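### Hourly write activity

Because Steampipe's query engine is PostgreSQL, you can also roll up the 5-minute samples yourself. The following sketch (using the standard `date_trunc` function and aggregates; it is not a query from the plugin's own docs) shows hourly average and peak write operations per volume:

```sql
select
  volume_id,
  date_trunc('hour', timestamp) as hour,
  round(avg(average)) as avg_write_ops,
  max(maximum) as peak_write_ops
from
  aws_ebs_volume_metric_write_ops
group by
  volume_id,
  date_trunc('hour', timestamp)
order by
  volume_id,
  hour;
```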
### Intervals where volumes exceed 1000 average write ops
Identify instances where the average write operations on AWS EBS volumes exceed 1000. This can be useful in monitoring performance and identifying potential bottlenecks or areas for optimization.
```sql
select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_write_ops
where
  average > 1000
order by
  volume_id,
  timestamp;
```
### Intervals where volumes exceed 8000 max write ops
Identify instances where the maximum write operations on AWS EBS volumes exceed 8000. This can be useful in understanding the load on your EBS volumes, and may help you optimize your resources for better performance.
```sql
select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_write_ops
where
  maximum > 8000
order by
  volume_id,
  timestamp;
```
### Read, Write, and Total IOPS
Explore the performance of your storage volumes by analyzing the average, maximum, and minimum Input/Output operations per second (IOPS). This allows you to monitor and optimize your storage efficiency, ensuring smooth operations.
```sql
select
  r.volume_id,
  r.timestamp,
  round(r.average) + round(w.average) as iops_avg,
  round(r.average) as read_ops_avg,
  round(w.average) as write_ops_avg,
  round(r.maximum) + round(w.maximum) as iops_max,
  round(r.maximum) as read_ops_max,
  round(w.maximum) as write_ops_max,
  round(r.minimum) + round(w.minimum) as iops_min,
  round(r.minimum) as read_ops_min,
  round(w.minimum) as write_ops_min
from
  aws_ebs_volume_metric_read_ops as r
  join aws_ebs_volume_metric_write_ops as w
    on r.volume_id = w.volume_id
    and r.timestamp = w.timestamp
order by
  r.volume_id,
  r.timestamp;
```
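### Correlate write load with volume configuration

Write metrics can also be joined against the plugin's `aws_ebs_volume` table to put load in the context of volume configuration. This sketch assumes that table's `volume_id`, `volume_type`, and `size` columns are available in your installed plugin version:

```sql
select
  v.volume_id,
  v.volume_type,
  v.size,
  max(m.maximum) as peak_write_ops
from
  aws_ebs_volume as v
  join aws_ebs_volume_metric_write_ops as m
    on v.volume_id = m.volume_id
group by
  v.volume_id,
  v.volume_type,
  v.size
order by
  peak_write_ops desc;
```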
## Schema for aws_ebs_volume_metric_write_ops

| Name | Type | Operators | Description |
|---|---|---|---|
| _ctx | jsonb | | Steampipe context in JSON form. |
| account_id | text | =, !=, ~~, ~~*, !~~, !~~* | The AWS Account ID in which the resource is located. |
| average | double precision | | The average of the metric values that correspond to the data point. |
| maximum | double precision | | The maximum metric value for the data point. |
| metric_name | text | | The name of the metric. |
| minimum | double precision | | The minimum metric value for the data point. |
| namespace | text | | The metric namespace. |
| partition | text | | The AWS partition in which the resource is located (aws, aws-cn, or aws-us-gov). |
| region | text | | The AWS Region in which the resource is located. |
| sample_count | double precision | | The number of metric values that contributed to the aggregate value of this data point. |
| sp_connection_name | text | =, !=, ~~, ~~*, !~~, !~~* | Steampipe connection name. |
| sp_ctx | jsonb | | Steampipe context in JSON form. |
| sum | double precision | | The sum of the metric values for the data point. |
| timestamp | timestamp with time zone | | The time stamp used for the data point. |
| unit | text | | The standard unit for the data point. |
| volume_id | text | | The EBS Volume ID. |
## Export

This table is available as a standalone Exporter CLI. Steampipe exporters are stand-alone binaries that allow you to extract data using Steampipe plugins without a database.

You can download the tarball for your platform from the Releases page, but it is simplest to install them with the `steampipe_export_installer.sh` script:
```sh
/bin/sh -c "$(curl -fsSL https://steampipe.io/install/export.sh)" -- aws
```
You can pass the configuration to the command with the `--config` argument:
```sh
steampipe_export_aws --config '<your_config>' aws_ebs_volume_metric_write_ops
```