steampipe plugin install aws

Table: aws_ebs_volume_metric_write_ops - Query AWS Elastic Block Store (EBS) using SQL

AWS Elastic Block Store (EBS) is a high-performance block storage service designed for use with Amazon EC2 for both throughput-intensive and transaction-intensive workloads at any scale. It provides persistent block-level storage volumes for use with EC2 instances. The write_ops metric corresponds to the Amazon CloudWatch VolumeWriteOps metric and reports the number of write operations performed on a volume during each sample period.

Table Usage Guide

The aws_ebs_volume_metric_write_ops table in Steampipe provides you with information about the write operations metrics of EBS volumes within AWS Elastic Block Store (EBS). This table allows you, as a DevOps engineer, to query volume-specific details, including the number of write operations, the timestamp of the data point, and the statistical value of the data point. You can utilize this table to gather insights on EBS volumes, such as volume performance, write load, and more. The schema outlines the various attributes of the EBS volume write operations metrics for you, including the volume ID, timestamp, and statistical values.

The aws_ebs_volume_metric_write_ops table provides you with metric statistics at 5-minute intervals for the most recent 5 days.
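
Because the table only returns data points from that 5-day window, you can narrow a query to a recent slice of it with a timestamp filter. A minimal sketch (the 1-day interval is illustrative):

select
  volume_id,
  timestamp,
  average
from
  aws_ebs_volume_metric_write_ops
where
  timestamp >= now() - interval '1 day'
order by
  volume_id,
  timestamp;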

Examples

Basic info

Gain insights into the performance of your AWS EBS volumes over time. This query helps you monitor write operations and identify potential bottlenecks or performance issues.

select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_write_ops
order by
  volume_id,
  timestamp;
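
If the per-interval detail is more than you need, you can also roll the window up into one summary row per volume. A sketch along these lines (the aliases are illustrative; sum(sum) totals the per-interval sum column):

select
  volume_id,
  sum(sum) as total_write_ops,
  max(maximum) as peak_write_ops
from
  aws_ebs_volume_metric_write_ops
group by
  volume_id
order by
  total_write_ops desc;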

Intervals where volumes exceed 1000 average write ops

Identify intervals where the average write operations on AWS EBS volumes exceed 1000. This can be useful in monitoring performance and identifying potential bottlenecks or areas for optimization.

select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_write_ops
where
  average > 1000
order by
  volume_id,
  timestamp;
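
To see how often each volume crosses the threshold, rather than listing every offending interval, a grouped variant of the same filter can help; for example:

select
  volume_id,
  count(*) as intervals_over_threshold
from
  aws_ebs_volume_metric_write_ops
where
  average > 1000
group by
  volume_id
order by
  intervals_over_threshold desc;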

Intervals where volumes exceed 8000 max write ops

Identify intervals where the maximum write operations on AWS EBS volumes exceed 8000. This can be useful in understanding the load on your EBS volumes, and may help you optimize your resources for better performance.

select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_write_ops
where
  maximum > 8000
order by
  volume_id,
  timestamp;
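
Since each data point covers a 5-minute (300-second) period, dividing a period's sum by 300 approximates the volume's write IOPS over that interval, which is easier to compare against provisioned IOPS. A sketch, assuming the 300-second granularity noted in the usage guide:

select
  volume_id,
  timestamp,
  round(sum / 300) as approx_write_iops
from
  aws_ebs_volume_metric_write_ops
order by
  approx_write_iops desc;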

Read, Write, and Total IOPS

Explore the performance of your storage volumes by analyzing the average, maximum, and minimum Input/Output operations per second (IOPS). This allows you to monitor and optimize your storage efficiency, ensuring smooth operations.

select
  r.volume_id,
  r.timestamp,
  round(r.average) + round(w.average) as iops_avg,
  round(r.average) as read_ops_avg,
  round(w.average) as write_ops_avg,
  round(r.maximum) + round(w.maximum) as iops_max,
  round(r.maximum) as read_ops_max,
  round(w.maximum) as write_ops_max,
  round(r.minimum) + round(w.minimum) as iops_min,
  round(r.minimum) as read_ops_min,
  round(w.minimum) as write_ops_min
from
  aws_ebs_volume_metric_read_ops as r,
  aws_ebs_volume_metric_write_ops as w
where
  r.volume_id = w.volume_id
  and r.timestamp = w.timestamp
order by
  r.volume_id,
  r.timestamp;
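
Building on the same join, you could also compute how write-heavy each interval is as a share of total IOPS; a sketch (the nullif guards against division by zero when a volume is idle):

select
  r.volume_id,
  r.timestamp,
  round(100 * w.average / nullif(r.average + w.average, 0)) as write_pct
from
  aws_ebs_volume_metric_read_ops as r,
  aws_ebs_volume_metric_write_ops as w
where
  r.volume_id = w.volume_id
  and r.timestamp = w.timestamp
order by
  write_pct desc nulls last;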

Schema for aws_ebs_volume_metric_write_ops

Name | Type | Operators | Description
_ctx | jsonb | | Steampipe context in JSON form.
account_id | text | =, !=, ~~, ~~*, !~~, !~~* | The AWS Account ID in which the resource is located.
average | double precision | | The average of the metric values that correspond to the data point.
maximum | double precision | | The maximum metric value for the data point.
metric_name | text | | The name of the metric.
minimum | double precision | | The minimum metric value for the data point.
namespace | text | | The metric namespace.
partition | text | | The AWS partition in which the resource is located (aws, aws-cn, or aws-us-gov).
region | text | | The AWS Region in which the resource is located.
sample_count | double precision | | The number of metric values that contributed to the aggregate value of this data point.
sp_connection_name | text | =, !=, ~~, ~~*, !~~, !~~* | Steampipe connection name.
sp_ctx | jsonb | | Steampipe context in JSON form.
sum | double precision | | The sum of the metric values for the data point.
timestamp | timestamp with time zone | | The time stamp used for the data point.
unit | text | | The standard unit for the data point.
volume_id | text | | The EBS Volume ID.

Export

This table is also available through the standalone exporter CLI. Steampipe exporters are standalone binaries that let you extract data using Steampipe plugins without a database.

You can download the tarball for your platform from the Releases page, but it is simplest to install them with the steampipe_export_installer.sh script:

/bin/sh -c "$(curl -fsSL https://steampipe.io/install/export.sh)" -- aws

You can pass the configuration to the command with the --config argument:

steampipe_export_aws --config '<your_config>' aws_ebs_volume_metric_write_ops
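
For example, to scope the export to a single region, you could pass the plugin's regions argument in the config string (the region shown is illustrative):

steampipe_export_aws --config 'regions = ["us-east-1"]' aws_ebs_volume_metric_write_ops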