Table: azure_compute_disk_metric_read_ops_daily - Query Azure Compute Disk Metrics using SQL
Azure Compute Disk Metrics is a feature within Microsoft Azure that surfaces performance data for Azure managed disks, including detailed information about disk read operations, write operations, and other disk performance metrics. It helps users monitor and optimize the performance of their Azure managed disks.
Table Usage Guide
The azure_compute_disk_metric_read_ops_daily table provides insights into the daily read operations of Azure managed disks. As a system administrator or DevOps engineer, use this table to monitor disk performance and identify potential bottlenecks or performance issues. This table can be particularly useful in optimizing disk usage and ensuring efficient operation of your Azure resources.
Examples
Basic info
Explore the daily read operations on Azure compute disks to gain insights into the average, minimum, and maximum operations, along with the sample count. This is useful for tracking disk usage patterns and identifying any unusual activity or potential bottlenecks.
select
  name,
  timestamp,
  minimum,
  maximum,
  average,
  sample_count
from
  azure_compute_disk_metric_read_ops_daily
order by
  name,
  timestamp;
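Beyond the raw daily data points, you can aggregate across days to surface the busiest disks first. A minimal sketch, assuming a PostgreSQL backend (the Steampipe default); the 7-day window is an arbitrary choice:

select
  name,
  avg(average) as avg_daily_read_ops,
  max(maximum) as peak_read_ops
from
  azure_compute_disk_metric_read_ops_daily
where
  timestamp >= current_date - interval '7 days'
group by
  name
order by
  avg_daily_read_ops desc;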
Operations Over 10 Average Read Ops
Monitor the performance of Azure compute disks by identifying those with an average read operation count exceeding 10 per day. This highlights disks where usage may be higher than expected, supporting effective resource management.
For PostgreSQL:

select
  name,
  timestamp,
  round(minimum :: numeric, 2) as min_read_ops,
  round(maximum :: numeric, 2) as max_read_ops,
  round(average :: numeric, 2) as avg_read_ops,
  sample_count
from
  azure_compute_disk_metric_read_ops_daily
where
  average > 10
order by
  name,
  timestamp;

For SQLite:

select
  name,
  timestamp,
  round(minimum, 2) as min_read_ops,
  round(maximum, 2) as max_read_ops,
  round(average, 2) as avg_read_ops,
  sample_count
from
  azure_compute_disk_metric_read_ops_daily
where
  average > 10
order by
  name,
  timestamp;
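The same filter can be inverted to flag disks that appear idle and may be candidates for review or cleanup. A minimal sketch; treating a zero daily average as idle is an assumption you may want to relax:

select
  name,
  timestamp,
  average as avg_read_ops,
  sample_count
from
  azure_compute_disk_metric_read_ops_daily
where
  average = 0
order by
  name,
  timestamp;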
Schema for azure_compute_disk_metric_read_ops_daily
Name | Type | Operators | Description
---|---|---|---
_ctx | jsonb | | Steampipe context in JSON form.
average | double precision | | The average of the metric values that correspond to the data point.
cloud_environment | text | | The Azure Cloud Environment.
maximum | double precision | | The maximum metric value for the data point.
minimum | double precision | | The minimum metric value for the data point.
name | text | | The name of the disk.
resource_group | text | | The resource group which holds this resource.
sample_count | double precision | | The number of metric values that contributed to the aggregate value of this data point.
sp_connection_name | text | =, !=, ~~, ~~*, !~~, !~~* | Steampipe connection name.
sp_ctx | jsonb | | Steampipe context in JSON form.
subscription_id | text | =, !=, ~~, ~~*, !~~, !~~* | The Azure Subscription ID in which the resource is located.
sum | double precision | | The sum of the metric values for the data point.
timestamp | timestamp with time zone | | The time stamp used for the data point.
unit | text | | The units in which the metric value is reported.
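The Operators column shows which qualifiers a column supports; for example, subscription_id supports =, so a filter on it can be handed to the plugin rather than applied after all rows are fetched. A sketch; the subscription ID below is a placeholder:

select
  name,
  timestamp,
  average
from
  azure_compute_disk_metric_read_ops_daily
where
  subscription_id = '00000000-0000-0000-0000-000000000000'
order by
  name,
  timestamp;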
Export
This table is available through a standalone exporter CLI. Steampipe exporters are standalone binaries that let you extract data using Steampipe plugins without a database.
You can download the tarball for your platform from the Releases page, but it is simplest to install them with the steampipe_export_installer.sh script:
/bin/sh -c "$(curl -fsSL https://steampipe.io/install/export.sh)" -- azure
You can pass the configuration to the command with the --config argument:
steampipe_export_azure --config '<your_config>' azure_compute_disk_metric_read_ops_daily
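The value passed to --config mirrors the connection arguments from the Azure plugin's configuration file. For example, to scope the export to a single subscription (a sketch; the subscription ID is a placeholder, and this assumes the exporter accepts the same arguments as the plugin's connection config):

steampipe_export_azure --config 'subscription_id = "00000000-0000-0000-0000-000000000000"' azure_compute_disk_metric_read_ops_daily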