Table: azure_compute_disk_metric_read_ops - Query Azure Compute Disk Metrics using SQL
Azure Compute Disk Metrics is a resource within Microsoft Azure that allows you to monitor and analyze the performance of your Azure managed disks. It provides detailed information about read and write operations, throughput, and latency for your disks. Azure Compute Disk Metrics helps you understand disk performance and identify potential bottlenecks or performance issues.
Table Usage Guide
The azure_compute_disk_metric_read_ops table provides insights into read operations on Azure managed disks. As a system administrator or DevOps engineer, explore disk-specific details through this table, including the number of read operations, the time of the operations, and associated metadata. Utilize it to monitor and analyze disk performance, identify potential bottlenecks, and optimize disk usage.
Examples
Basic info
Explore the performance of Azure compute disks over time by assessing the minimum, maximum, and average read operations. This can help determine potential bottlenecks and optimize disk usage for better system performance.
PostgreSQL:

select
  name,
  timestamp,
  minimum,
  maximum,
  average,
  sample_count
from
  azure_compute_disk_metric_read_ops
order by
  name,
  timestamp;

SQLite:

select
  name,
  timestamp,
  minimum,
  maximum,
  average,
  sample_count
from
  azure_compute_disk_metric_read_ops
order by
  name,
  timestamp;
Operations Over 10 Average
Identify intervals where a disk's average read operations exceed 10. This metric counts operations, not bytes; filtering on it helps you spot heavily used disks and optimize disk usage for improved system performance.
PostgreSQL:

select
  name,
  timestamp,
  round(minimum::numeric, 2) as min_read_ops,
  round(maximum::numeric, 2) as max_read_ops,
  round(average::numeric, 2) as avg_read_ops,
  sample_count
from
  azure_compute_disk_metric_read_ops
where
  average > 10
order by
  name,
  timestamp;

SQLite:

select
  name,
  timestamp,
  round(minimum, 2) as min_read_ops,
  round(maximum, 2) as max_read_ops,
  round(average, 2) as avg_read_ops,
  sample_count
from
  azure_compute_disk_metric_read_ops
where
  average > 10
order by
  name,
  timestamp;
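The per-interval rows above can also be rolled up per disk. The following is a sketch in standard PostgreSQL, using only columns from this table; the alias names are illustrative:

```sql
-- Summarize read-op behavior per disk across all sampled intervals.
select
  name,
  round(avg(average)::numeric, 2) as overall_avg_read_ops,
  round(max(maximum)::numeric, 2) as peak_read_ops,
  sum(sample_count) as total_samples
from
  azure_compute_disk_metric_read_ops
group by
  name
order by
  overall_avg_read_ops desc;
```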
Schema for azure_compute_disk_metric_read_ops
| Name | Type | Operators | Description |
|---|---|---|---|
| _ctx | jsonb | | Steampipe context in JSON form. |
| average | double precision | | The average of the metric values that correspond to the data point. |
| cloud_environment | text | | The Azure Cloud Environment. |
| maximum | double precision | | The maximum metric value for the data point. |
| minimum | double precision | | The minimum metric value for the data point. |
| name | text | | The name of the disk. |
| resource_group | text | | The resource group which holds this resource. |
| sample_count | double precision | | The number of metric values that contributed to the aggregate value of this data point. |
| sp_connection_name | text | =, !=, ~~, ~~*, !~~, !~~* | Steampipe connection name. |
| sp_ctx | jsonb | | Steampipe context in JSON form. |
| subscription_id | text | =, !=, ~~, ~~*, !~~, !~~* | The Azure Subscription ID in which the resource is located. |
| sum | double precision | | The sum of the metric values for the data point. |
| timestamp | timestamp with time zone | | The time stamp used for the data point. |
| unit | text | | The units in which the metric value is reported. |
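Per the Operators column above, subscription_id supports equality and pattern matching, so it can be pushed down as a filter. A minimal sketch, assuming a placeholder subscription ID:

```sql
-- Restrict the scan to a single subscription using the key-column
-- operators listed above; the subscription ID shown is a placeholder.
select
  name,
  timestamp,
  average
from
  azure_compute_disk_metric_read_ops
where
  subscription_id = '00000000-0000-0000-0000-000000000000'
order by
  name,
  timestamp;
```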
Export
This table is available as a standalone Exporter CLI. Steampipe exporters are stand-alone binaries that allow you to extract data using Steampipe plugins without a database.
You can download the tarball for your platform from the Releases page, but it is simplest to install them with the steampipe_export_installer.sh script:
/bin/sh -c "$(curl -fsSL https://steampipe.io/install/export.sh)" -- azure
You can pass the configuration to the command with the --config argument:
steampipe_export_azure --config '<your_config>' azure_compute_disk_metric_read_ops
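As a concrete sketch of the command above: the --config value is an HCL fragment of Azure plugin connection settings, and the subscription_id key and placeholder ID here are assumptions, not a verified minimal configuration:

```
# Export read-op metrics for one subscription; subscription ID is a placeholder.
steampipe_export_azure \
  --config 'subscription_id = "00000000-0000-0000-0000-000000000000"' \
  azure_compute_disk_metric_read_ops
```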