Table: azure_compute_disk_metric_write_ops_hourly - Query Azure Compute Disk Metrics using SQL
Azure Compute Disk Metrics is a service within Microsoft Azure that allows users to monitor and track the performance of their Azure disks. It provides detailed data about the number of read and write operations, the amount of data transferred, and the latency of these operations. This service is crucial for understanding disk usage patterns, identifying potential bottlenecks, and optimizing performance.
Table Usage Guide
The azure_compute_disk_metric_write_ops_hourly table provides insights into the hourly write operations of Azure Compute Disks. As a system administrator or DevOps engineer, explore disk-specific details through this table, including the number of write operations and the time of these operations. Utilize it to understand disk usage patterns, identify potential performance bottlenecks, and optimize your Azure disk configurations.
Examples
Basic info
Explore the performance of Azure compute disks over time by tracking the minimum, maximum, and average write operations per hour. This can help in identifying usage patterns, planning capacity, and troubleshooting performance issues.
```sql
select
  name,
  timestamp,
  minimum,
  maximum,
  average,
  sample_count
from
  azure_compute_disk_metric_write_ops_hourly
order by
  name,
  timestamp;
```
Operations Over 10 Write Ops Average
This query is used to track the performance of Azure compute disks, specifically focusing on those with an average of more than 10 write operations per hour. By doing so, it helps in identifying potential bottlenecks and ensuring optimal disk performance.
PostgreSQL:

```sql
select
  name,
  timestamp,
  round(minimum::numeric, 2) as min_write_ops,
  round(maximum::numeric, 2) as max_write_ops,
  round(average::numeric, 2) as avg_write_ops,
  sample_count
from
  azure_compute_disk_metric_write_ops_hourly
where
  average > 10
order by
  name,
  timestamp;
```

SQLite:

```sql
select
  name,
  timestamp,
  round(minimum, 2) as min_write_ops,
  round(maximum, 2) as max_write_ops,
  round(average, 2) as avg_write_ops,
  sample_count
from
  azure_compute_disk_metric_write_ops_hourly
where
  average > 10
order by
  name,
  timestamp;
```
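The hourly grain also lends itself to ranking rather than fixed thresholds. As a sketch (PostgreSQL-specific, and the seven-day lookback is an illustrative assumption), the following query keeps only the single busiest hour for each disk:

```sql
-- For each disk, keep only the hour with the highest average write ops.
-- The seven-day lookback window is an illustrative assumption.
select distinct on (name)
  name,
  timestamp as peak_hour,
  round(average::numeric, 2) as peak_avg_write_ops
from
  azure_compute_disk_metric_write_ops_hourly
where
  timestamp > now() - interval '7 days'
order by
  name,
  average desc;
```

Here distinct on (name) retains the first row per disk after sorting, so ordering by average descending surfaces each disk's peak hour.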
Schema for azure_compute_disk_metric_write_ops_hourly
Name | Type | Operators | Description |
---|---|---|---|
_ctx | jsonb | | Steampipe context in JSON form. |
average | double precision | | The average of the metric values that correspond to the data point. |
cloud_environment | text | | The Azure Cloud Environment. |
maximum | double precision | | The maximum metric value for the data point. |
minimum | double precision | | The minimum metric value for the data point. |
name | text | | The name of the disk. |
resource_group | text | | The resource group which holds this resource. |
sample_count | double precision | | The number of metric values that contributed to the aggregate value of this data point. |
sp_connection_name | text | =, !=, ~~, ~~*, !~~, !~~* | Steampipe connection name. |
sp_ctx | jsonb | | Steampipe context in JSON form. |
subscription_id | text | =, !=, ~~, ~~*, !~~, !~~* | The Azure Subscription ID in which the resource is located. |
sum | double precision | | The sum of the metric values for the data point. |
timestamp | timestamp with time zone | | The time stamp used for the data point. |
unit | text | | The units in which the metric value is reported. |
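Per the operators above, subscription_id (and sp_connection_name) support equality and pattern matching, so results can be scoped to a single subscription before any analysis. A minimal sketch, where the subscription ID is a placeholder:

```sql
-- Scope hourly write-op metrics to one subscription (placeholder ID).
select
  name,
  timestamp,
  average,
  unit
from
  azure_compute_disk_metric_write_ops_hourly
where
  subscription_id = '00000000-0000-0000-0000-000000000000'
order by
  name,
  timestamp;
```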
Export
This table is available as a standalone Exporter CLI. Steampipe exporters are stand-alone binaries that allow you to extract data using Steampipe plugins without a database.
You can download the tarball for your platform from the Releases page, but it is simplest to install them with the steampipe_export_installer.sh script:
```sh
/bin/sh -c "$(curl -fsSL https://steampipe.io/install/export.sh)" -- azure
```
You can pass the configuration to the command with the --config argument:
```sh
steampipe_export_azure --config '<your_config>' azure_compute_disk_metric_write_ops_hourly
```
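The config string takes the same arguments as an Azure plugin connection. A minimal sketch assuming client-secret authentication; every value is a placeholder, and other authentication methods supported by the plugin (such as Azure CLI credentials) also work:

```sh
# All values are placeholders; swap in your own tenant, subscription,
# and app registration details.
steampipe_export_azure --config '
  tenant_id       = "00000000-0000-0000-0000-000000000000"
  subscription_id = "00000000-0000-0000-0000-000000000000"
  client_id       = "00000000-0000-0000-0000-000000000000"
  client_secret   = "my-client-secret"
' azure_compute_disk_metric_write_ops_hourly
```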