Table: azure_compute_disk_metric_write_ops - Query Azure Compute Disk Metrics using SQL
Azure Compute Disk Metrics is a feature within Microsoft Azure that allows monitoring and analysis of disk performance and usage. It offers detailed information on various metrics, such as write operations, enabling users to understand disk behavior and identify potential performance issues. This feature is crucial for maintaining optimal disk performance and managing storage resources efficiently.
Table Usage Guide
The `azure_compute_disk_metric_write_ops` table provides insights into write operations on Azure Compute Disks. As a system administrator or DevOps engineer, you can explore disk-specific details through this table, including the number of write operations, to understand disk usage patterns and potential performance bottlenecks. Use it to monitor and optimize disk performance, and to ensure efficient resource management in your Azure environment.
Examples
Basic info
Explore which Azure compute disk has the most write operations over time. This can help optimize disk usage by identifying high-usage periods and potentially underutilized resources.
```sql
select
  name,
  timestamp,
  minimum,
  maximum,
  average,
  sample_count
from
  azure_compute_disk_metric_write_ops
order by
  name,
  timestamp;
```
Operations averaging over 10
Identify the intervals in which the average write operations on an Azure compute disk exceed 10. This can be useful for spotting potential bottlenecks or high-usage periods, enabling proactive management and optimization of disk resources.
```sql
select
  name,
  timestamp,
  round(minimum::numeric, 2) as min_write_ops,
  round(maximum::numeric, 2) as max_write_ops,
  round(average::numeric, 2) as avg_write_ops,
  sample_count
from
  azure_compute_disk_metric_write_ops
where
  average > 10
order by
  name,
  timestamp;
```

The same query without the `::numeric` casts, for engines such as SQLite whose `round()` accepts a plain floating-point value:

```sql
select
  name,
  timestamp,
  round(minimum, 2) as min_write_ops,
  round(maximum, 2) as max_write_ops,
  round(average, 2) as avg_write_ops,
  sample_count
from
  azure_compute_disk_metric_write_ops
where
  average > 10
order by
  name,
  timestamp;
```
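Building on the queries above, you can also aggregate across all sampled data points to rank disks by overall write activity. This is an illustrative sketch (not part of the original examples) using the same PostgreSQL-style casts as the query above and only columns from the schema below:

```sql
-- Rank disks by their peak and mean write activity
-- across all available data points.
select
  name,
  round(max(maximum)::numeric, 2) as peak_write_ops,
  round(avg(average)::numeric, 2) as mean_write_ops
from
  azure_compute_disk_metric_write_ops
group by
  name
order by
  mean_write_ops desc;
```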
Schema for azure_compute_disk_metric_write_ops
| Name | Type | Operators | Description |
|---|---|---|---|
| _ctx | jsonb | | Steampipe context in JSON form. |
| average | double precision | | The average of the metric values that correspond to the data point. |
| cloud_environment | text | | The Azure Cloud Environment. |
| maximum | double precision | | The maximum metric value for the data point. |
| minimum | double precision | | The minimum metric value for the data point. |
| name | text | | The name of the disk. |
| resource_group | text | | The resource group which holds this resource. |
| sample_count | double precision | | The number of metric values that contributed to the aggregate value of this data point. |
| sp_connection_name | text | =, !=, ~~, ~~*, !~~, !~~* | Steampipe connection name. |
| sp_ctx | jsonb | | Steampipe context in JSON form. |
| subscription_id | text | =, !=, ~~, ~~*, !~~, !~~* | The Azure Subscription ID in which the resource is located. |
| sum | double precision | | The sum of the metric values for the data point. |
| timestamp | timestamp with time zone | | The time stamp used for the data point. |
| unit | text | | The units in which the metric value is reported. |
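Columns that list operators (such as `subscription_id` with `=` and `!=`) can be used as query qualifiers in the `where` clause. A hypothetical example scoping results to a single subscription (the subscription ID shown is a placeholder):

```sql
select
  name,
  timestamp,
  average,
  unit
from
  azure_compute_disk_metric_write_ops
where
  subscription_id = '00000000-0000-0000-0000-000000000000'
order by
  name,
  timestamp;
```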
Export
This table is available as a standalone Exporter CLI. Steampipe exporters are stand-alone binaries that allow you to extract data using Steampipe plugins without a database.
You can download the tarball for your platform from the Releases page, but it is simplest to install them with the `steampipe_export_installer.sh` script:

```sh
/bin/sh -c "$(curl -fsSL https://steampipe.io/install/export.sh)" -- azure
```
You can pass the configuration to the command with the `--config` argument:

```sh
steampipe_export_azure --config '<your_config>' azure_compute_disk_metric_write_ops
```