Table: azure_compute_disk_metric_read_ops_hourly - Query Azure Compute Disk Metrics using SQL
Azure Compute Disks are data storage units available in Microsoft Azure. They are used to store data for Azure Virtual Machines and other services. Azure Compute Disks provide high-performance, durable storage for I/O-intensive workloads.
Table Usage Guide
The azure_compute_disk_metric_read_ops_hourly table provides insights into the read operations of Azure Compute Disks on an hourly basis. As a system administrator or a DevOps engineer, explore disk-specific details through this table, including the number of read operations, the time of each data point, and associated metadata. Utilize it to monitor disk performance, identify usage patterns, and detect potential performance issues.
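For example, to see which disks generate the most read activity, you can aggregate the hourly data points. The following is a minimal sketch (assuming a Postgres-backed Steampipe database) that ranks disks by their average read operations over the most recent day of collected data:

```sql
-- Minimal sketch: top 5 disks by average hourly read operations
-- over the last day of collected data points.
select
  name,
  round(avg(average)::numeric, 2) as avg_read_ops
from
  azure_compute_disk_metric_read_ops_hourly
where
  timestamp >= now() - interval '1 day'
group by
  name
order by
  avg_read_ops desc
limit 5;
```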
Examples
Basic info
Review the hourly read operation metrics for your Azure compute disks. This can help in identifying patterns, understanding usage trends, and planning for capacity or performance optimization.
```sql
select
  name,
  timestamp,
  minimum,
  maximum,
  average,
  sample_count
from
  azure_compute_disk_metric_read_ops_hourly
order by
  name,
  timestamp;
```
Operations over 10 average read ops
This query monitors disk read operations on Azure, focusing on hourly data points where the average number of read operations exceeds 10. This is useful for identifying potential performance issues or bottlenecks, allowing for proactive management and optimization of resources.
```sql
-- Postgres
select
  name,
  timestamp,
  round(minimum::numeric, 2) as min_read_ops,
  round(maximum::numeric, 2) as max_read_ops,
  round(average::numeric, 2) as avg_read_ops,
  sample_count
from
  azure_compute_disk_metric_read_ops_hourly
where
  average > 10
order by
  name,
  timestamp;
```

```sql
-- SQLite
select
  name,
  timestamp,
  round(minimum, 2) as min_read_ops,
  round(maximum, 2) as max_read_ops,
  round(average, 2) as avg_read_ops,
  sample_count
from
  azure_compute_disk_metric_read_ops_hourly
where
  average > 10
order by
  name,
  timestamp;
```
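As a further sketch (again assuming Postgres), you could pick out the single busiest recorded hour for each disk, which helps pinpoint when read load peaks:

```sql
-- Minimal sketch: the busiest recorded hour per disk,
-- ranked by that hour's maximum read operations.
select distinct on (name)
  name,
  timestamp,
  round(maximum::numeric, 2) as max_read_ops
from
  azure_compute_disk_metric_read_ops_hourly
order by
  name,
  maximum desc;
```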
Schema for azure_compute_disk_metric_read_ops_hourly
| Name | Type | Operators | Description |
|---|---|---|---|
| _ctx | jsonb | | Steampipe context in JSON form. |
| average | double precision | | The average of the metric values that correspond to the data point. |
| cloud_environment | text | | The Azure Cloud Environment. |
| maximum | double precision | | The maximum metric value for the data point. |
| minimum | double precision | | The minimum metric value for the data point. |
| name | text | | The name of the disk. |
| resource_group | text | | The resource group which holds this resource. |
| sample_count | double precision | | The number of metric values that contributed to the aggregate value of this data point. |
| sp_connection_name | text | =, !=, ~~, ~~*, !~~, !~~* | Steampipe connection name. |
| sp_ctx | jsonb | | Steampipe context in JSON form. |
| subscription_id | text | =, !=, ~~, ~~*, !~~, !~~* | The Azure Subscription ID in which the resource is located. |
| sum | double precision | | The sum of the metric values for the data point. |
| timestamp | timestamp with time zone | | The time stamp used for the data point. |
| unit | text | | The units in which the metric value is reported. |
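The key columns that list operators above (sp_connection_name and subscription_id) can be filtered directly in a where clause to narrow what the plugin fetches. A minimal sketch, using a placeholder subscription ID:

```sql
-- Minimal sketch: restrict results to a single subscription.
-- The subscription ID below is a placeholder, not a real value.
select
  name,
  timestamp,
  average
from
  azure_compute_disk_metric_read_ops_hourly
where
  subscription_id = '00000000-0000-0000-0000-000000000000'
order by
  name,
  timestamp;
```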
Export
This table is available as a standalone Exporter CLI. Steampipe exporters are stand-alone binaries that allow you to extract data using Steampipe plugins without a database.
You can download the tarball for your platform from the Releases page, but it is simplest to install them with the steampipe_export_installer.sh script:

```sh
/bin/sh -c "$(curl -fsSL https://steampipe.io/install/export.sh)" -- azure
```
You can pass the configuration to the command with the --config argument:

```sh
steampipe_export_azure --config '<your_config>' azure_compute_disk_metric_read_ops_hourly
```