# Table: oci_core_boot_volume_metric_read_ops_hourly - Query OCI Core Boot Volume Metrics using SQL
Oracle Cloud Infrastructure (OCI) Core Boot Volume is a persistent, block-level storage volume that you can attach to a single instance. The boot volume contains the image of the operating system running on your instance. The `oci_core_boot_volume_metric_read_ops_hourly` table provides data related to the read operations performed on boot volumes, aggregated on an hourly basis.
## Table Usage Guide
The `oci_core_boot_volume_metric_read_ops_hourly` table provides insights into the read operations metrics of OCI Core Boot Volumes. As a cloud engineer or system administrator, you can use this table to monitor and analyze the read operations on boot volumes, which can be crucial for performance tuning and troubleshooting. This table can be particularly useful in identifying volumes with high read operations, which might indicate a need for capacity planning or performance optimization.
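As a quick illustration of that last point, a query along these lines lists the volumes that have had at least one high-read hour. This is a sketch, not part of the original examples; the 5000 ops/hour threshold is illustrative, not a recommendation:

```sql
-- Boot volumes with at least one hour whose peak read ops exceeded
-- an illustrative threshold of 5000
select distinct
  id
from
  oci_core_boot_volume_metric_read_ops_hourly
where
  maximum > 5000;
```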
## Examples
### Basic info
Analyze read-operation metrics to understand the performance of boot volumes in Oracle Cloud Infrastructure over time. This query can be used to monitor read operations, allowing you to pinpoint any unusual activity or potential bottlenecks.
```sql
select
  id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  oci_core_boot_volume_metric_read_ops_hourly
order by
  id,
  timestamp;
```
### Intervals where volumes exceed 1000 average read ops
Identify instances where the average read operations on boot volumes surpass 1000 within an hour. This can help pinpoint potential areas of high workload and facilitate proactive system management.
```sql
select
  id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  oci_core_boot_volume_metric_read_ops_hourly
where
  average > 1000
order by
  id,
  timestamp;
```
### Intervals where volumes exceed 8000 max read ops
Assess the instances where the maximum read operations on boot volumes exceed a threshold of 8000. This can be beneficial in identifying periods of high traffic or potential performance issues within your server infrastructure.
```sql
select
  id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  oci_core_boot_volume_metric_read_ops_hourly
where
  maximum > 8000
order by
  id,
  timestamp;
```
### Read, Write, and Total IOPS
Determine the areas in which input/output operations per second (IOPS) are occurring, providing a comprehensive view of both read and write operations. This can help optimize system performance by identifying potential bottlenecks or areas for improvement.
```sql
select
  r.id,
  r.timestamp,
  round(r.average) + round(w.average) as iops_avg,
  round(r.average) as read_ops_avg,
  round(w.average) as write_ops_avg,
  round(r.maximum) + round(w.maximum) as iops_max,
  round(r.maximum) as read_ops_max,
  round(w.maximum) as write_ops_max,
  round(r.minimum) + round(w.minimum) as iops_min,
  round(r.minimum) as read_ops_min,
  round(w.minimum) as write_ops_min
from
  oci_core_boot_volume_metric_read_ops_hourly as r,
  oci_core_boot_volume_metric_write_ops_hourly as w
where
  r.id = w.id
  and r.timestamp = w.timestamp
order by
  r.id,
  r.timestamp;
```
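Beyond the hour-by-hour views above, you could also rank boot volumes by their overall read activity across the retained data points. This is a sketch, not from the original page; it uses only columns listed in the schema for this table:

```sql
-- Rank boot volumes by average hourly read ops across all data points,
-- keeping the single highest hourly peak seen for each volume
select
  id,
  round(avg(average)) as avg_read_ops,
  round(max(maximum)) as peak_read_ops
from
  oci_core_boot_volume_metric_read_ops_hourly
group by
  id
order by
  avg_read_ops desc
limit 10;
```

A top-N ranking like this is often a more practical starting point than a fixed threshold, since "high" read activity varies by workload.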
## Schema for oci_core_boot_volume_metric_read_ops_hourly
Name | Type | Operators | Description |
---|---|---|---|
_ctx | jsonb | | Steampipe context in JSON form. |
average | double precision | | The average of the metric values that correspond to the data point. |
compartment_id | text | | The ID of the compartment. |
id | text | | The OCID of the boot volume. |
maximum | double precision | | The maximum metric value for the data point. |
metric_name | text | | The name of the metric. |
minimum | double precision | | The minimum metric value for the data point. |
namespace | text | | The metric namespace. |
region | text | | The OCI region in which the resource is located. |
sample_count | double precision | | The number of metric values that contributed to the aggregate value of this data point. |
sp_connection_name | text | =, !=, ~~, ~~*, !~~, !~~* | Steampipe connection name. |
sp_ctx | jsonb | | Steampipe context in JSON form. |
sum | double precision | | The sum of the metric values for the data point. |
tenant_id | text | =, !=, ~~, ~~*, !~~, !~~* | The OCID of the Tenant in which the resource is located. |
tenant_name | text | | The name of the Tenant in which the resource is located. |
timestamp | timestamp with time zone | | The time stamp used for the data point. |
unit | text | | The standard unit for the data point. |
## Export
This table is available as a standalone Exporter CLI. Steampipe exporters are stand-alone binaries that allow you to extract data using Steampipe plugins without a database.
You can download the tarball for your platform from the Releases page, but it is simplest to install them with the `steampipe_export_installer.sh` script:
```sh
/bin/sh -c "$(curl -fsSL https://steampipe.io/install/export.sh)" -- oci
```
You can pass the configuration to the command with the `--config` argument:
```sh
steampipe_export_oci --config '<your_config>' oci_core_boot_volume_metric_read_ops_hourly
```