# Table: aws_ecs_cluster_metric_cpu_utilization - Query AWS ECS Cluster Metrics using SQL
Amazon CloudWatch collects CPU utilization metrics for your Amazon Elastic Container Service (ECS) clusters. These metrics provide insight into how efficiently your clusters are running and can be used to optimize resource usage. You can query them using SQL, allowing for easy integration and analysis of the data.
## Table Usage Guide
The `aws_ecs_cluster_metric_cpu_utilization` table in Steampipe provides you with information about CPU utilization metrics of AWS Elastic Container Service (ECS) clusters. This table allows you, as a DevOps engineer, system administrator, or other technical professional, to query CPU utilization-specific details, including the average, maximum, and minimum CPU utilization, along with the corresponding timestamps. You can use this table to monitor CPU usage trends, identify potential performance issues, and optimize resource allocation. The schema outlines various attributes of the CPU utilization metric, including the cluster name, period, timestamp, and average, minimum, and maximum CPU utilization.
The `aws_ecs_cluster_metric_cpu_utilization` table provides you with metric statistics at 5-minute intervals for the most recent 5 days.
## Examples
### Basic info
Analyze the CPU utilization metrics of AWS ECS clusters over time to understand resource usage trends and optimize cluster performance. This information could be useful in identifying patterns, planning capacity, and managing costs effectively.
```sql
select
  cluster_name,
  timestamp,
  minimum,
  maximum,
  average,
  sample_count
from
  aws_ecs_cluster_metric_cpu_utilization
order by
  cluster_name,
  timestamp;
```
### CPU Over 80% average
Identify instances where the average CPU utilization of your AWS ECS clusters exceeds 80%. This can help in managing resources effectively, ensuring optimal performance and avoiding potential bottlenecks.
In PostgreSQL:

```sql
select
  cluster_name,
  timestamp,
  round(minimum::numeric, 2) as min_cpu,
  round(maximum::numeric, 2) as max_cpu,
  round(average::numeric, 2) as avg_cpu,
  sample_count
from
  aws_ecs_cluster_metric_cpu_utilization
where
  average > 80
order by
  cluster_name,
  timestamp;
```

In SQLite (which does not use the `numeric` cast):

```sql
select
  cluster_name,
  timestamp,
  round(minimum, 2) as min_cpu,
  round(maximum, 2) as max_cpu,
  round(average, 2) as avg_cpu,
  sample_count
from
  aws_ecs_cluster_metric_cpu_utilization
where
  average > 80
order by
  cluster_name,
  timestamp;
```
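Beyond the official examples, queries can also aggregate across the retained data points. The following sketch (an illustrative addition, using only columns from the schema below) summarizes each cluster's peak and mean CPU over the roughly 5 days of 5-minute data points the table exposes; note that `avg(average)` averages the per-interval averages:

```sql
-- Illustrative summary: per-cluster peak and mean CPU utilization
-- across all retained 5-minute data points.
select
  cluster_name,
  max(maximum) as peak_cpu,
  round(avg(average)::numeric, 2) as mean_cpu,
  count(*) as data_points
from
  aws_ecs_cluster_metric_cpu_utilization
group by
  cluster_name
order by
  peak_cpu desc;
```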
## Schema for aws_ecs_cluster_metric_cpu_utilization
| Name | Type | Operators | Description |
| --- | --- | --- | --- |
| _ctx | jsonb | | Steampipe context in JSON form. |
| account_id | text | =, !=, ~~, ~~*, !~~, !~~* | The AWS Account ID in which the resource is located. |
| average | double precision | | The average of the metric values that correspond to the data point. |
| cluster_name | text | | A user-generated string that you use to identify your cluster. |
| maximum | double precision | | The maximum metric value for the data point. |
| metric_name | text | | The name of the metric. |
| minimum | double precision | | The minimum metric value for the data point. |
| namespace | text | | The metric namespace. |
| partition | text | | The AWS partition in which the resource is located (aws, aws-cn, or aws-us-gov). |
| region | text | | The AWS Region in which the resource is located. |
| sample_count | double precision | | The number of metric values that contributed to the aggregate value of this data point. |
| sp_connection_name | text | =, !=, ~~, ~~*, !~~, !~~* | Steampipe connection name. |
| sp_ctx | jsonb | | Steampipe context in JSON form. |
| sum | double precision | | The sum of the metric values for the data point. |
| timestamp | timestamp with time zone | | The time stamp used for the data point. |
| unit | text | | The standard unit for the data point. |
## Export
This table is available as a standalone Exporter CLI. Steampipe exporters are stand-alone binaries that allow you to extract data using Steampipe plugins without a database.
You can download the tarball for your platform from the Releases page, but it is simplest to install them with the `steampipe_export_installer.sh` script:
```sh
/bin/sh -c "$(curl -fsSL https://steampipe.io/install/export.sh)" -- aws
```
You can pass the configuration to the command with the `--config` argument:
```sh
steampipe_export_aws --config '<your_config>' aws_ecs_cluster_metric_cpu_utilization
```
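The configuration string takes the same options as the Steampipe AWS plugin's connection config. As a sketch, an invocation might look like the following, where `my-profile` and the region list are placeholder values you would replace with your own:

```sh
# Illustrative example: "my-profile" and the regions are placeholders.
steampipe_export_aws \
  --config 'profile = "my-profile"
regions   = ["us-east-1", "us-west-2"]' \
  aws_ecs_cluster_metric_cpu_utilization
```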