# Table: azure_stream_analytics_job - Query Azure Stream Analytics Jobs using SQL
Azure Stream Analytics is a real-time analytics service that allows you to analyze and visualize streaming data from various sources such as devices, sensors, websites, social media feeds, and applications. It enables you to set up real-time analytic computations on streaming data which can be used for anomaly detection, live dashboarding, and alerts among other scenarios. This service is designed to process and analyze high volumes of fast streaming data from multiple streams simultaneously.
## Table Usage Guide

The `azure_stream_analytics_job` table provides insights into Stream Analytics Jobs within Azure. As a data analyst or data scientist, you can explore job-specific details through this table, including job configurations, input and output details, and transformation queries. Utilize it to monitor the status and health of your Stream Analytics Jobs, understand their configurations, and ensure they are processing data as expected.
## Examples

### Basic info

Explore the Stream Analytics jobs in your Azure environment and their current states. This helps you understand how jobs are distributed across regions and subscriptions, manage resource allocation, and monitor job health.

```sql
select
  name,
  id,
  job_id,
  job_state,
  region,
  subscription_id
from
  azure_stream_analytics_job;
```
### List failed stream analytics jobs

Identify Stream Analytics jobs whose provisioning has failed, and the regions they are in, so you can focus troubleshooting efforts where they are needed. This query is particularly useful for keeping your Azure Stream Analytics deployment healthy.

```sql
select
  name,
  id,
  type,
  provisioning_state,
  region
from
  azure_stream_analytics_job
where
  provisioning_state = 'Failed';
```
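### Count inputs and outputs per job

The `inputs` and `outputs` columns are JSONB lists, so standard PostgreSQL JSON functions apply. The following sketch counts the inputs and outputs configured for each job; it assumes each column holds a top-level JSON array, as the schema descriptions suggest.

```sql
select
  name,
  job_state,
  jsonb_array_length(inputs) as input_count,
  jsonb_array_length(outputs) as output_count
from
  azure_stream_analytics_job;
```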
## Schema for azure_stream_analytics_job

| Name | Type | Operators | Description |
|---|---|---|---|
| _ctx | jsonb | | Steampipe context in JSON form. |
| akas | jsonb | | Array of globally unique identifier strings (also known as) for the resource. |
| cloud_environment | text | | The Azure Cloud Environment. |
| compatibility_level | text | | Controls certain runtime behaviors of the streaming job. |
| created_date | timestamp with time zone | | Specifies the time when the stream analytics job was created. |
| data_locale | text | | The data locale of the stream analytics job. |
| diagnostic_settings | jsonb | | A list of active diagnostic settings for the streaming job. |
| etag | text | | A unique read-only string that changes whenever the resource is updated. |
| events_late_arrival_max_delay_in_seconds | bigint | | The maximum tolerable delay in seconds where events arriving late could be included. |
| events_out_of_order_max_delay_in_seconds | bigint | | The maximum tolerable delay in seconds where out-of-order events can be adjusted to be back in order. |
| events_out_of_order_policy | text | | Indicates the policy to apply to events that arrive out of order in the input event stream. |
| functions | jsonb | | A list of one or more functions for the streaming job. |
| id | text | | The resource identifier. |
| inputs | jsonb | | A list of one or more inputs to the streaming job. |
| job_id | text | | A GUID uniquely identifying the streaming job. |
| job_state | text | | Describes the state of the streaming job. |
| last_output_event_time | timestamp with time zone | | The last output event time of the streaming job, or null if output has not yet been produced. |
| name | text | = | The resource name. |
| output_error_policy | text | | Indicates the policy to apply to events that arrive at the output and cannot be written to the external storage due to being malformed (missing column values, or column values of the wrong type or size). |
| output_start_mode | text | | This property should only be utilized when it is desired that the job be started immediately upon creation. |
| output_start_time | timestamp with time zone | | Indicates the starting point of the output event stream, or null to indicate that the output event stream will start whenever the streaming job is started. |
| outputs | jsonb | | A list of one or more outputs for the streaming job. |
| provisioning_state | text | | Describes the provisioning status of the streaming job. |
| region | text | | The Azure region/location in which the resource is located. |
| resource_group | text | = | The resource group which holds this resource. |
| sku_name | text | | Describes the SKU name of the streaming job. |
| sp_connection_name | text | =, !=, ~~, ~~*, !~~, !~~* | Steampipe connection name. |
| sp_ctx | jsonb | | Steampipe context in JSON form. |
| subscription_id | text | =, !=, ~~, ~~*, !~~, !~~* | The Azure Subscription ID in which the resource is located. |
| tags | jsonb | | A map of tags for the resource. |
| title | text | | Title of the resource. |
| transformation | jsonb | | Indicates the query and the number of streaming units to use for the streaming job. |
| type | text | | The resource type. |
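The Operators column lists the qualifiers that can be pushed down to the plugin for each column. For example, `subscription_id` supports case-insensitive pattern matching (`~~*`, i.e. `ilike`), so you can scope a query to a subscription ID prefix. A sketch, using a hypothetical subscription ID pattern:

```sql
select
  name,
  job_state,
  subscription_id
from
  azure_stream_analytics_job
where
  -- '~~*' is PostgreSQL's operator form of ILIKE;
  -- replace the pattern with your own subscription ID prefix
  subscription_id ~~* 'abc123%';
```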
## Export

This table is available as a standalone Exporter CLI. Steampipe exporters are stand-alone binaries that allow you to extract data using Steampipe plugins without a database.

You can download the tarball for your platform from the Releases page, but it is simplest to install them with the `steampipe_export_installer.sh` script:

```sh
/bin/sh -c "$(curl -fsSL https://steampipe.io/install/export.sh)" -- azure
```

You can pass the configuration to the command with the `--config` argument:

```sh
steampipe_export_azure --config '<your_config>' azure_stream_analytics_job
```