Table: gcp_bigquery_job - Query GCP BigQuery Jobs using SQL
BigQuery Jobs in Google Cloud Platform (GCP) represent asynchronous actions such as SQL queries, load, export, and copy operations. Jobs are primarily used to run queries and to import or export data, and they can be monitored to track the status of these operations.
Table Usage Guide
The gcp_bigquery_job table provides insights into BigQuery Jobs within Google Cloud Platform (GCP). As a data analyst or data engineer, explore job-specific details through this table, including job configuration, statistics, and status. Utilize it to monitor the progress of data operations, understand the configuration of specific jobs, and analyze the overall performance of BigQuery operations.
Examples
Basic info
Explore the creation times and locations of jobs within Google Cloud's BigQuery service. This can help understand when and where specific tasks were initiated, providing valuable insight into resource usage and operational trends.
select
  job_id,
  self_link,
  creation_time,
  location
from
  gcp_bigquery_job;
List running jobs
Explore which jobs are currently active in your Google Cloud BigQuery environment. This can help you manage resources effectively and monitor ongoing processes.
select
  job_id,
  self_link,
  creation_time,
  location
from
  gcp_bigquery_job
where
  state = 'RUNNING';
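As a further illustration (not one of the original examples), the state and error_result columns can be combined to surface completed jobs that failed. This is a sketch assuming error_result follows the BigQuery ErrorProto shape with a message field; adjust the JSON key to your data if it differs.

select
  job_id,
  creation_time,
  error_result ->> 'message' as error_message
from
  gcp_bigquery_job
where
  state = 'DONE'
  and error_result is not null;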
Schema for gcp_bigquery_job
Name | Type | Operators | Description |
---|---|---|---|
_ctx | jsonb | | Steampipe context in JSON form. |
akas | jsonb | | Array of globally unique identifier strings (also known as) for the resource. |
completion_ratio | text | | Job progress (0.0 -> 1.0) for LOAD and EXTRACT jobs. |
configuration | jsonb | | Describes the job configuration. |
creation_time | timestamp with time zone | | Creation time of this job. |
end_time | timestamp with time zone | | End time of this job. |
error_result | jsonb | | A result object that will be present only if the job has failed. |
errors | jsonb | | The first errors encountered during the running of the job. The final message includes the number of errors that caused the process to stop. Errors here do not necessarily mean that the job has completed or was unsuccessful. |
etag | text | | A hash of the resource. |
extract | jsonb | | Statistics for an extract job. |
id | text | | Unique opaque ID of the job. |
job_id | text | = | The ID of the job. |
kind | text | | The type of the resource. |
labels | jsonb | | The labels associated with this job. |
load | jsonb | | Statistics for a load job. |
location | text | | The GCP multi-region, region, or zone in which the resource is located. |
num_child_jobs | bigint | | Number of child jobs executed. |
parent_job_id | text | | If this is a child job, the ID of the parent. |
project | text | =, !=, ~~, ~~*, !~~, !~~* | The GCP Project in which the resource is located. |
query | jsonb | | Statistics for a query job. |
quota_deferments | jsonb | | Quotas which delayed this job's start time. |
reservation_id | text | | Name of the primary reservation assigned to this job. |
reservation_usage | jsonb | | Job resource usage breakdown by reservation. |
row_level_security_statistics | jsonb | | Statistics for row-level security. Present only for query and extract jobs. |
script_statistics | jsonb | | Statistics for a child job of a script. |
self_link | text | | A URL that can be used to access the resource again. |
session_info_template | jsonb | | Information of the session if this job is part of one. |
sp_connection_name | text | =, !=, ~~, ~~*, !~~, !~~* | Steampipe connection name. |
sp_ctx | jsonb | | Steampipe context in JSON form. |
start_time | timestamp with time zone | | Start time of this job. |
state | text | | Running state of the job. When the state is DONE, errorResult can be checked to determine whether the job succeeded or failed. |
tags | jsonb | | A map of tags for the resource. |
title | text | | Title of the resource. |
total_bytes_processed | bigint | | Use the bytes processed in the query statistics instead. |
total_slot_ms | bigint | | Slot-milliseconds for the job. |
transaction_info | jsonb | | Information of the multi-statement transaction if this job is part of one. |
user_email | text | | Email address of the user who ran the job. |
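To show how the statistics columns in the schema can be combined, here is a sketch that aggregates slot usage by user. It uses only columns listed above (user_email, total_slot_ms); the result depends on the job history in your project.

select
  user_email,
  count(*) as job_count,
  sum(total_slot_ms) as slot_ms_consumed
from
  gcp_bigquery_job
group by
  user_email
order by
  slot_ms_consumed desc;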
Export
This table is available as a standalone Exporter CLI. Steampipe exporters are stand-alone binaries that allow you to extract data using Steampipe plugins without a database.
You can download the tarball for your platform from the Releases page, but it is simplest to install it with the steampipe_export_installer.sh script:
/bin/sh -c "$(curl -fsSL https://steampipe.io/install/export.sh)" -- gcp
You can pass the configuration to the command with the --config argument:
steampipe_export_gcp --config '<your_config>' gcp_bigquery_job