steampipe plugin install gcp

Table: gcp_bigquery_job - Query GCP BigQuery Jobs using SQL

BigQuery Jobs in Google Cloud Platform (GCP) represent asynchronous actions such as SQL queries and load, export, or copy operations. Each job tracks the progress and status of one of these operations; jobs are used primarily for query and load work, and for monitoring the state of data import/export tasks.

Table Usage Guide

The gcp_bigquery_job table provides insights into BigQuery Jobs within Google Cloud Platform (GCP). As a data analyst or data engineer, explore job-specific details through this table, including job configuration, statistics, and status. Utilize it to monitor the progress of data operations, understand the configuration of specific jobs, and analyze the overall performance of BigQuery operations.

Examples

Basic info

Explore the creation times and locations of jobs within Google Cloud's BigQuery service. This can help you understand when and where specific tasks were initiated, providing valuable insight into resource usage and operational trends.

select
  job_id,
  self_link,
  creation_time,
  location
from
  gcp_bigquery_job;

List running jobs

Explore which jobs are currently active in your Google Cloud BigQuery environment. This can help you manage resources effectively and monitor ongoing processes.

select
  job_id,
  self_link,
  creation_time,
  location
from
  gcp_bigquery_job
where
  state = 'RUNNING';
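
List failed jobs

Identify completed jobs that ended in an error. This is a minimal sketch based on the state and error_result columns described in the schema below; it assumes error_result exposes a message field, as in BigQuery's ErrorProto structure.

select
  job_id,
  state,
  error_result ->> 'message' as error_message,
  user_email
from
  gcp_bigquery_job
where
  state = 'DONE'
  and error_result is not null;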

Schema for gcp_bigquery_job

| Name | Type | Operators | Description |
| --- | --- | --- | --- |
| _ctx | jsonb | | Steampipe context in JSON form. |
| akas | jsonb | | Array of globally unique identifier strings (also known as) for the resource. |
| completion_ratio | text | | Job progress (0.0 -> 1.0) for LOAD and EXTRACT jobs. |
| configuration | jsonb | | Describes the job configuration. |
| creation_time | timestamp with time zone | | Creation time of this job. |
| end_time | timestamp with time zone | | End time of this job. |
| error_result | jsonb | | A result object that will be present only if the job has failed. |
| errors | jsonb | | The first errors encountered during the running of the job. The final message includes the number of errors that caused the process to stop. Errors here do not necessarily mean that the job has completed or was unsuccessful. |
| etag | text | | A hash of the resource. |
| extract | jsonb | | Statistics for an extract job. |
| id | text | | Unique opaque ID of the job. |
| job_id | text | = | The ID of the job. |
| kind | text | | The type of the resource. |
| labels | jsonb | | The labels associated with this job. |
| load | jsonb | | Statistics for a load job. |
| location | text | | The GCP multi-region, region, or zone in which the resource is located. |
| num_child_jobs | bigint | | Number of child jobs executed. |
| parent_job_id | text | | If this is a child job, the id of the parent. |
| project | text | =, !=, ~~, ~~*, !~~, !~~* | The GCP Project in which the resource is located. |
| query | jsonb | | Statistics for a query job. |
| quota_deferments | jsonb | | Quotas which delayed this job's start time. |
| reservation_id | text | | Name of the primary reservation assigned to this job. |
| reservation_usage | jsonb | | Job resource usage breakdown by reservation. |
| row_level_security_statistics | jsonb | | Statistics for row-level security. Present only for query and extract jobs. |
| script_statistics | jsonb | | Statistics for a child job of a script. |
| self_link | text | | A URL that can be used to access the resource again. |
| session_info_template | jsonb | | Information of the session if this job is part of one. |
| sp_connection_name | text | =, !=, ~~, ~~*, !~~, !~~* | Steampipe connection name. |
| sp_ctx | jsonb | | Steampipe context in JSON form. |
| start_time | timestamp with time zone | | Start time of this job. |
| state | text | | Running state of the job. When the state is DONE, errorResult can be checked to determine whether the job succeeded or failed. |
| tags | jsonb | | A map of tags for the resource. |
| title | text | | Title of the resource. |
| total_bytes_processed | bigint | | Use the bytes processed in the query statistics instead. |
| total_slot_ms | bigint | | Slot-milliseconds for the job. |
| transaction_info | jsonb | | Information of the multi-statement transaction if this job is part of one. |
| user_email | text | | Email address of the user who ran the job. |

Export

This table is available as a standalone Exporter CLI. Steampipe exporters are standalone binaries that allow you to extract data using Steampipe plugins without a database.

You can download the tarball for your platform from the Releases page, but it is simplest to install it with the steampipe_export_installer.sh script:

/bin/sh -c "$(curl -fsSL https://steampipe.io/install/export.sh)" -- gcp

You can pass the configuration to the command with the --config argument:

steampipe_export_gcp --config '<your_config>' gcp_bigquery_job
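
For example, a hedged sketch assuming a connection scoped to a single project (the project value here is hypothetical; adjust it to match your own connection settings):

steampipe_export_gcp --config 'project = "my-gcp-project"' gcp_bigquery_job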