Table: gcp_storage_bucket - Query Google Cloud Storage Buckets using SQL
Google Cloud Storage is a service within Google Cloud Platform that provides scalable, durable, and highly available object storage. It offers multiple storage classes, versioning, fine-grained access controls, and other features for managing data. Google Cloud Storage is designed to help organizations of all sizes securely store and retrieve any amount of data at any time.
Table Usage Guide
The gcp_storage_bucket table provides insights into Storage Buckets within Google Cloud Storage. As a Cloud Engineer, explore bucket-specific details through this table, including configurations, access controls, and associated metadata. Utilize it to uncover information about buckets, such as their storage class, location, versioning status, and access control policies.
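As a starting point, a minimal query to get oriented with the table (run from a Steampipe session with the GCP plugin configured):

```sql
select
  name,
  location,
  storage_class,
  versioning_enabled
from
  gcp_storage_bucket;
```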
Examples
List of buckets where versioning is not enabled
Identify buckets that do not have versioning enabled. This is particularly useful for spotting potential data-loss risks, since objects in unversioned buckets can be overwritten or deleted without a recoverable prior version.
PostgreSQL:

```sql
select
  name,
  location,
  versioning_enabled
from
  gcp_storage_bucket
where
  not versioning_enabled;
```

SQLite:

```sql
select
  name,
  location,
  versioning_enabled
from
  gcp_storage_bucket
where
  versioning_enabled is not 1;
```
List of members and their associated iam roles for the bucket
List each member and the IAM role it is bound to for every bucket. This is useful for assessing access permissions and managing security within your cloud storage environment.
PostgreSQL:

```sql
select
  name,
  location,
  p -> 'members' as member,
  p ->> 'role' as role
from
  gcp_storage_bucket,
  jsonb_array_elements(iam_policy -> 'bindings') as p;
```

SQLite:

```sql
select
  name,
  location,
  json_extract(p.value, '$.members') as member,
  json_extract(p.value, '$.role') as role
from
  gcp_storage_bucket,
  json_each(iam_policy, '$.bindings') as p;
```
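The queries above return the binding's `members` as a JSON array. If you prefer one row per member, you can additionally unnest that array — a PostgreSQL sketch using `jsonb_array_elements_text`:

```sql
select
  name,
  p ->> 'role' as role,
  m as member
from
  gcp_storage_bucket,
  jsonb_array_elements(iam_policy -> 'bindings') as p,
  jsonb_array_elements_text(p -> 'members') as m;
```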
Lifecycle rule of each storage bucket
Explore the lifecycle rules of your storage buckets to understand how they're configured. This can help in managing resources more effectively by determining when certain actions, such as transitioning to a different storage class or deleting objects, are set to occur.
PostgreSQL:

```sql
select
  name,
  p -> 'action' ->> 'storageClass' as storage_class,
  p -> 'action' ->> 'type' as action_type,
  p -> 'condition' ->> 'age' as age_in_days
from
  gcp_storage_bucket,
  jsonb_array_elements(lifecycle_rules) as p;
```

SQLite:

```sql
select
  name,
  json_extract(p.value, '$.action.storageClass') as storage_class,
  json_extract(p.value, '$.action.type') as action_type,
  json_extract(p.value, '$.condition.age') as age_in_days
from
  gcp_storage_bucket,
  json_each(lifecycle_rules) as p;
```
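Building on the lifecycle query, you can filter to a specific action type. For example, this PostgreSQL variant lists only rules that delete objects, along with the age threshold that triggers deletion:

```sql
select
  name,
  p -> 'condition' ->> 'age' as age_in_days
from
  gcp_storage_bucket,
  jsonb_array_elements(lifecycle_rules) as p
where
  p -> 'action' ->> 'type' = 'Delete';
```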
List of storage buckets whose retention period is less than 7 days
Explore which storage buckets have a retention period of less than a week. This can be useful in identifying potential data loss risks due to short retention periods.
PostgreSQL (note that `retentionPeriod` is stored as a JSON string, so it must be cast to a number before the comparison; comparing it as text would be lexicographic and give wrong results):

```sql
select
  name,
  retention_policy ->> 'retentionPeriod' as retention_period
from
  gcp_storage_bucket
where
  (retention_policy ->> 'retentionPeriod') :: bigint < 604800;
```

SQLite:

```sql
select
  name,
  json_extract(retention_policy, '$.retentionPeriod') as retention_period
from
  gcp_storage_bucket
where
  cast(
    json_extract(retention_policy, '$.retentionPeriod') as integer
  ) < 604800;
```
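Since `retentionPeriod` is expressed in seconds, it can be easier to read when reported in days. A PostgreSQL sketch dividing by 86400 (seconds per day):

```sql
select
  name,
  (retention_policy ->> 'retentionPeriod') :: bigint / 86400 as retention_days
from
  gcp_storage_bucket
where
  retention_policy is not null;
```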
Query examples
- compute_backend_buckets_for_storage_bucket
- kms_keys_for_storage_bucket
- logging_buckets_for_storage_bucket
- storage_bucket_1_year_count
- storage_bucket_24_hours_count
- storage_bucket_30_90_days_count
- storage_bucket_30_days_count
- storage_bucket_90_365_days_count
- storage_bucket_age_table
- storage_bucket_by_creation_month
- storage_bucket_by_location
- storage_bucket_by_project
- storage_bucket_by_storage_class
- storage_bucket_class
- storage_bucket_compute_backend_bucket_detail
- storage_bucket_count
- storage_bucket_customer_managed_encryption
- storage_bucket_encryption_detail
- storage_bucket_encryption_table
- storage_bucket_google_managed_encryption
- storage_bucket_input
- storage_bucket_logging
- storage_bucket_logging_detail
- storage_bucket_logging_disabled_count
- storage_bucket_no_retention_policy_count
- storage_bucket_overview
- storage_bucket_public_access
- storage_bucket_public_access_count
- storage_bucket_retention_policy
- storage_bucket_tags_detail
- storage_bucket_uniform_bucket_level_access
- storage_bucket_uniform_bucket_level_access_disabled_count
- storage_bucket_versioning_disabled
- storage_bucket_versioning_disabled_count
- storage_buckets_for_kms_key
Control examples
- All Controls > Compute > Compute Backend Bucket should not have dangling storage bucket
- All Controls > Logging > Ensure that retention policies on log buckets are configured using Bucket Lock
- All Controls > Storage > Ensure that Cloud Storage buckets have uniform bucket-level access enabled
- Check if Cloud Storage buckets have Bucket Only Policy turned on
- CIS v1.2.0 > 2 Logging and Monitoring > 2.3 Ensure that retention policies on log buckets are configured using Bucket Lock
- CIS v1.2.0 > 5 Storage > 5.1 Ensure that Cloud Storage bucket is not anonymously or publicly accessible
- CIS v1.2.0 > 5 Storage > 5.2 Ensure that Cloud Storage buckets have uniform bucket-level access enabled
- CIS v1.3.0 > 2 Logging and Monitoring > 2.3 Ensure that retention policies on log buckets are configured using Bucket Lock
- CIS v1.3.0 > 5 Storage > 5.1 Ensure that Cloud Storage bucket is not anonymously or publicly accessible
- CIS v1.3.0 > 5 Storage > 5.2 Ensure that Cloud Storage buckets have uniform bucket-level access enabled
- CIS v2.0.0 > 2 Logging and Monitoring > 2.3 Ensure that retention policies on log buckets are configured using Bucket Lock
- CIS v2.0.0 > 5 Storage > 5.1 Ensure that Cloud Storage bucket is not anonymously or publicly accessible
- CIS v2.0.0 > 5 Storage > 5.2 Ensure that Cloud Storage buckets have uniform bucket-level access enabled
- CIS v3.0.0 > 2 Logging and Monitoring > 2.3 Ensure That Retention Policies on Cloud Storage Buckets Used for Exporting Logs Are Configured Using Bucket Lock
- CIS v3.0.0 > 5 Storage > 5.1 Ensure That Cloud Storage Bucket Is Not Anonymously or Publicly Accessible
- CIS v3.0.0 > 5 Storage > 5.2 Ensure That Cloud Storage Buckets Have Uniform Bucket-Level Access Enabled
- Ensure that Cloud Storage bucket is not anonymously or publicly accessible
- Ensure that Cloud Storage bucket used for exporting logs is not anonymously or publicly accessible
- Ensure that Cloud Storage buckets used for exporting logs are configured using bucket lock
- Ensure that Cloud Storage buckets used for exporting logs have object versioning enabled
- Ensure that Cloud Storage buckets used for exporting logs have retention policy enabled
Schema for gcp_storage_bucket
Name | Type | Operators | Description |
---|---|---|---|
_ctx | jsonb | | Steampipe context in JSON form. |
acl | jsonb | | An access-control list. |
akas | jsonb | | Array of globally unique identifier strings (also known as) for the resource. |
billing_requester_pays | boolean | | When set to true, Requester Pays is enabled for this bucket. |
cors | jsonb | | The bucket's Cross-Origin Resource Sharing (CORS) configuration. |
default_event_based_hold | boolean | | The default value for event-based hold on newly created objects in this bucket. Event-based hold is a way to retain objects indefinitely until an event occurs, signified by the hold's release. After being released, such objects will be subject to bucket-level retention (if any). |
default_kms_key_name | text | | A Cloud KMS key that will be used to encrypt objects inserted into this bucket, if no encryption method is specified. |
default_object_acl | jsonb | | Lists of object access control entries. |
etag | text | | HTTP 1.1 Entity tag for the bucket. |
iam_configuration_bucket_policy_only_enabled | boolean | | The bucket's uniform bucket-level access configuration. The feature was formerly known as Bucket Policy Only. For backward compatibility, this field will be populated with identical information as the uniformBucketLevelAccess field. |
iam_configuration_public_access_prevention | text | | The bucket's Public Access Prevention configuration. Currently, 'unspecified' and 'enforced' are supported. |
iam_configuration_uniform_bucket_level_access_enabled | boolean | | The bucket's uniform bucket-level access configuration. |
iam_policy | jsonb | | An Identity and Access Management (IAM) policy, which specifies access controls for Google Cloud resources. A `Policy` is a collection of `bindings`. A `binding` binds one or more `members` to a single `role`. Members can be user accounts, service accounts, Google groups, and domains (such as G Suite). A `role` is a named list of permissions; each `role` can be an IAM predefined role or a user-created custom role. For some types of Google Cloud resources, a `binding` can also specify a `condition`, which is a logical expression that allows access to a resource only if the expression evaluates to `true`. |
id | text | | The ID of the bucket. For buckets, the id and name properties are the same. |
kind | text | | The kind of item this is. For buckets, this is always storage#bucket. |
labels | jsonb | | Labels that apply to this bucket. |
lifecycle_rules | jsonb | | The bucket's lifecycle configuration. See lifecycle management for more information. |
location | text | | The GCP multi-region, region, or zone in which the resource is located. |
location_type | text | | The type of the bucket location. |
log_bucket | text | | The destination bucket where the current bucket's logs should be placed. |
log_object_prefix | text | | A prefix for log object names. |
metageneration | bigint | | The metadata generation of this bucket. |
name | text | = | The name of the bucket. |
owner_entity | text | | The entity, in the form project-owner-projectId. This is always the project team's owner group. |
owner_entity_id | text | | The ID for the entity. |
project | text | =, !=, ~~, ~~*, !~~, !~~* | The GCP Project in which the resource is located. |
project_number | double precision | | The project number of the project the bucket belongs to. |
retention_policy | jsonb | | The bucket's retention policy. The retention policy enforces a minimum retention time for all objects contained in the bucket, based on their creation time. Any attempt to overwrite or delete objects younger than the retention period will result in a PERMISSION_DENIED error. |
self_link | text | | The URI of this bucket. |
sp_connection_name | text | =, !=, ~~, ~~*, !~~, !~~* | Steampipe connection name. |
sp_ctx | jsonb | | Steampipe context in JSON form. |
storage_class | text | | The bucket's default storage class, used whenever no storageClass is specified for a newly-created object. This defines how objects in the bucket are stored and determines the SLA and the cost of storage. Values include MULTI_REGIONAL, REGIONAL, STANDARD, NEARLINE, COLDLINE, ARCHIVE, and DURABLE_REDUCED_AVAILABILITY. If this value is not specified when the bucket is created, it will default to STANDARD. |
tags | jsonb | | A map of tags for the resource. |
time_created | timestamp with time zone | | The creation time of the bucket in RFC 3339 format. |
title | text | | Title of the resource. |
updated | timestamp with time zone | | The modification time of the bucket. |
versioning_enabled | boolean | | When set to true, versioning is fully enabled for this bucket. |
website_main_page_suffix | text | | If the requested object path is missing, the service will ensure the path has a trailing '/', append this suffix, and attempt to retrieve the resulting object. This allows the creation of index.html objects to represent directory pages. |
website_not_found_page | text | | If the requested object path is missing, and any mainPageSuffix object is missing, if applicable, the service will return the named object from this bucket as the content for a 404 Not Found result. |
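Many of the `jsonb` columns above can be filtered directly with PostgreSQL JSON operators. For example, to find buckets carrying a particular label (the `environment` key and `prod` value here are hypothetical placeholders for your own labeling scheme):

```sql
select
  name,
  location,
  labels
from
  gcp_storage_bucket
where
  labels ->> 'environment' = 'prod';
```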
Export
This table is available as a standalone Exporter CLI. Steampipe exporters are stand-alone binaries that allow you to extract data using Steampipe plugins without a database.
You can download the tarball for your platform from the Releases page, but it is simplest to install them with the steampipe_export_installer.sh script:

```sh
/bin/sh -c "$(curl -fsSL https://steampipe.io/install/export.sh)" -- gcp
```
You can pass the configuration to the command with the --config argument:

```sh
steampipe_export_gcp --config '<your_config>' gcp_storage_bucket
```