GCP Acquisition

GCP logging is achieved through a combination of configuration settings and filters. The two main locations to store logs are logging buckets and storage buckets.

Logs exist at numerous levels: organization, folder(s), and project(s). For a large organization this could entail hundreds of potential log locations, so covering each of them individually is beyond the scope of this document.

In this document we will focus on creating a consolidated log that includes all org, folder, and project logs.

The GCP SOF-ELK parser is written to process logs extracted from logging buckets via the gcloud CLI. This parser will handle every type of GCP log except for VPC flow logs. VPC flow logs are processed by the NetFlow parser.

While the GCP logs and VPC flow logs can be combined, it’s easier to keep them separate so we don’t have to write complicated filters when extracting the data from the logging bucket.

Step 1: Create logging buckets

In this step, we will create two logging buckets: one for VPC flow logs and one for all other GCP logs.

Type             Name          Description
Logging bucket   VPCFlowLogs   Dedicated Flow Logs
Logging bucket   XConOrgLogs   All org logs except Flow Logs

(Note: The bucket names above are examples that will be used in this document. You can name them anything you prefer.)

To create these logging buckets, you will need to select a project. It might be advantageous to create a dedicated project for that purpose.

When creating these buckets, make sure to set a retention period appropriate for your organization, keeping in mind the cost of an extended retention period.
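If you prefer the gcloud CLI to the Cloud Console, the buckets can also be created with gcloud logging buckets create. The following is a minimal sketch; the project ID and the 90-day retention value are placeholders to adjust for your environment.

# Create the dedicated VPC flow log bucket (project ID and retention are examples)
gcloud logging buckets create VPCFlowLogs \
  --project=my-logging-project \
  --location=global \
  --retention-days=90 \
  --description="Dedicated Flow Logs"

# Create the bucket for all other org logs
gcloud logging buckets create XConOrgLogs \
  --project=my-logging-project \
  --location=global \
  --retention-days=90 \
  --description="All org logs except Flow Logs"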

Step 2: Create log routers (also called log sinks)

Our goal is to consolidate all logs, so we will be creating these log routers at the organization level.

The key settings are to ensure that Includes child resources is set to Yes and to apply an appropriate inclusion and/or exclusion filter.

Sink details for the VPC flow logs

  • Name: VPCFlowLogs
  • Resource Name: organizations/<redacted>/sinks/VPCFlowLogs
  • Description: Dedicated Flow Logs
  • Service: Logging bucket
  • Includes child resources: Yes
  • Intercepts child resources: No
  • Destination: logging.googleapis.com/projects/<redacted>/locations/global/buckets/VPCFlowLogs
  • Writer identity: serviceAccount:service-org-<redacted>@gcp-sa-logging.iam.gserviceaccount.com
  • Inclusion filter: log_id("compute.googleapis.com/vpc_flows")
  • Exclusion filter(s): None
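A minimal gcloud sketch of creating this sink; the organization ID and destination project ID are placeholders:

# Org-level sink that routes only VPC flow logs to the VPCFlowLogs bucket
gcloud logging sinks create VPCFlowLogs \
  logging.googleapis.com/projects/my-logging-project/locations/global/buckets/VPCFlowLogs \
  --organization=123456789012 \
  --include-children \
  --description="Dedicated Flow Logs" \
  --log-filter='log_id("compute.googleapis.com/vpc_flows")'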

Sink details for all other GCP logs

  • Name: XConOrgLogs
  • Resource Name: organizations/<redacted>/sinks/XConOrgLogs
  • Description: All org logs except Flow Logs
  • Service: Logging bucket
  • Includes child resources: Yes
  • Intercepts child resources: No
  • Destination: logging.googleapis.com/projects/<redacted>/locations/global/buckets/XConOrgLogs
  • Writer identity: serviceAccount:service-org-<redacted>@gcp-sa-logging.iam.gserviceaccount.com
  • Inclusion filter: Empty
  • Exclusion filter(s): one exclusion named Exclude FlowLogs with the filter log_id("compute.googleapis.com/vpc_flows")

It’s important to leave the inclusion filter blank to capture all log entries.
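A similar sketch for the catch-all sink; here the flow log filter moves to a named exclusion and the inclusion filter stays empty (the organization ID, project ID, and exclusion name are placeholders):

# Org-level sink that routes everything except VPC flow logs
gcloud logging sinks create XConOrgLogs \
  logging.googleapis.com/projects/my-logging-project/locations/global/buckets/XConOrgLogs \
  --organization=123456789012 \
  --include-children \
  --description="All org logs except Flow Logs" \
  --exclusion='name=exclude-flowlogs,filter=log_id("compute.googleapis.com/vpc_flows")'

After creating each sink, remember to grant its writer identity (the service account shown above) write access to the destination bucket, typically via the roles/logging.bucketWriter role.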

Step 3: Extracting the logs with the gcloud CLI

The command is gcloud logging read. This command has many options, but we will use the simplest version. The full documentation can be found in the gcloud reference: https://cloud.google.com/sdk/gcloud/reference/logging/read

Extract logs from the VPC flow logging bucket:

gcloud logging read --bucket=VPCFlowLogs --format="json" --freshness=30d --location=global --view=_AllLogs > vpc_flow_logs.json

Extract logs from the main GCP logging bucket:

gcloud logging read --bucket=XConOrgLogs --format="json" --freshness=30d --location=global --view=_AllLogs > gcp_all_logs.json

Adjust the freshness parameter to retrieve more or fewer logs. You can also write more complex time filters if you wish. Please refer to the Google documentation linked above for details and examples.
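For example, a sketch of pulling a fixed time window using the Logging query language's timestamp field instead of --freshness (the dates and output filename are placeholders):

gcloud logging read 'timestamp>="2025-01-01T00:00:00Z" AND timestamp<"2025-02-01T00:00:00Z"' \
  --bucket=XConOrgLogs --location=global --view=_AllLogs --format="json" > gcp_jan_logs.json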

Two other buckets, called _Required and _Default, will also contain logs. They exist at many different levels (org, folder, project) and aren't consolidated.
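If you need their contents as well, they can be read with the same command on a per-project basis; a sketch, assuming a placeholder project ID:

gcloud logging read --project=my-project --bucket=_Default --location=global --view=_AllLogs --freshness=30d --format="json" > my-project_default_logs.json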

Beyond the Google documentation, the SANS FOR509 course is recommended to gain an in-depth understanding of cloud logging (including Azure, AWS, and Google).

Step 4: Loading the exported data

  1. To load the VPC Flow Logs, place the generated file into the /logstash/nfarch/ directory.
  2. To load the non-VPC Flow Log records, place the generated file into the /logstash/gcp/ directory.
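For example, if you are copying the files to a SOF-ELK VM over SSH, the transfer might look like the following (the hostname is a placeholder; adjust the username and paths to your environment):

scp vpc_flow_logs.json elk_user@sof-elk-vm:/logstash/nfarch/
scp gcp_all_logs.json elk_user@sof-elk-vm:/logstash/gcp/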

Content on this page originally created by Pierre Lidome, FOR509 co-author.