This document describes how to create aggregated sinks. Aggregated sinks let you combine and route logs that are generated by the Google Cloud resources in your organization or folder to a centralized location.
Before you begin
Before you create a sink, ensure the following:
You are familiar with the behavior of aggregated sinks. To learn about these sinks, see Aggregated sinks overview.
You have a Google Cloud folder or organization with log entries that you can see in the Logs Explorer.
You have one of the following IAM roles for the Google Cloud organization or folder from which you're routing log entries:
- Owner (roles/owner)
- Logging Admin (roles/logging.admin)
- Logs Configuration Writer (roles/logging.configWriter)
The permissions contained in these roles let you create, delete, or modify sinks. For information about setting IAM roles, see the Logging Access control guide.
The destination of the aggregated sink exists or you have the ability to create it.
When the destination is a Google Cloud project, the project can be in any organization. All other destinations can be in any project in any organization.
Select the tab for how you plan to use the samples on this page:
Console
When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.
gcloud
In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
REST
To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.
After installing the Google Cloud CLI, initialize it by running the following command:
gcloud init
If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.
For more information, see Authenticate for using REST in the Google Cloud authentication documentation.
Create an aggregated sink
To configure an aggregated sink, create the sink and then grant the sink the permissions to write to the destination. This section describes how to create an aggregated sink. For information about granting permissions to the sink, see the section of this page titled Set destination permissions.
You can create up to 200 sinks per folder or organization.
Console
To create an aggregated sink for your folder or organization, do the following:
In the Google Cloud console, go to the Log Router page:
If you use the search bar to find this page, then select the result whose subheading is Logging.
Select an existing folder or organization.
Select Create sink.
In the Sink details panel, enter the following details:
Sink name: Provide an identifier for the sink. After you create the sink, you can't rename it, but you can delete it and create a new sink.
Sink description (optional): Describe the purpose or use case for the sink.
In the Select sink service menu, select the type of destination, and then complete the dialog to specify the destination. You can select an existing destination or create the destination.
For an intercepting sink, select Google Cloud project, and then enter the fully-qualified name of the destination Google Cloud project:
logging.googleapis.com/projects/DESTINATION_PROJECT_ID
For a non-intercepting sink, select the destination, and then enter the fully-qualified name of the destination. The following destinations are supported:
Google Cloud project
logging.googleapis.com/projects/DESTINATION_PROJECT_ID
Cloud Logging bucket
logging.googleapis.com/projects/DESTINATION_PROJECT_ID/locations/LOCATION/buckets/BUCKET_NAME
BigQuery dataset
You must enter the fully-qualified name of a write-enabled dataset. The dataset can be a date-sharded or partitioned table. Don't enter the name of a linked dataset. Linked datasets are read only.
bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID
Cloud Storage bucket
storage.googleapis.com/BUCKET_NAME
Pub/Sub topic
pubsub.googleapis.com/projects/PROJECT_ID/topics/TOPIC_ID
Splunk
Enter the Pub/Sub topic for your Splunk service.
In the Choose logs to include in sink panel, select the resources to include in the sink.
For an intercepting sink, select Intercept logs ingested by this organization and all child resources.
For a non-intercepting sink, select Include logs ingested by this resource and all child resources.
In the Build inclusion filter field, enter a filter expression that matches the log entries you want to include. If you don't set a filter, all log entries from your selected resource are routed to the destination.
For example, you might want to build a filter to route all Data Access audit logs to a single Logging bucket. This filter looks like the following:
LOG_ID("cloudaudit.googleapis.com/data_access") OR LOG_ID("externalaudit.googleapis.com/data_access")
For filter examples, see the Filters for aggregated sinks section of this page.
Note that the length of a filter can't exceed 20,000 characters.
Optional: To verify you entered the correct filter, select Preview logs. This opens the Logs Explorer in a new tab with the filter pre-populated.
Optional: In the Choose logs to exclude from the sink panel, do the following:
In the Exclusion filter name field, enter a name.
In the Build an exclusion filter field, enter a filter expression that matches the log entries you want to exclude. You can also use the sample function to select a portion of the log entries to exclude.
For example, to exclude the log entries from a specific project from being routed to the destination, add the following exclusion filter:
logName:projects/PROJECT_ID
To exclude log entries from multiple projects, use the logical-OR operator to join logName clauses.
You can create up to 50 exclusion filters per sink. Note that the length of a filter can't exceed 20,000 characters.
Select Create sink.
To complete the configuration of your aggregated sink, grant the service account for the sink the permission to write log entries to your sink's destination. For more information, see Set destination permissions.
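Inclusion and exclusion filters for aggregated sinks are often long lists of logName clauses joined with OR. As a minimal sketch of composing one programmatically before pasting it into the console — the helper name is hypothetical and not part of any Google Cloud SDK:

```python
# Hypothetical helper (not part of any Google Cloud SDK): compose an
# exclusion filter matching log entries from several projects, joined
# with the logical-OR operator, and enforce the documented
# 20,000-character limit on filter length.

MAX_FILTER_LEN = 20_000  # documented limit on a sink filter

def build_project_exclusion_filter(project_ids):
    clauses = [f"logName:projects/{pid}" for pid in project_ids]
    filter_expr = " OR ".join(clauses)
    if len(filter_expr) > MAX_FILTER_LEN:
        raise ValueError("filter exceeds the 20,000-character limit")
    return filter_expr

print(build_project_exclusion_filter(["project-a", "project-b"]))
# logName:projects/project-a OR logName:projects/project-b
```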
gcloud
To create an aggregated sink, use the gcloud logging sinks create command, and ensure that you include the --include-children option.
Before using the following command, make the following replacements:
- SINK_NAME: The name of the log sink. You can't change the name of a sink after you create it.
- SINK_DESTINATION: The service or project to where you want your log entries routed. For information about the format of these destinations, see Destination path formats.
- INCLUSION_FILTER: The inclusion filter for a sink. For filter examples, see Filters for aggregated sinks.
- FOLDER_ID: The ID of the folder. If you want to create a sink at the organization level, then replace --folder=FOLDER_ID with --organization=ORGANIZATION_ID.
Execute the gcloud logging sinks create command:
gcloud logging sinks create SINK_NAME \
  SINK_DESTINATION --include-children \
  --folder=FOLDER_ID --log-filter="INCLUSION_FILTER"
You can also provide the following options:
- To create an intercepting sink, include the --intercept-children option.
For example, if you're creating an aggregated sink at the folder level and whose destination is a Pub/Sub topic, your command might look like the following:
gcloud logging sinks create SINK_NAME \
  pubsub.googleapis.com/projects/PROJECT_ID/topics/TOPIC_ID --include-children \
  --folder=FOLDER_ID --log-filter="logName:activity"
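When scripting sink creation, it can help to assemble the documented command as an argv list before invoking it. A sketch, assuming you would pass the list to subprocess.run() in an environment where the gcloud CLI is installed — the helper name is illustrative, not part of any SDK:

```python
# Sketch: assemble the documented gcloud command as an argv list.
# build_sink_create_cmd is a hypothetical helper, not an SDK function;
# the flags themselves are the ones documented above.

def build_sink_create_cmd(sink_name, destination, folder_id, log_filter,
                          intercept=False):
    cmd = [
        "gcloud", "logging", "sinks", "create", sink_name, destination,
        "--include-children",            # required for an aggregated sink
        f"--folder={folder_id}",
        f"--log-filter={log_filter}",
    ]
    if intercept:
        cmd.append("--intercept-children")  # makes it an intercepting sink
    return cmd

cmd = build_sink_create_cmd(
    "my-sink",
    "pubsub.googleapis.com/projects/my-project/topics/my-topic",
    "123456789",
    "logName:activity",
)
print(" ".join(cmd))
```

To actually run it, pass the list to subprocess.run(cmd, check=True).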
Grant the service account for the sink permission to write to your sink destination. For more information, see Set destination permissions.
REST
To create an aggregated sink, use the organizations.sinks.create or folders.sinks.create Logging API method.
Prepare the arguments to the method as follows:
Set the parent field to the Google Cloud organization or folder in which to create the sink. The parent must be one of the following:
organizations/ORGANIZATION_ID
folders/FOLDER_ID
In the LogSink object in the method request body, do the following:
Set includeChildren to True.
To create an intercepting sink, also set the interceptChildren field to True.
Set the filter field to match the log entries you want to include. For filter examples, see Filters for aggregated sinks. The length of a filter can't exceed 20,000 characters.
Set the remaining LogSink fields as you would for any sink. For more information, see Route logs to supported destinations.
Call organizations.sinks.create or folders.sinks.create to create the sink.
Grant the service account for the sink permission to write to your sink destination. For more information, see Set destination permissions.
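The steps above come together in the JSON request body. A minimal sketch of a LogSink body for folders.sinks.create, assuming a placeholder sink name, destination, and filter — only the field names follow the LogSink schema:

```python
import json

# Sketch of a LogSink request body for folders.sinks.create or
# organizations.sinks.create. The name, destination, and filter values
# here are placeholders, not working identifiers.
log_sink = {
    "name": "my-aggregated-sink",
    "destination": "storage.googleapis.com/my-bucket",
    "filter": 'LOG_ID("cloudaudit.googleapis.com/data_access")',
    "includeChildren": True,        # required for an aggregated sink
    # "interceptChildren": True,    # uncomment for an intercepting sink
}

body = json.dumps(log_sink, indent=2)
print(body)
```

You would then POST this body to the method's URL with credentials from the gcloud CLI, as described in the Before you begin section.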
Any changes made to a sink might take a few minutes to apply.
Filters for aggregated sinks
This section provides examples of filters that you might use in an aggregated sink. For more examples, see Sample queries using the Logs Explorer.
Some examples use the following notation:
- : is the substring operator. Don't substitute the = operator.
- ... represents any additional filter comparisons.
- Variables are indicated by colored text. Replace them with valid values.
The length of a filter is restricted to 20,000 characters.
For more details about the filtering syntax, see Logging query language.
Select the log source
To route log entries from all child resources, don't specify a project, folder, or organization in your sink's inclusion and exclusion filters. For example, suppose you configure an aggregated sink for an organization with the following filter:
resource.type="gce_instance"
With the previous filter, log entries with a resource type of Compute Engine instances that are written to any child of that organization are routed by the aggregated sink to the destination.
However, there might be situations where you want to use an aggregated sink to route log entries only from specific child resources. For example, for compliance reasons you might want to store audit logs from specific folders or projects in their own Cloud Storage bucket. In these situations, configure your inclusion filter to specify each child resource whose log entries you want routed. If you want to route log entries from a folder and all projects within that folder, then the filter must list the folder and each of the projects contained by that folder, and join the statements with an OR clause.
The following filters restrict log entries to specific Google Cloud projects, folders, or organizations:
logName:"projects/PROJECT_ID/logs/" AND ...
logName:("projects/PROJECT_A_ID/logs/" OR "projects/PROJECT_B_ID/logs/") AND ...
logName:"folders/FOLDER_ID/logs/" AND ...
logName:"organizations/ORGANIZATION_ID/logs/" AND ...
For example, to route only the log entries from Compute Engine instances that were written to the folder my-folder, use the following filter:
logName:"folders/my-folder/logs/" AND resource.type="gce_instance"
With the previous filter, log entries written to any resource other than my-folder, including log entries written to Google Cloud projects that are children of my-folder, aren't routed to the destination.
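The pattern described above — listing the folder and each child project, then joining the clauses with OR — can be sketched programmatically. The helper name is hypothetical, not part of any SDK:

```python
# Hypothetical helper: build an inclusion filter that routes log entries
# written to a folder and to each listed child project, joining the
# logName clauses with the logical-OR operator. An optional extra
# comparison (for example a resource.type restriction) is ANDed on.

def build_source_filter(folder_id, project_ids, extra=""):
    clauses = [f'logName:"folders/{folder_id}/logs/"']
    clauses += [f'logName:"projects/{pid}/logs/"' for pid in project_ids]
    expr = "(" + " OR ".join(clauses) + ")"
    return f"{expr} AND {extra}" if extra else expr

print(build_source_filter("123456", ["proj-a", "proj-b"],
                          'resource.type="gce_instance"'))
```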
Select the monitored resource
To route log entries from only a specific monitored resource in a Google Cloud project, use multiple comparisons to specify the resource exactly:
logName:"projects/PROJECT_ID/logs" AND resource.type=RESOURCE_TYPE AND resource.labels.instance_id=INSTANCE_ID
For a list of resource types, see Monitored resource types.
Select a sample of log entries
To route a random sample of log entries, add the sample built-in function. For example, to route only ten percent of the log entries matching your current filter, use this addition:
sample(insertId, 0.10) AND ...
For more information, see the sample function.
For more information about Cloud Logging filters, see Logging query language.
Set destination permissions
This section describes how to grant Logging the Identity and Access Management permissions to write log entries to your sink's destination. For the full list of Logging roles and permissions, see Access control.
When you create or update a sink that routes log entries to any destination other than a log bucket in the current project, a service account for that sink is required. Logging automatically creates and manages the service account for you:
- As of May 22, 2023, when you create a sink and no service account for the underlying resource exists, Logging creates the service account. Logging uses the same service account for all sinks in the underlying resource. Resources can be a Google Cloud project, an organization, a folder, or a billing account.
- Before May 22, 2023, Logging created a service account for each sink. As of May 22, 2023, Logging uses a shared service account for all sinks in the underlying resource.
The writer identity of a sink is the identifier of the service account associated with that sink. All sinks have a writer identity unless they write to a log bucket in the current Google Cloud project. The email address in the writer identity identifies the principal that must have access to write data to the destination.
To route log entries to a resource protected by a service perimeter, you must add the service account for that sink to an access level and then assign it to the destination service perimeter. This isn't necessary for non-aggregated sinks. For details, see VPC Service Controls: Cloud Logging.
To set permissions for your sink to route to its destination, do the following:
Console
To get information about the service account for your sink, do the following:
In the Google Cloud console, go to the Log Router page:
If you use the search bar to find this page, then select the result whose subheading is Logging.
Select more_vert Menu and then select View sink details. The writer identity appears in the Sink details panel.
If the value of the writerIdentity field contains an email address, then proceed to the next step. When the value is None, you don't need to configure destination permissions.
Copy the sink's writer identity into your clipboard. The following illustrates a writer identity:
serviceAccount:service-123456789012@gcp-sa-logging.iam.gserviceaccount.com
Grant the principal specified by the sink's writer identity the permission to write log data to the destination:
In the Google Cloud console, go to the IAM page:
If you use the search bar to find this page, then select the result whose subheading is IAM & Admin.
In the toolbar of the Google Cloud console, select the project which stores the destination of the aggregated sink. When the destination is a project, select that project.
Click Grant access.
Enter the principal specified by the sink's writer identity and then grant an IAM role:
- Google Cloud project: Grant the Logs Writer role (roles/logging.logWriter). Specifically, a principal needs the logging.logEntries.route permission.
- Log bucket: Grant the Logs Bucket Writer role (roles/logging.bucketWriter).
- Cloud Storage bucket: Grant the Storage Object Creator role (roles/storage.objectCreator).
- BigQuery dataset: Grant the BigQuery Data Editor role (roles/bigquery.dataEditor).
- Pub/Sub topic, including Splunk: Grant the Pub/Sub Publisher role (roles/pubsub.publisher).
gcloud
Ensure that you have Owner access on the Google Cloud project that contains the destination. If you don't have Owner access to the destination of the sink, then ask a project owner to add the writer identity as a principal.
To get information about the service account for your sink, call the gcloud logging sinks describe command.
Before using the following command, make the following replacements:
- SINK_NAME: The name of the log sink.
Execute the gcloud logging sinks describe command:
gcloud logging sinks describe SINK_NAME
If the sink details contain a field labeled writerIdentity, then proceed to the next step. When the details don't include a writerIdentity field, you don't need to configure destination permissions for the sink.
Copy the sink's writer identity into your clipboard. The following illustrates a writer identity:
serviceAccount:service-123456789012@gcp-sa-logging.iam.gserviceaccount.com
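A writer identity has the form serviceAccount:EMAIL. If you script this step, a small sketch for pulling the service-account email out of that value — the function name is hypothetical:

```python
# Sketch (hypothetical helper): extract the service-account email from a
# sink's writerIdentity value, which has the form "serviceAccount:EMAIL".
# Note that gcloud's --member flag takes the full "serviceAccount:EMAIL"
# form, so this split is only needed when you want the bare email.

def writer_identity_email(writer_identity):
    prefix = "serviceAccount:"
    if not writer_identity.startswith(prefix):
        raise ValueError("unexpected writerIdentity format")
    return writer_identity[len(prefix):]

print(writer_identity_email(
    "serviceAccount:service-123456789012@gcp-sa-logging.iam.gserviceaccount.com"))
# service-123456789012@gcp-sa-logging.iam.gserviceaccount.com
```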
Grant the sink's writer identity the permission to write log data to the destination by calling the gcloud projects add-iam-policy-binding command.
Before using the following command, make the following replacements:
- PROJECT_ID: The identifier of the project. Select the project which stores the destination of the aggregated sink. When the destination is a project, select that project.
- PRINCIPAL: An identifier for the principal that you want to grant the role to. Principal identifiers usually have the following form: PRINCIPAL-TYPE:ID. For example, user:my-user@example.com. For a full list of the formats that PRINCIPAL can have, see Principal identifiers.
- ROLE: An IAM role. Grant the sink's writer identity an IAM role based on the destination of the log sink:
- Google Cloud project: Grant the Logs Writer role (roles/logging.logWriter). Specifically, a principal needs the logging.logEntries.route permission.
- Log bucket: Grant the Logs Bucket Writer role (roles/logging.bucketWriter).
- Cloud Storage bucket: Grant the Storage Object Creator role (roles/storage.objectCreator).
- BigQuery dataset: Grant the BigQuery Data Editor role (roles/bigquery.dataEditor).
- Pub/Sub topic, including Splunk: Grant the Pub/Sub Publisher role (roles/pubsub.publisher).
Execute the gcloud projects add-iam-policy-binding command:
gcloud projects add-iam-policy-binding PROJECT_ID --member=PRINCIPAL --role=ROLE
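The destination-to-role mapping above is easy to encode if you automate the grant. A sketch, with illustrative dictionary keys (the role IDs are the ones documented above):

```python
# Sketch: map a sink destination type to the IAM role to grant the
# sink's writer identity, following the list above. The dictionary keys
# are illustrative labels, not API values; the role IDs are real.

DESTINATION_ROLE = {
    "project": "roles/logging.logWriter",
    "log_bucket": "roles/logging.bucketWriter",
    "storage_bucket": "roles/storage.objectCreator",
    "bigquery_dataset": "roles/bigquery.dataEditor",
    "pubsub_topic": "roles/pubsub.publisher",  # includes Splunk destinations
}

def role_for_destination(kind):
    return DESTINATION_ROLE[kind]

print(role_for_destination("pubsub_topic"))
# roles/pubsub.publisher
```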
REST
We recommend that you use the Google Cloud console or the Google Cloud CLI to grant a role to the service account.
What's next
Learn how to create log views on a log bucket. Log views let you grant principals read-access to a subset of the log entries stored in a log bucket.
For information about managing existing sinks, see Route logs to supported destinations: Manage sinks.
If you encounter issues as you use sinks to route logs, see Troubleshoot routing and sinks.
To learn how to view your logs in their destinations, as well as how the logs are formatted and organized, see View logs in sink destinations.