diff --git a/libbeat/docs/command-reference.asciidoc b/libbeat/docs/command-reference.asciidoc
index bf1bc206e673..b58dbd6c7a41 100644
--- a/libbeat/docs/command-reference.asciidoc
+++ b/libbeat/docs/command-reference.asciidoc
@@ -399,7 +399,7 @@ ifeval::["{beatname_lc}"=="functionbeat"]
Shows help for the `package` command.
*`-o, --output`*::
-Specifies the full path to the zip file that will contain the package.
+Specifies the full path pattern to use when creating the packages.
{global-flags}
@@ -407,7 +407,7 @@ Specifies the full path to the zip file that will contain the package.
["source","sh",subs="attributes"]
-----
-{beatname_lc} package /path/to/file.zip
+{beatname_lc} package --output /path/to/folder/package-{{.Provider}}.zip
-----
[[remove-command]]
diff --git a/x-pack/functionbeat/docs/config-options.asciidoc b/x-pack/functionbeat/docs/config-options-aws.asciidoc
similarity index 98%
rename from x-pack/functionbeat/docs/config-options.asciidoc
rename to x-pack/functionbeat/docs/config-options-aws.asciidoc
index 4c613139d498..f0e7dcc05cf5 100644
--- a/x-pack/functionbeat/docs/config-options.asciidoc
+++ b/x-pack/functionbeat/docs/config-options-aws.asciidoc
@@ -1,9 +1,9 @@
[id="configuration-{beatname_lc}-options"]
[role="xpack"]
-== Configure functions
+== Configure AWS functions
++++
-Configure functions
+Configure AWS functions
++++
{beatname_uc} runs as a function in your serverless environment.
diff --git a/x-pack/functionbeat/docs/config-options-gcp.asciidoc b/x-pack/functionbeat/docs/config-options-gcp.asciidoc
new file mode 100644
index 000000000000..e81a1b8dc182
--- /dev/null
+++ b/x-pack/functionbeat/docs/config-options-gcp.asciidoc
@@ -0,0 +1,198 @@
+[id="configuration-{beatname_lc}-gcp-options"]
+[role="xpack"]
+== Configure Google functions
+
+++++
+Configure Google functions
+++++
+
+beta[]
+
+{beatname_uc} runs as a Google Function on Google Cloud Platform (GCP).
+
+Before deploying {beatname_uc}, you need to configure one or more functions and
+specify details about the services that will trigger the functions.
+
+You configure the functions in the +{beatname_lc}.yml+ configuration file.
+When you're done, you can <>
+to your serverless environment.
+
+The following example configures two functions: `pubsub` and `storage`. The
+`pubsub` function collects log events from https://cloud.google.com/pubsub/[Google
+Pub/Sub]. The `storage` function collects log events from
+https://cloud.google.com/storage/[Google Cloud Storage]. Both functions in the
+example forward the events to {es}.
+
+["source","sh",subs="attributes"]
+----
+functionbeat.provider.gcp.location_id: "europe-west2"
+functionbeat.provider.gcp.project_id: "my-project-123456"
+functionbeat.provider.gcp.storage_name: "functionbeat-deploy"
+functionbeat.provider.gcp.functions:
+ - name: pubsub
+ enabled: true
+ type: pubsub
+ description: "Google Cloud Function for Pub/Sub"
+ trigger:
+ resource: "projects/_/pubsub/myPubSub"
+ #service: "pubsub.googleapis.com"
+ - name: storage
+ enabled: true
+ type: storage
+ description: "Google Cloud Function for Cloud Storage"
+ trigger:
+ resource: "projects/my-project/buckets/my-storage"
+ event_type: "google.storage.object.finalize"
+
+cloud.id: "MyESDeployment:SomeLongString=="
+cloud.auth: "elastic:mypassword"
+----
+
+[id="{beatname_lc}-gcp-options"]
+[float]
+=== Configuration options
+Specify the following options to configure the functions
+that you want to deploy to Google Cloud Platform (GCP).
+
+TIP: If you change the configuration after deploying the function, use
+the <> to update your deployment.
+
+[float]
+[id="{beatname_lc}-gcp-location_id"]
+==== `provider.gcp.location_id`
+
+The region where your GCP project is located.
+
+[float]
+[id="{beatname_lc}-gcp-project_id"]
+==== `provider.gcp.project_id`
+
+The ID of the GCP project where the function artifacts will be deployed. See the
+https://cloud.google.com/about/locations/[Google Cloud locations documentation]
+to verify that Cloud Functions are supported in the region you specify.
+
+[float]
+[id="{beatname_lc}-gcp-storage_name"]
+==== `provider.gcp.storage_name`
+
+The name of the Google Cloud storage bucket where the function artifacts will be
+deployed. If the bucket doesn't exist, it will be created, provided you have
+the correct project permissions (`storage.objects.create`).
+
+[float]
+[id="{beatname_lc}-gcp-functions"]
+==== `provider.gcp.functions`
+
+A list of functions that are available for deployment.
+
+[float]
+[id="{beatname_lc}-gcp-name"]
+===== `name`
+
+A unique name for the Google function.
+
+[float]
+[id="{beatname_lc}-gcp-type"]
+===== `type`
+
+The type of GCP service to monitor. For this release, the supported types
+are:
+
+[horizontal]
+`pubsub`:: Collect log events from Google Pub/Sub.
+`storage`:: Collect log events from Google Cloud Storage buckets.
+
+[float]
+[id="{beatname_lc}-gcp-description"]
+===== `description`
+
+A description of the function. This description is useful when you are running
+multiple functions and need more context about how each function is used.
+
+[float]
+[id="{beatname_lc}-gcp-memory-size"]
+==== `memory_size`
+
+The maximum amount of memory to allocate for this function.
+The default is `256MB`.
+
+[float]
+[id="{beatname_lc}-gcp-timeout"]
+==== `timeout`
+
+The execution timeout in seconds. If the function does not finish in time,
+it is considered failed and terminated. The default is `60s`. Increase this
+value if you see timeout messages in the Google Stackdriver logs.
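+
+For example, the following snippet raises the memory limit and timeout for a
+single function (the function name and values shown are illustrative):
+
+["source","yaml",subs="attributes"]
+----
+functionbeat.provider.gcp.functions:
+  - name: pubsub
+    type: pubsub
+    memory_size: 512MB
+    timeout: 120s
+----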
+
+[float]
+[id="{beatname_lc}-gcp-service_account_email"]
+==== `service_account_email`
+
+The email of the service account that the function will assume as its identity.
+The default is {projectid}@appspot.gserviceaccount.com.
+
+[float]
+[id="{beatname_lc}-gcp-labels"]
+==== `labels`
+
+One or more labels to apply to the function. A label is a key-value pair that
+helps you organize your Google Cloud resources.
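+
+For example (the label keys and values shown are illustrative):
+
+["source","yaml",subs="attributes"]
+----
+labels:
+  team: logging
+  env: staging
+----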
+
+[float]
+[id="{beatname_lc}-gcp-vpc_connector"]
+==== `vpc_connector`
+
+A VPC connector that the function can connect to when sending requests to
+resources in your VPC network.
+
+Use the format `projects/*/locations/*/connectors/*` or a fully qualified
+URI.
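+
+For example (the project, region, and connector names shown are illustrative):
+
+["source","yaml",subs="attributes"]
+----
+vpc_connector: "projects/my-project/locations/europe-west2/connectors/my-connector"
+----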
+
+[float]
+[id="{beatname_lc}-gcp-maximum_instances"]
+==== `maximum_instances`
+
+The maximum number of instances that can run at the same time. The default is
+unlimited.
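+
+For example, to cap the function at ten concurrent instances:
+
+["source","yaml",subs="attributes"]
+----
+maximum_instances: 10
+----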
+
+[float]
+[id="{beatname_lc}-gcp-triggers"]
+==== `trigger`
+
+The trigger that will cause the function to execute.
+
+* If `type` is `pubsub`, specify the name of the Pub/Sub topic to watch for
+messages.
+
+* If `type` is `storage`, specify the Cloud Storage bucket to watch for object
+events. For `event_type`, specify the type of object event that will trigger the
+function. See the https://cloud.google.com/functions/docs/calling/storage[Google Cloud
+docs] for a list of available event types.
+
+[float]
+[id="{beatname_lc}-gcp-keep_null"]
+==== `keep_null`
+
+If `true`, fields with null values will be published in the output document. By
+default, `keep_null` is `false`.
+
+[float]
+[id="{beatname_lc}-gcp-fields"]
+==== `fields`
+
+Optional fields that you can specify to add additional information to the
+output. Fields can be scalar values, arrays, dictionaries, or any nested
+combination of these.
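+
+For example (the field names and values shown are illustrative):
+
+["source","yaml",subs="attributes"]
+----
+fields:
+  env: staging
+  owner: "logging-team"
+----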
+
+[float]
+[id="{beatname_lc}-gcp-processors"]
+==== `processors`
+
+Define custom processors for this function. For example, you can specify a
+dissect processor to tokenize a string:
+
+[source,yaml]
+----
+processors:
+ - dissect:
+ tokenizer: "%{key1} %{key2}"
+----
diff --git a/x-pack/functionbeat/docs/configuring-howto.asciidoc b/x-pack/functionbeat/docs/configuring-howto.asciidoc
index cb7ba0b2e0fd..a1f41d4fde07 100644
--- a/x-pack/functionbeat/docs/configuring-howto.asciidoc
+++ b/x-pack/functionbeat/docs/configuring-howto.asciidoc
@@ -13,6 +13,7 @@ include::{libbeat-dir}/shared-configuring.asciidoc[]
The following topics describe how to configure {beatname_uc}:
* <>
+* <>
* <>
* <>
* <>
@@ -31,7 +32,9 @@ The following topics describe how to configure {beatname_uc}:
--
-include::./config-options.asciidoc[]
+include::./config-options-aws.asciidoc[]
+
+include::./config-options-gcp.asciidoc[]
include::./general-options.asciidoc[]
diff --git a/x-pack/functionbeat/docs/deploying.asciidoc b/x-pack/functionbeat/docs/deploying.asciidoc
index 2750fb03f261..ac0a40234c52 100644
--- a/x-pack/functionbeat/docs/deploying.asciidoc
+++ b/x-pack/functionbeat/docs/deploying.asciidoc
@@ -4,11 +4,8 @@
After configuring {beatname_uc} and defining cloud functions for the services
you want to monitor, deploy the functions to your cloud provider. To do this,
-you can:
-
-* <> (good for getting
- started),
-* Or <>.
+you can use the {beatname_uc} manager (good for getting started), or use your
+own deployment infrastructure.
[[manager-deployment]]
==== Use the {beatname_uc} manager
@@ -18,8 +15,9 @@ when you don't have your own deployment infrastructure or process in place.
During deployment, the {beatname_uc} manager:
-* Exports an AWS CloudFormation template. You can inspect the template by
-running the <> command.
+* Exports a function template to use for deployment. For AWS, it exports an
+{cloudformation-ref} template. For Google Cloud, it exports a YAML configuration
+file. To inspect the template, run the <> command.
* Creates a zip package that includes the function code and +{beatname_lc}.yml+
config file.
* Uploads the package to the specified cloud provider.
@@ -38,8 +36,8 @@ provider:
+
`BEAT_STRICT_PERMS=false`:: This setting makes the function skip the ownership
check on the configuration file.
-`ENABLED_FUNCTIONS=function-name-1,function-name-2`:: Specifies a comma-
-separated list of functions that are enabled in the configuration file. For
+`ENABLED_FUNCTIONS=function-name-1,function-name-2`:: Specifies a
+comma-separated list of functions that are enabled in the configuration file. For
example, to package functions called `my-kinesis` and `my-cloudwatch-logs`, run:
+
*linux and mac*:
@@ -67,26 +65,51 @@ archive. For example:
+
["source","sh",subs="attributes"]
----------------------------------------------------------------------
-./{beatname_lc} -v -e -d "*" package --output /path/to/file.zip
+./{beatname_lc} -v -e -d "*" package --output /path/to/folder/package-{{.Provider}}.zip
----------------------------------------------------------------------
+
*win:*
+
["source","sh",subs="attributes"]
----------------------------------------------------------------------
-.{backslash}{beatname_lc}.exe -v -e -d "*" package --output /path/to/file.zip
+.{backslash}{beatname_lc}.exe -v -e -d "*" package --output /path/to/folder/package-{{.Provider}}.zip
----------------------------------------------------------------------
+
-This command generates a deployment package (called `file.zip` in the example)
-that contains:
+For `--output`, specify a full path pattern.
++
+The `package` command generates deployment packages for each provider specified
+in the configuration. Each package contains:
+
-* a binary, called `functionbeat-aws`, that contains the function code
+* a binary with the function code
* the `functionbeat.yml` config file
. If certificates are required, add the cert files to the zip package under the
-same path as the configured +{beatname_lc}.yml+ file.
+same path as the configured +{beatname_lc}.yml+ file.
+
+. Export a function template to use for deployment:
++
+*linux and mac:*
++
+["source","sh",subs="attributes"]
+----------------------------------------------------------------------
+./{beatname_lc} export function FUNCTION_NAME
+----------------------------------------------------------------------
++
+*win:*
++
+["source","sh",subs="attributes"]
+----------------------------------------------------------------------
+.{backslash}{beatname_lc}.exe export function FUNCTION_NAME
+----------------------------------------------------------------------
++
+{beatname_uc} writes the template to stdout. For AWS functions, it writes an
+{cloudformation-ref} template. For Google Cloud, it writes a YAML configuration
+file.
+
+. Modify the template to work with your infrastructure.
. Deploy the package, using the infrastructure and automation supported by your
-cloud provider. For example, to deploy the package to AWS,
-<>
-managed by {beatname_uc}, and modify it to work with your infrastructure.
+cloud provider, for example, {cloudformation-ref} or
+https://cloud.google.com/deployment-manager[Google Cloud Deployment Manager].
++
+For more information about deployment, see your cloud provider's documentation.
diff --git a/x-pack/functionbeat/docs/export-cloudformation-template.asciidoc b/x-pack/functionbeat/docs/export-cloudformation-template.asciidoc
index 3e4fbe8fa12a..803ab5a6f8b4 100644
--- a/x-pack/functionbeat/docs/export-cloudformation-template.asciidoc
+++ b/x-pack/functionbeat/docs/export-cloudformation-template.asciidoc
@@ -1,6 +1,6 @@
[[export-cloudformation-template]]
[role="xpack"]
-=== Export AWS CloudFormation template
+=== Export AWS CloudFormation template
You can use {beatname_uc} to export an {cloudformation-ref} template then use
the template with automation software to deploy {beatname_uc} code to your cloud
diff --git a/x-pack/functionbeat/docs/getting-started.asciidoc b/x-pack/functionbeat/docs/getting-started.asciidoc
index f62b2981f04e..66b177fe7fbe 100644
--- a/x-pack/functionbeat/docs/getting-started.asciidoc
+++ b/x-pack/functionbeat/docs/getting-started.asciidoc
@@ -93,62 +93,64 @@ TIP: See the
{beats-ref}/config-file-format.html[Config File Format] section of the
_Beats Platform Reference_ for more about the structure of the config file.
-The following example configures a function called `cloudwatch` that collects
-events from CloudWatch Logs and forwards the events to {es}.
-
+. Configure the functions that you want to deploy. The configuration settings
+vary depending on the type of function and cloud provider you're using. This
+section provides a couple of example configurations.
++
+--
+* *AWS example*: This example configures a function called `cloudwatch` that
+collects events from CloudWatch Logs. When a message is sent to the specified
+log group, the cloud function executes and sends message events to the
+configured output:
++
["source","sh",subs="attributes"]
-------------------------------------------------------------------------------------
{beatname_lc}.provider.aws.endpoint: "s3.amazonaws.com"
-{beatname_lc}.provider.aws.deploy_bucket: "functionbeat-deploy"
+{beatname_lc}.provider.aws.deploy_bucket: "functionbeat-deploy" <1>
{beatname_lc}.provider.aws.functions:
- - name: cloudwatch
+ - name: cloudwatch <2>
enabled: true
type: cloudwatch_logs
description: "lambda function for cloudwatch logs"
triggers:
- log_group_name: /aws/lambda/my-lambda-function
-cloud.id: "MyESDeployment:SomeLongString=="
-cloud.auth: "elastic:SomeLongString"
-------------------------------------------------------------------------------------
-
-To configure {beatname_uc}:
-
-. Specify a unique name for the S3 bucket to which the functions will be
-uploaded. For example:
+<1> A unique name for the S3 bucket to which the functions will be
+uploaded.
+<2> Details about the function you want to deploy, including the name of the
+function, the type of service to monitor, and the log groups that trigger the
+function.
+
-["source","sh",subs="attributes"]
-----
-{beatname_lc}.provider.aws.deploy_bucket: "functionbeat-deploy"
-----
+See <> for more examples.
-. Define the functions that you want to deploy. Define a function for each
-service you want to monitor. For each function, you must specify:
-+
-[horizontal]
-`name`:: A unique name for the Lambda function.
-`type`:: The type of service to monitor. For this release, the supported types
-are:
-* `cloudwatch_logs` to collect data from CloudWatch logs
-* `sqs` to collect messages from Amazon Simple Queue Service (SQS)
-* `kinesis` to collect data from Kinesis data streams
-`triggers`:: The triggers that will cause the function to execute. If `type`
-is `cloudwatch_logs` logs, specify a list of log groups. If `type` is `sqs` or
-`kinesis`, specify a list of Amazon Resource Names (ARNs).
-+
-When a message is sent to the specified log group or queue, the cloud function
-executes and sends message events to the output configured for {beatname_uc}.
-+
-The following example configures a function called `sqs` that collects data
-from Amazon SQS:
+* *Google Cloud example*: This example configures a function called
+`storage` that collects log events from Google Cloud Storage. When the specified
+event type occurs on the Cloud Storage bucket, the cloud function executes and
+sends events to the configured output:
+
["source","sh",subs="attributes"]
----
-- name: sqs
- enabled: true
- type: sqs
- triggers:
- - event_source_arn: arn:aws:sqs:us-east-1:123456789012:myevents
+functionbeat.provider.gcp.location_id: "europe-west2"
+functionbeat.provider.gcp.project_id: "my-project-123456"
+functionbeat.provider.gcp.storage_name: "functionbeat-deploy" <1>
+functionbeat.provider.gcp.functions:
+ - name: storage <2>
+ enabled: true
+ type: storage
+ description: "Google Cloud Function for Cloud Storage"
+ trigger:
+ resource: "projects/my-project/buckets/my-storage"
+ event_type: "google.storage.object.finalize"
----
+<1> The name of the GCP storage bucket where the function artifacts will be
+deployed.
+<2> Details about the function you want to deploy, including the name of the
+function, the type of resource to monitor, and the resource event that triggers
+the function.
++
+See <> for more examples.
+
+--
include::{libbeat-dir}/step-configure-output.asciidoc[]
@@ -169,13 +171,18 @@ include::{libbeat-dir}/shared-template-load.asciidoc[]
[role="xpack"]
=== Step 4: Deploy {beatname_uc}
-To deploy the cloud functions to your cloud provider, either use the
+To deploy {beatname_uc} functions to your cloud provider, either use the
{beatname_uc} manager, as described here, or <>.
-. Make sure the user has the credentials required to authenticate with your
-cloud service provider. For example, if you're deploying an AWS Lambda
-function, you can set environment variables that contain your credentials:
+TIP: If you change the configuration after deploying the function, use
+the <> to update your deployment.
+
+[[deploy-to-aws]]
+==== Deploy to AWS
+
+. Make sure you have the credentials required to authenticate with AWS. You can
+set environment variables that contain your credentials:
+
*linux and mac*:
+
@@ -218,13 +225,64 @@ For example, the following command deploys a function called `cloudwatch`:
.{backslash}{beatname_lc}.exe -v -e -d "*" deploy cloudwatch
----------------------------------------------------------------------
+
-The function is deployed in your cloud environment and ready to send log events
-to the configured output.
-
+The function is deployed to AWS and ready to send log events to the configured
+output.
++
If deployment fails, see <> for help troubleshooting.
-TIP: If you change the configuration after deploying the function, use
-the <> to update your deployment.
+[[deploy-to-gcp]]
+==== Deploy to Google Cloud Platform
+
+beta[]
+
+. In Google Cloud, create a service account that has these required roles:
++
+--
+include::iam-permissions.asciidoc[tag=gcp-roles-deployment]
+--
++
+See the https://cloud.google.com/docs/authentication/getting-started[Google
+Cloud documentation] for more information about creating a service account.
+
+. Set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to point to
+the JSON file that contains your service account key. For example:
++
+*linux and mac*:
++
+[source, shell]
+----
+export GOOGLE_APPLICATION_CREDENTIALS="/path/to/myproject-5a90ee91d102.json"
+----
++
+*win*:
++
+[source, shell]
+----
+set GOOGLE_APPLICATION_CREDENTIALS="C:\path\to\myproject-5a90ee91d102.json"
+----
+
+. Deploy the cloud functions.
++
+For example, the following command deploys a function called `storage`:
++
+*linux and mac:*
++
+["source","sh",subs="attributes"]
+----------------------------------------------------------------------
+./{beatname_lc} -v -e -d "*" deploy storage
+----------------------------------------------------------------------
++
+*win:*
++
+["source","sh",subs="attributes"]
+----------------------------------------------------------------------
+.{backslash}{beatname_lc}.exe -v -e -d "*" deploy storage
+----------------------------------------------------------------------
++
+The function is deployed to Google Cloud Platform and ready to send events
+to the configured output.
++
+If deployment fails, see <> for help troubleshooting.
[[view-kibana-dashboards]]
[role="xpack"]
diff --git a/x-pack/functionbeat/docs/iam-permissions.asciidoc b/x-pack/functionbeat/docs/iam-permissions.asciidoc
index e354f2aa6422..25dc16a236c8 100644
--- a/x-pack/functionbeat/docs/iam-permissions.asciidoc
+++ b/x-pack/functionbeat/docs/iam-permissions.asciidoc
@@ -1,20 +1,26 @@
[[iam-permissions]]
[role="xpack"]
-=== IAM permissions required for {beatname_uc} deployment
+=== IAM permissions required to deploy {beatname_uc}
++++
IAM permissions required for deployment
++++
-The role used to deploy {beatname_uc} to AWS must have the minimum privileges
-required to deploy and run the Lambda function.
+This section describes the minimum privileges or roles required to deploy
+functions to your cloud provider:
-The following sections show example policies that grant the required
-permissions.
-
+* <>
+* <>
+
+
+[[iam-permissions-aws]]
+==== Permissions required by AWS
+
+The list of required permissions depends on the type of events that you are
+collecting. Here are some example policies that grant the required privileges.
[[iam-permissions-cloudwatch]]
-==== CloudWatch logs
+===== CloudWatch logs
The following policy grants the permissions required to deploy and run a Lambda
function that collects events from CloudWatch logs.
@@ -70,7 +76,7 @@ function that collects events from CloudWatch logs.
----
[[iam-permissions-sqs-kinesis]]
-==== SQS and Kinesis
+===== SQS and Kinesis
The following policy grants the permissions required to deploy and run a Lambda
function that reads from SQS queues or Kinesis data streams.
@@ -124,3 +130,17 @@ function that reads from SQS queues or Kinesis data streams.
]
}
----
+
+[[iam-permissions-gcp]]
+==== Roles required by Google Cloud Platform
+
+The following roles are required to deploy Cloud Functions to Google Cloud
+Platform:
+
+// tag::gcp-roles-deployment[]
+* Cloud Functions Developer
+* Cloud Functions Service Agent
+* Service Account User
+* Storage Admin
+* Storage Object Admin
+// end::gcp-roles-deployment[]
diff --git a/x-pack/functionbeat/docs/overview.asciidoc b/x-pack/functionbeat/docs/overview.asciidoc
index 6837140f48e3..f5b39bf674c2 100644
--- a/x-pack/functionbeat/docs/overview.asciidoc
+++ b/x-pack/functionbeat/docs/overview.asciidoc
@@ -7,15 +7,18 @@
++++
{beatname_uc} is an Elastic https://www.elastic.co/products/beats[Beat] that you
-deploy on your serverless environment to collect data from cloud services and
-ship it to the {stack}.
+deploy as a function in your serverless environment to collect data from cloud
+services and ship it to the {stack}.
-Version {version} supports deploying {beatname_uc} as an AWS Lambda service and
-responds to the triggers defined for the following event sources:
+Version {version} supports deploying {beatname_uc} as an AWS Lambda service or
+Google Cloud Function. It responds to triggers defined for the following event
+sources:
-* https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html[CloudWatch Logs]
+* https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html[Amazon CloudWatch Logs]
* https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html[Amazon Simple Queue Service (SQS)]
-* https://docs.aws.amazon.com/kinesis/latest/APIReference/Welcome.html[Kinesis]
+* https://docs.aws.amazon.com/kinesis/latest/APIReference/Welcome.html[Amazon Kinesis]
+* https://cloud.google.com/pubsub[Google Cloud Pub/Sub]
+* https://cloud.google.com/storage[Google Cloud Storage]
image::./images/diagram-functionbeat-architecture.svg["{beatname_uc} collects events generated by cloud services"]