Inspec dataproc cluster
Signed-off-by: Modular Magician <magic-modules@google.com>
slevenick authored and modular-magician committed Sep 24, 2019
1 parent ff7bd00 commit b2bd746
Showing 61 changed files with 1,606 additions and 37 deletions.
2 changes: 1 addition & 1 deletion docs/resources/google_appengine_standard_app_version.md
@@ -19,6 +19,7 @@ end
## Properties
Properties that can be accessed from the `google_appengine_standard_app_version` resource:


* `name`: Full path to the Version resource in the API. Example, "v1".

* `version_id`: Relative name of the version within the service. For example, `v1`. Version names can contain only lowercase letters, numbers, or hyphens. Reserved names include "default", "latest", and any name with the prefix "ah-".
@@ -28,7 +29,6 @@ Properties that can be accessed from the `google_appengine_standard_app_version`
* `threadsafe`: Whether multiple requests can be dispatched to this version at once.
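
A minimal InSpec check against these properties could look like the sketch below; the project, location, service, and version values are placeholders, and the constructor parameters are assumed to follow the usual inspec-gcp pattern rather than being taken from this diff.

```ruby
# Sketch only: all identifiers below are invented placeholders.
describe google_appengine_standard_app_version(project: 'my-gcp-project', location: 'europe-west2',
                                               service: 'default', version_id: 'v2') do
  it { should exist }
  # `threadsafe` is documented above as a boolean property.
  its('threadsafe') { should be true }
end
```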



## GCP Permissions

Ensure the [App Engine Admin API](https://console.cloud.google.com/apis/library/appengine.googleapis.com/) is enabled for the current project.
8 changes: 7 additions & 1 deletion docs/resources/google_bigquery_dataset.md
@@ -35,6 +35,7 @@ end
## Properties
Properties that can be accessed from the `google_bigquery_dataset` resource:


* `access`: An array of objects that define dataset access for one or more entities.

* `domain`: A domain to grant access to. Any users signed in with the domain specified will be granted the specified access
@@ -49,6 +50,12 @@ Properties that can be accessed from the `google_bigquery_dataset` resource:

* `view`: A view from a different dataset to grant access to. Queries executed against that view will have read access to tables in this dataset. The role field is not required when this field is set. If that view is updated by any user, access to the view needs to be granted again via an update operation.

* `dataset_id`: The ID of the dataset containing this table.

* `project_id`: The ID of the project containing this table.

* `table_id`: The ID of the table. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.

* `creation_time`: The time when this dataset was created, in milliseconds since the epoch.

* `dataset_reference`: A reference that identifies the dataset.
@@ -76,7 +83,6 @@ Properties that can be accessed from the `google_bigquery_dataset` resource:
* `location`: The geographic location where the dataset should reside. See [official docs](https://cloud.google.com/bigquery/docs/dataset-locations). There are two types of locations, regional or multi-regional. A regional location is a specific geographic place, such as Tokyo, and a multi-regional location is a large geographic area, such as the United States, that contains at least two geographic places. Possible regional values include: `asia-east1`, `asia-northeast1`, `asia-southeast1`, `australia-southeast1`, `europe-north1`, `europe-west2` and `us-east4`. Possible multi-regional values: `EU` and `US`. The default value is multi-regional location `US`. Changing this forces a new resource to be created.
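
For orientation, a hedged sketch of how these properties might be exercised; the project and dataset names are placeholders, not values from this change.

```ruby
# Sketch only: 'my-gcp-project' and 'my_dataset' are placeholder identifiers.
describe google_bigquery_dataset(project: 'my-gcp-project', name: 'my_dataset') do
  it { should exist }
  # `location` defaults to the multi-regional 'US' location per the description above.
  its('location') { should eq 'US' }
end
```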



## GCP Permissions

Ensure the [BigQuery API](https://console.cloud.google.com/apis/library/bigquery-json.googleapis.com/) is enabled for the current project.
68 changes: 67 additions & 1 deletion docs/resources/google_bigquery_table.md
@@ -24,6 +24,7 @@ end
## Properties
Properties that can be accessed from the `google_bigquery_table` resource:


* `table_reference`: Reference describing the ID of this table

* `dataset_id`: The ID of the dataset containing this table
@@ -66,6 +67,10 @@ Properties that can be accessed from the `google_bigquery_table` resource:

* `user_defined_function_resources`: Describes user-defined function resources used in the query.

* `inline_code`: An inline resource that contains code for a user-defined function (UDF). Providing an inline code resource is equivalent to providing a URI for a file containing the same code.

* `resource_uri`: A code resource to load from a Google Cloud Storage URI (gs://bucket/path).

* `time_partitioning`: If specified, configures time-based partitioning for this table.

* `expiration_ms`: Number of milliseconds for which to keep the storage for a partition.
@@ -86,6 +91,16 @@ Properties that can be accessed from the `google_bigquery_table` resource:

* `fields`: Describes the fields in a table.

* `description`: The field description. The maximum length is 1,024 characters.

* `fields`: Describes the nested schema fields if the type property is set to RECORD.

* `mode`: The field mode

* `name`: The field name

* `type`: The field data type

* `encryption_configuration`: Custom encryption configuration

* `kms_key_name`: Describes the Cloud KMS encryption key that will be used to protect destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.
@@ -108,14 +123,65 @@ Properties that can be accessed from the `google_bigquery_table` resource:

* `schema`: The schema for the data. Schema is required for CSV and JSON formats

* `fields`: Describes the fields in a table.

* `description`: The field description

* `fields`: Describes the nested schema fields if the type property is set to RECORD

* `mode`: Field mode.

* `name`: Field name

* `type`: Field data type

* `google_sheets_options`: Additional options if sourceFormat is set to GOOGLE_SHEETS.

* `skip_leading_rows`: The number of rows at the top of a Google Sheet that BigQuery will skip when reading the data.

* `csv_options`: Additional properties to set if sourceFormat is set to CSV.

* `allow_jagged_rows`: Indicates if BigQuery should accept rows that are missing trailing optional columns

* `allow_quoted_newlines`: Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file

* `encoding`: The character encoding of the data

* `field_delimiter`: The separator for fields in a CSV file

* `quote`: The value that is used to quote data sections in a CSV file

* `skip_leading_rows`: The number of rows at the top of a CSV file that BigQuery will skip when reading the data.

* `bigtable_options`: Additional options if sourceFormat is set to BIGTABLE.

* `dataset`: Name of the dataset.

* `ignore_unspecified_column_families`: If this field is true, then the column families that are not specified in the columnFamilies list are not exposed in the table schema.

* `read_rowkey_as_string`: If this field is true, then the rowkey column families will be read and converted to string.

* `column_families`: List of column families to expose in the table schema along with their types.

* `columns`: Lists of columns that should be exposed as individual fields as opposed to a list of (column name, value) pairs.

* `encoding`: The encoding of the values when the type is not STRING

* `field_name`: If the qualifier is not a valid BigQuery field identifier, a valid identifier must be provided as the column field name and is used as field name in queries.

* `only_read_latest`: If this is set, only the latest version of the value in this column is exposed.

* `qualifier_string`: Qualifier of the column

* `type`: The type to convert the value in cells of this column

* `encoding`: The encoding of the values when the type is not STRING

* `family_id`: Identifier of the column family.

* `only_read_latest`: If this is set, only the latest version of the value is exposed for all columns in this column family.

* `type`: The type to convert the value in cells of this column family

* `dataset`: Name of the dataset
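
A minimal sketch tying a few of these properties together; the project, dataset, and table names are placeholders, and the dotted path into `table_reference` assumes nested properties can be addressed that way, as in other inspec-gcp examples.

```ruby
# Sketch only: project, dataset, and table names are placeholders.
describe google_bigquery_table(project: 'my-gcp-project', dataset: 'my_dataset', name: 'my_table') do
  it { should exist }
  # Nested properties such as the table reference are reached with a dotted path.
  its('table_reference.dataset_id') { should eq 'my_dataset' }
end
```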


## GCP Permissions
26 changes: 26 additions & 0 deletions docs/resources/google_cloudbuild_trigger.md
@@ -25,6 +25,7 @@ end
## Properties
Properties that can be accessed from the `google_cloudbuild_trigger` resource:


* `id`: The unique identifier for the trigger.

* `description`: Human-readable description of the trigger.
@@ -63,6 +64,31 @@ Properties that can be accessed from the `google_cloudbuild_trigger` resource:

* `steps`: The operations to be performed on the workspace.

* `name`: The name of the container image that will run this particular build step. If the image is available in the host's Docker daemon's cache, it will be run directly. If not, the host will attempt to pull the image first, using the builder service account's credentials if necessary. The Docker daemon's cache will already have the latest versions of all of the officially supported build steps (https://github.com/GoogleCloudPlatform/cloud-builders). The Docker daemon will also have cached many of the layers for some popular images, like "ubuntu", "debian", but they will be refreshed at the time you attempt to use them. If you built an image in a previous build step, it will be stored in the host's Docker daemon's cache and is available to use as the name for a later build step.

* `args`: A list of arguments that will be presented to the step when it is started. If the image used to run the step's container has an entrypoint, the args are used as arguments to that entrypoint. If the image does not define an entrypoint, the first element in args is used as the entrypoint, and the remainder will be used as arguments.

* `env`: A list of environment variable definitions to be used when running a step. The elements are of the form "KEY=VALUE" for the environment variable "KEY" being given the value "VALUE".

* `id`: Unique identifier for this build step, used in `wait_for` to reference this build step as a dependency.

* `entrypoint`: Entrypoint to be used instead of the build step image's default entrypoint. If unset, the image's default entrypoint is used

* `dir`: Working directory to use when running this step's container. If this value is a relative path, it is relative to the build's working directory. If this value is absolute, it may be outside the build's working directory, in which case the contents of the path may not be persisted across build step executions, unless a `volume` for that path is specified. If the build specifies a `RepoSource` with `dir` and a step with a `dir`, which specifies an absolute path, the `RepoSource` `dir` is ignored for the step's execution.

* `secret_env`: A list of environment variables which are encrypted using a Cloud Key Management Service crypto key. These values must be specified in the build's `Secret`.

* `timeout`: Time limit for executing this build step. If not defined, the step has no time limit and will be allowed to continue to run until either it completes or the build itself times out.

* `timing`: Output only. Stores timing information for executing this build step.

* `volumes`: List of volumes to mount into the build step. Each volume is created as an empty volume prior to execution of the build step. Upon completion of the build, volumes and their contents are discarded. Using a named volume in only one step is not valid as it is indicative of a build request with an incorrect configuration.

* `name`: Name of the volume to mount. Volume names must be unique per build step and must be valid names for Docker volumes. Each named volume must be used by at least two build steps.

* `path`: Path at which to mount the volume. Paths must be absolute and cannot conflict with other volume paths on the same build step or with certain reserved volume paths.

* `wait_for`: The ID(s) of the step(s) that this build step depends on. This build step will not start until all the build steps in `wait_for` have completed successfully. If `wait_for` is empty, this build step will start when all previous build steps in the `Build.Steps` list have completed successfully.
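
One way this might be exercised is sketched below; the project and trigger ID are placeholders, and the constructor is assumed to accept `project` and `id` parameters in line with the usual inspec-gcp convention.

```ruby
# Sketch only: the project and trigger ID are placeholders.
describe google_cloudbuild_trigger(project: 'my-gcp-project', id: 'my-trigger-id') do
  it { should exist }
  # `description` is the human-readable trigger description documented above.
  its('description') { should match 'nightly build' }
end
```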


## GCP Permissions
2 changes: 1 addition & 1 deletion docs/resources/google_cloudfunctions_cloud_function.md
@@ -25,6 +25,7 @@ end
## Properties
Properties that can be accessed from the `google_cloudfunctions_cloud_function` resource:


* `name`: A user-defined name of the function. Function names must be unique globally and match pattern `projects/*/locations/*/functions/*`.

* `description`: User-provided description of a function.
@@ -74,7 +75,6 @@ Properties that can be accessed from the `google_cloudfunctions_cloud_function`
* `location`: The location of this cloud function.
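
A hedged usage sketch follows; the project, region, and function name are placeholders and the expected description is invented for illustration.

```ruby
# Sketch only: project, region, and function name are placeholders.
describe google_cloudfunctions_cloud_function(project: 'my-gcp-project', location: 'europe-west1',
                                              name: 'my-function') do
  it { should exist }
  its('description') { should eq 'A description of my function' }
end
```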



## GCP Permissions

Ensure the [Cloud Functions API](https://console.cloud.google.com/apis/library/cloudfunctions.googleapis.com/) is enabled for the current project.
12 changes: 11 additions & 1 deletion docs/resources/google_compute_autoscaler.md
@@ -26,6 +26,7 @@ end
## Properties
Properties that can be accessed from the `google_compute_autoscaler` resource:


* `id`: Unique identifier for the resource.

* `creation_timestamp`: Creation timestamp in RFC3339 text format.
@@ -44,16 +45,25 @@ Properties that can be accessed from the `google_compute_autoscaler` resource:

* `cpu_utilization`: Defines the CPU utilization policy that allows the autoscaler to scale based on the average CPU utilization of a managed instance group.

* `utilization_target`: The target CPU utilization that the autoscaler should maintain. Must be a float value in the range (0, 1]. If not specified, the default is 0.6. If the CPU level is below the target utilization, the autoscaler scales down the number of instances until it reaches the minimum number of instances you specified or until the average CPU of your instances reaches the target utilization. If the average CPU is above the target utilization, the autoscaler scales up until it reaches the maximum number of instances you specified or until the average utilization reaches the target utilization.

* `custom_metric_utilizations`: Configuration parameters for autoscaling based on a custom Stackdriver Monitoring metric.

* `metric`: The identifier (type) of the Stackdriver Monitoring metric. The metric cannot have negative values. The metric must have a value type of INT64 or DOUBLE.

* `utilization_target`: The target value of the metric that autoscaler should maintain. This must be a positive value. A utilization metric scales number of virtual machines handling requests to increase or decrease proportionally to the metric. For example, a good metric to use as a utilizationTarget is www.googleapis.com/compute/instance/network/received_bytes_count. The autoscaler will work to keep this value constant for each of the instances.

* `utilization_target_type`: Defines how target utilization value is expressed for a Stackdriver Monitoring metric. Either GAUGE, DELTA_PER_SECOND, or DELTA_PER_MINUTE.

* `load_balancing_utilization`: Configuration parameters of autoscaling based on a load balancer.

* `utilization_target`: Fraction of backend capacity utilization (set in HTTP(s) load balancing configuration) that autoscaler should maintain. Must be a positive float value. If not defined, the default is 0.8.

* `target`: URL of the managed instance group that this autoscaler will scale.

* `zone`: URL of the zone where the instance group resides.
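
A minimal sketch against the top-level properties above; the project, zone, autoscaler name, and instance group manager name are placeholders.

```ruby
# Sketch only: project, zone, and autoscaler name are placeholders.
describe google_compute_autoscaler(project: 'my-gcp-project', zone: 'us-central1-a', name: 'my-autoscaler') do
  it { should exist }
  # `target` is the URL of the managed instance group being scaled.
  its('target') { should match 'my-instance-group-manager' }
end
```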



## GCP Permissions

Ensure the [Compute Engine API](https://console.cloud.google.com/apis/library/compute.googleapis.com/) is enabled for the current project.
2 changes: 1 addition & 1 deletion docs/resources/google_compute_backend_bucket.md
@@ -23,6 +23,7 @@ end
## Properties
Properties that can be accessed from the `google_compute_backend_bucket` resource:


* `bucket_name`: Cloud Storage bucket name.

* `cdn_policy`: Cloud CDN configuration for this Backend Bucket.
@@ -40,7 +41,6 @@ Properties that can be accessed from the `google_compute_backend_bucket` resource:
* `name`: Name of the resource. Provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?` which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.
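
As a sketch, these properties might be checked like this; the project, backend bucket name, and Cloud Storage bucket name are placeholders.

```ruby
# Sketch only: project and bucket names are placeholders.
describe google_compute_backend_bucket(project: 'my-gcp-project', name: 'my-backend-bucket') do
  it { should exist }
  # `bucket_name` is the Cloud Storage bucket backing this resource.
  its('bucket_name') { should eq 'my-gcs-bucket' }
end
```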



## GCP Permissions

Ensure the [Compute Engine API](https://console.cloud.google.com/apis/library/compute.googleapis.com/) is enabled for the current project.
12 changes: 11 additions & 1 deletion docs/resources/google_compute_backend_service.md
@@ -25,6 +25,7 @@ end
## Properties
Properties that can be accessed from the `google_compute_backend_service` resource:


* `affinity_cookie_ttl_sec`: Lifetime of cookies in seconds if session_affinity is GENERATED_COOKIE. If set to 0, the cookie is non-persistent and lasts only until the end of the browser session (or equivalent). The maximum allowed value for TTL is one day. When the load balancing scheme is INTERNAL, this field is not used.

* `backends`: The set of backends that serve this BackendService.
@@ -55,6 +56,16 @@ Properties that can be accessed from the `google_compute_backend_service` resource:

* `cache_key_policy`: The CacheKeyPolicy for this CdnPolicy.

* `include_host`: If true, requests to different hosts will be cached separately.

* `include_protocol`: If true, http and https requests will be cached separately.

* `include_query_string`: If true, include query string parameters in the cache key according to query_string_whitelist and query_string_blacklist. If neither is set, the entire query string will be included. If false, the query string will be excluded from the cache key entirely.

* `query_string_blacklist`: Names of query string parameters to exclude in cache keys. All other parameters will be included. Either specify query_string_whitelist or query_string_blacklist, not both. '&' and '=' will be percent encoded and not treated as delimiters.

* `query_string_whitelist`: Names of query string parameters to include in cache keys. All other parameters will be excluded. Either specify query_string_whitelist or query_string_blacklist, not both. '&' and '=' will be percent encoded and not treated as delimiters.

* `signed_url_cache_max_age_sec`: Maximum number of seconds the response to a signed URL request will be considered fresh, defaults to 1hr (3600s). After this time period, the response will be revalidated before being served. When serving responses to signed URL requests, Cloud CDN will internally behave as though all responses from this backend had a "Cache-Control: public, max-age=[TTL]" header, regardless of any existing Cache-Control header. The actual headers served in responses will not be altered.

* `connection_draining`: Settings for connection draining
@@ -98,7 +109,6 @@ Properties that can be accessed from the `google_compute_backend_service` resource:
* `timeout_sec`: How many seconds to wait for the backend before considering it a failed request. Default is 30 seconds. Valid range is [1, 86400].
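
A minimal sketch, assuming the resource is looked up by project and name; the identifiers are placeholders.

```ruby
# Sketch only: project and backend service name are placeholders.
describe google_compute_backend_service(project: 'my-gcp-project', name: 'my-backend-service') do
  it { should exist }
  # `timeout_sec` defaults to 30 seconds per the docs above; `cmp` compares loosely across types.
  its('timeout_sec') { should cmp 30 }
end
```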



## GCP Permissions

Ensure the [Compute Engine API](https://console.cloud.google.com/apis/library/compute.googleapis.com/) is enabled for the current project.
2 changes: 1 addition & 1 deletion docs/resources/google_compute_disk.md
@@ -34,6 +34,7 @@ end
## Properties
Properties that can be accessed from the `google_compute_disk` resource:


* `label_fingerprint`: The fingerprint used for optimistic locking of this resource. Used internally during updates.

* `creation_timestamp`: Creation timestamp in RFC3339 text format.
@@ -95,7 +96,6 @@ Properties that can be accessed from the `google_compute_disk` resource:
* `source_snapshot_id`: The unique ID of the snapshot used to create this disk. This value identifies the exact snapshot that was used to create this persistent disk. For example, if you created the persistent disk from a snapshot that was later deleted and recreated under the same name, the source snapshot ID would identify the exact version of the snapshot that was used.
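
A hedged sketch of a disk check; the project, zone, and disk name are placeholders.

```ruby
# Sketch only: project, zone, and disk name are placeholders.
describe google_compute_disk(project: 'my-gcp-project', name: 'my-disk', zone: 'us-central1-a') do
  it { should exist }
  # `creation_timestamp` is returned in RFC3339 text format per the docs above.
  its('creation_timestamp') { should_not be_nil }
end
```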



## GCP Permissions

Ensure the [Compute Engine API](https://console.cloud.google.com/apis/library/compute.googleapis.com/) is enabled for the current project.
