README.md in fluent-plugin-bigquery-3.1.0 vs README.md in fluent-plugin-bigquery-3.2.0
- old
+ new
@@ -28,21 +28,21 @@
## With docker image
If you use the official alpine-based fluentd docker image (https://github.com/fluent/fluentd-docker-image),
you need to install the `bigdecimal` gem in your own Dockerfile,
because the alpine-based image contains only a minimal ruby environment in order to reduce image size.
And in most cases, the dependency on an embedded gem is not written in the gemspec.
-Because embbeded gem dependency sometimes restricts ruby environment.
+Because an embedded gem dependency sometimes restricts the ruby environment.
## Configuration
### Options
#### common
| name | type | required? | placeholder? | default | description |
| :-------------------------------------------- | :------------ | :----------- | :---------- | :------------------------- | :----------------------- |
-| auth_method | enum | yes | no | private_key | `private_key` or `json_key` or `compute_engine` or `application_default` |
+| auth_method | enum | yes | no | private_key | `private_key` or `json_key` or `compute_engine` or `application_default` (GKE Workload Identity) |
| email | string | yes (private_key) | no | nil | GCP Service Account Email |
| private_key_path | string | yes (private_key) | no | nil | GCP Private Key file path |
| private_key_passphrase | string | yes (private_key) | no | nil | GCP Private Key Passphrase |
| json_key | string | yes (json_key) | no | nil | GCP JSON Key file path or JSON Key string |
| location | string | no | no | nil | BigQuery Data Location. The geographic location of the job. Required except for US and EU. |
@@ -57,11 +57,11 @@
| fetch_schema | bool | yes (either this or `schema_path`) | no | false | If true, fetch the table schema definition from the BigQuery table automatically. |
| fetch_schema_table | string | no | yes | nil | If set, fetch the table schema definition from this table. If fetch_schema is false, this param is ignored. |
| schema_cache_expire | integer | no | no | 600 | Value is in seconds. If the current time is past the expiration interval, the table schema definition is re-fetched. |
| request_timeout_sec | integer | no | no | nil | BigQuery API response timeout |
| request_open_timeout_sec | integer | no | no | 60 | BigQuery API connection and request timeout. If you send big data to BigQuery, set a large value. |
-| time_partitioning_type | enum | no (either day) | no | nil | Type of bigquery time partitioning feature. |
+| time_partitioning_type | enum | no (either day or hour) | no | nil | Type of bigquery time partitioning feature. |
| time_partitioning_field | string | no | no | nil | Field used to determine how to create a time-based partition. |
| time_partitioning_expiration | time | no | no | nil | Expiration milliseconds for bigquery time partitioning. |
| clustering_fields | array(string) | no | no | nil | One or more fields on which data should be clustered. The order of the specified columns determines the sort order of the data. |
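To make the table above concrete, here is a rough sketch combining several of the common options; the destination settings (`project`, `dataset`, `table`) and every value below are illustrative placeholders, not prescriptions from this README.

```apache
<match dummy>
  @type bigquery_insert

  auth_method json_key
  json_key /path/to/service_account.json  # file path, or the JSON key string itself

  # only required for data locations other than US and EU
  location asia-northeast1

  project yourproject_id
  dataset yourdataset_id
  table   tablename

  # hypothetical partitioning/clustering setup: daily partitions on a "timestamp"
  # field, expiring after 30 days, clustered by the listed columns (order matters)
  time_partitioning_type day
  time_partitioning_field timestamp
  time_partitioning_expiration 30d
  clustering_fields ["user_id", "status"]
</match>
```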
#### bigquery_insert
@@ -192,19 +192,19 @@
For high rate inserts over streaming inserts, you should specify flush intervals and buffer chunk options:
```apache
<match dummy>
@type bigquery_insert
-
+
<buffer>
flush_interval 0.1 # flush as frequent as possible
-
+
total_limit_size 10g
-
+
flush_thread_count 16
</buffer>
-
+
auth_method private_key # default
email xxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxx@developer.gserviceaccount.com
private_key_path /home/username/.keys/00000000000000000000000000000000-privatekey.p12
# private_key_passphrase notasecret # default
@@ -253,11 +253,11 @@
* `chunk_limit_size (default 1MB)` x `queue_length_limit (default 1024)`
* `buffer/flush_thread_count`
* threads for insert api calls in parallel
  * specify this option for 100 or more records per second
  * 10 or more threads seem good for inserts over the internet
- * less threads may be good for Google Compute Engine instances (with low latency for BigQuery)
+ * fewer threads may be good for Google Compute Engine instances (with low latency for BigQuery)
* `buffer/flush_interval`
* interval between data flushes (default 0.25)
* you can set subsecond values such as `0.15` on Fluentd v0.10.42 or later
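As a sketch only, the tuning knobs discussed above map onto the `<buffer>` section roughly as follows; the numbers simply echo the defaults and the earlier example rather than being new recommendations.

```apache
<match dummy>
  @type bigquery_insert
  ...

  <buffer>
    flush_interval 0.25      # default; lower it to flush more frequently
    flush_thread_count 16    # parallel threads for insert API calls
    chunk_limit_size 1m      # default chunk size per insert request
    total_limit_size 10g     # upper bound on buffered data
  </buffer>
</match>
```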
See [Quota policy](https://cloud.google.com/bigquery/streaming-data-into-bigquery#quota)
@@ -292,11 +292,11 @@
There are four supported methods to fetch an access token for the service account.
1. Public-Private key pair of GCP(Google Cloud Platform)'s service account
2. JSON key of GCP(Google Cloud Platform)'s service account
3. Predefined access token (Compute Engine only)
-4. Google application default credentials (http://goo.gl/IUuyuX)
+4. [Google application default credentials](https://cloud.google.com/docs/authentication/application-default-credentials) / GKE Workload Identity
#### Public-Private key pair of GCP's service account
The examples above use the first one. You first need to create a service account (client ID),
download its private key and deploy the key with fluentd.
@@ -337,11 +337,11 @@
</match>
```
#### Predefined access token (Compute Engine only)
-When you run fluentd on Googlce Compute Engine instance,
+When you run fluentd on a Google Compute Engine instance,
you don't need to explicitly create a service account for fluentd.
In this authentication method, you need to add the API scope "https://www.googleapis.com/auth/bigquery" to the scope list of your
Compute Engine instance; then you can configure fluentd like this.
```apache
@@ -358,18 +358,20 @@
</match>
```
#### Application default credentials
-The Application Default Credentials provide a simple way to get authorization credentials for use in calling Google APIs, which are described in detail at http://goo.gl/IUuyuX.
+The Application Default Credentials provide a simple way to get authorization credentials for use in calling Google APIs, which are described in detail at https://cloud.google.com/docs/authentication/application-default-credentials.
+**This is the method you should choose if you want to use Workload Identity on GKE**.
+
In this authentication method, the credentials returned are determined by the environment the code is running in. Conditions are checked in the following order:
1. The environment variable `GOOGLE_APPLICATION_CREDENTIALS` is checked. If this variable is specified, it should point to a JSON key file that defines the credentials.
-2. The environment variable `GOOGLE_PRIVATE_KEY` and `GOOGLE_CLIENT_EMAIL` are checked. If this variables are specified `GOOGLE_PRIVATE_KEY` should point to `private_key`, `GOOGLE_CLIENT_EMAIL` should point to `client_email` in a JSON key.
-3. Well known path is checked. If file is exists, the file used as a JSON key file. This path is `$HOME/.config/gcloud/application_default_credentials.json`.
-4. System default path is checked. If file is exists, the file used as a JSON key file. This path is `/etc/google/auth/application_default_credentials.json`.
+2. The environment variables `GOOGLE_PRIVATE_KEY` and `GOOGLE_CLIENT_EMAIL` are checked. If these variables are specified, `GOOGLE_PRIVATE_KEY` should contain the `private_key` and `GOOGLE_CLIENT_EMAIL` should contain the `client_email` from a JSON key.
+3. The well-known path is checked. If the file exists, it is used as a JSON key file. This path is `$HOME/.config/gcloud/application_default_credentials.json`.
+4. The system default path is checked. If the file exists, it is used as a JSON key file. This path is `/etc/google/auth/application_default_credentials.json`.
5. If you are running in Google Compute Engine production, the built-in service account associated with the virtual machine instance will be used.
6. If none of these conditions is true, an error will occur.
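A minimal sketch of the `application_default` method, for example with Workload Identity on GKE where no key file is mounted and the credentials come from the bound service account; the `project`/`dataset`/`table` values are placeholders.

```apache
<match dummy>
  @type bigquery_insert

  auth_method application_default

  project yourproject_id
  dataset yourdataset_id
  table   tablename
</match>
```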
### Table id formatting
@@ -541,24 +543,24 @@
```apache
<match dummy>
@type bigquery_insert
...
-
+
schema_path /path/to/httpd.schema
</match>
```
-where /path/to/httpd.schema is a path to the JSON-encoded schema file which you used for creating the table on BigQuery. By using external schema file you are able to write full schema that does support NULLABLE/REQUIRED/REPEATED, this feature is really useful and adds full flexbility.
+where /path/to/httpd.schema is the path to the JSON-encoded schema file which you used for creating the table on BigQuery. By using an external schema file you are able to write a full schema that does support NULLABLE/REQUIRED/REPEATED; this feature is really useful and adds full flexibility.
The third method is to set `fetch_schema` to `true` to fetch the schema automatically using the BigQuery API. In this case, your fluent.conf looks like:
```apache
<match dummy>
@type bigquery_insert
...
-
+
fetch_schema true
# fetch_schema_table other_table # if you want to fetch schema from other table
</match>
```
@@ -592,7 +594,7 @@
* check row size limits
## Authors
* @tagomoris: First author, original version
-* KAIZEN platform Inc.: Maintener, Since 2014.08.19
+* KAIZEN platform Inc.: Maintainer, Since 2014.08.19
* @joker1007