README.rdoc in fluent-plugin-s3-0.4.3 vs README.rdoc in fluent-plugin-s3-0.5.0
- old
+ new
@@ -19,11 +19,11 @@
type s3
aws_key_id YOUR_AWS_KEY_ID
aws_sec_key YOUR_AWS_SECRET/KEY
s3_bucket YOUR_S3_BUCKET_NAME
- s3_endpoint s3-ap-northeast-1.amazonaws.com
+ s3_region ap-northeast-1
s3_object_key_format %{path}%{time_slice}_%{index}.%{file_extension}
path logs/
buffer_path /var/log/fluent/s3
time_slice_format %Y%m%d-%H
@@ -37,12 +37,10 @@
[s3_bucket (required)] S3 bucket name.
[s3_region] s3 region name. For example, US West (Oregon) Region is "us-west-2". The full list of regions is available here: http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region. We recommend using `s3_region` instead of `s3_endpoint`.
-[s3_endpoint] s3 endpoint name. For example, US West (Oregon) Region is "s3-us-west-2.amazonaws.com". The full list of endpoints are available here. > http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
-
[s3_object_key_format] The format of S3 object keys. You can use several built-in variables:
- %{path}
- %{time_slice}
- %{index}
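
For illustration (a sketch assuming the sample configuration above: path logs/, time_slice_format %Y%m%d-%H, and the default gzip store), the default s3_object_key_format expands roughly as:

  %{path}           -> logs/
  %{time_slice}     -> 20140101-12
  %{index}          -> 0  (incremented when several chunks share a time slice)
  %{file_extension} -> gz

  resulting object key: logs/20140101-12_0.gz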
@@ -86,10 +84,12 @@
- gzip (default)
- json
- text
- lzo (requires the lzop command)
+See 'Use your compression algorithm' section for adding another format.
+
[format] The format of each line in the S3 object. Supported formats are "out_file", "json", "ltsv" and "single_value".
- out_file (default).
time\ttag\t{..json1..}
@@ -104,11 +104,11 @@
In this format, "time" and "tag" are omitted.
But you can add this information to the record by setting the "include_tag_key" / "tag_key" and "include_time_key" / "time_key" options.
If you set following configuration in S3 output:
- format_json true
+ format json
include_time_key true
time_key log_time # default is time
then the record has log_time field.
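
For example (a sketch based on the options above), with this configuration:

  format json
  include_time_key true
  time_key log_time

a record like {"message":"hello"} is stored as a line such as:

  {"message":"hello","log_time":"2014-01-01T12:00:00Z"}

(the exact log_time value depends on the time_format option).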
@@ -162,9 +162,40 @@
Note that the bucket must already exist and *auto_create_bucket* has no effect in this case.
Refer to the {AWS documentation}[http://docs.aws.amazon.com/IAM/latest/UserGuide/ExampleIAMPolicies.html] for example policies.
Using {IAM roles}[http://docs.aws.amazon.com/IAM/latest/UserGuide/WorkingWithRoles.html] with a properly configured IAM policy is preferred over embedding access keys on EC2 instances.
+
+== Use your compression algorithm
+
+The s3 plugin has a pluggable compression mechanism, like Fluentd's input / output plugins.
+If you set 'store_as xxx', the s3 plugin loads `fluent/plugin/s3_compressor_xxx.rb`.
+You can define your own compressor as a subclass of the 'S3Output::Compressor' class. The Compressor API is:
+
+ module Fluent
+ class S3Output
+ class XXXCompressor < Compressor
+ S3Output.register_compressor('xxx', self)
+
+          # Used as the file extension
+ def ext
+ 'xxx'
+ end
+
+          # Used as the content type of the uploaded object
+ def content_type
+ 'application/x-xxx'
+ end
+
+          # chunk is the buffer chunk; tmp is the destination file for the upload
+ def compress(chunk, tmp)
+ # call command or something
+ end
+ end
+ end
+ end
+
+See the bundled Compressor classes for more detail.
+
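As a concrete illustration, the compress step of a gzip-style compressor could look like the standalone sketch below. This is not part of the plugin: FakeChunk is a stand-in for a Fluentd buffer chunk (which exposes write_to(io)), and the use of zlib here is an assumption for the example.

```ruby
require 'zlib'
require 'tempfile'

# Stand-in for a Fluentd buffer chunk: it can write its bytes to an IO.
class FakeChunk
  def initialize(data)
    @data = data
  end

  def write_to(io)
    io.write(@data)
  end
end

# What a gzip-style compress(chunk, tmp) does: stream the chunk's bytes
# through a gzip writer into the destination file before upload.
def compress(chunk, tmp)
  writer = Zlib::GzipWriter.new(tmp)
  chunk.write_to(writer)
  writer.finish
end

tmp = Tempfile.new('s3-upload')
compress(FakeChunk.new("log line 1\nlog line 2\n"), tmp)

# Verify the round trip: decompressing the file yields the original bytes.
tmp.rewind
DECOMPRESSED = Zlib::GzipReader.new(tmp).read
```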
== Website, license, et al.
Web site:: http://fluentd.org/
Documents:: http://docs.fluentd.org/