:plugin: elasticsearch
:type: input
:default_codec: json

///////////////////////////////////////////
START - GENERATED VARIABLES, DO NOT EDIT!
///////////////////////////////////////////
:version: %VERSION%
:release_date: %RELEASE_DATE%
:changelog_url: %CHANGELOG_URL%
:include_path: ../../../../logstash/docs/include
///////////////////////////////////////////
END - GENERATED VARIABLES, DO NOT EDIT!
///////////////////////////////////////////

[id="plugins-{type}s-{plugin}"]
=== Elasticsearch input plugin

include::{include_path}/plugin_header.asciidoc[]

==== Description

.Compatibility Note
[NOTE]
================================================================================
Starting with Elasticsearch 5.3, there's an {ref}/modules-http.html[HTTP setting]
called `http.content_type.required`. If this option is set to `true`, and you
are using Logstash 2.4 through 5.2, you need to update the Elasticsearch input
plugin to version 4.0.2 or higher.
================================================================================

Read from an Elasticsearch cluster, based on search query results.
This is useful for replaying test logs, reindexing, etc.
You can periodically schedule ingestion using a cron syntax
(see the `schedule` setting) or run the query one time to load
data into Logstash.

Example:
[source,ruby]
    input {
      # Read all documents from Elasticsearch matching the given query
      elasticsearch {
        hosts => "localhost"
        query => '{ "query": { "match": { "statuscode": 200 } }, "sort": [ "_doc" ] }'
      }
    }

This would create an Elasticsearch query with the following format:
[source,json]
    curl 'http://localhost:9200/logstash-*/_search?&scroll=1m&size=1000' -d '{
      "query": {
        "match": {
          "statuscode": 200
        }
      },
      "sort": [ "_doc" ]
    }'

==== Scheduling

Input from this plugin can be scheduled to run periodically according to a specific
schedule. This scheduling syntax is powered by https://github.com/jmettraux/rufus-scheduler[rufus-scheduler].
The syntax is cron-like with some extensions specific to Rufus (e.g. timezone support).

Examples:

|==========================================================
| `* 5 * 1-3 *`               | will execute every minute of 5am every day of January through March.
| `0 * * * *`                 | will execute on the 0th minute of every hour every day.
| `0 6 * * * America/Chicago` | will execute at 6:00am (UTC/GMT -5) every day.
|==========================================================

Further documentation describing this syntax can be found
https://github.com/jmettraux/rufus-scheduler#parsing-cronlines-and-time-strings[here].

[id="plugins-{type}s-{plugin}-options"]
==== Elasticsearch Input Configuration Options

This plugin supports the following configuration options plus the <<plugins-{type}s-{plugin}-common-options>> described later.

[cols="<,<,<",options="header",]
|=======================================================================
|Setting |Input type|Required
| <<plugins-{type}s-{plugin}-ca_file>> |a valid filesystem path|No
| <<plugins-{type}s-{plugin}-docinfo>> |<<boolean,boolean>>|No
| <<plugins-{type}s-{plugin}-docinfo_fields>> |<<array,array>>|No
| <<plugins-{type}s-{plugin}-docinfo_target>> |<<string,string>>|No
| <<plugins-{type}s-{plugin}-hosts>> |<<array,array>>|No
| <<plugins-{type}s-{plugin}-index>> |<<string,string>>|No
| <<plugins-{type}s-{plugin}-password>> |<<password,password>>|No
| <<plugins-{type}s-{plugin}-query>> |<<string,string>>|No
| <<plugins-{type}s-{plugin}-schedule>> |<<string,string>>|No
| <<plugins-{type}s-{plugin}-scroll>> |<<string,string>>|No
| <<plugins-{type}s-{plugin}-size>> |<<number,number>>|No
| <<plugins-{type}s-{plugin}-slices>> |<<number,number>>|No
| <<plugins-{type}s-{plugin}-ssl>> |<<boolean,boolean>>|No
| <<plugins-{type}s-{plugin}-user>> |<<string,string>>|No
|=======================================================================

Also see <<plugins-{type}s-{plugin}-common-options>> for a list of options supported by all
input plugins.

&nbsp;

[id="plugins-{type}s-{plugin}-ca_file"]
===== `ca_file`

* Value type is <<path,path>>
* There is no default value for this setting.

SSL Certificate Authority file in PEM encoded format; it must also include any chain
certificates as necessary.
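
For example, a minimal configuration (the host name and certificate path are
placeholders) that uses `ca_file` together with `ssl` so the server certificate
is verified against your own CA:

[source,ruby]
    input {
      elasticsearch {
        hosts   => "es.example.org:9200"
        ssl     => true                  # communicate over HTTPS
        ca_file => "/path/to/ca.pem"     # PEM-encoded CA, including any chain certificates
      }
    }
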
[id="plugins-{type}s-{plugin}-docinfo"] ===== `docinfo` * Value type is <> * Default value is `false` If set, include Elasticsearch document information such as index, type, and the id in the event. It might be important to note, with regards to metadata, that if you're ingesting documents with the intent to re-index them (or just update them) that the `action` option in the elasticsearch output wants to know how to handle those things. It can be dynamically assigned with a field added to the metadata. Example [source, ruby] input { elasticsearch { hosts => "es.production.mysite.org" index => "mydata-2018.09.*" query => '{ "query": { "query_string": { "query": "*" } } }' size => 500 scroll => "5m" docinfo => true } } output { elasticsearch { index => "copy-of-production.%{[@metadata][_index]}" document_type => "%{[@metadata][_type]}" document_id => "%{[@metadata][_id]}" } } NOTE: Starting with Logstash 6.0, the `document_type` option is deprecated due to the https://www.elastic.co/guide/en/elasticsearch/reference/6.0/removal-of-types.html[removal of types in Logstash 6.0]. It will be removed in the next major version of Logstash. [id="plugins-{type}s-{plugin}-docinfo_fields"] ===== `docinfo_fields` * Value type is <> * Default value is `["_index", "_type", "_id"]` If document metadata storage is requested by enabling the `docinfo` option, this option lists the metadata fields to save in the current event. See {ref}/mapping-fields.html[Meta-Fields] in the Elasticsearch documentation for more information. [id="plugins-{type}s-{plugin}-docinfo_target"] ===== `docinfo_target` * Value type is <> * Default value is `"@metadata"` If document metadata storage is requested by enabling the `docinfo` option, this option names the field under which to store the metadata fields as subfields. [id="plugins-{type}s-{plugin}-hosts"] ===== `hosts` * Value type is <> * There is no default value for this setting. List of one or more Elasticsearch hosts to use for querying. Each host can be either IP, HOST, IP:port, or HOST:port. The port defaults to 9200. [id="plugins-{type}s-{plugin}-index"] ===== `index` * Value type is <> * Default value is `"logstash-*"` The index or alias to search. See https://www.elastic.co/guide/en/elasticsearch/reference/current/multi-index.html[Multi Indices documentation] in the Elasticsearch documentation for more information on how to reference multiple indices. [id="plugins-{type}s-{plugin}-password"] ===== `password` * Value type is <> * There is no default value for this setting. The password to use together with the username in the `user` option when authenticating to the Elasticsearch server. If set to an empty string authentication will be disabled. [id="plugins-{type}s-{plugin}-query"] ===== `query` * Value type is <> * Default value is `'{ "sort": [ "_doc" ] }'` The query to be executed. Read the https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html[Elasticsearch query DSL documentation] for more information. [id="plugins-{type}s-{plugin}-schedule"] ===== `schedule` * Value type is <> * There is no default value for this setting. Schedule of when to periodically run statement, in Cron format for example: "* * * * *" (execute query every minute, on the minute) There is no schedule by default. If no schedule is given, then the statement is run exactly once. 
[id="plugins-{type}s-{plugin}-scroll"] ===== `scroll` * Value type is <> * Default value is `"1m"` This parameter controls the keepalive time in seconds of the scrolling request and initiates the scrolling process. The timeout applies per round trip (i.e. between the previous scroll request, to the next). [id="plugins-{type}s-{plugin}-size"] ===== `size` * Value type is <> * Default value is `1000` This allows you to set the maximum number of hits returned per scroll. [id="plugins-{type}s-{plugin}-slices"] ===== `slices` * Value type is <> * There is no default value. * Sensible values range from 2 to about 8. In some cases, it is possible to improve overall throughput by consuming multiple distinct slices of a query simultaneously using the https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-scroll.html#sliced-scroll[Sliced Scroll API], especially if the pipeline is spending significant time waiting on Elasticsearch to provide results. If set, the `slices` parameter tells the plugin how many slices to divide the work into, and will produce events from the slices in parallel until all of them are done scrolling. NOTE: The Elasticsearch manual indicates that there can be _negative_ performance implications to both the query and the Elasticsearch cluster when a scrolling query uses more slices than shards in the index. If the `slices` parameter is left unset, the plugin will _not_ inject slice instructions into the query. [id="plugins-{type}s-{plugin}-ssl"] ===== `ssl` * Value type is <> * Default value is `false` If enabled, SSL will be used when communicating with the Elasticsearch server (i.e. HTTPS will be used instead of plain HTTP). [id="plugins-{type}s-{plugin}-user"] ===== `user` * Value type is <> * There is no default value for this setting. The username to use together with the password in the `password` option when authenticating to the Elasticsearch server. If set to an empty string authentication will be disabled. [id="plugins-{type}s-{plugin}-common-options"] include::{include_path}/{type}.asciidoc[] :default_codec!: