docs/index.asciidoc in logstash-input-elasticsearch-4.7.0 vs docs/index.asciidoc in logstash-input-elasticsearch-4.7.1
- old
+ new
@@ -182,11 +182,24 @@
document_type => "%{[@metadata][_type]}"
document_id => "%{[@metadata][_id]}"
}
}
+If set, you can use metadata information in the <<plugins-{type}s-{plugin}-add_field>> common option.
+Example:
+[source, ruby]
+    input {
+      elasticsearch {
+        docinfo => true
+        add_field => {
+          identifier => "%{[@metadata][_index]}:%{[@metadata][_type]}:%{[@metadata][_id]}"
+        }
+      }
+    }
+
+
[id="plugins-{type}s-{plugin}-docinfo_fields"]
===== `docinfo_fields`
* Value type is <<array,array>>
* Default value is `["_index", "_type", "_id"]`
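For instance, to keep only the index and document id in the metadata, the option can be narrowed like this (a sketch; the field list shown is just one possible choice):

[source, ruby]
    input {
      elasticsearch {
        docinfo => true
        docinfo_fields => ["_index", "_id"]
      }
    }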
@@ -294,11 +307,11 @@
* Value type is <<number,number>>
* There is no default value.
* Sensible values range from 2 to about 8.
In some cases, it is possible to improve overall throughput by consuming multiple
-distinct slices of a query simultaneously using the
-https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-body.html#sliced-scroll[Sliced Scroll API],
+distinct slices of a query simultaneously using
+https://www.elastic.co/guide/en/elasticsearch/reference/current/paginate-search-results.html#slice-scroll[sliced scrolls],
especially if the pipeline is spending significant time waiting on Elasticsearch
to provide results.
If set, the `slices` parameter tells the plugin how many slices to divide the work
into, and will produce events from the slices in parallel until all of them are done
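As a sketch, a pipeline that divides the scroll into four parallel slices might look like the following (the `hosts`, `index`, and `query` values here are illustrative placeholders):

[source, ruby]
    input {
      elasticsearch {
        hosts => "localhost:9200"
        index => "my-index"
        query => '{ "query": { "match_all": {} } }'
        slices => 4
      }
    }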