:integration: rabbitmq
:plugin: rabbitmq
:type: input
:default_codec: json

///////////////////////////////////////////
START - GENERATED VARIABLES, DO NOT EDIT!
///////////////////////////////////////////
:version: %VERSION%
:release_date: %RELEASE_DATE%
:changelog_url: %CHANGELOG_URL%
:include_path: ../../../../logstash/docs/include
///////////////////////////////////////////
END - GENERATED VARIABLES, DO NOT EDIT!
///////////////////////////////////////////

[id="plugins-{type}s-{plugin}"]
=== Rabbitmq input plugin

include::{include_path}/plugin_header-integration.asciidoc[]

==== Description

Pull events from a http://www.rabbitmq.com/[RabbitMQ] queue.

By default, this plugin creates an entirely transient queue and listens for all messages.
If you need durability or any other advanced settings, please set the appropriate options.

This plugin uses the http://rubymarchhare.info/[March Hare] library for
interacting with the RabbitMQ server. Most configuration options map directly
to standard RabbitMQ and AMQP concepts. The
https://www.rabbitmq.com/amqp-0-9-1-reference.html[AMQP 0-9-1 reference guide]
and other parts of the RabbitMQ documentation are useful for deeper
understanding.

The properties of messages received will be stored in the
`[@metadata][rabbitmq_properties]` field if the
<<plugins-{type}s-{plugin}-metadata_enabled>> setting is enabled.
Note that storing metadata may degrade performance.
The following properties may be available (in most cases dependent on whether
they were set by the sender):

* app-id
* cluster-id
* consumer-tag
* content-encoding
* content-type
* correlation-id
* delivery-mode
* exchange
* expiration
* message-id
* priority
* redeliver
* reply-to
* routing-key
* timestamp
* type
* user-id

For example, to get the RabbitMQ message's timestamp property into the
Logstash event's `@timestamp` field, use the date filter to parse the
`[@metadata][rabbitmq_properties][timestamp]` field:

[source,ruby]
----
filter {
  if [@metadata][rabbitmq_properties][timestamp] {
    date {
      match => ["[@metadata][rabbitmq_properties][timestamp]", "UNIX"]
    }
  }
}
----

Additionally, any message headers will be saved in the
`[@metadata][rabbitmq_headers]` field.

[id="plugins-{type}s-{plugin}-options"]
==== Rabbitmq Input Configuration Options

This plugin supports the following configuration options plus the <<plugins-{type}s-{plugin}-common-options>> described later.

[cols="<,<,<",options="header",]
|=======================================================================
|Setting |Input type|Required
| <<plugins-{type}s-{plugin}-ack>> |<<boolean,boolean>>|No
| <<plugins-{type}s-{plugin}-arguments>> |<<array,array>>|No
| <<plugins-{type}s-{plugin}-auto_delete>> |<<boolean,boolean>>|No
| <<plugins-{type}s-{plugin}-automatic_recovery>> |<<boolean,boolean>>|No
| <<plugins-{type}s-{plugin}-connect_retry_interval>> |<<number,number>>|No
| <<plugins-{type}s-{plugin}-connection_timeout>> |<<number,number>>|No
| <<plugins-{type}s-{plugin}-durable>> |<<boolean,boolean>>|No
| <<plugins-{type}s-{plugin}-exchange>> |<<string,string>>|No
| <<plugins-{type}s-{plugin}-exchange_type>> |<<string,string>>|No
| <<plugins-{type}s-{plugin}-exclusive>> |<<boolean,boolean>>|No
| <<plugins-{type}s-{plugin}-heartbeat>> |<<number,number>>|No
| <<plugins-{type}s-{plugin}-host>> |<<string,string>>|Yes
| <<plugins-{type}s-{plugin}-key>> |<<string,string>>|No
| <<plugins-{type}s-{plugin}-metadata_enabled>> |<<string,string>>|No
| <<plugins-{type}s-{plugin}-passive>> |<<boolean,boolean>>|No
| <<plugins-{type}s-{plugin}-password>> |<<password,password>>|No
| <<plugins-{type}s-{plugin}-port>> |<<number,number>>|No
| <<plugins-{type}s-{plugin}-prefetch_count>> |<<number,number>>|No
| <<plugins-{type}s-{plugin}-queue>> |<<string,string>>|No
| <<plugins-{type}s-{plugin}-ssl>> |<<boolean,boolean>>|No
| <<plugins-{type}s-{plugin}-ssl_certificate_password>> |<<string,string>>|No
| <<plugins-{type}s-{plugin}-ssl_certificate_path>> |a valid filesystem path|No
| <<plugins-{type}s-{plugin}-ssl_version>> |<<string,string>>|No
| <<plugins-{type}s-{plugin}-subscription_retry_interval_seconds>> |<<number,number>>|Yes
| <<plugins-{type}s-{plugin}-threads>> |<<number,number>>|No
| <<plugins-{type}s-{plugin}-user>> |<<string,string>>|No
| <<plugins-{type}s-{plugin}-vhost>> |<<string,string>>|No
|=======================================================================

Also see <<plugins-{type}s-{plugin}-common-options>> for a list of options supported by all
input plugins.

&nbsp;

[id="plugins-{type}s-{plugin}-ack"]
===== `ack`

* Value type is <<boolean,boolean>>
* Default value is `true`

Enable message acknowledgements. With acknowledgements, messages fetched by
Logstash but not yet sent into the Logstash pipeline will be requeued by the
server if Logstash shuts down. Acknowledgements will, however, hurt message
throughput.

This will only send an ack back every `prefetch_count` messages.
Working in batches provides a performance boost here.
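For illustration, here is a minimal sketch of an input block that tunes acknowledgements and batch size. The host and queue names are placeholders, not defaults of this plugin:

[source,ruby]
----
input {
  rabbitmq {
    host => "rabbitmq.example.com"  # placeholder broker address
    queue => "logstash"             # placeholder queue name to consume from
    ack => true                     # requeue unprocessed messages if Logstash stops
    prefetch_count => 256           # an ack is sent back once per prefetch_count messages
  }
}
----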
[id="plugins-{type}s-{plugin}-arguments"] ===== `arguments` * Value type is <> * Default value is `{}` Optional queue arguments as an array. Relevant RabbitMQ doc guides: * https://www.rabbitmq.com/queues.html#optional-arguments[Optional queue arguments] * https://www.rabbitmq.com/parameters.html#policies[Policies] * https://www.rabbitmq.com/quorum-queues.html[Quorum Queues] [id="plugins-{type}s-{plugin}-auto_delete"] ===== `auto_delete` * Value type is <> * Default value is `false` Should the queue be deleted on the broker when the last consumer disconnects? Set this option to `false` if you want the queue to remain on the broker, queueing up messages until a consumer comes along to consume them. [id="plugins-{type}s-{plugin}-automatic_recovery"] ===== `automatic_recovery` * Value type is <> * Default value is `true` Set this to https://www.rabbitmq.com/connections.html#automatic-recovery[automatically recover] from a broken connection. You almost certainly don't want to override this! [id="plugins-{type}s-{plugin}-connect_retry_interval"] ===== `connect_retry_interval` * Value type is <> * Default value is `1` Time in seconds to wait before retrying a connection [id="plugins-{type}s-{plugin}-connection_timeout"] ===== `connection_timeout` * Value type is <> * There is no default value for this setting. The default connection timeout in milliseconds. If not specified the timeout is infinite. [id="plugins-{type}s-{plugin}-durable"] ===== `durable` * Value type is <> * Default value is `false` Is this queue durable? (aka; Should it survive a broker restart?) If consuming directly from a queue you must set this value to match the existing queue setting, otherwise the connection will fail due to an inequivalent arg error. [id="plugins-{type}s-{plugin}-exchange"] ===== `exchange` * Value type is <> * There is no default value for this setting. The name of the exchange to bind the queue to. Specify `exchange_type` as well to declare the exchange if it does not exist [id="plugins-{type}s-{plugin}-exchange_type"] ===== `exchange_type` * Value type is <> * There is no default value for this setting. The type of the exchange to bind to. Specifying this will cause this plugin to declare the exchange if it does not exist. [id="plugins-{type}s-{plugin}-exclusive"] ===== `exclusive` * Value type is <> * Default value is `false` Is the queue exclusive? Exclusive queues can only be used by the connection that declared them and will be deleted when it is closed (e.g. due to a Logstash restart). [id="plugins-{type}s-{plugin}-heartbeat"] ===== `heartbeat` * Value type is <> * There is no default value for this setting. https://www.rabbitmq.com/heartbeats.html[Heartbeat timeout] in seconds. If unspecified then heartbeat timeout of 60 seconds will be used. [id="plugins-{type}s-{plugin}-host"] ===== `host` * This is a required setting. * Value type is <> * There is no default value for this setting. Common functionality for the rabbitmq input/output RabbitMQ server address(es) host can either be a single host, or a list of hosts i.e. host => "localhost" or host => ["host01", "host02] if multiple hosts are provided on the initial connection and any subsequent recovery attempts of the hosts is chosen at random and connected to. Note that only one host connection is active at a time. [id="plugins-{type}s-{plugin}-key"] ===== `key` * Value type is <> * Default value is `"logstash"` The routing key to use when binding a queue to the exchange. This is only relevant for direct or topic exchanges. 
[id="plugins-{type}s-{plugin}-metadata_enabled"]
===== `metadata_enabled`

* Value type is <<string,string>>
* Accepted values are:
** `none`: no metadata is added
** `basic`: headers and properties are added
** `extended`: headers, properties, and raw payload are added
** `false`: deprecated alias for `none`
** `true`: deprecated alias for `basic`
* Default value is `none`

Enable metadata about the RabbitMQ topic to be added to the event's `@metadata` field,
for availability during pipeline processing. In general, most output plugins and codecs
do not include `@metadata` fields. This may impact memory usage and performance.

[id="plugins-{type}s-{plugin}-metadata_locations"]
====== Metadata mapping

|=====
| category    | location                           | type
| headers     | `[@metadata][rabbitmq_headers]`    | key/value map
| properties  | `[@metadata][rabbitmq_properties]` | key/value map
| raw payload | `[@metadata][rabbitmq_payload]`    | byte sequence
|=====

[id="plugins-{type}s-{plugin}-passive"]
===== `passive`

* Value type is <<boolean,boolean>>
* Default value is `false`

If `true`, the queue will be passively declared, meaning it must already exist on the server.
To have Logstash create the queue if necessary, leave this option as `false`.
If actively declaring a queue that already exists, the queue options for this plugin
(`durable`, etc.) must match those of the existing queue.

[id="plugins-{type}s-{plugin}-password"]
===== `password`

* Value type is <<password,password>>
* Default value is `"guest"`

RabbitMQ password.

[id="plugins-{type}s-{plugin}-port"]
===== `port`

* Value type is <<number,number>>
* Default value is `5672`

RabbitMQ port to connect on.

[id="plugins-{type}s-{plugin}-prefetch_count"]
===== `prefetch_count`

* Value type is <<number,number>>
* Default value is `256`

Prefetch count. If acknowledgements are enabled with the `ack` option,
specifies the number of outstanding unacknowledged messages allowed.

[id="plugins-{type}s-{plugin}-queue"]
===== `queue`

* Value type is <<string,string>>
* Default value is `""`

The name of the queue Logstash will consume events from. If left empty, a
transient queue with a randomly chosen name will be created.

The default codec for this plugin is JSON. You can override this to suit
your particular needs, however.

[id="plugins-{type}s-{plugin}-ssl"]
===== `ssl`

* Value type is <<boolean,boolean>>
* There is no default value for this setting.

Enable or disable SSL.
Note that by default remote certificate verification is off.
Specify `ssl_certificate_path` and `ssl_certificate_password` if you need
certificate verification.

[id="plugins-{type}s-{plugin}-ssl_certificate_password"]
===== `ssl_certificate_password`

* Value type is <<string,string>>
* There is no default value for this setting.

Password for the encrypted PKCS12 (.p12) certificate file specified in `ssl_certificate_path`.

[id="plugins-{type}s-{plugin}-ssl_certificate_path"]
===== `ssl_certificate_path`

* Value type is <<path,path>>
* There is no default value for this setting.

Path to an SSL certificate in PKCS12 (.p12) format used for verifying the remote host.

[id="plugins-{type}s-{plugin}-ssl_version"]
===== `ssl_version`

* Value type is <<string,string>>
* Default value is `"TLSv1.2"`

Version of the SSL protocol to use.
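Putting the TLS-related options together, a hedged sketch might look like the following. The port, certificate path, and password source are assumptions to adapt to your broker, not defaults of this plugin:

[source,ruby]
----
input {
  rabbitmq {
    host => "rabbitmq.example.com"                      # placeholder broker address
    port => 5671                                        # RabbitMQ's conventional TLS port; adjust as needed
    ssl  => true
    ssl_certificate_path => "/etc/logstash/client.p12"  # hypothetical PKCS12 certificate path
    ssl_certificate_password => "${P12_PASSWORD}"       # e.g. resolved from an environment variable
    ssl_version => "TLSv1.2"
  }
}
----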
[id="plugins-{type}s-{plugin}-subscription_retry_interval_seconds"]
===== `subscription_retry_interval_seconds`

* This is a required setting.
* Value type is <<number,number>>
* Default value is `5`

Amount of time in seconds to wait after a failed subscription request before retrying.
Subscriptions can fail if the server goes away and then comes back.

[id="plugins-{type}s-{plugin}-threads"]
===== `threads`

* Value type is <<number,number>>
* Default value is `1`

[id="plugins-{type}s-{plugin}-user"]
===== `user`

* Value type is <<string,string>>
* Default value is `"guest"`

RabbitMQ username.

[id="plugins-{type}s-{plugin}-vhost"]
===== `vhost`

* Value type is <<string,string>>
* Default value is `"/"`

The vhost (virtual host) to use. If you don't know what this is, leave the default.
With the exception of the default vhost (`/`), names of vhosts should not begin
with a forward slash.

[id="plugins-{type}s-{plugin}-common-options"]
include::{include_path}/{type}.asciidoc[]

:default_codec!: