README.md in tobox-0.6.1 vs README.md in tobox-0.7.0

- old
+ new

@@ -19,12 +19,14 @@
   - [Inbox](#inbox)
   - [Zeitwerk](#zeitwerk)
   - [Sentry](#sentry)
   - [Datadog](#datadog)
   - [Stats](#stats)
+  - [PG Notify](#pg-notify)
 - [Advanced](#advanced)
   - [Batch Events Handling](#batch-events)
+- [Recommendations](#recommendations)
 - [Supported Rubies](#supported-rubies)
 - [Rails support](#rails-support)
 - [Why?](#why)
 - [Development](#development)
 - [Contributing](#contributing)
@@ -199,10 +201,32 @@
 ```ruby
 table :outbox
 ```
 
+### `visibility_column`
+
+The name of the database column used to mark an event as invisible while it is being handled (`:run_at` by default).
+
+The column type MUST be either a datetime (or timestamp, depending on your database) or a boolean (if your database supports it; MySQL doesn't, for example).
+
+If it's a datetime/timestamp column, it is used, along with the `visibility timeout` option, to mark the event as invisible for the given duration; this ensures that the event will eventually be picked up again after a crash, including when event handling is non-transactional (via the `:progress` plugin). If it's a boolean column, the event is marked as invisible indefinitely, so in case of a crash you'll need to recover it manually.
+
+```ruby
+visibility_column :run_at
+```
+
+### `attempts_column`
+
+The name of the database column where the number of times an event was handled and failed is stored (`:attempts` by default). If `nil`, events will be retried indefinitely.
+
+### `created_at_column`
+
+The name of the database column where the event creation timestamp is stored (`:created_at` by default).
+
+When creating the outbox table, you're **recommended** to set this column's default to `CURRENT_TIMESTAMP` (or the equivalent in your database), instead of passing the value manually in the corresponding `INSERT` statements.
+
 ### `max_attempts`
 
 Maximum number of times a failed attempt to process an event will be retried (`10` by default).
 
 ```ruby
@@ -534,11 +558,10 @@
 # assuming this bit above runs two times in two separate workers, each will be processed by tobox only once.
 ```
 
 #### Configuration
-
 ##### inbox table
 
 Defines the name of the table to be used for inbox (`:inbox` by default).
 
 ##### inbox column
@@ -587,20 +610,27 @@
 (requires the `ddtrace` gem.)
 
 Plugin for [datadog](https://github.com/DataDog/dd-trace-rb) ruby SDK. It'll generate traces for event handling.
 
 ```ruby
-# you can init the datadog config in another file to load:
-Datadog.configure do |c|
-  c.tracing.instrument :tobox
-end
-
 # tobox.rb
 plugin(:datadog)
+# or, if you want to pass options to the tracing call:
+plugin(:datadog, enabled: false)
+# or, if you want to access the datadog configuration:
+plugin(:datadog) do |c|
+  c.tracing.instrument :smth_else
+end
 ```
 
-<a id="markdown-datadog" name="stats"></a>
+`datadog` tracing functionality can also be enabled/disabled via environment variables, namely the following:
+
+* `DD_TOBOX_ENABLED`: enables/disables tobox tracing (defaults to `true`)
+* `DD_TOBOX_ANALYTICS_ENABLED`: enables/disables tobox analytics (defaults to `true`)
+* `DD_TRACE_TOBOX_ANALYTICS_SAMPLE_RATE`: sets the tobox tracing sample rate (defaults to `1.0`)
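
To make the new `visibility_column`, `attempts_column` and `created_at_column` options above concrete, here is a minimal Sequel migration sketch wired for the defaults (`:run_at`, `:attempts`, `:created_at`), with `created_at` defaulting to `CURRENT_TIMESTAMP` as recommended. Only those three columns are tied to the options documented in this diff; the `type` and `data_after` columns are illustrative placeholders for the event payload, not something this release prescribes.

```ruby
# Hypothetical migration sketch; table/payload columns are illustrative.
Sequel.migration do
  change do
    create_table(:outbox) do
      primary_key :id
      column :type, :varchar, null: false         # event type (illustrative)
      column :data_after, :json                   # event payload (illustrative)
      column :run_at, :timestamp                  # visibility_column (datetime variant)
      column :attempts, :integer, default: 0      # attempts_column
      # created_at_column: let the database fill it in on INSERT
      column :created_at, :timestamp, null: false, default: Sequel::CURRENT_TIMESTAMP
    end
  end
end
```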
+
+<a id="markdown-stats" name="stats"></a>
 ### Stats
 
 The `stats` plugin collects statistics related with the outbox table periodically, and exposes them to app code (which can then relay them to a statsD collector, or similar tool).
 
 ```ruby
@@ -660,10 +690,33 @@
 # some other server already has the lock, try later
 end
 end
 ```
+
+<a id="markdown-pg-notify" name="pg-notify"></a>
+### PG Notify
+
+The `pg_notify` plugin is a **PostgreSQL-only** plugin which uses the [LISTEN](https://www.postgresql.org/docs/current/sql-listen.html) statement to pause the workers when no work is available in the outbox table, until the producer signals otherwise, by using the [NOTIFY](https://www.postgresql.org/docs/current/sql-notify.html) statement on the channel the workers are listening to.
+
+It reduces the `SELECT ... FOR UPDATE SKIP LOCKED` statements to the bare minimum required. Without this plugin, and given enough load, these may become a source of overhead in the master replica, since they're handled as "write statements": resources must be allocated for them, their high frequency affects applying changes to (and using) indexes on the outbox table, which may make subsequent queries fall back to table scans, which hold dead tuples from used transaction xids for longer, which won't be vacuumed as quickly, which increases replication lag, which... you get the gist.
+
+```ruby
+plugin(:pg_notify)
+notifier_channel :outbox_notifications # default
+
+# that's it
+```
+
+**NOTE**: this plugin can't be used with `jruby`.
+
+#### Configuration
+
+##### `notifier_channel`
+
+Identifies the name of the channel the `LISTEN` and `NOTIFY` SQL statements will refer to (`:outbox_notifications` by default).
+
 <a id="markdown-advanced" name="advanced"></a>
 ## Advanced
 
 <a id="markdown-batch-events" name="batch-events"></a>
 ### Batch Events Handling
 
@@ -709,9 +762,107 @@
 # events identified by the batch index will be retried.
 Tobox.raise_batch_errors(batch_errors)
 end
 end
 ```
+<a id="markdown-recommendations" name="recommendations"></a>
+## Recommendations
+
+There is no free lunch: a transactional outbox has a cost, as throughput is sacrificed in order to guarantee the processing of every event. The cost has to be reasonable, however.
+
+### PostgreSQL
+
+PostgreSQL is the most popular database around, and for good reason: it's extensible, feature-rich, and quite performant for most workloads. It does have some known drawbacks, though: its implementation of MVCC, with new tuples created for UPDATEs and DELETEs, along with the requirement for indexes to point to the address of the most recent tuple, and WAL logs having to bookkeep all of that (which impacts, among other things, disk usage and replication), significantly hurts the performance of transaction management. This phenomenon is known as "write amplification".
+
+Considering the additional overhead that a transactional outbox introduces on the same database your main application uses, certain issues may escalate badly, and it'll be up to you to apply strategies to mitigate them. Here are some recommendations.
+
+### Tweak `max_connections`
+
+By default, a `tobox` consumer process will have as many database connections as there are workers (each worker polls the outbox table individually). As the system scales out to cope with more traffic, you may see that, as more workers are added, query latency (and database CPU usage) increases as well.
+
+One way to address that is to limit the number of database connections available to the workers of a `tobox` consumer process, by setting the `max_connections` configuration option to a number lower than `concurrency`, e.g. 1/3 or 1/4 of it. As a result, when no connection is available, workers will wait for one before fetching work.
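
For illustration, the recommendation above amounts to two lines in the `tobox` configuration file; the numbers below are just an example of a 1/3 ratio, not a prescription.

```ruby
# tobox.rb
concurrency 12      # 12 workers in this consumer process...
max_connections 4   # ...sharing 4 database connections (a 1/3 ratio)
```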
+
+#### Caveats
+
+This is not the main source of query latency overhead, though, and you may start seeing "pool timeout" errors as a result, so do monitor the workers' performance and apply other mitigations accordingly.
+
+### Handling events in batches
+
+By default, each worker will fetch-and-handle-then-delete events one by one. As surges happen and volume increases, the database will spend far more time and resources managing the transaction than doing the actual work you need, thereby affecting overall turnaround time. In the case of PostgreSQL, the constant DELETEs and UPDATEs may result in the query planner deciding not to use indexes to find an event, and instead falling back to a table scan, if an index is assumed to be "behind" due to a large queue of pending updates from valid transactions.
+
+A way to mitigate this is to [handle more events at once](#batch-events). It's a strategy that makes sense if the event handler APIs support batching. For instance, if all your event handler is doing is relaying to AWS SNS, you can use the [PublishBatch](https://docs.aws.amazon.com/sns/latest/api/API_PublishBatch.html) API (and adjust the batching window to the max threshold you're able to handle at once).
+
+#### Caveats
+
+As per the above, it makes sense to use this if events can be handled as a batch; if that's not the case, and the handling block iterates across the batch one by one, this will cause significant variance in single-event turnaround time (TaT) metrics, as a "slow to handle" event will delay subsequent events in the batch. Delays can also cause visibility timeouts to expire, making events visible to other handlers earlier than expected.
+
+Recovering from errors in a batch is also more convoluted (see `Tobox.raise_batch_errors`).
+
+### Disable retries and ordering
+
+The `tobox` default configuration expects the `visibility_column` to be a datetime column (`:run_at` by default), which is used as a "visibility timeout" and, along with the `attempts` column, used to retry failed events gracefully with an exponential backoff interval.
+
+As a consequence, and in order to ensure reliable performance of the worker polling query, a sorted index is recommended; in PostgreSQL, that's `CREATE INDEX ... (id, run_at DESC NULLS FIRST)`, which ensures that new events get handled before retries, and to which you can append `WHERE attempts < 10` in order to rule out events which have exhausted their attempts.
+
+This comes at the cost of increased overhead per event: when an event is produced via an `INSERT` statement, the sorted index has to be rebalanced; when it's picked up, setting the "visibility timeout" before handling it rebalances the index again; and after handling it, whether successfully or not, the index is rebalanced yet again. This increases the backlog associated with index management, which may have other consequences (described elsewhere in this section).
+
+You may observe in your systems that your handler either never fails, or, when it does, fails with the type of transient error which can be retried immediately and at marginal cost. In such situations, the default "planning for failure" exponential backoff strategy described above imposes too much weight for little gain.
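
For reference, the sorted partial index recommended above could be created with a migration along these lines; the index name is illustrative, and the `attempts < 10` predicate assumes the default `max_attempts` of 10.

```ruby
# Hypothetical migration sketch for the index described above.
Sequel.migration do
  up do
    run <<~SQL
      CREATE INDEX outbox_pending_events_idx
        ON outbox (id, run_at DESC NULLS FIRST)
        WHERE attempts < 10
    SQL
  end

  down do
    run "DROP INDEX outbox_pending_events_idx"
  end
end
```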
+
+You can improve on this default setup by setting `visibility_column` to a boolean column, with its default set to `false`:
+
+```ruby
+# in migration
+column :in_progress, :boolean, default: false
+
+# tobox
+visibility_column :in_progress
+# and, if you require unbounded retries
+attempts_column nil
+```
+
+This should improve the performance of the main polling query, by **not requiring a sorted index on the visibility column** (i.e. the primary key index is all you need), and by relying on conditional boolean expressions (instead of the more expensive datetime logical operators).
+
+#### Caveats
+
+While using a boolean column as the `visibility_column` may improve the performance of most queries and reduce the overhead of writes, event handling will not be protected against crashes, so you'll have to monitor idle events and recover them manually (by resetting the `visibility_column` to `false`).
+
+### Do not poll, get notified
+
+The database must allocate resources and bookkeep some data for each transaction. In some cases (e.g. PostgreSQL), some of that bookkeeping does not happen **until** the first write statement is processed. However, due to the use of locks via `SELECT ... FOR UPDATE`, most databases will consider the polling statement a write statement, which means that, in a `tobox` process, transaction overhead is ever present. In a high-availability configuration, transactional resources will need to be maintained and replicated to read replica nodes, which, given enough replication lag and inability to vacuum data, may snowball resource usage in the master replica, which may trigger autoscaling, causing more workers to poll the database for more work, eventually bringing the whole system down.
+
+This can be mitigated either by adjusting polling intervals (via the `wait_for_events_delay` option), or by replacing polling with asynchronous notification of workers when there's work to do. For PostgreSQL, you can use the [pg_notify](#pg-notify) plugin, which will use the PostgreSQL-only `LISTEN`/`NOTIFY` statements to that effect.
+
+#### Caveats
+
+Using `LISTEN` requires maintaining a long-lived, mostly idle, separate database connection; this approach may not be compatible with your setup, such as when you're using a connection pooler with a particular configuration. For instance, if you're using the popular [pgbouncer](https://www.pgbouncer.org/features.html), this plugin will be incompatible with transaction pooling.
+
+There is also a slight race condition between the moment a worker fails to fetch an event and the moment it starts listening to the notification channel; if an event arrives in the meantime, and the notification is broadcast before the worker starts listening, the worker won't pick up this work immediately. Given enough entropy and workers, this should be a non-issue in practice, but it remains a theoretical one.
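
To illustrate the mechanism the `pg_notify` plugin builds on (the plugin wires this into producers and workers for you), here is a rough sketch of `LISTEN`/`NOTIFY` using Sequel's PostgreSQL adapter; the table, columns and payload are illustrative assumptions, not the plugin's internals.

```ruby
require "sequel"

DB = Sequel.connect(ENV.fetch("DATABASE_URL")) # a postgres:// URL

# Producer side: a NOTIFY issued inside a transaction is only delivered
# when (and if) the transaction commits, which fits the outbox pattern.
DB.transaction do
  DB[:outbox].insert(type: "user_created", data_after: '{"id":1}')
  DB.notify("outbox_notifications")
end

# Worker side: block for up to 30 seconds waiting for a notification,
# then fall back to a regular poll of the outbox table.
DB.listen("outbox_notifications", timeout: 30) do |_channel, _pid, _payload|
  # woken up: go fetch pending events
end
```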
+
+### Unlogged tables
+
+By design (storing the event in the same transaction where the associated changes happen), a transactional outbox consumer requires the outbox table to be stored in the same database the application uses, and accesses it via the master replica. As already mentioned, this means associated bookkeeping overhead in the master replica, including WAL logs and replication lag, which, under extreme load, leads to all kinds of issues in guaranteeing data consistency, despite the outbox table being unused and irrelevant on read nodes.
+
+In such cases, you may want to set the outbox table as [unlogged](https://www.postgresql.org/docs/current/sql-createtable.html#SQL-CREATETABLE-UNLOGGED), which ensures that the associated write statements aren't part of the WAL logs, and aren't replicated either. This will massively improve the throughput of the associated traffic, while preserving most of the desired properties of a transactional outbox solution, i.e. writing events along with the associated data, making them visible to consumers only after the transaction commits, and, **in case of a clean shutdown**, ensuring that data is flushed to disk.
+
+#### Caveats
+
+The last statement leads to the biggest shortcoming of this recommendation: by choosing to unlog the outbox table, your database cannot ensure 100% consistency for its data in case of a database crash or unclean shutdown, which means you may lose events in such a scenario. And while outbox data should not be business-critical, having less than 100% event handling may be unacceptable to you.
+
+You may decide to do it temporarily, though, whenever you expect a level of traffic that justifies foregoing 100% consistency; but be aware that an `ALTER TABLE ... SET UNLOGGED` statement **rewrites the table**, so bear that in mind if you try to do this during an ongoing traffic surge or incident; the recommendation is to do it **before** the surge happens, such as on the Thursday before a Black Friday.
+
+### WAL outbox consumer (Debezium/Kafka)
+
+It takes a lot of write statements to both produce and consume from the outbox table, in the manner in which it is implemented in `tobox`. In PostgreSQL, where each write statement on a given row generates a new tuple, that amounts to at least 3 tuples per event. In the long run, and given enough volume, the health of the whole database will be limited by how quickly dead tuples are vacuumed from the outbox table.
+
+An alternative way to consume outbox events, which does not require consuming them via SQL, is to use a broker which is able to relay outbox events directly from the WAL logs. One such alternative is [Debezium](https://debezium.io/documentation/reference/stable/integrations/outbox.html), which relays them into Kafka streams.
+
+This solution means not using `tobox` anymore.
+
+#### Caveats
+
+This solution is, at least at the time of writing, limited to Kafka streams; if events are to be relayed to other alternatives (AWS SNS, RabbitMQ...), or there's more to your event handler than relaying, this solution will not work for you either.
+
+There are also several shortcomings to consider when using Kafka streams; for one, events are consumed one at a time, which will affect event handling turnaround time.
+
 <a id="markdown-supported-rubies" name="supported-rubies"></a>
 ## Supported Rubies
 
 All Rubies greater or equal to 2.7, and always latest JRuby and Truffleruby.
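
To complement the "Unlogged tables" recommendation above, here is a rough Sequel sketch of both approaches (PostgreSQL only; the table and column names are illustrative, not prescribed by this release).

```ruby
require "sequel"

DB = Sequel.connect(ENV.fetch("DATABASE_URL")) # a postgres:// URL

# Creating the outbox table as unlogged from the start:
DB.create_table(:outbox, unlogged: true) do
  primary_key :id
  column :type, :varchar, null: false
end

# Converting an existing table; note that this rewrites the table,
# so run it before a surge, not during one:
DB.run("ALTER TABLE outbox SET UNLOGGED")
```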