README.md in phobos_db_checkpoint-3.2.0 vs README.md in phobos_db_checkpoint-3.3.0
- old
+ new
@@ -12,10 +12,11 @@
1. [Installation](#installation)
1. [Usage](#usage)
1. [Setup](#setup)
1. [Handler](#handler)
+ 1. [Payload](#payload)
1. [Failures](#failures)
1. [Accessing the events](#accessing-the-events)
1. [Events API](#events-api)
1. [Instrumentation](#instrumentation)
1. [Upgrading](#upgrading)
@@ -118,9 +119,30 @@
```
If your handler returns anything other than an __ack__, the event won't be saved to the database.
Note that the `PhobosDBCheckpoint::Handler` will automatically skip already handled events (i.e. duplicate Kafka messages).
+
+#### <a name="payload"></a> Payload
+PhobosDBCheckpoint assumes that the payload received from Phobos is JSON. If your payload arrives in any other format, for example Avro binary, you need to decode it to JSON first.
+
+To achieve this you can override the `#before_consume` method of the handler:
+
+```ruby
+class MyHandler
+ include PhobosDBCheckpoint::Handler
+
+  # <-- set up @avro (an Avro decoder) beforehand, e.g. in the initializer
+
+ def before_consume(payload)
+ @avro.decode(payload)
+ end
+
+ def consume(payload, metadata)
+ # <-- consume your stuff with the decoded payload
+ end
+end
+```
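
To see the hook in isolation, here is a self-contained sketch of the same pattern. `HandlerStub` and `FakeAvroDecoder` are stand-ins invented for this example (they are not part of PhobosDBCheckpoint); in a real handler `@avro` would be an actual Avro decoder, such as one built with the `avro_turf` gem.

```ruby
require 'json'

# Minimal stand-in for PhobosDBCheckpoint::Handler so this sketch runs
# without the gem: it only wires before_consume into the consume call.
module HandlerStub
  def around_consume(payload, metadata)
    consume(before_consume(payload), metadata)
  end

  def before_consume(payload)
    payload # default: pass the payload through untouched
  end
end

# Hypothetical decoder; a real one would turn Avro binary into a Hash.
# Here it parses JSON to stay dependency-free.
class FakeAvroDecoder
  def decode(payload)
    JSON.parse(payload)
  end
end

class MyHandler
  include HandlerStub

  def initialize
    @avro = FakeAvroDecoder.new
  end

  def before_consume(payload)
    @avro.decode(payload)
  end

  def consume(payload, metadata)
    payload['event'] # payload is already a decoded Hash at this point
  end
end

MyHandler.new.around_consume('{"event":"signup"}', {})
# => "signup"
```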
#### <a name="failures"></a> Failures
If your handler fails while consuming an event, the event will be processed again until it is acknowledged or skipped. The default behavior of `Phobos` is to back off but keep retrying the same event forever, in order to guarantee that messages are processed in the correct order. However, this blocking process could go on indefinitely, so to help you deal with it PhobosDBCheckpoint can (on an opt-in basis) mark events as permanently failed after a configurable number of attempts.