To mark the work for reprocessing, just use [`TransactionOutbox.unblock()`](https://www.javadoc.io/doc/com.gruelbox/transactionoutbox-core/latest/com/gruelbox/transactionoutbox/TransactionOutbox.html). Its failure count will be reset to zero and it will be reprocessed on the next call to `flush()`:

```java
transactionOutbox.unblock(entryId);
```

Or, if using a `TransactionManager` that relies on explicit context (such as a non-thread local [`JooqTransactionManager`](https://www.javadoc.io/doc/com.gruelbox/transactionoutbox-jooq/latest/com/gruelbox/transactionoutbox/JooqTransactionManager.html)):

```java
transactionOutbox.unblock(entryId, context);
```

A good approach here is to use the [`TransactionOutboxListener`](https://www.javadoc.io/doc/com.gruelbox/transactionoutbox-core/latest/com/gruelbox/transactionoutbox/TransactionOutboxListener.html) callback to post an [interactive Slack message](https://api.slack.com/legacy/interactive-messages) - this can operate as both the alert and the "button" allowing a support engineer to submit the work for reprocessing.
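
A minimal sketch of such a listener is shown below. This assumes the listener interface exposes a
`blocked(entry, cause)` callback with default no-op implementations for the other methods, and that
`TransactionOutboxEntry` exposes `getId()`; the Slack call itself is left as a placeholder:

```java
import com.gruelbox.transactionoutbox.TransactionOutboxEntry;
import com.gruelbox.transactionoutbox.TransactionOutboxListener;

/**
 * Alerting sketch: when an entry is blocked, raise a message carrying the entry id
 * so that an interactive "unblock" button can later call transactionOutbox.unblock(entryId).
 */
class BlockedEntryAlertListener implements TransactionOutboxListener {

  @Override
  public void blocked(TransactionOutboxEntry entry, Throwable cause) {
    // Replace with a real Slack API call; the entry id is what the support
    // engineer's button needs in order to resubmit the work.
    System.err.printf("Outbox entry %s blocked: %s%n", entry.getId(), cause);
  }
}
```

The listener would be registered when building the outbox, via the builder's `listener(...)` property.
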
## Advanced
### Topics and FIFO ordering
For some applications, the order in which tasks are processed is important, such as when:
- using the outbox to write to a FIFO queue or to a Kafka or AWS Kinesis topic; or
- data replication, e.g. when feeding a data warehouse or distributed cache.

In these scenarios, the default behaviour is unsuitable: tasks are usually processed in a highly parallel fashion.
Even if the volume of tasks is low, a task that fails and is retried can easily end up being processed after
some later task, even if that later task was processed hours or even days later.

To avoid problems associated with tasks being processed out of order, you can order the processing of your tasks
within a named topic. Tasks sharing a topic are processed one at a time, strictly in submission order, while
tasks on different topics remain independent. For example, suppose `red` and `blue` are submitted to one topic
and `green` and `yellow` to another (see the sketch after the following list). Then:

- `red` will always need to be processed (successfully) before `blue`;
- `green` will always need to be processed (successfully) before `yellow`; but
- `red` and `blue` can run in any sequence with respect to `green` and `yellow`.
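
A minimal sketch of how these submissions might look, assuming the schedule builder exposes an `ordered(topic)`
option (the `Service` class and the topic names here are purely illustrative):

```java
// Two illustrative topics; tasks within each topic are processed strictly in this order.
outbox.with().ordered("topic-1").schedule(Service.class).process("red");
outbox.with().ordered("topic-2").schedule(Service.class).process("green");
outbox.with().ordered("topic-1").schedule(Service.class).process("blue");
outbox.with().ordered("topic-2").schedule(Service.class).process("yellow");
```
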
This functionality was specifically designed to allow outboxed writing to Kafka topics. For maximum throughput
when writing to Kafka, it is advised that you form your outbox topic name by combining the Kafka topic and partition,
since that is the boundary where ordering is required.
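
For example, a naming scheme along these lines keeps ordering scoped to a single partition
(`outboxTopicFor` is a hypothetical helper, not part of the library):

```java
// Ordering only needs to hold within one Kafka partition, so combine the
// destination topic and partition number into the outbox topic name.
static String outboxTopicFor(String kafkaTopic, int partition) {
  return kafkaTopic + "-" + partition; // e.g. "orders-3"
}
```
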
There are a number of things to consider before using this feature:

- Tasks are not processed immediately when submitted, as they normally would be, but by background
  flushing only. This means there will be an increased delay between the source transaction being
  committed and the task being processed, depending on how your application calls `TransactionOutbox.flush()`.
- If a task fails, no further requests will be processed _in that topic_ until
  a subsequent retry allows the failing task to succeed, to preserve ordered
  processing. This means it is possible for topics to become entirely frozen in the event
  that a task fails repeatedly. For this reason, it is essential to use a
  `TransactionOutboxListener` to watch for failing tasks and investigate quickly. Note
  that other topics will be unaffected.
- `TransactionOutboxBuilder.blockAfterAttempts` is ignored for all tasks that use this
  option.
- A single topic can only be processed in single-threaded fashion, but separate topics can be processed in
  parallel. If your tasks use a small number of topics, scalability will be affected since the degree of
  parallelism will be reduced.
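
Since topic-ordered work is picked up only by background flushing, the flush interval is the main latency knob.
A minimal sketch of a polling flusher, assuming an already-built `outbox` instance:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Poll for outstanding work twice a second; this interval bounds the extra
// latency added to topic-ordered tasks.
ScheduledExecutorService flusher = Executors.newSingleThreadScheduledExecutor();
flusher.scheduleAtFixedRate(outbox::flush, 500, 500, TimeUnit.MILLISECONDS);
```
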
### The nested-outbox pattern
In practice it can be extremely hard to guarantee that an entire unit of work is idempotent and thus suitable for retry. For example, the request might be to "update a customer record" with a new address, but this might record the change to an audit history table with a fresh UUID, the current date and time and so on, which in turn triggers external changes outside the transaction. The parent customer update request may be idempotent, but the downstream effects may not be.