Notice: Consumer Prefetching Has Been Removed
The Consumer library has been updated to remove prefetching, a performance optimization for consumers that process large backlogs of messages. While the removal significantly simplifies the library's internals, it could degrade performance for consumers working through such backlogs.
Previously, each consumer ran two separate threads: one thread retrieved messages, while the other thread dispatched messages to handlers. This allowed message retrieval and message handling to happen in parallel. The technique is known as “prefetching,” and it significantly reduced the time it took for consumers to process large backlogs of messages. The parallel processing provided negligible benefit for consumers that had already caught up to the category, which is the more typical case.
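For illustration, here is a minimal sketch of the general prefetching pattern, not the Consumer library's actual implementation. The bounded queue, the batch size, and the `get_batch` and `handle` methods are hypothetical stand-ins for message retrieval and handler dispatch.

```ruby
# Sketch of the prefetching pattern: one thread retrieves batches of messages
# while a second thread dispatches them to a handler, so the two activities
# overlap. The data source and the handler are stand-ins.

# Stand-in for retrieving the next batch of messages from a category
def get_batch(position)
  position < 30 ? (position...position + 10).to_a : []
end

# Stand-in for dispatching a message to its handler
def handle(message)
  puts "Handled message #{message}"
end

queue = Thread::SizedQueue.new(10)  # bounded buffer between the two threads
position = 0

retrieval = Thread.new do
  loop do
    batch = get_batch(position)
    break if batch.empty?           # a real consumer would keep polling here
    batch.each { |message| queue.push(message) }  # blocks while the buffer is full
    position += batch.length
  end
  queue.push(nil)                   # signal the dispatching thread to stop
end

dispatching = Thread.new do
  while (message = queue.pop)       # blocks until a message is available
    handle(message)
  end
end

[retrieval, dispatching].each(&:join)
```

With prefetching removed, retrieval and handling happen sequentially on a single thread, which is why only consumers with large backlogs are expected to notice a difference.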
The parallel processing also caused problems for consumers that subscribed to low-traffic categories, where a significant amount of time elapses between new messages. The dispatching thread maintained its own connection to MessageDB, separate from the consumer's subscription. While the subscription's connection was exercised regularly to poll for new messages, the dispatching thread's connection remained idle until there were new messages to handle. During that idle time, the dispatching thread's connection would often reach network timeout limits and expire, which would then surface as connection errors whenever the consumer's handlers accessed MessageDB, generally by reading from an entity store or writing a message.
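Purely as an illustration of the kind of session resilience discussed below, a guard could reconnect and retry once when a query fails with a connection error. This is not the Consumer library's code: the class, the connection settings, and the query in the usage comment are hypothetical, using the pg gem directly.

```ruby
require "pg"

# Hypothetical guard that re-establishes an expired connection and retries a
# query once. Not the Consumer library's session implementation.
class ResilientConnection
  def initialize(settings)
    @settings = settings
    @connection = PG.connect(@settings)
  end

  def exec(sql, params = [])
    @connection.exec_params(sql, params)
  rescue PG::ConnectionBad, PG::UnableToSend
    @connection = PG.connect(@settings)   # reconnect after an idle timeout
    @connection.exec_params(sql, params)  # retry the query once
  end
end

# Hypothetical usage: handler code would go through the guard rather than a raw
# connection that may have expired while idle.
#
# connection = ResilientConnection.new(dbname: "message_store")
# connection.exec("SELECT * FROM get_last_stream_message($1)", ["someStream-123"])
```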
In the future, MessageDB sessions may be elaborated to be resilient against connection failures and timeouts. This would make it possible to restore the parallel reading and handling of messages. If you experience any issues, please reach out to us on the Eventide Slack.