LogPoller: process blocks in batches #1482
Merged: dhaidashenko merged 5 commits into develop, Mar 31, 2026
Conversation
378f6ad to 7e83439
7e83439 to d2695a5
jadepark-dev approved these changes Mar 30, 2026
archseer approved these changes Mar 30, 2026
ogtownsend approved these changes Mar 31, 2026


Motivation
During prolonged downtime or RPC issues, the Chainlink Node may lag behind Solana's latest finalized block; this lag can reach up to 172,800 blocks.
LogPoller searches for blocks that contain events defined by Filters and schedules their fetching.
Block fetching is done by async workers. If the operation fails, the job is retried after a delay; it competes with the rest of the pool for a worker.
Fetched blocks are added to another queue, which sorts them and passes them to the events processor. It's guaranteed that the event processor will observe all blocks with subscribed events, in increasing order.
Thus, if we are scheduled to fetch 1k blocks and we fail to fetch the first block for a long time, the other 999 blocks will sit in memory while they wait to be passed on in order.
Our current assumption is that unreliable RPCs and large lag cause OOM for CL Nodes.
As we do not yet have access to profiling data for LOOP plugins (WIP), we cannot confirm this assumption.
Changes
core ref: 78e0a3fb76940b1aa5d700545e29925d5ad90610