Track the latest finalized block header for tx submission #896
Walkthrough

Transaction pool constructors now return errors, and bootstrap aborts startup if pool creation fails. SingleTxPool periodically tracks the latest Flow block header and uses it for building transactions. BatchTxPool adapts to the SingleTxPool signature and header usage. Tests use flow.DefaultTransactionExpiry.
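The "constructor returns an error, bootstrap aborts" pattern from the walkthrough can be sketched as follows. All type and function names here are illustrative stand-ins, not the gateway's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

// blockHeader is a stand-in for the Flow SDK's flow.BlockHeader.
type blockHeader struct{ Height uint64 }

// headerClient models the single Access Node call the constructor needs.
type headerClient interface {
	GetLatestBlockHeader() (*blockHeader, error)
}

// singleTxPool caches the header fetched at construction time.
type singleTxPool struct{ latest *blockHeader }

// newSingleTxPool mirrors the PR's pattern: fetch the initial header up
// front and return an error instead of starting with a nil header.
func newSingleTxPool(c headerClient) (*singleTxPool, error) {
	h, err := c.GetLatestBlockHeader()
	if err != nil {
		return nil, fmt.Errorf("failed to fetch latest block header: %w", err)
	}
	return &singleTxPool{latest: h}, nil
}

type okClient struct{}

func (okClient) GetLatestBlockHeader() (*blockHeader, error) {
	return &blockHeader{Height: 42}, nil
}

type badClient struct{}

func (badClient) GetLatestBlockHeader() (*blockHeader, error) {
	return nil, errors.New("AN unavailable")
}

func main() {
	if pool, err := newSingleTxPool(okClient{}); err == nil {
		fmt.Println("pool ready at height", pool.latest.Height)
	}
	// Bootstrap aborts startup when pool creation fails.
	if _, err := newSingleTxPool(badClient{}); err != nil {
		fmt.Println("startup aborted:", err)
	}
}
```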
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor Admin as Bootstrap
    participant Client
    participant Req as requester (SingleTxPool)
    participant AN as Access Node
    rect rgb(240,248,255)
        note over Admin: Startup & pool creation
        Admin->>Req: NewSingleTxPool(ctx, client, ...)
        Req->>AN: GetLatestBlockHeader()
        AN-->>Req: BlockHeader
        Req-->>Admin: (*SingleTxPool, nil) or error
        alt error
            Admin-->>Admin: fail startup (wrapped error)
        end
    end
    rect rgb(245,255,250)
        note over Req: Background header updater (every 5s)
        loop every 5s
            Req->>AN: GetLatestBlockHeader()
            AN-->>Req: BlockHeader or error
            opt on error
                Req-->>Req: log and continue
            end
        end
    end
    rect rgb(255,250,240)
        note over Client, Req: Transaction submission
        Client->>Req: Add(tx)
        Req->>Req: Use cached latestBlockHeader
        Req->>AN: GetAccount(address)
        AN-->>Req: Account
        Req-->>Client: enqueue/submit tx
    end
```
```mermaid
sequenceDiagram
    autonumber
    actor Admin as Bootstrap
    participant BReq as requester (BatchTxPool)
    participant STP as SingleTxPool
    Admin->>BReq: NewBatchTxPool(ctx, ...)
    BReq->>STP: NewSingleTxPool(ctx, ...)
    alt STP error
        STP-->>BReq: error
        BReq-->>Admin: error
        Admin-->>Admin: fail startup
    else ok
        STP-->>BReq: pool
        BReq-->>Admin: (*BatchTxPool, nil)
    end
```
```go
	"github.com/onflow/flow-evm-gateway/services/requester/keystore"
)

var blockHeaderUpdateFrequency = time.Second * 5
```
We could increase the update frequency (e.g. every 2 or 3 seconds), but since we fetch the latest finalized (not sealed) block header, I think this is a good value and we're safe. Any objections?
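The polling mechanism under discussion can be sketched as a stand-alone ticker loop. The names and the shortened interval below are illustrative, not the gateway's actual code (the PR uses `time.Second * 5`):

```go
package main

import (
	"context"
	"fmt"
	"sync/atomic"
	"time"
)

// Interval shortened for the demo; the PR uses time.Second * 5.
var blockHeaderUpdateFrequency = 50 * time.Millisecond

// blockHeader stands in for the Flow SDK's flow.BlockHeader.
type blockHeader struct{ Height uint64 }

type txPool struct {
	latestBlockHeader atomic.Value // holds *blockHeader
}

// trackLatestBlockHeader polls the fetch function on a ticker. On error
// it keeps the previously cached header (the real code logs and continues).
func (p *txPool) trackLatestBlockHeader(ctx context.Context, fetch func() (*blockHeader, error)) {
	ticker := time.NewTicker(blockHeaderUpdateFrequency)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			header, err := fetch()
			if err != nil {
				continue // keep serving the cached header
			}
			p.latestBlockHeader.Store(header)
		}
	}
}

func main() {
	p := &txPool{}
	p.latestBlockHeader.Store(&blockHeader{Height: 1}) // initial header from the constructor
	var next uint64 = 1
	ctx, cancel := context.WithCancel(context.Background())
	go p.trackLatestBlockHeader(ctx, func() (*blockHeader, error) {
		next++
		return &blockHeader{Height: next}, nil
	})
	time.Sleep(200 * time.Millisecond)
	cancel()
	h := p.latestBlockHeader.Load().(*blockHeader)
	fmt.Println("header advanced:", h.Height > 1)
}
```

Readers only ever see the cached value, so a slow or failing Access Node degrades header freshness rather than blocking submissions.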
services/requester/single_tx_pool.go (Outdated)

```go
			)
			continue
		}
		t.latestBlockHeader = blockHeader
```
Do we need any synchronization mechanism for updating `t.latestBlockHeader`? The goroutine which runs `trackLatestBlockHeader` is the only place that writes to the `t.latestBlockHeader` field. There are various places that read it, and only for setting the `ReferenceBlockID` when preparing a Cadence tx for submission.
Update: I have stored the `latestBlockHeader` behind an `atomic.Value`.
Actionable comments posted: 4
Caution: Some comments are outside the diff and can't be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
services/requester/batch_tx_pool.go (2)

185-191: Data race: reading latestBlockHeader without synchronization

processPooledTransactions passes t.latestBlockHeader while SingleTxPool updates it concurrently. Use an accessor backed by atomic/mutex (see SingleTxPool fix).

Apply this diff after adding getLatestBlockHeader() to SingleTxPool:

```diff
-		t.latestBlockHeader,
+		t.getLatestBlockHeader(),
```

261-268: Data race: reading latestBlockHeader without synchronization (single submission)

Replace direct field access with the synchronized getter.

```diff
-		t.latestBlockHeader,
+		t.getLatestBlockHeader(),
```

services/requester/single_tx_pool.go (1)

114-121: Use synchronized header read when building tx

Avoid direct field access.

```diff
-		t.latestBlockHeader,
+		t.getLatestBlockHeader(),
```
🧹 Nitpick comments (2)

services/requester/single_tx_pool.go (2)

208-211: Build metadata uses passed header; ensure caller passes synchronized value

No change needed here once callers use getLatestBlockHeader(). Just a heads-up to avoid reading the field here directly.

23-24: Make update frequency configurable

Expose blockHeaderUpdateFrequency via config (with a sane default) to tune under different AN latencies.
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (4)

- bootstrap/bootstrap.go (3 hunks)
- services/requester/batch_tx_pool.go (8 hunks)
- services/requester/single_tx_pool.go (5 hunks)
- tests/helpers.go (1 hunks)
🧰 Additional context used

🧬 Code graph analysis (3)

services/requester/batch_tx_pool.go (1)
- services/requester/single_tx_pool.go (1): NewSingleTxPool (42-73)

services/requester/single_tx_pool.go (4)
- services/requester/cross-spork_client.go (1): CrossSporkClient (108-113)
- config/config.go (1): Config (43-124)
- services/requester/keystore/key_store.go (1): KeyStore (39-52)
- services/requester/tx_pool.go (1): TxPool (16-18)

bootstrap/bootstrap.go (2)
- services/requester/batch_tx_pool.go (1): NewBatchTxPool (52-92)
- services/requester/single_tx_pool.go (1): NewSingleTxPool (42-73)
🔇 Additional comments (2)

tests/helpers.go (1)

93-95: Good: use default transaction expiry

Switching to flow.DefaultTransactionExpiry removes a magic number and aligns tests with protocol defaults.

bootstrap/bootstrap.go (1)

249-275: Error propagation on tx pool creation is correct

Capturing and returning constructor errors here prevents silent startup with a broken pool. LGTM.
Actionable comments posted: 0
🧹 Nitpick comments (6)
services/requester/batch_tx_pool.go (2)

187-190: Guard against nil latestBlockHeader before batch submission

If the header is ever unavailable (e.g., transient during startup/tests), buildTransaction will dereference nil. Skip this batch tick when the header is nil to avoid noisy errors.

```diff
-	err := t.batchSubmitTransactionsForSameAddress(
+	hdr := t.getLatestBlockHeader()
+	if hdr == nil {
+		t.logger.Warn().Msg("latest Flow block header unavailable; skipping batch tick")
+		continue
+	}
+	err := t.batchSubmitTransactionsForSameAddress(
 		ctx,
-		t.getLatestBlockHeader(),
+		hdr,
 		account,
 		pooledTxs,
 	)
```

251-266: Early-check latest header in single submission path (optional)

The same nil risk exists here; you can short-circuit early to avoid counting these as "dropped" due to build failure.

```diff
 func (t *BatchTxPool) submitSingleTransaction(
@@
-	flowTx, err := t.buildTransaction(
-		t.getLatestBlockHeader(),
+	hdr := t.getLatestBlockHeader()
+	if hdr == nil {
+		return fmt.Errorf("latest Flow block header unavailable")
+	}
+	flowTx, err := t.buildTransaction(
+		hdr,
 		account,
 		script,
 		hexEncodedTx,
 		coinbaseAddress,
 	)
```

Also applies to: 263-268
services/requester/single_tx_pool.go (4)

176-188: Add a defensive nil check for latestBlockHeader

Although you store an initial header, a simple guard prevents panics in edge cases (tests, partial construction, future refactors).

```diff
 func (t *SingleTxPool) buildTransaction(
 	latestBlockHeader *flow.BlockHeader,
@@
 ) (*flow.Transaction, error) {
+	if latestBlockHeader == nil {
+		return nil, fmt.Errorf("latest Flow block header unavailable")
+	}
@@
 	flowTx := flow.NewTransaction().
 		SetScript(script).
 		SetReferenceBlockID(latestBlockHeader.ID).
```

Also applies to: 112-120, 240-245

24-25: Make ticker frequency configurable (derive from expiry window)

Hard-coding 5s works, but exposing it via config (or deriving from flow.DefaultTransactionExpiry) lets ops tune staleness vs. load.

219-238: Consider jitter/backoff on header update errors

On repeated AN errors, the fixed 5s tick can spam logs. Add small jitter/backoff after failures to smooth retries.
37-40: Optional: use a typed atomic pointer for stronger typing

atomic.Value is fine; a typed atomic pointer (sync/atomic's Pointer[T] in Go 1.19+, or go.uber.org/atomic) gives compile-time type safety and avoids the interface{} type assertion on reads.
📜 Review details

📒 Files selected for processing (2)

- services/requester/batch_tx_pool.go (8 hunks)
- services/requester/single_tx_pool.go (8 hunks)
🧰 Additional context used

🧬 Code graph analysis (2)

services/requester/batch_tx_pool.go (1)
- services/requester/single_tx_pool.go (1): NewSingleTxPool (45-76)

services/requester/single_tx_pool.go (3)
- services/requester/tx_pool.go (1): TxPool (16-18)
- services/requester/cross-spork_client.go (1): CrossSporkClient (108-113)
- config/config.go (1): Config (43-124)
⏰ Context from checks skipped due to timeout of 90000ms (1)

- GitHub Check: Test
🔇 Additional comments (2)

services/requester/single_tx_pool.go (1)

62-76: Good: initialize header atomically before starting tracker

Storing the initial header via atomic before launching the goroutine removes races. Solid.

services/requester/batch_tx_pool.go (1)

60-76: Approve error propagation in BatchTxPool constructor

Errors returned by NewBatchTxPool are handled at all usage sites (including in bootstrap/bootstrap.go), ensuring initialization failures aren't ignored.
Have we thought about subscribing to the latest finalized block instead of polling for it? That way the header would update as soon as a new block arrives through the subscription channel.
@janezpodhostnik I have thought about that indeed. However, subscriptions come with their own complexities too. See for example this error:

```
*ingestion.Engine","error":"failure in event subscription with: recoverable: disconnected:
error receiving event: rpc error: code = Unavailable desc = error reading from server: EOF
```

This is an error that we receive as part of the EVM event subscriber, and it's not trivial to get subscriptions to run indefinitely. That's why I opted for a simple polling solution.
Closes: #895
Description
For every tx submission, we used to fetch the latest block as well as the COA account, which meant 2 AN calls per tx submission.
Now a ticker tracks the latest finalized block header, which is mainly used to set the proper Reference Block ID when preparing the Cadence tx for submission.
Later on, we plan to remove the COA account fetching as well.
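The submission-side effect of the change can be sketched as follows, with stand-in types in place of the Flow SDK's (`flow.Identifier`, `flow.BlockHeader`, `flow.Transaction`); the function name is illustrative:

```go
package main

import "fmt"

// Stand-ins for flow.Identifier, flow.BlockHeader and flow.Transaction
// from github.com/onflow/flow-go-sdk.
type identifier string

type blockHeader struct{ ID identifier }

type transaction struct{ ReferenceBlockID identifier }

// buildTransaction sets the reference block from the cached header, so
// no Access Node round trip is needed at submission time.
func buildTransaction(latest *blockHeader) *transaction {
	return &transaction{ReferenceBlockID: latest.ID}
}

func main() {
	cached := &blockHeader{ID: "abc123"} // refreshed every 5s by the ticker
	tx := buildTransaction(cached)
	fmt.Println("reference block:", tx.ReferenceBlockID)
}
```

Because the header is finalized (not merely sealed), a value up to a few seconds stale still falls well inside the transaction expiry window.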