Add bwatch as a standalone plugin #9098
Open

sangbida wants to merge 30 commits into ElementsProject:master from
Conversation
Like bitcoin_txid, they are special backwards-printed snowflakes. Thanks Obama! Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
These helper functions decode hex strings from JSON into big-endian 32-bit and 64-bit values. They are useful for parsing datastore entries into a common representation, so that bwatch can consume them in future commits.
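A minimal sketch of what such a helper looks like. The real helpers in the PR operate on CLN's JSON tokens and tal buffers; this standalone version takes a plain C string and is purely illustrative:

```c
#include <stdint.h>
#include <string.h>

/* Illustrative sketch (not the PR's actual helper): decode an
 * 8-character hex string into a big-endian 32-bit value. Because the
 * hex string is written most-significant-nibble first, accumulating
 * nibble by nibble yields the big-endian value directly. */
static int hex_to_be32(const char *hex, uint32_t *out)
{
	uint32_t v = 0;
	if (strlen(hex) != 8)
		return 0;
	for (int i = 0; i < 8; i++) {
		char c = hex[i];
		uint32_t nibble;
		if (c >= '0' && c <= '9')
			nibble = c - '0';
		else if (c >= 'a' && c <= 'f')
			nibble = c - 'a' + 10;
		else if (c >= 'A' && c <= 'F')
			nibble = c - 'A' + 10;
		else
			return 0;	/* reject non-hex characters */
		v = (v << 4) | nibble;
	}
	*out = v;
	return 1;
}
```

The 64-bit variant is the same loop over 16 characters.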
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
bwatch is an async block scanner that consumes blocks from bcli or any other bitcoind interface and communicates with lightningd by sending it updates. In this commit we only introduce the plugin and some files that we will populate in future commits. Changelog-Added: bwatch plugin, to handle block processing outside of lightningd. Not yet hooked up to lightningd.
This wire file primarily contains data structures that are used to serialize data for storage in the datastore. We have two types of datastores for bwatch: the block history datastore and the watch datastore. For block history we store the height, the hash, and the hash of the previous block. For watches we have 4 types of watches - utxo, scriptpubkey, scid and blockdepth watches - each of which has its unique info stored in the datastore. The common info for all watches includes the start block and the list of owners interested in watching.
We have 4 types of watches: utxo (outpoint), scriptpubkey, scid and blockdepth. Each gets its own hash table with a key shape that makes lookups direct.
bwatch keeps a tail of recent blocks (height, hash, prev hash) so it can detect and unwind reorgs without re-fetching from bitcoind. The datastore key for each block is zero-padded to 10 digits so listdatastore returns blocks in ascending height order. On startup we replay the stored history and resume from the most recent block.
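The zero-padding matters because datastore listing is lexicographic: without padding, "10" would sort before "9". A sketch of the idea (the key prefix here is an assumption, not necessarily the plugin's actual key layout):

```c
#include <stdio.h>

/* Illustrative block-history key formatter: padding the height to 10
 * digits makes lexicographic datastore order coincide with ascending
 * height order. The "bwatch/blocks" prefix is hypothetical. */
static void block_key(char *buf, size_t len, unsigned int height)
{
	snprintf(buf, len, "bwatch/blocks/%010u", height);
}
```

With padding, the key for height 9 ("…0000000009") sorts before the key for height 10 ("…0000000010"), which an unpadded "9" vs "10" would not.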
Each watch (and its set of owners) is serialized through the wire format from the earlier commit and stored in the datastore. On startup we walk each type's prefix and reload the watches into their respective hash tables, so a restart resumes watching the same things without anyone re-registering.
bwatch_add_watch and bwatch_del_watch are the high-level entry points the RPCs (added in a later commit) use. Adding a watch that already exists merges the owner list and lowers start_block if the new request needs to scan further back, so a re-registering daemon (e.g. onchaind on restart) doesn't lose missed events. Removing a watch drops only the requesting owner; the watch itself is removed once the owner list is empty.
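The merge rule can be modeled in isolation. This is a toy version under assumed names and a fixed-size owner array, not the plugin's actual struct or hash-table code:

```c
#include <string.h>

#define MAX_OWNERS 8

/* Toy model of the duplicate-add merge rule: union the owner list and
 * lower start_block if the new request needs to scan further back.
 * Field names and the fixed array are illustrative assumptions. */
struct toy_watch {
	unsigned int start_block;
	const char *owners[MAX_OWNERS];
	int num_owners;
};

static void merge_watch(struct toy_watch *w, const char *owner,
			unsigned int start_block)
{
	/* Never raise start_block: an earlier anchor must win so
	 * re-registering daemons don't lose missed events. */
	if (start_block < w->start_block)
		w->start_block = start_block;
	for (int i = 0; i < w->num_owners; i++)
		if (!strcmp(w->owners[i], owner))
			return;	/* owner already registered */
	if (w->num_owners < MAX_OWNERS)
		w->owners[w->num_owners++] = owner;
}
```

Deletion is the mirror image: drop the one owner, and free the watch only when the owner list empties.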
Add the chain-polling loop. A timer fires bwatch_poll_chain, which calls getchaininfo to learn bitcoind's tip; if we're behind, we fetch the next block via getrawblockbyheight, append it to the in-memory history and persist it to the datastore. After each successful persist we reschedule the timer at zero delay so we keep fetching back-to-back until we catch up to the chain tip. Once getchaininfo reports no new block, we settle into the steady-state cadence (30s by default, tunable via the --bwatch-poll-interval option). This commit only handles the happy path. Reorg detection, watchman notifications and watch matching land in subsequent commits.
After bwatch persists a new tip, send a block_processed RPC to watchman (lightningd) with the height and hash. bwatch only continues polling for the next block once watchman has acknowledged that it has also processed the new block height on its end. This matters for crash safety: on restart we treat watchman's height as the floor and re-fetch anything above it, so any block we acted on must be visible to watchman before we move on. If watchman isn't ready yet (e.g. lightningd still booting) the RPC errors out non-fatally; we just reschedule and retry.
When handle_block fetches the next block, validate its parent hash against our current tip. If they disagree we're seeing a reorg: pop our in-memory + persisted tip via bwatch_remove_tip, walk the history one back, and re-fetch from the new height. Each fetch may itself reorg further, so the loop naturally peels off as many stale tips as needed until the chain rejoins. After every rollback, tell watchman the new tip via revert_block_processed so its persisted height tracks bwatch's. If we crash before the ack lands, watchman's stale height will be higher than ours on restart, which retriggers the rollback. If the rollback exhausts our history (we rolled back past the oldest record we still hold) we zero current_height/current_blockhash and let the next poll re-init from bitcoind's tip. Notifying owners that their watches were reverted lands in a subsequent commit.
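The unwind can be pictured as a loop that pops stale tips until the parent hash matches again. A toy model with integer "hashes" (the real code re-fetches from bitcoind between pops; this compresses that into one pass):

```c
/* Toy reorg unwind: given our stored tail of tip hashes and the parent
 * hash claimed by the next fetched block, pop stale tips until the
 * chain rejoins. len hitting 0 mirrors the "rolled back past our
 * oldest record, re-init from bitcoind's tip" case. Returns how many
 * blocks were rolled back. */
static int unwind_to_parent(int *history, int *len, int parent_hash)
{
	int popped = 0;
	while (*len > 0 && history[*len - 1] != parent_hash) {
		(*len)--;	/* the bwatch_remove_tip step */
		popped++;
	}
	return popped;
}
```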
Add two RPCs for surfacing watches to lightningd on a new block or reorg. bwatch_send_watch_found informs lightningd of any watches that were found in the current processed block. The owner is used to disambiguate watches that may pertain to multiple subdaemons. bwatch_send_watch_revert is sent in case of a revert; it informs the owner that a previously reported watch has been rolled back. These functions get wired up in subsequent commits. Made-with: Cursor
After every fetched block, walk each transaction and fire watch_found for matching scriptpubkey outputs and spent outpoints. Outputs are matched by hash lookup against scriptpubkey_watches; inputs by reconstructing the spent outpoint and looking it up in outpoint_watches.
After the per-tx scriptpubkey/outpoint pass, walk every scid watch and fire watch_found for any whose encoded blockheight matches the block just processed. The watch's scid encodes the expected (txindex, outnum), so we jump straight there without scanning. If the position is out of range (txindex past the block, or outnum past the tx) we send watch_found with tx=NULL, which lightningd treats as the "not found" case.
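The "jump straight there" works because a short_channel_id (per BOLT 7) packs the position into one u64: 3 bytes of block height, 3 bytes of transaction index, 2 bytes of output index. A sketch of the decode:

```c
#include <stdint.h>

/* BOLT 7 short_channel_id layout: blocknum (24 bits) << 40 |
 * txindex (24 bits) << 16 | outnum (16 bits). An scid watch can
 * therefore index directly into block->tx[txindex]->output[outnum]
 * instead of scanning. Helper names are illustrative. */
static uint32_t scid_blocknum(uint64_t scid) { return scid >> 40; }
static uint32_t scid_txnum(uint64_t scid)    { return (scid >> 16) & 0xFFFFFF; }
static uint16_t scid_outnum(uint64_t scid)   { return scid & 0xFFFF; }

static uint64_t mk_scid(uint32_t block, uint32_t txnum, uint16_t outnum)
{
	return ((uint64_t)block << 40) | ((uint64_t)txnum << 16) | outnum;
}
```

The out-of-range check in the commit is then just `txindex >= num_txs` or `outnum >= num_outputs`, which triggers the tx=NULL "not found" report.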
Subdaemons like channel_open and onchaind care about confirmation depth, not the underlying tx. Walk blockdepth_watches on every new block and send watch_found with the current depth to each owner. This is what keeps bwatch awake in environments like Greenlight, where we'd otherwise prefer to hibernate: as long as something is waiting on a confirmation milestone, the blockdepth watch holds the poll open; once it's deleted, we're free to sleep again. Depth fires before the per-tx scan so restart-marker watches get a chance to spin up subdaemons before any outpoint hits land for the same block. Watches whose start_block is ahead of the tip are stale (reorged-away, awaiting delete) and skipped.
On init, query bcli for chain name, headercount, blockcount and IBD state, then forward the result to watchman via the chaininfo RPC before bwatch starts its normal poll loop. Watchman uses this to gate any work that depends on bitcoind being synced. If bitcoind's blockcount comes back lower than our persisted tip, peel stored blocks off until they line up so watchman gets a consistent picture. During steady-state polling the same case is handled by hash-mismatch reorg detection inside handle_block; this shortcut only matters at startup, before we've fetched anything. If bcli or watchman is not yet ready, log and fall back to scheduling the poll loop anyway so init never stalls. bwatch_remove_tip is exposed in bwatch.h so the chaininfo path in bwatch_interface.c can use it.
addscriptpubkeywatch and delscriptpubkeywatch are how lightningd asks bwatch to start/stop watching an output script for a given owner. Changelog-Added: Plugins: bwatch exposes addscriptpubkeywatch / delscriptpubkeywatch RPCs.
addoutpointwatch and deloutpointwatch are how lightningd asks bwatch to start/stop watching a specific (txid, outnum) for a given owner. Changelog-Added: Plugins: bwatch exposes addoutpointwatch / deloutpointwatch RPCs.
addscidwatch and delscidwatch are how lightningd asks bwatch to start/stop watching a specific short_channel_id for a given owner. The scid pins the watch to one (block, txindex, outnum), so on each new block we go straight to that position rather than scanning. Changelog-Added: Plugins: bwatch exposes addscidwatch / delscidwatch RPCs.
addblockdepthwatch and delblockdepthwatch are how lightningd asks bwatch to start/stop a depth-tracker for a given (owner, start_block). start_block doubles as the watch key and the anchor used to compute depth = tip - start_block + 1 on every new block. Changelog-Added: Plugins: bwatch exposes addblockdepthwatch / delblockdepthwatch RPCs. Made-with: Cursor
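The depth arithmetic stated above is worth pinning down, since the +1 trips people up: a tx confirmed in the block at start_block has depth 1 the moment that block is the tip. A minimal sketch:

```c
/* Depth of a blockdepth watch anchored at start_block, given the
 * current tip. The anchor block itself counts as one confirmation. */
static unsigned int block_depth(unsigned int tip, unsigned int start_block)
{
	if (tip < start_block)
		return 0;	/* anchor not yet (or no longer) on chain */
	return tip - start_block + 1;
}
```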
listwatch returns every active watch as a flat array. Each entry carries its type-specific key (scriptpubkey hex, outpoint, scid triple, or blockdepth anchor) plus the common type / start_block / owners fields, so callers can dispatch on the per-type key without parsing the type string first. Mostly used by tests and operator tooling to inspect what bwatch is currently tracking. Changelog-Added: Plugins: bwatch exposes listwatch RPC.
To support rescans (added next), bwatch_process_block_txs and bwatch_check_scid_watches gain a `const struct watch *w` parameter so the caller can ask the scanner to check just one watch instead of all of them. When a new watch is added with start_block <= current_height (say the watch starts at block 100 but bwatch is already at 105) we need to replay blocks 100..105 for that watch alone - not re-scan every active watch over those blocks.

w == NULL -> check every active watch (normal polling)
w != NULL -> check only that one watch (rescan)
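The convention reduces to a one-line predicate applied at each match site. A stand-in sketch (the struct here is a dummy, not the plugin's real type):

```c
#include <stddef.h>

struct watch { int id; };	/* stand-in for the plugin's watch type */

/* NULL filter = normal polling, check every watch;
 * non-NULL filter = rescan, check only that one watch. */
static int watch_matches_filter(const struct watch *w,
				const struct watch *filter)
{
	return filter == NULL || filter == w;
}
```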
bwatch_start_rescan(cmd, w, start_block, target_block) replays blocks from start_block..target_block for a single watch w (or for all watches if w is NULL). The rescan runs asynchronously: fetch_block_rescan -> rescan_block_done -> next fetch, terminating with rescan_complete (which returns success for an RPC-driven rescan and aux_command_done for a timer-driven one). Nothing calls bwatch_start_rescan yet; the add-watch RPCs wire it up next.
bwatch_add_watch returns the watch it created (or found); each
addwatch RPC now passes that into add_watch_and_maybe_rescan,
which:
- returns success immediately if start_block > current_height
(the watch only cares about future blocks), and
- otherwise calls bwatch_start_rescan over
[start_block, current_height] for that one watch and leaves
the RPC pending until the rescan completes.
This lets callers add a watch for an event that already confirmed
(e.g. a channel funding tx some blocks back) and still get a
watch_found.
When bwatch removes its tip block on a reorg, fire watch_revert for
the affected owners so lightningd-side handlers actually run.
Two cases, depending on whether the watch has an anchor block:
- scriptpubkey watches have no anchor (a wallet address can receive
funds in any block), so notify every owner on every removed block.
Handlers are cheap and defensive — they check their own state and
no-op if there is nothing to undo.
- outpoint, scid, and blockdepth watches each carry a start_block.
Notify only those with start_block >= removed_height (the watch's
anchor is gone). Older watches stay armed and refire naturally on
the new chain.
Owners are snapshotted before dispatch so revert handlers can safely
call watchman_unwatch_* and mutate the watch tables.
Core Lightning's lightningd currently polls the Bitcoin backend every 30 seconds and performs all transaction filtering internally. We introduce bwatch, a dedicated block filtering plugin that sits between bcli and lightningd. This avoids breaking changes to bcli (used by alternative plugins like Sauron) while keeping filtering logic separate from core.
This PR is the first in a planned series on bwatch, kept small for ease of review and bisectability. It only introduces the bwatch plugin; it does not wire it up to lightningd, which will follow in successive PRs.