BftState, NodeStateMachine, voting rules, and routines (genesis → build → broadcast → QC → commit). Canonical flow: the end-to-end transaction and block flow (steps 1–14, single-shard vs multi-shard branch) is in Transaction Flow: User to Finality (module-01b). This module focuses on BFT and node state in Rust: block lifecycle, vote order (BlockVote before StateVoteBlock), and the Rust routines (BftState, handlers, commit rule).
High-level: Transactions enter a node's mempool, get selected into a block by the block producer, the block goes through BFT consensus (validators vote on the block), and once a quorum is reached the block is finalized. Single-shard flow stays inside one shard; multi-shard adds provisions (StateProvision, ProvisionCoordinator — no 2PC coordinator) so other shards can complete their part. Below: single-shard and multi-shard block flow, then a Rust deep dive.
All steps happen inside one shard. The BFT object is that shard's block.
A block moves through these stages:
Vote order: BlockVote (consensus on the block) comes first; StateVoteBlock (execution outcome per tx) happens only after the block is committed.
Flow (same style as Transaction Flow):
The proposer packages mempool transactions into a block and broadcasts it (BlockProposal). Assume the parent block's state root is state_root_0. The walkthrough follows the order: mempool → proposal → votes → QC → commit → execution votes → certificates.
Each validator has its own mempool; content is shard-1 txs only and kept in sync via gossip.
{
"pool": {
"0xhash_txA": {
"tx": "<RoutableTransaction: txA>",
"status": "Pending",
"added_at": 1234567890,
"cross_shard": false,
"submitted_locally": true
},
"0xhash_txB": { "tx": "<txB>", "status": "Pending", ... }
}
}
Keys: tx_hash = hash(encoded RoutableTransaction) (e.g. SBOR then hash). Included: Only txs where topology.involves_local_shard(tx); same txs on all shard-1 nodes via gossip.
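The mempool keying above can be sketched. This is a hedged stand-in, not the crate's real code: the real pipeline SBOR-encodes the transaction and applies a cryptographic hash, while this sketch uses `DefaultHasher` as a deterministic placeholder so it is runnable with the standard library only.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Placeholder for hash(encoded RoutableTransaction): the real pipeline
// SBOR-encodes the transaction and applies a cryptographic hash. Here
// DefaultHasher is a deterministic stand-in for the sketch.
fn tx_hash(encoded_tx: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    encoded_tx.hash(&mut h);
    h.finish()
}

// Mempool key derived from the encoded bytes: nodes that gossip the same
// encoded transaction derive the same key.
fn mempool_key(encoded_tx: &[u8]) -> String {
    format!("0x{:016x}", tx_hash(encoded_tx))
}
```

The point is only that the key is a pure function of the encoded bytes, which is why gossiping the same transaction converges on the same pool entry on every shard-1 node.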
Proposer (e.g. V0 for this height/round) takes parent = block N−1, certificates from N−1’s execution (i.e. outcome of N−1’s txs), and ready txs from mempool (txA, txB). Builds header and block.
The header contains a parent_qc (consensus proof for the previous block) and a state_root (state after applying this block’s certificates). The certificates in block N are the execution outcomes of the parent block N−1’s transactions (e.g. tx a, tx b in N−1 → their certs go in N). The block body’s certificates are what actually feed into the state root; see the box below.
{
"BlockHeader": {
"height": { "0": 5 },
"parent_hash": "0xhash_block_4",
"parent_qc": { "QuorumCertificate for block N−1 (BFT: 2f+1 voted for that block)" },
"proposer": { "0": 0 },
"timestamp": 1700000000000,
"round": 0,
"is_fallback": false,
"state_root": "0xstate_root_after_certs",
"state_version": 42,
"transaction_root": "0xtx_root_5"
}
}
How each field is obtained:
| Field | Formula / source |
|---|---|
| height | parent_qc.height.0 + 1 |
| parent_hash | latest_qc.block_hash (block N−1) |
| parent_qc | QC for block N−1 (from BFT) |
| proposer | committee[(height + round) % len] (round-robin) |
| timestamp | now.as_millis() (proposer wall clock) |
| round | Current BFT view |
| state_root | storage.prepare_block_commit(parent_state_root, block.certificates, local_shard) → first element. parent_state_root = get_block_by_hash(parent_hash).header.state_root; then apply certificates in this block (from N−1’s txs) to get new JMT root. |
| state_version | parent_state_version + count(certificates in this block with state writes) |
| transaction_root | compute_transaction_root(retry_txs, priority_txs, normal_txs) — see below. |
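Two of the header formulas in the table can be sketched directly. `ValidatorId` is simplified to `usize` here, and a fixed committee ordering is assumed:

```rust
// committee[(height + round) % len] — round-robin, shifted by round so a
// failed round rotates to the next proposer.
fn proposer(committee: &[usize], height: u64, round: u64) -> usize {
    committee[((height + round) % committee.len() as u64) as usize]
}

// height = parent_qc.height + 1
fn next_height(parent_qc_height: u64) -> u64 {
    parent_qc_height + 1
}
```

With a four-validator committee, height 5 / round 0 selects index 1; bumping the round to 3 rotates to index 0.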
Transaction root formula:
leaves = [ hash("RETRY" || tx_hash) for retry_txs ] + [ hash("PRIORITY" || tx_hash) for priority_txs ] + [ hash("NORMAL" || tx_hash) for normal_txs ]; transaction_root = compute_merkle_root(leaves). Empty sections → Hash::ZERO.
parent_qc (in header) vs certificates (in block body):
| Where | What it is | Role |
|---|---|---|
| parent_qc (in BlockHeader) | QuorumCertificate for block N−1 | BFT consensus proof: “2f+1 validators voted for block N−1.” Chains blocks for consensus; used to derive height, parent_hash. Does not define state. |
| certificates (in Block body) | TransactionCertificates from block N−1’s execution | Execution outcomes: “How did the transactions in block N−1 run?” Block N carries them so the new state root can be computed: state_root = parent_state_root + apply(certificates). The header’s state_root is exactly that result. |
In short: parent_qc = “we agreed on the previous block”; certificates = “execution results for the parent block’s txs (N−1), which we apply to parent state root to get the new state root.”
Block state root vs transaction root: The block state root (in the header) essentially encodes the order of blocks (hierarchy): each block is like a package of txs, and the state root chains parent → child so the sequence of state roots reflects block order. The transaction root (Merkle root of the tx list in this block) commits to the order of transactions within the block; global transaction order is then block order plus position within each block, so you can compute or verify the global order of transactions regardless of which block (package) they are in.
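The transaction-root construction (tagged per-section leaves, then a Merkle root) can be sketched as follows. The hash is again a `DefaultHasher` placeholder, the odd-leaf convention (pair with itself) is a sketch assumption, and exactly where Hash::ZERO enters for empty sections is assumed to mean "empty tree → zero":

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Placeholder hash standing in for the real cryptographic hash.
fn h(bytes: &[u8]) -> u64 {
    let mut s = DefaultHasher::new();
    bytes.hash(&mut s);
    s.finish()
}

// Leaf = hash(TAG || tx_hash), per section tag RETRY / PRIORITY / NORMAL.
fn leaf(tag: &str, tx_hash: &[u8]) -> u64 {
    let mut m = tag.as_bytes().to_vec();
    m.extend_from_slice(tx_hash);
    h(&m)
}

// Pairwise Merkle fold; an empty tree yields 0 (stand-in for Hash::ZERO).
fn compute_merkle_root(mut leaves: Vec<u64>) -> u64 {
    if leaves.is_empty() {
        return 0;
    }
    while leaves.len() > 1 {
        leaves = leaves
            .chunks(2)
            .map(|p| {
                let mut m = p[0].to_be_bytes().to_vec();
                // odd leaf promoted by pairing with itself (sketch convention)
                m.extend_from_slice(&p.get(1).copied().unwrap_or(p[0]).to_be_bytes());
                h(&m)
            })
            .collect();
    }
    leaves[0]
}

fn compute_transaction_root(retry: &[&[u8]], priority: &[&[u8]], normal: &[&[u8]]) -> u64 {
    let leaves: Vec<u64> = retry.iter().map(|t| leaf("RETRY", t))
        .chain(priority.iter().map(|t| leaf("PRIORITY", t)))
        .chain(normal.iter().map(|t| leaf("NORMAL", t)))
        .collect();
    compute_merkle_root(leaves)
}
```

Because leaves are tagged and ordered, the root commits to both the section each tx sits in and its position, which is what lets global order be derived from block order plus in-block position.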
Example: Block N−1 has tx list (tx a, tx b) and state root 123. Block N (proposed) has state_root(N) = apply(parent.state_root: 123, certificates for tx a and tx b). Tx a and tx b are in the N−1 block; their execution certificates go into block N.
{
"Block": {
"header": { "BlockHeader": "..." },
"retry_transactions": [],
"priority_transactions": [],
"transactions": [
{ "RoutableTransaction": "txA payload" },
{ "RoutableTransaction": "txB payload" }
],
"certificates": [
{ "TransactionCertificate": "execution outcome for a tx from block N−1 (from N−1's execution)" }
],
"deferred": [],
"aborted": []
}
}
certificates: TransactionCertificates from block N−1’s execution (outcome of N−1’s txs). Applied with parent state root to get header.state_root (formula: state_root = parent_state_root + apply(certificates)). transactions: From mempool ready_transactions(max_txs, ...); order: retry, priority, normal (each section hash-sorted).
Validators have received the block. After validating it (state root, tx root, structural checks), each signs a BlockVote on the block hash. This is the consensus vote (happens before commit and before execution votes).
{
"BlockVote": {
"block_hash": "0xhash_of_block_header_5",
"height": { "0": 5 },
"round": 0,
"voter": { "0": 0 },
"signature": "<BLS G2 signature>",
"timestamp": 1700000000001
}
}
| Field | Formula / source |
|---|---|
| block_hash | header.hash() = Hash::from_bytes(basic_encode(header)) |
| height, round | From the block header being voted on |
| voter | This validator’s ValidatorId |
| signature | signing_key.sign_v1(message) where message = block_vote_message(shard, height, round, block_hash) = DOMAIN_BLOCK_VOTE || shard.0 || height || round || block_hash (all fixed-width/encoded) |
| timestamp | Validator’s clock when creating the vote |
When 2f+1 validators have voted for the same block_hash at (height, round), their BlockVotes are aggregated into a QC. The block is now certified; with the two-chain rule it can be committed.
{
"QuorumCertificate": {
"block_hash": "0xhash_of_block_header_5",
"height": { "0": 5 },
"parent_block_hash": "0xhash_block_4",
"round": 0,
"signers": "<bitfield: which validators signed>",
"aggregated_signature": "<BLS G2 aggregated>",
"voting_power": 3,
"weighted_timestamp_ms": 1700000000000
}
}
| Field | Formula / source |
|---|---|
| block_hash, height, round | From the block that received quorum |
| parent_block_hash | From that block’s header / parent |
| signers | Bitfield of validator indices that contributed to the aggregate |
| aggregated_signature | BLS aggregate of each signer’s signature from their BlockVote (all over the same block_vote_message) |
| voting_power | Sum of stake of signers; must ≥ quorum (e.g. 2f+1) |
| weighted_timestamp_ms | sum(vote.timestamp * stake_i) / sum(stake_i) over signers |
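Two bits of QC arithmetic from the tables above can be sketched. Note the real quorum check is stake-weighted (voting_power must reach quorum); this sketch assumes equal stake for the threshold and shows the stake-weighted timestamp separately:

```rust
// With n = 3f + 1 validators, tolerate f faults and require 2f + 1
// matching votes to certify a block.
fn quorum(n: u64) -> u64 {
    let f = (n - 1) / 3;
    2 * f + 1
}

// weighted_timestamp_ms = sum(vote.timestamp * stake_i) / sum(stake_i),
// computed in u128 to avoid overflow with millisecond timestamps.
fn weighted_timestamp_ms(votes: &[(u64, u64)]) -> u64 {
    let total: u128 = votes.iter().map(|&(_, s)| s as u128).sum();
    let acc: u128 = votes.iter().map(|&(t, s)| t as u128 * s as u128).sum();
    (acc / total) as u64
}
```

For n = 4 the quorum is 3, matching the `voting_power: 3` in the example QC above.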
Once the block is committed (step 8), each validator runs txA then txB through the Radix Engine and produces one StateVoteBlock per tx. So execution votes (StateVoteBlock) happen after consensus votes (BlockVote) and commit. Example for txA at validator V0:
{
"StateVoteBlock": {
"transaction_hash": "0xhash_txA",
"shard_group_id": { "0": 1 },
"state_root": "0xoutputs_merkle_root_txA",
"success": true,
"state_writes": [
{ "node_id": "<AccountA node>", "partition": 0, "sort_key": "...", "value": "<+2 XRD balance blob>" },
{ "node_id": "<AccountB node>", "partition": 0, "sort_key": "...", "value": "<-500 USDC blob>" }
],
"validator": { "0": 0 },
"signature": "<BLS G2>"
}
}
state_root here = compute_writes_commitment(state_writes) — outputs merkle root (commitment to this tx’s writes only). signature: signing_key.sign_v1(exec_vote_message(tx_hash, state_root, shard, success)).
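The per-tx writes commitment ("hash chain over sorted writes") can be sketched like this. The sort key (the full (key, value) pair here) and the placeholder hash are assumptions of the sketch, not the crate's exact scheme:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Placeholder hash standing in for the real cryptographic hash.
fn h(bytes: &[u8]) -> u64 {
    let mut s = DefaultHasher::new();
    bytes.hash(&mut s);
    s.finish()
}

// Hash chain over sorted writes: sort for a deterministic order that is
// independent of insertion order, then fold each write into a running hash.
fn compute_writes_commitment(writes: &[(Vec<u8>, Vec<u8>)]) -> u64 {
    let mut sorted = writes.to_vec();
    sorted.sort();
    let mut acc = 0u64;
    for (k, v) in &sorted {
        let mut m = acc.to_be_bytes().to_vec();
        m.extend_from_slice(k);
        m.extend_from_slice(v);
        acc = h(&m); // chain step
    }
    acc
}
```

Sorting first is what makes 2f+1 validators converge on the same commitment when they produced the same writes, regardless of the order execution emitted them.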
When 2f+1 validators have the same state_root (and success) for a given tx, their execution votes are aggregated into a StateCertificate for that shard. These certs are for transactions that were in the block we just committed; one StateCertificate per (tx, shard). They feed into the TransactionCertificate (step 7) and are carried in the next block’s body.
{
"StateCertificate": {
"transaction_hash": "0xhash_txA",
"shard_group_id": { "0": 1 },
"read_nodes": [],
"state_writes": [ "same as quorum votes" ],
"outputs_merkle_root": "0xoutputs_merkle_root_txA",
"success": true,
"aggregated_signature": "<BLS G2 aggregated>",
"signers": "<bitfield>",
"voting_power": 3
}
}
For single-shard txs, one StateCertificate per tx. For cross-shard, one per participating shard; then a TransactionCertificate combines them.
{
"TransactionCertificate": {
"transaction_hash": "0xhash_txA",
"decision": "Accept",
"shard_proofs": {
"1": { "StateCertificate": "..." }
}
}
}
decision: Accept iff all shard proofs succeeded; otherwise Reject. shard_proofs: Map shard_id → StateCertificate. These certs (for txA, txB) are produced after block N is committed and will be in block N+1.
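The decision rule is small enough to state as code. `ShardProof` here is a StateCertificate reduced to the one field the decision needs; the type names are illustrative:

```rust
use std::collections::BTreeMap;

#[derive(Debug, PartialEq)]
enum Decision {
    Accept,
    Reject,
}

// A StateCertificate reduced to the one field the decision reads.
struct ShardProof {
    success: bool,
}

// decision: Accept iff every participating shard's proof succeeded.
fn decide(shard_proofs: &BTreeMap<u64, ShardProof>) -> Decision {
    if shard_proofs.values().all(|p| p.success) {
        Decision::Accept
    } else {
        Decision::Reject
    }
}
```

For a single-shard tx the map has one entry; for cross-shard, one per participating shard, and a single failed shard flips the whole tx to Reject.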
When BFT has a QC for block N and the two-chain rule is satisfied:
Commit: set_committed_state(height, block_hash, qc); then put_block(height, block, qc) (RocksDB: blocks, transactions, certificates column families).
State: Block N's state root = parent state root (block N−1) + apply(certificates in block N). Those certificates are the TransactionCertificates from block N−1's execution (block N carries them). Apply them to the parent JMT to get the new root (already in header.state_root); commit those writes to the JMT.
Block N+1's header: state_root = parent_state_root(block N) + apply(certificates in block N+1). Those certificates are the TransactionCertificates for txA and txB (and any other txs in block N).
| Value | Formula / function |
|---|---|
| Block header hash | basic_encode(header) then Hash::from_bytes(...) |
| Transaction root | compute_merkle_root([hash(TAG || tx_hash) for each section in order]) |
| State root (block) | prepare_block_commit(parent_state_root, certificates, local_shard) → JMT root. Certificates in this block = execution outcomes of the parent block’s txs. |
| Block vote message | block_vote_message(shard, height, round, block_hash) → sign with BLS (types in crates/messages/src/notification/mod.rs) |
| Outputs merkle root (per tx) | compute_writes_commitment(state_writes) = hash chain over sorted writes |
| Execution vote message | exec_vote_message(tx_hash, state_root, shard, success) → sign with BLS |
| QC | BLS aggregate of 2f+1 block votes over same message |
| StateCertificate | BLS aggregate of 2f+1 StateVoteBlocks with same (tx_hash, state_root, success) |
BlockVote only checks that the block is well-formed: state_root = parent_state_root + apply(certificates in this block). Those certificates are the execution outcomes of the parent block’s transactions (N−1). So validators can vote without running the current block’s transactions. Doing execution and StateVoteBlock in the same round would tie consensus to execution: every validator would have to run the block’s txs before voting. That would slow rounds (execution can be heavy), and any execution non-determinism or bug could prevent quorum on the block itself. Separating the two keeps consensus fast and simple; execution outcomes are then certified in the next phase and carried in the next block.
The committed block is still valid and useful. Consensus committed ordering (this block contains txA, txB, …). After commit, execution produces a TransactionCertificate per tx with decision: Accept or Reject. Rejected txs get a cert with decision: Reject; that cert goes in block N+1 like any other. The state root of block N is unchanged by block N’s own tx outcomes — it’s parent + apply(certs from N−1). When block N+1 applies the certs from block N, it applies “no state change” for rejected txs (or records the reject). So the chain stays consistent: the committed block fixed the order; the following block’s certificates record which txs were accepted and which rejected.
Each layer serves a different purpose:
StateVoteBlock — one validator's statement: "this tx produced these writes (state_root) and success/fail." The vote is on the tx's execution outcome (writes + success bit).
StateCertificate — when 2f+1 validators agree (same state_root, same success), their StateVoteBlocks are aggregated into a StateCertificate: "the committee for this shard agrees on how this tx executed on this shard."
TransactionCertificate — combines shard_proofs (one StateCertificate per involved shard) and sets decision: Accept iff all shards succeeded, else Reject. This is what the next block needs: one cert per tx from the previous block to apply to the state root.
Chain: many validators (StateVoteBlock) → quorum per tx per shard (StateCertificate) → one final cert per tx (TransactionCertificate) carried in the next block.
Transactions that span shards add receipts (cross-shard messages) and provisions. Each shard still runs BFT on its own block; the target shard’s block includes and applies the receipts provisioned by the source shard.
Assume: The distinguished transaction (e.g. CallMethod(Component(C), "transfer_to_bob", Decimal("10")) — C on Shard 0, Bob on Shard 1). Tx touches Shard 0 (source) and Shard 1 (target). consensus_shards = {0, 1}; protocol order by ShardGroupId. Cross-shard execution is provision-based (no 2PC coordinator): ProvisionCoordinator on each node tracks quorum of provisions from the other shard; then execution and certificate aggregation.
The tx is sent to every shard in all_shards_for_transaction(tx). Each shard's mempool holds it with cross_shard: true. Block proposal per shard works the same as in the single-shard case; execution is provision-based (StateProvision, ProvisionCoordinator, then execution and certificates).
{
"pool": {
"0xhash_crossTx": {
"tx": "<RoutableTransaction>",
"status": "Pending",
"cross_shard": true,
"consensus_shards": [0, 1]
}
}
}
Shard 0's block producer includes the tx. Execution may produce StateProvision to send to Shard 1. BFT object is still Shard 0's block.
{
"Block": {
"header": {
"BlockHeader": {
"height": { "0": 5 },
"state_root": "0x...",
...
}
},
"transactions": [{ "RoutableTransaction": "crossTx" }],
"certificates": [],
"commitment_proofs": []
}
}
Validators on Shard 0 produce signed StateProvision (proof of state written) and broadcast to Shard 1.
{
"StateProvision": {
"transaction_hash": "0xhash_crossTx",
"source_shard": { "0": 0 },
"target_shard": { "0": 1 },
"entries": [...],
"block_height": 5,
"validator": { "0": 0 },
"signature": "<BLS G2>"
}
}
On each node of Shard 1, the ProvisionCoordinator buffers StateProvisions; required_shards = {0}. When a quorum (2f+1) of matching provisions from Shard 0 has arrived, it emits ProvisioningComplete.
{
"TxRegistration": {
"transaction_hash": "0xhash_crossTx",
"required_shards": [0],
"provisions_by_shard": {
"0": ["<StateProvision>", ...]
},
"complete": true
}
}
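The TxRegistration bookkeeping above can be sketched as follows. Field and method names are illustrative stand-ins for the crate's real API; quorum is passed in explicitly (e.g. 3 for n = 4 per shard):

```rust
use std::collections::{HashMap, HashSet};

// Sketch of ProvisionCoordinator bookkeeping: buffer StateProvisions per
// source shard and report completion once every required shard has a
// quorum of distinct validators. Names are illustrative, not the real API.
struct TxRegistration {
    required_shards: HashSet<u64>,
    provisions_by_shard: HashMap<u64, HashSet<u64 /* validator id */>>,
    quorum: usize,
}

impl TxRegistration {
    fn new(required_shards: &[u64], quorum: usize) -> Self {
        TxRegistration {
            required_shards: required_shards.iter().copied().collect(),
            provisions_by_shard: HashMap::new(),
            quorum,
        }
    }

    // Returns true once quorum is reached for every required shard
    // (the point where ProvisioningComplete would be emitted).
    fn on_provision(&mut self, source_shard: u64, validator: u64) -> bool {
        self.provisions_by_shard.entry(source_shard).or_default().insert(validator);
        self.complete()
    }

    fn complete(&self) -> bool {
        self.required_shards.iter().all(|s| {
            self.provisions_by_shard
                .get(s)
                .map_or(false, |v| v.len() >= self.quorum)
        })
    }
}
```

Using a set per shard means a duplicate provision from the same validator cannot advance the quorum, which is the property the coordinator needs against replays.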
VerifyAndAggregateProvisions → CommitmentProof. Shard 1's block contains commitment_proofs and applies the receipt from Shard 0.
{
"Block": {
"header": { ... },
"commitment_proofs": [
{
"CommitmentProof": {
"source_shard": { "0": 0 },
"transaction_hash": "0xhash_crossTx",
"aggregated_signature": "<BLS>",
"entries": "..."
}
}
]
}
}
One StateCertificate per shard; TransactionCertificate combines them. decision: Accept iff all shard proofs succeeded.
{
"TransactionCertificate": {
"transaction_hash": "0xhash_crossTx",
"decision": "Accept",
"shard_proofs": {
"0": { "StateCertificate": "..." },
"1": { "StateCertificate": "..." }
}
}
}
Order: protocol order by ShardGroupId. The ProvisionCoordinator requires a quorum of StateProvisions from each required_shard. A CommitmentProof aggregates 2f+1 provisions from a source shard. See Cross-Shard Transactions in Hyperscale-rs for the provision-based protocol and livelock handling (there is no 2PC coordinator in hyperscale-rs).
Setup: Shard 0 (component C), Shard 1 (account Bob). Distinguished transaction (Radix manifest style):
# Signer: Alice (Shard 0). C on Shard 0; Bob on Shard 1.
1. CallMethod(Component(C), "transfer_to_bob", Decimal("10")) # C credits Bob with 10
C (Shard 0) executes the call and produces a receipt to credit Bob (Shard 1) with 10.
Shard 0 proposes Block_0. Execution: component C runs, produces Receipt(Shard0 → Shard1, Credit(Bob, 10)); Block_0 contains this receipt (or its hash). Shard 0 validators send BlockVote(Block_0); quorum on Block_0 → Block_0 finalized → the receipt is provisioned to Shard 1. Shard 1 proposes Block_1, which applies the receipt; validators send BlockVote(Block_1); quorum on Block_1 → Block_1 finalized. Message enums: BlockProposal(Block_0) (Shard 0) → BlockVote(Block_0) → Receipt(Shard0→Shard1, Credit(Bob,10)); Shard 1: BlockProposal(Block_1) containing the receipt application → BlockVote(Block_1) → Block_1 finalized.
Who is involved: Shard 0 block producer + Shard 0 validators (BFT on Block_0); Shard 1 block producer + Shard 1 validators (BFT on Block_1). BFT object on each shard is that shard’s block; “vote on Shard 0 provisions” = Shard 1 voting on the Shard 1 block that applies those provisions.
The following sections map the flow above to the hyperscale-rs codebase: the node state machine, BftState, voting rules, data structures, and routines (genesis → request build → broadcast → QC → commit).
Main object: BftState in crates/bft/src/state.rs.
One BftState per node; it is held by the node's state machine as self.bft in NodeStateMachine (crates/node/src/state.rs). Events are dispatched from NodeStateMachine::handle() to the appropriate self.bft.on_*(...) method. So at the stages we care about, "BFT" = the BftState instance plus the BFT handlers that run outside it (e.g. build_proposal, verify_and_build_qc in crates/bft/src/handlers.rs). The handlers are pure functions invoked by the runner when it executes BFT-related Actions; BftState does not build blocks or QCs itself — it emits actions, and the runner calls the handlers.
Validators vote for a specific block at a given height and round. A vote is "I support this block (hash) at this (height, round)." Votes are collected per block; when a quorum (e.g. 2f+1 out of 3f+1) of validators have voted for the same block, a Quorum Certificate (QC) can be built. The QC proves "enough of us agreed on this block" and is used to extend the chain and to justify the next proposal (parent QC).
Imagine your class is voting on "what to do on Friday":
Rule: For one vote (same height), in one try (same round), each kid can only choose one option. If Alice says "movie" in round 1, that's fine. If Alice later also says "park" in the same round 1, that's cheating—she's saying two different things for the same vote. That's equivocation, and we treat it as Byzantine (bad behavior).
"Different rounds (revotes) are allowed" means: In round 1 Alice said "movie." Time ran out; we start round 2. Now Alice is allowed to change her mind and say "park" in round 2. That's a revote, not cheating, because it's a different round.
So:
BftState keeps voted_heights: HashMap<u64, (Hash, u64)> — for each height we've voted, the (block_hash, round) we voted for — so we don't vote twice for the same height in the same round. received_votes_by_height (and similar structures) record, per (height, round), which validator voted for which block; that's how equivocation is detected (same validator, same (height, round), two different block hashes). VoteSet collects votes for a single block; when the count reaches quorum, the node can emit Action::VerifyAndBuildQuorumCertificate. Commit only happens when the two-chain condition is satisfied (on_block_ready_to_commit → commit_block_and_buffered).
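The three vote outcomes described above (normal vote, duplicate, equivocation vs legal revote) can be captured in a small sketch. Types are simplified (hashes as u64); the real tracking spans voted_heights plus received_votes_by_height:

```rust
use std::collections::HashMap;

// One vote per (height, round); a second, different block hash at the same
// (height, round) is equivocation, while a different round at the same
// height is a legal revote.
#[derive(Debug, PartialEq)]
enum VoteOutcome {
    Recorded,
    Duplicate,
    Equivocation,
}

struct VoteTracker {
    // (height, round) -> block_hash voted for (hash simplified to u64)
    voted: HashMap<(u64, u64), u64>,
}

impl VoteTracker {
    fn new() -> Self {
        VoteTracker { voted: HashMap::new() }
    }

    fn record(&mut self, height: u64, round: u64, block_hash: u64) -> VoteOutcome {
        match self.voted.get(&(height, round)) {
            None => {
                self.voted.insert((height, round), block_hash);
                VoteOutcome::Recorded
            }
            Some(&h) if h == block_hash => VoteOutcome::Duplicate,
            Some(_) => VoteOutcome::Equivocation, // Byzantine: two blocks, same (height, round)
        }
    }
}
```

In the classroom analogy: "movie" twice in round 1 is a duplicate, "movie" then "park" in round 1 is equivocation, and "park" in round 2 is a revote.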
BftState (the main state machine). Location: crates/bft/src/state.rs — pub struct BftState { ... }
Identity / config
node_index, signing_key, topology: Arc<dyn Topology>, shard_group, config: BftConfig, now, last_proposal_time, etc.
Chain state
view: u64 — current round
view_at_height_start: u64 — round at start of current height (for backoff)
committed_height: u64, committed_hash: Hash
latest_qc: Option<QuorumCertificate>
genesis_block: Option<Block>
Pending blocks, votes, certified
pending_blocks: HashMap<Hash, PendingBlock> — blocks being assembled (header + txs/certs)
pending_block_created_at: HashMap<Hash, Duration>
vote_sets: HashMap<Hash, VoteSet> — per-block vote collection
voted_heights: HashMap<u64, (Hash, u64)> — (height → (block_hash, round)) we voted for
received_votes_by_height — equivocation tracking
certified_blocks: HashMap<Hash, (Block, QuorumCertificate)> — certified but not yet committed
pending_proposal: Option<PendingProposal> — in-flight proposal (to correlate ProposalBuilt)
Verification and buffers
pending_qc_verifications, pending_synced_block_verifications, pending_cycle_proof_verifications
pending_state_root_verifications, state_root_verifications_in_flight
verified_state_roots, verified_transaction_roots, verified_cycle_proofs, verified_qcs
pending_commits: BTreeMap<u64, (Hash, QuorumCertificate)> — buffered commits (two-chain order)
pending_commits_awaiting_data, buffered_synced_blocks
last_chain_committed_state_version, last_committed_jmt_state
Helper types
PendingBlock (crates/bft/src/pending.rs) — header + hashes + received txs/certs; builds the full Block when complete.
VoteSet (crates/bft/src/vote_set.rs) — collects votes for one block; verified vs unverified; used to decide when to call QC build.
PendingProposal — { height, round }, used to match Event::ProposalBuilt to the proposal we requested.
BftConfig — proposal_interval, max_transactions_per_block, min_block_interval.
Genesis (BftState): BftState::initialize_genesis(&mut self, genesis: Block) -> Vec<Action>. Called from NodeStateMachine::initialize_genesis() → self.bft.initialize_genesis(genesis) (e.g. after the runner creates genesis). Sets genesis_block, committed_hash, last_chain_committed_state_version from the genesis header; logs "Initialized genesis block"; returns SetTimer for Proposal and Cleanup.
Proposal timer (BftState): BftState::on_proposal_timer(...) -> Vec<Action>, triggered by Event::ProposalTimer.
In NodeStateMachine::handle() this event is dispatched to BFT only after checking round timeout and gathering ready_txs (mempool), deferred (livelock), aborted (mempool), certificates (execution), and commitment_proofs; then self.bft.on_proposal_timer(ready_txs, deferred, aborted, certificates, commitment_proofs) is called. The method derives height from latest_qc / committed_height and round from view; checks we are the proposer for that (height, round); prepares parent QC and timestamp, filters certs/deferrals/aborts, gets the parent state root/version; sets pending_proposal = Some(PendingProposal { height, round }); logs "Requesting block build for proposal"; and returns SetTimer and Action::BuildProposal { proposer, height, round, parent_hash, parent_qc, timestamp, is_fallback, parent_state_root, parent_state_version, retry_transactions, priority_transactions, transactions, certificates, commitment_proofs, deferred, aborted }. So the object that does "Requesting block build for proposal" is BftState; the routine is BftState::on_proposal_timer. It does not build the block itself; it emits Action::BuildProposal, which the runner (action handler) executes.
Building the block: Not BftState. The runner handles Action::BuildProposal in the action handler (crates/node/src/action_handler.rs): it calls hyperscale_bft::handlers::build_proposal(storage, ...) (crates/bft/src/handlers.rs).
Signature: handlers::build_proposal<S>(storage, proposer, height, round, parent_hash, parent_qc, timestamp, is_fallback, parent_state_root, parent_state_version, retry_transactions, priority_transactions, transactions, certificates, commitment_proofs, deferred, aborted, local_shard) -> ProposalResult<S::PreparedCommit>. It assembles the BlockHeader and Block and returns ProposalResult { block, block_hash, prepared_commit }. The action handler then produces Event::ProposalBuilt { height, round, block, block_hash } and enqueues it.
“Broadcasting proposal”: When that event is handled, the node state machine calls self.bft.on_proposal_built(height, round, block, block_hash).
Routine: BftState::on_proposal_built(&mut self, height, round, block, block_hash) -> Vec<Action>. It checks that pending_proposal matches (height, round); logs "Broadcasting proposal"; builds a PendingBlock from the full block and inserts it into pending_blocks; builds BlockHeaderGossip and returns Action::BroadcastBlockHeader { shard, header } plus actions from create_vote(...) (the vote for its own block). So: the object that logs "Broadcasting proposal" and does broadcast + own vote is BftState; the routine is BftState::on_proposal_built. The block value is built by handlers::build_proposal (a pure function over storage and proposal parameters), not by BftState itself.
QC formed (BftState): BftState::on_qc_formed(block_hash, qc, ready_txs, deferred, aborted, certificates) -> Vec<Action>. The runner executes Action::VerifyAndBuildQuorumCertificate and produces Event::QuorumCertificateFormed (or similar); the node passes it to self.bft.on_qc_formed(...). The method updates latest_qc, persists the certified block (PersistBlock), may enqueue Event::BlockReadyToCommit (two-chain rule), and may trigger the next proposal. The QC itself is built earlier by handlers::verify_and_build_qc when the runner runs VerifyAndBuildQuorumCertificate.
Commit (BftState): BftState::commit_block_and_buffered(&mut self, block_hash: Hash) -> Vec<Action> (internal). It is called from on_block_ready_to_commit when the two-chain rule allows committing the parent block. It checks the block is at committed_height + 1, gets (block, qc) from certified_blocks (or pending); updates committed_height, committed_hash, last_chain_committed_state_version; cleans old state; logs "Committing block"; then pushes Action::EmitCommittedBlock { block, qc } and Action::EnqueueInternal { event: Event::BlockCommitted { block_hash, height, block } }. If the next height is already in pending_commits, it loops and commits that too.
Flow (same style as Transaction Flow): genesis → request build → broadcast → QC → commit.
BftState::initialize_genesis — chain starts from a known genesis.
BftState::on_proposal_timer — emits Action::BuildProposal.
hyperscale_bft::handlers::build_proposal → event ProposalBuilt.
BftState::on_proposal_built — emits BroadcastBlockHeader + own vote.
BftState::on_qc_formed — QC built by handlers::verify_and_build_qc.
BftState::commit_block_and_buffered — called from on_block_ready_to_commit when the two-chain rule allows.
Takeaway: BFT as data structures = BftState plus PendingBlock, VoteSet, and related maps/sets. BFT as routines = the on_* methods on BftState plus the pure handlers (build_proposal, verify_and_build_qc) used by the runner when executing BFT actions.
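The "state machine emits actions, runner executes them" split can be reduced to a minimal sketch. All names here are simplified stand-ins for the hyperscale-rs types, not the real API: the state machine does no I/O, it returns Actions; the runner interprets them and feeds Events back.

```rust
// Simplified stand-ins for the real Action / Event / BftState types.
#[derive(Debug, PartialEq)]
enum Action {
    SetTimer,
    BuildProposal { height: u64 },
}

enum Event {
    ProposalTimer,
    ProposalBuilt { height: u64 },
}

struct MiniBft {
    committed_height: u64,
    broadcasts: u64,
}

impl MiniBft {
    fn handle(&mut self, ev: Event) -> Vec<Action> {
        match ev {
            // on_proposal_timer: request a build — do not build here.
            Event::ProposalTimer => vec![
                Action::SetTimer,
                Action::BuildProposal { height: self.committed_height + 1 },
            ],
            // on_proposal_built: broadcast (modeled as a counter bump).
            Event::ProposalBuilt { height: _ } => {
                self.broadcasts += 1;
                vec![]
            }
        }
    }
}
```

This is the design choice the module keeps emphasizing: because handle() is a pure function of state and event, block building and QC verification can live in separate pure handlers invoked by the runner, and the state machine stays deterministic and testable.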