Dynamics of Node State Machine and BFT

⏱️ Duration: 1.5–2 hours 📊 Difficulty: Intermediate 🎯 Hyperscale-rs Specific

Learning Objectives

By the end of this module you should be able to:

  • Trace a block through its lifecycle stages (proposed → received → certified → committed).
  • Explain why BlockVote (consensus) precedes StateVoteBlock (execution), and how certificates chain into the next block.
  • Describe the single-shard vs multi-shard flow (StateProvision, ProvisionCoordinator, CommitmentProof).
  • Map the flow to the Rust code: BftState, its on_* methods, and the pure handlers build_proposal and verify_and_build_qc.

Flow: From Node Mempool to Finalized Block

Canonical flow: The end-to-end transaction and block flow (steps 1–14, single-shard vs multi-shard branch) is in Transaction Flow: User to Finality (module-01b). This module focuses on BFT and node state in Rust: block lifecycle, vote order (BlockVote before StateVoteBlock), and the Rust routines (BftState, handlers, commit rule).

High-level: Transactions enter a node's mempool, get selected into a block by the block producer, the block goes through BFT consensus (validators vote on the block), and once a quorum is reached the block is finalized. Single-shard flow stays inside one shard; multi-shard adds provisions (StateProvision, ProvisionCoordinator — no 2PC coordinator) so other shards can complete their part. Below: single-shard and multi-shard block flow, then a Rust deep dive.

Single-Shard Transaction

All steps happen inside one shard. The BFT object is that shard's block.

Block lifecycle (stages)

A block moves through these stages:

  • Proposed — Proposer builds and broadcasts the block; validators receive it.
  • Received — Validators have the block; they validate it (header, state root, tx root) and send a BlockVote on the block hash. Block vote happens first.
  • Certified — 2f+1 BlockVotes for the same block are aggregated into a QuorumCertificate (QC). The block is then eligible to commit (subject to the two-chain rule).
  • Committed — BFT marks the block committed (persisted, chain advanced). After commit, execution runs: each validator runs the block’s transactions and produces a StateVoteBlock per tx. So: BlockVote → QC → commit first; then StateVoteBlock → StateCertificate → TransactionCertificate (execution layer).
  • Certs in next block — The TransactionCertificates for this block’s txs are included in the next block’s body and used to compute that block’s state root.

Vote order: BlockVote (consensus on the block) comes first; StateVoteBlock (execution outcome per tx) happens only after the block is committed.
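As a sketch, the stages above can be modeled as a Rust enum with a strictly forward transition (names are illustrative, not the crate's actual types):

```rust
// Illustrative sketch of the block lifecycle stages described above.
// These names are for explanation only; they are not the crate's actual types.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum BlockStage {
    Proposed,  // proposer built and broadcast the block
    Received,  // validators validated it and sent a BlockVote
    Certified, // 2f+1 BlockVotes aggregated into a QuorumCertificate
    Committed, // persisted; execution (StateVoteBlock, certs) runs after this
}

// The lifecycle only ever moves forward, one stage at a time.
fn next_stage(s: BlockStage) -> Option<BlockStage> {
    match s {
        BlockStage::Proposed => Some(BlockStage::Received),
        BlockStage::Received => Some(BlockStage::Certified),
        BlockStage::Certified => Some(BlockStage::Committed),
        BlockStage::Committed => None, // terminal: certs go in the next block
    }
}
```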

Flow (same style as Transaction Flow):

BFT / node (per shard)
1. Submission & mempool
User/client sends a tx to a node. Node validates (signature, format, shard targeting) and inserts it into the mempool for that shard.
2. Block production
The block producer (leader) packs txs from the mempool into a block. This block is the BFT object validators will vote on.
3. Distribution & BFT proposal
Block stage: proposed → received. Block producer broadcasts the block to validators of the same shard. Validators receive the proposal (BlockProposal).
4. Validation & BlockVote
Block stage: received. Each validator validates the block (state root, tx root, structure), then signs a BlockVote on the block hash. This is the consensus vote (not yet execution).
5. Quorum (QC)
Block stage: certified. Once 2f+1 validators have sent BlockVotes for the same block, their votes are aggregated into a QuorumCertificate. The block is now certified and (with two-chain rule) can be committed.
6. Finalization (commit)
Block stage: committed. The block is marked finalized and persisted; chain advances. After commit, execution runs: validators produce StateVoteBlock per tx → StateCertificate → TransactionCertificate (those certs go in the next block).
7. Node state
Node state for the shard: last finalized block hash, height, state root. The mempool is not part of consensus state; it is ephemeral and local.

Step-by-step: Mempool → final block (JSON + formulas)

Assume

  • Shard 1, 3 validators (V0, V1, V2).
  • Block N−1 is already committed with some state_root_0.
  • Block N will include txA and txB (and certificates from block N−1’s txs).

Order

  • Consensus (steps 2–4): propose → BlockVote → QC.
  • Commit (step 8): block N is committed.
  • Execution (steps 5–7, after commit): StateVoteBlock per tx → StateCertificate → TransactionCertificate; those certs go in block N+1.
1 Mempool (per-node, shard-scoped)

Each validator has its own mempool; content is shard-1 txs only and kept in sync via gossip.

{
  "pool": {
    "0xhash_txA": {
      "tx": "<RoutableTransaction: txA>",
      "status": "Pending",
      "added_at": 1234567890,
      "cross_shard": false,
      "submitted_locally": true
    },
    "0xhash_txB": { "tx": "<txB>", "status": "Pending", ... }
  }
}

Keys: tx_hash = hash(encoded RoutableTransaction) (e.g. SBOR then hash). Included: Only txs where topology.involves_local_shard(tx); same txs on all shard-1 nodes via gossip.
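A minimal sketch of this shard-scoped admission check; involves_local_shard is approximated by a plain shard-membership test, and the types are illustrative, not the crate's:

```rust
use std::collections::HashMap;

// Hypothetical minimal mempool entry mirroring the JSON above.
struct TxEntry {
    cross_shard: bool,
    status: &'static str,
}

struct Mempool {
    local_shard: u64,
    pool: HashMap<String, TxEntry>,
}

impl Mempool {
    // Only admit txs that touch the local shard
    // (stand-in for topology.involves_local_shard(tx)).
    fn insert(&mut self, tx_hash: &str, shards: &[u64]) -> bool {
        if !shards.contains(&self.local_shard) {
            return false; // not our shard: ignore
        }
        self.pool.insert(
            tx_hash.to_string(),
            TxEntry { cross_shard: shards.len() > 1, status: "Pending" },
        );
        true
    }
}
```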

2 Block proposal (proposer builds block N)

Proposer (e.g. V0 for this height/round) takes parent = block N−1, certificates from N−1’s execution (i.e. outcome of N−1’s txs), and ready txs from mempool (txA, txB). Builds header and block.

2.1 Block header (what BFT votes on)

The header contains a parent_qc (consensus proof for the previous block) and a state_root (state after applying this block’s certificates). The certificates in block N are the execution outcomes of the parent block N−1’s transactions (e.g. tx a, tx b in N−1 → their certs go in N). The block body’s certificates are what actually feed into the state root; see the box below.

{
  "BlockHeader": {
    "height": { "0": 5 },
    "parent_hash": "0xhash_block_4",
    "parent_qc": { "QuorumCertificate for block N−1 (BFT: 2f+1 voted for that block)" },
    "proposer": { "0": 0 },
    "timestamp": 1700000000000,
    "round": 0,
    "is_fallback": false,
    "state_root": "0xstate_root_after_certs",
    "state_version": 42,
    "transaction_root": "0xtx_root_5"
  }
}

How each field is obtained:

Field — Formula / source

  • height — parent_qc.height.0 + 1
  • parent_hash — latest_qc.block_hash (block N−1)
  • parent_qc — QC for block N−1 (from BFT)
  • proposer — committee[(height + round) % len] (round-robin)
  • timestamp — now.as_millis() (proposer wall clock)
  • round — current BFT view
  • state_root — storage.prepare_block_commit(parent_state_root, block.certificates, local_shard) → first element. parent_state_root = get_block_by_hash(parent_hash).header.state_root; then apply the certificates in this block (from N−1's txs) to get the new JMT root.
  • state_version — parent_state_version + count(certificates in this block with state writes)
  • transaction_root — compute_transaction_root(retry_txs, priority_txs, normal_txs) — see below.
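Two of the header formulas above are simple enough to sketch directly in Rust (helper names are illustrative, not the crate's):

```rust
// Sketch of two header-field formulas from the table above:
// proposer = committee[(height + round) % len]  (round-robin leader selection)
// height   = parent QC height + 1
fn proposer_index(height: u64, round: u64, committee_len: u64) -> u64 {
    (height + round) % committee_len
}

fn child_height(parent_qc_height: u64) -> u64 {
    parent_qc_height + 1
}
```

With 3 validators, block 5 at round 0 is proposed by validator (5 + 0) % 3 = 2; a round bump after a timeout rotates the leader.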

Transaction root formula:

  • Leaves = tagged tx hashes in order: [ hash("RETRY" || tx_hash) for retry_txs ] + [ hash("PRIORITY" || tx_hash) for priority_txs ] + [ hash("NORMAL" || tx_hash) for normal_txs ]
  • transaction_root = compute_merkle_root(leaves). Empty sections → Hash::ZERO.
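A minimal sketch of this transaction-root computation, using a toy 64-bit hash (std's DefaultHasher) in place of the real hash function. The structure (tagged leaves in section order, pairwise Merkle fold, zero for empty input) follows the formula above; function names are illustrative:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy 64-bit hash standing in for the real hash function.
fn toy_hash(data: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    data.hash(&mut h);
    h.finish()
}

// Leaf = hash(TAG || tx_hash), as in the formula above.
fn tagged_leaf(tag: &str, tx_hash: u64) -> u64 {
    let mut bytes = tag.as_bytes().to_vec();
    bytes.extend_from_slice(&tx_hash.to_be_bytes());
    toy_hash(&bytes)
}

// Pairwise Merkle fold; an odd leaf is carried up. Empty input -> 0 (Hash::ZERO).
fn compute_merkle_root(mut leaves: Vec<u64>) -> u64 {
    if leaves.is_empty() {
        return 0;
    }
    while leaves.len() > 1 {
        leaves = leaves
            .chunks(2)
            .map(|pair| {
                if pair.len() == 2 {
                    let mut b = pair[0].to_be_bytes().to_vec();
                    b.extend_from_slice(&pair[1].to_be_bytes());
                    toy_hash(&b)
                } else {
                    pair[0]
                }
            })
            .collect();
    }
    leaves[0]
}

// Sections in order: RETRY, PRIORITY, NORMAL.
fn transaction_root(retry: &[u64], priority: &[u64], normal: &[u64]) -> u64 {
    let mut leaves = Vec::new();
    leaves.extend(retry.iter().map(|&h| tagged_leaf("RETRY", h)));
    leaves.extend(priority.iter().map(|&h| tagged_leaf("PRIORITY", h)));
    leaves.extend(normal.iter().map(|&h| tagged_leaf("NORMAL", h)));
    compute_merkle_root(leaves)
}
```

Note that both the order of txs within a section and the section a tx sits in change the root, which is what lets the root commit to the block's exact tx ordering.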

parent_qc (in header) vs certificates (in block body):

  • parent_qc (in BlockHeader) — QuorumCertificate for block N−1. BFT consensus proof: "2f+1 validators voted for block N−1." Chains blocks for consensus; used to derive height and parent_hash. Does not define state.
  • certificates (in Block body) — TransactionCertificates from block N−1's execution. Execution outcomes: "How did the transactions in block N−1 run?" Block N carries them so the new state root can be computed: state_root = parent_state_root + apply(certificates). The header's state_root is exactly that result.

In short: parent_qc = “we agreed on the previous block”; certificates = “execution results for the parent block’s txs (N−1), which we apply to parent state root to get the new state root.”

Block state root vs transaction root: The block state root (in the header) essentially encodes the order of blocks (hierarchy): each block is like a package of txs, and the state root chains parent → child so the sequence of state roots reflects block order. The transaction root (Merkle root of the tx list in this block) commits to the order of transactions within the block; global transaction order is then block order plus position within each block, so you can compute or verify the global order of transactions regardless of which block (package) they are in.

Example: Block N−1 has tx list (tx a, tx b) and state root 123. Block N (proposed) has state_root(N) = apply(parent.state_root: 123, certificates for tx a and tx b). Tx a and tx b are in the N−1 block; their execution certificates go into block N.

2.2 Full block (proposal)

{
  "Block": {
    "header": { "BlockHeader": "..." },
    "retry_transactions": [],
    "priority_transactions": [],
    "transactions": [
      { "RoutableTransaction": "txA payload" },
      { "RoutableTransaction": "txB payload" }
    ],
    "certificates": [
      { "TransactionCertificate": "execution outcome for a tx from block N−1 (from N−1's execution)" }
    ],
    "deferred": [],
    "aborted": []
  }
}

certificates: TransactionCertificates from block N−1’s execution (outcome of N−1’s txs). Applied with parent state root to get header.state_root (formula: state_root = parent_state_root + apply(certificates)). transactions: From mempool ready_transactions(max_txs, ...); order: retry, priority, normal (each section hash-sorted).

3 BFT block vote (per validator) — block stage: received

Validators have received the block. After validating it (state root, tx root, structural checks), each signs a BlockVote on the block hash. This is the consensus vote (happens before commit and before execution votes).

{
  "BlockVote": {
    "block_hash": "0xhash_of_block_header_5",
    "height": { "0": 5 },
    "round": 0,
    "voter": { "0": 0 },
    "signature": "<BLS G2 signature>",
    "timestamp": 1700000000001
  }
}

Field — Formula / source

  • block_hash — header.hash() = Hash::from_bytes(basic_encode(header))
  • height, round — from the block header being voted on
  • voter — this validator's ValidatorId
  • signature — signing_key.sign_v1(message) where message = block_vote_message(shard, height, round, block_hash) = DOMAIN_BLOCK_VOTE || shard.0 || height || round || block_hash (all fixed-width/encoded)
  • timestamp — validator's clock when creating the vote

4 QuorumCertificate (QC) for the block — block stage: certified

When 2f+1 validators have voted for the same block_hash at (height, round), their BlockVotes are aggregated into a QC. The block is now certified; with the two-chain rule it can be committed.

{
  "QuorumCertificate": {
    "block_hash": "0xhash_of_block_header_5",
    "height": { "0": 5 },
    "parent_block_hash": "0xhash_block_4",
    "round": 0,
    "signers": "<bitfield: which validators signed>",
    "aggregated_signature": "<BLS G2 aggregated>",
    "voting_power": 3,
    "weighted_timestamp_ms": 1700000000000
  }
}

Field — Formula / source

  • block_hash, height, round — from the block that received quorum
  • parent_block_hash — from that block's header / parent
  • signers — bitfield of validator indices that contributed to the aggregate
  • aggregated_signature — BLS aggregate of each signer's signature from their BlockVote (all over the same block_vote_message)
  • voting_power — sum of stake of signers; must be ≥ quorum (e.g. 2f+1)
  • weighted_timestamp_ms — sum(vote.timestamp * stake_i) / sum(stake_i) over signers

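The weighted_timestamp_ms formula can be sketched as follows (votes given as (timestamp, stake) pairs; widening to u128 to avoid overflow is an implementation choice here, not necessarily the crate's):

```rust
// weighted_timestamp_ms = sum(vote.timestamp * stake_i) / sum(stake_i)
// over the signers of the QC (integer division).
fn weighted_timestamp_ms(votes: &[(u64, u64)]) -> u64 {
    let total_stake: u128 = votes.iter().map(|&(_, s)| s as u128).sum();
    let weighted: u128 = votes.iter().map(|&(t, s)| t as u128 * s as u128).sum();
    (weighted / total_stake) as u64
}
```

With equal stake this is just the average vote timestamp; higher-stake signers pull the result toward their clock.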
5 Execution: StateVoteBlock (per tx, per validator) — after block committed

Once the block is committed (step 8), each validator runs txA then txB through the Radix Engine and produces one StateVoteBlock per tx. So execution votes (StateVoteBlock) happen after consensus votes (BlockVote) and commit. Example for txA at validator V0:

{
  "StateVoteBlock": {
    "transaction_hash": "0xhash_txA",
    "shard_group_id": { "0": 1 },
    "state_root": "0xoutputs_merkle_root_txA",
    "success": true,
    "state_writes": [
      { "node_id": "<AccountA node>", "partition": 0, "sort_key": "...", "value": "<+2 XRD balance blob>" },
      { "node_id": "<AccountB node>", "partition": 0, "sort_key": "...", "value": "<-500 USDC blob>" }
    ],
    "validator": { "0": 0 },
    "signature": "<BLS G2>"
  }
}

state_root here = compute_writes_commitment(state_writes) — outputs merkle root (commitment to this tx’s writes only). signature: signing_key.sign_v1(exec_vote_message(tx_hash, state_root, shard, success)).
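A sketch of a hash-chain commitment over sorted writes, in the spirit of compute_writes_commitment; DefaultHasher is a toy stand-in for the real hash, and the (key, value) write model is simplified:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Commitment = hash chain over *sorted* state writes. Sorting first makes the
// commitment independent of the order in which writes were produced, so all
// honest validators computing the same write set get the same commitment.
fn compute_writes_commitment(writes: &[(&str, &str)]) -> u64 {
    let mut sorted: Vec<_> = writes.to_vec();
    sorted.sort(); // canonical order by (key, value)
    let mut acc = 0u64;
    for (key, value) in sorted {
        let mut h = DefaultHasher::new();
        acc.hash(&mut h); // chain in the previous link
        key.hash(&mut h);
        value.hash(&mut h);
        acc = h.finish();
    }
    acc
}
```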

6 StateCertificate (per tx, per shard — after quorum on execution)

When 2f+1 validators have the same state_root (and success) for a given tx, their execution votes are aggregated into a StateCertificate for that shard. These certs are for transactions that were in the block we just committed; one StateCertificate per (tx, shard). They feed into the TransactionCertificate (step 7) and are carried in the next block’s body.

{
  "StateCertificate": {
    "transaction_hash": "0xhash_txA",
    "shard_group_id": { "0": 1 },
    "read_nodes": [],
    "state_writes": [ "same as quorum votes" ],
    "outputs_merkle_root": "0xoutputs_merkle_root_txA",
    "success": true,
    "aggregated_signature": "<BLS G2 aggregated>",
    "signers": "<bitfield>",
    "voting_power": 3
  }
}
7 TransactionCertificate (per tx, all shards)

For single-shard txs, one StateCertificate per tx. For cross-shard, one per participating shard; then a TransactionCertificate combines them.

{
  "TransactionCertificate": {
    "transaction_hash": "0xhash_txA",
    "decision": "Accept",
    "shard_proofs": {
      "1": { "StateCertificate": "..." }
    }
  }
}

decision: Accept iff all shard proofs succeeded; otherwise Reject. shard_proofs: Map shard_id → StateCertificate. These certs (for txA, txB) are produced after block N is committed and will be in block N+1.
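The decision rule is a simple fold over the shard proofs; a sketch with illustrative types:

```rust
use std::collections::HashMap;

// decision: Accept iff every shard's StateCertificate reports success.
#[derive(Debug, PartialEq)]
enum Decision {
    Accept,
    Reject,
}

// shard_id -> success flag taken from that shard's StateCertificate.
fn decide(shard_proofs: &HashMap<u64, bool>) -> Decision {
    if shard_proofs.values().all(|&ok| ok) {
        Decision::Accept
    } else {
        Decision::Reject
    }
}
```

For a single-shard tx the map has one entry; for cross-shard txs one failing shard is enough to flip the whole transaction to Reject.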

8 Block N committed and stored — block stage: committed

When BFT has a QC for block N and the two-chain rule is satisfied: Commit: set_committed_state(height, block_hash, qc); then put_block(height, block, qc) (RocksDB: blocks, transactions, certificates column families). State: Block N's state root = parent state root (block N−1) + apply(certificates in block N). Those certificates are the TransactionCertificates from block N−1's execution (block N carries them). Apply them to the parent JMT to get the new root (already in header.state_root); commit those writes to JMT.

9 Next block (N+1) and state root again

Block N+1's header: state_root = parent_state_root(block N) + apply(certificates in block N+1). Those certificates are the TransactionCertificates for txA and txB (and any other txs in block N).

Summary formulas (quick reference)

Value — Formula / function

  • Block header hash — basic_encode(header) then Hash::from_bytes(...)
  • Transaction root — compute_merkle_root([hash(TAG || tx_hash) for each section in order])
  • State root (block) — prepare_block_commit(parent_state_root, certificates, local_shard) → JMT root. Certificates in this block = execution outcomes of the parent block's txs.
  • Block vote message — block_vote_message(shard, height, round, block_hash) → sign with BLS (types in crates/messages/src/notification/mod.rs)
  • Outputs merkle root (per tx) — compute_writes_commitment(state_writes) = hash chain over sorted writes
  • Execution vote message — exec_vote_message(tx_hash, state_root, shard, success) → sign with BLS
  • QC — BLS aggregate of 2f+1 block votes over the same message
  • StateCertificate — BLS aggregate of 2f+1 StateVoteBlocks with the same (tx_hash, state_root, success)

Design note

Why block vote first, then execution votes?

BlockVote only checks that the block is well-formed: state_root = parent_state_root + apply(certificates in this block). Those certificates are the execution outcomes of the parent block’s transactions (N−1). So validators can vote without running the current block’s transactions. Doing execution and StateVoteBlock in the same round would tie consensus to execution: every validator would have to run the block’s txs before voting. That would slow rounds (execution can be heavy), and any execution non-determinism or bug could prevent quorum on the block itself. Separating the two keeps consensus fast and simple; execution outcomes are then certified in the next phase and carried in the next block.

What if some txs are rejected?

The committed block is still valid and useful. Consensus committed ordering (this block contains txA, txB, …). After commit, execution produces a TransactionCertificate per tx with decision: Accept or Reject. Rejected txs get a cert with decision: Reject; that cert goes in block N+1 like any other. The state root of block N is unchanged by block N’s own tx outcomes — it’s parent + apply(certs from N−1). When block N+1 applies the certs from block N, it applies “no state change” for rejected txs (or records the reject). So the chain stays consistent: the committed block fixed the order; the following block’s certificates record which txs were accepted and which rejected.
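How "no state change for rejected txs" might look when block N+1 applies block N's certificates, as a sketch over a toy key → delta state model (names are illustrative):

```rust
use std::collections::HashMap;

// Simplified TransactionCertificate: a decision plus the tx's state writes.
struct Cert {
    accepted: bool,
    writes: Vec<(String, i64)>, // (key, balance delta) in a toy state model
}

// Applying a block's certificates to the parent state: accepted certs
// contribute their writes; rejected certs contribute nothing (the ordering
// was already fixed by the committed block, only the outcome is recorded).
fn apply_certs(mut state: HashMap<String, i64>, certs: &[Cert]) -> HashMap<String, i64> {
    for cert in certs {
        if !cert.accepted {
            continue; // Reject: state is untouched
        }
        for (key, delta) in &cert.writes {
            *state.entry(key.clone()).or_insert(0) += *delta;
        }
    }
    state
}
```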

Why so many certs? StateVoteBlock → StateCertificate → TransactionCertificate

Each layer serves a different purpose:

  • StateVoteBlock — One per validator, per tx. “I ran this tx and got these state writes; here’s my commitment (state_root) and success/fail.” The vote is on the tx’s execution outcome (writes + success bit).
  • StateCertificate — One per (tx, shard). When 2f+1 validators agree on the same outcome (same state_root, same success), their StateVoteBlocks are aggregated into a StateCertificate. So: “the committee for this shard agrees on how this tx executed on this shard.”
  • TransactionCertificate — One per tx, for the whole transaction. For single-shard txs it wraps the single StateCertificate; for cross-shard it collects shard_proofs (one StateCertificate per involved shard) and sets decision: Accept iff all shards succeeded, else Reject. This is what the next block needs: one cert per tx from the previous block to apply to the state root.

Chain: Many validators (StateVoteBlock) → quorum per tx per shard (StateCertificate) → one final cert per tx (TransactionCertificate) carried in the next block.

Multi-Shard Transaction

Transactions that span shards add receipts (cross-shard messages) and provisions. Each shard still runs BFT on its own block; the target shard’s block includes and applies the receipts provisioned by the source shard.

Step-by-step: Cross-shard (JSON + structures)

Assume: the example transaction CallMethod(Component(C), "transfer_to_bob", Decimal("10")) — C on Shard 0, Bob on Shard 1. The tx touches Shard 0 (source) and Shard 1 (target). consensus_shards = {0, 1}; protocol order by ShardGroupId. Cross-shard execution is provision-based (no 2PC coordinator): the ProvisionCoordinator on each node tracks a quorum of provisions from the other shard; then execution and certificate aggregation follow.

1 Mempool & routing (cross-shard)

The tx is sent to all shards in all_shards_for_transaction(tx). Each shard's mempool holds it with cross_shard: true. Each shard runs the same BFT block proposal; execution is provision-based (StateProvision, ProvisionCoordinator, then execution and certificates).

{
  "pool": {
    "0xhash_crossTx": {
      "tx": "<RoutableTransaction>",
      "status": "Pending",
      "cross_shard": true,
      "consensus_shards": [0, 1]
    }
  }
}
2 Block on source shard (Shard 0)

Shard 0's block producer includes the tx. Execution may produce StateProvision to send to Shard 1. BFT object is still Shard 0's block.

{
  "Block": {
    "header": {
      "BlockHeader": {
        "height": { "0": 5 },
        "state_root": "0x...",
        ...
      }
    },
    "transactions": [{ "RoutableTransaction": "crossTx" }],
    "certificates": [],
    "commitment_proofs": []
  }
}
3 StateProvision (source → target)

Validators on Shard 0 produce signed StateProvision (proof of state written) and broadcast to Shard 1.

{
  "StateProvision": {
    "transaction_hash": "0xhash_crossTx",
    "source_shard": { "0": 0 },
    "target_shard": { "0": 1 },
    "entries": [...],
    "block_height": 5,
    "validator": { "0": 0 },
    "signature": "<BLS G2>"
  }
}
4 ProvisionCoordinator (checklist)

On each node of Shard 1, the ProvisionCoordinator buffers StateProvisions. required_shards = {0}. When a quorum (2f+1) of matching provisions from Shard 0 has arrived, it emits ProvisioningComplete.

{
  "TxRegistration": {
    "transaction_hash": "0xhash_crossTx",
    "required_shards": [0],
    "provisions_by_shard": {
      "0": ["<StateProvision>", ...]
    },
    "complete": true
  }
}
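The checklist above can be sketched as a small Rust type; field names mirror the JSON, but the real ProvisionCoordinator's API may differ:

```rust
use std::collections::{HashMap, HashSet};

// Per-tx registration: provisions buffered per source shard; complete once
// every required shard has reached quorum (2f+1 matching provisions).
struct TxRegistration {
    required_shards: HashSet<u64>,
    provisions_by_shard: HashMap<u64, Vec<u64>>, // shard -> validator ids seen
    quorum: usize,                               // e.g. 2f+1
}

impl TxRegistration {
    fn add_provision(&mut self, source_shard: u64, validator: u64) {
        self.provisions_by_shard
            .entry(source_shard)
            .or_default()
            .push(validator);
    }

    // True iff every required shard has at least `quorum` provisions.
    fn complete(&self) -> bool {
        self.required_shards.iter().all(|shard| {
            self.provisions_by_shard
                .get(shard)
                .map_or(false, |v| v.len() >= self.quorum)
        })
    }
}
```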
5 CommitmentProof & block on target (Shard 1)

VerifyAndAggregateProvisions → CommitmentProof. Shard 1's block contains commitment_proofs and applies the receipt from Shard 0.

{
  "Block": {
    "header": { ... },
    "commitment_proofs": [
      {
        "CommitmentProof": {
          "source_shard": { "0": 0 },
          "transaction_hash": "0xhash_crossTx",
          "aggregated_signature": "<BLS>",
          "entries": "..."
        }
      }
    ]
  }
}
6 TransactionCertificate (cross-shard)

One StateCertificate per shard; TransactionCertificate combines them. decision: Accept iff all shard proofs succeeded.

{
  "TransactionCertificate": {
    "transaction_hash": "0xhash_crossTx",
    "decision": "Accept",
    "shard_proofs": {
      "0": { "StateCertificate": "..." },
      "1": { "StateCertificate": "..." }
    }
  }
}

Multi-shard summary

Order: Protocol order by ShardGroupId. ProvisionCoordinator requires quorum of StateProvision from each required_shard. CommitmentProof = aggregated 2f+1 provisions from a source shard. See Cross-Shard Transactions in Hyperscale-rs for the provision-based protocol and livelock (no 2PC coordinator in hyperscale-rs).

Example: Multi-Shard Transaction

Setup: Shard 0 (component C), Shard 1 (account Bob). Example transaction (Radix manifest style):

# Signer: Alice (Shard 0). C on Shard 0; Bob on Shard 1.
1. CallMethod(Component(C), "transfer_to_bob", Decimal("10"))   # C credits Bob with 10

C (Shard 0) executes the call and produces a receipt to credit Bob (Shard 1) with 10.

  1. Mempool (Shard 0): The transaction above enters Shard 0 mempool.
  2. Block on Shard 0: Shard 0 producer includes tx in Block_0. Execution: component C runs, produces Receipt(Shard0 → Shard1, Credit(Bob, 10)). Block_0 contains this receipt (or its hash).
  3. BFT on Shard 0: Shard 0 validators validate Block_0 (including receipt output), vote BlockVote(Block_0). Quorum on Block_0 → Block_0 finalized → receipt is provisioned to Shard 1.
  4. Shard 1 consumes provisions: Shard 1 receives the provision (receipt from Shard 0). Shard 1’s block producer includes “apply Receipt(Credit(Bob, 10))” in Block_1.
  5. BFT on Shard 1: Shard 1 validators validate Block_1 (Bob’s balance +10), vote BlockVote(Block_1). Quorum on Block_1 → Block_1 finalized.
  6. Finalization: The transaction as a whole is finalized when every block (and thus every sub-tx state) that contains a part of it is finalized. Here: when both Block_0 and Block_1 are finalized.

Message enums: BlockProposal(Block_0) (Shard 0) → BlockVote(Block_0) → Receipt(Shard0→Shard1, Credit(Bob,10)). Shard 1: BlockProposal(Block_1) containing the receipt application → BlockVote(Block_1) → Block_1 finalized.

Who is involved: Shard 0 block producer + Shard 0 validators (BFT on Block_0); Shard 1 block producer + Shard 1 validators (BFT on Block_1). BFT object on each shard is that shard’s block; “vote on Shard 0 provisions” = Shard 1 voting on the Shard 1 block that applies those provisions.

Rust Deep Dive: BFT and Node State

The following sections map the flow above to the hyperscale-rs codebase: the node state machine, BftState, voting rules, data structures, and routines (genesis → request build → broadcast → QC → commit).

BFT as a Rust entity

Main object: BftState in crates/bft/src/state.rs.

So at the stages we care about, “BFT” = the BftState instance plus the BFT handlers that run outside it (e.g. build_proposal, verify_and_build_qc in crates/bft/src/handlers.rs). The handlers are pure functions invoked by the runner when it executes BFT-related Actions; BftState does not build blocks or QCs itself—it emits actions, and the runner calls the handlers.

Voting and rules

Validators vote for a specific block at a given height and round. A vote is "I support this block (hash) at this (height, round)." Votes are collected per block; when a quorum (e.g. 2f+1 out of 3f+1) of validators have voted for the same block, a Quorum Certificate (QC) can be built. The QC proves "enough of us agreed on this block" and is used to extend the chain and to justify the next proposal (parent QC).
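The quorum arithmetic (n = 3f+1 validators, quorum = 2f+1 matching votes) and a minimal vote collector can be sketched as follows (illustrative, not the crate's actual VoteSet):

```rust
use std::collections::HashSet;

// With n = 3f+1 validators and at most f faulty, 2f+1 matching votes
// certify a block.
fn quorum_threshold(n: usize) -> usize {
    let f = (n - 1) / 3;
    2 * f + 1
}

// Collects voters for a single block at a given (height, round).
struct VoteSet {
    voters: HashSet<u64>,
}

impl VoteSet {
    fn add(&mut self, voter: u64) {
        self.voters.insert(voter); // duplicate votes from one voter count once
    }
    fn has_quorum(&self, n: usize) -> bool {
        self.voters.len() >= quorum_threshold(n)
    }
}
```

Using a set (not a counter) is what makes a repeated vote from the same validator count only once toward quorum.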

Height and round

Metaphor: class voting on "what to do on Friday"

Imagine your class is voting on "what to do on Friday":

Rule: For one vote (same height), in one try (same round), each kid can only choose one option. If Alice says "movie" in round 1, that's fine. If Alice later also says "park" in the same round 1, that's cheating—she's saying two different things for the same vote. That's equivocation, and we treat it as Byzantine (bad behavior).

"Different rounds (revotes) are allowed" means: In round 1 Alice said "movie." Time ran out; we start round 2. Now Alice is allowed to change her mind and say "park" in round 2. That's a revote, not cheating, because it's a different round.

So: for the same (height, round), each validator gets exactly one choice; a later round permits a legitimate revote.

Rules in the protocol

  • One vote per validator per (height, round).
  • Two different block hashes from the same validator at the same (height, round) is equivocation (Byzantine).
  • Voting for a different block in a later round is a revote, not equivocation.

Where this shows up in the code

BftState keeps:

  • voted_heights: HashMap<u64, (Hash, u64)> — for each height we've voted on, the (block_hash, round) we voted for, so we never vote twice for the same height in the same round.
  • received_votes_by_height (and similar structures) — record, per (height, round), which validator voted for which block; that is how equivocation is detected (same validator, same (height, round), two different block hashes).
  • VoteSet — collects votes for a single block; when the count reaches quorum, the node can emit Action::VerifyAndBuildQuorumCertificate.

Commit only happens when the two-chain condition is satisfied (on_block_ready_to_commit → commit_block_and_buffered).
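A sketch of how equivocation detection can be implemented over such maps (hashes shortened to u64 for illustration; the real structures differ):

```rust
use std::collections::HashMap;

// Per (height, round): which block hash each validator voted for.
// A second, *different* hash from the same validator at the same
// (height, round) is equivocation; a different round is a legal revote.
#[derive(Default)]
struct VoteLedger {
    seen: HashMap<(u64, u64), HashMap<u64, u64>>, // (height, round) -> voter -> hash
}

impl VoteLedger {
    /// Records a vote; returns true iff it equivocates against an earlier one.
    fn record(&mut self, height: u64, round: u64, voter: u64, block_hash: u64) -> bool {
        let per_voter = self.seen.entry((height, round)).or_default();
        match per_voter.get(&voter) {
            Some(&prev) if prev != block_hash => true, // two blocks, same (h, r)
            Some(_) => false,                          // duplicate of same vote
            None => {
                per_voter.insert(voter, block_hash);
                false
            }
        }
    }
}
```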

Core BFT data structures

BftState (the main state machine)

Location: crates/bft/src/state.rs — pub struct BftState { ... }

Identity / config

Chain state

Pending blocks, votes, certified

Verification and buffers

Supporting types

Routines by stage (genesis → request build → broadcast → QC → commit)

Initialized genesis block

BftState::initialize_genesis sets up the chain from a known genesis block, so every later height has a well-defined parent.

Requesting block build for proposal

So the object that does “Requesting block build for proposal” is BftState; the routine is BftState::on_proposal_timer. It does not build the block itself; it emits Action::BuildProposal, which the runner (action handler) executes.

Who actually builds the block and emits “Broadcasting proposal”

Building the block: Not BftState. The runner handles Action::BuildProposal in the action handler (crates/node/src/action_handler.rs): it calls hyperscale_bft::handlers::build_proposal(storage, ...) (crates/bft/src/handlers.rs).

The action handler then produces Event::ProposalBuilt { height, round, block, block_hash } and enqueues it.

“Broadcasting proposal”: When that event is handled, the node state machine calls self.bft.on_proposal_built(height, round, block, block_hash).

So: the object that logs “Broadcasting proposal” and does broadcast + own vote is BftState; the routine is BftState::on_proposal_built. The block value is built by handlers::build_proposal (pure function over storage and proposal parameters), not by BftState itself.

QC formed

When quorum votes arrive, the runner executes handlers::verify_and_build_qc to build the QC, and BftState::on_qc_formed records it.

Committing block

When the two-chain rule is satisfied, on_block_ready_to_commit calls BftState::commit_block_and_buffered to commit the block (and any buffered blocks).

Summary: BFT Rust internal flow

Flow (same style as Transaction Flow): genesis → request build → broadcast → QC → commit.

BftState / runner / handlers
1. Initialized genesis block
BftState::initialize_genesis — chain starts from a known genesis.
2. Requesting block build
BftState::on_proposal_timer — emits Action::BuildProposal.
3. Block built
Runner calls hyperscale_bft::handlers::build_proposal → event ProposalBuilt.
4. Broadcasting proposal
BftState::on_proposal_built — emits BroadcastBlockHeader + own vote.
5. QC formed
BftState::on_qc_formed — QC built by handlers::verify_and_build_qc.
6. Committing block
BftState::commit_block_and_buffered — called from on_block_ready_to_commit when two-chain rule allows.

Takeaway: BFT as data structures = BftState plus PendingBlock, VoteSet, and related maps/sets. BFT as routines = the on_* methods on BftState plus the pure handlers (build_proposal, verify_and_build_qc) used by the runner when executing BFT actions.

Quiz: BFT inner workings

Answer based on the content above. Pass threshold: 70%.