Transaction Flow: User to Finality

⏱️ Duration: ~30 min 📊 Difficulty: Basic 🎯 Hyperscale-rs Specific

Why This Diagram?

Hyperscale-rs is a consensus layer, not a full blockchain stack. This diagram shows the end-to-end path of a transaction from the user to finality, and which parts Hyperscale touches versus which are outside (e.g. wallet, Radix Engine semantics, network client). Hover over each step to see which crates implement it.

Basic level: This is the only Hyperscale-specific basic module. The web3 modules (Blockchain, Consensus, Distributed Systems, State Machines) provide the foundation. After this, go to Intermediate for Hyperscale Overview, Crate Groups (with 10-question quizzes per group), Codebase Exploration, and First Contribution.

Where BFT comes in: Steps 1–6 get the tx into the system (user, network, node, cross-shard determination, mempool, and for cross-shard txs, travel to involved shards). BFT consensus runs from step 7 (proposer selection) through step 11 (block committed) per shard. Steps 12–14 are execution, then (for cross-shard) coordination & composition, then finality.

End-to-End Transaction Flow

Legend: Hyperscale-rs (consensus layer) vs. outside Hyperscale (wallet, engine, network).
1. User signs transaction
Wallet creates and signs a transaction (e.g. transfer, smart contract call).

Crates

Not in Hyperscale — wallet / client application.

2. Transaction submitted to network
RPC or gateway sends the signed tx to a node (or broadcasts it).

Crates

Network client / RPC — outside Hyperscale; the production crate receives the tx on the node side.

3. Node receives transaction
Runner turns incoming bytes into Events and feeds the node.

Crates

  • production — I/O, turns network into events
  • node — NodeStateMachine receives events
4. Cross-shard determination
The tx is analyzed for which NodeIDs (components, resources, packages, accounts) it reads or writes. If those NodeIDs belong to more than one shard, the tx is classified as a cross-shard transaction.

Crates

  • node — routing, shard mapping
  • core — Event/Action types

Shard assignment of NodeIDs is defined by the protocol / Radix Engine; consensus uses it to decide single- vs cross-shard.
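
A minimal sketch of this classification, assuming a simple modulo mapping from NodeID to shard (the real assignment is protocol-defined, and the type names here are illustrative, not Hyperscale-rs's actual API):

```rust
// Cross-shard classification sketch. NodeId, ShardId, and the modulo mapping
// are stand-ins; only the decision logic mirrors the step above.
use std::collections::HashSet;

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct NodeId(u64);

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct ShardId(u32);

/// Map a NodeId to its shard. The real assignment is defined by the
/// protocol / Radix Engine; here we assume a simple modulo over the id.
fn shard_of(node_id: NodeId, shard_count: u32) -> ShardId {
    ShardId((node_id.0 % shard_count as u64) as u32)
}

/// Collect the shards of every NodeId the tx reads or writes.
/// More than one distinct shard => cross-shard transaction.
fn involved_shards(touched: &[NodeId], shard_count: u32) -> HashSet<ShardId> {
    touched.iter().map(|&n| shard_of(n, shard_count)).collect()
}

fn main() {
    let touched = vec![NodeId(7), NodeId(12), NodeId(42)];
    let shards = involved_shards(&touched, 4);
    let cross_shard = shards.len() > 1;
    println!("involved shards: {:?}, cross-shard: {}", shards, cross_shard);
}
```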

5. Mempool (per shard)
Single-shard: tx goes into that shard's mempool. Cross-shard: the receiving node enqueues the tx in its own shard’s mempool (if the tx touches that shard). For other involved shards, the tx is sent in step 6 and nodes there enqueue it in their mempools — so the tx ends up in the mempool of every involved shard.

Crates

  • mempool — transaction pool
  • node — composes mempool
  • core — Event/Action types
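
A sketch of what a per-shard pool might look like, assuming a simple hash-keyed map; the real mempool crate's API and policies (ordering, eviction, limits) are not shown:

```rust
// Per-shard mempool sketch: each shard's nodes keep their own pool of pending
// transactions, keyed by hash so a tx received twice (e.g. via RPC and gossip)
// is enqueued only once. Types are illustrative.
use std::collections::HashMap;

type TxHash = u64;

#[derive(Clone, Debug)]
struct PendingTx {
    hash: TxHash,
    payload: Vec<u8>,
}

#[derive(Default)]
struct ShardMempool {
    pending: HashMap<TxHash, PendingTx>,
}

impl ShardMempool {
    /// Enqueue a tx; returns false if it was already present.
    fn enqueue(&mut self, tx: PendingTx) -> bool {
        self.pending.insert(tx.hash, tx).is_none()
    }

    /// Proposer side: take up to `max` txs for the next block.
    fn take_for_block(&mut self, max: usize) -> Vec<PendingTx> {
        let hashes: Vec<TxHash> = self.pending.keys().copied().take(max).collect();
        hashes.into_iter().filter_map(|h| self.pending.remove(&h)).collect()
    }
}

fn main() {
    let mut mempool = ShardMempool::default();
    let tx = PendingTx { hash: 0xabc, payload: vec![1, 2, 3] };
    assert!(mempool.enqueue(tx.clone()));
    assert!(!mempool.enqueue(tx)); // duplicate is ignored
    println!("{} tx(s) for next block", mempool.take_for_block(10).len());
}
```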
6. (Cross-shard) Tx travels to each involved shard
For cross-shard txs, the receiving node (or the routing layer) sends the tx (or sub-transactions) to the nodes of every other involved shard. Those nodes enqueue it in their shard’s mempool. A coordinator (one of these shards or a designated role) is chosen so that later (step 13) it can drive prepare/commit/abort in a fixed order. Each shard then runs steps 7–12 for its part.

Crates

  • provisions — centralized provision coordination; drives cross-shard flow
  • production — cross-shard messaging (libp2p Gossipsub on shard-scoped topics)
  • node — composes provisions, coordination state

In Hyperscale-rs, the node sends the tx to other shards via the production network layer: libp2p Gossipsub with shard-scoped topics (e.g. hyperscale/{msg_type}/shard-{id}/1.0.0). RPC is only for client→node submission.
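
A small sketch of the topic construction, based on the format shown above; the exact message-type strings are assumptions:

```rust
// Shard-scoped Gossipsub topic sketch, following the format
// hyperscale/{msg_type}/shard-{id}/1.0.0 described above.
fn shard_topic(msg_type: &str, shard_id: u32) -> String {
    format!("hyperscale/{}/shard-{}/1.0.0", msg_type, shard_id)
}

fn main() {
    // A node forwarding a cross-shard tx would publish on each involved
    // shard's topic (the "transaction" message-type name is an assumption).
    for shard_id in [0u32, 3] {
        println!("publish on {}", shard_topic("transaction", shard_id));
    }
}
```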

The livelock crate handles cross-shard deadlock detection and prevention.

7. Proposer selection (per shard, per round)
BFT starts here. Each shard runs its own BFT instance with its own view/round. The proposer (block leader) is chosen deterministically from the shard's validator set (e.g. round-robin by validator identity or view number modulo validator count).

Crates

  • bft — view, leader election
  • core — traits, time
  • types — block, validator set
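
A sketch of deterministic round-robin selection by view number, as described above; ValidatorId and the exact rotation rule are illustrative, not the bft crate's real types:

```rust
// Proposer selection sketch: rotate through the shard's validator set by
// view number. Every honest node computes the same result locally.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct ValidatorId(u16);

/// Deterministic: depends only on the validator set and the view number.
fn proposer_for_view(validators: &[ValidatorId], view: u64) -> ValidatorId {
    let idx = (view % validators.len() as u64) as usize;
    validators[idx]
}

fn main() {
    let validators: Vec<ValidatorId> = (0u16..4).map(ValidatorId).collect();
    for view in 0..6 {
        println!("view {view}: proposer {:?}", proposer_for_view(&validators, view));
    }
}
```

Because the rule depends only on the validator set and the view number, every honest node in the shard agrees on the proposer without extra communication.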
8. Block proposal
Proposer builds block (header + txs from mempool), broadcasts header; validators receive it.

Crates

  • bft — proposal logic
  • types — Block, BlockHeader
  • node — routes ProposalTimer, etc.
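
A block-building sketch under simplified types: when its ProposalTimer fires, the proposer takes transactions from its shard's mempool and builds a header whose parent_qc is the latest QC. The real structures live in types and bft; everything here is illustrative:

```rust
// Block proposal sketch: header carries parent_qc = latest QC, payload comes
// from the shard's mempool. Field names and hashing are stand-ins.
struct QuorumCertificate {
    certified_height: u64,
}

struct BlockHeader {
    height: u64,
    parent_qc: QuorumCertificate,
    tx_root: u64, // stand-in for a payload hash / merkle root
}

struct Block {
    header: BlockHeader,
    transactions: Vec<u64>, // stand-in for full transactions
}

fn build_block(latest_qc: QuorumCertificate, mempool_txs: Vec<u64>, max_txs: usize) -> Block {
    let transactions: Vec<u64> = mempool_txs.into_iter().take(max_txs).collect();
    let header = BlockHeader {
        height: latest_qc.certified_height + 1,
        tx_root: transactions.iter().sum(), // illustrative payload "hash"
        parent_qc: latest_qc,
    };
    Block { header, transactions }
}

fn main() {
    let block = build_block(QuorumCertificate { certified_height: 41 }, vec![1, 2, 3, 4], 2);
    println!(
        "proposed block at height {} with {} txs",
        block.header.height,
        block.transactions.len()
    );
}
```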
9. Validators vote
Validators in this shard's BFT instance validate the block (e.g. check it extends from the parent QC, payload and hashes are valid, and it obeys consensus rules) and, if valid, sign a vote. 2f+1 votes are required for a quorum (BFT fault model: n = 3f+1 nodes per shard).

Crates

  • bft — block validation logic and voting; collects votes
  • types — Block, Vote, signatures
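
The quorum arithmetic as a tiny worked example of the n = 3f + 1 fault model:

```rust
// With n validators per shard, up to f = (n - 1) / 3 may be faulty,
// and 2f + 1 votes form a quorum.
fn quorum_threshold(n: usize) -> usize {
    let f = (n - 1) / 3; // maximum tolerated faulty validators
    2 * f + 1
}

fn main() {
    for n in [4, 7, 10, 100] {
        println!("n = {n}: quorum = {} votes", quorum_threshold(n));
    }
}
```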
10. Quorum certificate (QC) formed
How a QC is formed: (i) the proposer for height H built and broadcast the block header (step 8); (ii) validators validated the block and voted, with votes broadcast to the shard (step 9); (iii) once any validator holds 2f+1 valid votes, it requests a QC build, and that node’s latest_qc is set to the new QC; (iv) the QC is not sent as a separate message; the next proposer obtains it either by forming it from votes itself or by extracting it from a received block header; (v) the next proposer builds block H+1 with parent_qc = QC(H) and broadcasts H+1, so everyone then sees QC(H) in H+1’s header.

Crates & code

  • bft — vote collection, QC build request, latest_qc, block build with parent_qc
  • types — BlockHeader.parent_qc (block.rs:75), QuorumCertificate
  • core — Action::PersistAndBroadcastVote (action.rs:444), Action::VerifyAndBuildQuorumCertificate (action.rs:113)

Locations: BFT state emits vote broadcast in bft/src/state.rs (≈2575), emits QC build in bft/src/state.rs (≈2765); on_qc_formed sets latest_qc (state.rs:3583); latest_qc from received header (state.rs:1597); block build uses latest_qc as parent_qc (state.rs:1013–1017, 1147, 1182–1208). vote_set.build_qc in bft/src/vote_set.rs (307).
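
A vote-collection sketch: accumulate votes for a block hash and build a QC once 2f+1 distinct validators have voted. The types are illustrative; the real logic lives in bft (vote_set.rs, state.rs) at the locations noted above:

```rust
// QC formation sketch: a vote set per block hash, quorum = 2f + 1.
use std::collections::HashSet;

type BlockHash = [u8; 32];

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct ValidatorId(u16);

struct Vote {
    block: BlockHash,
    voter: ValidatorId,
    // In the real protocol the vote also carries a signature over the block.
}

struct QuorumCertificate {
    block: BlockHash,
    voters: Vec<ValidatorId>,
}

struct VoteSet {
    block: BlockHash,
    voters: HashSet<ValidatorId>,
    quorum: usize, // 2f + 1
}

impl VoteSet {
    fn new(block: BlockHash, quorum: usize) -> Self {
        Self { block, voters: HashSet::new(), quorum }
    }

    /// Add a vote; return a QC as soon as the quorum is reached.
    fn add_vote(&mut self, vote: Vote) -> Option<QuorumCertificate> {
        if vote.block != self.block {
            return None; // vote for a different block
        }
        self.voters.insert(vote.voter);
        if self.voters.len() >= self.quorum {
            Some(QuorumCertificate {
                block: self.block,
                voters: self.voters.iter().copied().collect(),
            })
        } else {
            None
        }
    }
}

fn main() {
    let block = [0u8; 32];
    let mut votes = VoteSet::new(block, 3); // n = 4, f = 1, quorum = 3
    for v in 0u16..3 {
        if let Some(qc) = votes.add_vote(Vote { block, voter: ValidatorId(v) }) {
            println!("QC formed with {} voters", qc.voters.len());
        }
    }
}
```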

11. Block committed
Who: Every validator node in the shard (not only the proposer). Once the commit rule is satisfied (e.g. two-chain: this block and the next have QCs), each node commits. Commit mechanics: (1) accept the block as final, (2) append it to the local chain, (3) trigger execution (run the transactions in the block), (4) trigger persistence (write to storage). All honest nodes do this in sync with the agreed order.

Crates

  • node — CommitBlock action, composition
  • bft — commit rule
  • core — Action::CommitBlock

“Node” = any validator running the NodeStateMachine in this shard; the proposer is one of them. All commit the same block in the same order.
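
A sketch of the two-chain commit check described above. The commit rule in step 11 is given as an example ("e.g. two-chain"), so this is only one possible shape; a real implementation would also check the direct parent link rather than only consecutive heights, and the types are illustrative:

```rust
// Two-chain commit sketch: block H is committed once two consecutive blocks
// are certified, i.e. we see a QC for H and a QC for its direct child H+1.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct BlockId {
    height: u64,
}

/// A QC proves that 2f+1 validators voted for `block`.
struct QuorumCertificate {
    block: BlockId,
}

/// Given the previous highest QC and a newly formed/received QC, decide
/// which block (if any) becomes committed.
fn commit_decision(
    prev_qc: &QuorumCertificate,
    new_qc: &QuorumCertificate,
) -> Option<BlockId> {
    // Two consecutive certified blocks: prev_qc certifies H, new_qc certifies
    // its direct child H+1 => H is final and can be committed.
    if new_qc.block.height == prev_qc.block.height + 1 {
        Some(prev_qc.block)
    } else {
        None // a gap (e.g. after a view change) does not commit yet
    }
}

fn main() {
    let qc_h = QuorumCertificate { block: BlockId { height: 10 } };
    let qc_h1 = QuorumCertificate { block: BlockId { height: 11 } };
    match commit_decision(&qc_h, &qc_h1) {
        Some(b) => println!("commit block at height {}", b.height),
        None => println!("nothing committed yet"),
    }
}
```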

12. Execution
Transactions in the block are executed (per shard). Hyperscale runs the execution state machine; semantics (e.g. Radix Engine) may be external.

Crates

  • execution — execution state machine; cross-shard 2PC coordination
  • engine — Radix Engine integration for smart contract execution
  • node — composes execution

Execution semantics (Radix Engine) are in engine crate / vendor; BFT only orders and coordinates.

13. (Cross-shard) Coordination & composition
For cross-shard txs, atomic composability works like this. Communication & order: shards talk in a fixed protocol order (e.g. one coordinator shard drives 2PC). The coordinator sends prepare to each involved shard; each shard runs its part (lock/reserve, no visible state change yet) and replies yes/no. If all vote yes, the coordinator sends commit; otherwise abort. State updates: only after the decision does each shard apply updates, in an agreed order (e.g. by shard ID or coordinator order): commit = apply state; abort = release locks. That way every node sees the same composed outcome, and no shard commits while another aborts. Single-shard txs skip this step.

Crates

  • provisions — centralized provision coordination for cross-shard txs
  • execution — transaction execution with cross-shard 2PC coordination
  • node — composes provisions, execution, core
  • core — Event/Action for coordination

See module “Cross-Shard Transactions” for 2PC and provision coordination details.
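
A coordinator-side sketch of the prepare/commit/abort decision described above. The synchronous call shape and type names are illustrative; Hyperscale-rs drives this through events and actions in the provisions and execution crates rather than direct calls:

```rust
// 2PC decision sketch: commit only if every involved shard prepared (yes),
// otherwise abort. Shards apply the decision afterwards in an agreed order.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum Decision {
    Commit,
    Abort,
}

#[derive(Clone, Copy, Debug)]
enum PrepareVote {
    Yes, // shard locked/reserved its part, no visible state change yet
    No,  // shard cannot prepare (e.g. conflict)
}

/// Phase 1: collect prepare votes from all involved shards.
/// Phase 2: decide commit iff every shard voted yes.
fn two_phase_decision(votes: &[PrepareVote]) -> Decision {
    if votes.iter().all(|v| matches!(v, PrepareVote::Yes)) {
        Decision::Commit
    } else {
        Decision::Abort
    }
}

fn main() {
    // Each shard then applies the decision in an agreed order (e.g. by shard
    // id): commit = apply the prepared state; abort = release locks.
    let votes = [PrepareVote::Yes, PrepareVote::Yes, PrepareVote::No];
    println!("decision: {:?}", two_phase_decision(&votes));
}
```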

14. Finality & persistence
BFT gives one-round finality (no reorg after QC). State/storage may be in-node or external.

Crates

  • bft — finality rule
  • node — state, persistence

Tip: Hover over a step to see which crates implement it. Popup closes when you move the cursor away.

Clarifying the flow (multi-shard and proposer)

A few common points to keep straight:

  • Proposer is per shard, per round. The node that receives the tx (e.g. via RPC) belongs to some shard, but that does not make it the proposer. Proposer is chosen deterministically per shard (e.g. round-robin by view). So for shard A, “the proposer for this round” is one validator in A; for shard B, it’s one validator in B. They are different nodes. The receiving node just ingests the tx (steps 3–6); the tx lands in mempool(s) of every involved shard. The proposer pulls from its shard’s mempool when its ProposalTimer fires and builds a block — it doesn’t “receive the tx” specially.
  • Who enqueues in which mempool? The node that receives the tx (e.g. RPC node) belongs to one shard. It can enqueue the tx in its own shard’s mempool (if the tx touches that shard). It cannot directly enqueue into another shard’s mempool — that shard’s mempool lives on that shard’s nodes. So for cross-shard: the receiving node enqueues in its shard’s mempool (step 5) and sends the tx (or sub-tx) to the other involved shards (step 6); nodes in those shards receive it and enqueue it in their mempools. The tx thus ends up in the mempools of every involved shard, but each shard’s own nodes do the enqueuing: the receiving node for its shard, and the other shards’ nodes once they receive the tx.
  • Tx does not “broadcast until it lands on the proposer.” Once the tx is in the mempools (as above), each shard’s proposer pulls from that shard’s mempool when its ProposalTimer fires and builds a block. The proposer broadcasts the block (header) to validators in that shard only, not to other shards.
  • Proposer sends a block to its shard, not “the block” to all shards. The proposer for shard S builds one block (header with parent_qc = QC for previous block in S’s chain, plus txs from S’s mempool). It broadcasts that block (or header; full block is assembled via gossip) to validators in shard S only. So: one block per shard, broadcast inside that shard. Other shards have their own proposers and their own blocks at the same “logical” time.
  • Voting and execution order. Validators in that shard vote on the block (BFT: do we agree on this block?). That yields a QC for that shard’s block. Then the block is committed (step 11). After commit, execution runs (step 12): the transactions in the block are run (e.g. via Radix Engine). So: BFT agrees on block → commit → then execute. There is no “each shard votes on validity and reports to the proposer” in the sense of one global proposer; each shard votes on its own block and gets its own QC.
  • Cross-shard: no single “proposer finalizes with all shards’ QCs.” Each shard has its own chain and its own QC. For a cross-shard tx, the tx (or sub-tx) is in each involved shard’s mempool; each shard may include it in its block; each shard runs BFT (propose → vote → QC → commit) and then execution. Atomicity across shards is achieved by 2PC (step 13): a coordinator drives prepare → all shards prepare (lock/reserve) and reply yes/no → commit or abort → each shard applies state in an agreed order. So “all report success” is the 2PC prepare phase; then commit/abort and apply. There is no single block that “includes the votes (QC) of each shard” — each shard’s block has its own QC in the next block’s header (the normal two-chain rule per shard).
  • What is the coordinator? For the full picture (2PC coordinator vs ProvisionCoordinator, protocol order, concepts), see the Intermediate module Cross-Shard Transactions in Hyperscale-rs.

For how atomic composability works (prepare → commit/abort → state updates), the two coordinators (2PC vs ProvisionCoordinator), how order is fixed, manifest vs protocol order, and concepts (shards, proposer, NodeID, finality), see the Intermediate module Cross-Shard Transactions in Hyperscale-rs.

Crate groupings (code reading order)

The codebase is grouped by transaction-flow progression. Read the groups in order (1 → 6); use the quizzes in the next module to check understanding.

  • 1. First contact (crates: production, node, mempool, types, core, messages): How Hyperscale receives a tx at the RPC and gets it into the right shards’ mempools.
  • 2. Sharding and routing (crates: types, core, node): Who does what once a tx is decomposed into NodeIds and each shard is responsible for a slice of state.
  • 3. Proposing and building blocks (crates: bft, mempool, types, core): How one validator becomes the proposer and assembles the next block from the mempool.
  • 4. Voting and committing (crates: bft, types, core): How validators agree on a block and when it is finally committed (votes and QCs).
  • 5. Execution after commit (crates: execution, engine, node, types, core): Who runs transactions after commit and how single-shard vs cross-shard paths diverge.
  • 6. Cross-shard: provisions and livelock (crates: provisions, execution, livelock, types, core): How state moves between shards and how Hyperscale avoids deadlock.

In Intermediate, do Crate Groups & Quizzes next (one-liners, code pointers, 10-question quiz per group).

Practical next steps: run the code and debug

Use these steps to run Hyperscale and follow a transaction with a debugger. They assume you have the repo cloned and cargo build works.

  1. Read Event and Action first. The flow is “event in → state machine → actions out; runner performs actions and feeds back events” (a minimal sketch of this loop follows the list). Skim core::event and core::action (e.g. action.rs / event.rs) so you know what to break on.
  2. Run a node or the simulator. Follow the repo README to run a single node or the sim (e.g. cargo run for the production runner or sim binary). Submit one transaction and note its hash (from logs or RPC).
  3. Follow one tx with a debugger. In your IDE, set breakpoints on: SubmitTransaction / TransactionGossipReceived (first contact), then BlockCommitted (BFT), then TransactionExecuted (execution), then where the certificate is included in a block and status becomes Completed. Step through to see which crates handle each step.
  4. Use the crate groupings. The production runner is large. Use the groupings above to jump to “first contact” (RPC handler, submit path, gossip) and “actions to network” (e.g. BroadcastToShard handling) instead of reading the production crate top to bottom.
  5. When things go wrong: tx stuck in Pending → check mempool and gossip (did the tx reach all involved shards?). Block not committing → BFT and QC (round/timeout, vote collection). Cross-shard tx stuck after Executed → certificates and inclusion in a later block (provisions, certificate aggregation).
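
As referenced in step 1, a minimal sketch of the event/action loop. The variant names and fields here are illustrative stand-ins, not core's actual Event/Action types:

```rust
// Event/action loop sketch: the runner feeds events into a pure state
// machine, which returns actions; the runner performs them (I/O) and feeds
// the results back as new events.
#[derive(Debug)]
enum Event {
    TransactionSubmitted { tx_hash: u64 },
    VoteReceived { block_height: u64 },
}

#[derive(Debug)]
enum Action {
    AddToMempool { tx_hash: u64 },
    BroadcastVote { block_height: u64 },
}

/// Pure state machine: no I/O, just event in -> actions out.
fn handle_event(event: Event) -> Vec<Action> {
    match event {
        Event::TransactionSubmitted { tx_hash } => {
            vec![Action::AddToMempool { tx_hash }]
        }
        Event::VoteReceived { block_height } => {
            vec![Action::BroadcastVote { block_height }]
        }
    }
}

fn main() {
    // The runner (production or sim) would perform these actions against the
    // network and storage, then translate responses back into Events.
    for event in [
        Event::TransactionSubmitted { tx_hash: 0xabc },
        Event::VoteReceived { block_height: 42 },
    ] {
        for action in handle_event(event) {
            println!("perform {:?}", action);
        }
    }
}
```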

Quiz: Transaction Flow (big picture)

Answer based on the diagram and concepts above. Pass threshold: 70%.