Learning Objectives
- Understand 2PC and provision coordination in hyperscale-rs
- Map a complex Radix manifest to shards, provisioning, and coordination
- See how protocol order (ShardGroupId) differs from manifest instruction order
- Understand livelock prevention in cross-shard flow
- Trace cross-shard flow in the codebase (types, topology, execution, provisions)
How atomic composability works (cross-shard)
The diagram in Transaction Flow emphasizes that cross-shard atomicity depends on the order in which shards communicate and on when state updates become visible:
- Coordinator (one shard or a designated role) sends prepare to every involved shard in a defined order.
- Each shard prepares: runs its part of the tx (e.g. reserve/lock), but does not make the state update visible yet. It replies yes or no.
- When all have voted yes, coordinator sends commit; otherwise it sends abort to all.
- State updates: Each shard then applies updates in an agreed order (e.g. by shard ID, or the order the coordinator specifies). On commit, each applies its state change; on abort, each releases locks. No shard commits while another aborts — the protocol order ensures a single composed outcome.
So shards communicate in this fixed protocol order (prepare phase → decision → apply phase), and state updates happen only after the decision, in the same order everywhere. That is what makes the cross-shard tx atomic and composable.
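To make the shape of this concrete, here is a minimal Rust sketch of the decision logic, under stated assumptions: Vote, Decision, send_prepare, and send_decision are illustrative stand-ins for the real (asynchronous) types and network layer in the execution crate, not its actual API.

use std::collections::BTreeSet;

type ShardGroupId = u64;

#[derive(Debug, Clone, Copy, PartialEq)]
enum Vote { Yes, No }

#[derive(Debug, Clone, Copy, PartialEq)]
enum Decision { Commit, Abort }

// Sketch of the coordinator's role: prepare every involved shard in the
// protocol-fixed order, collect votes, then broadcast one composed outcome.
fn run_two_phase_commit<F, G>(
    involved: &BTreeSet<ShardGroupId>, // BTreeSet => iteration is in ShardGroupId order
    mut send_prepare: F,               // stand-in for the network layer
    mut send_decision: G,
) -> Decision
where
    F: FnMut(ShardGroupId) -> Vote,
    G: FnMut(ShardGroupId, Decision),
{
    // Phase 1 (simplified: real prepare is sent to all shards and votes are collected asynchronously).
    let all_yes = involved.iter().all(|&shard| send_prepare(shard) == Vote::Yes);

    // Phase 2: a single decision goes to every involved shard.
    let decision = if all_yes { Decision::Commit } else { Decision::Abort };
    for &shard in involved {
        send_decision(shard, decision); // shards apply (or release locks) in this same order
    }
    decision
}

fn main() {
    let involved: BTreeSet<ShardGroupId> = [1, 2, 3].into_iter().collect();
    let decision = run_two_phase_commit(
        &involved,
        |shard| { println!("prepare -> shard {shard}"); Vote::Yes }, // every shard votes yes here
        |shard, d| println!("{d:?} -> shard {shard}"),
    );
    assert_eq!(decision, Decision::Commit);
}

The BTreeSet is what fixes the order in this sketch: both the prepare loop and the decision loop walk the involved shards in ascending ShardGroupId.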
Two coordinators: 2PC coordinator and ProvisionCoordinator
Cross-shard flow uses two different “coordinator” ideas:
- 2PC coordinator (step 13 in the tx flow). This is one of the involved shards (or a designated role), not a separate server. Validators in that shard run the logic that sends prepare to each involved shard (in a fixed order), collects yes/no votes, and then sends commit (if all voted yes) or abort. So it's the shard that "asks whether everyone is ready" and then decides go or cancel. Implementation: execution crate (cross-shard 2PC coordination).
- ProvisionCoordinator (provisions crate). This is a sub-state machine on every node, not a single elected node. It tracks provisions: after a shard commits its part of a cross-shard tx, it sends a signed proof (a "provision") to the other shards. ProvisionCoordinator on each node keeps a checklist: “For this tx, do we have quorum of provisions from shard 1? From shard 2? …” When it has enough from every required shard, it emits ProvisioningComplete so execution can proceed. So it’s the “checklist” for “did we get enough proofs from the prerequisite shards?” — not the prepare yes/no (that’s 2PC).
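As a rough illustration of that checklist, here is a Rust sketch with invented names (ProvisionChecklist, record_provision, quorum are not the real API; the real ProvisionCoordinator in the provisions crate verifies signed provisions and tracks quorum per shard):

use std::collections::{BTreeMap, BTreeSet};

type ShardGroupId = u64;
type ValidatorId = u64;

// Hypothetical per-transaction "checklist" state.
struct ProvisionChecklist {
    required_shards: BTreeSet<ShardGroupId>,                  // every other participating shard
    received: BTreeMap<ShardGroupId, BTreeSet<ValidatorId>>,  // who has signed a provision, per shard
    quorum: usize,                                            // stand-in for that shard's 2f+1
}

impl ProvisionChecklist {
    // Record one signed provision; returns true once every required shard has quorum,
    // which is the moment the real coordinator would emit ProvisioningComplete.
    fn record_provision(&mut self, from_shard: ShardGroupId, signer: ValidatorId) -> bool {
        if self.required_shards.contains(&from_shard) {
            self.received.entry(from_shard).or_default().insert(signer);
        }
        self.required_shards
            .iter()
            .all(|s| self.received.get(s).map_or(false, |v| v.len() >= self.quorum))
    }
}

fn main() {
    // Shard 3's view of a tx involving shards 1, 2, 3: proofs are needed from shards 1 and 2.
    let mut checklist = ProvisionChecklist {
        required_shards: [1, 2].into_iter().collect(),
        received: BTreeMap::new(),
        quorum: 2,
    };
    assert!(!checklist.record_provision(1, 100));
    assert!(!checklist.record_provision(1, 101)); // shard 1 has quorum, shard 2 does not yet
    assert!(!checklist.record_provision(2, 200));
    assert!(checklist.record_provision(2, 201));  // all required shards at quorum => ProvisioningComplete
}

The point of the sketch is the shape of the state: one entry per required shard, and completion only when every entry reaches quorum.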
How is the order fixed? "The protocol fixes an order" means a deterministic rule that all nodes follow (e.g. by shard ID, or by the order of NodeIDs in the tx), so everyone agrees on which shard is first, second, third. The 2PC coordinator does not choose that order; it drives the protocol in that pre-agreed order (sends prepare in that order, then commit/abort, then state updates in that order). So: the protocol defines the order; the coordinator executes the protocol in that order.
How is order determined for composite (cross-shard) transactions? In the Radix manifest, instructions are in a fixed order (e.g. withdraw from A → split → put in staking vault → put in LP → return IOUs), and the Radix Engine runs them sequentially when executing; data dependencies are implicit (instruction 2 sees the result of instruction 1). For Hyperscale (consensus):
- Instruction analysis turns the transaction into a RoutableTransaction (crates/types/src/transaction.rs): the manifest is walked and every NodeId read or written is collected into declared_reads and declared_writes. Those are sets (deduplicated); instruction order is not preserved at the consensus layer.
- Shard sets are then derived: consensus_shards = unique shards of declared_writes; provisioning_shards = shards of declared_reads that are not write shards. Both are stored in BTreeSets, so the protocol order is by ShardGroupId (numeric). We do not derive "Account_A shard first, then Staking_VAULT shard, then LP shard" from the manifest order; we get a deterministic order by shard ID.
- Prerequisites are enforced by provisions: a shard that needs remote state (reads from another shard) must receive a quorum of provisions from that shard before it can complete; the "required" shards are the set of all other participating shards.
- Parallelism: each shard runs BFT and execution independently; cross-shard atomicity is then 2PC (prepare all in shard-ID order, then commit/abort) plus provisioning (everyone waits for provisions from every other participating shard).
Refs: crates/types/src/topology.rs (consensus_shards, provisioning_shards, all_shards_for_transaction), crates/types/src/transaction.rs (analyze_instructions_v1 / analyze_instructions_v2).
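A Rust sketch of that derivation, under stated assumptions: NodeId and shard_for_node here are placeholders modeled on the names cited above (crates/types/src/topology.rs), and the byte-sum hash is purely illustrative, not the real mapping.

use std::collections::BTreeSet;

type ShardGroupId = u64;

// Placeholder NodeId; the real type lives in crates/types.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct NodeId([u8; 30]);

// Stand-in for shard_for_node(node_id, num_shards); the real mapping is a
// deterministic function of the NodeId, sketched here as a byte sum mod num_shards.
fn shard_for_node(node: &NodeId, num_shards: u64) -> ShardGroupId {
    node.0.iter().map(|b| u64::from(*b)).sum::<u64>() % num_shards
}

// Writes define consensus shards; read-only shards not already in that set
// become provisioning shards. BTreeSet iteration yields ShardGroupId order.
fn shard_sets(
    declared_writes: &BTreeSet<NodeId>,
    declared_reads: &BTreeSet<NodeId>,
    num_shards: u64,
) -> (BTreeSet<ShardGroupId>, BTreeSet<ShardGroupId>) {
    let consensus: BTreeSet<ShardGroupId> = declared_writes
        .iter()
        .map(|n| shard_for_node(n, num_shards))
        .collect();
    let provisioning: BTreeSet<ShardGroupId> = declared_reads
        .iter()
        .map(|n| shard_for_node(n, num_shards))
        .filter(|s| !consensus.contains(s))
        .collect();
    (consensus, provisioning)
}

fn main() {
    let a = NodeId([1; 30]);
    let b = NodeId([2; 30]);
    let writes: BTreeSet<NodeId> = [a, b].into_iter().collect();
    let reads: BTreeSet<NodeId> = [a].into_iter().collect(); // already a write shard, so it adds nothing
    let (consensus, provisioning) = shard_sets(&writes, &reads, 4);
    println!("consensus = {consensus:?}, provisioning = {provisioning:?}");
}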
See the Transaction Flow diagram for the full step-by-step from user to finality.
Example: complex Radix manifest and Hyperscale provisioning
Consider a composite transaction that splits funds from a user account into staking and liquidity:
# Simplified Radix-style manifest (conceptual)
1. CallMethod(Account_A, "withdraw", XRD, amount) # → NodeID_Account_A, vault
2. TakeFromWorktop(XRD, amount)
3. SplitBucket(amount1, amount2) # worktop
4. CallMethod(StakingVault, "stake", bucket1) # → NodeID_Staking_VAULT
5. CallMethod(LiquidityPool, "contribute", bucket2) # → NodeID_LP_COMPONENT
6. CallMethod(Account_A, "deposit", staking_IOU) # → NodeID_Account_A
7. CallMethod(Account_A, "deposit", lp_IOU) # → NodeID_Account_A
Radix Engine: Runs these instructions in order. Data flow is implicit (e.g. step 4 uses the bucket from step 3; step 6–7 deposit what step 4–5 returned).
Instruction analysis (Hyperscale): The manifest is walked and every NodeId read or written is collected into declared_reads and declared_writes (crates/types/src/transaction.rs: analyze_instructions_v1 / analyze_instructions_v2). The result is sets (deduplicated); instruction order is not preserved at the consensus layer. Assume:
- declared_writes: Account_A, Staking_VAULT, LP_COMPONENT (and possibly resource/vault NodeIds)
- declared_reads: any read-only touched nodes (e.g. package, resource type)
Shard mapping (topology): Each NodeId maps to a shard via shard_for_node(node_id, num_shards) (crates/types/src/topology.rs). Suppose Account_A → shard 1, Staking_VAULT → shard 2, LP_COMPONENT → shard 3. Then:
- consensus_shards = {1, 2, 3} (unique shards of declared_writes), ordered as 1, 2, 3 (BTreeSet by ShardGroupId).
- provisioning_shards = read-only shards (if any) not in that set.
- all_shards_for_transaction = consensus ∪ provisioning, again in shard-ID order.
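A tiny runnable snippet of why the protocol order comes out as 1, 2, 3 regardless of which shard the manifest happens to touch first (the shard numbers are the assumed mapping above):

use std::collections::BTreeSet;

fn main() {
    // Assumed mapping from the example: Account_A -> 1, Staking_VAULT -> 2, LP_COMPONENT -> 3.
    // Even if the LP component were touched first, the set still iterates numerically.
    let consensus_shards: BTreeSet<u64> = [3, 1, 2, 1].into_iter().collect(); // duplicates collapse
    let order: Vec<u64> = consensus_shards.into_iter().collect();
    assert_eq!(order, vec![1, 2, 3]); // protocol order = ShardGroupId order, not manifest order
}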
Provisioning and coordination:
- The tx is sent to shards 1, 2, 3 (step 6 in the tx flow). Each shard runs BFT and execution for its part. When a shard commits its block containing this tx, it produces a StateProvision (signed proof of the state it wrote) and sends it to the other shards.
- ProvisionCoordinator on each node (e.g. on shard 3) keeps a checklist: “Do we have quorum of provisions from shard 1? From shard 2?” When it has provisions from every other participating shard (required_shards in TxRegistration), it emits ProvisioningComplete so execution can proceed. So shard 3 cannot “finish” its view of the cross-shard tx until it has proofs from shards 1 and 2—that’s the prerequisite.
- 2PC coordinator (one of shards 1, 2, or 3) drives prepare → collect yes/no → commit or abort in shard-ID order (1, 2, 3), not in manifest order. State updates are applied in that same order after the decision.
So: the manifest defines the logical flow (withdraw → split → stake → LP → deposit); the engine runs it sequentially; Hyperscale uses shard ID order for coordination and provisions to enforce “everyone has proof from everyone else” before completion.
Order: manifest vs protocol
As above: instruction order is not preserved at consensus; protocol order is by ShardGroupId. Prerequisites are enforced by provisions; required_shards is the set of all other participating shards (start_cross_shard_execution in crates/execution/src/state.rs).
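A one-function Rust sketch of that rule, with illustrative names (the real computation lives in start_cross_shard_execution and may be structured differently):

use std::collections::BTreeSet;

type ShardGroupId = u64;

// required_shards = every participating shard except our own.
fn required_shards(all_shards: &BTreeSet<ShardGroupId>, own_shard: ShardGroupId) -> BTreeSet<ShardGroupId> {
    all_shards.iter().copied().filter(|&s| s != own_shard).collect()
}

fn main() {
    let all: BTreeSet<u64> = [1, 2, 3].into_iter().collect();
    let needed = required_shards(&all, 3);
    let expected: BTreeSet<u64> = [1, 2].into_iter().collect();
    assert_eq!(needed, expected); // shard 3 waits on provisions from shards 1 and 2
}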
Concepts in the Flow
- Shards — The network is split into shards; each shard has its own validators and ordering. Your tx is assigned to a shard (e.g. by account or payload). Each shard has its own BFT consensus state: its own view number, its own chain of blocks, and its own proposer per round. So shard A and shard B run independent BFT instances; they don't share a single global "view."
- Proposer (block leader) — For each round (view), one validator in that shard is the proposer. It builds the block and broadcasts it; the other validators in the shard vote. Proposer selection is deterministic: typically a function of the current view number and the shard's validator set, e.g. proposer = validators[view % validators.len()] (round-robin by index) or by validator identity (see the round-robin sketch after this list). That way everyone agrees on who the leader is without extra messages.
- Validator identity (consensus) — In the consensus layer, "which validator" is identified by a validator ID (from the node's public key or protocol assignment). That identity is used for the validator set, proposer selection (e.g. round-robin), and routing. In Hyperscale-rs, types that refer to "which node" in the sense of "which validator" use this.
- NodeID in Radix (on-chain) — In Radix, NodeID often means something different: it refers to addresses of on-chain entities — components, resources, packages, accounts. Those are "nodes" in the ledger's state graph (e.g. a component instance, a resource type). So: validator identity = which machine runs consensus; NodeID/address in Radix = which component, resource, or package you're talking about on the ledger. Don't confuse the two when reading Radix docs.
- Cross-shard transactions & atomic composability — If a tx touches NodeIDs on different shards, it is cross-shard: sent to each involved shard (step 6); each runs BFT and execution for its part (steps 7–12). Atomicity across shards: 2PC (prepare → commit/abort) and provisions (see "How atomic composability works" and "Two coordinators" above).
- 2PC coordinator vs ProvisionCoordinator — See the "Two coordinators" section above. In short: the 2PC coordinator is one involved shard that asks "everyone ready?" (prepare), then sends commit or abort. The ProvisionCoordinator is a checklist on every node: "Do we have quorum of provisions (proofs) from each prerequisite shard?" so execution can proceed. The protocol fixes the order (e.g. by shard ID); the 2PC coordinator drives the protocol in that order.
- Finality — Once a block has a quorum certificate (2f+1 votes from that shard's validators), it is final. No reorg under normal BFT assumptions.
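Referenced from the Proposer bullet above: a minimal Rust sketch of round-robin proposer selection, with illustrative validator IDs (as noted, the real function may instead rotate by validator identity or use another deterministic rule):

type ValidatorId = u64;

// Deterministic round-robin: every validator computes the same answer from the same
// view number and the same validator set, so no extra coordination messages are needed.
fn proposer_for_view(validators: &[ValidatorId], view: u64) -> ValidatorId {
    validators[(view as usize) % validators.len()]
}

fn main() {
    let validators: Vec<ValidatorId> = vec![10, 11, 12, 13]; // one shard's validator set (illustrative IDs)
    assert_eq!(proposer_for_view(&validators, 0), 10);
    assert_eq!(proposer_for_view(&validators, 5), 11); // view 5 -> index 5 % 4 == 1
}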
Quiz: Cross-shard and provisioning
Answer based on the content above. Pass threshold: 70%.