Sharding splits the system into multiple shards, each maintaining its own partition of the state and running consensus among its own subset of validators. Benefits include higher throughput (shards execute in parallel) and smaller per-shard state. The challenge is cross-shard transactions: when a single logical transaction touches state on more than one shard, the system must coordinate so that the outcome is atomic (all involved shards commit or all abort).
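To make "each shard owns a partition of state" concrete, here is a minimal sketch of static shard assignment. The names (`shard_for_key`, `is_cross_shard`, `NUM_SHARDS`) are illustrative assumptions, not the hyperscale-rs API:

```rust
// Hash each key to one of N shards, so every shard owns a disjoint
// partition of the state. (Hypothetical helper, not hyperscale-rs code.)
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const NUM_SHARDS: u64 = 4;

fn shard_for_key(key: &str) -> u64 {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    h.finish() % NUM_SHARDS
}

// A transaction whose keys map to more than one shard is cross-shard
// and needs the coordination described below. Assumes `keys` is non-empty.
fn is_cross_shard(keys: &[&str]) -> bool {
    let first = shard_for_key(keys[0]);
    keys.iter().any(|k| shard_for_key(k) != first)
}
```

A transaction touching only keys on one shard can be handled entirely by that shard's consensus; everything that follows concerns the other case.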
In two-phase commit (2PC), a coordinator drives two phases:

1. Prepare: the coordinator asks every involved shard to validate the transaction and lock the state it touches; each shard replies with a commit or abort vote.
2. Commit/abort: if every shard voted commit, the coordinator instructs all shards to apply the updates and release their locks; if any shard voted abort, all shards roll back.
This gives atomicity: either every shard commits or none do. The order in which shards prepare and apply updates is fixed (e.g. by shard ID) so all nodes agree on the same outcome.
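The 2PC decision rule above can be sketched in a few lines. The types (`Vote`, `Decision`, `decide`) are assumptions for illustration, not the types used in the execution crate:

```rust
// Illustrative sketch of the 2PC commit rule (hypothetical types).
#[derive(PartialEq)]
enum Vote { Commit, Abort }

enum Decision { Commit, Abort }

// Phase 1 collects one vote per involved shard; phase 2 applies this rule:
// the transaction commits only if every shard voted Commit.
// An empty vote set is treated as Abort.
fn decide(votes: &[Vote]) -> Decision {
    if !votes.is_empty() && votes.iter().all(|v| *v == Vote::Commit) {
        Decision::Commit
    } else {
        Decision::Abort
    }
}
```

The rule is deliberately asymmetric: commit requires unanimity, while a single abort vote (or a missing vote) forces everyone to abort, which is what makes the outcome atomic.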
Besides 2PC (prepare/commit/abort), cross-shard systems often need provisions: after a shard commits its block, it produces a signed proof (e.g. “state at height H for this tx”) and sends it to the other involved shards. A shard that depends on remote state cannot finish its view of the transaction until it has received enough of these proofs (e.g. a quorum) from each of the other involved shards. So there are two coordination ideas:

- 2PC for atomicity: all involved shards commit the transaction or all abort it.
- Provisions for data flow: signed proofs of committed remote state that let a dependent shard complete its own view of the transaction.
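The "enough proofs" condition can be sketched as a quorum check per involved shard. The types here (`Provision`, `have_quorum`) are hypothetical; the provisions crate covered in the next module will differ:

```rust
// Sketch of provision tracking (hypothetical types, not the provisions crate).
// A shard collects signed proofs of remote committed state and unblocks a
// transaction once every involved peer shard has reached the quorum threshold.
use std::collections::{HashMap, HashSet};

struct Provision {
    from_shard: u64,
    height: u64, // "state at height H for this tx"
    signer: u64, // validator id that signed the proof
}

// Count distinct signers per source shard; the dependent shard may proceed
// only when each involved shard has at least `quorum` distinct signatures.
fn have_quorum(proofs: &[Provision], involved: &[u64], quorum: usize) -> bool {
    let mut signers: HashMap<u64, HashSet<u64>> = HashMap::new();
    for p in proofs {
        signers.entry(p.from_shard).or_default().insert(p.signer);
    }
    involved
        .iter()
        .all(|s| signers.get(s).map_or(false, |set| set.len() >= quorum))
}
```

Counting distinct signers (rather than raw messages) matters: a quorum of signatures from different validators is what makes the proof trustworthy without re-running the remote shard's consensus.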
In cross-shard protocols, livelock can occur if shards keep retrying or re-ordering without ever committing. Common techniques to prevent it:

- A deterministic global order (e.g. by shard ID or transaction hash) so every node resolves conflicts identically instead of retrying in different orders.
- A fixed lock-acquisition order across shards, so two transactions can never hold locks in opposite orders and block each other in a cycle.
- Timeouts that abort a stuck transaction and release its locks, paired with backoff before retry so repeated attempts do not keep colliding.
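The first two ideas can be sketched as two tiny deterministic helpers. Both functions (`lock_order`, `conflict_winner`) are illustrative assumptions, not hyperscale-rs code:

```rust
// Acquire per-shard locks in a fixed global order (here: ascending shard id),
// mirroring the "order fixed by shard ID" idea above. Two transactions can
// then never hold locks in opposite orders. (Hypothetical helper.)
fn lock_order(mut shards: Vec<u64>) -> Vec<u64> {
    shards.sort_unstable();
    shards.dedup();
    shards
}

// Deterministic tie-break between two contending transactions: every node
// applies the same rule (here: lower tx id wins), so retries cannot cycle
// forever with each node picking a different winner. (Hypothetical helper.)
fn conflict_winner(tx_a: u64, tx_b: u64) -> u64 {
    tx_a.min(tx_b)
}
```

The key property in both cases is determinism: any rule works as long as every node computes the same answer from the same inputs.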
This module is the conceptual intro to sharding and cross-shard transactions; Cross-Shard Transactions in Hyperscale-rs is the implementation. In the next module you’ll see how hyperscale-rs implements these ideas: 2PC in the execution crate, provision coordination in the provisions crate, and how protocol order (e.g. ShardGroupId) and livelock prevention are handled. Continue to Cross-Shard Transactions in Hyperscale-rs.