sharding: merge master

Former-commit-id: e182bb374419163f7e971e18716ac31291d6113a [formerly d796b1730d08d8eb56e3ec77f14f895c2bbf1d0f]
Former-commit-id: 028f1a8904ae5830db3a1d3d7314c4e68ad834f0
This commit is contained in:
Raul Jordan
2018-05-30 14:50:22 -06:00
13 changed files with 293 additions and 87 deletions

View File

@@ -86,6 +86,7 @@ var AppHelpFlagGroups = []flagGroup{
{
Name: "SHARDING",
Flags: []cli.Flag{
utils.ActorFlag,
utils.DepositFlag,
},
},

View File

@@ -391,10 +391,60 @@ Work in progress.
## Selecting Notaries Via Random Beacon Chains
In our current implementation for the Ruby Release, we are selecting notaries through a pseudorandom method built into the SMC directly. Inspired by dfinity's Random Beacon chains, the Ethereum Research team has been proposing better solutions that have faster finality guarantees.
In our current implementation for the Ruby Release, we are selecting notaries through a pseudorandom method built into the SMC directly. Inspired by Dfinity's random beacon chains, the Ethereum Research team has been proposing [better solutions](https://github.com/ethereum/research/tree/master/sharding_fork_choice_poc) that have faster finality guarantees. The random beacon chain would be in charge of pseudorandomly sampling notaries and would allow for features such as off-chain collation headers that were not possible before. Through this, no gas would need to be paid for including collation headers, and we can achieve faster finality guarantees, a significant improvement over the current system.
<https://ethresear.ch/t/posw-random-beacon/1814/6>
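For intuition, here is a minimal Go sketch of what beacon-driven sampling could look like; the hashing scheme, function names, and pool layout are illustrative assumptions rather than the SMC's or the proposal's actual selection logic (modulo bias is ignored for brevity):

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// sampleNotaries derives committeeSize indices into a notary pool of
// size poolSize from a 32-byte beacon output, by hashing the output
// together with each committee slot number. Illustrative only.
func sampleNotaries(beacon [32]byte, poolSize, committeeSize uint64) []uint64 {
	indices := make([]uint64, committeeSize)
	for i := range indices {
		buf := make([]byte, 40)
		copy(buf, beacon[:])
		binary.BigEndian.PutUint64(buf[32:], uint64(i))
		h := sha256.Sum256(buf)
		indices[i] = binary.BigEndian.Uint64(h[:8]) % poolSize
	}
	return indices
}

func main() {
	beacon := sha256.Sum256([]byte("beacon output for period 42"))
	fmt.Println(sampleNotaries(beacon, 1000, 5))
}
```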
## Leaderless Random Beacons
In previous research on random beacons, committees are able to generate random numbers as long as a threshold of participants behave correctly. This is similar to the random beacon used in Dfinity, but without the use of BLS threshold signatures. The scheme is separated into two phases.
In the first phase, each participant commits to a secret key and shares the resulting public key.
In the second phase, each participant uses their secret key to deterministically build a polynomial, and that polynomial is used to create n shares (where n is the size of the committee), which are then encrypted with respect to the other participants' public keys and shared publicly.
Then, in the resolution, all participants reveal their private keys; once a key is revealed, anyone can check whether that participant committed correctly. The random output is defined as the sum of the private keys of the participants who committed correctly.
<https://ethresear.ch/t/leaderless-k-of-n-random-beacon/2046/3>
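A minimal sketch of the commit/reveal/aggregate skeleton described above, assuming a simple hash commitment and XOR standing in for the sum of keys; the real proposal additionally encrypts polynomial shares so the output stays recoverable even if some participants withhold their reveals:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// commit publishes a binding commitment to a participant's secret.
func commit(secret []byte) [32]byte {
	return sha256.Sum256(secret)
}

// randomOutput combines the contributions of every participant whose
// reveal matches their earlier commitment; misbehaving participants
// are simply skipped.
func randomOutput(commits [][32]byte, reveals [][]byte) []byte {
	out := make([]byte, 32)
	for i, secret := range reveals {
		if commit(secret) != commits[i] {
			continue // reveal does not match the commitment
		}
		// Domain-tagged hash of the secret stands in for the key sum.
		mixed := sha256.Sum256(append([]byte("reveal:"), secret...))
		for j := range out {
			out[j] ^= mixed[j]
		}
	}
	return out
}

func main() {
	secrets := [][]byte{[]byte("alice's secret"), []byte("bob's secret")}
	commits := make([][32]byte, len(secrets))
	for i, s := range secrets {
		commits[i] = commit(s)
	}
	fmt.Printf("beacon output: %x\n", randomOutput(commits, secrets))
}
```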
## Torus-shaped Sharded P2P Network
One recommendation is using a [Torus-shaped sharding network](https://commons.wikimedia.org/wiki/File:Toroidal_coord.png). In this paradigm, there would be a single network that all shards share rather than a separate network for each shard. Nodes would propagate messages to peers interested in neighboring shards. A node listening on shard 16 would relay messages for shards in the range 11 to 21 (i.e., ±5). Nodes that need to listen on multiple shards can quickly change shards to find peers that relay the necessary messages. A node could potentially have access to messages from all shards with only 10 distinct peers in a 100-shard network. At the same time, we're considering replacing [DEVp2p](https://github.com/ethereum/wiki/wiki/%C3%90%CE%9EVp2p-Wire-Protocol) with the [libp2p](https://github.com/libp2p) framework, which is actively maintained, proven to work with IPFS, and comes with client libraries for Go and JavaScript.
Active research is ongoing into moving Ethereum from DEVp2p to libp2p. We are looking into how to map shards to libp2p and how to balance flood/gossipsub propagation against active connections. A current [proof of concept](https://github.com/mhchia/go-libp2p/tree/poc-testing/examples/minimal) builds on [gossipsub](https://github.com/libp2p/go-floodsub/pull/67/). It utilizes pubsub for propagating messages such as transactions, proposals, and sharded collations.
<https://ethresear.ch/t/torus-shaped-sharding-network/1720>
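The ±5 relay window above wraps around the torus, so shard 0 and shard 99 are neighbors. A small illustrative sketch (names are our own, not from the proposal):

```go
package main

import "fmt"

// relayShards returns the shards a node listening on shard `home`
// relays messages for, on a ring of numShards shards with a
// +/-radius neighborhood (the post uses 100 shards and radius 5).
func relayShards(home, numShards, radius int) []int {
	shards := make([]int, 0, 2*radius+1)
	for d := -radius; d <= radius; d++ {
		// Wrap around the ring so shard 0 and shard 99 are neighbors.
		shards = append(shards, ((home+d)%numShards+numShards)%numShards)
	}
	return shards
}

func main() {
	fmt.Println(relayShards(16, 100, 5)) // [11 12 ... 21]
}
```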
## Sparse Merkle Tree for State Storage
With a sharded network comes sharded state storage. State sync is already difficult for clients today: while the blockchain data stored on disk might use ~80GB for a fast sync, less than 5GB of that is state data, yet state sync accounts for the majority of time spent syncing. As the state grows, this problem grows with it. We imagine that it might be difficult to sync effectively when there are 100 shards and 100 different state tries. One recommendation from the Ethereum Research team outlines using [sparse merkle trees](https://www.links.org/files/RevocationTransparency.pdf).
<https://ethresear.ch/t/data-availability-proof-friendly-state-tree-transitions/1453>
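The key property that makes sparse merkle trees practical is that every empty subtree at a given depth shares the same root, so even a 2^256-leaf tree needs only one precomputed default hash per level. A short sketch, assuming sha256 and an empty-leaf convention:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// defaultHashes returns the per-level root of an empty sparse Merkle
// tree: level 0 is the empty leaf, and each parent is the hash of two
// identical empty children from the level below.
func defaultHashes(depth int) [][32]byte {
	hashes := make([][32]byte, depth+1)
	hashes[0] = sha256.Sum256(nil) // empty leaf
	for i := 1; i <= depth; i++ {
		hashes[i] = sha256.Sum256(append(hashes[i-1][:], hashes[i-1][:]...))
	}
	return hashes
}

func main() {
	fmt.Printf("empty root at depth 8: %x\n", defaultHashes(8)[8])
}
```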
## Proof of Custody
A critique of the notary scheme currently followed in the minimal sharding protocol is the susceptibility of these agents to the “validator's dilemma”, wherein agents are incentivized to be “lazy” and trust the work of other validators when making coordinated decisions. Specifically, notaries are tasked with checking the data availability of collation headers submitted to the SMC during their assigned period. This means notaries have to download headers via a shardp2p network and commit their votes after confirming availability. Proposers can try to game validators by publishing unavailable proposals and then challenging lazy validators in order to take their deposits. To prevent abuse of such collation availability traps, the responsibility of notaries is extended to also provide a “Merkle root of a signature tree, where each signature in the signature tree is the signature of the corresponding chunk of the original collation data” (ETHResearch). This means that at challenge time, notaries must have the fully available collation data in order to construct a signature tree over all of its chunks.
<https://ethresear.ch/t/extending-skin-in-the-game-of-notarization-with-proofs-of-custody/1639>
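A rough sketch of the custody idea, with a keyed hash standing in for a real signature and a fixed chunk size as illustrative assumptions; the point is simply that producing the root forces the notary to read every chunk of the body:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

const chunkSize = 32

// signatureTreeRoot "signs" every chunk of the collation body with a
// notary secret and merkleizes the signatures. Producing the root
// therefore requires possessing the entire body.
func signatureTreeRoot(secret, body []byte) [32]byte {
	var nodes [][32]byte
	for i := 0; i < len(body); i += chunkSize {
		end := i + chunkSize
		if end > len(body) {
			end = len(body)
		}
		msg := append(append([]byte{}, secret...), body[i:end]...)
		nodes = append(nodes, sha256.Sum256(msg))
	}
	if len(nodes) == 0 {
		return sha256.Sum256(nil)
	}
	// Pairwise-hash each level until a single root remains; an odd
	// node at the end of a level is paired with itself.
	for len(nodes) > 1 {
		var next [][32]byte
		for i := 0; i < len(nodes); i += 2 {
			j := i + 1
			if j == len(nodes) {
				j = i
			}
			next = append(next, sha256.Sum256(append(nodes[i][:], nodes[j][:]...)))
		}
		nodes = next
	}
	return nodes[0]
}

func main() {
	root := signatureTreeRoot([]byte("notary custody key"), make([]byte, 100))
	fmt.Printf("signature tree root: %x\n", root)
}
```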
## Safe Notary Pool Sizes: RANDAO Exploration
When notary pool sizes are too small, a few things can happen. First, a small pool would require a large amount of bandwidth from each notary: the bandwidth required by each notary is inversely proportional to the size of the pool, so to remain sufficiently decentralized the pool should be large enough that the bandwidth requirement stays manageable even on a poor internet connection. Second, the notary pool size directly affects the capital required to take over notarization and revert or censor transactions. An acceptable pool size is one that puts this takeover cost above a minimum acceptable threshold. In Vitalik's RANDAO analysis, he examined how vulnerable a RANDAO chain is compared to a proof-of-work (PoW) chain. The result was that an attacker with 40% of the stake on a RANDAO chain can effectively revert transactions, whereas achieving the same on a PoW chain requires 50% of the hashpower. On the other hand, if the chain utilized a 2-of-2 notarization committee, the attacker would need to raise their stake to 46% to effectively censor transactions.
<https://ethresear.ch/t/safe-notary-pool-size/1728>
## Cross Links Between Shard Chain and Main Chain
For synchronizing cross-shard communication, we are researching how to properly link the shard chains and the beacon chain. To accomplish this, a randomly sampled committee votes to approve a collation per period and per shard. As Vitalik wrote, there are two ways to create cross-links between the main chain and the shards: on-chain aggregation and off-chain aggregation. With on-chain aggregation, the beacon chain state keeps track of the randomly sampled committee as validators. Each validator can make one Casper FFG-style vote, and that vote also contains the committee's cross-link. With off-chain aggregation, every beacon chain block creator chooses one CAS to link the shard chain to the main chain. Off-chain aggregation has the benefit that the beacon chain does not need to keep track of vote counts.
<https://ethresear.ch/t/extending-minimal-sharding-with-cross-links/1989/8>
<https://ethresear.ch/t/two-ways-to-do-cross-links/2074/2>
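To make the trade-off concrete, here is an illustrative sketch (field names are assumptions, not from the spec) of an on-chain, FFG-style vote carrying a cross-link, plus the per-collation tally the beacon chain would have to maintain under on-chain aggregation:

```go
package main

import "fmt"

// CrossLinkVote sketches the shape of an on-chain, Casper FFG-style
// vote that carries the committee's cross-link in the same message.
type CrossLinkVote struct {
	ValidatorIndex uint64
	Shard          uint64
	Period         uint64
	SourceHash     [32]byte // FFG source checkpoint
	TargetHash     [32]byte // FFG target checkpoint
	CollationHash  [32]byte // cross-link: the shard collation attested to
}

// countVotes shows the bookkeeping that on-chain aggregation implies:
// the beacon chain state must tally votes per collation, which
// off-chain aggregation avoids entirely.
func countVotes(votes []CrossLinkVote) map[[32]byte]int {
	tally := make(map[[32]byte]int)
	for _, v := range votes {
		tally[v.CollationHash]++
	}
	return tally
}

func main() {
	votes := []CrossLinkVote{{ValidatorIndex: 1}, {ValidatorIndex: 2}}
	fmt.Println(countVotes(votes))
}
```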
## Fixed ETH Deposit Size for Notaries
A notary must submit a deposit to the Sharding Manager Contract in order to be randomly selected to vote on a block. A fixed-size deposit makes the random selection convenient and works well with slashing, since at least a minimum amount of ether can always be destroyed. However, a fixed-size deposit does not handle rewards and penalties well. An alternative is to design an incentive system where rewards and penalties are tracked in a separate variable, and when accumulated penalties minus rewards reach a threshold, the notary is voted out. Such a design might, however, ignore an important function: reducing the influence of notaries that are offline. In Casper FFG, if more than 1/3 of validators go offline around the same time, their deposits begin to leak quickly. This is called the quadratic leak.
<https://ethresear.ch/t/fixed-size-deposits-and-rewards-penalties-quad-leak/2073/7>
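A toy sketch of the alternative accounting described above, with a fixed deposit alongside a separately tracked rewards/penalties balance; the names and threshold logic are illustrative only:

```go
package main

import "fmt"

type notary struct {
	deposit  uint64 // fixed size: keeps sampling and slashing simple
	netDelta int64  // rewards minus penalties, tracked separately
}

// ejectable reports whether accumulated penalties have outweighed
// rewards by more than the allowed threshold.
func (n *notary) ejectable(threshold int64) bool {
	return n.netDelta <= -threshold
}

func main() {
	n := &notary{deposit: 1000}
	n.netDelta -= 150 // offline penalties accrue; the deposit stays fixed
	fmt.Println("eject:", n.ejectable(100)) // true
}
```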
# Community Updates and Contributions
Excited by our work and want to get involved in building out our sharding releases? We created this document as a single source of reference for all things related to sharding Ethereum, and we need as much help as we can get!
@@ -448,3 +498,19 @@ A special thanks for entire [Prysmatic Labs](https://gitter.im/prysmaticlabs/get
[Model for Phase 4 Tightly-Coupled Sharding](https://ethresear.ch/t/a-model-for-stage-4-tightly-coupled-sharding-plus-full-casper/1065)
[History, State, and Asynchronous Accumulators in the Stateless Model](https://ethresear.ch/t/history-state-and-asynchronous-accumulators-in-the-stateless-model/287)
[Torus Shaped Sharding Network](https://ethresear.ch/t/torus-shaped-sharding-network/1720)
[Data Availability Proof-friendly State Tree Transitions](https://ethresear.ch/t/data-availability-proof-friendly-state-tree-transitions/1453)
[General Framework of Overhead and Finality Time in Sharding](https://ethresear.ch/t/a-general-framework-of-overhead-and-finality-time-in-sharding-and-a-proposal/1638)
[Safe Notary Pool Size](https://ethresear.ch/t/safe-notary-pool-size/1728)
[Fixed Size Deposits and Rewards Penalties Quadleak](https://ethresear.ch/t/fixed-size-deposits-and-rewards-penalties-quad-leak/2073/7)
[Two Ways To Do Cross Links](https://ethresear.ch/t/two-ways-to-do-cross-links/2074/2)
[Extending Minimal Sharding with Cross Links](https://ethresear.ch/t/extending-minimal-sharding-with-cross-links/1989/8)
[Leaderless K of N Random Beacon](https://ethresear.ch/t/leaderless-k-of-n-random-beacon/2046/3)

View File

@@ -0,0 +1,23 @@
package database
import (
"path/filepath"
"github.com/ethereum/go-ethereum/ethdb"
)
// ShardBackend defines an interface for a shardDB's necessary method
// signatures.
type ShardBackend interface {
Get(k []byte) ([]byte, error)
Has(k []byte) (bool, error)
Put(k []byte, val []byte) error
Delete(k []byte) error
}
// NewShardDB initializes a shardDB that writes to local disk.
func NewShardDB(dataDir string, name string) (ShardBackend, error) {
// Uses default values for the cache size and number of file handles.
// TODO: allow these arguments to be set based on cli context.
return ethdb.NewLDBDatabase(filepath.Join(dataDir, name), 16, 16)
}

View File

@@ -0,0 +1,92 @@
package database
import (
"strconv"
"sync"
"testing"
)
var db ShardBackend
func init() {
shardDB, err := NewShardDB("/tmp/datadir", "shardchaindata")
if err != nil {
panic(err)
}
db = shardDB
}
// Tests concurrent writes to the shardDB from multiple goroutines.
func Test_DBConcurrent(t *testing.T) {
var wg sync.WaitGroup
for i := 0; i < 100; i++ {
wg.Add(1)
go func(val string) {
defer wg.Done()
if err := db.Put([]byte("ralph merkle"), []byte(val)); err != nil {
t.Errorf("could not save value in db: %v", err)
}
}(strconv.Itoa(i))
}
// Wait for all writers so no goroutine outlives the test.
wg.Wait()
}
func Test_DBPut(t *testing.T) {
if err := db.Put([]byte("ralph merkle"), []byte{1, 2, 3}); err != nil {
t.Errorf("could not save value in db: %v", err)
}
}
func Test_DBHas(t *testing.T) {
key := []byte("ralph merkle")
if err := db.Put(key, []byte{1, 2, 3}); err != nil {
t.Fatalf("could not save value in db: %v", err)
}
has, err := db.Has(key)
if err != nil {
t.Errorf("could not check if db has key: %v", err)
}
if !has {
t.Errorf("db should have key: %v", key)
}
key2 := []byte{}
has2, err := db.Has(key2)
if err != nil {
t.Errorf("could not check if db has key: %v", err)
}
if has2 {
t.Errorf("db should not have non-existent key: %v", key2)
}
}
func Test_DBGet(t *testing.T) {
key := []byte("ralph merkle")
if err := db.Put(key, []byte{1, 2, 3}); err != nil {
t.Fatalf("could not save value in db: %v", err)
}
val, err := db.Get(key)
if err != nil {
t.Errorf("get failed: %v", err)
}
if len(val) == 0 {
t.Errorf("no value stored for key")
}
key2 := []byte{}
val2, err := db.Get(key2)
if len(val2) != 0 {
t.Errorf("non-existent key should not have a value. key=%v, value=%v", key2, val2)
}
}
func Test_DBDelete(t *testing.T) {
key := []byte("ralph merkle")
if err := db.Put(key, []byte{1, 2, 3}); err != nil {
t.Fatalf("could not save value in db: %v", err)
}
if err := db.Delete(key); err != nil {
t.Errorf("could not delete key: %v", key)
}
}

View File

@@ -12,48 +12,48 @@ import (
// ShardKV is an in-memory mapping of hashes to RLP encoded values.
type ShardKV struct {
kv map[common.Hash]*[]byte
kv map[common.Hash][]byte
lock sync.RWMutex
}
// NewShardKV initializes a keyval store in memory.
func NewShardKV() *ShardKV {
return &ShardKV{kv: make(map[common.Hash]*[]byte)}
return &ShardKV{kv: make(map[common.Hash][]byte)}
}
// Get fetches a val from the mapping by key.
func (sb *ShardKV) Get(k common.Hash) (*[]byte, error) {
func (sb *ShardKV) Get(k []byte) ([]byte, error) {
sb.lock.RLock()
defer sb.lock.RUnlock()
v, ok := sb.kv[k]
v, ok := sb.kv[common.BytesToHash(k)]
if !ok {
return nil, fmt.Errorf("key not found: %v", k)
return []byte{}, fmt.Errorf("key not found: %v", k)
}
return v, nil
}
// Has checks if the key exists in the mapping.
func (sb *ShardKV) Has(k common.Hash) bool {
func (sb *ShardKV) Has(k []byte) (bool, error) {
sb.lock.RLock()
defer sb.lock.RUnlock()
v := sb.kv[k]
return v != nil
v := sb.kv[common.BytesToHash(k)]
return v != nil, nil
}
// Put updates a key's value in the mapping.
func (sb *ShardKV) Put(k common.Hash, v []byte) error {
func (sb *ShardKV) Put(k []byte, v []byte) error {
sb.lock.Lock()
defer sb.lock.Unlock()
// There is no error when setting a value in a Go map.
sb.kv[k] = &v
sb.kv[common.BytesToHash(k)] = v
return nil
}
// Delete removes the key and value from the mapping.
func (sb *ShardKV) Delete(k common.Hash) error {
func (sb *ShardKV) Delete(k []byte) error {
sb.lock.Lock()
defer sb.lock.Unlock()
// There is no return value for deleting a simple key in a go map.
delete(sb.kv, k)
delete(sb.kv, common.BytesToHash(k))
return nil
}

View File

@@ -2,73 +2,77 @@ package database
import (
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/sharding"
)
// Verifies that ShardKV implements the ShardBackend interface.
var _ = sharding.ShardBackend(&ShardKV{})
var _ = ShardBackend(&ShardKV{})
func Test_ShardKVPut(t *testing.T) {
kv := NewShardKV()
hash := common.BytesToHash([]byte("ralph merkle"))
if err := kv.Put(hash, []byte{1, 2, 3}); err != nil {
if err := kv.Put([]byte("ralph merkle"), []byte{1, 2, 3}); err != nil {
t.Errorf("could not save value in kv store: %v", err)
}
}
func Test_ShardKVHas(t *testing.T) {
kv := NewShardKV()
hash := common.BytesToHash([]byte("ralph merkle"))
key := []byte("ralph merkle")
if err := kv.Put(hash, []byte{1, 2, 3}); err != nil {
if err := kv.Put(key, []byte{1, 2, 3}); err != nil {
t.Fatalf("could not save value in kv store: %v", err)
}
if !kv.Has(hash) {
t.Errorf("kv store does not have hash: %v", hash)
has, err := kv.Has(key)
if err != nil {
t.Errorf("could not check if kv store has key: %v", err)
}
if !has {
t.Errorf("kv store should have key: %v", key)
}
hash2 := common.BytesToHash([]byte{})
if kv.Has(hash2) {
t.Errorf("kv store should not contain unset key: %v", hash2)
key2 := []byte{}
has2, err := kv.Has(key2)
if err != nil {
t.Errorf("could not check if kv store has key: %v", err)
}
if has2 {
t.Errorf("kv store should not have non-existent key: %v", key2)
}
}
func Test_ShardKVGet(t *testing.T) {
kv := NewShardKV()
hash := common.BytesToHash([]byte("ralph merkle"))
key := []byte("ralph merkle")
if err := kv.Put(hash, []byte{1, 2, 3}); err != nil {
if err := kv.Put(key, []byte{1, 2, 3}); err != nil {
t.Fatalf("could not save value in kv store: %v", err)
}
val, err := kv.Get(hash)
val, err := kv.Get(key)
if err != nil {
t.Errorf("get failed: %v", err)
}
if val == nil {
if len(val) == 0 {
t.Errorf("no value stored for key")
}
hash2 := common.BytesToHash([]byte{})
val2, err := kv.Get(hash2)
if val2 != nil {
t.Errorf("non-existent key should not have a value. key=%v, value=%v", hash2, val2)
key2 := []byte{}
val2, err := kv.Get(key2)
if len(val2) != 0 {
t.Errorf("non-existent key should not have a value. key=%v, value=%v", key2, val2)
}
}
func Test_ShardKVDelete(t *testing.T) {
kv := NewShardKV()
hash := common.BytesToHash([]byte("ralph merkle"))
key := []byte("ralph merkle")
if err := kv.Put(hash, []byte{1, 2, 3}); err != nil {
if err := kv.Put(key, []byte{1, 2, 3}); err != nil {
t.Fatalf("could not save value in kv store: %v", err)
}
if err := kv.Delete(hash); err != nil {
t.Errorf("could not delete key: %v", hash)
if err := kv.Delete(key); err != nil {
t.Errorf("could not delete key: %v", key)
}
}

View File

@@ -44,6 +44,7 @@ type Node interface {
SMCCaller() *contracts.SMCCaller
SMCTransactor() *contracts.SMCTransactor
DepositFlagSet() bool
DataDirFlag() string
}
// General node for a sharding-enabled system.
@@ -161,7 +162,10 @@ func (n *shardingNode) Register(constructor sharding.ServiceConstructor) error {
// Close the RPC client connection.
func (n *shardingNode) Close() {
n.rpcClient.Close()
// rpcClient could be nil if the connection failed.
if n.rpcClient != nil {
n.rpcClient.Close()
}
}
// CreateTXOpts creates a *TransactOpts with a signer using the default account on the keystore.
@@ -213,6 +217,11 @@ func (n *shardingNode) DepositFlagSet() bool {
return n.ctx.GlobalBool(utils.DepositFlag.Name)
}
// DataDirFlag returns the datadir flag as a string.
func (n *shardingNode) DataDirFlag() string {
return n.ctx.GlobalString(utils.DataDirFlag.Name)
}
// Client to interact with a geth node via JSON-RPC.
func (n *shardingNode) ethereumClient() *ethclient.Client {
return n.client

View File

@@ -26,14 +26,14 @@ func dialRPC(endpoint string) (*rpc.Client, error) {
func initSMC(n *shardingNode) (*contracts.SMC, error) {
b, err := n.client.CodeAt(context.Background(), sharding.ShardingManagerAddress, nil)
if err != nil {
return nil, fmt.Errorf("unable to get contract code at %s: %v", sharding.ShardingManagerAddress, err)
return nil, fmt.Errorf("unable to get contract code at %s: %v", sharding.ShardingManagerAddress.Str(), err)
}
// Deploy SMC for development only.
// TODO: Separate contract deployment from the sharding node. It would only need to be deployed
// once on the mainnet, so this code would not need to ship with the node.
if len(b) == 0 {
log.Info(fmt.Sprintf("No sharding manager contract found at %s. Deploying new contract.", sharding.ShardingManagerAddress))
log.Info(fmt.Sprintf("No sharding manager contract found at %s. Deploying new contract.", sharding.ShardingManagerAddress.Str()))
txOps, err := n.CreateTXOpts(big.NewInt(0))
if err != nil {
@@ -52,7 +52,7 @@ func initSMC(n *shardingNode) (*contracts.SMC, error) {
time.Sleep(1 * time.Second)
}
log.Info(fmt.Sprintf("New contract deployed at %s", addr))
log.Info(fmt.Sprintf("New contract deployed at %s", addr.Str()))
return contract, nil
}

View File

@@ -2,6 +2,7 @@ package notary
import (
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/sharding/database"
"github.com/ethereum/go-ethereum/sharding/node"
)
@@ -9,12 +10,20 @@ import (
// in a sharded system. Must satisfy the Service interface defined in
// sharding/service.go.
type Notary struct {
node node.Node
node node.Node
shardDB database.ShardBackend
}
// NewNotary creates a new notary instance.
func NewNotary(node node.Node) (*Notary, error) {
return &Notary{node}, nil
// Initializes a shardDB that writes to disk at /path/to/datadir/shardchaindata.
// This DB can be used by the Notary service to create Shard struct
// instances.
shardDB, err := database.NewShardDB(node.DataDirFlag(), "shardchaindata")
if err != nil {
return nil, err
}
return &Notary{node, shardDB}, nil
}
// Start the main routine for a notary.

View File

@@ -63,6 +63,10 @@ func (m *mockNode) DepositFlagSet() bool {
return m.DepositFlag
}
func (m *mockNode) DataDirFlag() string {
return "/tmp/datadir"
}
// Unused mockClient methods.
func (m *mockNode) Start() error {
m.t.Fatal("Start called")

View File

@@ -2,6 +2,7 @@ package proposer
import (
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/sharding/database"
"github.com/ethereum/go-ethereum/sharding/node"
)
@@ -9,14 +10,20 @@ import (
// in a sharded system. Must satisfy the Service interface defined in
// sharding/service.go.
type Proposer struct {
node node.Node
node node.Node
shardDB database.ShardBackend
}
// NewProposer creates a struct instance. It is initialized and
// registered as a service upon start of a sharding node.
// Has access to the public methods of this node.
func NewProposer(node node.Node) (*Proposer, error) {
return &Proposer{node}, nil
// Initializes a shardchaindata directory persistent db.
shardDB, err := database.NewShardDB(node.DataDirFlag(), "shardchaindata")
if err != nil {
return nil, err
}
return &Proposer{node, shardDB}, nil
}
// Start the main loop for proposing collations.

View File

@@ -7,25 +7,17 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/rlp"
"github.com/ethereum/go-ethereum/sharding/database"
)
// ShardBackend defines an interface for a shardDB's necessary method
// signatures.
type ShardBackend interface {
Get(k common.Hash) (*[]byte, error)
Has(k common.Hash) bool
Put(k common.Hash, val []byte) error
Delete(k common.Hash) error
}
// Shard base struct.
type Shard struct {
shardDB ShardBackend
shardDB database.ShardBackend
shardID *big.Int
}
// NewShard creates an instance of a Shard struct given a shardID.
func NewShard(shardID *big.Int, shardDB ShardBackend) *Shard {
func NewShard(shardID *big.Int, shardDB database.ShardBackend) *Shard {
return &Shard{
shardID: shardID,
shardDB: shardDB,
@@ -47,17 +39,17 @@ func (s *Shard) ValidateShardID(h *CollationHeader) error {
// HeaderByHash looks up a collation header from the shardDB using the header's hash.
func (s *Shard) HeaderByHash(hash *common.Hash) (*CollationHeader, error) {
encoded, err := s.shardDB.Get(*hash)
encoded, err := s.shardDB.Get(hash.Bytes())
if err != nil {
return nil, fmt.Errorf("get failed: %v", err)
}
if encoded == nil {
if len(encoded) == 0 {
return nil, fmt.Errorf("no value set for header hash: %vs", hash.Hex())
}
var header CollationHeader
stream := rlp.NewStream(bytes.NewReader(*encoded), uint64(len(*encoded)))
stream := rlp.NewStream(bytes.NewReader(encoded), uint64(len(encoded)))
if err := header.DecodeRLP(stream); err != nil {
return nil, fmt.Errorf("could not decode RLP header: %v", err)
}
@@ -88,18 +80,18 @@ func (s *Shard) CanonicalHeaderHash(shardID *big.Int, period *big.Int) (*common.
key := canonicalCollationLookupKey(shardID, period)
// fetches the RLP encoded collation header corresponding to the key.
encoded, err := s.shardDB.Get(key)
encoded, err := s.shardDB.Get(key.Bytes())
if err != nil {
return nil, err
}
if encoded == nil {
if len(encoded) == 0 {
return nil, fmt.Errorf("no canonical collation header set for period=%v, shardID=%v pair: %v", shardID, period, err)
}
// RLP decodes the header, computes its hash.
var header CollationHeader
stream := rlp.NewStream(bytes.NewReader(*encoded), uint64(len(*encoded)))
stream := rlp.NewStream(bytes.NewReader(encoded), uint64(len(encoded)))
if err := header.DecodeRLP(stream); err != nil {
return nil, fmt.Errorf("could not decode RLP header: %v", err)
}
@@ -120,27 +112,26 @@ func (s *Shard) CanonicalCollation(shardID *big.Int, period *big.Int) (*Collatio
// BodyByChunkRoot fetches a collation body.
func (s *Shard) BodyByChunkRoot(chunkRoot *common.Hash) ([]byte, error) {
body, err := s.shardDB.Get(*chunkRoot)
body, err := s.shardDB.Get(chunkRoot.Bytes())
if err != nil {
return nil, err
return []byte{}, err
}
if body == nil {
if len(body) == 0 {
return nil, fmt.Errorf("no corresponding body with chunk root found: %s", chunkRoot)
}
return *body, nil
return body, nil
}
// CheckAvailability is used by notaries to confirm a header's data availability.
func (s *Shard) CheckAvailability(header *CollationHeader) (bool, error) {
key := dataAvailabilityLookupKey(header.ChunkRoot())
val, err := s.shardDB.Get(key)
availability, err := s.shardDB.Get(key.Bytes())
if err != nil {
return false, err
}
if val == nil {
if len(availability) == 0 {
return false, fmt.Errorf("availability not set for header")
}
availability := *val
// availability is a byte array of length 1.
return availability[0] != 0, nil
}
@@ -154,7 +145,7 @@ func (s *Shard) SetAvailability(chunkRoot *common.Hash, availability bool) error
} else {
encoded = []byte{0}
}
return s.shardDB.Put(key, encoded)
return s.shardDB.Put(key.Bytes(), encoded)
}
// SaveHeader adds the collation header to shardDB.
@@ -170,7 +161,7 @@ func (s *Shard) SaveHeader(header *CollationHeader) error {
}
// uses the hash of the header as the key.
return s.shardDB.Put(header.Hash(), encoded)
return s.shardDB.Put(header.Hash().Bytes(), encoded)
}
// SaveBody adds the collation body to the shardDB and sets availability.
@@ -181,7 +172,7 @@ func (s *Shard) SaveBody(body []byte) error {
// right now we will just take the raw keccak256 of the body until #92 is merged.
chunkRoot := common.BytesToHash(body)
s.SetAvailability(&chunkRoot, true)
return s.shardDB.Put(chunkRoot, body)
return s.shardDB.Put(chunkRoot.Bytes(), body)
}
// SaveCollation adds the collation's header and body to shardDB.
@@ -222,7 +213,7 @@ func (s *Shard) SetCanonical(header *CollationHeader) error {
}
// sets the key to be the canonical collation lookup key and val as RLP encoded
// collation header.
return s.shardDB.Put(key, encoded)
return s.shardDB.Put(key.Bytes(), encoded)
}
// dataAvailabilityLookupKey formats a string that will become a lookup

View File

@@ -13,22 +13,22 @@ import (
)
type mockShardDB struct {
kv map[common.Hash]*[]byte
kv map[common.Hash][]byte
}
func (m *mockShardDB) Get(k common.Hash) (*[]byte, error) {
return nil, nil
func (m *mockShardDB) Get(k []byte) ([]byte, error) {
return []byte{}, nil
}
func (m *mockShardDB) Has(k common.Hash) bool {
return false
func (m *mockShardDB) Has(k []byte) (bool, error) {
return false, nil
}
func (m *mockShardDB) Put(k common.Hash, v []byte) error {
func (m *mockShardDB) Put(k []byte, v []byte) error {
return fmt.Errorf("error updating db")
}
func (m *mockShardDB) Delete(k common.Hash) error {
func (m *mockShardDB) Delete(k []byte) error {
return fmt.Errorf("error deleting value in db")
}
@@ -64,7 +64,7 @@ func TestShard_HeaderByHash(t *testing.T) {
header := NewCollationHeader(big.NewInt(1), &emptyHash, big.NewInt(1), &emptyAddr, []byte{})
// creates a mockDB that always returns nil values from .Get and errors in every other method.
mockDB := &mockShardDB{kv: make(map[common.Hash]*[]byte)}
mockDB := &mockShardDB{kv: make(map[common.Hash][]byte)}
// creates a well-functioning shardDB.
shardDB := database.NewShardKV()
@@ -295,10 +295,10 @@ func TestShard_BodyByChunkRoot(t *testing.T) {
}
// setting the val of the key to nil.
if err := shard.shardDB.Put(emptyHash, nil); err != nil {
if err := shard.shardDB.Put([]byte{}, nil); err != nil {
t.Fatalf("could not update shardDB: %v", err)
}
if _, err := shard.BodyByChunkRoot(&emptyHash); err != nil {
if _, err := shard.BodyByChunkRoot(&emptyHash); err == nil {
t.Errorf("value set as nil in shardDB should return error from BodyByChunkRoot")
}
@@ -346,7 +346,7 @@ func TestShard_SetAvailability(t *testing.T) {
header := NewCollationHeader(big.NewInt(1), &chunkRoot, big.NewInt(1), nil, []byte{})
// creates a mockDB that always returns nil values from .Get and errors in every other method.
mockDB := &mockShardDB{kv: make(map[common.Hash]*[]byte)}
mockDB := &mockShardDB{kv: make(map[common.Hash][]byte)}
// creates a well-functioning shardDB.
shardDB := database.NewShardKV()
@@ -401,7 +401,7 @@ func TestShard_SaveCollation(t *testing.T) {
func TestShard_SaveHeader(t *testing.T) {
// creates a mockDB that always returns nil values from .Get and errors in every other method.
mockDB := &mockShardDB{kv: make(map[common.Hash]*[]byte)}
mockDB := &mockShardDB{kv: make(map[common.Hash][]byte)}
emptyHash := common.BytesToHash([]byte{})
errorShard := NewShard(big.NewInt(1), mockDB)
@@ -413,7 +413,7 @@ func TestShard_SaveHeader(t *testing.T) {
func TestShard_SaveBody(t *testing.T) {
// creates a mockDB that always returns nil values from .Get and errors in every other method.
mockDB := &mockShardDB{kv: make(map[common.Hash]*[]byte)}
mockDB := &mockShardDB{kv: make(map[common.Hash][]byte)}
errorShard := NewShard(big.NewInt(1), mockDB)
if err := errorShard.SaveBody([]byte{1, 2, 3}); err == nil {