Compare commits

...

30 Commits

Author SHA1 Message Date
Nishant Das
7e50c36725 Add Hasher To State Data Types (#5244)
* add hasher
* re-order stuff
* Merge refs/heads/master into addHasher
* Merge refs/heads/master into addHasher
* Merge refs/heads/master into addHasher
* Merge refs/heads/master into addHasher
* Merge refs/heads/master into addHasher
* Merge refs/heads/master into addHasher
* Merge refs/heads/master into addHasher
* Merge refs/heads/master into addHasher
* Merge refs/heads/master into addHasher
* Merge refs/heads/master into addHasher
2020-03-31 18:57:19 +00:00
terence tsao
e22365c4a8 Uncomment out cold state tests (#5252)
* Fixed most of the tests

* All tests passing

* All tests passing

* Fix merge conflict

* Fixed error test

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-31 11:23:39 -05:00
Nishant Das
c8f8e3f1e0 Unmarshal Block instead of State (#5246)
* unmarshal block instead of state
* add fallback
* Merge refs/heads/master into dontUnmarshal
* Merge refs/heads/master into dontUnmarshal
* Merge refs/heads/master into dontUnmarshal
* Merge refs/heads/master into dontUnmarshal
* Merge refs/heads/master into dontUnmarshal
* Merge refs/heads/master into dontUnmarshal
* Merge refs/heads/master into dontUnmarshal
* Merge refs/heads/master into dontUnmarshal
2020-03-31 15:25:58 +00:00
Ivan Martinez
3e81afd7ab Skip anti-flake E2E tests (#5257)
* Skip anti-flake

* Log out the shard index to see it per shard

* Attempt fixes

* Remove unneeded log

* Change eth1 ports

* Remove skips

* Remove log

* Attempt local build

* Fix formatting

* Formatting

* Skip anti flake tests
2020-03-31 23:15:33 +08:00
Ivan Martinez
404a0f6bda Attempt E2E flaking fix (#5256)
* Fix test sharding

* Attempt fix
2020-03-31 12:39:56 +08:00
Preston Van Loon
00ef08b3dc Debug: add cgo symbolizer (#5255)
* Add cgo_symbolizer config

* Add comment

* use import block
2020-03-30 20:20:27 -07:00
Preston Van Loon
6edb3018f9 Add configurations for BLS builds (#5254)
* Add configurations for BLS builds
* Merge refs/heads/master into bls-configurations
2020-03-31 01:58:27 +00:00
Preston Van Loon
17516b625e Use math.Sqrt for IntegerSquareRoot (#5253)
* use std
* Merge refs/heads/master into faster-sqrt
2020-03-31 01:27:37 +00:00
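The idea in this commit is to replace the previous iterative integer method with the hardware square root. A minimal sketch of that approach for a uint64 helper like the repository's mathutil.IntegerSquareRoot (the exact committed code is not shown in this compare); the clamp and correction loops guard against float64 rounding on large inputs:

import "math"

// IntegerSquareRoot returns the largest x such that x*x <= n, using the
// hardware square root as a starting point and correcting for float64
// rounding with a few integer steps.
func IntegerSquareRoot(n uint64) uint64 {
	x := uint64(math.Sqrt(float64(n)))
	if x > math.MaxUint32 {
		// float64(n) can round up past 2^64, making sqrt overshoot the
		// largest possible root of a uint64.
		x = math.MaxUint32
	}
	for x*x > n {
		x--
	}
	for x < math.MaxUint32 && (x+1)*(x+1) <= n {
		x++
	}
	return x
}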
terence tsao
7f7866ff2a Micro optimizations on new-state-mgmt service for initial syncing (#5241)
* Starting a quick PoC

* Rate limit to one epoch worth of blocks in memory

* Proof of concept working

* Quick comment out

* Save previous finalized checkpoint

* Test

* Minor fixes

* More run time fixes

* Remove panic

* Feature flag

* Removed unused methods

* Fixed tests

* E2e test

* comment

* Compatible with current initial sync

* Starting

* New cache

* Cache getters and setters

* It should be part of state gen

* Need to use cache for DB

* Don't have to use finalized state

* Rm unused file

* some changes to memory mgmt when using mempool

* More run time fixes

* Can sync to head

* Feedback

* Revert "some changes to memory mgmt when using mempool"

This reverts commit f5b3e7ff47.

* Fixed sync tests

* Fixed existing tests

* Test for state summary getter

* Gaz

* Fix kafka passthrough

* Fixed inputs

* Gaz

* Fixed build

* Fixed visibility

* Trying without the ignore

* Didn't work..

* Fix kafka

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2020-03-30 17:10:45 -05:00
terence tsao
c5f186d56f Batch save blocks for initial sync. 80% faster BPS (#5215)
* Starting a quick PoC
* Rate limit to one epoch worth of blocks in memory
* Proof of concept working
* Quick comment out
* Save previous finalized checkpoint
* Merge branch 'master' of github.com:prysmaticlabs/prysm into batch-save
* Test
* Merge branch 'prev-finalized-getter' into batch-save
* Minor fixes
* Use a map
* More run time fixes
* Remove panic
* Feature flag
* Removed unused methods
* Fixed tests
* E2e test
* Merge branch 'master' into batch-save
* comment
* Merge branch 'master' into batch-save
* Compatible with current initial sync
* Merge branch 'batch-save' of github.com:prysmaticlabs/prysm into batch-save
* Merge refs/heads/master into batch-save
* Merge refs/heads/master into batch-save
* Merge refs/heads/master into batch-save
* Merge branch 'master' of github.com:prysmaticlabs/prysm into batch-save
* Feedback
* Merge branch 'batch-save' of github.com:prysmaticlabs/prysm into batch-save
* Merge refs/heads/master into batch-save
2020-03-30 18:04:10 +00:00
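The ~80% blocks-per-second gain comes from turning many single-block writes into one bbolt transaction. A condensed sketch of that pattern with a hypothetical saveBlocksBatch helper (the real SaveBlocks changes appear in the kv diffs further down); bbolt pays one fsync per Update call regardless of how many keys are written:

import (
	bolt "go.etcd.io/bbolt"
)

var blocksBucket = []byte("blocks")

// saveBlocksBatch writes all pending (root -> encoded block) pairs inside a
// single bbolt transaction. One fsync for the whole batch, instead of one
// per block, is where the bulk of the speedup comes from.
func saveBlocksBatch(db *bolt.DB, pending map[[32]byte][]byte) error {
	return db.Update(func(tx *bolt.Tx) error {
		bkt := tx.Bucket(blocksBucket)
		for root, enc := range pending {
			if bkt.Get(root[:]) != nil {
				continue // already saved earlier, skip the write
			}
			if err := bkt.Put(root[:], enc); err != nil {
				return err
			}
		}
		return nil
	})
}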
Ivan Martinez
0982ff124e Fix E2E test sharding (#5248) 2020-03-30 12:10:00 -04:00
Nishant Das
63df1d0b8d Add Merkleize With Customized Hasher (#5234)
* add buffer for merkleizer
* add comment
* Merge refs/heads/master into merkleize
* Merge refs/heads/master into merkleize
* Merge refs/heads/master into merkleize
* Merge refs/heads/master into merkleize
* Merge refs/heads/master into merkleize
* Merge branch 'master' of https://github.com/prysmaticlabs/geth-sharding into merkleize
* Merge branch 'merkleize' of https://github.com/prysmaticlabs/geth-sharding into merkleize
* lint
* Merge refs/heads/master into merkleize
2020-03-29 06:13:24 +00:00
Ivan Martinez
cb9ac6282f Separate anti flakes to prevent E2E issues (#5238)
* Separate anti flakes

* Gaz
2020-03-29 13:54:13 +08:00
terence tsao
c67b01e5d3 Check new state mgmt service is compatible with DB (#5231) 2020-03-28 18:07:51 -07:00
terence tsao
b40e6db1e5 Fix save blocks return nil (#5237)
* Fixed save blocks return nil
* Merge refs/heads/master into fix-batch-save-blocks
* Merge refs/heads/master into fix-batch-save-blocks
2020-03-28 19:05:56 +00:00
Preston Van Loon
f89d753275 Add configurable e2e epochs (#5235)
* Add configurable e2e epochs
* Merge refs/heads/master into configurable-test-epochs
* Merge refs/heads/master into configurable-test-epochs
2020-03-28 18:47:31 +00:00
Preston Van Loon
a24546152b HashProto: Use fastssz when available (#5218)
* Use fastssz when available
* fix tests
* fix most tests
* Merge branch 'master' into faster-hash-proto
* Merge refs/heads/master into faster-hash-proto
* Merge refs/heads/master into faster-hash-proto
* Merge refs/heads/master into faster-hash-proto
* fix last test
* Merge branch 'faster-hash-proto' of github.com:prysmaticlabs/prysm into faster-hash-proto-2
* lint
* fix last test
* fix again
* Update beacon-chain/cache/checkpoint_state_test.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>
* Merge refs/heads/master into faster-hash-proto
2020-03-28 18:32:11 +00:00
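A sketch of the dispatch this commit describes, with hypothetical names: types whose generated fastssz code provides a HashTreeRoot method take the fast path, and everything else falls back to the reflection-based go-ssz implementation.

import (
	"github.com/gogo/protobuf/proto"
	"github.com/prysmaticlabs/go-ssz"
)

// fastSSZ is the method set emitted by fastssz code generation.
type fastSSZ interface {
	HashTreeRoot() ([32]byte, error)
}

// hashProto roots a protobuf message, preferring generated fastssz code over
// the much slower reflection-based generic implementation.
func hashProto(msg proto.Message) ([32]byte, error) {
	if h, ok := msg.(fastSSZ); ok {
		return h.HashTreeRoot()
	}
	return ssz.HashTreeRoot(msg)
}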
Preston Van Loon
6bc70e228f Prevent panic for different size bitlists (#5233)
* Fix #5232
* Merge branch 'master' into bugfix-5232
2020-03-28 06:25:49 +00:00
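The panic in #5232 is triggered by combining two aggregation bitlists of different lengths. A defensive sketch of the guard using plain byte slices (the actual fix touches the go-bitfield types used by the attestation code):

import "errors"

// orBitlists ORs two aggregation bitfields, refusing mismatched lengths
// instead of panicking on an out-of-range index.
func orBitlists(a, b []byte) ([]byte, error) {
	if len(a) != len(b) {
		return nil, errors.New("bitlists have different lengths")
	}
	out := make([]byte, len(a))
	for i := range a {
		out[i] = a[i] | b[i]
	}
	return out, nil
}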
terence tsao
f2a3fadda7 Productionization new state service part 1 (#5230)
* Fixed last play methods

* Fixed a regression. Genesis case for state gen

* Comment

* Starting

* Update proto

* Remove boundary root usages

* Update migrate

* Clean up

* Remove unused db methods

* Kafka

* Kafka

* Update tests

* Comments

* Fix state summary tests

* Missed one pass through for Kafka
2020-03-27 13:28:38 -07:00
terence tsao
6a4b17f237 Prune garbage state is not for new state mgmt (#5225)
* Prune garbage state is not for new state mgmt
* Merge branch 'master' into state-mgmt-pruning
* Merge branch 'master' into state-mgmt-pruning
* Merge branch 'master' into state-mgmt-pruning
2020-03-27 14:30:24 +00:00
Victor Farazdagi
7ebb3c1784 init-sync revamp (#5148)
* fix issue with rate limiting
* force fetcher to wait for minimum peers
* adds backoff interval
* cap the max blocks requested from a peer
* queue rewritten
* adds docs to fsm
* fix visibility
* updates fsm
* fsm tests added
* optimizes queue resource allocations
* removes debug log
* replace auto-fixed comment
* fixes typo
* better handling of evil peers
* fixes test
* minor fixes to fsm
* better interface for findEpochState func
2020-03-27 09:54:57 +03:00
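A toy sketch of the epoch-fetch state machine this revamp describes, with invented state names since fsm.go itself is not among the diffs shown below; making transitions explicit is what lets backoff and evil-peer handling be expressed as simple state checks:

import "fmt"

type stateID int

const (
	stateNew stateID = iota
	stateScheduled
	stateDataParsed
	stateSkipped
	stateSent
)

type epochState struct {
	epoch uint64
	state stateID
}

// trigger advances an epoch to the requested state, rejecting transitions
// that the machine does not allow.
func (es *epochState) trigger(to stateID) error {
	allowed := map[stateID][]stateID{
		stateNew:        {stateScheduled},
		stateScheduled:  {stateDataParsed, stateSkipped},
		stateDataParsed: {stateSent},
		stateSkipped:    {stateNew},
		stateSent:       {stateNew},
	}
	for _, s := range allowed[es.state] {
		if s == to {
			es.state = to
			return nil
		}
	}
	return fmt.Errorf("invalid transition %d -> %d", es.state, to)
}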
Nishant Das
33f6c22607 Revert "Add Fast Copy of Trie" (#5228)
* Revert "new fixes (#5221)"

This reverts commit 4118fa5242.
2020-03-27 01:06:30 +00:00
terence tsao
1a0a399bed Handle genesis case for blocks/states at slot index (#5224)
* Handle highest slot = 0
* TestStore_GenesisState_CanGetHighestBelow
* TestStore_GenesisBlock_CanGetHighestAt
* Merge refs/heads/master into handle-genesis
2020-03-27 00:09:14 +00:00
Preston Van Loon
c4c9a8465a Faster hashing for attestation pool (#5217)
* use faster hash proto
* Merge branch 'master' into faster-att-pool
* gaz
* Merge branch 'faster-att-pool' of github.com:prysmaticlabs/prysm into faster-att-pool
* nil checks and failing tests
* Merge refs/heads/master into faster-att-pool
* Merge refs/heads/master into faster-att-pool
* Merge refs/heads/master into faster-att-pool
* Merge refs/heads/master into faster-att-pool
* Merge refs/heads/master into faster-att-pool
* Fix
* Merge branch 'faster-att-pool' of github.com:prysmaticlabs/prysm into faster-att-pool
* Fix tests
2020-03-26 23:55:25 +00:00
terence tsao
5e2faf1a9d Short circuit genesis condition for new state mgmt (#5223)
* Fixed a regression. Genesis case for state gen

* Comment

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-26 14:13:45 -05:00
shayzluf
93e68db5e6 is slashable attestation endpoint implementation (#5209)
* is slashable attestation endpoint implementation
* fix todo
* comment
* Merge refs/heads/master into is_slashable_attestation
* Merge refs/heads/master into is_slashable_attestation
* Merge refs/heads/master into is_slashable_attestation
* Update slasher/rpc/server.go
* Update slasher/rpc/server.go
* Update slasher/rpc/service.go
2020-03-26 18:31:20 +00:00
Nishant Das
cdac3d61ea Custom Block HTR (#5219)
* add custom htr

* fix root

* fix everything

* Apply suggestions from code review

* Update beacon-chain/state/stateutil/blocks.go

* Update beacon-chain/blockchain/receive_block.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Update beacon-chain/blockchain/process_block.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* terence's review

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-26 13:10:22 -05:00
Nishant Das
4118fa5242 new fixes (#5221)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-26 11:21:02 -05:00
terence tsao
2df76798bc Add HighestSlotStatesBelow DB getter (#5213)
* Add HighestSlotStatesBelow
* Tests for HighestSlotStatesBelow
* Typos
* Comment
* Merge refs/heads/master into states-slots-saved-at
* Quick fix
* Merge branch 'states-slots-saved-at' of github.com:prysmaticlabs/prysm into states-slots-saved-at
* Prevent underflow foreal, thanks nishant!
2020-03-26 15:37:40 +00:00
Preston Van Loon
3792bf67b6 Add alpine based docker images for validator and beacon chain (#5214)
* Add alpine based images for validator and beacon chain

* Use an alpine image with glibc

* manual tags on transitional targets

* poke buildkite

* poke buildkite
2020-03-25 19:36:28 -05:00
159 changed files with 3339 additions and 3061 deletions


@@ -39,6 +39,12 @@ build:release --compilation_mode=opt
build:llvm --crosstool_top=@llvm_toolchain//:toolchain
build:llvm --define compiler=llvm
# Build binary with cgo symbolizer for debugging / profiling.
build:cgo_symbolizer --config=llvm
build:cgo_symbolizer --copt=-g
build:cgo_symbolizer --define=USE_CGO_SYMBOLIZER=true
build:cgo_symbolizer -c dbg
# multi-arch cross-compiling toolchain configs:
# -----------------------------------------------
build:cross --crosstool_top=@prysm_toolchains//:multiarch_toolchain
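With the stanza above in place, a symbolizer-enabled debug build is requested by stacking the config onto an ordinary target, for example:

bazel build --config=cgo_symbolizer //beacon-chain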

.gitignore (vendored, 1 change)

@@ -17,6 +17,7 @@ bazel-*
# Coverage outputs
coverage.txt
profile.out
profile.grind
# Nodejs
node_modules


@@ -117,6 +117,18 @@ load(
container_repositories()
load(
"@io_bazel_rules_docker//container:container.bzl",
"container_pull",
)
container_pull(
name = "alpine_cc_linux_amd64",
digest = "sha256:d5cee45549351be7a03a96c7b319b9c1808979b10888b79acca4435cc068005e",
registry = "index.docker.io",
repository = "frolvlad/alpine-glibc",
)
load("@prysm//third_party/herumi:herumi.bzl", "bls_dependencies")
bls_dependencies()
@@ -1637,3 +1649,10 @@ go_repository(
load("@com_github_prysmaticlabs_prombbolt//:repositories.bzl", "prombbolt_dependencies")
prombbolt_dependencies()
go_repository(
name = "com_github_ianlancetaylor_cgosymbolizer",
importpath = "github.com/ianlancetaylor/cgosymbolizer",
sum = "h1:GWsU1WjSE2rtvyTYGcndqmPPkQkBNV7pEuZdnGtwtu4=",
version = "v0.0.0-20200321040036-d43e30eacb43",
)


@@ -1,7 +1,7 @@
load("@io_bazel_rules_go//go:def.bzl", "go_binary", "go_library", "go_test")
load("@io_bazel_rules_docker//go:image.bzl", "go_image")
load("@io_bazel_rules_docker//container:container.bzl", "container_bundle")
load("//tools:binary_targets.bzl", "binary_targets", "go_image_debug")
load("//tools:binary_targets.bzl", "binary_targets", "go_image_alpine", "go_image_debug")
load("@io_bazel_rules_docker//contrib:push-all.bzl", "docker_push")
go_library(
@@ -37,7 +37,11 @@ go_image(
"main.go",
"usage.go",
],
base = "//tools:cc_image",
base = select({
"//tools:base_image_alpine": "//tools:alpine_cc_image",
"//tools:base_image_cc": "//tools:cc_image",
"//conditions:default": "//tools:cc_image",
}),
goarch = "amd64",
goos = "linux",
importpath = "github.com/prysmaticlabs/prysm/beacon-chain",
@@ -76,6 +80,7 @@ container_bundle(
go_image_debug(
name = "image_debug",
image = ":image",
tags = ["manual"],
)
container_bundle(
@@ -87,6 +92,21 @@ container_bundle(
tags = ["manual"],
)
go_image_alpine(
name = "image_alpine",
image = ":image",
tags = ["manual"],
)
container_bundle(
name = "image_bundle_alpine",
images = {
"gcr.io/prysmaticlabs/prysm/beacon-chain:latest-alpine": ":image_alpine",
"gcr.io/prysmaticlabs/prysm/beacon-chain:{DOCKER_TAG}-alpine": ":image_alpine",
},
tags = ["manual"],
)
docker_push(
name = "push_images",
bundle = ":image_bundle",
@@ -99,6 +119,12 @@ docker_push(
tags = ["manual"],
)
docker_push(
name = "push_images_alpine",
bundle = ":image_bundle_alpine",
tags = ["manual"],
)
go_binary(
name = "beacon-chain",
embed = [":go_default_library"],


@@ -40,6 +40,7 @@ go_library(
"//beacon-chain/powchain:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/state/stateutil:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/attestationutil:go_default_library",
"//shared/bytesutil:go_default_library",


@@ -5,9 +5,11 @@ import (
"sort"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/params"
)
@@ -170,10 +172,21 @@ func (s *Service) generateState(ctx context.Context, startRoot [32]byte, endRoot
if preState == nil {
return nil, errors.New("finalized state does not exist in db")
}
endBlock, err := s.beaconDB.Block(ctx, endRoot)
if err != nil {
return nil, err
var endBlock *ethpb.SignedBeaconBlock
if featureconfig.Get().InitSyncBatchSaveBlocks && s.hasInitSyncBlock(endRoot) {
if err := s.beaconDB.SaveBlocks(ctx, s.getInitSyncBlocks()); err != nil {
return nil, err
}
s.clearInitSyncBlocks()
endBlock = s.getInitSyncBlock(endRoot)
} else {
endBlock, err = s.beaconDB.Block(ctx, endRoot)
if err != nil {
return nil, err
}
}
if endBlock == nil {
return nil, errors.New("provided block root does not have block saved in the db")
}
@@ -189,3 +202,48 @@ func (s *Service) generateState(ctx context.Context, startRoot [32]byte, endRoot
}
return postState, nil
}
// This saves a beacon block to the initial sync blocks cache.
func (s *Service) saveInitSyncBlock(r [32]byte, b *ethpb.SignedBeaconBlock) {
s.initSyncBlocksLock.Lock()
defer s.initSyncBlocksLock.Unlock()
s.initSyncBlocks[r] = b
}
// This checks if a beacon block exists in the initial sync blocks cache using the root
// of the block.
func (s *Service) hasInitSyncBlock(r [32]byte) bool {
s.initSyncBlocksLock.RLock()
defer s.initSyncBlocksLock.RUnlock()
_, ok := s.initSyncBlocks[r]
return ok
}
// This retrieves a beacon block from the initial sync blocks cache using the root of
// the block.
func (s *Service) getInitSyncBlock(r [32]byte) *ethpb.SignedBeaconBlock {
s.initSyncBlocksLock.RLock()
defer s.initSyncBlocksLock.RUnlock()
b := s.initSyncBlocks[r]
return b
}
// This retrieves all the beacon blocks from the initial sync blocks cache; the returned
// blocks are unordered.
func (s *Service) getInitSyncBlocks() []*ethpb.SignedBeaconBlock {
s.initSyncBlocksLock.RLock()
defer s.initSyncBlocksLock.RUnlock()
blks := make([]*ethpb.SignedBeaconBlock, 0, len(s.initSyncBlocks))
for _, b := range s.initSyncBlocks {
blks = append(blks, b)
}
return blks
}
// This clears out the initial sync blocks cache.
func (s *Service) clearInitSyncBlocks() {
s.initSyncBlocksLock.Lock()
defer s.initSyncBlocksLock.Unlock()
s.initSyncBlocks = make(map[[32]byte]*ethpb.SignedBeaconBlock)
}
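Taken together, these helpers form a write-back cache. A condensed sketch of the intended lifecycle with a hypothetical driver name (the real flush threshold and call sites appear in the process_block diff further down; imports and Service fields are assumed to match the surrounding diff):

// processForInitSync is a hypothetical driver illustrating the intended use:
// every verified block lands in memory first, and the whole map is flushed
// to the DB in one batch once it grows past the configured bound.
func (s *Service) processForInitSync(ctx context.Context, root [32]byte, signed *ethpb.SignedBeaconBlock) error {
	s.saveInitSyncBlock(root, signed)
	if len(s.getInitSyncBlocks()) > int(initialSyncBlockCacheSize) {
		if err := s.beaconDB.SaveBlocks(ctx, s.getInitSyncBlocks()); err != nil {
			return err
		}
		s.clearInitSyncBlocks()
	}
	return nil
}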


@@ -6,6 +6,7 @@ import (
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
@@ -210,7 +211,7 @@ func TestPruneNonBoundary_CanPrune(t *testing.T) {
func TestGenerateState_CorrectlyGenerated(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
cfg := &Config{BeaconDB: db, StateGen: stategen.New(db)}
cfg := &Config{BeaconDB: db, StateGen: stategen.New(db, cache.NewStateSummaryCache())}
service, err := NewService(context.Background(), cfg)
if err != nil {
t.Fatal(err)


@@ -149,7 +149,7 @@ func TestStore_SaveCheckpointState(t *testing.T) {
LatestBlockHeader: &ethpb.BeaconBlockHeader{},
JustificationBits: []byte{0},
Slashings: make([]uint64, params.BeaconConfig().EpochsPerSlashingsVector),
FinalizedCheckpoint: &ethpb.Checkpoint{},
FinalizedCheckpoint: &ethpb.Checkpoint{Root: bytesutil.PadTo([]byte{'A'}, 32)},
})
r := [32]byte{'g'}
if err := service.beaconDB.SaveState(ctx, s, r); err != nil {
@@ -160,7 +160,7 @@ func TestStore_SaveCheckpointState(t *testing.T) {
service.finalizedCheckpt = &ethpb.Checkpoint{Root: r[:]}
service.prevFinalizedCheckpt = &ethpb.Checkpoint{Root: r[:]}
cp1 := &ethpb.Checkpoint{Epoch: 1, Root: []byte{'A'}}
cp1 := &ethpb.Checkpoint{Epoch: 1, Root: bytesutil.PadTo([]byte{'A'}, 32)}
service.beaconDB.SaveState(ctx, s, bytesutil.ToBytes32([]byte{'A'}))
s1, err := service.getAttPreState(ctx, cp1)
if err != nil {
@@ -170,7 +170,7 @@ func TestStore_SaveCheckpointState(t *testing.T) {
t.Errorf("Wanted state slot: %d, got: %d", 1*params.BeaconConfig().SlotsPerEpoch, s1.Slot())
}
cp2 := &ethpb.Checkpoint{Epoch: 2, Root: []byte{'B'}}
cp2 := &ethpb.Checkpoint{Epoch: 2, Root: bytesutil.PadTo([]byte{'B'}, 32)}
service.beaconDB.SaveState(ctx, s, bytesutil.ToBytes32([]byte{'B'}))
s2, err := service.getAttPreState(ctx, cp2)
if err != nil {
@@ -209,7 +209,7 @@ func TestStore_SaveCheckpointState(t *testing.T) {
service.bestJustifiedCheckpt = &ethpb.Checkpoint{Root: r[:]}
service.finalizedCheckpt = &ethpb.Checkpoint{Root: r[:]}
service.prevFinalizedCheckpt = &ethpb.Checkpoint{Root: r[:]}
cp3 := &ethpb.Checkpoint{Epoch: 1, Root: []byte{'C'}}
cp3 := &ethpb.Checkpoint{Epoch: 1, Root: bytesutil.PadTo([]byte{'C'}, 32)}
service.beaconDB.SaveState(ctx, s, bytesutil.ToBytes32([]byte{'C'}))
s3, err := service.getAttPreState(ctx, cp3)
if err != nil {


@@ -7,18 +7,22 @@ import (
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
"github.com/prysmaticlabs/prysm/beacon-chain/flags"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stateutil"
"github.com/prysmaticlabs/prysm/shared/attestationutil"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// This defines the upper bound on the size of the initial sync block cache.
var initialSyncBlockCacheSize = 2 * params.BeaconConfig().SlotsPerEpoch
// onBlock is called when a gossip block is received. It runs regular state transition on the block.
//
// Spec pseudocode definition:
@@ -67,7 +71,7 @@ func (s *Service) onBlock(ctx context.Context, signed *ethpb.SignedBeaconBlock)
}
preStateValidatorCount := preState.NumValidators()
root, err := ssz.HashTreeRoot(b)
root, err := stateutil.BlockRoot(b)
if err != nil {
return nil, errors.Wrapf(err, "could not get signing root of block %d", b.Slot)
}
@@ -136,12 +140,13 @@ func (s *Service) onBlock(ctx context.Context, signed *ethpb.SignedBeaconBlock)
}
if featureconfig.Get().NewStateMgmt {
finalizedState, err := s.stateGen.StateByRoot(ctx, fRoot)
fRoot := bytesutil.ToBytes32(postState.FinalizedCheckpoint().Root)
fBlock, err := s.beaconDB.Block(ctx, fRoot)
if err != nil {
return nil, err
return nil, errors.Wrap(err, "could not get finalized block to migrate")
}
if err := s.stateGen.MigrateToCold(ctx, finalizedState, fRoot); err != nil {
return nil, err
if err := s.stateGen.MigrateToCold(ctx, fBlock.Block.Slot, fRoot); err != nil {
return nil, errors.Wrap(err, "could not migrate to cold")
}
}
}
@@ -210,13 +215,17 @@ func (s *Service) onBlockInitialSyncStateTransition(ctx context.Context, signed
return errors.Wrap(err, "could not execute state transition")
}
if err := s.beaconDB.SaveBlock(ctx, signed); err != nil {
return errors.Wrapf(err, "could not save block from slot %d", b.Slot)
}
root, err := ssz.HashTreeRoot(b)
root, err := stateutil.BlockRoot(b)
if err != nil {
return errors.Wrapf(err, "could not get signing root of block %d", b.Slot)
}
if featureconfig.Get().InitSyncBatchSaveBlocks {
s.saveInitSyncBlock(root, signed)
} else {
if err := s.beaconDB.SaveBlock(ctx, signed); err != nil {
return errors.Wrapf(err, "could not save block from slot %d", b.Slot)
}
}
if err := s.insertBlockToForkChoiceStore(ctx, b, root, postState); err != nil {
return errors.Wrapf(err, "could not insert block %d to fork choice store", b.Slot)
@@ -247,6 +256,14 @@ func (s *Service) onBlockInitialSyncStateTransition(ctx context.Context, signed
}
}
// Rate limit how many blocks (2 epochs worth of blocks) a node keeps in the memory.
if len(s.getInitSyncBlocks()) > int(initialSyncBlockCacheSize) {
if err := s.beaconDB.SaveBlocks(ctx, s.getInitSyncBlocks()); err != nil {
return err
}
s.clearInitSyncBlocks()
}
// Update finalized check point. Prune the block cache and helper caches on every new finalized epoch.
if postState.FinalizedCheckpointEpoch() > s.finalizedCheckpt.Epoch {
if !featureconfig.Get().NewStateMgmt {
@@ -264,6 +281,13 @@ func (s *Service) onBlockInitialSyncStateTransition(ctx context.Context, signed
}
}
if featureconfig.Get().InitSyncBatchSaveBlocks {
if err := s.beaconDB.SaveBlocks(ctx, s.getInitSyncBlocks()); err != nil {
return err
}
s.clearInitSyncBlocks()
}
if err := s.beaconDB.SaveFinalizedCheckpoint(ctx, postState.FinalizedCheckpoint()); err != nil {
return errors.Wrap(err, "could not save finalized checkpoint")
}
@@ -277,13 +301,12 @@ func (s *Service) onBlockInitialSyncStateTransition(ctx context.Context, signed
if featureconfig.Get().NewStateMgmt {
fRoot := bytesutil.ToBytes32(postState.FinalizedCheckpoint().Root)
finalizedState, err := s.stateGen.StateByRoot(ctx, fRoot)
fBlock, err := s.beaconDB.Block(ctx, fRoot)
if err != nil {
return errors.Wrap(err, "could not get state by root for migration")
return errors.Wrap(err, "could not get finalized block to migrate")
}
if err := s.stateGen.MigrateToCold(ctx, finalizedState, fRoot); err != nil {
return errors.Wrap(err, "could not migrate with new finalized root")
if err := s.stateGen.MigrateToCold(ctx, fBlock.Block.Slot, fRoot); err != nil {
return errors.Wrap(err, "could not migrate to cold")
}
}
}


@@ -229,21 +229,36 @@ func (s *Service) shouldUpdateCurrentJustified(ctx context.Context, newJustified
if helpers.SlotsSinceEpochStarts(s.CurrentSlot()) < params.BeaconConfig().SafeSlotsToUpdateJustified {
return true, nil
}
newJustifiedBlockSigned, err := s.beaconDB.Block(ctx, bytesutil.ToBytes32(newJustifiedCheckpt.Root))
if err != nil {
return false, err
var newJustifiedBlockSigned *ethpb.SignedBeaconBlock
justifiedRoot := bytesutil.ToBytes32(newJustifiedCheckpt.Root)
var err error
if featureconfig.Get().InitSyncBatchSaveBlocks && s.hasInitSyncBlock(justifiedRoot) {
newJustifiedBlockSigned = s.getInitSyncBlock(justifiedRoot)
} else {
newJustifiedBlockSigned, err = s.beaconDB.Block(ctx, justifiedRoot)
if err != nil {
return false, err
}
}
if newJustifiedBlockSigned == nil || newJustifiedBlockSigned.Block == nil {
return false, errors.New("nil new justified block")
}
newJustifiedBlock := newJustifiedBlockSigned.Block
if newJustifiedBlock.Slot <= helpers.StartSlot(s.justifiedCheckpt.Epoch) {
return false, nil
}
justifiedBlockSigned, err := s.beaconDB.Block(ctx, bytesutil.ToBytes32(s.justifiedCheckpt.Root))
if err != nil {
return false, err
var justifiedBlockSigned *ethpb.SignedBeaconBlock
cachedJustifiedRoot := bytesutil.ToBytes32(s.justifiedCheckpt.Root)
if featureconfig.Get().InitSyncBatchSaveBlocks && s.hasInitSyncBlock(cachedJustifiedRoot) {
justifiedBlockSigned = s.getInitSyncBlock(cachedJustifiedRoot)
} else {
justifiedBlockSigned, err = s.beaconDB.Block(ctx, cachedJustifiedRoot)
if err != nil {
return false, err
}
}
if justifiedBlockSigned == nil || justifiedBlockSigned.Block == nil {
return false, errors.New("nil justified block")
}
@@ -267,6 +282,7 @@ func (s *Service) updateJustified(ctx context.Context, state *stateTrie.BeaconSt
if err != nil {
return err
}
if canUpdate {
s.prevJustifiedCheckpt = s.justifiedCheckpt
s.justifiedCheckpt = cpt
@@ -278,6 +294,7 @@ func (s *Service) updateJustified(ctx context.Context, state *stateTrie.BeaconSt
justifiedState := s.initSyncState[justifiedRoot]
// If justified state is nil, resume back to normal syncing process and save
// justified check point.
var err error
if justifiedState == nil {
if s.beaconDB.HasState(ctx, justifiedRoot) {
return s.beaconDB.SaveJustifiedCheckpoint(ctx, cpt)
@@ -376,6 +393,11 @@ func (s *Service) ancestor(ctx context.Context, root []byte, slot uint64) ([]byt
if err != nil {
return nil, errors.Wrap(err, "could not get ancestor block")
}
if featureconfig.Get().InitSyncBatchSaveBlocks && s.hasInitSyncBlock(bytesutil.ToBytes32(root)) {
signed = s.getInitSyncBlock(bytesutil.ToBytes32(root))
}
if signed == nil || signed.Block == nil {
return nil, errors.New("nil block")
}


@@ -13,6 +13,7 @@ import (
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stateutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/traceutil"
"github.com/sirupsen/logrus"
@@ -25,6 +26,7 @@ type BlockReceiver interface {
ReceiveBlockNoPubsub(ctx context.Context, block *ethpb.SignedBeaconBlock) error
ReceiveBlockNoPubsubForkchoice(ctx context.Context, block *ethpb.SignedBeaconBlock) error
ReceiveBlockNoVerify(ctx context.Context, block *ethpb.SignedBeaconBlock) error
HasInitSyncBlock(root [32]byte) bool
}
// ReceiveBlock is a function that defines the operations that are performed on
@@ -88,7 +90,7 @@ func (s *Service) ReceiveBlockNoPubsub(ctx context.Context, block *ethpb.SignedB
defer s.epochParticipationLock.Unlock()
s.epochParticipation[helpers.SlotToEpoch(blockCopy.Block.Slot)] = precompute.Balances
root, err := ssz.HashTreeRoot(blockCopy.Block)
root, err := stateutil.BlockRoot(blockCopy.Block)
if err != nil {
return errors.Wrap(err, "could not get signing root on received block")
}
@@ -139,7 +141,7 @@ func (s *Service) ReceiveBlockNoPubsubForkchoice(ctx context.Context, block *eth
return err
}
root, err := ssz.HashTreeRoot(blockCopy.Block)
root, err := stateutil.BlockRoot(blockCopy.Block)
if err != nil {
return errors.Wrap(err, "could not get signing root on received block")
}
@@ -191,7 +193,7 @@ func (s *Service) ReceiveBlockNoVerify(ctx context.Context, block *ethpb.SignedB
return err
}
root, err := ssz.HashTreeRoot(blockCopy.Block)
root, err := stateutil.BlockRoot(blockCopy.Block)
if err != nil {
return errors.Wrap(err, "could not get signing root on received blockCopy")
}
@@ -235,3 +237,8 @@ func (s *Service) ReceiveBlockNoVerify(ctx context.Context, block *ethpb.SignedB
return nil
}
// HasInitSyncBlock returns true if the block of the input root exists in initial sync blocks cache.
func (s *Service) HasInitSyncBlock(root [32]byte) bool {
return s.hasInitSyncBlock(root)
}


@@ -75,6 +75,8 @@ type Service struct {
checkpointStateLock sync.Mutex
stateGen *stategen.State
opsService *attestations.Service
initSyncBlocks map[[32]byte]*ethpb.SignedBeaconBlock
initSyncBlocksLock sync.RWMutex
}
// Config options for the service.
@@ -117,6 +119,7 @@ func NewService(ctx context.Context, cfg *Config) (*Service, error) {
checkpointState: cache.NewCheckpointStateCache(),
opsService: cfg.OpsService,
stateGen: cfg.StateGen,
initSyncBlocks: make(map[[32]byte]*ethpb.SignedBeaconBlock),
}, nil
}
@@ -140,7 +143,7 @@ func (s *Service) Start() {
if featureconfig.Get().NewStateMgmt {
beaconState, err = s.stateGen.StateByRoot(ctx, bytesutil.ToBytes32(cp.Root))
if err != nil {
log.Fatalf("Could not fetch beacon state: %v", err)
log.Fatalf("Could not fetch beacon state by root: %v", err)
}
} else {
beaconState, err = s.beaconDB.State(ctx, bytesutil.ToBytes32(cp.Root))
@@ -178,9 +181,11 @@ func (s *Service) Start() {
s.prevFinalizedCheckpt = stateTrie.CopyCheckpoint(finalizedCheckpoint)
s.resumeForkChoice(justifiedCheckpoint, finalizedCheckpoint)
if finalizedCheckpoint.Epoch > 1 {
if err := s.pruneGarbageState(ctx, helpers.StartSlot(finalizedCheckpoint.Epoch)-params.BeaconConfig().SlotsPerEpoch); err != nil {
log.WithError(err).Warn("Could not prune old states")
if !featureconfig.Get().NewStateMgmt {
if finalizedCheckpoint.Epoch > 1 {
if err := s.pruneGarbageState(ctx, helpers.StartSlot(finalizedCheckpoint.Epoch)-params.BeaconConfig().SlotsPerEpoch); err != nil {
log.WithError(err).Warn("Could not prune old states")
}
}
}
@@ -332,9 +337,8 @@ func (s *Service) saveGenesisData(ctx context.Context, genesisState *stateTrie.B
return errors.Wrap(err, "could not save genesis state")
}
if err := s.beaconDB.SaveStateSummary(ctx, &pb.StateSummary{
Slot: 0,
Root: genesisBlkRoot[:],
BoundaryRoot: genesisBlkRoot[:],
Slot: 0,
Root: genesisBlkRoot[:],
}); err != nil {
return err
}
@@ -421,7 +425,7 @@ func (s *Service) initializeChainInfo(ctx context.Context) error {
var finalizedState *stateTrie.BeaconState
if featureconfig.Get().NewStateMgmt {
finalizedRoot = s.beaconDB.LastArchivedIndexRoot(ctx)
finalizedState, err = s.stateGen.Resume(ctx, finalizedRoot)
finalizedState, err = s.stateGen.Resume(ctx)
if err != nil {
return errors.Wrap(err, "could not get finalized state from db")
}


@@ -234,3 +234,8 @@ func (ms *ChainService) IsValidAttestation(ctx context.Context, att *ethpb.Attes
// ClearCachedStates does nothing.
func (ms *ChainService) ClearCachedStates() {}
// HasInitSyncBlock mocks the same method in the chain service.
func (ms *ChainService) HasInitSyncBlock(root [32]byte) bool {
return false
}


@@ -11,11 +11,16 @@ go_library(
"eth1_data.go",
"hot_state_cache.go",
"skip_slot_cache.go",
"state_summary.go",
],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/cache",
visibility = ["//beacon-chain:__subpackages__"],
visibility = [
"//beacon-chain:__subpackages__",
"//tools:__subpackages__",
],
deps = [
"//beacon-chain/state:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/featureconfig:go_default_library",
"//shared/hashutil:go_default_library",
"//shared/params:go_default_library",


@@ -4,6 +4,7 @@ import (
"reflect"
"testing"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
@@ -11,7 +12,7 @@ import (
)
func TestCheckpointStateCacheKeyFn_OK(t *testing.T) {
cp := &ethpb.Checkpoint{Epoch: 1, Root: []byte{'A'}}
cp := &ethpb.Checkpoint{Epoch: 1, Root: bytesutil.PadTo([]byte{'A'}, 32)}
st, err := stateTrie.InitializeFromProto(&pb.BeaconState{
Slot: 64,
})
@@ -45,7 +46,7 @@ func TestCheckpointStateCacheKeyFn_InvalidObj(t *testing.T) {
func TestCheckpointStateCache_StateByCheckpoint(t *testing.T) {
cache := NewCheckpointStateCache()
cp1 := &ethpb.Checkpoint{Epoch: 1, Root: []byte{'A'}}
cp1 := &ethpb.Checkpoint{Epoch: 1, Root: bytesutil.PadTo([]byte{'A'}, 32)}
st, err := stateTrie.InitializeFromProto(&pb.BeaconState{
Slot: 64,
})
@@ -75,7 +76,7 @@ func TestCheckpointStateCache_StateByCheckpoint(t *testing.T) {
t.Error("incorrectly cached state")
}
cp2 := &ethpb.Checkpoint{Epoch: 2, Root: []byte{'B'}}
cp2 := &ethpb.Checkpoint{Epoch: 2, Root: bytesutil.PadTo([]byte{'B'}, 32)}
st2, err := stateTrie.InitializeFromProto(&pb.BeaconState{
Slot: 128,
})


@@ -9,6 +9,7 @@ import (
"github.com/gogo/protobuf/proto"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
dbpb "github.com/prysmaticlabs/prysm/proto/beacon/db"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
)
var _ = PendingDepositsFetcher(&DepositCache{})
@@ -33,8 +34,12 @@ func TestInsertPendingDeposit_ignoresNilDeposit(t *testing.T) {
func TestRemovePendingDeposit_OK(t *testing.T) {
db := DepositCache{}
depToRemove := &ethpb.Deposit{Proof: [][]byte{[]byte("A")}}
otherDep := &ethpb.Deposit{Proof: [][]byte{[]byte("B")}}
proof1 := make([][]byte, 33)
proof1[0] = bytesutil.PadTo([]byte{'A'}, 32)
proof2 := make([][]byte, 33)
proof2[0] = bytesutil.PadTo([]byte{'B'}, 32)
depToRemove := &ethpb.Deposit{Proof: proof1}
otherDep := &ethpb.Deposit{Proof: proof2}
db.pendingDeposits = []*dbpb.DepositContainer{
{Deposit: depToRemove, Index: 1},
{Deposit: otherDep, Index: 5},
@@ -57,7 +62,9 @@ func TestRemovePendingDeposit_IgnoresNilDeposit(t *testing.T) {
func TestPendingDeposit_RoundTrip(t *testing.T) {
dc := DepositCache{}
dep := &ethpb.Deposit{Proof: [][]byte{[]byte("A")}}
proof := make([][]byte, 33)
proof[0] = bytesutil.PadTo([]byte{'A'}, 32)
dep := &ethpb.Deposit{Proof: proof}
dc.InsertPendingDeposit(context.Background(), dep, 111, 100, [32]byte{})
dc.RemovePendingDeposit(context.Background(), dep)
if len(dc.pendingDeposits) != 0 {

beacon-chain/cache/state_summary.go (new file, 65 lines)

@@ -0,0 +1,65 @@
package cache
import (
"sync"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
)
// StateSummaryCache caches state summary objects.
type StateSummaryCache struct {
initSyncStateSummaries map[[32]byte]*pb.StateSummary
initSyncStateSummariesLock sync.RWMutex
}
// NewStateSummaryCache creates a new state summary cache.
func NewStateSummaryCache() *StateSummaryCache {
return &StateSummaryCache{
initSyncStateSummaries: make(map[[32]byte]*pb.StateSummary),
}
}
// Put saves a state summary to the initial sync state summaries cache.
func (s *StateSummaryCache) Put(r [32]byte, b *pb.StateSummary) {
s.initSyncStateSummariesLock.Lock()
defer s.initSyncStateSummariesLock.Unlock()
s.initSyncStateSummaries[r] = b
}
// Has checks if a state summary exists in the initial sync state summaries cache using the root
// of the block.
func (s *StateSummaryCache) Has(r [32]byte) bool {
s.initSyncStateSummariesLock.RLock()
defer s.initSyncStateSummariesLock.RUnlock()
_, ok := s.initSyncStateSummaries[r]
return ok
}
// Get retrieves a state summary from the initial sync state summaries cache using the root of
// the block.
func (s *StateSummaryCache) Get(r [32]byte) *pb.StateSummary {
s.initSyncStateSummariesLock.RLock()
defer s.initSyncStateSummariesLock.RUnlock()
b := s.initSyncStateSummaries[r]
return b
}
// GetAll retrieves all the beacon state summaries from the initial sync state summaries cache; the returned
// state summaries are unordered.
func (s *StateSummaryCache) GetAll() []*pb.StateSummary {
s.initSyncStateSummariesLock.RLock()
defer s.initSyncStateSummariesLock.RUnlock()
blks := make([]*pb.StateSummary, 0, len(s.initSyncStateSummaries))
for _, b := range s.initSyncStateSummaries {
blks = append(blks, b)
}
return blks
}
// Clear clears out the initial sync state summaries cache.
func (s *StateSummaryCache) Clear() {
s.initSyncStateSummariesLock.Lock()
defer s.initSyncStateSummariesLock.Unlock()
s.initSyncStateSummaries = make(map[[32]byte]*pb.StateSummary)
}
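A usage sketch with a hypothetical flushSummaries helper: summaries accumulate in memory during initial sync and are persisted in one batched SaveStateSummaries call (the passthrough for it appears in the exporter diff below), after which the cache is reset:

import (
	"context"

	"github.com/prysmaticlabs/prysm/beacon-chain/cache"
	"github.com/prysmaticlabs/prysm/beacon-chain/db"
)

// flushSummaries drains the in-memory summaries into the database in one
// batched write, then resets the cache for the next window of blocks.
func flushSummaries(ctx context.Context, beaconDB db.Database, c *cache.StateSummaryCache) error {
	if err := beaconDB.SaveStateSummaries(ctx, c.GetAll()); err != nil {
		return err
	}
	c.Clear()
	return nil
}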


@@ -58,6 +58,25 @@ func verifySigningRoot(obj interface{}, pub []byte, signature []byte, domain uin
return nil
}
func verifyBlockRoot(blk *ethpb.BeaconBlock, pub []byte, signature []byte, domain uint64) error {
publicKey, err := bls.PublicKeyFromBytes(pub)
if err != nil {
return errors.Wrap(err, "could not convert bytes to public key")
}
sig, err := bls.SignatureFromBytes(signature)
if err != nil {
return errors.Wrap(err, "could not convert bytes to signature")
}
root, err := stateutil.BlockRoot(blk)
if err != nil {
return errors.Wrap(err, "could not get signing root")
}
if !sig.Verify(root[:], publicKey, domain) {
return ErrSigFailedToVerify
}
return nil
}
// Deprecated: This method uses deprecated ssz.SigningRoot.
func verifyDepositDataSigningRoot(obj *ethpb.Deposit_Data, pub []byte, signature []byte, domain uint64) error {
publicKey, err := bls.PublicKeyFromBytes(pub)
@@ -223,7 +242,7 @@ func ProcessBlockHeader(
if err != nil {
return nil, err
}
if err := verifySigningRoot(block.Block, proposer.PublicKey, block.Signature, domain); err != nil {
if err := verifyBlockRoot(block.Block, proposer.PublicKey, block.Signature, domain); err != nil {
return nil, ErrSigFailedToVerify
}
@@ -286,7 +305,7 @@ func ProcessBlockHeaderNoVerify(
return nil, fmt.Errorf("proposer at index %d was previously slashed", idx)
}
bodyRoot, err := ssz.HashTreeRoot(block.Body)
bodyRoot, err := stateutil.BlockBodyRoot(block.Body)
if err != nil {
return nil, err
}


@@ -26,6 +26,7 @@ go_library(
"//tools:__subpackages__",
],
deps = [
"//beacon-chain/cache:go_default_library",
"//beacon-chain/db/iface:go_default_library",
"//beacon-chain/db/kv:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",


@@ -1,8 +1,11 @@
package db
import "github.com/prysmaticlabs/prysm/beacon-chain/db/kv"
import (
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/db/kv"
)
// NewDB initializes a new DB.
func NewDB(dirPath string) (Database, error) {
return kv.NewKVStore(dirPath)
func NewDB(dirPath string, stateSummaryCache *cache.StateSummaryCache) (Database, error) {
return kv.NewKVStore(dirPath, stateSummaryCache)
}


@@ -1,13 +1,14 @@
package db
import (
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/db/kafka"
"github.com/prysmaticlabs/prysm/beacon-chain/db/kv"
)
// NewDB initializes a new DB with kafka wrapper.
func NewDB(dirPath string) (Database, error) {
db, err := kv.NewKVStore(dirPath)
func NewDB(dirPath string, stateSummaryCache *cache.StateSummaryCache) (Database, error) {
db, err := kv.NewKVStore(dirPath, stateSummaryCache)
if err != nil {
return nil, err
}


@@ -27,6 +27,8 @@ type ReadOnlyDatabase interface {
HasBlock(ctx context.Context, blockRoot [32]byte) bool
GenesisBlock(ctx context.Context) (*ethpb.SignedBeaconBlock, error)
IsFinalizedBlock(ctx context.Context, blockRoot [32]byte) bool
HighestSlotBlocks(ctx context.Context) ([]*ethpb.SignedBeaconBlock, error)
HighestSlotBlocksBelow(ctx context.Context, slot uint64) ([]*ethpb.SignedBeaconBlock, error)
// Validator related methods.
ValidatorIndex(ctx context.Context, publicKey []byte) (uint64, bool, error)
HasValidatorIndex(ctx context.Context, publicKey []byte) bool
@@ -36,6 +38,8 @@ type ReadOnlyDatabase interface {
HasState(ctx context.Context, blockRoot [32]byte) bool
StateSummary(ctx context.Context, blockRoot [32]byte) (*ethereum_beacon_p2p_v1.StateSummary, error)
HasStateSummary(ctx context.Context, blockRoot [32]byte) bool
HighestSlotStates(ctx context.Context) ([]*state.BeaconState, error)
HighestSlotStatesBelow(ctx context.Context, slot uint64) ([]*state.BeaconState, error)
// Slashing operations.
ProposerSlashing(ctx context.Context, slashingRoot [32]byte) (*eth.ProposerSlashing, error)
AttesterSlashing(ctx context.Context, slashingRoot [32]byte) (*eth.AttesterSlashing, error)
@@ -52,11 +56,9 @@ type ReadOnlyDatabase interface {
ArchivedCommitteeInfo(ctx context.Context, epoch uint64) (*ethereum_beacon_p2p_v1.ArchivedCommitteeInfo, error)
ArchivedBalances(ctx context.Context, epoch uint64) ([]uint64, error)
ArchivedValidatorParticipation(ctx context.Context, epoch uint64) (*eth.ValidatorParticipation, error)
ArchivedPointState(ctx context.Context, index uint64) (*state.BeaconState, error)
ArchivedPointRoot(ctx context.Context, index uint64) [32]byte
HasArchivedPoint(ctx context.Context, index uint64) bool
LastArchivedIndexRoot(ctx context.Context) [32]byte
LastArchivedIndexState(ctx context.Context) (*state.BeaconState, error)
// Deposit contract related handlers.
DepositContractAddress(ctx context.Context) ([]byte, error)
// Powchain operations.
@@ -88,6 +90,7 @@ type NoHeadAccessDatabase interface {
DeleteState(ctx context.Context, blockRoot [32]byte) error
DeleteStates(ctx context.Context, blockRoots [][32]byte) error
SaveStateSummary(ctx context.Context, summary *ethereum_beacon_p2p_v1.StateSummary) error
SaveStateSummaries(ctx context.Context, summaries []*ethereum_beacon_p2p_v1.StateSummary) error
// Slashing operations.
SaveProposerSlashing(ctx context.Context, slashing *eth.ProposerSlashing) error
SaveAttesterSlashing(ctx context.Context, slashing *eth.AttesterSlashing) error
@@ -104,7 +107,6 @@ type NoHeadAccessDatabase interface {
SaveArchivedCommitteeInfo(ctx context.Context, epoch uint64, info *ethereum_beacon_p2p_v1.ArchivedCommitteeInfo) error
SaveArchivedBalances(ctx context.Context, epoch uint64, balances []uint64) error
SaveArchivedValidatorParticipation(ctx context.Context, epoch uint64, part *eth.ValidatorParticipation) error
SaveArchivedPointState(ctx context.Context, state *state.BeaconState, index uint64) error
SaveArchivedPointRoot(ctx context.Context, blockRoot [32]byte, index uint64) error
SaveLastArchivedIndex(ctx context.Context, index uint64) error
// Deposit contract related handlers.


@@ -238,6 +238,11 @@ func (e Exporter) SaveStateSummary(ctx context.Context, summary *pb.StateSummary
return e.db.SaveStateSummary(ctx, summary)
}
// SaveStateSummaries -- passthrough.
func (e Exporter) SaveStateSummaries(ctx context.Context, summaries []*pb.StateSummary) error {
return e.db.SaveStateSummaries(ctx, summaries)
}
// SaveStates -- passthrough.
func (e Exporter) SaveStates(ctx context.Context, states []*state.BeaconState, blockRoots [][32]byte) error {
return e.db.SaveStates(ctx, states, blockRoots)
@@ -328,21 +333,11 @@ func (e Exporter) SavePowchainData(ctx context.Context, data *db.ETH1ChainData)
return e.db.SavePowchainData(ctx, data)
}
// SaveArchivedPointState -- passthrough
func (e Exporter) SaveArchivedPointState(ctx context.Context, state *state.BeaconState, index uint64) error {
return e.db.SaveArchivedPointState(ctx, state, index)
}
// SaveArchivedPointRoot -- passthrough
func (e Exporter) SaveArchivedPointRoot(ctx context.Context, blockRoot [32]byte, index uint64) error {
return e.db.SaveArchivedPointRoot(ctx, blockRoot, index)
}
// ArchivedPointState -- passthrough
func (e Exporter) ArchivedPointState(ctx context.Context, index uint64) (*state.BeaconState, error) {
return e.db.ArchivedPointState(ctx, index)
}
// ArchivedPointRoot -- passthrough
func (e Exporter) ArchivedPointRoot(ctx context.Context, index uint64) [32]byte {
return e.db.ArchivedPointRoot(ctx, index)
@@ -358,9 +353,24 @@ func (e Exporter) LastArchivedIndexRoot(ctx context.Context) [32]byte {
return e.db.LastArchivedIndexRoot(ctx)
}
// LastArchivedIndexState -- passthrough
func (e Exporter) LastArchivedIndexState(ctx context.Context) (*state.BeaconState, error) {
return e.db.LastArchivedIndexState(ctx)
// HighestSlotBlocks -- passthrough
func (e Exporter) HighestSlotBlocks(ctx context.Context) ([]*ethpb.SignedBeaconBlock, error) {
return e.db.HighestSlotBlocks(ctx)
}
// HighestSlotBlocksBelow -- passthrough
func (e Exporter) HighestSlotBlocksBelow(ctx context.Context, slot uint64) ([]*ethpb.SignedBeaconBlock, error) {
return e.db.HighestSlotBlocksBelow(ctx, slot)
}
// HighestSlotStates -- passthrough
func (e Exporter) HighestSlotStates(ctx context.Context) ([]*state.BeaconState, error) {
return e.db.HighestSlotStates(ctx)
}
// HighestSlotStatesBelow -- passthrough
func (e Exporter) HighestSlotStatesBelow(ctx context.Context, slot uint64) ([]*state.BeaconState, error) {
return e.db.HighestSlotStatesBelow(ctx, slot)
}
// SaveLastArchivedIndex -- passthrough


@@ -8,6 +8,7 @@ go_library(
"attestations.go",
"backup.go",
"blocks.go",
"check_state.go",
"checkpoint.go",
"deposit_contract.go",
"encoding.go",
@@ -25,10 +26,12 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/db/kv",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/db/filters:go_default_library",
"//beacon-chain/db/iface:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stateutil:go_default_library",
"//proto/beacon/db:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/bytesutil:go_default_library",
@@ -72,6 +75,7 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/cache:go_default_library",
"//beacon-chain/db/filters:go_default_library",
"//beacon-chain/state:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",


@@ -3,33 +3,11 @@ package kv
import (
"context"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
bolt "go.etcd.io/bbolt"
"go.opencensus.io/trace"
)
// SaveArchivedPointState saves an archived point state to the DB. This is used for cold state management.
// An archive point index is `slot / slots_per_archive_point`.
func (k *Store) SaveArchivedPointState(ctx context.Context, state *state.BeaconState, index uint64) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveArchivedPointState")
defer span.End()
if state == nil {
return errors.New("nil state")
}
enc, err := encode(state.InnerStateUnsafe())
if err != nil {
return err
}
return k.db.Update(func(tx *bolt.Tx) error {
bucket := tx.Bucket(archivedIndexStateBucket)
return bucket.Put(uint64ToBytes(index), enc)
})
}
// SaveArchivedPointRoot saves an archived point root to the DB. This is used for cold state management.
func (k *Store) SaveArchivedPointRoot(ctx context.Context, blockRoot [32]byte, index uint64) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveArchivedPointRoot")
@@ -71,63 +49,6 @@ func (k *Store) LastArchivedIndexRoot(ctx context.Context) [32]byte {
return bytesutil.ToBytes32(blockRoot)
}
// LastArchivedIndexState from the db.
func (k *Store) LastArchivedIndexState(ctx context.Context) (*state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.LastArchivedIndexState")
defer span.End()
var s *pb.BeaconState
err := k.db.View(func(tx *bolt.Tx) error {
indexRootBucket := tx.Bucket(archivedIndexRootBucket)
lastArchivedIndex := indexRootBucket.Get(lastArchivedIndexKey)
if lastArchivedIndex == nil {
return nil
}
indexStateBucket := tx.Bucket(archivedIndexStateBucket)
enc := indexStateBucket.Get(lastArchivedIndex)
if enc == nil {
return nil
}
var err error
s, err = createState(enc)
return err
})
if err != nil {
return nil, err
}
if s == nil {
return nil, nil
}
return state.InitializeFromProtoUnsafe(s)
}
// ArchivedPointState returns the state of an archived point from the DB.
// This is essential for cold state management and to restore a cold state.
func (k *Store) ArchivedPointState(ctx context.Context, index uint64) (*state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.ArchivedPointState")
defer span.End()
var s *pb.BeaconState
err := k.db.View(func(tx *bolt.Tx) error {
bucket := tx.Bucket(archivedIndexStateBucket)
enc := bucket.Get(uint64ToBytes(index))
if enc == nil {
return nil
}
var err error
s, err = createState(enc)
return err
})
if err != nil {
return nil, err
}
if s == nil {
return nil, nil
}
return state.InitializeFromProtoUnsafe(s)
}
// ArchivedPointRoot returns the block root of an archived point from the DB.
// This is essential for cold state management and to restore a cold state.
func (k *Store) ArchivedPointRoot(ctx context.Context, index uint64) [32]byte {
@@ -153,9 +74,7 @@ func (k *Store) HasArchivedPoint(ctx context.Context, index uint64) bool {
// #nosec G104. Always returns nil.
k.db.View(func(tx *bolt.Tx) error {
iBucket := tx.Bucket(archivedIndexRootBucket)
sBucket := tx.Bucket(archivedIndexStateBucket)
exists = iBucket.Get(uint64ToBytes(index)) != nil &&
sBucket.Get(uint64ToBytes(index)) != nil
exists = iBucket.Get(uint64ToBytes(index)) != nil
return nil
})
return exists


@@ -2,12 +2,7 @@ package kv
import (
"context"
"reflect"
"testing"
"github.com/gogo/protobuf/proto"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
)
func TestArchivedPointIndexRoot_CanSaveRetrieve(t *testing.T) {
@@ -31,93 +26,14 @@ func TestArchivedPointIndexRoot_CanSaveRetrieve(t *testing.T) {
}
}
func TestArchivedPointIndexState_CanSaveRetrieve(t *testing.T) {
db := setupDB(t)
defer teardownDB(t, db)
ctx := context.Background()
i1 := uint64(100)
s := &pb.BeaconState{Slot: 100}
st, err := state.InitializeFromProto(s)
if err != nil {
t.Fatal(err)
}
received, err := db.ArchivedPointState(ctx, i1)
if err != nil {
t.Fatal(err)
}
if received != nil {
t.Fatal("Should not have been saved")
}
if err := db.SaveArchivedPointState(ctx, st, i1); err != nil {
t.Fatal(err)
}
received, err = db.ArchivedPointState(ctx, i1)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(received, st) {
t.Error("Should have been saved")
}
}
func TestArchivedPointIndexHas_CanRetrieve(t *testing.T) {
db := setupDB(t)
defer teardownDB(t, db)
ctx := context.Background()
i1 := uint64(100)
s := &pb.BeaconState{Slot: 100}
st, err := state.InitializeFromProto(s)
if err != nil {
t.Fatal(err)
}
r1 := [32]byte{'A'}
if db.HasArchivedPoint(ctx, i1) {
t.Fatal("Should have have an archived point")
}
if err := db.SaveArchivedPointState(ctx, st, i1); err != nil {
t.Fatal(err)
}
if db.HasArchivedPoint(ctx, i1) {
t.Fatal("Should have have an archived point")
}
if err := db.SaveArchivedPointRoot(ctx, r1, i1); err != nil {
t.Fatal(err)
}
if !db.HasArchivedPoint(ctx, i1) {
t.Fatal("Should have an archived point")
}
}
func TestLastArchivedPoint_CanRetrieve(t *testing.T) {
db := setupDB(t)
defer teardownDB(t, db)
ctx := context.Background()
slot1 := uint64(100)
s1 := &pb.BeaconState{Slot: slot1}
st1, err := state.InitializeFromProto(s1)
if err != nil {
t.Fatal(err)
}
if err := db.SaveArchivedPointState(ctx, st1, 1); err != nil {
t.Fatal(err)
}
if err := db.SaveArchivedPointRoot(ctx, [32]byte{'A'}, 1); err != nil {
t.Fatal(err)
}
slot2 := uint64(200)
s2 := &pb.BeaconState{Slot: slot2}
st2, err := state.InitializeFromProto(s2)
if err != nil {
t.Fatal(err)
}
if err := db.SaveArchivedPointState(ctx, st2, 3); err != nil {
t.Fatal(err)
}
if err := db.SaveArchivedPointRoot(ctx, [32]byte{'B'}, 3); err != nil {
t.Fatal(err)
}
@@ -125,13 +41,6 @@ func TestLastArchivedPoint_CanRetrieve(t *testing.T) {
if err := db.SaveLastArchivedIndex(ctx, 1); err != nil {
t.Fatal(err)
}
lastSaved, err := db.LastArchivedIndexState(ctx)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(lastSaved.InnerStateUnsafe(), st1.InnerStateUnsafe()) {
t.Error("Did not get wanted saved state")
}
if db.LastArchivedIndexRoot(ctx) != [32]byte{'A'} {
t.Error("Did not get wanted root")
}
@@ -139,13 +48,6 @@ func TestLastArchivedPoint_CanRetrieve(t *testing.T) {
if err := db.SaveLastArchivedIndex(ctx, 3); err != nil {
t.Fatal(err)
}
lastSaved, err = db.LastArchivedIndexState(ctx)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(lastSaved.InnerStateUnsafe(), st2.InnerStateUnsafe()) {
t.Error("Did not get wanted saved state")
}
if db.LastArchivedIndexRoot(ctx) != [32]byte{'B'} {
t.Error("Did not get wanted root")
}


@@ -7,9 +7,10 @@ import (
"math"
"strconv"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stateutil"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/db/filters"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
@@ -190,7 +191,7 @@ func (k *Store) DeleteBlocks(ctx context.Context, blockRoots [][32]byte) error {
func (k *Store) SaveBlock(ctx context.Context, signed *ethpb.SignedBeaconBlock) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveBlock")
defer span.End()
blockRoot, err := ssz.HashTreeRoot(signed.Block)
blockRoot, err := stateutil.BlockRoot(signed.Block)
if err != nil {
return err
}
@@ -225,18 +226,18 @@ func (k *Store) SaveBlocks(ctx context.Context, blocks []*ethpb.SignedBeaconBloc
defer span.End()
return k.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(blocksBucket)
for _, block := range blocks {
if err := k.setBlockSlotBitField(ctx, tx, block.Block.Slot); err != nil {
return err
}
blockRoot, err := ssz.HashTreeRoot(block.Block)
blockRoot, err := stateutil.BlockRoot(block.Block)
if err != nil {
return err
}
bkt := tx.Bucket(blocksBucket)
if existingBlock := bkt.Get(blockRoot[:]); existingBlock != nil {
return nil
continue
}
enc, err := encode(block)
if err != nil {
@@ -247,6 +248,7 @@ func (k *Store) SaveBlocks(ctx context.Context, blocks []*ethpb.SignedBeaconBloc
return errors.Wrap(err, "could not update DB indices")
}
k.blockCache.Set(string(blockRoot[:]), block, int64(len(enc)))
if err := bkt.Put(blockRoot[:], enc); err != nil {
return err
}
@@ -261,7 +263,7 @@ func (k *Store) SaveHeadBlockRoot(ctx context.Context, blockRoot [32]byte) error
defer span.End()
return k.db.Update(func(tx *bolt.Tx) error {
if featureconfig.Get().NewStateMgmt {
if tx.Bucket(stateSummaryBucket).Get(blockRoot[:]) == nil {
if tx.Bucket(stateSummaryBucket).Get(blockRoot[:]) == nil && !k.stateSummaryCache.Has(blockRoot) {
return errors.New("no state summary found with head block root")
}
} else {
@@ -317,7 +319,7 @@ func (k *Store) HighestSlotBlocks(ctx context.Context) ([]*ethpb.SignedBeaconBlo
return err
}
blocks, err = k.blocksAtSlotBitfieldIndex(ctx, tx, uint64(highestIndex))
blocks, err = k.blocksAtSlotBitfieldIndex(ctx, tx, highestIndex)
if err != nil {
return err
}
@@ -329,18 +331,21 @@ func (k *Store) HighestSlotBlocks(ctx context.Context) ([]*ethpb.SignedBeaconBlo
// HighestSlotBlocksBelow returns the block with the highest slot below the input slot from the db.
func (k *Store) HighestSlotBlocksBelow(ctx context.Context, slot uint64) ([]*ethpb.SignedBeaconBlock, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.HighestSlotBlockAt")
ctx, span := trace.StartSpan(ctx, "BeaconDB.HighestSlotBlocksBelow")
defer span.End()
blocks := make([]*ethpb.SignedBeaconBlock, 0)
err := k.db.View(func(tx *bolt.Tx) error {
sBkt := tx.Bucket(slotsHasObjectBucket)
savedSlots := sBkt.Get(savedBlockSlotsKey)
if len(savedSlots) == 0 {
savedSlots = bytesutil.MakeEmptyBitlists(int(slot))
}
highestIndex, err := bytesutil.HighestBitIndexAt(savedSlots, int(slot))
if err != nil {
return err
}
blocks, err = k.blocksAtSlotBitfieldIndex(ctx, tx, uint64(highestIndex))
blocks, err = k.blocksAtSlotBitfieldIndex(ctx, tx, highestIndex)
if err != nil {
return err
}
@@ -350,15 +355,24 @@ func (k *Store) HighestSlotBlocksBelow(ctx context.Context, slot uint64) ([]*eth
return blocks, err
}
// blocksAtSlotBitfieldIndex retrieves the block in DB given the input index. The index represents
// blocksAtSlotBitfieldIndex retrieves the blocks in DB given the input index. The index represents
// the position of the slot bitfield the saved block maps to.
func (k *Store) blocksAtSlotBitfieldIndex(ctx context.Context, tx *bolt.Tx, index uint64) ([]*ethpb.SignedBeaconBlock, error) {
func (k *Store) blocksAtSlotBitfieldIndex(ctx context.Context, tx *bolt.Tx, index int) ([]*ethpb.SignedBeaconBlock, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.blocksAtSlotBitfieldIndex")
defer span.End()
highestSlot := index - 1
highestSlot = uint64(math.Max(0, float64(highestSlot)))
f := filters.NewFilter().SetStartSlot(highestSlot).SetEndSlot(highestSlot)
highestSlot = int(math.Max(0, float64(highestSlot)))
if highestSlot == 0 {
gBlock, err := k.GenesisBlock(ctx)
if err != nil {
return nil, err
}
return []*ethpb.SignedBeaconBlock{gBlock}, nil
}
f := filters.NewFilter().SetStartSlot(uint64(highestSlot)).SetEndSlot(uint64(highestSlot))
keys, err := getBlockRootsByFilter(ctx, tx, f)
if err != nil {

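Two behavioral notes on the hunks above: the bit index maps to slot index - 1 (clamped at zero), and a resolved slot of 0 is now answered from the stored genesis block instead of running a slot filter query. The mapping as a sketch (assumed interpretation of the bitfield layout):

// slotForBitIndex mirrors the arithmetic in blocksAtSlotBitfieldIndex:
// the target slot is index-1 clamped at zero, and slot 0 short-circuits
// to the stored genesis block.
func slotForBitIndex(index int) (slot uint64, useGenesis bool) {
	s := index - 1
	if s <= 0 {
		return 0, true // serve the genesis block directly
	}
	return uint64(s), false
}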

@@ -508,6 +508,41 @@ func TestStore_SaveBlock_CanGetHighestAt(t *testing.T) {
}
}
func TestStore_GenesisBlock_CanGetHighestAt(t *testing.T) {
db := setupDB(t)
defer teardownDB(t, db)
ctx := context.Background()
genesisBlock := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{}}
genesisRoot, _ := ssz.HashTreeRoot(genesisBlock.Block)
db.SaveGenesisBlockRoot(ctx, genesisRoot)
db.SaveBlock(ctx, genesisBlock)
block1 := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 1}}
db.SaveBlock(ctx, block1)
highestAt, err := db.HighestSlotBlocksBelow(ctx, 2)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(block1, highestAt[0]) {
t.Errorf("Wanted %v, received %v", block1, highestAt)
}
highestAt, err = db.HighestSlotBlocksBelow(ctx, 1)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(genesisBlock, highestAt[0]) {
t.Errorf("Wanted %v, received %v", genesisBlock, highestAt)
}
highestAt, err = db.HighestSlotBlocksBelow(ctx, 0)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(genesisBlock, highestAt[0]) {
t.Errorf("Wanted %v, received %v", genesisBlock, highestAt)
}
}
func TestStore_SaveBlocks_CanGetHighest(t *testing.T) {
db := setupDB(t)
defer teardownDB(t, db)
@@ -535,6 +570,39 @@ func TestStore_SaveBlocks_CanGetHighest(t *testing.T) {
}
}
func TestStore_SaveBlocks_HasCachedBlocks(t *testing.T) {
db := setupDB(t)
defer teardownDB(t, db)
ctx := context.Background()
b := make([]*ethpb.SignedBeaconBlock, 500)
for i := 0; i < 500; i++ {
b[i] = &ethpb.SignedBeaconBlock{
Block: &ethpb.BeaconBlock{
ParentRoot: []byte("parent"),
Slot: uint64(i),
},
}
}
if err := db.SaveBlock(ctx, b[0]); err != nil {
t.Fatal(err)
}
if err := db.SaveBlocks(ctx, b); err != nil {
t.Fatal(err)
}
f := filters.NewFilter().SetStartSlot(0).SetEndSlot(500)
blks, err := db.Blocks(ctx, f)
if err != nil {
t.Fatal(err)
}
if len(blks) != 500 {
t.Log(len(blks))
t.Error("Did not get wanted blocks")
}
}
func TestStore_DeleteBlock_CanGetHighest(t *testing.T) {
db := setupDB(t)
defer teardownDB(t, db)


@@ -0,0 +1,33 @@
package kv
import (
"errors"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
bolt "go.etcd.io/bbolt"
)
var historicalStateDeletedKey = []byte("historical-states-deleted")
func (kv *Store) ensureNewStateServiceCompatible() error {
if !featureconfig.Get().NewStateMgmt {
return kv.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(newStateServiceCompatibleBucket)
return bkt.Put(historicalStateDeletedKey, []byte{0x01})
})
}
var historicalStateDeleted bool
kv.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(newStateServiceCompatibleBucket)
v := bkt.Get(historicalStateDeletedKey)
historicalStateDeleted = len(v) == 1 && v[0] == 0x01
return nil
})
if historicalStateDeleted {
return errors.New("historical states were pruned in db, do not run with flag --new-state-mgmt")
}
return nil
}
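As wired into NewKVStore below, the guard is one-way: the first run without --new-state-mgmt persists the marker byte, and any later run with the flag fails fast. A usage sketch (featureconfig.Init and the Flags field name are assumed from their use elsewhere in this diff):

func demoGuard(kv *Store) error {
	// First run without the flag: the marker byte is written.
	featureconfig.Init(&featureconfig.Flags{NewStateMgmt: false})
	if err := kv.ensureNewStateServiceCompatible(); err != nil {
		return err
	}
	// A later run with the flag now returns "historical states were pruned
	// in db, do not run with flag --new-state-mgmt".
	featureconfig.Init(&featureconfig.Flags{NewStateMgmt: true})
	return kv.ensureNewStateServiceCompatible()
}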


@@ -5,6 +5,7 @@ import (
"errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/traceutil"
bolt "go.etcd.io/bbolt"
@@ -65,7 +66,7 @@ func (k *Store) SaveJustifiedCheckpoint(ctx context.Context, checkpoint *ethpb.C
return k.db.Update(func(tx *bolt.Tx) error {
bucket := tx.Bucket(checkpointBucket)
if featureconfig.Get().NewStateMgmt {
if tx.Bucket(stateSummaryBucket).Get(checkpoint.Root) == nil {
if tx.Bucket(stateSummaryBucket).Get(checkpoint.Root) == nil && !k.stateSummaryCache.Has(bytesutil.ToBytes32(checkpoint.Root)) {
return errors.New("missing state summary for finalized root")
}
} else {
@@ -93,7 +94,7 @@ func (k *Store) SaveFinalizedCheckpoint(ctx context.Context, checkpoint *ethpb.C
return k.db.Update(func(tx *bolt.Tx) error {
bucket := tx.Bucket(checkpointBucket)
if featureconfig.Get().NewStateMgmt {
if tx.Bucket(stateSummaryBucket).Get(checkpoint.Root) == nil {
if tx.Bucket(stateSummaryBucket).Get(checkpoint.Root) == nil && !k.stateSummaryCache.Has(bytesutil.ToBytes32(checkpoint.Root)) {
return errors.New("missing state summary for finalized root")
}
} else {
@@ -109,6 +110,7 @@ func (k *Store) SaveFinalizedCheckpoint(ctx context.Context, checkpoint *ethpb.C
if err := bucket.Put(finalizedCheckpointKey, enc); err != nil {
return err
}
return k.updateFinalizedBlockRoots(ctx, tx, checkpoint)
})
}


@@ -55,6 +55,7 @@ func (k *Store) updateFinalizedBlockRoots(ctx context.Context, tx *bolt.Tx, chec
return err
}
}
blockRoots, err := k.BlockRoots(ctx, filters.NewFilter().
SetStartEpoch(previousFinalizedCheckpoint.Epoch).
SetEndEpoch(checkpoint.Epoch+1),


@@ -10,6 +10,7 @@ import (
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
prombolt "github.com/prysmaticlabs/prombbolt"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/db/iface"
bolt "go.etcd.io/bbolt"
)
@@ -38,12 +39,13 @@ type Store struct {
validatorIndexCache *ristretto.Cache
stateSlotBitLock sync.Mutex
blockSlotBitLock sync.Mutex
stateSummaryCache *cache.StateSummaryCache
}
// NewKVStore initializes a new boltDB key-value store at the directory
// path specified, creates the kv-buckets based on the schema, and stores
// an open connection db object as a property of the Store struct.
func NewKVStore(dirPath string) (*Store, error) {
func NewKVStore(dirPath string, stateSummaryCache *cache.StateSummaryCache) (*Store, error) {
if err := os.MkdirAll(dirPath, 0700); err != nil {
return nil, err
}
@@ -79,6 +81,7 @@ func NewKVStore(dirPath string) (*Store, error) {
databasePath: dirPath,
blockCache: blockCache,
validatorIndexCache: validatorCache,
stateSummaryCache: stateSummaryCache,
}
if err := kv.db.Update(func(tx *bolt.Tx) error {
@@ -100,7 +103,6 @@ func NewKVStore(dirPath string) (*Store, error) {
powchainBucket,
stateSummaryBucket,
archivedIndexRootBucket,
archivedIndexStateBucket,
slotsHasObjectBucket,
// Indices buckets.
attestationHeadBlockRootBucket,
@@ -111,13 +113,17 @@ func NewKVStore(dirPath string) (*Store, error) {
blockSlotIndicesBucket,
blockParentRootIndicesBucket,
finalizedBlockRootsIndexBucket,
// Migration bucket.
migrationBucket,
// New State Management service bucket.
newStateServiceCompatibleBucket,
)
}); err != nil {
return nil, err
}
if err := kv.ensureNewStateServiceCompatible(); err != nil {
return nil, err
}
err = prometheus.Register(createBoltCollector(kv.db))
return kv, err


@@ -8,6 +8,7 @@ import (
"path"
"testing"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/shared/testutil"
)
@@ -21,7 +22,7 @@ func setupDB(t testing.TB) *Store {
if err := os.RemoveAll(path); err != nil {
t.Fatalf("Failed to remove directory: %v", err)
}
db, err := NewKVStore(path)
db, err := NewKVStore(path, cache.NewStateSummaryCache())
if err != nil {
t.Fatalf("Failed to instantiate DB: %v", err)
}


@@ -23,7 +23,6 @@ var (
archivedValidatorParticipationBucket = []byte("archived-validator-participation")
powchainBucket = []byte("powchain")
archivedIndexRootBucket = []byte("archived-index-root")
archivedIndexStateBucket = []byte("archived-index-state")
slotsHasObjectBucket = []byte("slots-has-objects")
// Key indices buckets.
@@ -47,6 +46,6 @@ var (
savedBlockSlotsKey = []byte("saved-block-slots")
savedStateSlotsKey = []byte("saved-state-slots")
// Migration bucket.
migrationBucket = []byte("migrations")
// New state management service compatibility bucket.
newStateServiceCompatibleBucket = []byte("new-state-compatible")
)


@@ -308,20 +308,34 @@ func slotByBlockRoot(ctx context.Context, tx *bolt.Tx, blockRoot []byte) (uint64
return stateSummary.Slot, nil
}
bkt := tx.Bucket(stateBucket)
bkt := tx.Bucket(blocksBucket)
enc := bkt.Get(blockRoot)
if enc == nil {
return 0, errors.New("state enc can't be nil")
// fallback and check the state.
bkt = tx.Bucket(stateBucket)
enc = bkt.Get(blockRoot)
if enc == nil {
return 0, errors.New("state enc can't be nil")
}
s, err := createState(enc)
if err != nil {
return 0, err
}
if s == nil {
return 0, errors.New("state can't be nil")
}
return s.Slot, nil
}
s, err := createState(enc)
b := &ethpb.SignedBeaconBlock{}
err := decode(enc, b)
if err != nil {
return 0, err
}
if s == nil {
return 0, errors.New("state can't be nil")
if b.Block == nil {
return 0, errors.New("block can't be nil")
}
return s.Slot, nil
return b.Block.Slot, nil
}
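This reordering is the core of the "Unmarshal Block instead of State" change: a signed block decodes quickly while a full beacon state can be orders of magnitude larger, so the block bucket is consulted first and the state bucket kept only as a fallback. A simplified sketch of the lookup order, with maps standing in for bolt buckets:

import "errors"

// slotOf tries the cheap block lookup first and falls back to the state
// lookup only when no block is stored for the root.
func slotOf(root string, blockSlots, stateSlots map[string]uint64) (uint64, error) {
	if slot, ok := blockSlots[root]; ok {
		return slot, nil
	}
	if slot, ok := stateSlots[root]; ok {
		return slot, nil // fallback: only a state is saved for this root
	}
	return 0, errors.New("no block or state found for root")
}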
// HighestSlotStates returns the states with the highest slot from the db.
@@ -339,39 +353,10 @@ func (k *Store) HighestSlotStates(ctx context.Context) ([]*state.BeaconState, er
if err != nil {
return err
}
highestSlot := highestIndex - 1
highestSlot = int(math.Max(0, float64(highestSlot)))
f := filters.NewFilter().SetStartSlot(uint64(highestSlot)).SetEndSlot(uint64(highestSlot))
keys, err := getBlockRootsByFilter(ctx, tx, f)
if err != nil {
return err
}
if len(keys) == 0 {
return errors.New("could not get one block root to get state")
}
stateBkt := tx.Bucket(stateBucket)
for i := range keys {
enc := stateBkt.Get(keys[i][:])
if enc == nil {
continue
}
pbState, err := createState(enc)
if err != nil {
return err
}
s, err := state.InitializeFromProtoUnsafe(pbState)
if err != nil {
return err
}
states = append(states, s)
}
states, err = k.statesAtSlotBitfieldIndex(ctx, tx, highestIndex)
return err
})
if err != nil {
return nil, err
}
@@ -383,6 +368,89 @@ func (k *Store) HighestSlotStates(ctx context.Context) ([]*state.BeaconState, er
return states, nil
}
// HighestSlotStatesBelow returns the states with the highest slot below the input slot
// from the db. Ideally there should just be one state per slot, but since a validator
// can double propose, a single slot could have multiple block roots and
// resulting states. This returns a list of states.
func (k *Store) HighestSlotStatesBelow(ctx context.Context, slot uint64) ([]*state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.HighestSlotStatesBelow")
defer span.End()
var states []*state.BeaconState
err := k.db.View(func(tx *bolt.Tx) error {
slotBkt := tx.Bucket(slotsHasObjectBucket)
savedSlots := slotBkt.Get(savedStateSlotsKey)
if len(savedSlots) == 0 {
savedSlots = bytesutil.MakeEmptyBitlists(int(slot))
}
highestIndex, err := bytesutil.HighestBitIndexAt(savedSlots, int(slot))
if err != nil {
return err
}
states, err = k.statesAtSlotBitfieldIndex(ctx, tx, highestIndex)
return err
})
if err != nil {
return nil, err
}
if len(states) == 0 {
return nil, errors.New("could not get one state")
}
return states, nil
}
// statesAtSlotBitfieldIndex retrieves the states in DB given the input index. The index represents
// the position of the slot bitfield the saved state maps to.
func (k *Store) statesAtSlotBitfieldIndex(ctx context.Context, tx *bolt.Tx, index int) ([]*state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.statesAtSlotBitfieldIndex")
defer span.End()
highestSlot := index - 1
highestSlot = int(math.Max(0, float64(highestSlot)))
if highestSlot == 0 {
gState, err := k.GenesisState(ctx)
if err != nil {
return nil, err
}
return []*state.BeaconState{gState}, nil
}
f := filters.NewFilter().SetStartSlot(uint64(highestSlot)).SetEndSlot(uint64(highestSlot))
keys, err := getBlockRootsByFilter(ctx, tx, f)
if err != nil {
return nil, err
}
if len(keys) == 0 {
return nil, errors.New("could not get one block root to get state")
}
stateBkt := tx.Bucket(stateBucket)
states := make([]*state.BeaconState, 0, len(keys))
for i := range keys {
enc := stateBkt.Get(keys[i][:])
if enc == nil {
continue
}
pbState, err := createState(enc)
if err != nil {
return nil, err
}
s, err := state.InitializeFromProtoUnsafe(pbState)
if err != nil {
return nil, err
}
states = append(states, s)
}
return states, err
}
// setStateSlotBitField sets the state slot bit in DB.
// This helps to track which slot has a saved state in db.
func (k *Store) setStateSlotBitField(ctx context.Context, tx *bolt.Tx, slot uint64) error {


@@ -23,6 +23,26 @@ func (k *Store) SaveStateSummary(ctx context.Context, summary *pb.StateSummary)
})
}
// SaveStateSummaries saves state summary objects to the DB.
func (k *Store) SaveStateSummaries(ctx context.Context, summaries []*pb.StateSummary) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveStateSummaries")
defer span.End()
return k.db.Update(func(tx *bolt.Tx) error {
bucket := tx.Bucket(stateSummaryBucket)
for _, summary := range summaries {
enc, err := encode(summary)
if err != nil {
return err
}
if err := bucket.Put(summary.Root, enc); err != nil {
return err
}
}
return nil
})
}
// StateSummary returns the state summary object from the db using input block root.
func (k *Store) StateSummary(ctx context.Context, blockRoot [32]byte) (*pb.StateSummary, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.StateSummary")

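A short usage sketch of the new batch API (roots and store construction assumed): flushing a slice of cached summaries costs one bolt Update instead of one transaction per summary.

func flushSummaries(ctx context.Context, store *Store, rootA, rootB [32]byte) error {
	summaries := []*pb.StateSummary{
		{Slot: 10, Root: rootA[:]},
		{Slot: 11, Root: rootB[:]},
	}
	// Single write transaction for the whole slice.
	return store.SaveStateSummaries(ctx, summaries)
}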

@@ -15,7 +15,7 @@ func TestStateSummary_CanSaveRretrieve(t *testing.T) {
ctx := context.Background()
r1 := bytesutil.ToBytes32([]byte{'A'})
r2 := bytesutil.ToBytes32([]byte{'B'})
s1 := &pb.StateSummary{Slot: 1, Root: r1[:], BoundaryRoot: r2[:]}
s1 := &pb.StateSummary{Slot: 1, Root: r1[:]}
// State summary should not exist yet.
if db.HasStateSummary(ctx, r1) {
@@ -38,7 +38,7 @@ func TestStateSummary_CanSaveRretrieve(t *testing.T) {
}
// Save a new state summary.
s2 := &pb.StateSummary{Slot: 2, Root: r2[:], BoundaryRoot: r1[:]}
s2 := &pb.StateSummary{Slot: 2, Root: r2[:]}
// State summary should not exist yet.
if db.HasStateSummary(ctx, r2) {


@@ -368,3 +368,133 @@ func TestStore_SaveDeleteState_CanGetHighest(t *testing.T) {
t.Errorf("Did not retrieve saved state: %v != %v", highest, s1)
}
}
func TestStore_SaveDeleteState_CanGetHighestBelow(t *testing.T) {
db := setupDB(t)
defer teardownDB(t, db)
s0 := &pb.BeaconState{Slot: 1}
b := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 1}}
r, _ := ssz.HashTreeRoot(b.Block)
if err := db.SaveBlock(context.Background(), b); err != nil {
t.Fatal(err)
}
st, err := state.InitializeFromProto(s0)
if err != nil {
t.Fatal(err)
}
if err := db.SaveState(context.Background(), st, r); err != nil {
t.Fatal(err)
}
s1 := &pb.BeaconState{Slot: 100}
b = &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 100}}
r1, _ := ssz.HashTreeRoot(b.Block)
if err := db.SaveBlock(context.Background(), b); err != nil {
t.Fatal(err)
}
st, err = state.InitializeFromProto(s1)
if err != nil {
t.Fatal(err)
}
if err := db.SaveState(context.Background(), st, r1); err != nil {
t.Fatal(err)
}
highest, err := db.HighestSlotStates(context.Background())
if err != nil {
t.Fatal(err)
}
if !proto.Equal(highest[0].InnerStateUnsafe(), s1) {
t.Errorf("Did not retrieve saved state: %v != %v", highest, s1)
}
s2 := &pb.BeaconState{Slot: 1000}
b = &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 1000}}
r2, _ := ssz.HashTreeRoot(b.Block)
if err := db.SaveBlock(context.Background(), b); err != nil {
t.Fatal(err)
}
st, err = state.InitializeFromProto(s2)
if err != nil {
t.Fatal(err)
}
if err := db.SaveState(context.Background(), st, r2); err != nil {
t.Fatal(err)
}
highest, err = db.HighestSlotStatesBelow(context.Background(), 2)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(highest[0].InnerStateUnsafe(), s0) {
t.Errorf("Did not retrieve saved state: %v != %v", highest, s0)
}
highest, err = db.HighestSlotStatesBelow(context.Background(), 101)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(highest[0].InnerStateUnsafe(), s1) {
t.Errorf("Did not retrieve saved state: %v != %v", highest, s1)
}
highest, err = db.HighestSlotStatesBelow(context.Background(), 1001)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(highest[0].InnerStateUnsafe(), s2) {
t.Errorf("Did not retrieve saved state: %v != %v", highest, s2)
}
}
func TestStore_GenesisState_CanGetHighestBelow(t *testing.T) {
db := setupDB(t)
defer teardownDB(t, db)
s := &pb.BeaconState{}
genesisState, err := state.InitializeFromProto(s)
if err != nil {
t.Fatal(err)
}
genesisRoot := [32]byte{'a'}
db.SaveGenesisBlockRoot(context.Background(), genesisRoot)
db.SaveState(context.Background(), genesisState, genesisRoot)
s0 := &pb.BeaconState{Slot: 1}
b := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 1}}
r, _ := ssz.HashTreeRoot(b.Block)
if err := db.SaveBlock(context.Background(), b); err != nil {
t.Fatal(err)
}
st, err := state.InitializeFromProto(s0)
if err != nil {
t.Fatal(err)
}
if err := db.SaveState(context.Background(), st, r); err != nil {
t.Fatal(err)
}
highest, err := db.HighestSlotStatesBelow(context.Background(), 2)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(highest[0].InnerStateUnsafe(), s0) {
t.Errorf("Did not retrieve saved state: %v != %v", highest, s0)
}
highest, err = db.HighestSlotStatesBelow(context.Background(), 1)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(highest[0].InnerStateUnsafe(), genesisState.InnerStateUnsafe()) {
t.Errorf("Did not retrieve saved state: %v != %v", highest, s0)
}
highest, err = db.HighestSlotStatesBelow(context.Background(), 0)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(highest[0].InnerStateUnsafe(), genesisState.InnerStateUnsafe()) {
t.Errorf("Did not retrieve saved state: %v != %v", highest, s0)
}
}


@@ -7,6 +7,7 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/db/testing",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//beacon-chain/cache:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/kv:go_default_library",
"//shared/testutil:go_default_library",


@@ -8,6 +8,7 @@ import (
"path"
"testing"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
"github.com/prysmaticlabs/prysm/beacon-chain/db/kv"
"github.com/prysmaticlabs/prysm/shared/testutil"
@@ -23,7 +24,7 @@ func SetupDB(t testing.TB) db.Database {
if err := os.RemoveAll(p); err != nil {
t.Fatalf("failed to remove directory: %v", err)
}
s, err := kv.NewKVStore(p)
s, err := kv.NewKVStore(p, cache.NewStateSummaryCache())
if err != nil {
t.Fatal(err)
}


@@ -8,6 +8,7 @@ go_library(
deps = [
"//beacon-chain/archiver:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositcache:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/flags:go_default_library",


@@ -18,6 +18,7 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/beacon-chain/archiver"
"github.com/prysmaticlabs/prysm/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/cache/depositcache"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
"github.com/prysmaticlabs/prysm/beacon-chain/flags"
@@ -58,20 +59,21 @@ const testSkipPowFlag = "test-skip-pow"
// full PoS node. It handles the lifecycle of the entire system and registers
// services to a service registry.
type BeaconNode struct {
ctx *cli.Context
services *shared.ServiceRegistry
lock sync.RWMutex
stop chan struct{} // Channel to wait for termination notifications.
db db.Database
attestationPool attestations.Pool
exitPool *voluntaryexits.Pool
slashingsPool *slashings.Pool
depositCache *depositcache.DepositCache
stateFeed *event.Feed
blockFeed *event.Feed
opFeed *event.Feed
forkChoiceStore forkchoice.ForkChoicer
stateGen *stategen.State
ctx *cli.Context
services *shared.ServiceRegistry
lock sync.RWMutex
stop chan struct{} // Channel to wait for termination notifications.
db db.Database
stateSummaryCache *cache.StateSummaryCache
attestationPool attestations.Pool
exitPool *voluntaryexits.Pool
slashingsPool *slashings.Pool
depositCache *depositcache.DepositCache
stateFeed *event.Feed
blockFeed *event.Feed
opFeed *event.Feed
forkChoiceStore forkchoice.ForkChoicer
stateGen *stategen.State
}
// NewBeaconNode creates a new node instance, sets up configuration options, and registers
@@ -92,15 +94,16 @@ func NewBeaconNode(ctx *cli.Context) (*BeaconNode, error) {
registry := shared.NewServiceRegistry()
beacon := &BeaconNode{
ctx: ctx,
services: registry,
stop: make(chan struct{}),
stateFeed: new(event.Feed),
blockFeed: new(event.Feed),
opFeed: new(event.Feed),
attestationPool: attestations.NewPool(),
exitPool: voluntaryexits.NewPool(),
slashingsPool: slashings.NewPool(),
ctx: ctx,
services: registry,
stop: make(chan struct{}),
stateFeed: new(event.Feed),
blockFeed: new(event.Feed),
opFeed: new(event.Feed),
attestationPool: attestations.NewPool(),
exitPool: voluntaryexits.NewPool(),
slashingsPool: slashings.NewPool(),
stateSummaryCache: cache.NewStateSummaryCache(),
}
if err := beacon.startDB(ctx); err != nil {
@@ -233,7 +236,7 @@ func (b *BeaconNode) startDB(ctx *cli.Context) error {
clearDB := ctx.Bool(cmd.ClearDB.Name)
forceClearDB := ctx.Bool(cmd.ForceClearDB.Name)
d, err := db.NewDB(dbPath)
d, err := db.NewDB(dbPath, b.stateSummaryCache)
if err != nil {
return err
}
@@ -252,7 +255,7 @@ func (b *BeaconNode) startDB(ctx *cli.Context) error {
if err := d.ClearDB(); err != nil {
return err
}
d, err = db.NewDB(dbPath)
d, err = db.NewDB(dbPath, b.stateSummaryCache)
if err != nil {
return err
}
@@ -264,7 +267,7 @@ func (b *BeaconNode) startDB(ctx *cli.Context) error {
}
func (b *BeaconNode) startStateGen() {
b.stateGen = stategen.New(b.db)
b.stateGen = stategen.New(b.db, b.stateSummaryCache)
}
func (b *BeaconNode) registerP2P(ctx *cli.Context) error {
@@ -437,6 +440,7 @@ func (b *BeaconNode) registerSyncService(ctx *cli.Context) error {
AttPool: b.attestationPool,
ExitPool: b.exitPool,
SlashingPool: b.slashingsPool,
StateSummaryCache: b.stateSummaryCache,
})
return b.services.RegisterService(rs)


@@ -50,5 +50,6 @@ go_test(
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@in_gopkg_d4l3k_messagediff_v1//:go_default_library",
],
)


@@ -11,6 +11,7 @@ import (
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/shared/bls"
"gopkg.in/d4l3k/messagediff.v1"
)
func TestAggregateAttestations_SingleAttestation(t *testing.T) {
@@ -48,10 +49,14 @@ func TestAggregateAttestations_MultipleAttestationsSameRoot(t *testing.T) {
sk := bls.RandKey()
sig := sk.Sign([]byte("dummy_test_data"), 0 /*domain*/)
data := &ethpb.AttestationData{
Source: &ethpb.Checkpoint{},
Target: &ethpb.Checkpoint{},
}
attsToBeAggregated := []*ethpb.Attestation{
{Data: &ethpb.AttestationData{}, AggregationBits: bitfield.Bitlist{0b110001}, Signature: sig.Marshal()},
{Data: &ethpb.AttestationData{}, AggregationBits: bitfield.Bitlist{0b100010}, Signature: sig.Marshal()},
{Data: &ethpb.AttestationData{}, AggregationBits: bitfield.Bitlist{0b101100}, Signature: sig.Marshal()},
{Data: data, AggregationBits: bitfield.Bitlist{0b110001}, Signature: sig.Marshal()},
{Data: data, AggregationBits: bitfield.Bitlist{0b100010}, Signature: sig.Marshal()},
{Data: data, AggregationBits: bitfield.Bitlist{0b101100}, Signature: sig.Marshal()},
}
if err := s.aggregateAttestations(context.Background(), attsToBeAggregated); err != nil {
@@ -66,7 +71,10 @@ func TestAggregateAttestations_MultipleAttestationsSameRoot(t *testing.T) {
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(wanted, s.pool.AggregatedAttestations()) {
got := s.pool.AggregatedAttestations()
if !reflect.DeepEqual(wanted, got) {
diff, _ := messagediff.PrettyDiff(got[0], wanted[0])
t.Log(diff)
t.Error("Did not aggregate attestations")
}
}


@@ -14,9 +14,9 @@ go_library(
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/state:go_default_library",
"//shared/hashutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
],
)
@@ -24,6 +24,7 @@ go_test(
name = "go_default_test",
srcs = [
"aggregated_test.go",
"benchmark_test.go",
"block_test.go",
"forkchoice_test.go",
"unaggregated_test.go",


@@ -3,17 +3,19 @@ package kv
import (
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
)
// SaveAggregatedAttestation saves an aggregated attestation in cache.
func (p *AttCaches) SaveAggregatedAttestation(att *ethpb.Attestation) error {
if att == nil || att.Data == nil {
return nil
}
if !helpers.IsAggregated(att) {
return errors.New("attestation is not aggregated")
}
r, err := ssz.HashTreeRoot(att.Data)
r, err := hashFn(att.Data)
if err != nil {
return errors.Wrap(err, "could not tree hash attestation")
}
@@ -78,10 +80,13 @@ func (p *AttCaches) AggregatedAttestationsBySlotIndex(slot uint64, committeeInde
// DeleteAggregatedAttestation deletes the aggregated attestations in cache.
func (p *AttCaches) DeleteAggregatedAttestation(att *ethpb.Attestation) error {
if att == nil || att.Data == nil {
return nil
}
if !helpers.IsAggregated(att) {
return errors.New("attestation is not aggregated")
}
r, err := ssz.HashTreeRoot(att.Data)
r, err := hashFn(att.Data)
if err != nil {
return errors.Wrap(err, "could not tree hash attestation data")
}
@@ -95,7 +100,7 @@ func (p *AttCaches) DeleteAggregatedAttestation(att *ethpb.Attestation) error {
filtered := make([]*ethpb.Attestation, 0)
for _, a := range attList {
if !att.AggregationBits.Contains(a.AggregationBits) {
if att.AggregationBits.Len() == a.AggregationBits.Len() && !att.AggregationBits.Contains(a.AggregationBits) {
filtered = append(filtered, a)
}
}
@@ -110,7 +115,10 @@ func (p *AttCaches) DeleteAggregatedAttestation(att *ethpb.Attestation) error {
// HasAggregatedAttestation checks if the input attestations has already existed in cache.
func (p *AttCaches) HasAggregatedAttestation(att *ethpb.Attestation) (bool, error) {
r, err := ssz.HashTreeRoot(att.Data)
if att == nil || att.Data == nil {
return false, nil
}
r, err := hashFn(att.Data)
if err != nil {
return false, errors.Wrap(err, "could not tree hash attestation")
}


@@ -13,7 +13,7 @@ import (
func TestKV_Aggregated_NotAggregated(t *testing.T) {
cache := NewAttCaches()
att := &ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b11}}
att := &ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b11}, Data: &ethpb.AttestationData{}}
wanted := "attestation is not aggregated"
if err := cache.SaveAggregatedAttestation(att); !strings.Contains(err.Error(), wanted) {
@@ -52,7 +52,8 @@ func TestKV_Aggregated_CanDelete(t *testing.T) {
att1 := &ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1101}}
att2 := &ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b1101}}
att3 := &ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 3}, AggregationBits: bitfield.Bitlist{0b1101}}
atts := []*ethpb.Attestation{att1, att2, att3}
att4 := &ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 3}, AggregationBits: bitfield.Bitlist{0b10101}}
atts := []*ethpb.Attestation{att1, att2, att3, att4}
for _, att := range atts {
if err := cache.SaveAggregatedAttestation(att); err != nil {


@@ -0,0 +1,19 @@
package kv_test
import (
"testing"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/attestations/kv"
)
func BenchmarkAttCaches(b *testing.B) {
ac := kv.NewAttCaches()
att := &ethpb.Attestation{}
for i := 0; i < b.N; i++ {
ac.SaveUnaggregatedAttestation(att)
ac.DeleteAggregatedAttestation(att)
}
}


@@ -3,13 +3,15 @@ package kv
import (
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
)
// SaveBlockAttestation saves a block attestation in cache.
func (p *AttCaches) SaveBlockAttestation(att *ethpb.Attestation) error {
r, err := ssz.HashTreeRoot(att.Data)
if att == nil {
return nil
}
r, err := hashFn(att.Data)
if err != nil {
return errors.Wrap(err, "could not tree hash attestation")
}
@@ -59,7 +61,10 @@ func (p *AttCaches) BlockAttestations() []*ethpb.Attestation {
// DeleteBlockAttestation deletes a block attestation in cache.
func (p *AttCaches) DeleteBlockAttestation(att *ethpb.Attestation) error {
r, err := ssz.HashTreeRoot(att.Data)
if att == nil {
return nil
}
r, err := hashFn(att.Data)
if err != nil {
return errors.Wrap(err, "could not tree hash attestation")
}


@@ -3,13 +3,15 @@ package kv
import (
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
)
// SaveForkchoiceAttestation saves a forkchoice attestation in cache.
func (p *AttCaches) SaveForkchoiceAttestation(att *ethpb.Attestation) error {
r, err := ssz.HashTreeRoot(att)
if att == nil {
return nil
}
r, err := hashFn(att)
if err != nil {
return errors.Wrap(err, "could not tree hash attestation")
}
@@ -47,7 +49,10 @@ func (p *AttCaches) ForkchoiceAttestations() []*ethpb.Attestation {
// DeleteForkchoiceAttestation deletes a forkchoice attestation in cache.
func (p *AttCaches) DeleteForkchoiceAttestation(att *ethpb.Attestation) error {
r, err := ssz.HashTreeRoot(att)
if att == nil {
return nil
}
r, err := hashFn(att)
if err != nil {
return errors.Wrap(err, "could not tree hash attestation")
}


@@ -4,8 +4,11 @@ import (
"sync"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/hashutil"
)
var hashFn = hashutil.HashProto
// AttCaches defines the caches used to satisfy the attestation pool interface.
// These caches are KV stores for various attestations,
// such as unaggregated, aggregated, or attestations within a block.

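The hashFn swap changes only how cache keys are derived: hashing the protobuf encoding via hashutil.HashProto is cheaper than computing an SSZ tree root, and a cache key needs stability rather than consensus semantics. Keying sketch (signature assumed from its use in this diff):

import "errors"

// cacheKey derives the map key for an attestation's data; any stable hash
// works here, so the SSZ tree root is not required.
func cacheKey(att *ethpb.Attestation) ([32]byte, error) {
	if att == nil || att.Data == nil {
		return [32]byte{}, errors.New("nil attestation")
	}
	return hashFn(att.Data)
}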

@@ -3,18 +3,20 @@ package kv
import (
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
)
// SaveUnaggregatedAttestation saves an unaggregated attestation in cache.
func (p *AttCaches) SaveUnaggregatedAttestation(att *ethpb.Attestation) error {
if att == nil {
return nil
}
if helpers.IsAggregated(att) {
return errors.New("attestation is aggregated")
}
r, err := ssz.HashTreeRoot(att)
r, err := hashFn(att)
if err != nil {
return errors.Wrap(err, "could not tree hash attestation")
}
@@ -52,11 +54,14 @@ func (p *AttCaches) UnaggregatedAttestations() []*ethpb.Attestation {
// DeleteUnaggregatedAttestation deletes the unaggregated attestations in cache.
func (p *AttCaches) DeleteUnaggregatedAttestation(att *ethpb.Attestation) error {
if att == nil {
return nil
}
if helpers.IsAggregated(att) {
return errors.New("attestation is aggregated")
}
r, err := ssz.HashTreeRoot(att)
r, err := hashFn(att)
if err != nil {
return errors.Wrap(err, "could not tree hash attestation")
}


@@ -35,6 +35,7 @@ go_library(
"//beacon-chain/powchain:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/state/stateutil:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/attestationutil:go_default_library",
"//shared/bytesutil:go_default_library",


@@ -13,6 +13,7 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/db/filters"
"github.com/prysmaticlabs/prysm/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stateutil"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/pagination"
"google.golang.org/grpc/codes"
@@ -241,7 +242,7 @@ func (bs *Server) chainHeadRetrieval(ctx context.Context) (*ethpb.ChainHead, err
if headBlock == nil {
return nil, status.Error(codes.Internal, "Head block of chain was nil")
}
headBlockRoot, err := ssz.HashTreeRoot(headBlock.Block)
headBlockRoot, err := stateutil.BlockRoot(headBlock.Block)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get head block root: %v", err)
}


@@ -29,7 +29,6 @@ go_library(
"//shared/sliceutil:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_protolambda_zssz//merkle:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@io_opencensus_go//trace:go_default_library",


@@ -9,6 +9,7 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/state/stateutil"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/hashutil"
"github.com/prysmaticlabs/prysm/shared/memorypool"
)
@@ -209,8 +210,9 @@ func handleByteArrays(val [][]byte, indices []uint64, convertAll bool) ([][32]by
func handleEth1DataSlice(val []*ethpb.Eth1Data, indices []uint64, convertAll bool) ([][32]byte, error) {
roots := [][32]byte{}
hasher := hashutil.CustomSHA256Hasher()
rootCreater := func(input *ethpb.Eth1Data) error {
newRoot, err := stateutil.Eth1Root(input)
newRoot, err := stateutil.Eth1Root(hasher, input)
if err != nil {
return err
}
@@ -237,8 +239,9 @@ func handleEth1DataSlice(val []*ethpb.Eth1Data, indices []uint64, convertAll boo
func handleValidatorSlice(val []*ethpb.Validator, indices []uint64, convertAll bool) ([][32]byte, error) {
roots := [][32]byte{}
hasher := hashutil.CustomSHA256Hasher()
rootCreater := func(input *ethpb.Validator) error {
newRoot, err := stateutil.ValidatorRoot(input)
newRoot, err := stateutil.ValidatorRoot(hasher, input)
if err != nil {
return err
}
@@ -265,8 +268,9 @@ func handleValidatorSlice(val []*ethpb.Validator, indices []uint64, convertAll b
func handlePendingAttestation(val []*pb.PendingAttestation, indices []uint64, convertAll bool) ([][32]byte, error) {
roots := [][32]byte{}
hasher := hashutil.CustomSHA256Hasher()
rootCreator := func(input *pb.PendingAttestation) error {
newRoot, err := stateutil.PendingAttestationRoot(input)
newRoot, err := stateutil.PendingAttestationRoot(hasher, input)
if err != nil {
return err
}


@@ -8,7 +8,6 @@ import (
"github.com/gogo/protobuf/proto"
"github.com/pkg/errors"
"github.com/protolambda/zssz/merkle"
coreutils "github.com/prysmaticlabs/prysm/beacon-chain/core/state/stateutils"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stateutil"
pbp2p "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
@@ -213,7 +212,7 @@ func (b *BeaconState) HashTreeRoot(ctx context.Context) ([32]byte, error) {
// pads the leaves to a power-of-two length.
func merkleize(leaves [][]byte) [][][]byte {
hashFunc := hashutil.CustomSHA256Hasher()
layers := make([][][]byte, merkle.GetDepth(uint64(len(leaves)))+1)
layers := make([][][]byte, stateutil.GetDepth(uint64(len(leaves)))+1)
for len(leaves) != 32 {
leaves = append(leaves, make([]byte, 32))
}
@@ -240,6 +239,7 @@ func merkleize(leaves [][]byte) [][][]byte {
}
func (b *BeaconState) rootSelector(field fieldIndex) ([32]byte, error) {
hasher := hashutil.CustomSHA256Hasher()
switch field {
case genesisTime:
return stateutil.Uint64Root(b.state.GenesisTime), nil
@@ -282,7 +282,7 @@ func (b *BeaconState) rootSelector(field fieldIndex) ([32]byte, error) {
case historicalRoots:
return stateutil.HistoricalRootsRoot(b.state.HistoricalRoots)
case eth1Data:
return stateutil.Eth1Root(b.state.Eth1Data)
return stateutil.Eth1Root(hasher, b.state.Eth1Data)
case eth1DataVotes:
if featureconfig.Get().EnableFieldTrie {
if b.rebuildTrie[field] {
@@ -360,11 +360,11 @@ func (b *BeaconState) rootSelector(field fieldIndex) ([32]byte, error) {
case justificationBits:
return bytesutil.ToBytes32(b.state.JustificationBits), nil
case previousJustifiedCheckpoint:
return stateutil.CheckpointRoot(b.state.PreviousJustifiedCheckpoint)
return stateutil.CheckpointRoot(hasher, b.state.PreviousJustifiedCheckpoint)
case currentJustifiedCheckpoint:
return stateutil.CheckpointRoot(b.state.CurrentJustifiedCheckpoint)
return stateutil.CheckpointRoot(hasher, b.state.CurrentJustifiedCheckpoint)
case finalizedCheckpoint:
return stateutil.CheckpointRoot(b.state.FinalizedCheckpoint)
return stateutil.CheckpointRoot(hasher, b.state.FinalizedCheckpoint)
}
return [32]byte{}, errors.New("invalid field index provided")
}

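The hasher threading follows the Add Hasher To State Data Types change: rootSelector creates one buffered SHA-256 hasher and hands it to each stateutil helper, rather than each helper allocating its own. Call-site sketch (names taken from this diff, simplified):

func checkpointRoots(s *pbp2p.BeaconState) ([32]byte, [32]byte, error) {
	// One hasher instance is reused across both root computations.
	hasher := hashutil.CustomSHA256Hasher()
	prev, err := stateutil.CheckpointRoot(hasher, s.PreviousJustifiedCheckpoint)
	if err != nil {
		return [32]byte{}, [32]byte{}, err
	}
	cur, err := stateutil.CheckpointRoot(hasher, s.CurrentJustifiedCheckpoint)
	return prev, cur, err
}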

@@ -4,7 +4,6 @@ go_library(
name = "go_default_library",
srcs = [
"cold.go",
"epoch_boundary_root.go",
"errors.go",
"getter.go",
"hot.go",
@@ -39,7 +38,6 @@ go_test(
name = "go_default_test",
srcs = [
"cold_test.go",
"epoch_boundary_root_test.go",
"getter_test.go",
"hot_test.go",
"migrate_test.go",
@@ -49,6 +47,7 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/testing:go_default_library",


@@ -22,11 +22,11 @@ func (s *State) saveColdState(ctx context.Context, blockRoot [32]byte, state *st
return errSlotNonArchivedPoint
}
archivedPointIndex := state.Slot() / s.slotsPerArchivedPoint
if err := s.beaconDB.SaveArchivedPointState(ctx, state, archivedPointIndex); err != nil {
if err := s.beaconDB.SaveState(ctx, state, blockRoot); err != nil {
return err
}
if err := s.beaconDB.SaveArchivedPointRoot(ctx, blockRoot, archivedPointIndex); err != nil {
archivedIndex := state.Slot() / s.slotsPerArchivedPoint
if err := s.beaconDB.SaveArchivedPointRoot(ctx, blockRoot, archivedIndex); err != nil {
return err
}
@@ -44,12 +44,9 @@ func (s *State) loadColdStateByRoot(ctx context.Context, blockRoot [32]byte) (*s
ctx, span := trace.StartSpan(ctx, "stateGen.loadColdStateByRoot")
defer span.End()
summary, err := s.beaconDB.StateSummary(ctx, blockRoot)
summary, err := s.stateSummary(ctx, blockRoot)
if err != nil {
return nil, err
}
if summary == nil {
return nil, errUnknownStateSummary
return nil, errors.Wrap(err, "could not get state summary")
}
// Use the archived point state if the summary slot lies on top of the archived point.
@@ -70,7 +67,11 @@ func (s *State) loadColdStateByRoot(ctx context.Context, blockRoot [32]byte) (*s
// This loads the cold state for the input archived point.
func (s *State) loadColdStateByArchivedPoint(ctx context.Context, archivedPoint uint64) (*state.BeaconState, error) {
return s.beaconDB.ArchivedPointState(ctx, archivedPoint)
states, err := s.beaconDB.HighestSlotStatesBelow(ctx, archivedPoint*s.slotsPerArchivedPoint+1)
if err != nil {
return nil, err
}
return states[0], nil
}
// This loads a cold state by slot and block root combinations.
@@ -149,7 +150,7 @@ func (s *State) archivedPointByIndex(ctx context.Context, archiveIndex uint64) (
ctx, span := trace.StartSpan(ctx, "stateGen.loadArchivedPointByIndex")
defer span.End()
if s.beaconDB.HasArchivedPoint(ctx, archiveIndex) {
return s.beaconDB.ArchivedPointState(ctx, archiveIndex)
return s.loadColdStateByArchivedPoint(ctx, archiveIndex)
}
// If for certain reasons, archived point does not exist in DB,
@@ -180,9 +181,6 @@ func (s *State) recoverArchivedPointByIndex(ctx context.Context, archiveIndex ui
if err := s.beaconDB.SaveArchivedPointRoot(ctx, lastRoot, archiveIndex); err != nil {
return nil, err
}
if err := s.beaconDB.SaveArchivedPointState(ctx, archivedState, archiveIndex); err != nil {
return nil, err
}
return archivedState, nil
}
@@ -194,13 +192,10 @@ func (s *State) blockRootSlot(ctx context.Context, blockRoot [32]byte) (uint64,
ctx, span := trace.StartSpan(ctx, "stateGen.blockRootSlot")
defer span.End()
if s.beaconDB.HasStateSummary(ctx, blockRoot) {
summary, err := s.beaconDB.StateSummary(ctx, blockRoot)
if s.StateSummaryExists(ctx, blockRoot) {
summary, err := s.stateSummary(ctx, blockRoot)
if err != nil {
return 0, nil
}
if summary == nil {
return 0, errUnknownStateSummary
return 0, err
}
return summary.Slot, nil
}

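With SaveArchivedPointState removed, an archived point's state is recovered by slot arithmetic alone: point n is saved at slot n*slotsPerArchivedPoint, so querying the highest state below n*slotsPerArchivedPoint+1 yields exactly that state. A worked sketch of the bound (interpretation assumed from loadColdStateByArchivedPoint above):

// querySlotFor computes the exclusive upper bound handed to
// HighestSlotStatesBelow for a given archived point index, e.g.
// point=3, slotsPerArchivedPoint=64 -> query below slot 193, selecting slot 192.
func querySlotFor(point, slotsPerArchivedPoint uint64) uint64 {
	return point*slotsPerArchivedPoint + 1
}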

@@ -8,6 +8,7 @@ import (
"github.com/gogo/protobuf/proto"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/params"
@@ -19,7 +20,7 @@ func TestSaveColdState_NonArchivedPoint(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
service.slotsPerArchivedPoint = 2
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
beaconState.SetSlot(1)
@@ -34,7 +35,7 @@ func TestSaveColdState_CanSave(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
service.slotsPerArchivedPoint = 1
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
beaconState.SetSlot(1)
@@ -51,14 +52,6 @@ func TestSaveColdState_CanSave(t *testing.T) {
if service.beaconDB.ArchivedPointRoot(ctx, 1) != r {
t.Error("Did not get wanted root")
}
receivedState, err := service.beaconDB.ArchivedPointState(ctx, 1)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(receivedState.InnerStateUnsafe(), beaconState.InnerStateUnsafe()) {
t.Error("Did not get wanted state")
}
}
func TestLoadColdStateByRoot_NoStateSummary(t *testing.T) {
@@ -66,8 +59,8 @@ func TestLoadColdStateByRoot_NoStateSummary(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
if _, err := service.loadColdStateByRoot(ctx, [32]byte{'a'}); err != errUnknownStateSummary {
service := New(db, cache.NewStateSummaryCache())
if _, err := service.loadColdStateByRoot(ctx, [32]byte{'a'}); !strings.Contains(err.Error(), errUnknownStateSummary.Error()) {
t.Fatal("Did not get correct error")
}
}
@@ -77,22 +70,25 @@ func TestLoadColdStateByRoot_ByArchivedPoint(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
service.slotsPerArchivedPoint = 1
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
if err := service.beaconDB.SaveArchivedPointState(ctx, beaconState, 1); err != nil {
blk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{}}
blkRoot, _ := ssz.HashTreeRoot(blk.Block)
service.beaconDB.SaveGenesisBlockRoot(ctx, blkRoot)
if err := service.beaconDB.SaveState(ctx, beaconState, blkRoot); err != nil {
t.Fatal(err)
}
r := [32]byte{'a'}
if err := service.beaconDB.SaveStateSummary(ctx, &pb.StateSummary{
Root: r[:],
Root: blkRoot[:],
Slot: 1,
}); err != nil {
t.Fatal(err)
}
loadedState, err := service.loadColdStateByRoot(ctx, r)
loadedState, err := service.loadColdStateByRoot(ctx, blkRoot)
if err != nil {
t.Fatal(err)
}
@@ -106,13 +102,17 @@ func TestLoadColdStateByRoot_IntermediatePlayback(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
service.slotsPerArchivedPoint = 2
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
if err := service.beaconDB.SaveArchivedPointState(ctx, beaconState, 1); err != nil {
blk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{}}
blkRoot, _ := ssz.HashTreeRoot(blk.Block)
service.beaconDB.SaveGenesisBlockRoot(ctx, blkRoot)
if err := service.beaconDB.SaveState(ctx, beaconState, blkRoot); err != nil {
t.Fatal(err)
}
if err := service.beaconDB.SaveArchivedPointRoot(ctx, [32]byte{}, 1); err != nil {
t.Fatal(err)
}
@@ -139,20 +139,27 @@ func TestLoadColdStateBySlotIntermediatePlayback_BeforeCutoff(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
service.slotsPerArchivedPoint = params.BeaconConfig().SlotsPerEpoch * 2
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
if err := service.beaconDB.SaveArchivedPointState(ctx, beaconState, 0); err != nil {
blk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{}}
blkRoot, _ := ssz.HashTreeRoot(blk.Block)
service.beaconDB.SaveGenesisBlockRoot(ctx, blkRoot)
if err := service.beaconDB.SaveState(ctx, beaconState, blkRoot); err != nil {
t.Fatal(err)
}
if err := service.beaconDB.SaveArchivedPointRoot(ctx, [32]byte{}, 0); err != nil {
t.Fatal(err)
}
if err := service.beaconDB.SaveArchivedPointState(ctx, beaconState, 1); err != nil {
futureBeaconState, _ := testutil.DeterministicGenesisState(t, 32)
futureBeaconState.SetSlot(service.slotsPerArchivedPoint)
if err := service.beaconDB.SaveState(ctx, futureBeaconState, [32]byte{'A'}); err != nil {
t.Fatal(err)
}
if err := service.beaconDB.SaveArchivedPointRoot(ctx, [32]byte{}, 1); err != nil {
if err := service.beaconDB.SaveArchivedPointRoot(ctx, [32]byte{'A'}, 1); err != nil {
t.Fatal(err)
}
@@ -171,14 +178,18 @@ func TestLoadColdStateBySlotIntermediatePlayback_AfterCutoff(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
service.slotsPerArchivedPoint = params.BeaconConfig().SlotsPerEpoch
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
if err := service.beaconDB.SaveArchivedPointState(ctx, beaconState, 0); err != nil {
blk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{}}
blkRoot, _ := ssz.HashTreeRoot(blk.Block)
service.beaconDB.SaveGenesisBlockRoot(ctx, blkRoot)
if err := service.beaconDB.SaveState(ctx, beaconState, blkRoot); err != nil {
t.Fatal(err)
}
if err := service.beaconDB.SaveArchivedPointRoot(ctx, [32]byte{}, 0); err != nil {
if err := service.beaconDB.SaveArchivedPointRoot(ctx, blkRoot, 0); err != nil {
t.Fatal(err)
}
@@ -197,7 +208,7 @@ func TestLoadColdStateByRoot_UnknownArchivedState(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
service.slotsPerArchivedPoint = 1
if _, err := service.loadColdIntermediateStateBySlot(ctx, 0); !strings.Contains(err.Error(), errUnknownArchivedState.Error()) {
t.Log(err)
@@ -210,13 +221,19 @@ func TestArchivedPointByIndex_HasPoint(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
beaconState.SetSlot(1)
index := uint64(999)
if err := service.beaconDB.SaveArchivedPointState(ctx, beaconState, index); err != nil {
blk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 1}}
blkRoot, _ := ssz.HashTreeRoot(blk.Block)
if err := service.beaconDB.SaveArchivedPointRoot(ctx, blkRoot, index); err != nil {
t.Fatal(err)
}
if err := service.beaconDB.SaveArchivedPointRoot(ctx, [32]byte{'A'}, index); err != nil {
if err := service.beaconDB.SaveState(ctx, beaconState, blkRoot); err != nil {
t.Fatal(err)
}
if err := service.beaconDB.SaveBlock(ctx, blk); err != nil {
t.Fatal(err)
}
@@ -234,7 +251,7 @@ func TestArchivedPointByIndex_DoesntHavePoint(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
gBlk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{}}
gRoot, err := ssz.HashTreeRoot(gBlk.Block)
@@ -244,6 +261,9 @@ func TestArchivedPointByIndex_DoesntHavePoint(t *testing.T) {
if err := service.beaconDB.SaveBlock(ctx, gBlk); err != nil {
t.Fatal(err)
}
if err := service.beaconDB.SaveGenesisBlockRoot(ctx, gRoot); err != nil {
t.Fatal(err)
}
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
if err := service.beaconDB.SaveState(ctx, beaconState, gRoot); err != nil {
t.Fatal(err)
@@ -258,13 +278,6 @@ func TestArchivedPointByIndex_DoesntHavePoint(t *testing.T) {
if recoveredState.Slot() != service.slotsPerArchivedPoint*2 {
t.Error("Diff state slot")
}
savedArchivedState, err := service.beaconDB.ArchivedPointState(ctx, 2)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(recoveredState.InnerStateUnsafe(), savedArchivedState.InnerStateUnsafe()) {
t.Error("Diff saved archived state")
}
}
func TestRecoverArchivedPointByIndex_CanRecover(t *testing.T) {
@@ -272,7 +285,7 @@ func TestRecoverArchivedPointByIndex_CanRecover(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
gBlk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{}}
gRoot, err := ssz.HashTreeRoot(gBlk.Block)
@@ -282,6 +295,9 @@ func TestRecoverArchivedPointByIndex_CanRecover(t *testing.T) {
if err := service.beaconDB.SaveBlock(ctx, gBlk); err != nil {
t.Fatal(err)
}
if err := service.beaconDB.SaveGenesisBlockRoot(ctx, gRoot); err != nil {
t.Fatal(err)
}
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
if err := service.beaconDB.SaveState(ctx, beaconState, gRoot); err != nil {
t.Fatal(err)
@@ -296,13 +312,6 @@ func TestRecoverArchivedPointByIndex_CanRecover(t *testing.T) {
if recoveredState.Slot() != service.slotsPerArchivedPoint {
t.Error("Diff state slot")
}
savedArchivedState, err := service.beaconDB.ArchivedPointState(ctx, 1)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(recoveredState.InnerStateUnsafe(), savedArchivedState.InnerStateUnsafe()) {
t.Error("Diff savled state")
}
}
func TestBlockRootSlot_Exists(t *testing.T) {
@@ -310,7 +319,7 @@ func TestBlockRootSlot_Exists(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
bRoot := [32]byte{'A'}
bSlot := uint64(100)
if err := service.beaconDB.SaveStateSummary(ctx, &pb.StateSummary{
@@ -335,7 +344,7 @@ func TestBlockRootSlot_CanRecoverAndSave(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
bSlot := uint64(100)
b := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: bSlot}}
bRoot, _ := ssz.HashTreeRoot(b.Block)


@@ -1,24 +0,0 @@
package stategen
// This sets an epoch boundary slot to root mapping.
// The slot is the key and the root is the value.
func (s *State) setEpochBoundaryRoot(slot uint64, root [32]byte) {
s.epochBoundaryLock.Lock()
defer s.epochBoundaryLock.Unlock()
s.epochBoundarySlotToRoot[slot] = root
}
// This reads epoch boundary slot to root mapping.
func (s *State) epochBoundaryRoot(slot uint64) ([32]byte, bool) {
s.epochBoundaryLock.RLock()
defer s.epochBoundaryLock.RUnlock()
r, ok := s.epochBoundarySlotToRoot[slot]
return r, ok
}
// This deletes an entry of epoch boundary slot to root mapping.
func (s *State) deleteEpochBoundaryRoot(slot uint64) {
s.epochBoundaryLock.Lock()
defer s.epochBoundaryLock.Unlock()
delete(s.epochBoundarySlotToRoot, slot)
}


@@ -1,32 +0,0 @@
package stategen
import "testing"
func TestEpochBoundaryRoot_CanSetGetDelete(t *testing.T) {
s := &State{
epochBoundarySlotToRoot: make(map[uint64][32]byte),
}
slot := uint64(100)
r := [32]byte{'A'}
_, exists := s.epochBoundaryRoot(slot)
if exists {
t.Fatal("should not be cached")
}
s.setEpochBoundaryRoot(slot, r)
rReceived, exists := s.epochBoundaryRoot(slot)
if !exists {
t.Fatal("should be cached")
}
if rReceived != r {
t.Error("did not cache right value")
}
s.deleteEpochBoundaryRoot(100)
_, exists = s.epochBoundaryRoot(slot)
if exists {
t.Fatal("should not be cached")
}
}


@@ -5,6 +5,8 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/params"
"go.opencensus.io/trace"
)
@@ -15,6 +17,11 @@ func (s *State) StateByRoot(ctx context.Context, blockRoot [32]byte) (*state.Bea
ctx, span := trace.StartSpan(ctx, "stateGen.StateByRoot")
defer span.End()
// Genesis case. If block root is zero hash, short circuit to use genesis state stored in DB.
if blockRoot == params.BeaconConfig().ZeroHash {
return s.beaconDB.State(ctx, blockRoot)
}
slot, err := s.blockRootSlot(ctx, blockRoot)
if err != nil {
return nil, errors.Wrap(err, "could not get state summary")
@@ -46,5 +53,24 @@ func (s *State) StateBySlot(ctx context.Context, slot uint64) (*state.BeaconStat
// StateSummaryExists returns true if the corresponding state of the input block either
// exists in the DB or it can be generated by state gen.
func (s *State) StateSummaryExists(ctx context.Context, blockRoot [32]byte) bool {
return s.beaconDB.HasStateSummary(ctx, blockRoot)
return s.beaconDB.HasStateSummary(ctx, blockRoot) || s.stateSummaryCache.Has(blockRoot)
}
// This returns the state summary object of a given block root. It first checks the cache,
// then checks the DB. An error is returned if the state summary object is nil.
func (s *State) stateSummary(ctx context.Context, blockRoot [32]byte) (*pb.StateSummary, error) {
var summary *pb.StateSummary
var err error
if s.stateSummaryCache.Has(blockRoot) {
summary = s.stateSummaryCache.Get(blockRoot)
} else {
summary, err = s.beaconDB.StateSummary(ctx, blockRoot)
if err != nil {
return nil, err
}
}
if summary == nil {
return nil, errUnknownStateSummary
}
return summary, nil
}


@@ -5,6 +5,9 @@ import (
"testing"
"github.com/gogo/protobuf/proto"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/params"
@@ -16,14 +19,19 @@ func TestStateByRoot_ColdState(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
service.splitInfo.slot = 2
service.slotsPerArchivedPoint = 1
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
if err := service.beaconDB.SaveArchivedPointState(ctx, beaconState, 1); err != nil {
b := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 1}}
if err := db.SaveBlock(ctx, b); err != nil {
t.Fatal(err)
}
bRoot, _ := ssz.HashTreeRoot(b.Block)
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
beaconState.SetSlot(1)
service.beaconDB.SaveState(ctx, beaconState, bRoot)
r := [32]byte{'a'}
if err := service.beaconDB.SaveStateSummary(ctx, &pb.StateSummary{
Root: r[:],
@@ -46,24 +54,25 @@ func TestStateByRoot_HotStateDB(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
boundaryRoot := [32]byte{'A'}
blkRoot := [32]byte{'B'}
if err := service.beaconDB.SaveState(ctx, beaconState, boundaryRoot); err != nil {
blk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{}}
blkRoot, _ := ssz.HashTreeRoot(blk.Block)
service.beaconDB.SaveGenesisBlockRoot(ctx, blkRoot)
if err := service.beaconDB.SaveState(ctx, beaconState, blkRoot); err != nil {
t.Fatal(err)
}
targetSlot := uint64(10)
targetRoot := [32]byte{'a'}
if err := service.beaconDB.SaveStateSummary(ctx, &pb.StateSummary{
Slot: targetSlot,
Root: blkRoot[:],
BoundaryRoot: boundaryRoot[:],
Slot: targetSlot,
Root: targetRoot[:],
}); err != nil {
t.Fatal(err)
}
loadedState, err := service.StateByRoot(ctx, blkRoot)
loadedState, err := service.StateByRoot(ctx, targetRoot)
if err != nil {
t.Fatal(err)
}
@@ -77,13 +86,12 @@ func TestStateByRoot_HotStateCached(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
r := [32]byte{'A'}
if err := service.beaconDB.SaveStateSummary(ctx, &pb.StateSummary{
Root: r[:],
BoundaryRoot: r[:],
Root: r[:],
}); err != nil {
t.Fatal(err)
}
@@ -103,21 +111,26 @@ func TestStateBySlot_ColdState(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
service.slotsPerArchivedPoint = params.BeaconConfig().SlotsPerEpoch * 2
service.splitInfo.slot = service.slotsPerArchivedPoint + 1
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
beaconState.SetSlot(1)
b := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 1}}
if err := db.SaveBlock(ctx, b); err != nil {
t.Fatal(err)
}
bRoot, _ := ssz.HashTreeRoot(b.Block)
if err := db.SaveState(ctx, beaconState, bRoot); err != nil {
t.Fatal(err)
}
db.SaveGenesisBlockRoot(ctx, bRoot)
r := [32]byte{}
if err := service.beaconDB.SaveArchivedPointState(ctx, beaconState, 0); err != nil {
t.Fatal(err)
}
if err := service.beaconDB.SaveArchivedPointRoot(ctx, r, 0); err != nil {
t.Fatal(err)
}
if err := service.beaconDB.SaveArchivedPointState(ctx, beaconState, 1); err != nil {
t.Fatal(err)
}
if err := service.beaconDB.SaveArchivedPointRoot(ctx, r, 1); err != nil {
t.Fatal(err)
}
@@ -138,41 +151,23 @@ func TestStateBySlot_ColdState(t *testing.T) {
}
}
func TestStateBySlot_HotStateCached(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
r := [32]byte{'A'}
service.hotStateCache.Put(r, beaconState)
service.setEpochBoundaryRoot(0, r)
slot := uint64(10)
loadedState, err := service.StateBySlot(ctx, slot)
if err != nil {
t.Fatal(err)
}
if loadedState.Slot() != slot {
t.Error("Did not correctly load state")
}
}
func TestStateBySlot_HotStateDB(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
r := [32]byte{'A'}
service.setEpochBoundaryRoot(0, r)
if err := service.beaconDB.SaveState(ctx, beaconState, r); err != nil {
b := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{}}
if err := db.SaveBlock(ctx, b); err != nil {
t.Fatal(err)
}
bRoot, _ := ssz.HashTreeRoot(b.Block)
if err := db.SaveState(ctx, beaconState, bRoot); err != nil {
t.Fatal(err)
}
db.SaveGenesisBlockRoot(ctx, bRoot)
slot := uint64(10)
loadedState, err := service.StateBySlot(ctx, slot)
@@ -183,3 +178,45 @@ func TestStateBySlot_HotStateDB(t *testing.T) {
t.Error("Did not correctly load state")
}
}
func TestStateSummary_CanGetFromCacheOrDB(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db, cache.NewStateSummaryCache())
r := [32]byte{'a'}
summary := &pb.StateSummary{Slot: 100}
_, err := service.stateSummary(ctx, r)
if err != errUnknownStateSummary {
t.Fatal("Did not get wanted error")
}
service.stateSummaryCache.Put(r, summary)
got, err := service.stateSummary(ctx, r)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(got, summary) {
t.Error("Did not get wanted summary")
}
r = [32]byte{'b'}
summary = &pb.StateSummary{Root: r[:], Slot: 101}
_, err = service.stateSummary(ctx, r)
if err != errUnknownStateSummary {
t.Fatal("Did not get wanted error")
}
if err := service.beaconDB.SaveStateSummary(ctx, summary); err != nil {
t.Fatal(err)
}
got, err = service.stateSummary(ctx, r)
if err != nil {
t.Fatal("Did not get wanted error")
}
if !proto.Equal(got, summary) {
t.Error("Did not get wanted summary")
}
}

View File

@@ -9,7 +9,6 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
@@ -37,20 +36,13 @@ func (s *State) saveHotState(ctx context.Context, blockRoot [32]byte, state *sta
}
// On intermediate slots, save the hot state summary.
epochRoot, err := s.loadEpochBoundaryRoot(ctx, blockRoot, state)
if err != nil {
return errors.Wrap(err, "could not get epoch boundary root to save hot state")
}
if err := s.beaconDB.SaveStateSummary(ctx, &pb.StateSummary{
Slot: state.Slot(),
Root: blockRoot[:],
BoundaryRoot: epochRoot[:],
}); err != nil {
return err
}
s.stateSummaryCache.Put(blockRoot, &pb.StateSummary{
Slot: state.Slot(),
Root: blockRoot[:],
})
// Store the copied state in the cache.
s.hotStateCache.Put(blockRoot, state.Copy())
s.hotStateCache.Put(blockRoot, state)
return nil
}
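
The summary written here stays in memory until finalization; a sketch of the write-then-flush lifecycle this change introduces (all method names appear elsewhere in this diff):

// Per block: record the summary cheaply in the in-memory cache.
s.stateSummaryCache.Put(blockRoot, &pb.StateSummary{Slot: state.Slot(), Root: blockRoot[:]})

// At finalization (see MigrateToCold below): persist everything in one batch, then reset.
if err := s.beaconDB.SaveStateSummaries(ctx, s.stateSummaryCache.GetAll()); err != nil {
	return err
}
s.stateSummaryCache.Clear()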
@@ -62,52 +54,41 @@ func (s *State) loadHotStateByRoot(ctx context.Context, blockRoot [32]byte) (*st
ctx, span := trace.StartSpan(ctx, "stateGen.loadHotStateByRoot")
defer span.End()
// Load the hot state cache.
// Load the hot state from cache.
cachedState := s.hotStateCache.Get(blockRoot)
if cachedState != nil {
return cachedState, nil
}
summary, err := s.beaconDB.StateSummary(ctx, blockRoot)
// Load the hot state from DB.
if s.beaconDB.HasState(ctx, blockRoot) {
return s.beaconDB.State(ctx, blockRoot)
}
summary, err := s.stateSummary(ctx, blockRoot)
if err != nil {
return nil, errors.Wrap(err, "could not get state summary")
}
startState, err := s.lastSavedState(ctx, summary.Slot)
if err != nil {
return nil, err
}
if summary == nil {
return nil, errUnknownStateSummary
if startState == nil {
return nil, errUnknownBoundaryState
}
boundaryState, err := s.beaconDB.State(ctx, bytesutil.ToBytes32(summary.BoundaryRoot))
if err != nil {
return nil, err
}
if boundaryState == nil {
// Boundary state not available, get the last available state and start from there.
// This could happen if users toggle feature flags in between sync.
r, err := s.lastSavedState(ctx, helpers.StartSlot(summary.Slot))
if err != nil {
return nil, err
}
boundaryState, err = s.beaconDB.State(ctx, r)
if err != nil {
return nil, err
}
if boundaryState == nil {
return nil, errUnknownBoundaryState
}
}
// Don't need to replay the blocks if we're already on an epoch boundary,
// the target slot is the same as the state slot.
// No need to replay the blocks if the start state is already the state for the requested block root.
var hotState *state.BeaconState
targetSlot := summary.Slot
if targetSlot == boundaryState.Slot() {
hotState = boundaryState
if targetSlot == startState.Slot() {
hotState = startState
} else {
blks, err := s.LoadBlocks(ctx, boundaryState.Slot()+1, targetSlot, bytesutil.ToBytes32(summary.Root))
blks, err := s.LoadBlocks(ctx, startState.Slot()+1, targetSlot, bytesutil.ToBytes32(summary.Root))
if err != nil {
return nil, errors.Wrap(err, "could not load blocks for hot state using root")
}
hotState, err = s.ReplayBlocks(ctx, boundaryState, blks, targetSlot)
hotState, err = s.ReplayBlocks(ctx, startState, blks, targetSlot)
if err != nil {
return nil, errors.Wrap(err, "could not replay blocks for hot state using root")
}
@@ -127,24 +108,8 @@ func (s *State) loadHotStateBySlot(ctx context.Context, slot uint64) (*state.Bea
ctx, span := trace.StartSpan(ctx, "stateGen.loadHotStateBySlot")
defer span.End()
// Gather epoch boundary information, that is where node starts to replay the blocks.
boundarySlot := helpers.StartSlot(helpers.SlotToEpoch(slot))
boundaryRoot, ok := s.epochBoundaryRoot(boundarySlot)
if !ok {
return nil, errUnknownBoundaryRoot
}
// Try the cache first then try the DB.
boundaryState := s.hotStateCache.Get(boundaryRoot)
var err error
if boundaryState == nil {
boundaryState, err = s.beaconDB.State(ctx, boundaryRoot)
if err != nil {
return nil, err
}
if boundaryState == nil {
return nil, errUnknownBoundaryState
}
}
// Gather the last saved state; that is where the node starts to replay the blocks.
startState, err := s.lastSavedState(ctx, slot)
// Gather the last saved block root and the slot number.
lastValidRoot, lastValidSlot, err := s.lastSavedBlock(ctx, slot)
@@ -153,52 +118,10 @@ func (s *State) loadHotStateBySlot(ctx context.Context, slot uint64) (*state.Bea
}
// Load and replay blocks to get the intermediate state.
replayBlks, err := s.LoadBlocks(ctx, boundaryState.Slot()+1, lastValidSlot, lastValidRoot)
replayBlks, err := s.LoadBlocks(ctx, startState.Slot()+1, lastValidSlot, lastValidRoot)
if err != nil {
return nil, err
}
return s.ReplayBlocks(ctx, boundaryState, replayBlks, slot)
}
// This loads the epoch boundary root of a given state based on the state slot.
// If the epoch boundary does not have a valid root, it then recovers by going
// back to find the last slot before boundary which has a valid block.
func (s *State) loadEpochBoundaryRoot(ctx context.Context, blockRoot [32]byte, state *state.BeaconState) ([32]byte, error) {
ctx, span := trace.StartSpan(ctx, "stateGen.loadEpochBoundaryRoot")
defer span.End()
boundarySlot := helpers.CurrentEpoch(state) * params.BeaconConfig().SlotsPerEpoch
// First checks if epoch boundary root already exists in cache.
r, ok := s.epochBoundarySlotToRoot[boundarySlot]
if ok {
return r, nil
}
// At epoch boundary, return the root which is just itself.
if state.Slot() == boundarySlot {
return blockRoot, nil
}
// Node uses genesis getters if the epoch boundary slot is genesis slot.
if boundarySlot == 0 {
r, err := s.genesisRoot(ctx)
if err != nil {
return [32]byte{}, nil
}
s.setEpochBoundaryRoot(boundarySlot, r)
return r, nil
}
// Now to find the epoch boundary root via DB.
r, _, err := s.lastSavedBlock(ctx, boundarySlot)
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not get last saved block for epoch boundary root")
}
// Set the epoch boundary root cache.
s.setEpochBoundaryRoot(boundarySlot, r)
return r, nil
return s.ReplayBlocks(ctx, startState, replayBlks, slot)
}
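
Condensing the new loadHotStateByRoot flow into a sketch (error handling elided; every call is defined above or elsewhere in this diff):

if st := s.hotStateCache.Get(blockRoot); st != nil {
	return st, nil // 1. in-memory hot state cache
}
if s.beaconDB.HasState(ctx, blockRoot) {
	return s.beaconDB.State(ctx, blockRoot) // 2. state saved directly in DB
}
summary, _ := s.stateSummary(ctx, blockRoot)          // 3. summary from cache or DB
startState, _ := s.lastSavedState(ctx, summary.Slot)  //    closest saved ancestor state
blks, _ := s.LoadBlocks(ctx, startState.Slot()+1, summary.Slot, bytesutil.ToBytes32(summary.Root))
return s.ReplayBlocks(ctx, startState, blks, summary.Slot) // 4. replay to target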

View File

@@ -7,8 +7,11 @@ import (
"github.com/gogo/protobuf/proto"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
//pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
logTest "github.com/sirupsen/logrus/hooks/test"
@@ -19,7 +22,7 @@ func TestSaveHotState_AlreadyHas(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
beaconState.SetSlot(params.BeaconConfig().SlotsPerEpoch)
@@ -46,7 +49,7 @@ func TestSaveHotState_CanSaveOnEpochBoundary(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
beaconState.SetSlot(params.BeaconConfig().SlotsPerEpoch)
@@ -60,7 +63,7 @@ func TestSaveHotState_CanSaveOnEpochBoundary(t *testing.T) {
if !service.beaconDB.HasState(ctx, r) {
t.Error("Should have saved the state")
}
if !service.beaconDB.HasStateSummary(ctx, r) {
if !service.stateSummaryCache.Has(r) {
t.Error("Should have saved the state summary")
}
testutil.AssertLogsContain(t, hook, "Saved full state on epoch boundary")
@@ -71,7 +74,7 @@ func TestSaveHotState_NoSaveNotEpochBoundary(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
beaconState.SetSlot(params.BeaconConfig().SlotsPerEpoch - 1)
@@ -93,7 +96,7 @@ func TestSaveHotState_NoSaveNotEpochBoundary(t *testing.T) {
if service.beaconDB.HasState(ctx, r) {
t.Error("Should not have saved the state")
}
if !service.beaconDB.HasStateSummary(ctx, r) {
if !service.stateSummaryCache.Has(r) {
t.Error("Should have saved the state summary")
}
testutil.AssertLogsDoNotContain(t, hook, "Saved full state on epoch boundary")
@@ -103,7 +106,7 @@ func TestLoadHoteStateByRoot_Cached(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
r := [32]byte{'A'}
@@ -124,25 +127,26 @@ func TestLoadHoteStateByRoot_FromDBCanProcess(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
boundaryRoot := [32]byte{'A'}
blkRoot := [32]byte{'B'}
if err := service.beaconDB.SaveState(ctx, beaconState, boundaryRoot); err != nil {
blk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{}}
blkRoot, _ := ssz.HashTreeRoot(blk.Block)
service.beaconDB.SaveGenesisBlockRoot(ctx, blkRoot)
if err := service.beaconDB.SaveState(ctx, beaconState, blkRoot); err != nil {
t.Fatal(err)
}
targetSlot := uint64(10)
targetRoot := [32]byte{'a'}
if err := service.beaconDB.SaveStateSummary(ctx, &pb.StateSummary{
Slot: targetSlot,
Root: blkRoot[:],
BoundaryRoot: boundaryRoot[:],
Slot: targetSlot,
Root: targetRoot[:],
}); err != nil {
t.Fatal(err)
}
// This tests the case where the hot state was not cached and needs processing.
loadedState, err := service.loadHotStateByRoot(ctx, blkRoot)
loadedState, err := service.loadHotStateByRoot(ctx, targetRoot)
if err != nil {
t.Fatal(err)
}
@@ -156,25 +160,26 @@ func TestLoadHoteStateByRoot_FromDBBoundaryCase(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
boundaryRoot := [32]byte{'A'}
if err := service.beaconDB.SaveState(ctx, beaconState, boundaryRoot); err != nil {
blk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{}}
blkRoot, _ := ssz.HashTreeRoot(blk.Block)
service.beaconDB.SaveGenesisBlockRoot(ctx, blkRoot)
if err := service.beaconDB.SaveState(ctx, beaconState, blkRoot); err != nil {
t.Fatal(err)
}
targetSlot := uint64(0)
if err := service.beaconDB.SaveStateSummary(ctx, &pb.StateSummary{
Slot: targetSlot,
Root: boundaryRoot[:],
BoundaryRoot: boundaryRoot[:],
Slot: targetSlot,
Root: blkRoot[:],
}); err != nil {
t.Fatal(err)
}
// This tests the case where the hot state was not cached but doesn't need processing
// because it is on the epoch boundary slot.
loadedState, err := service.loadHotStateByRoot(ctx, boundaryRoot)
loadedState, err := service.loadHotStateByRoot(ctx, blkRoot)
if err != nil {
t.Fatal(err)
}
@@ -184,35 +189,21 @@ func TestLoadHoteStateByRoot_FromDBBoundaryCase(t *testing.T) {
}
}
func TestLoadHoteStateBySlot_CanAdvanceSlotUsingCache(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
r := [32]byte{'A'}
service.hotStateCache.Put(r, beaconState)
service.setEpochBoundaryRoot(0, r)
slot := uint64(10)
loadedState, err := service.loadHotStateBySlot(ctx, slot)
if err != nil {
t.Fatal(err)
}
if loadedState.Slot() != slot {
t.Error("Did not correctly load state")
}
}
func TestLoadHoteStateBySlot_CanAdvanceSlotUsingDB(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
r := [32]byte{'A'}
service.setEpochBoundaryRoot(0, r)
if err := service.beaconDB.SaveState(ctx, beaconState, r); err != nil {
b := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{}}
if err := service.beaconDB.SaveBlock(ctx, b); err != nil {
t.Fatal(err)
}
gRoot, _ := ssz.HashTreeRoot(b.Block)
if err := service.beaconDB.SaveGenesisBlockRoot(ctx, gRoot); err != nil {
t.Fatal(err)
}
if err := service.beaconDB.SaveState(ctx, beaconState, gRoot); err != nil {
t.Fatal(err)
}
@@ -225,92 +216,3 @@ func TestLoadHoteStateBySlot_CanAdvanceSlotUsingDB(t *testing.T) {
t.Error("Did not correctly load state")
}
}
func TestLoadEpochBoundaryRoot_Exists(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
r := [32]byte{'a'}
service.setEpochBoundaryRoot(params.BeaconConfig().SlotsPerEpoch, r)
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
boundaryRoot, err := service.loadEpochBoundaryRoot(ctx, r, beaconState)
if err != nil {
t.Fatal(err)
}
if r != boundaryRoot {
t.Error("Did not correctly load boundary root")
}
}
func TestLoadEpochBoundaryRoot_SameSlot(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
r := [32]byte{'a'}
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
beaconState.SetSlot(params.BeaconConfig().SlotsPerEpoch)
boundaryRoot, err := service.loadEpochBoundaryRoot(ctx, r, beaconState)
if err != nil {
t.Fatal(err)
}
if r != boundaryRoot {
t.Error("Did not correctly load boundary root")
}
}
func TestLoadEpochBoundaryRoot_Genesis(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
r := [32]byte{'a'}
b := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{}}
if err := db.SaveBlock(ctx, b); err != nil {
t.Fatal(err)
}
gRoot, _ := ssz.HashTreeRoot(b.Block)
if err := db.SaveGenesisBlockRoot(ctx, gRoot); err != nil {
t.Fatal(err)
}
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
beaconState.SetSlot(1)
boundaryRoot, err := service.loadEpochBoundaryRoot(ctx, r, beaconState)
if err != nil {
t.Fatal(err)
}
if boundaryRoot != gRoot {
t.Error("Did not correctly load boundary root")
}
}
func TestLoadEpochBoundaryRoot_LastSavedBlock(t *testing.T) {
ctx := context.Background()
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
b1 := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: service.splitInfo.slot + 5}}
if err := service.beaconDB.SaveBlock(ctx, b1); err != nil {
t.Fatal(err)
}
b1Root, _ := ssz.HashTreeRoot(b1.Block)
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
beaconState.SetSlot(params.BeaconConfig().SlotsPerEpoch + 10)
boundaryRoot, err := service.loadEpochBoundaryRoot(ctx, [32]byte{}, beaconState)
if err != nil {
t.Fatal(err)
}
if boundaryRoot != b1Root {
t.Error("Did not correctly load boundary root")
}
}

View File

@@ -6,7 +6,6 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/db/filters"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
@@ -15,22 +14,28 @@ import (
// MigrateToCold advances the split point in between the cold and hot state sections.
// It moves the recent finalized states from the hot section to the cold section and
// only preserves the ones that are on archived points.
func (s *State) MigrateToCold(ctx context.Context, finalizedState *state.BeaconState, finalizedRoot [32]byte) error {
func (s *State) MigrateToCold(ctx context.Context, finalizedSlot uint64, finalizedRoot [32]byte) error {
ctx, span := trace.StartSpan(ctx, "stateGen.MigrateToCold")
defer span.End()
// Verify migration is sensible. The new finalized point must increase the current split slot and
// fall on an epoch boundary for the hot state summary scheme to work.
currentSplitSlot := s.splitInfo.slot
if currentSplitSlot > finalizedState.Slot() {
if currentSplitSlot > finalizedSlot {
return nil
}
if !helpers.IsEpochStart(finalizedState.Slot()) {
if !helpers.IsEpochStart(finalizedSlot) {
return nil
}
// Migrate all state summary objects from cache to DB.
if err := s.beaconDB.SaveStateSummaries(ctx, s.stateSummaryCache.GetAll()); err != nil {
return err
}
s.stateSummaryCache.Clear()
// Move the states between split slot to finalized slot from hot section to the cold section.
filter := filters.NewFilter().SetStartSlot(currentSplitSlot).SetEndSlot(finalizedState.Slot() - 1)
filter := filters.NewFilter().SetStartSlot(currentSplitSlot).SetEndSlot(finalizedSlot - 1)
blockRoots, err := s.beaconDB.BlockRoots(ctx, filter)
if err != nil {
return err
@@ -46,22 +51,13 @@ func (s *State) MigrateToCold(ctx context.Context, finalizedState *state.BeaconS
}
archivedPointIndex := stateSummary.Slot / s.slotsPerArchivedPoint
alreadyArchived := s.beaconDB.HasArchivedPoint(ctx, archivedPointIndex)
if stateSummary.Slot%s.slotsPerArchivedPoint == 0 && !alreadyArchived {
if s.beaconDB.HasState(ctx, r) {
hotState, err := s.beaconDB.State(ctx, r)
if stateSummary.Slot%s.slotsPerArchivedPoint == 0 {
if !s.beaconDB.HasState(ctx, r) {
recoveredArchivedState, err := s.ComputeStateUpToSlot(ctx, stateSummary.Slot)
if err != nil {
return err
}
if err := s.beaconDB.SaveArchivedPointState(ctx, hotState.Copy(), archivedPointIndex); err != nil {
return err
}
} else {
hotState, err := s.ComputeStateUpToSlot(ctx, stateSummary.Slot)
if err != nil {
return err
}
if err := s.beaconDB.SaveArchivedPointState(ctx, hotState.Copy(), archivedPointIndex); err != nil {
if err := s.beaconDB.SaveState(ctx, recoveredArchivedState.Copy(), r); err != nil {
return err
}
}
@@ -76,26 +72,24 @@ func (s *State) MigrateToCold(ctx context.Context, finalizedState *state.BeaconS
"archiveIndex": archivedPointIndex,
"root": hex.EncodeToString(bytesutil.Trunc(r[:])),
}).Info("Saved archived point during state migration")
}
// Do not delete the current finalized state in case user wants to
// switch back to old state service, deleting the recent finalized state
// could cause issue switching back.
if s.beaconDB.HasState(ctx, r) && r != finalizedRoot {
if err := s.beaconDB.DeleteState(ctx, r); err != nil {
return err
} else {
// Do not delete the current finalized state in case the user wants to
// switch back to the old state service; deleting the recent finalized state
// could cause issues when switching back.
if s.beaconDB.HasState(ctx, r) && r != finalizedRoot {
if err := s.beaconDB.DeleteState(ctx, r); err != nil {
return err
}
log.WithFields(logrus.Fields{
"slot": stateSummary.Slot,
"root": hex.EncodeToString(bytesutil.Trunc(r[:])),
}).Info("Deleted state during migration")
}
log.WithFields(logrus.Fields{
"slot": stateSummary.Slot,
"root": hex.EncodeToString(bytesutil.Trunc(r[:])),
}).Info("Deleted state during migration")
}
s.deleteEpochBoundaryRoot(stateSummary.Slot)
}
// Update the split slot and root.
s.splitInfo = &splitSlotAndRoot{slot: finalizedState.Slot(), root: finalizedRoot}
s.splitInfo = &splitSlotAndRoot{slot: finalizedSlot, root: finalizedRoot}
log.WithFields(logrus.Fields{
"slot": s.splitInfo.slot,
"root": hex.EncodeToString(bytesutil.Trunc(s.splitInfo.root[:])),

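With the slimmer signature, callers pass just the finalized slot and root; a sketch of a call site (finalizedEpoch and finalizedRoot are placeholder values, not names from this diff):

finalizedSlot := helpers.StartSlot(finalizedEpoch) // must be an epoch start, or MigrateToCold no-ops
if err := s.MigrateToCold(ctx, finalizedSlot, finalizedRoot); err != nil {
	return err
}
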
View File

@@ -6,6 +6,7 @@ import (
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/params"
@@ -19,11 +20,9 @@ func TestMigrateToCold_NoBlock(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
beaconState.SetSlot(params.BeaconConfig().SlotsPerEpoch)
if err := service.MigrateToCold(ctx, beaconState, [32]byte{}); err != nil {
if err := service.MigrateToCold(ctx, params.BeaconConfig().SlotsPerEpoch, [32]byte{}); err != nil {
t.Fatal(err)
}
@@ -36,12 +35,9 @@ func TestMigrateToCold_HigherSplitSlot(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
service.splitInfo.slot = 2
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
beaconState.SetSlot(1)
if err := service.MigrateToCold(ctx, beaconState, [32]byte{}); err != nil {
if err := service.MigrateToCold(ctx, 1, [32]byte{}); err != nil {
t.Fatal(err)
}
@@ -54,11 +50,8 @@ func TestMigrateToCold_NotEpochStart(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
beaconState.SetSlot(params.BeaconConfig().SlotsPerEpoch + 1)
if err := service.MigrateToCold(ctx, beaconState, [32]byte{}); err != nil {
service := New(db, cache.NewStateSummaryCache())
if err := service.MigrateToCold(ctx, params.BeaconConfig().SlotsPerEpoch+1, [32]byte{}); err != nil {
t.Fatal(err)
}
@@ -71,7 +64,8 @@ func TestMigrateToCold_MigrationCompletes(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
service.slotsPerArchivedPoint = 2
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
beaconState.SetSlot(params.BeaconConfig().SlotsPerEpoch)
@@ -88,9 +82,24 @@ func TestMigrateToCold_MigrationCompletes(t *testing.T) {
if err := service.beaconDB.SaveState(ctx, beaconState, bRoot); err != nil {
t.Fatal(err)
}
service.slotsPerArchivedPoint = 2 // Ensure we can land on archived point.
if err := service.MigrateToCold(ctx, beaconState, [32]byte{}); err != nil {
newBeaconState, _ := testutil.DeterministicGenesisState(t, 32)
newBeaconState.SetSlot(3)
b = &ethpb.SignedBeaconBlock{
Block: &ethpb.BeaconBlock{Slot: 3},
}
if err := service.beaconDB.SaveBlock(ctx, b); err != nil {
t.Fatal(err)
}
bRoot, _ = ssz.HashTreeRoot(b.Block)
if err := service.beaconDB.SaveStateSummary(ctx, &pb.StateSummary{Root: bRoot[:], Slot: 3}); err != nil {
t.Fatal(err)
}
if err := service.beaconDB.SaveState(ctx, newBeaconState, bRoot); err != nil {
t.Fatal(err)
}
if err := service.MigrateToCold(ctx, beaconState.Slot(), [32]byte{}); err != nil {
t.Fatal(err)
}

View File

@@ -13,7 +13,6 @@ import (
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/params"
"go.opencensus.io/trace"
)
@@ -35,14 +34,10 @@ func (s *State) ComputeStateUpToSlot(ctx context.Context, targetSlot uint64) (*s
return nil, errors.Wrap(err, "could not get last saved block")
}
lastBlockRootForState, err := s.lastSavedState(ctx, targetSlot)
lastState, err := s.lastSavedState(ctx, targetSlot)
if err != nil {
return nil, errors.Wrap(err, "could not get last valid state")
}
lastState, err := s.beaconDB.State(ctx, lastBlockRootForState)
if err != nil {
return nil, err
}
if lastState == nil {
return nil, errUnknownState
}
@@ -249,64 +244,52 @@ func (s *State) lastSavedBlock(ctx context.Context, slot uint64) ([32]byte, uint
return gRoot, 0, nil
}
// Lower bound set as last archived slot is a reasonable assumption given
// block is saved at an archived point.
filter := filters.NewFilter().SetStartSlot(s.splitInfo.slot).SetEndSlot(slot)
rs, err := s.beaconDB.BlockRoots(ctx, filter)
lastSaved, err := s.beaconDB.HighestSlotBlocksBelow(ctx, slot+1)
if err != nil {
return [32]byte{}, 0, err
}
if len(rs) == 0 {
// Return zero hash if there hasn't been any block in the DB yet.
return params.BeaconChainConfig{}.ZeroHash, 0, nil
}
lastRoot := rs[len(rs)-1]
b, err := s.beaconDB.Block(ctx, lastRoot)
if err != nil {
return [32]byte{}, 0, err
}
if b == nil || b.Block == nil {
return [32]byte{}, 0, errUnknownBlock
}
return lastRoot, b.Block.Slot, nil
// Given this is used to query canonical blocks, there should only be one saved canonical block for a given slot.
if len(lastSaved) != 1 {
return [32]byte{}, 0, fmt.Errorf("expected 1 saved block at the highest slot, got %d", len(lastSaved))
}
if lastSaved[0] == nil || lastSaved[0].Block == nil {
return [32]byte{}, 0, errUnknownBlock
}
r, err := ssz.HashTreeRoot(lastSaved[0].Block)
if err != nil {
return [32]byte{}, 0, err
}
return r, lastSaved[0].Block.Slot, nil
}
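
Note the off-by-one in the query above: HighestSlotBlocksBelow is exclusive of its argument, so passing slot+1 makes the search inclusive of slot itself, i.e.:

// Blocks strictly below slot+1 == blocks at or below slot.
lastSaved, err := s.beaconDB.HighestSlotBlocksBelow(ctx, slot+1)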
// This finds the last saved state in DB by searching backwards from the input slot;
// it returns the state itself.
// This is used by both hot and cold state management.
func (s *State) lastSavedState(ctx context.Context, slot uint64) ([32]byte, error) {
func (s *State) lastSavedState(ctx context.Context, slot uint64) (*state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "stateGen.lastSavedState")
defer span.End()
// Handle the genesis case where the input slot is 0.
if slot == 0 {
gRoot, err := s.genesisRoot(ctx)
if err != nil {
return [32]byte{}, err
}
return gRoot, nil
return s.beaconDB.GenesisState(ctx)
}
// Lower bound set as last archived slot is a reasonable assumption given
// state is saved at an archived point.
filter := filters.NewFilter().SetStartSlot(s.splitInfo.slot).SetEndSlot(slot)
rs, err := s.beaconDB.BlockRoots(ctx, filter)
lastSaved, err := s.beaconDB.HighestSlotStatesBelow(ctx, slot+1)
if err != nil {
return [32]byte{}, err
return nil, errUnknownState
}
if len(rs) == 0 {
// Return zero hash if there hasn't been any block in the DB yet.
return params.BeaconChainConfig{}.ZeroHash, nil
// Given this is used to query canonical states, there should only be one saved canonical state for a given slot.
if len(lastSaved) != 1 {
return nil, fmt.Errorf("expected 1 saved state at the highest slot, got %d", len(lastSaved))
}
for i := len(rs) - 1; i >= 0; i-- {
// Stop until a state is saved.
if s.beaconDB.HasState(ctx, rs[i]) {
return rs[i], nil
}
if lastSaved[0] == nil {
return nil, errUnknownState
}
return [32]byte{}, errUnknownState
return lastSaved[0], nil
}
// This returns the genesis root.

View File

@@ -8,6 +8,7 @@ import (
"github.com/gogo/protobuf/proto"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
@@ -23,7 +24,7 @@ func TestComputeStateUpToSlot_GenesisState(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
gBlk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{}}
gRoot, err := ssz.HashTreeRoot(gBlk.Block)
@@ -56,7 +57,7 @@ func TestComputeStateUpToSlot_CanProcessUpTo(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
gBlk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{}}
gRoot, err := ssz.HashTreeRoot(gBlk.Block)
@@ -66,6 +67,9 @@ func TestComputeStateUpToSlot_CanProcessUpTo(t *testing.T) {
if err := service.beaconDB.SaveBlock(ctx, gBlk); err != nil {
t.Fatal(err)
}
if err := service.beaconDB.SaveGenesisBlockRoot(ctx, gRoot); err != nil {
t.Fatal(err)
}
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
if err := service.beaconDB.SaveState(ctx, beaconState, gRoot); err != nil {
t.Fatal(err)
@@ -106,7 +110,7 @@ func TestReplayBlocks_AllSkipSlots(t *testing.T) {
beaconState.SetCurrentJustifiedCheckpoint(cp)
beaconState.SetCurrentEpochAttestations([]*pb.PendingAttestation{})
service := New(db)
service := New(db, cache.NewStateSummaryCache())
targetSlot := params.BeaconConfig().SlotsPerEpoch - 1
newState, err := service.ReplayBlocks(context.Background(), beaconState, []*ethpb.SignedBeaconBlock{}, targetSlot)
if err != nil {
@@ -142,7 +146,7 @@ func TestReplayBlocks_SameSlot(t *testing.T) {
beaconState.SetCurrentJustifiedCheckpoint(cp)
beaconState.SetCurrentEpochAttestations([]*pb.PendingAttestation{})
service := New(db)
service := New(db, cache.NewStateSummaryCache())
targetSlot := beaconState.Slot()
newState, err := service.ReplayBlocks(context.Background(), beaconState, []*ethpb.SignedBeaconBlock{}, targetSlot)
if err != nil {
@@ -424,17 +428,8 @@ func TestLastSavedBlock_NoSavedBlock(t *testing.T) {
splitInfo: &splitSlotAndRoot{slot: 128},
}
b1 := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 127}}
if err := s.beaconDB.SaveBlock(ctx, b1); err != nil {
t.Fatal(err)
}
r, slot, err := s.lastSavedBlock(ctx, s.splitInfo.slot+1)
if err != nil {
t.Fatal(err)
}
if slot != 0 || r != params.BeaconConfig().ZeroHash {
t.Error("Did not get no saved block info")
if _, _, err := s.lastSavedBlock(ctx, s.splitInfo.slot+1); err != errUnknownBlock {
t.Error("Did not get wanted error")
}
}
@@ -498,11 +493,11 @@ func TestLastSavedState_CanGet(t *testing.T) {
t.Fatal(err)
}
savedRoot, err := s.lastSavedState(ctx, s.splitInfo.slot+100)
savedState, err := s.lastSavedState(ctx, s.splitInfo.slot+100)
if err != nil {
t.Fatal(err)
}
if savedRoot != b2Root {
if !proto.Equal(st.InnerStateUnsafe(), savedState.InnerStateUnsafe()) {
t.Error("Did not save correct root")
}
}
@@ -521,12 +516,9 @@ func TestLastSavedState_NoSavedBlockState(t *testing.T) {
t.Fatal(err)
}
r, err := s.lastSavedState(ctx, s.splitInfo.slot+1)
if err != nil {
t.Fatal(err)
}
if r != params.BeaconConfig().ZeroHash {
t.Error("Did not get no saved block info")
_, err := s.lastSavedState(ctx, s.splitInfo.slot+1)
if err != errUnknownState {
t.Error("Did not get wanted error")
}
}

View File

@@ -8,7 +8,6 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/db"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/params"
"go.opencensus.io/trace"
)
@@ -24,6 +23,7 @@ type State struct {
epochBoundaryLock sync.RWMutex
hotStateCache *cache.HotStateCache
splitInfo *splitSlotAndRoot
stateSummaryCache *cache.StateSummaryCache
}
// This tracks the split point. The point where slot and the block root of
@@ -34,25 +34,28 @@ type splitSlotAndRoot struct {
}
// New returns a new state management object.
func New(db db.NoHeadAccessDatabase) *State {
func New(db db.NoHeadAccessDatabase, stateSummaryCache *cache.StateSummaryCache) *State {
return &State{
beaconDB: db,
epochBoundarySlotToRoot: make(map[uint64][32]byte),
hotStateCache: cache.NewHotStateCache(),
splitInfo: &splitSlotAndRoot{slot: 0, root: params.BeaconConfig().ZeroHash},
slotsPerArchivedPoint: archivedInterval,
stateSummaryCache: stateSummaryCache,
}
}
// Resume resumes the state management object from the previously saved finalized checkpoint in the DB.
func (s *State) Resume(ctx context.Context, lastArchivedRoot [32]byte) (*state.BeaconState, error) {
func (s *State) Resume(ctx context.Context) (*state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "stateGen.Resume")
defer span.End()
lastArchivedState, err := s.beaconDB.LastArchivedIndexState(ctx)
lastArchivedRoot := s.beaconDB.LastArchivedIndexRoot(ctx)
lastArchivedState, err := s.beaconDB.State(ctx, lastArchivedRoot)
if err != nil {
return nil, err
}
// Resume as genesis state if there's no last archived state.
if lastArchivedState == nil {
return s.beaconDB.GenesisState(ctx)
@@ -60,17 +63,11 @@ func (s *State) Resume(ctx context.Context, lastArchivedRoot [32]byte) (*state.B
s.splitInfo = &splitSlotAndRoot{slot: lastArchivedState.Slot(), root: lastArchivedRoot}
if err := s.beaconDB.SaveStateSummary(ctx,
&pb.StateSummary{Slot: lastArchivedState.Slot(), Root: lastArchivedRoot[:], BoundaryRoot: lastArchivedRoot[:]}); err != nil {
return nil, err
}
// In case the finalized state slot was skipped.
slot := lastArchivedState.Slot()
if !helpers.IsEpochStart(slot) {
slot = helpers.StartSlot(helpers.SlotToEpoch(slot) + 1)
}
s.setEpochBoundaryRoot(slot, lastArchivedRoot)
return lastArchivedState, nil
}
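
A sketch of startup wiring under the new Resume signature (db is assumed to be an open beacon DB handle):

svc := New(db, cache.NewStateSummaryCache())
headState, err := svc.Resume(ctx)
if err != nil {
	return err
}
// headState is the last archived state, or the genesis state if nothing was archived yet.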

View File

@@ -5,6 +5,7 @@ import (
"testing"
"github.com/gogo/protobuf/proto"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
@@ -15,15 +16,15 @@ func TestResume(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
root := [32]byte{'A'}
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
beaconState.SetSlot(params.BeaconConfig().SlotsPerEpoch - 2)
service.beaconDB.SaveArchivedPointState(ctx, beaconState, 1)
beaconState.SetSlot(params.BeaconConfig().SlotsPerEpoch)
service.beaconDB.SaveState(ctx, beaconState, root)
service.beaconDB.SaveArchivedPointRoot(ctx, root, 1)
service.beaconDB.SaveLastArchivedIndex(ctx, 1)
resumeState, err := service.Resume(ctx, root)
resumeState, err := service.Resume(ctx)
if err != nil {
t.Fatal(err)
}
@@ -31,11 +32,11 @@ func TestResume(t *testing.T) {
if !proto.Equal(beaconState.InnerStateUnsafe(), resumeState.InnerStateUnsafe()) {
t.Error("Diff saved state")
}
if !service.beaconDB.HasStateSummary(ctx, root) {
t.Error("Did not save state summary")
if service.splitInfo.slot != params.BeaconConfig().SlotsPerEpoch {
t.Errorf("Did not get watned slot")
}
if cachedRoot, _ := service.epochBoundaryRoot(params.BeaconConfig().SlotsPerEpoch); cachedRoot != root {
t.Error("Did not save boundary root")
if root != service.splitInfo.root {
t.Errorf("Did not get wanted root")
}
}

View File

@@ -4,7 +4,8 @@ import (
"context"
"testing"
"github.com/gogo/protobuf/proto"
//"github.com/gogo/protobuf/proto"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
testDB "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
@@ -17,7 +18,7 @@ func TestSaveState_ColdStateCanBeSaved(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
service.slotsPerArchivedPoint = 1
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
@@ -39,14 +40,6 @@ func TestSaveState_ColdStateCanBeSaved(t *testing.T) {
t.Error("Did not get wanted root")
}
receivedState, err := service.beaconDB.ArchivedPointState(ctx, 1)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(receivedState.InnerStateUnsafe(), beaconState.InnerStateUnsafe()) {
t.Error("Did not get wanted state")
}
testutil.AssertLogsContain(t, hook, "Saved full state on archived point")
}
@@ -56,7 +49,7 @@ func TestSaveState_HotStateCanBeSaved(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
service.slotsPerArchivedPoint = 1
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
// This goes to hot section, verify it can save on epoch boundary.
@@ -71,7 +64,7 @@ func TestSaveState_HotStateCanBeSaved(t *testing.T) {
if !service.beaconDB.HasState(ctx, r) {
t.Error("Should have saved the state")
}
if !service.beaconDB.HasStateSummary(ctx, r) {
if !service.stateSummaryCache.Has(r) {
t.Error("Should have saved the state summary")
}
testutil.AssertLogsContain(t, hook, "Saved full state on epoch boundary")
@@ -83,7 +76,7 @@ func TestSaveState_HotStateCached(t *testing.T) {
db := testDB.SetupDB(t)
defer testDB.TeardownDB(t, db)
service := New(db)
service := New(db, cache.NewStateSummaryCache())
service.slotsPerArchivedPoint = 1
beaconState, _ := testutil.DeterministicGenesisState(t, 32)
beaconState.SetSlot(params.BeaconConfig().SlotsPerEpoch)

View File

@@ -8,6 +8,7 @@ go_library(
"blocks.go",
"hash_function.go",
"helpers.go",
"merkleize.go",
"state_root.go",
"trie_helpers.go",
"validators.go",
@@ -27,15 +28,16 @@ go_library(
"@com_github_dgraph_io_ristretto//:go_default_library",
"@com_github_minio_sha256_simd//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_protolambda_zssz//merkle:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"blocks_test.go",
"state_root_cache_fuzz_test.go",
"state_root_test.go",
"trie_helpers_test.go",
@@ -45,6 +47,7 @@ go_test(
"//proto/beacon/p2p/v1:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/featureconfig:go_default_library",
"//shared/hashutil:go_default_library",
"//shared/interop:go_default_library",
"//shared/params:go_default_library",
"//shared/testutil:go_default_library",
@@ -53,3 +56,28 @@ go_test(
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
],
)
go_test(
name = "go_benchmark_test",
size = "medium",
srcs = ["benchmark_test.go"],
args = [
"-test.bench=.",
"-test.benchmem",
"-test.v",
],
local = True,
tags = [
"benchmark",
"manual",
"no-cache",
],
deps = [
"//beacon-chain/state/stateutil:go_default_library",
"//shared/hashutil:go_default_library",
"//shared/testutil:go_default_library",
"@com_github_protolambda_zssz//merkle:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
],
)
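
Since this rule is tagged manual and benchmark, ordinary test runs skip it; assuming the BUILD file lives under beacon-chain/state/stateutil, it can be invoked explicitly with something like bazel test //beacon-chain/state/stateutil:go_benchmark_test.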

View File

@@ -5,7 +5,6 @@ import (
"errors"
"sync"
"github.com/protolambda/zssz/merkle"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/hashutil"
)
@@ -29,7 +28,7 @@ func (h *stateRootHasher) arraysRoot(input [][]byte, length uint64, fieldName st
hashFunc := hashutil.CustomSHA256Hasher()
lock.Lock()
if _, ok := layersCache[fieldName]; !ok && h.rootsCache != nil {
depth := merkle.GetDepth(length)
depth := GetDepth(length)
layersCache[fieldName] = make([][][32]byte, depth+1)
}
lock.Unlock()
@@ -101,7 +100,7 @@ func (h *stateRootHasher) merkleizeWithCache(leaves [][32]byte, length uint64,
return root
}
hashLayer := leaves
layers := make([][][32]byte, merkle.GetDepth(length)+1)
layers := make([][][32]byte, GetDepth(length)+1)
if items, ok := layersCache[fieldName]; ok && h.rootsCache != nil {
if len(items[0]) == len(leaves) {
layers = items

View File

@@ -9,6 +9,7 @@ import (
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/hashutil"
"github.com/prysmaticlabs/prysm/shared/params"
)
@@ -24,16 +25,16 @@ func EpochAttestationsRoot(atts []*pb.PendingAttestation) ([32]byte, error) {
// PendingAttestationRoot computes the hash tree root of a pending attestation.
func PendingAttestationRoot(att *pb.PendingAttestation) ([32]byte, error) {
func PendingAttestationRoot(hasher HashFn, att *pb.PendingAttestation) ([32]byte, error) {
fieldRoots := [][32]byte{}
if att != nil {
// Bitfield.
aggregationRoot, err := bitlistRoot(att.AggregationBits, 2048)
aggregationRoot, err := bitlistRoot(hasher, att.AggregationBits, params.BeaconConfig().MaxValidatorsPerCommittee)
if err != nil {
return [32]byte{}, err
}
// Attestation data.
attDataRoot, err := attestationDataRoot(att.Data)
attDataRoot, err := attestationDataRoot(hasher, att.Data)
if err != nil {
return [32]byte{}, err
}
@@ -49,7 +50,7 @@ func PendingAttestationRoot(att *pb.PendingAttestation) ([32]byte, error) {
fieldRoots = [][32]byte{aggregationRoot, attDataRoot, inclusionRoot, proposerRoot}
}
return bitwiseMerkleizeArrays(fieldRoots, uint64(len(fieldRoots)), uint64(len(fieldRoots)))
return bitwiseMerkleizeArrays(hasher, fieldRoots, uint64(len(fieldRoots)), uint64(len(fieldRoots)))
}
func marshalAttestationData(data *ethpb.AttestationData) []byte {
@@ -88,7 +89,67 @@ func marshalAttestationData(data *ethpb.AttestationData) []byte {
return enc
}
func attestationDataRoot(data *ethpb.AttestationData) ([32]byte, error) {
func attestationRoot(hasher HashFn, att *ethpb.Attestation) ([32]byte, error) {
fieldRoots := make([][32]byte, 3)
// Bitfield.
aggregationRoot, err := bitlistRoot(hasher, att.AggregationBits, params.BeaconConfig().MaxValidatorsPerCommittee)
if err != nil {
return [32]byte{}, err
}
fieldRoots[0] = aggregationRoot
dataRoot, err := attestationDataRoot(hasher, att.Data)
if err != nil {
return [32]byte{}, err
}
fieldRoots[1] = dataRoot
signatureBuf := bytesutil.ToBytes96(att.Signature)
packedSig, err := pack([][]byte{signatureBuf[:]})
if err != nil {
return [32]byte{}, err
}
sigRoot, err := bitwiseMerkleize(hasher, packedSig, uint64(len(packedSig)), uint64(len(packedSig)))
if err != nil {
return [32]byte{}, err
}
fieldRoots[2] = sigRoot
return bitwiseMerkleizeArrays(hasher, fieldRoots, uint64(len(fieldRoots)), uint64(len(fieldRoots)))
}
func blockAttestationRoot(atts []*ethpb.Attestation) ([32]byte, error) {
hasher := hashutil.CustomSHA256Hasher()
roots := make([][]byte, len(atts))
for i := 0; i < len(atts); i++ {
pendingRoot, err := attestationRoot(hasher, atts[i])
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not attestation merkleization")
}
roots[i] = pendingRoot[:]
}
attsRootsRoot, err := bitwiseMerkleize(
hasher,
roots,
uint64(len(roots)),
params.BeaconConfig().MaxAttestations,
)
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not compute block attestations merkleization")
}
attsLenBuf := new(bytes.Buffer)
if err := binary.Write(attsLenBuf, binary.LittleEndian, uint64(len(atts))); err != nil {
return [32]byte{}, errors.Wrap(err, "could not marshal epoch attestations length")
}
// We need to mix in the length of the slice.
attsLenRoot := make([]byte, 32)
copy(attsLenRoot, attsLenBuf.Bytes())
res := mixInLength(attsRootsRoot, attsLenRoot)
return res, nil
}
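The closing step is the standard SSZ length mix-in. A sketch of what mixInLength is assumed to compute, per the SSZ spec (hash of the root concatenated with the little-endian length, zero-padded to 32 bytes; sha256 stands in for whichever SHA-256 implementation is in use):

buf := make([]byte, 64)
copy(buf[:32], attsRootsRoot[:])
binary.LittleEndian.PutUint64(buf[32:40], uint64(len(atts)))
root := sha256.Sum256(buf) // == mixInLength(attsRootsRoot, attsLenRoot)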
func attestationDataRoot(hasher HashFn, data *ethpb.AttestationData) ([32]byte, error) {
fieldRoots := make([][]byte, 5)
if data != nil {
@@ -109,24 +170,24 @@ func attestationDataRoot(data *ethpb.AttestationData) ([32]byte, error) {
fieldRoots[2] = blockRoot[:]
// Source
sourceRoot, err := CheckpointRoot(data.Source)
sourceRoot, err := CheckpointRoot(hasher, data.Source)
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not compute source checkpoint merkleization")
}
fieldRoots[3] = sourceRoot[:]
// Target
targetRoot, err := CheckpointRoot(data.Target)
targetRoot, err := CheckpointRoot(hasher, data.Target)
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not compute target checkpoint merkleization")
}
fieldRoots[4] = targetRoot[:]
}
return bitwiseMerkleize(fieldRoots, uint64(len(fieldRoots)), uint64(len(fieldRoots)))
return bitwiseMerkleize(hasher, fieldRoots, uint64(len(fieldRoots)), uint64(len(fieldRoots)))
}
func (h *stateRootHasher) pendingAttestationRoot(att *pb.PendingAttestation) ([32]byte, error) {
func (h *stateRootHasher) pendingAttestationRoot(hasher HashFn, att *pb.PendingAttestation) ([32]byte, error) {
// Marshal attestation to determine if it exists in the cache.
enc := make([]byte, 2192)
fieldRoots := make([][]byte, 4)
@@ -153,14 +214,14 @@ func (h *stateRootHasher) pendingAttestationRoot(att *pb.PendingAttestation) ([3
}
// Bitfield.
aggregationRoot, err := bitlistRoot(att.AggregationBits, 2048)
aggregationRoot, err := bitlistRoot(hasher, att.AggregationBits, 2048)
if err != nil {
return [32]byte{}, err
}
fieldRoots[0] = aggregationRoot[:]
// Attestation data.
attDataRoot, err := attestationDataRoot(att.Data)
attDataRoot, err := attestationDataRoot(hasher, att.Data)
if err != nil {
return [32]byte{}, err
}
@@ -174,7 +235,7 @@ func (h *stateRootHasher) pendingAttestationRoot(att *pb.PendingAttestation) ([3
proposerRoot := bytesutil.ToBytes32(proposerBuf)
fieldRoots[3] = proposerRoot[:]
}
res, err := bitwiseMerkleize(fieldRoots, uint64(len(fieldRoots)), uint64(len(fieldRoots)))
res, err := bitwiseMerkleize(hasher, fieldRoots, uint64(len(fieldRoots)), uint64(len(fieldRoots)))
if err != nil {
return [32]byte{}, err
}
@@ -185,9 +246,10 @@ func (h *stateRootHasher) pendingAttestationRoot(att *pb.PendingAttestation) ([3
}
func (h *stateRootHasher) epochAttestationsRoot(atts []*pb.PendingAttestation) ([32]byte, error) {
hasher := hashutil.CustomSHA256Hasher()
roots := make([][]byte, len(atts))
for i := 0; i < len(atts); i++ {
pendingRoot, err := h.pendingAttestationRoot(atts[i])
pendingRoot, err := h.pendingAttestationRoot(hasher, atts[i])
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not attestation merkleization")
}
@@ -195,6 +257,7 @@ func (h *stateRootHasher) epochAttestationsRoot(atts []*pb.PendingAttestation) (
}
attsRootsRoot, err := bitwiseMerkleize(
hasher,
roots,
uint64(len(roots)),
params.BeaconConfig().MaxAttestations*params.BeaconConfig().SlotsPerEpoch,

View File

@@ -0,0 +1,90 @@
package stateutil_benchmark
import (
"testing"
"github.com/protolambda/zssz/merkle"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stateutil"
"github.com/prysmaticlabs/prysm/shared/hashutil"
"github.com/prysmaticlabs/prysm/shared/testutil"
)
func BenchmarkBlockHTR(b *testing.B) {
genState, keys := testutil.DeterministicGenesisState(b, 200)
conf := testutil.DefaultBlockGenConfig()
blk, err := testutil.GenerateFullBlock(genState, keys, conf, 10)
if err != nil {
b.Fatal(err)
}
atts := make([]*ethpb.Attestation, 0, 128)
for i := 0; i < 128; i++ {
atts = append(atts, blk.Block.Body.Attestations[0])
}
deposits, _, err := testutil.DeterministicDepositsAndKeys(16)
if err != nil {
b.Fatal(err)
}
blk.Block.Body.Attestations = atts
blk.Block.Body.Deposits = deposits
b.Run("SSZ_HTR", func(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
if _, err := ssz.HashTreeRoot(blk.Block); err != nil {
b.Fatal(err)
}
}
})
b.Run("Custom_SSZ_HTR", func(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
if _, err := stateutil.BlockRoot(blk.Block); err != nil {
b.Fatal(err)
}
}
})
}
func BenchmarkMerkleize(b *testing.B) {
roots := make([][32]byte, 8192)
for i := 0; i < 8192; i++ {
roots[i] = [32]byte{byte(i)}
}
oldMerkleize := func(chunks [][32]byte, count uint64, limit uint64) ([32]byte, error) {
leafIndexer := func(i uint64) []byte {
return chunks[i][:]
}
return merkle.Merkleize(hashutil.CustomSHA256Hasher(), count, limit, leafIndexer), nil
}
newMerkleize := func(chunks [][32]byte, count uint64, limit uint64) ([32]byte, error) {
leafIndexer := func(i uint64) []byte {
return chunks[i][:]
}
return stateutil.Merkleize(stateutil.NewHasherFunc(hashutil.CustomSHA256Hasher()), count, limit, leafIndexer), nil
}
b.Run("Non Buffered Merkleizer", func(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
b.N = 1000
for i := 0; i < b.N; i++ {
_, _ = oldMerkleize(roots, 8192, 8192)
}
})
b.Run("Buffered Merkleizer", func(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
b.N = 1000
for i := 0; i < b.N; i++ {
_, _ = newMerkleize(roots, 8192, 8192)
}
})
}

View File

@@ -6,6 +6,7 @@ import (
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/hashutil"
@@ -29,13 +30,96 @@ func BlockHeaderRoot(header *ethpb.BeaconBlockHeader) ([32]byte, error) {
bodyRoot := bytesutil.ToBytes32(header.BodyRoot)
fieldRoots[3] = bodyRoot[:]
}
return bitwiseMerkleize(fieldRoots, uint64(len(fieldRoots)), uint64(len(fieldRoots)))
return bitwiseMerkleize(hashutil.CustomSHA256Hasher(), fieldRoots, uint64(len(fieldRoots)), uint64(len(fieldRoots)))
}
// BlockRoot returns the block hash tree root of the provided block.
func BlockRoot(blk *ethpb.BeaconBlock) ([32]byte, error) {
if !featureconfig.Get().EnableBlockHTR {
return ssz.HashTreeRoot(blk)
}
fieldRoots := make([][32]byte, 4)
if blk != nil {
headerSlotBuf := make([]byte, 8)
binary.LittleEndian.PutUint64(headerSlotBuf, blk.Slot)
headerSlotRoot := bytesutil.ToBytes32(headerSlotBuf)
fieldRoots[0] = headerSlotRoot
parentRoot := bytesutil.ToBytes32(blk.ParentRoot)
fieldRoots[1] = parentRoot
stateRoot := bytesutil.ToBytes32(blk.StateRoot)
fieldRoots[2] = stateRoot
bodyRoot, err := BlockBodyRoot(blk.Body)
if err != nil {
return [32]byte{}, err
}
fieldRoots[3] = bodyRoot
}
return bitwiseMerkleizeArrays(hashutil.CustomSHA256Hasher(), fieldRoots, uint64(len(fieldRoots)), uint64(len(fieldRoots)))
}
// BlockBodyRoot returns the hash tree root of the block body.
func BlockBodyRoot(body *ethpb.BeaconBlockBody) ([32]byte, error) {
if !featureconfig.Get().EnableBlockHTR {
return ssz.HashTreeRoot(body)
}
hasher := hashutil.CustomSHA256Hasher()
fieldRoots := make([][32]byte, 8)
if body != nil {
rawRandao := bytesutil.ToBytes96(body.RandaoReveal)
packedRandao, err := pack([][]byte{rawRandao[:]})
if err != nil {
return [32]byte{}, err
}
randaoRoot, err := bitwiseMerkleize(hasher, packedRandao, uint64(len(packedRandao)), uint64(len(packedRandao)))
if err != nil {
return [32]byte{}, err
}
fieldRoots[0] = randaoRoot
eth1Root, err := Eth1Root(hasher, body.Eth1Data)
if err != nil {
return [32]byte{}, err
}
fieldRoots[1] = eth1Root
graffitiRoot := bytesutil.ToBytes32(body.Graffiti)
fieldRoots[2] = graffitiRoot
proposerSlashingsRoot, err := ssz.HashTreeRootWithCapacity(body.ProposerSlashings, 16)
if err != nil {
return [32]byte{}, err
}
fieldRoots[3] = proposerSlashingsRoot
attesterSlashingsRoot, err := ssz.HashTreeRootWithCapacity(body.AttesterSlashings, 1)
if err != nil {
return [32]byte{}, err
}
fieldRoots[4] = attesterSlashingsRoot
attsRoot, err := blockAttestationRoot(body.Attestations)
if err != nil {
return [32]byte{}, err
}
fieldRoots[5] = attsRoot
depositRoot, err := ssz.HashTreeRootWithCapacity(body.Deposits, 16)
if err != nil {
return [32]byte{}, err
}
fieldRoots[6] = depositRoot
exitRoot, err := ssz.HashTreeRootWithCapacity(body.VoluntaryExits, 16)
if err != nil {
return [32]byte{}, err
}
fieldRoots[7] = exitRoot
}
return bitwiseMerkleizeArrays(hasher, fieldRoots, uint64(len(fieldRoots)), uint64(len(fieldRoots)))
}
// Eth1Root computes the HashTreeRoot Merkleization of
// an Eth1Data struct according to the eth2
// Simple Serialize specification.
func Eth1Root(eth1Data *ethpb.Eth1Data) ([32]byte, error) {
func Eth1Root(hasher HashFn, eth1Data *ethpb.Eth1Data) ([32]byte, error) {
enc := make([]byte, 0, 96)
fieldRoots := make([][]byte, 3)
for i := 0; i < len(fieldRoots); i++ {
@@ -63,7 +147,7 @@ func Eth1Root(eth1Data *ethpb.Eth1Data) ([32]byte, error) {
}
}
}
root, err := bitwiseMerkleize(fieldRoots, uint64(len(fieldRoots)), uint64(len(fieldRoots)))
root, err := bitwiseMerkleize(hasher, fieldRoots, uint64(len(fieldRoots)), uint64(len(fieldRoots)))
if err != nil {
return [32]byte{}, err
}
@@ -79,8 +163,9 @@ func Eth1Root(eth1Data *ethpb.Eth1Data) ([32]byte, error) {
func Eth1DataVotesRoot(eth1DataVotes []*ethpb.Eth1Data) ([32]byte, error) {
eth1VotesRoots := make([][]byte, 0)
enc := make([]byte, len(eth1DataVotes)*32)
hasher := hashutil.CustomSHA256Hasher()
for i := 0; i < len(eth1DataVotes); i++ {
eth1, err := Eth1Root(eth1DataVotes[i])
eth1, err := Eth1Root(hasher, eth1DataVotes[i])
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not compute eth1data merkleization")
}
@@ -97,7 +182,7 @@ func Eth1DataVotesRoot(eth1DataVotes []*ethpb.Eth1Data) ([32]byte, error) {
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not chunk eth1 votes roots")
}
eth1VotesRootsRoot, err := bitwiseMerkleize(eth1Chunks, uint64(len(eth1Chunks)), params.BeaconConfig().SlotsPerEth1VotingPeriod)
eth1VotesRootsRoot, err := bitwiseMerkleize(hasher, eth1Chunks, uint64(len(eth1Chunks)), params.BeaconConfig().SlotsPerEth1VotingPeriod)
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not compute eth1data votes merkleization")
}

View File

@@ -0,0 +1,43 @@
package stateutil_test
import (
"testing"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stateutil"
"github.com/prysmaticlabs/prysm/shared/testutil"
)
func TestBlockRoot(t *testing.T) {
genState, keys := testutil.DeterministicGenesisState(t, 100)
blk, err := testutil.GenerateFullBlock(genState, keys, testutil.DefaultBlockGenConfig(), 10)
if err != nil {
t.Fatal(err)
}
expectedRoot, err := ssz.HashTreeRoot(blk.Block)
if err != nil {
t.Fatal(err)
}
receivedRoot, err := stateutil.BlockRoot(blk.Block)
if err != nil {
t.Fatal(err)
}
if receivedRoot != expectedRoot {
t.Fatalf("Wanted %#x but got %#x", expectedRoot, receivedRoot)
}
blk, err = testutil.GenerateFullBlock(genState, keys, testutil.DefaultBlockGenConfig(), 100)
if err != nil {
t.Fatal(err)
}
expectedRoot, err = ssz.HashTreeRoot(blk.Block)
if err != nil {
t.Fatal(err)
}
receivedRoot, err = stateutil.BlockRoot(blk.Block)
if err != nil {
t.Fatal(err)
}
if receivedRoot != expectedRoot {
t.Fatalf("Wanted %#x but got %#x", expectedRoot, receivedRoot)
}
}

View File

@@ -2,25 +2,45 @@ package stateutil
import "encoding/binary"
// HashFn describes a hash function.
type HashFn struct {
f func(input []byte) [32]byte
bytesBuffer [64]byte
type HashFn func(input []byte) [32]byte
// Hasher describes an interface through which we can
// perform hash operations on byte arrays, indices, etc.
type Hasher interface {
Hash(a []byte) [32]byte
Combi(a [32]byte, b [32]byte) [32]byte
MixIn(a [32]byte, i uint64) [32]byte
}
// Combi describes a method which merges two 32-byte arrays and hashes
// them.
func (h HashFn) Combi(a [32]byte, b [32]byte) [32]byte {
copy(h.bytesBuffer[:32], a[:])
copy(h.bytesBuffer[32:], b[:])
return h.f(h.bytesBuffer[:])
type HasherFunc struct {
b [64]byte
hashFunc HashFn
}
// MixIn describes a method where we add in the provided
// integer to the end of the byte array and hash it.
func (h HashFn) MixIn(a [32]byte, i uint64) [32]byte {
copy(h.bytesBuffer[:32], a[:])
copy(h.bytesBuffer[32:], make([]byte, 32, 32))
binary.LittleEndian.PutUint64(h.bytesBuffer[32:], i)
return h.f(h.bytesBuffer[:])
// NewHasherFunc is the constructor for the object
// that fulfills the Hasher interface.
func NewHasherFunc(h HashFn) *HasherFunc {
return &HasherFunc{
b: [64]byte{},
hashFunc: h,
}
}
// Hash utilizes the provided hash function for
// the object.
func (h *HasherFunc) Hash(a []byte) [32]byte {
return h.hashFunc(a)
}
func (h *HasherFunc) Combi(a [32]byte, b [32]byte) [32]byte {
copy(h.b[:32], a[:])
copy(h.b[32:], b[:])
return h.Hash(h.b[:])
}
func (h *HasherFunc) MixIn(a [32]byte, i uint64) [32]byte {
copy(h.b[:32], a[:])
copy(h.b[32:], make([]byte, 32, 32))
binary.LittleEndian.PutUint64(h.b[32:], i)
return h.Hash(h.b[:])
}
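For reference, a minimal sketch of how the new Hasher interface is consumed (the crypto/sha256 wrapper here is an illustrative assumption; any func([]byte) [32]byte satisfies HashFn, and the package itself uses hashutil.CustomSHA256Hasher):

package main

import (
	"crypto/sha256"
	"fmt"

	"github.com/prysmaticlabs/prysm/beacon-chain/state/stateutil"
)

func main() {
	// Any function of type func([]byte) [32]byte is assignable to HashFn.
	hasher := stateutil.NewHasherFunc(sha256.Sum256)

	a := hasher.Hash([]byte("left chunk"))
	b := hasher.Hash([]byte("right chunk"))

	// Combi hashes the 64-byte concatenation of two 32-byte roots,
	// reusing the struct's internal buffer instead of allocating.
	parent := hasher.Combi(a, b)

	// MixIn appends a little-endian uint64 (e.g. a list length) to a
	// root and hashes the result, as SSZ does when mixing in lengths.
	fmt.Printf("%#x\n", hasher.MixIn(parent, 2))
}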

View File

@@ -6,16 +6,14 @@ import (
"github.com/minio/sha256-simd"
"github.com/pkg/errors"
"github.com/protolambda/zssz/merkle"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/shared/hashutil"
)
func bitlistRoot(bfield bitfield.Bitfield, maxCapacity uint64) ([32]byte, error) {
func bitlistRoot(hasher HashFn, bfield bitfield.Bitfield, maxCapacity uint64) ([32]byte, error) {
limit := (maxCapacity + 255) / 256
if bfield == nil || bfield.Len() == 0 {
length := make([]byte, 32)
root, err := bitwiseMerkleize([][]byte{}, 0, limit)
root, err := bitwiseMerkleize(hasher, [][]byte{}, 0, limit)
if err != nil {
return [32]byte{}, err
}
@@ -31,7 +29,7 @@ func bitlistRoot(bfield bitfield.Bitfield, maxCapacity uint64) ([32]byte, error)
}
output := make([]byte, 32)
copy(output, buf.Bytes())
root, err := bitwiseMerkleize(chunks, uint64(len(chunks)), limit)
root, err := bitwiseMerkleize(hasher, chunks, uint64(len(chunks)), limit)
if err != nil {
return [32]byte{}, err
}
@@ -42,31 +40,27 @@ func bitlistRoot(bfield bitfield.Bitfield, maxCapacity uint64) ([32]byte, error)
// number of chunks is a power of two, Merkleize the chunks, and return the root.
// Note that merkleize on a single chunk is simply that chunk, i.e. the identity
// when the number of chunks is one.
func bitwiseMerkleize(chunks [][]byte, count uint64, limit uint64) ([32]byte, error) {
func bitwiseMerkleize(hasher HashFn, chunks [][]byte, count uint64, limit uint64) ([32]byte, error) {
if count > limit {
return [32]byte{}, errors.New("merkleizing list that is too large, over limit")
}
hashFn := &HashFn{
f: hashutil.CustomSHA256Hasher(),
}
hashFn := NewHasherFunc(hasher)
leafIndexer := func(i uint64) []byte {
return chunks[i]
}
return merkle.Merkleize(hashFn.f, count, limit, leafIndexer), nil
return Merkleize(hashFn, count, limit, leafIndexer), nil
}
// bitwiseMerkleizeArrays is used when a set of 32-byte root chunks is provided.
func bitwiseMerkleizeArrays(chunks [][32]byte, count uint64, limit uint64) ([32]byte, error) {
func bitwiseMerkleizeArrays(hasher HashFn, chunks [][32]byte, count uint64, limit uint64) ([32]byte, error) {
if count > limit {
return [32]byte{}, errors.New("merkleizing list that is too large, over limit")
}
hashFn := &HashFn{
f: hashutil.CustomSHA256Hasher(),
}
hashFn := NewHasherFunc(hasher)
leafIndexer := func(i uint64) []byte {
return chunks[i][:]
}
return merkle.Merkleize(hashFn.f, count, limit, leafIndexer), nil
return Merkleize(hashFn, count, limit, leafIndexer), nil
}
func pack(serializedItems [][]byte) ([][]byte, error) {

View File

@@ -0,0 +1,198 @@
package stateutil
import (
"github.com/prysmaticlabs/prysm/shared/trieutil"
)
// Merkleize.go is mostly a direct copy of the file of the same name from
// https://github.com/protolambda/zssz/blob/master/merkle/merkleize.go.
// The method is copied rather than imported because we use a
// custom hasher interface for a reduced memory footprint when
// calling 'Merkleize'.
const (
mask0 = ^uint64((1 << (1 << iota)) - 1)
mask1
mask2
mask3
mask4
mask5
)
const (
bit0 = uint8(1 << iota)
bit1
bit2
bit3
bit4
bit5
)
// GetDepth retrieves the appropriate depth for the provided trie size.
func GetDepth(v uint64) (out uint8) {
// bitmagic: binary search through a uint64, offset down by 1 to not round powers of 2 up.
// Then add 1 to get not the index of the first bit, but the length of the bits (the depth of the tree).
// Zero is a special case, it has a 0 depth.
// Example:
// (in out): (0 0), (1 1), (2 1), (3 2), (4 2), (5 3), (6 3), (7 3), (8 3), (9 4)
if v == 0 {
return 0
}
v--
if v&mask5 != 0 {
v >>= bit5
out |= bit5
}
if v&mask4 != 0 {
v >>= bit4
out |= bit4
}
if v&mask3 != 0 {
v >>= bit3
out |= bit3
}
if v&mask2 != 0 {
v >>= bit2
out |= bit2
}
if v&mask1 != 0 {
v >>= bit1
out |= bit1
}
if v&mask0 != 0 {
out |= bit0
}
out++
return
}
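As a quick sanity check on the bit-twiddling above, a hypothetical test mirroring the (in, out) table from the comment (not part of this diff):

package stateutil

import "testing"

func TestGetDepth(t *testing.T) {
	// (in, out) pairs taken directly from the comment on GetDepth.
	cases := map[uint64]uint8{0: 0, 1: 1, 2: 1, 3: 2, 4: 2, 5: 3, 8: 3, 9: 4}
	for in, want := range cases {
		if got := GetDepth(in); got != want {
			t.Errorf("GetDepth(%d) = %d, want %d", in, got, want)
		}
	}
}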
// Merkleize with log(N) space allocation
func Merkleize(hasher Hasher, count uint64, limit uint64, leaf func(i uint64) []byte) (out [32]byte) {
if count > limit {
panic("merkleizing list that is too large, over limit")
}
if limit == 0 {
return
}
if limit == 1 {
if count == 1 {
copy(out[:], leaf(0))
}
return
}
depth := GetDepth(count)
limitDepth := GetDepth(limit)
tmp := make([][32]byte, limitDepth+1, limitDepth+1)
j := uint8(0)
hArr := [32]byte{}
h := hArr[:]
merge := func(i uint64) {
// merge back up from bottom to top, as far as we can
for j = 0; ; j++ {
// stop merging when we are in the left side of the next combi
if i&(uint64(1)<<j) == 0 {
// if we are at the count, we want to merge in zero-hashes for padding
if i == count && j < depth {
v := hasher.Combi(hArr, trieutil.ZeroHashes[j])
copy(h, v[:])
} else {
break
}
} else {
// keep merging up if we are the right side
v := hasher.Combi(tmp[j], hArr)
copy(h, v[:])
}
}
// store the merge result (may be no merge, i.e. bottom leaf node)
copy(tmp[j][:], h)
}
// merge in leaf by leaf.
for i := uint64(0); i < count; i++ {
copy(h[:], leaf(i))
merge(i)
}
// complement with 0 if empty, or if not the right power of 2
if (uint64(1) << depth) != count {
copy(h[:], trieutil.ZeroHashes[0][:])
merge(count)
}
// the next power of two may be smaller than the ultimate virtual size,
// complement with zero-hashes at each depth.
for j := depth; j < limitDepth; j++ {
tmp[j+1] = hasher.Combi(tmp[j], trieutil.ZeroHashes[j])
}
return tmp[limitDepth]
}
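To make the calling convention concrete, a small hypothetical helper: three 32-byte chunks merkleized against a virtual limit of four leaves, so the missing fourth leaf is padded with a zero-hash:

package stateutil

import "github.com/prysmaticlabs/prysm/shared/hashutil"

// exampleMerkleize is an illustrative sketch, not part of the diff.
func exampleMerkleize() [32]byte {
	chunks := [][32]byte{{1}, {2}, {3}}
	hasher := NewHasherFunc(hashutil.CustomSHA256Hasher())
	leaf := func(i uint64) []byte { return chunks[i][:] }
	// count=3 leaves are merged; the tree is padded out to limit=4.
	return Merkleize(hasher, uint64(len(chunks)), 4, leaf)
}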
// ConstructProof builds a merkle-branch of the given depth, at the given index (at that depth),
// for a list of leaves of a balanced binary tree.
func ConstructProof(hasher Hasher, count uint64, limit uint64, leaf func(i uint64) []byte, index uint64) (branch [][32]byte) {
if count > limit {
panic("merkleizing list that is too large, over limit")
}
if index >= limit {
panic("index out of range, over limit")
}
if limit <= 1 {
return
}
depth := GetDepth(count)
limitDepth := GetDepth(limit)
branch = append(branch, trieutil.ZeroHashes[:limitDepth]...)
tmp := make([][32]byte, limitDepth+1, limitDepth+1)
j := uint8(0)
hArr := [32]byte{}
h := hArr[:]
merge := func(i uint64) {
// merge back up from bottom to top, as far as we can
for j = 0; ; j++ {
// if i is a sibling of index at the given depth,
// and i is the last index of the subtree to that depth,
// then put h into the branch
if (i>>j)^1 == (index>>j) && (((1<<j)-1)&i) == ((1<<j)-1) {
// insert sibling into the proof
branch[j] = hArr
}
// stop merging when we are in the left side of the next combi
if i&(uint64(1)<<j) == 0 {
// if we are at the count, we want to merge in zero-hashes for padding
if i == count && j < depth {
v := hasher.Combi(hArr, trieutil.ZeroHashes[j])
copy(h, v[:])
} else {
break
}
} else {
// keep merging up if we are the right side
v := hasher.Combi(tmp[j], hArr)
copy(h, v[:])
}
}
// store the merge result (may be no merge, i.e. bottom leaf node)
copy(tmp[j][:], h)
}
// merge in leaf by leaf.
for i := uint64(0); i < count; i++ {
copy(h[:], leaf(i))
merge(i)
}
// complement with 0 if empty, or if not the right power of 2
if (uint64(1) << depth) != count {
copy(h[:], trieutil.ZeroHashes[0][:])
merge(count)
}
return
}
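A branch produced this way can be checked by folding siblings upward with Combi; the sketch below is an assumption rather than code from this diff, and it picks the argument order from the index bit at each depth:

package stateutil

// verifyProof returns true if the branch connects the leaf at the
// given index to root. branch[j] is the sibling at depth j, bottom-up.
func verifyProof(hasher Hasher, root, leaf [32]byte, branch [][32]byte, index uint64) bool {
	h := leaf
	for j, sibling := range branch {
		if (index>>uint(j))&1 == 1 {
			h = hasher.Combi(sibling, h) // current node is a right child
		} else {
			h = hasher.Combi(h, sibling) // current node is a left child
		}
	}
	return h == root
}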

View File

@@ -10,6 +10,7 @@ import (
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/hashutil"
"github.com/prysmaticlabs/prysm/shared/params"
)
@@ -70,13 +71,14 @@ func (h *stateRootHasher) hashTreeRootState(state *pb.BeaconState) ([32]byte, er
return [32]byte{}, err
}
}
return bitwiseMerkleize(fieldRoots, uint64(len(fieldRoots)), uint64(len(fieldRoots)))
return bitwiseMerkleize(hashutil.CustomSHA256Hasher(), fieldRoots, uint64(len(fieldRoots)), uint64(len(fieldRoots)))
}
func (h *stateRootHasher) computeFieldRoots(state *pb.BeaconState) ([][]byte, error) {
if state == nil {
return nil, errors.New("nil state")
}
hasher := hashutil.CustomSHA256Hasher()
// There are 20 fields in the beacon state.
fieldRoots := make([][]byte, 20)
@@ -124,7 +126,7 @@ func (h *stateRootHasher) computeFieldRoots(state *pb.BeaconState) ([][]byte, er
fieldRoots[6] = historicalRootsRt[:]
// Eth1Data data structure root.
eth1HashTreeRoot, err := Eth1Root(state.Eth1Data)
eth1HashTreeRoot, err := Eth1Root(hasher, state.Eth1Data)
if err != nil {
return nil, errors.Wrap(err, "could not compute eth1data merkleization")
}
@@ -190,21 +192,21 @@ func (h *stateRootHasher) computeFieldRoots(state *pb.BeaconState) ([][]byte, er
fieldRoots[16] = justifiedBitsRoot[:]
// PreviousJustifiedCheckpoint data structure root.
prevCheckRoot, err := CheckpointRoot(state.PreviousJustifiedCheckpoint)
prevCheckRoot, err := CheckpointRoot(hasher, state.PreviousJustifiedCheckpoint)
if err != nil {
return nil, errors.Wrap(err, "could not compute previous justified checkpoint merkleization")
}
fieldRoots[17] = prevCheckRoot[:]
// CurrentJustifiedCheckpoint data structure root.
currJustRoot, err := CheckpointRoot(state.CurrentJustifiedCheckpoint)
currJustRoot, err := CheckpointRoot(hasher, state.CurrentJustifiedCheckpoint)
if err != nil {
return nil, errors.Wrap(err, "could not compute current justified checkpoint merkleization")
}
fieldRoots[18] = currJustRoot[:]
// FinalizedCheckpoint data structure root.
finalRoot, err := CheckpointRoot(state.FinalizedCheckpoint)
finalRoot, err := CheckpointRoot(hasher, state.FinalizedCheckpoint)
if err != nil {
return nil, errors.Wrap(err, "could not compute finalized checkpoint merkleization")
}
@@ -237,13 +239,13 @@ func ForkRoot(fork *pb.Fork) ([32]byte, error) {
epochRoot := bytesutil.ToBytes32(forkEpochBuf)
fieldRoots[2] = epochRoot[:]
}
return bitwiseMerkleize(fieldRoots, uint64(len(fieldRoots)), uint64(len(fieldRoots)))
return bitwiseMerkleize(hashutil.CustomSHA256Hasher(), fieldRoots, uint64(len(fieldRoots)), uint64(len(fieldRoots)))
}
// CheckpointRoot computes the HashTreeRoot Merkleization of
// a Checkpoint struct value according to the eth2
// Simple Serialize specification.
func CheckpointRoot(checkpoint *ethpb.Checkpoint) ([32]byte, error) {
func CheckpointRoot(hasher HashFn, checkpoint *ethpb.Checkpoint) ([32]byte, error) {
fieldRoots := make([][]byte, 2)
if checkpoint != nil {
epochBuf := make([]byte, 8)
@@ -253,14 +255,14 @@ func CheckpointRoot(checkpoint *ethpb.Checkpoint) ([32]byte, error) {
ckpRoot := bytesutil.ToBytes32(checkpoint.Root)
fieldRoots[1] = ckpRoot[:]
}
return bitwiseMerkleize(fieldRoots, uint64(len(fieldRoots)), uint64(len(fieldRoots)))
return bitwiseMerkleize(hasher, fieldRoots, uint64(len(fieldRoots)), uint64(len(fieldRoots)))
}
// HistoricalRootsRoot computes the HashTreeRoot Merkleization of
// a list of [32]byte historical block roots according to the eth2
// Simple Serialize specification.
func HistoricalRootsRoot(historicalRoots [][]byte) ([32]byte, error) {
result, err := bitwiseMerkleize(historicalRoots, uint64(len(historicalRoots)), params.BeaconConfig().HistoricalRootsLimit)
result, err := bitwiseMerkleize(hashutil.CustomSHA256Hasher(), historicalRoots, uint64(len(historicalRoots)), params.BeaconConfig().HistoricalRootsLimit)
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not compute historical roots merkleization")
}
@@ -289,5 +291,5 @@ func SlashingsRoot(slashings []uint64) ([32]byte, error) {
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not pack slashings into chunks")
}
return bitwiseMerkleize(slashingChunks, uint64(len(slashingChunks)), uint64(len(slashingChunks)))
return bitwiseMerkleize(hashutil.CustomSHA256Hasher(), slashingChunks, uint64(len(slashingChunks)), uint64(len(slashingChunks)))
}

View File

@@ -3,7 +3,6 @@ package stateutil
import (
"bytes"
"github.com/protolambda/zssz/merkle"
"github.com/prysmaticlabs/prysm/shared/hashutil"
"github.com/prysmaticlabs/prysm/shared/trieutil"
)
@@ -19,7 +18,7 @@ func ReturnTrieLayer(elements [][32]byte, length uint64) [][]*[32]byte {
return [][]*[32]byte{{&leaves[0]}}
}
hashLayer := leaves
layers := make([][][32]byte, merkle.GetDepth(length)+1)
layers := make([][][32]byte, GetDepth(length)+1)
layers[0] = hashLayer
layers, _ = merkleizeTrieLeaves(layers, hashLayer, hasher)
refLayers := make([][]*[32]byte, len(layers))
@@ -38,7 +37,7 @@ func ReturnTrieLayer(elements [][32]byte, length uint64) [][]*[32]byte {
// it.
func ReturnTrieLayerVariable(elements [][32]byte, length uint64) [][]*[32]byte {
hasher := hashutil.CustomSHA256Hasher()
depth := merkle.GetDepth(length)
depth := GetDepth(length)
layers := make([][]*[32]byte, depth+1)
// Return zerohash at depth
if len(elements) == 0 {

View File

@@ -3,6 +3,8 @@ package stateutil_test
import (
"testing"
"github.com/prysmaticlabs/prysm/shared/hashutil"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stateutil"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
@@ -34,10 +36,11 @@ func TestReturnTrieLayerVariable_OK(t *testing.T) {
if err != nil {
t.Fatal(err)
}
hasher := hashutil.CustomSHA256Hasher()
validators := newState.Validators()
roots := make([][32]byte, 0, len(validators))
for _, val := range validators {
rt, err := stateutil.ValidatorRoot(val)
rt, err := stateutil.ValidatorRoot(hasher, val)
if err != nil {
t.Fatal(err)
}
@@ -84,9 +87,10 @@ func TestRecomputeFromLayer_FixedSizedArray(t *testing.T) {
func TestRecomputeFromLayer_VariableSizedArray(t *testing.T) {
newState, _ := testutil.DeterministicGenesisState(t, 32)
validators := newState.Validators()
hasher := hashutil.CustomSHA256Hasher()
roots := make([][32]byte, 0, len(validators))
for _, val := range validators {
rt, err := stateutil.ValidatorRoot(val)
rt, err := stateutil.ValidatorRoot(hasher, val)
if err != nil {
t.Fatal(err)
}
@@ -119,7 +123,7 @@ func TestRecomputeFromLayer_VariableSizedArray(t *testing.T) {
}
roots = make([][32]byte, 0, len(changedVals))
for _, val := range changedVals {
rt, err := stateutil.ValidatorRoot(val)
rt, err := stateutil.ValidatorRoot(hasher, val)
if err != nil {
t.Fatal(err)
}

View File

@@ -26,6 +26,7 @@ func ValidatorRegistryRoot(vals []*ethpb.Validator) ([32]byte, error) {
// a list of validator uint64 balances according to the eth2
// Simple Serialize specification.
func ValidatorBalancesRoot(balances []uint64) ([32]byte, error) {
hasher := hashutil.CustomSHA256Hasher()
balancesMarshaling := make([][]byte, 0)
for i := 0; i < len(balances); i++ {
balanceBuf := make([]byte, 8)
@@ -46,7 +47,7 @@ func ValidatorBalancesRoot(balances []uint64) ([32]byte, error) {
balLimit = uint64(len(balances))
}
}
balancesRootsRoot, err := bitwiseMerkleize(balancesChunks, uint64(len(balancesChunks)), balLimit)
balancesRootsRoot, err := bitwiseMerkleize(hasher, balancesChunks, uint64(len(balancesChunks)), balLimit)
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not compute balances merkleization")
}
@@ -61,7 +62,7 @@ func ValidatorBalancesRoot(balances []uint64) ([32]byte, error) {
// ValidatorRoot describes a method from which the hash tree root
// of a validator is returned.
func ValidatorRoot(validator *ethpb.Validator) ([32]byte, error) {
func ValidatorRoot(hasher HashFn, validator *ethpb.Validator) ([32]byte, error) {
fieldRoots := [][32]byte{}
if validator != nil {
pubkey := bytesutil.ToBytes48(validator.PublicKey)
@@ -92,23 +93,24 @@ func ValidatorRoot(validator *ethpb.Validator) ([32]byte, error) {
if err != nil {
return [32]byte{}, err
}
pubKeyRoot, err := bitwiseMerkleize(pubKeyChunks, uint64(len(pubKeyChunks)), uint64(len(pubKeyChunks)))
pubKeyRoot, err := bitwiseMerkleize(hasher, pubKeyChunks, uint64(len(pubKeyChunks)), uint64(len(pubKeyChunks)))
if err != nil {
return [32]byte{}, err
}
fieldRoots = [][32]byte{pubKeyRoot, withdrawCreds, effectiveBalanceBuf, slashBuf, activationEligibilityBuf,
activationBuf, exitBuf, withdrawalBuf}
}
return bitwiseMerkleizeArrays(fieldRoots, uint64(len(fieldRoots)), uint64(len(fieldRoots)))
return bitwiseMerkleizeArrays(hasher, fieldRoots, uint64(len(fieldRoots)), uint64(len(fieldRoots)))
}
func (h *stateRootHasher) validatorRegistryRoot(validators []*ethpb.Validator) ([32]byte, error) {
hashKeyElements := make([]byte, len(validators)*32)
roots := make([][32]byte, len(validators))
emptyKey := hashutil.FastSum256(hashKeyElements)
hasher := hashutil.CustomSHA256Hasher()
bytesProcessed := 0
for i := 0; i < len(validators); i++ {
val, err := h.validatorRoot(validators[i])
val, err := h.validatorRoot(hasher, validators[i])
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not compute validators merkleization")
}
@@ -124,7 +126,7 @@ func (h *stateRootHasher) validatorRegistryRoot(validators []*ethpb.Validator) (
}
}
validatorsRootsRoot, err := bitwiseMerkleizeArrays(roots, uint64(len(roots)), params.BeaconConfig().ValidatorRegistryLimit)
validatorsRootsRoot, err := bitwiseMerkleizeArrays(hasher, roots, uint64(len(roots)), params.BeaconConfig().ValidatorRegistryLimit)
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not compute validator registry merkleization")
}
@@ -142,7 +144,7 @@ func (h *stateRootHasher) validatorRegistryRoot(validators []*ethpb.Validator) (
return res, nil
}
func (h *stateRootHasher) validatorRoot(validator *ethpb.Validator) ([32]byte, error) {
func (h *stateRootHasher) validatorRoot(hasher HashFn, validator *ethpb.Validator) ([32]byte, error) {
// Validator marshaling for caching.
enc := make([]byte, 122)
fieldRoots := make([][32]byte, 2, 8)
@@ -188,7 +190,7 @@ func (h *stateRootHasher) validatorRoot(validator *ethpb.Validator) ([32]byte, e
if err != nil {
return [32]byte{}, err
}
pubKeyRoot, err := bitwiseMerkleize(pubKeyChunks, uint64(len(pubKeyChunks)), uint64(len(pubKeyChunks)))
pubKeyRoot, err := bitwiseMerkleize(hasher, pubKeyChunks, uint64(len(pubKeyChunks)), uint64(len(pubKeyChunks)))
if err != nil {
return [32]byte{}, err
}
@@ -222,7 +224,7 @@ func (h *stateRootHasher) validatorRoot(validator *ethpb.Validator) ([32]byte, e
fieldRoots = append(fieldRoots, withdrawalBuf)
}
valRoot, err := bitwiseMerkleizeArrays(fieldRoots, uint64(len(fieldRoots)), uint64(len(fieldRoots)))
valRoot, err := bitwiseMerkleizeArrays(hasher, fieldRoots, uint64(len(fieldRoots)), uint64(len(fieldRoots)))
if err != nil {
return [32]byte{}, err
}

View File

@@ -108,6 +108,7 @@ go_test(
shard_count = 4,
deps = [
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",

View File

@@ -181,7 +181,17 @@ func (s *Service) roundRobinSync(genesis time.Time) error {
}
}
}
startBlock := s.chain.HeadSlot() + 1
var startBlock uint64
if featureconfig.Get().InitSyncBatchSaveBlocks {
lastFinalizedEpoch := s.chain.FinalizedCheckpt().Epoch
lastFinalizedState, err := s.db.HighestSlotStatesBelow(ctx, helpers.StartSlot(lastFinalizedEpoch))
if err != nil {
return err
}
startBlock = lastFinalizedState[0].Slot() + 1
} else {
startBlock = s.chain.HeadSlot() + 1
}
skippedBlocks := blockBatchSize * uint64(lastEmptyRequests*len(peers))
if startBlock+skippedBlocks > helpers.StartSlot(finalizedEpoch+1) {
log.WithField("finalizedEpoch", finalizedEpoch).Debug("Requested block range is greater than the finalized epoch")
@@ -209,10 +219,12 @@ func (s *Service) roundRobinSync(genesis time.Time) error {
for _, blk := range blocks {
s.logSyncStatus(genesis, blk.Block, peers, counter)
if !s.db.HasBlock(ctx, bytesutil.ToBytes32(blk.Block.ParentRoot)) {
log.Debugf("Beacon node doesn't have a block in db with root %#x", blk.Block.ParentRoot)
parentRoot := bytesutil.ToBytes32(blk.Block.ParentRoot)
if !s.db.HasBlock(ctx, parentRoot) && !s.chain.HasInitSyncBlock(parentRoot) {
log.WithField("parentRoot", parentRoot).Debug("Beacon node doesn't have a block in DB or cache")
continue
}
s.blockNotifier.BlockFeed().Send(&feed.Event{
Type: blockfeed.ReceivedBlock,
Data: &blockfeed.ReceivedBlockData{SignedBlock: blk},

View File

@@ -5,6 +5,7 @@ go_library(
srcs = [
"blocks_fetcher.go",
"blocks_queue.go",
"fsm.go",
"log.go",
"round_robin.go",
"service.go",
@@ -43,6 +44,7 @@ go_test(
srcs = [
"blocks_fetcher_test.go",
"blocks_queue_test.go",
"fsm_test.go",
"round_robin_test.go",
],
embed = [":go_default_library"],

View File

@@ -5,6 +5,7 @@ import (
"context"
"fmt"
"io"
"math"
"math/rand"
"sort"
"sync"
@@ -16,14 +17,23 @@ import (
eth "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/beacon-chain/p2p"
prysmsync "github.com/prysmaticlabs/prysm/beacon-chain/sync"
p2ppb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/mathutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
const (
// maxPendingRequests limits how many concurrent fetch requests one can initiate.
maxPendingRequests = 8
// peersPercentagePerRequest caps the percentage of peers to be used in a request.
peersPercentagePerRequest = 0.75
)
var (
errNoPeersAvailable = errors.New("no peers available, waiting for reconnect")
errFetcherCtxIsDone = errors.New("fetcher's context is done, reinitialize")
@@ -40,6 +50,7 @@ type blocksFetcherConfig struct {
// On an incoming request, the requested block range is evenly divided
// among available peers (for fair network load distribution).
type blocksFetcher struct {
sync.Mutex
ctx context.Context
cancel context.CancelFunc
headFetcher blockchain.HeadFetcher
@@ -72,7 +83,7 @@ func newBlocksFetcher(ctx context.Context, cfg *blocksFetcherConfig) *blocksFetc
rateLimiter := leakybucket.NewCollector(
allowedBlocksPerSecond, /* rate */
allowedBlocksPerSecond, /* capacity */
false /* deleteEmptyBuckets */)
false /* deleteEmptyBuckets */)
return &blocksFetcher{
ctx: ctx,
@@ -80,8 +91,8 @@ func newBlocksFetcher(ctx context.Context, cfg *blocksFetcherConfig) *blocksFetc
headFetcher: cfg.headFetcher,
p2p: cfg.p2p,
rateLimiter: rateLimiter,
fetchRequests: make(chan *fetchRequestParams, queueMaxPendingRequests),
fetchResponses: make(chan *fetchRequestResponse, queueMaxPendingRequests),
fetchRequests: make(chan *fetchRequestParams, maxPendingRequests),
fetchResponses: make(chan *fetchRequestResponse, maxPendingRequests),
quit: make(chan struct{}),
}
}
@@ -120,6 +131,11 @@ func (f *blocksFetcher) loop() {
}()
for {
// Make sure there are available peers before processing requests.
if _, err := f.waitForMinimumPeers(f.ctx); err != nil {
log.Error(err)
}
select {
case <-f.ctx.Done():
log.Debug("Context closed, exiting goroutine (blocks fetcher)")
@@ -221,17 +237,11 @@ func (f *blocksFetcher) collectPeerResponses(
return nil, ctx.Err()
}
peers = f.selectPeers(peers)
if len(peers) == 0 {
return nil, errNoPeersAvailable
}
// Shuffle peers to prevent a bad peer from
// stalling sync with invalid blocks.
randGenerator := rand.New(rand.NewSource(time.Now().Unix()))
randGenerator.Shuffle(len(peers), func(i, j int) {
peers[i], peers[j] = peers[j], peers[i]
})
p2pRequests := new(sync.WaitGroup)
errChan := make(chan error)
blocksChan := make(chan []*eth.SignedBeaconBlock)
@@ -249,7 +259,7 @@ func (f *blocksFetcher) collectPeerResponses(
}
// Spread load evenly among available peers.
perPeerCount := count / uint64(len(peers))
perPeerCount := mathutil.Min(count/uint64(len(peers)), allowedBlocksPerSecond)
remainder := int(count % uint64(len(peers)))
for i, pid := range peers {
start, step := start+uint64(i)*step, step*uint64(len(peers))
@@ -354,6 +364,7 @@ func (f *blocksFetcher) requestBlocks(
req *p2ppb.BeaconBlocksByRangeRequest,
pid peer.ID,
) ([]*eth.SignedBeaconBlock, error) {
f.Lock()
if f.rateLimiter.Remaining(pid.String()) < int64(req.Count) {
log.WithField("peer", pid).Debug("Slowing down for rate limit")
time.Sleep(f.rateLimiter.TillEmpty(pid.String()))
@@ -366,6 +377,7 @@ func (f *blocksFetcher) requestBlocks(
"step": req.Step,
"head": fmt.Sprintf("%#x", req.HeadBlockRoot),
}).Debug("Requesting blocks")
f.Unlock()
stream, err := f.p2p.Send(ctx, req, pid)
if err != nil {
return nil, err
@@ -407,3 +419,49 @@ func selectFailOverPeer(excludedPID peer.ID, peers []peer.ID) (peer.ID, error) {
return peers[0], nil
}
// waitForMinimumPeers spins and waits until enough peers are available.
func (f *blocksFetcher) waitForMinimumPeers(ctx context.Context) ([]peer.ID, error) {
required := params.BeaconConfig().MaxPeersToSync
if flags.Get().MinimumSyncPeers < required {
required = flags.Get().MinimumSyncPeers
}
for {
if ctx.Err() != nil {
return nil, ctx.Err()
}
headEpoch := helpers.SlotToEpoch(f.headFetcher.HeadSlot())
_, _, peers := f.p2p.Peers().BestFinalized(params.BeaconConfig().MaxPeersToSync, headEpoch)
if len(peers) >= required {
return peers, nil
}
log.WithFields(logrus.Fields{
"suitable": len(peers),
"required": required}).Info("Waiting for enough suitable peers before syncing")
time.Sleep(handshakePollingInterval)
}
}
// selectPeers returns a transformed list of peers (randomized, constrained if necessary).
func (f *blocksFetcher) selectPeers(peers []peer.ID) []peer.ID {
if len(peers) == 0 {
return peers
}
// Shuffle peers to prevent a bad peer from
// stalling sync with invalid blocks.
randGenerator := rand.New(rand.NewSource(time.Now().Unix()))
randGenerator.Shuffle(len(peers), func(i, j int) {
peers[i], peers[j] = peers[j], peers[i]
})
required := params.BeaconConfig().MaxPeersToSync
if flags.Get().MinimumSyncPeers < required {
required = flags.Get().MinimumSyncPeers
}
limit := uint64(math.Round(float64(len(peers)) * peersPercentagePerRequest))
limit = mathutil.Max(limit, uint64(required))
limit = mathutil.Min(limit, uint64(len(peers)))
return peers[:limit]
}
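For intuition on the clamping above, a hypothetical restatement of the arithmetic (assuming a MinimumSyncPeers of 3):

package main

import (
	"fmt"
	"math"
)

// peerLimit mirrors selectPeers' clamping: 75% of peers, but never
// fewer than required and never more than are available.
func peerLimit(peerCount, required int) uint64 {
	limit := uint64(math.Round(float64(peerCount) * 0.75))
	if limit < uint64(required) {
		limit = uint64(required)
	}
	if limit > uint64(peerCount) {
		limit = uint64(peerCount)
	}
	return limit
}

func main() {
	fmt.Println(peerLimit(10, 3)) // 8: three quarters of ten peers
	fmt.Println(peerLimit(2, 3))  // 2: capped by the available peer count
}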

View File

@@ -97,7 +97,7 @@ func TestBlocksFetcherRoundRobin(t *testing.T) {
}{
{
name: "Single peer with all blocks",
expectedBlockSlots: makeSequence(1, 128), // up to 4th epoch
expectedBlockSlots: makeSequence(1, 3*blockBatchSize),
peers: []*peerData{
{
blocks: makeSequence(1, 131),
@@ -122,7 +122,7 @@ func TestBlocksFetcherRoundRobin(t *testing.T) {
},
{
name: "Single peer with all blocks (many small requests)",
expectedBlockSlots: makeSequence(1, 128), // up to 4th epoch
expectedBlockSlots: makeSequence(1, 80),
peers: []*peerData{
{
blocks: makeSequence(1, 131),
@@ -155,7 +155,7 @@ func TestBlocksFetcherRoundRobin(t *testing.T) {
},
{
name: "Multiple peers with all blocks",
expectedBlockSlots: makeSequence(1, 128), // up to 4th epoch
expectedBlockSlots: makeSequence(1, 96), // up to 4th epoch
peers: []*peerData{
{
blocks: makeSequence(1, 131),
@@ -218,6 +218,16 @@ func TestBlocksFetcherRoundRobin(t *testing.T) {
finalizedEpoch: 18,
headSlot: 640,
},
{
blocks: append(makeSequence(1, 64), makeSequence(500, 640)...),
finalizedEpoch: 18,
headSlot: 640,
},
{
blocks: append(makeSequence(1, 64), makeSequence(500, 640)...),
finalizedEpoch: 18,
headSlot: 640,
},
},
requests: []*fetchRequestParams{
{
@@ -233,8 +243,8 @@ func TestBlocksFetcherRoundRobin(t *testing.T) {
count: blockBatchSize,
},
{
start: 400,
count: 150,
start: 500,
count: 53,
},
{
start: 553,

View File

@@ -3,30 +3,31 @@ package initialsync
import (
"context"
"errors"
"sync"
"time"
eth "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/shared/mathutil"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/shared/params"
)
const (
// queueMaxPendingRequests limits how many concurrent fetch requests the queue can initiate.
queueMaxPendingRequests = 8
// queueFetchRequestTimeout caps the maximum amount of time before a fetch request is cancelled.
queueFetchRequestTimeout = 60 * time.Second
// queueMaxCachedBlocks is a hard limit on how many queue items to cache before forced dequeue.
queueMaxCachedBlocks = 8 * queueMaxPendingRequests * blockBatchSize
// queueStopCallTimeout is the time allowed for the queue to release resources when quitting.
queueStopCallTimeout = 1 * time.Second
// pollingInterval defines how often the state machine needs to check for new events.
pollingInterval = 200 * time.Millisecond
// staleEpochTimeout is a period after which an epoch's state is considered stale.
staleEpochTimeout = 5 * pollingInterval
// lookaheadEpochs is a default limit on how many forward epochs are loaded into the queue.
lookaheadEpochs = 4
)
var (
errQueueCtxIsDone = errors.New("queue's context is done, reinitialize")
errQueueTakesTooLongToStop = errors.New("queue takes too long to stop")
errNoEpochState = errors.New("epoch state not found")
)
// blocksProvider exposes enough methods for queue to fetch incoming blocks.
@@ -46,60 +47,17 @@ type blocksQueueConfig struct {
p2p p2p.P2P
}
// blocksQueueState holds internal queue state (for easier management of state transitions).
type blocksQueueState struct {
scheduler *schedulerState
sender *senderState
cachedBlocks map[uint64]*cachedBlock
}
// blockState enums possible queue block states.
type blockState uint8
const (
// pendingBlock is the default block status when just added to the queue.
pendingBlock = iota
// validBlock represents a block that can be processed.
validBlock
// skippedBlock is a block for a slot that is not found on any available peers.
skippedBlock
// failedBlock represents a block that cannot be processed at the moment.
failedBlock
// blockStateLen is a sentinel to know the number of possible block states.
blockStateLen
)
// schedulerState is the state of the scheduling process.
type schedulerState struct {
sync.Mutex
currentSlot uint64
blockBatchSize uint64
requestedBlocks map[blockState]uint64
}
// senderState is the state of the block sending process.
type senderState struct {
sync.Mutex
}
// cachedBlock is a container for a signed beacon block.
type cachedBlock struct {
*eth.SignedBeaconBlock
}
// blocksQueue is a priority queue that serves as an intermediary between block fetchers (producers)
// and the block processing goroutine (consumer). The consumer can rely on the order of incoming blocks.
type blocksQueue struct {
ctx context.Context
cancel context.CancelFunc
highestExpectedSlot uint64
state *blocksQueueState
blocksFetcher blocksProvider
headFetcher blockchain.HeadFetcher
fetchedBlocks chan *eth.SignedBeaconBlock // output channel for ready blocks
pendingFetchRequests chan struct{} // pending requests semaphore
pendingFetchedBlocks chan struct{} // notifier, pings block sending handler
quit chan struct{} // termination notifier
ctx context.Context
cancel context.CancelFunc
highestExpectedSlot uint64
state *stateMachine
blocksFetcher blocksProvider
headFetcher blockchain.HeadFetcher
fetchedBlocks chan *eth.SignedBeaconBlock // output channel for ready blocks
quit chan struct{} // termination notifier
}
// newBlocksQueue creates an initialized priority queue.
@@ -114,26 +72,25 @@ func newBlocksQueue(ctx context.Context, cfg *blocksQueueConfig) *blocksQueue {
})
}
return &blocksQueue{
queue := &blocksQueue{
ctx: ctx,
cancel: cancel,
highestExpectedSlot: cfg.highestExpectedSlot,
state: &blocksQueueState{
scheduler: &schedulerState{
currentSlot: cfg.startSlot,
blockBatchSize: blockBatchSize,
requestedBlocks: make(map[blockState]uint64, blockStateLen),
},
sender: &senderState{},
cachedBlocks: make(map[uint64]*cachedBlock, queueMaxCachedBlocks),
},
blocksFetcher: blocksFetcher,
headFetcher: cfg.headFetcher,
fetchedBlocks: make(chan *eth.SignedBeaconBlock, blockBatchSize),
pendingFetchRequests: make(chan struct{}, queueMaxPendingRequests),
pendingFetchedBlocks: make(chan struct{}, queueMaxPendingRequests),
quit: make(chan struct{}),
blocksFetcher: blocksFetcher,
headFetcher: cfg.headFetcher,
fetchedBlocks: make(chan *eth.SignedBeaconBlock, allowedBlocksPerSecond),
quit: make(chan struct{}),
}
// Configure state machine.
queue.state = newStateMachine()
queue.state.addHandler(stateNew, eventSchedule, queue.onScheduleEvent(ctx))
queue.state.addHandler(stateScheduled, eventDataReceived, queue.onDataReceivedEvent(ctx))
queue.state.addHandler(stateDataParsed, eventReadyToSend, queue.onReadyToSendEvent(ctx))
queue.state.addHandler(stateSkipped, eventExtendWindow, queue.onExtendWindowEvent(ctx))
queue.state.addHandler(stateSent, eventCheckStale, queue.onCheckStaleEvent(ctx))
return queue
}
// start boots up the queue processing.
@@ -162,10 +119,7 @@ func (q *blocksQueue) stop() error {
func (q *blocksQueue) loop() {
defer close(q.quit)
// Wait for all goroutines to wrap up (forced by cancelled context), and do a cleanup.
wg := &sync.WaitGroup{}
defer func() {
wg.Wait()
q.blocksFetcher.stop()
close(q.fetchedBlocks)
}()
@@ -174,14 +128,16 @@ func (q *blocksQueue) loop() {
log.WithError(err).Debug("Can not start blocks provider")
}
// Reads from the semaphore channel, allowing the next goroutine to grab it and schedule the next request.
releaseTicket := func() {
select {
case <-q.ctx.Done():
case <-q.pendingFetchRequests:
}
startEpoch := helpers.SlotToEpoch(q.headFetcher.HeadSlot())
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
// Define epoch states as finite state machines.
for i := startEpoch; i < startEpoch+lookaheadEpochs; i++ {
q.state.addEpochState(i)
}
ticker := time.NewTicker(pollingInterval)
tickerEvents := []eventID{eventSchedule, eventReadyToSend, eventCheckStale, eventExtendWindow}
for {
if q.headFetcher.HeadSlot() >= q.highestExpectedSlot {
log.Debug("Highest expected slot reached")
@@ -189,229 +145,214 @@ func (q *blocksQueue) loop() {
}
select {
case <-q.ctx.Done():
log.Debug("Context closed, exiting goroutine (blocks queue)")
return
case q.pendingFetchRequests <- struct{}{}:
wg.Add(1)
go func() {
defer wg.Done()
// Schedule request.
if err := q.scheduleFetchRequests(q.ctx); err != nil {
q.state.scheduler.incrementCounter(failedBlock, blockBatchSize)
releaseTicket()
case <-ticker.C:
for _, state := range q.state.epochs {
data := &fetchRequestParams{
start: helpers.StartSlot(state.epoch),
count: slotsPerEpoch,
}
}()
// Trigger events on each epoch's state machine.
for _, event := range tickerEvents {
if err := q.state.trigger(event, state.epoch, data); err != nil {
log.WithError(err).Debug("Can not trigger event")
}
}
// Do garbage collection, and advance sliding window forward.
if q.headFetcher.HeadSlot() >= helpers.StartSlot(state.epoch+1) {
highestEpochSlot, err := q.state.highestEpochSlot()
if err != nil {
log.WithError(err).Debug("Cannot obtain highest epoch state number")
continue
}
if err := q.state.removeEpochState(state.epoch); err != nil {
log.WithError(err).Debug("Can not remove epoch state")
}
if len(q.state.epochs) < lookaheadEpochs {
q.state.addEpochState(highestEpochSlot + 1)
}
}
}
case response, ok := <-q.blocksFetcher.requestResponses():
if !ok {
log.Debug("Fetcher closed output channel")
q.cancel()
return
}
// Release semaphore ticket.
go releaseTicket()
// Process incoming response into blocks.
wg.Add(1)
go func() {
defer func() {
select {
case <-q.ctx.Done():
case q.pendingFetchedBlocks <- struct{}{}: // notify sender of data availability
}
wg.Done()
}()
skippedBlocks, err := q.parseFetchResponse(q.ctx, response)
if err != nil {
q.state.scheduler.incrementCounter(failedBlock, response.count)
return
// Update state of an epoch for which data is received.
epoch := helpers.SlotToEpoch(response.start)
if ind, ok := q.state.findEpochState(epoch); ok {
state := q.state.epochs[ind]
if err := q.state.trigger(eventDataReceived, state.epoch, response); err != nil {
log.WithError(err).Debug("Can not trigger event")
state.setState(stateNew)
continue
}
q.state.scheduler.incrementCounter(skippedBlock, skippedBlocks)
}()
case <-q.pendingFetchedBlocks:
wg.Add(1)
go func() {
defer wg.Done()
if err := q.sendFetchedBlocks(q.ctx); err != nil {
log.WithError(err).Debug("Error sending received blocks")
}
}()
}
}
}
// scheduleFetchRequests enqueues block fetch requests to block fetcher.
func (q *blocksQueue) scheduleFetchRequests(ctx context.Context) error {
q.state.scheduler.Lock()
defer q.state.scheduler.Unlock()
if ctx.Err() != nil {
return ctx.Err()
}
s := q.state.scheduler
blocks := q.state.scheduler.requestedBlocks
func() {
resetStateCounters := func() {
for i := 0; i < blockStateLen; i++ {
blocks[blockState(i)] = 0
}
s.currentSlot = q.headFetcher.HeadSlot()
}
// Update state's current slot pointer.
count := blocks[pendingBlock] + blocks[skippedBlock] + blocks[failedBlock] + blocks[validBlock]
if count == 0 {
s.currentSlot = q.headFetcher.HeadSlot()
case <-q.ctx.Done():
log.Debug("Context closed, exiting goroutine (blocks queue)")
ticker.Stop()
return
}
// Too many failures (blocks that can't be processed at this time).
if blocks[failedBlock] >= s.blockBatchSize/2 {
s.blockBatchSize *= 2
resetStateCounters()
return
}
// Given enough valid blocks, we can set back block batch size.
if blocks[validBlock] >= blockBatchSize && s.blockBatchSize != blockBatchSize {
blocks[skippedBlock], blocks[validBlock] = blocks[skippedBlock]+blocks[validBlock], 0
s.blockBatchSize = blockBatchSize
}
// Too many items in scheduler, time to update current slot to point to current head's slot.
count = blocks[pendingBlock] + blocks[skippedBlock] + blocks[failedBlock] + blocks[validBlock]
if count >= queueMaxCachedBlocks {
s.blockBatchSize = blockBatchSize
resetStateCounters()
return
}
// All blocks processed, no pending blocks.
count = blocks[skippedBlock] + blocks[failedBlock] + blocks[validBlock]
if count > 0 && blocks[pendingBlock] == 0 {
s.blockBatchSize = blockBatchSize
resetStateCounters()
return
}
}()
offset := blocks[pendingBlock] + blocks[skippedBlock] + blocks[failedBlock] + blocks[validBlock]
start := q.state.scheduler.currentSlot + offset + 1
count := mathutil.Min(q.state.scheduler.blockBatchSize, q.highestExpectedSlot-start+1)
if count <= 0 {
return errStartSlotIsTooHigh
}
ctx, _ = context.WithTimeout(ctx, queueFetchRequestTimeout)
if err := q.blocksFetcher.scheduleRequest(ctx, start, count); err != nil {
return err
}
q.state.scheduler.requestedBlocks[pendingBlock] += count
return nil
}
// parseFetchResponse processes incoming responses.
func (q *blocksQueue) parseFetchResponse(ctx context.Context, response *fetchRequestResponse) (uint64, error) {
q.state.sender.Lock()
defer q.state.sender.Unlock()
if ctx.Err() != nil {
return 0, ctx.Err()
}
if response.err != nil {
return 0, response.err
}
// Extract beacon blocks.
responseBlocks := make(map[uint64]*eth.SignedBeaconBlock, len(response.blocks))
for _, blk := range response.blocks {
responseBlocks[blk.Block.Slot] = blk
}
// Cache blocks in [start, start + count) range, include skipped blocks.
var skippedBlocks uint64
end := response.start + mathutil.Max(response.count, uint64(len(response.blocks)))
for slot := response.start; slot < end; slot++ {
if block, ok := responseBlocks[slot]; ok {
q.state.cachedBlocks[slot] = &cachedBlock{
SignedBeaconBlock: block,
}
delete(responseBlocks, slot)
continue
// onScheduleEvent is an event called on newly arrived epochs. Transforms state to scheduled.
func (q *blocksQueue) onScheduleEvent(ctx context.Context) eventHandlerFn {
return func(es *epochState, in interface{}) (stateID, error) {
data := in.(*fetchRequestParams)
start := data.start
count := mathutil.Min(data.count, q.highestExpectedSlot-start+1)
if count <= 0 {
return es.state, errStartSlotIsTooHigh
}
q.state.cachedBlocks[slot] = &cachedBlock{}
skippedBlocks++
}
// If there are any items left in incoming response, cache them too.
for slot, block := range responseBlocks {
q.state.cachedBlocks[slot] = &cachedBlock{
SignedBeaconBlock: block,
if err := q.blocksFetcher.scheduleRequest(ctx, start, count); err != nil {
return es.state, err
}
return stateScheduled, nil
}
return skippedBlocks, nil
}
// sendFetchedBlocks analyses available blocks, and sends them downstream in the correct slot order.
// Blocks are checked starting from the current head slot, for as long as the next consecutive block is available.
func (q *blocksQueue) sendFetchedBlocks(ctx context.Context) error {
q.state.sender.Lock()
defer q.state.sender.Unlock()
ctx, span := trace.StartSpan(ctx, "initialsync.sendFetchedBlocks")
defer span.End()
startSlot := q.headFetcher.HeadSlot() + 1
nonSkippedSlot := uint64(0)
for slot := startSlot; slot <= q.highestExpectedSlot; slot++ {
// onDataReceivedEvent is an event called when data is received from fetcher.
func (q *blocksQueue) onDataReceivedEvent(ctx context.Context) eventHandlerFn {
return func(es *epochState, in interface{}) (stateID, error) {
if ctx.Err() != nil {
return ctx.Err()
return es.state, ctx.Err()
}
blockData, ok := q.state.cachedBlocks[slot]
response := in.(*fetchRequestResponse)
epoch := helpers.SlotToEpoch(response.start)
if response.err != nil {
// Current window is already too big, re-request previous epochs.
if response.err == errStartSlotIsTooHigh {
for _, state := range q.state.epochs {
isSkipped := state.state == stateSkipped || state.state == stateSkippedExt
if state.epoch < epoch && isSkipped {
state.setState(stateNew)
}
}
}
return es.state, response.err
}
ind, ok := q.state.findEpochState(epoch)
if !ok {
break
}
if blockData.SignedBeaconBlock != nil && blockData.Block != nil {
select {
case <-ctx.Done():
return ctx.Err()
case q.fetchedBlocks <- blockData.SignedBeaconBlock:
}
nonSkippedSlot = slot
return es.state, errNoEpochState
}
q.state.epochs[ind].blocks = response.blocks
return stateDataParsed, nil
}
// Remove processed blocks.
if nonSkippedSlot > 0 {
for slot := range q.state.cachedBlocks {
if slot <= nonSkippedSlot {
delete(q.state.cachedBlocks, slot)
}
}
}
return nil
}
// incrementCounter increments a particular scheduler counter.
func (s *schedulerState) incrementCounter(counter blockState, n uint64) {
s.Lock()
defer s.Unlock()
// onReadyToSendEvent is an event called to allow epochs with available blocks to send them downstream.
func (q *blocksQueue) onReadyToSendEvent(ctx context.Context) eventHandlerFn {
return func(es *epochState, in interface{}) (stateID, error) {
if ctx.Err() != nil {
return es.state, ctx.Err()
}
// Assert that counter is within acceptable boundaries.
if counter < 1 || counter >= blockStateLen {
return
data := in.(*fetchRequestParams)
epoch := helpers.SlotToEpoch(data.start)
ind, ok := q.state.findEpochState(epoch)
if !ok {
return es.state, errNoEpochState
}
if len(q.state.epochs[ind].blocks) == 0 {
return stateSkipped, nil
}
send := func() (stateID, error) {
for _, block := range q.state.epochs[ind].blocks {
select {
case <-ctx.Done():
return es.state, ctx.Err()
case q.fetchedBlocks <- block:
}
}
return stateSent, nil
}
// Make sure that we send epochs in a correct order.
if q.state.isLowestEpochState(epoch) {
return send()
}
// Make sure that previous epoch is already processed.
for _, state := range q.state.epochs {
// Review only previous slots.
if state.epoch < epoch {
switch state.state {
case stateNew, stateScheduled, stateDataParsed:
return es.state, nil
default:
}
}
}
return send()
}
}
// onExtendWindowEvent is an event allowing handlers to extend the sliding window,
// in cases where progress is not possible otherwise.
func (q *blocksQueue) onExtendWindowEvent(ctx context.Context) eventHandlerFn {
return func(es *epochState, in interface{}) (stateID, error) {
if ctx.Err() != nil {
return es.state, ctx.Err()
}
data := in.(*fetchRequestParams)
epoch := helpers.SlotToEpoch(data.start)
if _, ok := q.state.findEpochState(epoch); !ok {
return es.state, errNoEpochState
}
// Only the highest epoch with skipped state can trigger extension.
highestEpochSlot, err := q.state.highestEpochSlot()
if err != nil {
return es.state, err
}
if highestEpochSlot != epoch {
return es.state, nil
}
// Check whether the window was expanded recently; if so, it is time to reset and re-request the same blocks.
resetWindow := false
for _, state := range q.state.epochs {
if state.state == stateSkippedExt {
resetWindow = true
break
}
}
if resetWindow {
for _, state := range q.state.epochs {
state.setState(stateNew)
}
return stateNew, nil
}
// Extend sliding window.
for i := 1; i <= lookaheadEpochs; i++ {
q.state.addEpochState(highestEpochSlot + uint64(i))
}
return stateSkippedExt, nil
}
}
// onCheckStaleEvent is an event that allows marking stale epochs,
// so that they can be re-processed.
func (q *blocksQueue) onCheckStaleEvent(ctx context.Context) eventHandlerFn {
return func(es *epochState, in interface{}) (stateID, error) {
if ctx.Err() != nil {
return es.state, ctx.Err()
}
if time.Since(es.updated) > staleEpochTimeout {
return stateSkipped, nil
}
return es.state, nil
}
n = mathutil.Min(s.requestedBlocks[pendingBlock], n)
s.requestedBlocks[counter] += n
s.requestedBlocks[pendingBlock] -= n
}
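The new fsm.go itself is not shown in this compare view; the sketch below is therefore an assumption, inferred only from the calls visible above (newStateMachine, addEpochState, addHandler, trigger), of how the per-epoch state machine is driven:

package initialsync

// exampleFSM is a hypothetical illustration, not code from this diff.
func exampleFSM() {
	sm := newStateMachine()
	sm.addEpochState(0)

	// A handler receives the epoch's state and an event payload, and
	// returns the state to transition into (or the current state and
	// an error, leaving the machine where it was).
	sm.addHandler(stateNew, eventSchedule, func(es *epochState, in interface{}) (stateID, error) {
		data := in.(*fetchRequestParams)
		_ = data // here: schedule a fetch for [data.start, data.start+data.count)
		return stateScheduled, nil
	})

	// trigger runs the handler registered for (current state, event)
	// on the given epoch and applies the returned transition.
	if err := sm.trigger(eventSchedule, 0, &fetchRequestParams{start: 1, count: 32}); err != nil {
		log.WithError(err).Debug("Can not trigger event")
	}
}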

View File

@@ -170,605 +170,6 @@ func TestBlocksQueueInitStartStop(t *testing.T) {
})
}
func TestBlocksQueueUpdateSchedulerState(t *testing.T) {
chainConfig := struct {
expectedBlockSlots []uint64
peers []*peerData
}{
expectedBlockSlots: makeSequence(1, 241),
peers: []*peerData{},
}
mc, _, beaconDB := initializeTestServices(t, chainConfig.expectedBlockSlots, chainConfig.peers)
defer dbtest.TeardownDB(t, beaconDB)
setupQueue := func(ctx context.Context) *blocksQueue {
queue := newBlocksQueue(ctx, &blocksQueueConfig{
blocksFetcher: &blocksProviderMock{},
headFetcher: mc,
highestExpectedSlot: uint64(len(chainConfig.expectedBlockSlots)),
})
return queue
}
t.Run("cancelled context", func(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
queue := setupQueue(ctx)
cancel()
if err := assertState(queue.state.scheduler, 0, 0, 0, 0); err != nil {
t.Error(err)
}
if err := queue.scheduleFetchRequests(ctx); err != ctx.Err() {
t.Errorf("expected error: %v", ctx.Err())
}
})
t.Run("empty state on pristine node", func(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
queue := setupQueue(ctx)
state := queue.state.scheduler
if err := assertState(queue.state.scheduler, 0, 0, 0, 0); err != nil {
t.Error(err)
}
if err := queue.scheduleFetchRequests(ctx); err != nil {
t.Error(err)
}
if state.currentSlot != 0 {
t.Errorf("invalid current slot, want: %v, got: %v", 0, state.currentSlot)
}
})
t.Run("empty state on pre-synced node", func(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
queue := setupQueue(ctx)
state := queue.state.scheduler
syncToSlot := uint64(7)
setBlocksFromCache(ctx, t, mc, syncToSlot)
if err := assertState(queue.state.scheduler, 0, 0, 0, 0); err != nil {
t.Error(err)
}
if err := queue.scheduleFetchRequests(ctx); err != nil {
t.Error(err)
}
if state.currentSlot != syncToSlot {
t.Errorf("invalid current slot, want: %v, got: %v", syncToSlot, state.currentSlot)
}
})
t.Run("reset block batch size to default", func(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
queue := setupQueue(ctx)
state := queue.state.scheduler
if err := assertState(queue.state.scheduler, 0, 0, 0, 0); err != nil {
t.Error(err)
}
// On enough valid blocks, batch size should get back to default value.
state.blockBatchSize *= 2
state.requestedBlocks[validBlock] = blockBatchSize
state.requestedBlocks[pendingBlock] = 13
state.requestedBlocks[skippedBlock] = 17
state.requestedBlocks[failedBlock] = 19
if err := assertState(queue.state.scheduler, 13, blockBatchSize, 17, 19); err != nil {
t.Error(err)
}
if state.blockBatchSize != 2*blockBatchSize {
t.Errorf("unexpected batch size, want: %v, got: %v", 2*blockBatchSize, state.blockBatchSize)
}
if err := queue.scheduleFetchRequests(ctx); err != nil {
t.Error(err)
}
if err := assertState(queue.state.scheduler, 13+state.blockBatchSize, 0, 17+blockBatchSize, 19); err != nil {
t.Error(err)
}
if state.blockBatchSize != blockBatchSize {
t.Errorf("unexpected batch size, want: %v, got: %v", blockBatchSize, state.blockBatchSize)
}
})
t.Run("increase block batch size on too many failures", func(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
queue := setupQueue(ctx)
state := queue.state.scheduler
if err := assertState(queue.state.scheduler, 0, 0, 0, 0); err != nil {
t.Error(err)
}
// On too many failures, batch size should get doubled and counters reset.
state.requestedBlocks[validBlock] = 19
state.requestedBlocks[pendingBlock] = 13
state.requestedBlocks[skippedBlock] = 17
state.requestedBlocks[failedBlock] = blockBatchSize
if err := assertState(queue.state.scheduler, 13, 19, 17, blockBatchSize); err != nil {
t.Error(err)
}
if state.blockBatchSize != blockBatchSize {
t.Errorf("unexpected batch size, want: %v, got: %v", blockBatchSize, state.blockBatchSize)
}
if err := queue.scheduleFetchRequests(ctx); err != nil {
t.Error(err)
}
if state.blockBatchSize != 2*blockBatchSize {
t.Errorf("unexpected batch size, want: %v, got: %v", 2*blockBatchSize, state.blockBatchSize)
}
if err := assertState(queue.state.scheduler, state.blockBatchSize, 0, 0, 0); err != nil {
t.Error(err)
}
})
t.Run("reset counters and block batch size on too many cached items", func(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
queue := setupQueue(ctx)
state := queue.state.scheduler
if err := assertState(queue.state.scheduler, 0, 0, 0, 0); err != nil {
t.Error(err)
}
// On too many cached items, batch size and counters should reset.
state.requestedBlocks[validBlock] = queueMaxCachedBlocks
state.requestedBlocks[pendingBlock] = 13
state.requestedBlocks[skippedBlock] = 17
state.requestedBlocks[failedBlock] = 19
if err := assertState(queue.state.scheduler, 13, queueMaxCachedBlocks, 17, 19); err != nil {
t.Error(err)
}
if state.blockBatchSize != blockBatchSize {
t.Errorf("unexpected batch size, want: %v, got: %v", blockBatchSize, state.blockBatchSize)
}
// This call should trigger resetting.
if err := queue.scheduleFetchRequests(ctx); err != nil {
t.Error(err)
}
if state.blockBatchSize != blockBatchSize {
t.Errorf("unexpected batch size, want: %v, got: %v", blockBatchSize, state.blockBatchSize)
}
if err := assertState(queue.state.scheduler, state.blockBatchSize, 0, 0, 0); err != nil {
t.Error(err)
}
})
t.Run("no pending blocks left", func(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
queue := setupQueue(ctx)
state := queue.state.scheduler
if err := assertState(queue.state.scheduler, 0, 0, 0, 0); err != nil {
t.Error(err)
}
// With no pending blocks left, batch size and counters should reset.
state.blockBatchSize = 2 * blockBatchSize
state.requestedBlocks[pendingBlock] = 0
state.requestedBlocks[validBlock] = 1
state.requestedBlocks[skippedBlock] = 1
state.requestedBlocks[failedBlock] = 1
if err := assertState(queue.state.scheduler, 0, 1, 1, 1); err != nil {
t.Error(err)
}
if state.blockBatchSize != 2*blockBatchSize {
t.Errorf("unexpected batch size, want: %v, got: %v", 2*blockBatchSize, state.blockBatchSize)
}
// This call should trigger resetting.
if err := queue.scheduleFetchRequests(ctx); err != nil {
t.Error(err)
}
if state.blockBatchSize != blockBatchSize {
t.Errorf("unexpected batch size, want: %v, got: %v", blockBatchSize, state.blockBatchSize)
}
if err := assertState(queue.state.scheduler, state.blockBatchSize, 0, 0, 0); err != nil {
t.Error(err)
}
})
}
func TestBlocksQueueScheduleFetchRequests(t *testing.T) {
chainConfig := struct {
expectedBlockSlots []uint64
peers []*peerData
}{
expectedBlockSlots: makeSequence(1, 241),
peers: []*peerData{
{
blocks: makeSequence(1, 320),
finalizedEpoch: 8,
headSlot: 320,
},
{
blocks: makeSequence(1, 320),
finalizedEpoch: 8,
headSlot: 320,
},
},
}
mc, _, beaconDB := initializeTestServices(t, chainConfig.expectedBlockSlots, chainConfig.peers)
defer dbtest.TeardownDB(t, beaconDB)
setupQueue := func(ctx context.Context) *blocksQueue {
queue := newBlocksQueue(ctx, &blocksQueueConfig{
blocksFetcher: &blocksProviderMock{},
headFetcher: mc,
highestExpectedSlot: uint64(len(chainConfig.expectedBlockSlots)),
})
return queue
}
t.Run("check start/count boundaries", func(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
queue := setupQueue(ctx)
state := queue.state.scheduler
// Move sliding window normally.
if err := assertState(state, 0, 0, 0, 0); err != nil {
t.Error(err)
}
end := queue.highestExpectedSlot / state.blockBatchSize
for i := uint64(0); i < end; i++ {
if err := queue.scheduleFetchRequests(ctx); err != nil {
t.Error(err)
}
if err := assertState(state, (i+1)*blockBatchSize, 0, 0, 0); err != nil {
t.Error(err)
}
}
// Make sure that the last request is up to highest expected slot.
if err := queue.scheduleFetchRequests(ctx); err != nil {
t.Error(err)
}
if err := assertState(state, queue.highestExpectedSlot, 0, 0, 0); err != nil {
t.Error(err)
}
// Try to schedule beyond the highest slot.
if err := queue.scheduleFetchRequests(ctx); err == nil {
t.Errorf("expected error: %v", errStartSlotIsTooHigh)
}
if err := assertState(state, queue.highestExpectedSlot, 0, 0, 0); err != nil {
t.Error(err)
}
})
t.Run("too many failures", func(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
queue := setupQueue(ctx)
state := queue.state.scheduler
// Schedule enough items.
if err := assertState(state, 0, 0, 0, 0); err != nil {
t.Error(err)
}
end := queue.highestExpectedSlot / state.blockBatchSize
for i := uint64(0); i < end; i++ {
if err := queue.scheduleFetchRequests(ctx); err != nil {
t.Error(err)
}
if err := assertState(state, (i+1)*blockBatchSize, 0, 0, 0); err != nil {
t.Error(err)
}
}
// "Process" some items and reschedule.
if err := assertState(state, end*blockBatchSize, 0, 0, 0); err != nil {
t.Error(err)
}
state.incrementCounter(failedBlock, 25)
if err := assertState(state, end*blockBatchSize-25, 0, 0, 25); err != nil {
t.Error(err)
}
state.incrementCounter(failedBlock, 500) // too high value shouldn't cause issues
if err := assertState(state, 0, 0, 0, end*blockBatchSize); err != nil {
t.Error(err)
}
// Due to failures, resetting is expected.
if err := queue.scheduleFetchRequests(ctx); err != nil {
t.Error(err)
}
if err := assertState(state, 2*blockBatchSize, 0, 0, 0); err != nil {
t.Error(err)
}
if state.blockBatchSize != 2*blockBatchSize {
t.Errorf("unexpeced block batch size, want: %v, got: %v", 2*blockBatchSize, state.blockBatchSize)
}
})
t.Run("too many skipped", func(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
queue := setupQueue(ctx)
state := queue.state.scheduler
// Schedule enough items.
if err := assertState(state, 0, 0, 0, 0); err != nil {
t.Error(err)
}
end := queue.highestExpectedSlot / state.blockBatchSize
for i := uint64(0); i < end; i++ {
if err := queue.scheduleFetchRequests(ctx); err != nil {
t.Error(err)
}
if err := assertState(state, (i+1)*blockBatchSize, 0, 0, 0); err != nil {
t.Error(err)
}
}
// "Process" some items and reschedule.
if err := assertState(state, end*blockBatchSize, 0, 0, 0); err != nil {
t.Error(err)
}
state.incrementCounter(skippedBlock, 25)
if err := assertState(state, end*blockBatchSize-25, 0, 25, 0); err != nil {
t.Error(err)
}
state.incrementCounter(skippedBlock, 500) // too high value shouldn't cause issues
if err := assertState(state, 0, 0, end*blockBatchSize, 0); err != nil {
t.Error(err)
}
// No pending items, resetting is expected (both counters and block batch size).
state.blockBatchSize = 2 * blockBatchSize
if err := queue.scheduleFetchRequests(ctx); err != nil {
t.Error(err)
}
if err := assertState(state, blockBatchSize, 0, 0, 0); err != nil {
t.Error(err)
}
if state.blockBatchSize != blockBatchSize {
t.Errorf("unexpeced block batch size, want: %v, got: %v", blockBatchSize, state.blockBatchSize)
}
})
t.Run("reset block batch size", func(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
queue := setupQueue(ctx)
state := queue.state.scheduler
state.requestedBlocks[failedBlock] = blockBatchSize
// Increase block batch size.
if err := queue.scheduleFetchRequests(ctx); err != nil {
t.Error(err)
}
if err := assertState(state, 2*blockBatchSize, 0, 0, 0); err != nil {
t.Error(err)
}
if state.blockBatchSize != 2*blockBatchSize {
t.Errorf("unexpeced block batch size, want: %v, got: %v", 2*blockBatchSize, state.blockBatchSize)
}
// Reset block batch size.
state.requestedBlocks[validBlock] = blockBatchSize
state.requestedBlocks[pendingBlock] = 1
state.requestedBlocks[failedBlock] = 1
state.requestedBlocks[skippedBlock] = 1
if err := assertState(state, 1, blockBatchSize, 1, 1); err != nil {
t.Error(err)
}
if err := queue.scheduleFetchRequests(ctx); err != nil {
t.Error(err)
}
if err := assertState(state, blockBatchSize+1, 0, blockBatchSize+1, 1); err != nil {
t.Error(err)
}
if state.blockBatchSize != blockBatchSize {
t.Errorf("unexpeced block batch size, want: %v, got: %v", blockBatchSize, state.blockBatchSize)
}
})
t.Run("overcrowded scheduler", func(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
queue := setupQueue(ctx)
state := queue.state.scheduler
state.requestedBlocks[pendingBlock] = queueMaxCachedBlocks
if err := queue.scheduleFetchRequests(ctx); err != nil {
t.Error(err)
}
if err := assertState(state, blockBatchSize, 0, 0, 0); err != nil {
t.Error(err)
}
if state.blockBatchSize != blockBatchSize {
t.Errorf("unexpeced block batch size, want: %v, got: %v", blockBatchSize, state.blockBatchSize)
}
})
}
func TestBlocksQueueParseFetchResponse(t *testing.T) {
chainConfig := struct {
expectedBlockSlots []uint64
peers []*peerData
}{
expectedBlockSlots: makeSequence(1, 2*blockBatchSize*queueMaxPendingRequests+31),
peers: []*peerData{
{
blocks: makeSequence(1, 320),
finalizedEpoch: 8,
headSlot: 320,
},
{
blocks: makeSequence(1, 320),
finalizedEpoch: 8,
headSlot: 320,
},
},
}
mc, _, beaconDB := initializeTestServices(t, chainConfig.expectedBlockSlots, chainConfig.peers)
defer dbtest.TeardownDB(t, beaconDB)
setupQueue := func(ctx context.Context) *blocksQueue {
queue := newBlocksQueue(ctx, &blocksQueueConfig{
blocksFetcher: &blocksProviderMock{},
headFetcher: mc,
highestExpectedSlot: uint64(len(chainConfig.expectedBlockSlots)),
})
return queue
}
var blocks []*eth.SignedBeaconBlock
for i := 1; i <= blockBatchSize; i++ {
blocks = append(blocks, &eth.SignedBeaconBlock{
Block: &eth.BeaconBlock{
Slot: uint64(i),
},
})
}
t.Run("response error", func(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
queue := setupQueue(ctx)
response := &fetchRequestResponse{
start: 1,
count: blockBatchSize,
blocks: blocks,
err: errStartSlotIsTooHigh,
}
if _, err := queue.parseFetchResponse(ctx, response); err != errStartSlotIsTooHigh {
t.Errorf("expected error not thrown, want: %v, got: %v", errStartSlotIsTooHigh, err)
}
})
t.Run("context error", func(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
queue := setupQueue(ctx)
cancel()
response := &fetchRequestResponse{
start: 1,
count: blockBatchSize,
blocks: blocks,
err: ctx.Err(),
}
if _, err := queue.parseFetchResponse(ctx, response); err != ctx.Err() {
t.Errorf("expected error not thrown, want: %v, got: %v", ctx.Err(), err)
}
})
t.Run("no skipped blocks", func(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
queue := setupQueue(ctx)
for i := uint64(1); i <= blockBatchSize; i++ {
if _, ok := queue.state.cachedBlocks[i]; ok {
t.Errorf("unexpeced block found: %v", i)
}
}
response := &fetchRequestResponse{
start: 1,
count: blockBatchSize,
blocks: blocks,
}
if _, err := queue.parseFetchResponse(ctx, response); err != nil {
t.Error(err)
}
// All blocks should be saved at this point.
for i := uint64(1); i <= blockBatchSize; i++ {
block, ok := queue.state.cachedBlocks[i]
if !ok {
t.Errorf("expeced block not found: %v", i)
}
if block.SignedBeaconBlock == nil {
t.Errorf("unexpectedly marked as skipped: %v", i)
}
}
})
t.Run("with skipped blocks", func(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
queue := setupQueue(ctx)
for i := uint64(1); i <= blockBatchSize; i++ {
if _, ok := queue.state.cachedBlocks[i]; ok {
t.Errorf("unexpeced block found: %v", i)
}
}
response := &fetchRequestResponse{
start: 1,
count: blockBatchSize,
blocks: blocks,
}
skipStart, skipEnd := uint64(5), uint64(15)
response.blocks = append(response.blocks[:skipStart], response.blocks[skipEnd:]...)
if _, err := queue.parseFetchResponse(ctx, response); err != nil {
t.Error(err)
}
for i := skipStart + 1; i <= skipEnd; i++ {
block, ok := queue.state.cachedBlocks[i]
if !ok {
t.Errorf("expeced block not found: %v", i)
}
if block.SignedBeaconBlock != nil {
t.Errorf("unexpectedly marked as not skipped: %v", i)
}
}
for i := uint64(1); i <= skipStart; i++ {
block, ok := queue.state.cachedBlocks[i]
if !ok {
t.Errorf("expeced block not found: %v", i)
}
if block.SignedBeaconBlock == nil {
t.Errorf("unexpectedly marked as skipped: %v", i)
}
}
for i := skipEnd + 1; i <= blockBatchSize; i++ {
block, ok := queue.state.cachedBlocks[i]
if !ok {
t.Errorf("expeced block not found: %v", i)
}
if block.SignedBeaconBlock == nil {
t.Errorf("unexpectedly marked as skipped: %v", i)
}
}
})
}
func TestBlocksQueueLoop(t *testing.T) {
tests := []struct {
name string
@@ -913,11 +314,9 @@ func TestBlocksQueueLoop(t *testing.T) {
var blocks []*eth.SignedBeaconBlock
for block := range queue.fetchedBlocks {
if err := processBlock(block); err != nil {
queue.state.scheduler.incrementCounter(failedBlock, 1)
continue
}
blocks = append(blocks, block)
queue.state.scheduler.incrementCounter(validBlock, 1)
}
if err := queue.stop(); err != nil {
@@ -968,13 +367,3 @@ func setBlocksFromCache(ctx context.Context, t *testing.T, mc *mock.ChainService
parentRoot = currRoot
}
}
func assertState(state *schedulerState, pending, valid, skipped, failed uint64) error {
s := state.requestedBlocks
res := s[pendingBlock] != pending || s[validBlock] != valid ||
s[skippedBlock] != skipped || s[failedBlock] != failed
if res {
b := struct{ pending, valid, skipped, failed uint64 }{pending, valid, skipped, failed}
return fmt.Errorf("invalid state, want: %+v, got: %+v", b, state.requestedBlocks)
}
return nil
}
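The assertions above pin down the scheduler's counter contract: incrementCounter(category, n) moves at most n requests out of the pending bucket into the given category, capped at whatever is actually pending (hence the 500-increment cases collapsing pending to zero). A minimal sketch consistent with those assertions — blockState stands in for the key type, which this excerpt does not show, and the real implementation's locking and layout may differ:

// incrementCounter moves up to n requests from the pending bucket into the
// given category (valid, skipped, or failed). Sketch only, reconstructed
// from the test assertions above.
func (s *schedulerState) incrementCounter(category blockState, n uint64) {
	if n > s.requestedBlocks[pendingBlock] {
		// Cap at the number of requests actually pending.
		n = s.requestedBlocks[pendingBlock]
	}
	s.requestedBlocks[pendingBlock] -= n
	s.requestedBlocks[category] += n
}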

View File

@@ -0,0 +1,231 @@
package initialsync
import (
"errors"
"fmt"
"time"
eth "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
)
const (
stateNew stateID = iota
stateScheduled
stateDataParsed
stateSkipped
stateSent
stateSkippedExt
stateComplete
)
const (
eventSchedule eventID = iota
eventDataReceived
eventReadyToSend
eventCheckStale
eventExtendWindow
)
// stateID is a unique handle for a state.
type stateID uint8
// eventID is a unique handle for an event.
type eventID uint8
// stateMachine is an FSM that allows easy state transitions:
// State(S) x Event(E) -> Actions (A), State(S').
type stateMachine struct {
epochs []*epochState
events map[eventID]*stateMachineEvent
}
// epochState holds state of a single epoch.
type epochState struct {
epoch uint64
state stateID
blocks []*eth.SignedBeaconBlock
updated time.Time
}
// stateMachineEvent is a container for event data.
type stateMachineEvent struct {
name eventID
actions []*stateMachineAction
}
// stateMachineAction represents a state action that can be attached to an event.
type stateMachineAction struct {
state stateID
handlerFn eventHandlerFn
}
// eventHandlerFn is an event handler function's signature.
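// A handler receives the epoch's current state plus opaque event data, and
// returns the state the epoch should transition to; returning an error aborts
// the surrounding trigger call.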
type eventHandlerFn func(*epochState, interface{}) (stateID, error)
// newStateMachine returns a fully initialized state machine.
func newStateMachine() *stateMachine {
return &stateMachine{
epochs: make([]*epochState, 0, lookaheadEpochs),
events: map[eventID]*stateMachineEvent{},
}
}
// addHandler attaches an event handler to a state event.
func (sm *stateMachine) addHandler(state stateID, event eventID, fn eventHandlerFn) *stateMachineEvent {
e, ok := sm.events[event]
if !ok {
e = &stateMachineEvent{
name: event,
}
sm.events[event] = e
}
action := &stateMachineAction{
state: state,
handlerFn: fn,
}
e.actions = append(e.actions, action)
return e
}
// trigger invokes the event on a given epoch's state machine.
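// Actions are evaluated in registration order against the epoch's current
// state, which is updated after every matching handler, so a single trigger
// call can cascade through several transitions (see the FSM tests).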
func (sm *stateMachine) trigger(name eventID, epoch uint64, data interface{}) error {
event, ok := sm.events[name]
if !ok {
return fmt.Errorf("event not found: %v", name)
}
ind, ok := sm.findEpochState(epoch)
if !ok {
return fmt.Errorf("state for %v epoch not found", epoch)
}
for _, action := range event.actions {
if action.state != sm.epochs[ind].state {
continue
}
state, err := action.handlerFn(sm.epochs[ind], data)
if err != nil {
return err
}
sm.epochs[ind].setState(state)
}
return nil
}
// addEpochState allocates memory for tracking epoch state.
func (sm *stateMachine) addEpochState(epoch uint64) {
state := &epochState{
epoch: epoch,
state: stateNew,
blocks: make([]*eth.SignedBeaconBlock, 0, allowedBlocksPerSecond),
updated: time.Now(),
}
sm.epochs = append(sm.epochs, state)
}
// removeEpochState frees memory of processed epoch.
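// Removal is a swap-remove: the last element overwrites the removed slot, so
// the order of sm.epochs is not preserved (callers always scan the full slice).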
func (sm *stateMachine) removeEpochState(epoch uint64) error {
ind, ok := sm.findEpochState(epoch)
if !ok {
return fmt.Errorf("state for %v epoch not found", epoch)
}
sm.epochs[ind].blocks = nil
sm.epochs[ind] = sm.epochs[len(sm.epochs)-1]
sm.epochs = sm.epochs[:len(sm.epochs)-1]
return nil
}
// findEpochState returns the index at which state.epoch == epoch, or len(epochs) if not found.
func (sm *stateMachine) findEpochState(epoch uint64) (int, bool) {
for i, state := range sm.epochs {
if epoch == state.epoch {
return i, true
}
}
return len(sm.epochs), false
}
// isLowestEpochState checks whether a given epoch is the lowest for which epoch state is tracked.
func (sm *stateMachine) isLowestEpochState(epoch uint64) bool {
if _, ok := sm.findEpochState(epoch); !ok {
return false
}
for _, state := range sm.epochs {
if epoch > state.epoch {
return false
}
}
return true
}
// highestEpochSlot returns the highest epoch for which state is tracked.
func (sm *stateMachine) highestEpochSlot() (uint64, error) {
if len(sm.epochs) == 0 {
return 0, errors.New("no epoch states exist")
}
highestEpochSlot := sm.epochs[0].epoch
for _, state := range sm.epochs {
if state.epoch > highestEpochSlot {
highestEpochSlot = state.epoch
}
}
return highestEpochSlot, nil
}
// String returns a human-readable representation of the state machine.
func (sm *stateMachine) String() string {
return fmt.Sprintf("%v", sm.epochs)
}
// String returns a human-readable representation of an epoch state.
func (es *epochState) String() string {
return fmt.Sprintf("%d:%s", es.epoch, es.state)
}
// String returns a human-readable representation of a state.
func (s stateID) String() (state string) {
switch s {
case stateNew:
state = "new"
case stateScheduled:
state = "scheduled"
case stateDataParsed:
state = "dataParsed"
case stateSkipped:
state = "skipped"
case stateSkippedExt:
state = "skippedExt"
case stateSent:
state = "sent"
case stateComplete:
state = "complete"
}
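// Unknown state IDs fall through and render as an empty string.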
return
}
// setState updates the current state of a given epoch.
func (es *epochState) setState(name stateID) {
if es.state == name {
return
}
es.updated = time.Now()
es.state = name
}
// String returns a human-readable representation of an event.
func (e eventID) String() (event string) {
switch e {
case eventSchedule:
event = "schedule"
case eventDataReceived:
event = "dataReceived"
case eventReadyToSend:
event = "readyToSend"
case eventCheckStale:
event = "checkStale"
case eventExtendWindow:
event = "extendWindow"
}
return
}
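Taken together, the pieces above compose as in this minimal usage sketch (hypothetical, not part of this changeset; since every identifier is unexported it would have to live inside the initialsync package, e.g. as a Go example test):

func ExampleStateMachine() {
	sm := newStateMachine()
	// Move any epoch still in stateNew to stateScheduled on eventSchedule.
	sm.addHandler(stateNew, eventSchedule, func(es *epochState, data interface{}) (stateID, error) {
		return stateScheduled, nil
	})
	sm.addEpochState(12)
	sm.addEpochState(13)
	// Only the targeted epoch transitions; epoch 13 stays in stateNew.
	if err := sm.trigger(eventSchedule, 12, nil); err != nil {
		fmt.Println(err)
	}
	fmt.Println(sm)
	// Output: [12:scheduled 13:new]
}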

View File

@@ -0,0 +1,270 @@
package initialsync
import (
"errors"
"fmt"
"testing"
)
func TestStateMachine_Stringify(t *testing.T) {
tests := []struct {
name string
epochs []*epochState
want string
}{
{
"empty epoch state list",
make([]*epochState, 0, lookaheadEpochs),
"[]",
},
{
"newly created state machine",
[]*epochState{
{epoch: 8, state: stateNew},
{epoch: 9, state: stateScheduled},
{epoch: 10, state: stateDataParsed},
{epoch: 11, state: stateSkipped},
{epoch: 12, state: stateSkippedExt},
{epoch: 13, state: stateComplete},
{epoch: 14, state: stateSent},
},
"[8:new 9:scheduled 10:dataParsed 11:skipped 12:skippedExt 13:complete 14:sent]",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
sm := &stateMachine{
epochs: tt.epochs,
}
if got := sm.String(); got != tt.want {
t.Errorf("unexpected output, got: %v, want: %v", got, tt.want)
}
})
}
}
func TestStateMachine_addHandler(t *testing.T) {
sm := newStateMachine()
sm.addHandler(stateNew, eventSchedule, func(state *epochState, i interface{}) (id stateID, err error) {
return stateScheduled, nil
})
if len(sm.events[eventSchedule].actions) != 1 {
t.Errorf("unexpected size, got: %v, want: %v", len(sm.events[eventSchedule].actions), 1)
}
state, err := sm.events[eventSchedule].actions[0].handlerFn(nil, nil)
if err != nil {
t.Error(err)
}
if state != stateScheduled {
t.Errorf("unexpected state, got: %v, want: %v", state, stateScheduled)
}
// Add second handler to the same event
sm.addHandler(stateSent, eventSchedule, func(state *epochState, i interface{}) (id stateID, err error) {
return stateDataParsed, nil
})
if len(sm.events[eventSchedule].actions) != 2 {
t.Errorf("unexpected size, got: %v, want: %v", len(sm.events[eventSchedule].actions), 2)
}
state, err = sm.events[eventSchedule].actions[1].handlerFn(nil, nil)
if err != nil {
t.Error(err)
}
if state != stateDataParsed {
t.Errorf("unexpected state, got: %v, want: %v", state, stateScheduled)
}
}
func TestStateMachine_trigger(t *testing.T) {
type event struct {
state stateID
event eventID
returnState stateID
err bool
}
type args struct {
name eventID
epoch uint64
data interface{}
returnState stateID
}
tests := []struct {
name string
events []event
epochs []uint64
args args
err error
}{
{
name: "event not found",
events: []event{},
epochs: []uint64{},
args: args{eventSchedule, 12, nil, stateNew},
err: fmt.Errorf("event not found: %v", eventSchedule),
},
{
name: "epoch not found",
events: []event{
{stateNew, eventSchedule, stateScheduled, false},
},
epochs: []uint64{},
args: args{eventSchedule, 12, nil, stateScheduled},
err: fmt.Errorf("state for %v epoch not found", 12),
},
{
name: "single action",
events: []event{
{stateNew, eventSchedule, stateScheduled, false},
},
epochs: []uint64{12, 13},
args: args{eventSchedule, 12, nil, stateScheduled},
err: nil,
},
{
name: "multiple actions, has error",
events: []event{
{stateNew, eventSchedule, stateScheduled, false},
{stateScheduled, eventSchedule, stateSent, true},
{stateSent, eventSchedule, stateComplete, false},
},
epochs: []uint64{12, 13},
args: args{eventSchedule, 12, nil, stateScheduled},
err: nil,
},
{
name: "multiple actions, no error, cascade",
events: []event{
{stateNew, eventSchedule, stateScheduled, false},
{stateScheduled, eventSchedule, stateSent, false},
{stateSent, eventSchedule, stateComplete, false},
},
epochs: []uint64{12, 13},
args: args{eventSchedule, 12, nil, stateComplete},
err: nil,
},
{
name: "multiple actions, no error, no cascade",
events: []event{
{stateNew, eventSchedule, stateScheduled, false},
{stateScheduled, eventSchedule, stateSent, false},
{stateNew, eventSchedule, stateComplete, false},
},
epochs: []uint64{12, 13},
args: args{eventSchedule, 12, nil, stateSent},
err: nil,
},
}
fn := func(e event) eventHandlerFn {
return func(es *epochState, in interface{}) (stateID, error) {
if e.err {
return es.state, errors.New("invalid")
}
return e.returnState, nil
}
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
sm := newStateMachine()
expectHandlerError := false
for _, event := range tt.events {
sm.addHandler(event.state, event.event, fn(event))
if event.err {
expectHandlerError = true
}
}
for _, epoch := range tt.epochs {
sm.addEpochState(epoch)
}
err := sm.trigger(tt.args.name, tt.args.epoch, tt.args.data)
if tt.err != nil && (err == nil || tt.err.Error() != err.Error()) {
t.Errorf("unexpected error = '%v', want '%v'", err, tt.err)
}
if tt.err == nil {
if err != nil && !expectHandlerError {
t.Error(err)
}
ind, ok := sm.findEpochState(tt.args.epoch)
if !ok {
t.Errorf("expected epoch not found: %v", tt.args.epoch)
return
}
if sm.epochs[ind].state != tt.args.returnState {
t.Errorf("unexpected final state: %v, want: %v (%v)", sm.epochs[ind].state, tt.args.returnState, sm.epochs)
}
}
})
}
}
func TestStateMachine_findEpochState(t *testing.T) {
sm := newStateMachine()
if ind, ok := sm.findEpochState(12); ok || ind != 0 {
t.Errorf("unexpected index: %v, want: %v", ind, 0)
}
sm.addEpochState(12)
if ind, ok := sm.findEpochState(12); !ok || ind != 0 {
t.Errorf("unexpected index: %v, want: %v", ind, 0)
}
sm.addEpochState(13)
sm.addEpochState(14)
sm.addEpochState(15)
if ind, ok := sm.findEpochState(14); !ok || ind != 2 {
t.Errorf("unexpected index: %v, want: %v", ind, 2)
}
if ind, ok := sm.findEpochState(16); ok || ind != len(sm.epochs) {
t.Errorf("unexpected index: %v, want: %v", ind, len(sm.epochs))
}
}
func TestStateMachine_isLowestEpochState(t *testing.T) {
sm := newStateMachine()
sm.addEpochState(12)
sm.addEpochState(13)
sm.addEpochState(14)
if res := sm.isLowestEpochState(15); res {
t.Errorf("unexpected lowest state: %v", 15)
}
if res := sm.isLowestEpochState(13); res {
t.Errorf("unexpected lowest state: %v", 15)
}
if res := sm.isLowestEpochState(12); !res {
t.Errorf("expected lowest state not found: %v", 12)
}
if err := sm.removeEpochState(12); err != nil {
t.Error(err)
}
if res := sm.isLowestEpochState(12); res {
t.Errorf("unexpected lowest state: %v", 12)
}
if res := sm.isLowestEpochState(13); !res {
t.Errorf("expected lowest state not found: %v", 13)
}
}
func TestStateMachine_highestEpochSlot(t *testing.T) {
sm := newStateMachine()
if _, err := sm.highestEpochSlot(); err == nil {
t.Error("expected error")
}
sm.addEpochState(12)
sm.addEpochState(13)
sm.addEpochState(14)
slot, err := sm.highestEpochSlot()
if err != nil {
t.Error(err)
}
if slot != 14 {
t.Errorf("incorrect highest slot: %v, want: %v", slot, 14)
}
if err := sm.removeEpochState(14); err != nil {
t.Error(err)
}
slot, err = sm.highestEpochSlot()
if err != nil {
t.Error(err)
}
if slot != 13 {
t.Errorf("incorrect highest slot: %v, want: %v", slot, 13)
}
}

View File

@@ -21,7 +21,7 @@ import (
"github.com/sirupsen/logrus"
)
const blockBatchSize = 64
const blockBatchSize = 32
const counterSeconds = 20
const refreshTime = 6 * time.Second
@@ -66,10 +66,8 @@ func (s *Service) roundRobinSync(genesis time.Time) error {
s.logSyncStatus(genesis, blk.Block, counter)
if err := s.processBlock(ctx, blk); err != nil {
log.WithError(err).Info("Block is invalid")
queue.state.scheduler.incrementCounter(failedBlock, 1)
continue
}
queue.state.scheduler.incrementCounter(validBlock, 1)
}
log.Debug("Synced to finalized epoch - now syncing blocks up to current head")
@@ -99,7 +97,7 @@ func (s *Service) roundRobinSync(genesis time.Time) error {
req := &p2ppb.BeaconBlocksByRangeRequest{
HeadBlockRoot: root,
StartSlot: s.chain.HeadSlot() + 1,
Count: mathutil.Min(helpers.SlotsSince(genesis)-s.chain.HeadSlot()+1, blockBatchSize),
Count: mathutil.Min(helpers.SlotsSince(genesis)-s.chain.HeadSlot()+1, allowedBlocksPerSecond),
Step: 1,
}
@@ -109,7 +107,8 @@ func (s *Service) roundRobinSync(genesis time.Time) error {
resp, err := s.requestBlocks(ctx, req, best)
if err != nil {
return err
log.WithError(err).Error("Failed to receive blocks, exiting init sync")
return nil
}
for _, blk := range resp {
@@ -199,7 +198,8 @@ func (s *Service) logSyncStatus(genesis time.Time, blk *eth.BeaconBlock, counter
}
func (s *Service) processBlock(ctx context.Context, blk *eth.SignedBeaconBlock) error {
if !s.db.HasBlock(ctx, bytesutil.ToBytes32(blk.Block.ParentRoot)) {
parentRoot := bytesutil.ToBytes32(blk.Block.ParentRoot)
if !s.db.HasBlock(ctx, parentRoot) && !s.chain.HasInitSyncBlock(parentRoot) {
return fmt.Errorf("beacon node doesn't have a block in db with root %#x", blk.Block.ParentRoot)
}
s.blockNotifier.BlockFeed().Send(&feed.Event{

View File

@@ -235,6 +235,26 @@ func TestRoundRobinSync(t *testing.T) {
finalizedEpoch: 4,
headSlot: 160,
},
{
blocks: makeSequence(1, 160),
finalizedEpoch: 4,
headSlot: 160,
},
{
blocks: makeSequence(1, 160),
finalizedEpoch: 4,
headSlot: 160,
},
{
blocks: makeSequence(1, 160),
finalizedEpoch: 4,
headSlot: 160,
},
{
blocks: makeSequence(1, 160),
finalizedEpoch: 4,
headSlot: 160,
},
},
},
}

View File

@@ -62,7 +62,7 @@ func (s *Service) processPendingAtts(ctx context.Context) error {
attestations := s.blkRootToPendingAtts[bRoot]
s.pendingAttsLock.RUnlock()
// Has the pending attestation's missing block arrived and the node processed block yet?
hasStateSummary := featureconfig.Get().NewStateMgmt && s.db.HasStateSummary(ctx, bRoot)
hasStateSummary := featureconfig.Get().NewStateMgmt && s.db.HasStateSummary(ctx, bRoot) || s.stateSummaryCache.Has(bRoot)
if s.db.HasBlock(ctx, bRoot) && (s.db.HasState(ctx, bRoot) || hasStateSummary) {
numberOfBlocksRecoveredFromAtt.Inc()
for _, att := range attestations {
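One subtlety in the updated condition above: in Go, && binds tighter than ||, so the state-summary-cache check applies regardless of the NewStateMgmt flag. The expression groups as in this equivalent restatement:

hasStateSummary := (featureconfig.Get().NewStateMgmt && s.db.HasStateSummary(ctx, bRoot)) ||
	s.stateSummaryCache.Has(bRoot)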

View File

@@ -11,6 +11,7 @@ import (
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/go-ssz"
mock "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
dbtest "github.com/prysmaticlabs/prysm/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/operations/attestations"
@@ -20,6 +21,7 @@ import (
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/attestationutil"
"github.com/prysmaticlabs/prysm/shared/bls"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/roughtime"
"github.com/prysmaticlabs/prysm/shared/testutil"
@@ -45,6 +47,7 @@ func TestProcessPendingAtts_NoBlockRequestBlock(t *testing.T) {
db: db,
chain: &mock.ChainService{Genesis: roughtime.Now()},
blkRootToPendingAtts: make(map[[32]byte][]*ethpb.AggregateAttestationAndProof),
stateSummaryCache: cache.NewStateSummaryCache(),
}
a := &ethpb.AggregateAttestationAndProof{Aggregate: &ethpb.Attestation{Data: &ethpb.AttestationData{Target: &ethpb.Checkpoint{}}}}
@@ -68,6 +71,7 @@ func TestProcessPendingAtts_HasBlockSaveUnAggregatedAtt(t *testing.T) {
chain: &mock.ChainService{Genesis: roughtime.Now()},
blkRootToPendingAtts: make(map[[32]byte][]*ethpb.AggregateAttestationAndProof),
attPool: attestations.NewPool(),
stateSummaryCache: cache.NewStateSummaryCache(),
}
a := &ethpb.AggregateAttestationAndProof{
@@ -119,8 +123,8 @@ func TestProcessPendingAtts_HasBlockSaveAggregatedAtt(t *testing.T) {
att := &ethpb.Attestation{
Data: &ethpb.AttestationData{
BeaconBlockRoot: root[:],
Source: &ethpb.Checkpoint{Epoch: 0, Root: []byte("hello-world")},
Target: &ethpb.Checkpoint{Epoch: 0, Root: []byte("hello-world")},
Source: &ethpb.Checkpoint{Epoch: 0, Root: bytesutil.PadTo([]byte("hello-world"), 32)},
Target: &ethpb.Checkpoint{Epoch: 0, Root: bytesutil.PadTo([]byte("hello-world"), 32)},
},
AggregationBits: aggBits,
}
@@ -174,6 +178,7 @@ func TestProcessPendingAtts_HasBlockSaveAggregatedAtt(t *testing.T) {
}},
blkRootToPendingAtts: make(map[[32]byte][]*ethpb.AggregateAttestationAndProof),
attPool: attestations.NewPool(),
stateSummaryCache: cache.NewStateSummaryCache(),
}
sb = &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{}}

View File

@@ -9,6 +9,7 @@ import (
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
blockfeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/block"
"github.com/prysmaticlabs/prysm/beacon-chain/core/feed/operation"
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
@@ -43,6 +44,7 @@ type Config struct {
StateNotifier statefeed.Notifier
BlockNotifier blockfeed.Notifier
AttestationNotifier operation.Notifier
StateSummaryCache *cache.StateSummaryCache
}
// This defines the interface for interacting with block chain service
@@ -74,6 +76,7 @@ func NewRegularSync(cfg *Config) *Service {
blkRootToPendingAtts: make(map[[32]byte][]*ethpb.AggregateAttestationAndProof),
stateNotifier: cfg.StateNotifier,
blockNotifier: cfg.BlockNotifier,
stateSummaryCache: cfg.StateSummaryCache,
blocksRateLimiter: leakybucket.NewCollector(allowedBlocksPerSecond, allowedBlocksBurst, false /* deleteEmptyBuckets */),
}
@@ -106,6 +109,7 @@ type Service struct {
blockNotifier blockfeed.Notifier
blocksRateLimiter *leakybucket.Collector
attestationNotifier operation.Notifier
stateSummaryCache *cache.StateSummaryCache
}
// Start the regular sync service.

Some files were not shown because too many files have changed in this diff.