Compare commits

...

36 Commits

Author SHA1 Message Date
Nishant Das
e077d3ddc9 Fix Incorrect Logging for IPV6 Addresses (#5204)
* fix ipv6 issues
* Merge branch 'master' into fixIPV6
* imports
* Merge branch 'fixIPV6' of https://github.com/prysmaticlabs/geth-sharding into fixIPV6
* Merge branch 'master' into fixIPV6
2020-03-25 17:19:11 +00:00
Preston Van Loon
2ad5cec56c Add gRPC headers flag support for validator client side (#5203)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-25 11:29:04 -05:00
terence tsao
6b1e60c404 Remove extra udp port log (#5205)
* Remove extra udp port log
* Merge branch 'master' into rm-log
2020-03-25 15:28:29 +00:00
terence tsao
48e984f526 Add HighestSlotBlocksBelow getter for db (#5195)
* Add HighestSlotBlockAt

* Start testing

* Apply fixes

* Typo

* Test

* Rename for clarity

* Use length

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-25 08:41:48 -05:00
Preston Van Loon
fbee94a0e9 Remove deprecated aggregator (#5200) 2020-03-25 06:14:21 -07:00
Preston Van Loon
9740245ca5 Add enable-state-field-trie for e2e (#5198)
* Add enable-state-field-trie for e2e
* Merge refs/heads/master into e2e-enable-state-field-trie
* Merge refs/heads/master into e2e-enable-state-field-trie
* fix all this
* Update shared/sliceutil/slice.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>
* terence's review
* comment
* Merge branch 'e2e-enable-state-field-trie' of https://github.com/prysmaticlabs/geth-sharding into e2e-enable-state-field-trie
2020-03-25 06:54:56 +00:00
Preston Van Loon
48d4a8655a Add ipv6 multiaddr support (#5199)
* Add ipv6 multiaddr support
* allow ipv6 for discv5
2020-03-25 04:03:51 +00:00
terence tsao
e15d92df06 Apply fixes to block slots methods in DB (#5194)
* Apply fixes
* Typo
* Merge refs/heads/master into fix-slots-saved-for-blocks
2020-03-25 03:05:20 +00:00
Preston Van Loon
729bd83734 Add span to HTR and skip slot cache (#5197)
* Add a bit more span data
* missing import
* Merge branch 'master' into more-spans
2020-03-25 01:15:00 +00:00
terence tsao
c63fb2cd44 Add HighestSlotState Getter for db (#5192) 2020-03-24 14:51:24 -07:00
terence tsao
78a865eb0b Replace boltdb imports with bbolt import (#5193)
* Replaced. Debugging missing strict dependencies...
* Merge branch 'master' into bbolt-import
* Update import path
* Merge branch 'bbolt-import' of github.com:prysmaticlabs/prysm into bbolt-import
* use forked prombbolt
* Merge branch 'bbolt-import' of github.com:prysmaticlabs/prysm into bbolt-import
* fix
* remove old boltdb reference
* Use correct bolt for pk manager
* Merge branch 'bbolt-import' of github.com:prysmaticlabs/prysm into bbolt-import
* fix for docker build
* gaz, oops
2020-03-24 20:00:54 +00:00
shayzluf
6e516dabf9 Setup Slasher RPC server (#5190)
* slasher rpc server

* fix comment

* fix comment

* remove server implementation from pr

* Apply suggestions from code review

* Gazelle

* Update slasher/rpc/service.go

Co-Authored-By: Ivan Martinez <ivanthegreatdev@gmail.com>

* Update slasher/detection/detect.go

* Update slasher/detection/detect.go

* Update slasher/detection/detect.go

* Update slasher/detection/detect.go

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Ivan Martinez <ivanthegreatdev@gmail.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-03-24 14:30:21 -04:00
Nishant Das
454e02ac4f Add Improvements to State Benchmarks (#5187)
* add improvements

* clean up
2020-03-24 09:16:07 -05:00
Preston Van Loon
35d74981a0 Correctly return attestation data for late requests (#5183)
* Add functionality to support attestation requests that are older than the head state

* lint

* gaz

* Handle nil state case

* handle underflow of first epoch

* Remove redundant and possibly wrong genesisTime struct field

* fix remaining tests

* gofmt

* remove debug comment

* use stategen.StateByRoot interchangeably with beaconDB.State

* gofmt

* goimports

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2020-03-23 21:30:28 -07:00
Ivan Martinez
d8bcd891c4 Change E2E config ports to be non-commonly used (#5184)
* Change config ports to be non-commonly used
2020-03-24 03:20:38 +00:00
terence tsao
2e0158d7c5 Add HighestSlotBlock Getter for db (#5182)
* Starting

* Update block getters in db

* New test

* New test for save blocks as well

* Delete blocks can clear bits tests

* Fmt
2020-03-23 18:42:41 -05:00
Ivan Martinez
bdb80f4639 Change ListAttestations to get attestations from blocks (#5145)
* Start fixing api to get from blocks

* Fix listatts tests

* Fix slasher

* Improve blocks

* Change att grouping

* Use faster att concat

* Try to fix e2e

* Change back time

* tiny e2e fix

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-03-23 16:22:37 -04:00
Preston Van Loon
0043fb0441 Shard //beacon-chain/core/state:go_default_test (#5180)
* Add shards to core state tests; some of the fuzzing tests can be very slow
* Merge refs/heads/master into shard-core-state-tests
2020-03-23 19:08:39 +00:00
Preston Van Loon
f520472fc6 Buildkite changes (#5178)
* Do not override jobs, dont print colors
* Merge branch 'master' of github.com:prysmaticlabs/prysm into buildkite-changes
* use composite flag for minimal downloads
* Add repository cache
* use hardlinks
* repository cache common
* query and build repo cache
2020-03-23 19:00:37 +00:00
Preston Van Loon
5241582ece Add CORS preflight support (#5177)
* Add CORS preflight support

* lint

* clarify description
2020-03-23 13:17:17 -05:00
Nishant Das
b0128ad894 Add Attestation Subnet Bitfield (#4989)
* bump bitfield dep

* add new methods

* get it working

* add nil check

* add check

* one more check

* add flag

* everything works local run

* add debug log

* more changes

* ensuring p2p interface works enough for tests to pass

* all tests pass

* include proper naming and comments to fix lint

* Apply suggestions from code review

* discover by peers

* cannot figure out why 0 peers

* remove keys

* fix test

* fix it

* fix again

* remove log

* change back

* gaz

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-03-23 09:41:47 -05:00
Preston Van Loon
5d1c3da85c BLS: some minor improvements (#5161)
* some improvements
* gofmt
* Merge refs/heads/master into bls-improvements
* Merge refs/heads/master into bls-improvements
* Merge refs/heads/master into bls-improvements
2020-03-22 23:40:39 +00:00
terence tsao
301c2a1448 New byteutils for state gen (#5163)
* New byteutils for state gen
* Added empty slice and nil checks
* Merge branch 'master' into bit-utils
* SetBit to extend bit
* Merge branch 'bit-utils' of github.com:prysmaticlabs/prysm into bit-utils
* Comment
* Add HighestBitIndexBelow
* Test for HighestBitIndexBelow
* another test and better test fail output
* Feedback
* Merge branch 'bit-utils' of github.com:prysmaticlabs/prysm into bit-utils
* Feedback
* Preston's feedback, thanks!
* Use var
* Use var
* Merge refs/heads/master into bit-utils
2020-03-22 23:19:38 +00:00
Ivan Martinez
bc3d673ea4 Parallelize E2E Testing (#5168)
* Begin cleanup for E2E

* Parallelize testing

* Add comments

* Add comment
2020-03-22 19:04:23 -04:00
Preston Van Loon
3d092d3eed Update go-bitfield (#5162)
* Update go-bitfield from https://github.com/prysmaticlabs/go-bitfield/pull/28
2020-03-22 04:51:06 +00:00
Preston Van Loon
4df5c042d9 Use faster bitfield BitIndices to build attesting indices (#5160)
* Refactor AttestingIndices to not return any error. Add tests. Add shortcut for fully attested attestation
* attestationutil.ConvertToIndexed never returned error either
* Working with benchmark:
* fix test
* Merge branch 'attestationutil-improvements-0' into attestationutil-improvements-1
* out of bounds check
* Update after merge of https://github.com/prysmaticlabs/go-bitfield/pull/26
* remove shortcut
* Merge refs/heads/attestationutil-improvements-0 into attestationutil-improvements-1
* Merge branch 'attestationutil-improvements-0' into attestationutil-improvements-1
* Merge branch 'attestationutil-improvements-1' of github.com:prysmaticlabs/prysm into attestationutil-improvements-1
* revert test...
* Merge refs/heads/attestationutil-improvements-0 into attestationutil-improvements-1
* Merge branch 'master' of github.com:prysmaticlabs/prysm into attestationutil-improvements-1
* Merge branch 'attestationutil-improvements-1' of github.com:prysmaticlabs/prysm into attestationutil-improvements-1
* Update go-bitfield after https://github.com/prysmaticlabs/go-bitfield/pull/27
2020-03-22 01:42:51 +00:00
Preston Van Loon
d06b0e8a86 Refactor attestationutil.AttestingIndices (#5159)
* Refactor AttestingIndices to not return any error. Add tests. Add shortcut for fully attested attestation
* attestationutil.ConvertToIndexed never returned error either
* fix test
* remove shortcut
* revert test...
2020-03-22 00:23:37 +00:00
Jim McDonald
4f8d9c59dd Replace default value for datadir (#5147) 2020-03-21 23:30:51 +08:00
Ivan Martinez
021d777b5e Add Anti-Flake test for E2E (#5149)
* Add antiflake test

* Respond to comments

* Comment

* Change issue num
2020-03-21 14:42:51 +08:00
terence tsao
dc3fb018fe Fix new state mgmt sync stuck in a loop (#5142) 2020-03-19 18:46:35 -07:00
Preston Van Loon
2ab4b86f9b Allow setting flags via yaml config file. (#4878) 2020-03-19 14:46:44 -07:00
Ivan Martinez
b30a089548 Add fetching validators by indices and public keys (#5141)
* update ethereumapis with patch
* Add indices and pubkeys to ListValidators request
* Add sorting
* Merge branch 'master' into validators-by-keys-indices
* Rename to index
* Merge branch 'validators-by-keys-indices' of https://github.com/prysmaticlabs/prysm into validators-by-keys-indices
* Add comment
2020-03-19 20:30:40 +00:00
Ivan Martinez
271938202e Improve validator logs (#5140)
* Improve validator logging

* Update validator/client/validator_log.go
2020-03-19 13:34:50 -04:00
shayzluf
6fe814c5aa double proposal detector (#5120)
* proposal detector

* comment fixes

* comment fixes

* raul feedback

* fix todo

* gaz

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2020-03-19 17:29:35 +05:30
Preston Van Loon
a9f4d1d02d Attestation: Add a check for overflow (#5136)
* Add a check for overflow
* gofmt beacon-chain/cache/committee_test.go
2020-03-19 04:41:05 +00:00
Preston Van Loon
7c110e54f0 Add ssz marshal and unmarshal for most data structures (#5121)
* Add ssz marshal and unmarshal for most data structures
* Merge refs/heads/master into ssz-stuff
* Merge refs/heads/master into ssz-stuff
* Merge refs/heads/master into ssz-stuff
* Merge refs/heads/master into ssz-stuff
* Merge refs/heads/master into ssz-stuff
* Merge refs/heads/master into ssz-stuff
* Update ferran SSZ
* Update ferran's SSZ
* Merge refs/heads/master into ssz-stuff
* fix tests
* Merge branch 'ssz-stuff' of github.com:prysmaticlabs/prysm into ssz-stuff
* gaz
2020-03-19 02:39:23 +00:00
234 changed files with 4359 additions and 1925 deletions

View File

@@ -20,15 +20,16 @@ build:remote-cache --strategy=Genrule=standalone
build:remote-cache --disk_cache=
build:remote-cache --host_platform_remote_properties_override='properties:{name:\"cache-silo-key\" value:\"prysm\"}'
build:remote-cache --remote_instance_name=projects/prysmaticlabs/instances/default_instance
build:remote-cache --experimental_remote_download_outputs=minimal
build:remote-cache --experimental_inmemory_jdeps_files
build:remote-cache --experimental_inmemory_dotd_files
build:remote-cache --remote_download_minimal
# Import workspace options.
import %workspace%/.bazelrc
startup --host_jvm_args=-Xmx1000m --host_jvm_args=-Xms1000m
query --repository_cache=/tmp/repositorycache
query --experimental_repository_cache_hardlinks
build --repository_cache=/tmp/repositorycache
build --experimental_repository_cache_hardlinks
build --experimental_strict_action_env
build --disk_cache=/tmp/bazelbuilds
build --experimental_multi_threaded_digest
@@ -36,13 +37,10 @@ build --sandbox_tmpfs_path=/tmp
build --verbose_failures
build --announce_rc
build --show_progress_rate_limit=5
build --curses=yes --color=yes
build --curses=yes --color=no
build --keep_going
build --test_output=errors
build --flaky_test_attempts=5
build --jobs=50
build --stamp
test --local_test_jobs=2
# Disabled race detection due to unstable test results in the constrained Buildkite environment
# build --features=race

WORKSPACE
View File

@@ -277,7 +277,7 @@ http_archive(
go_repository(
name = "com_github_ethereum_go_ethereum",
commit = "40beaeef26d5a2a0918dec2b960c2556c71a90a0",
commit = "861ae1b1875c17d86a6a5d68118708ab2b099658",
importpath = "github.com/ethereum/go-ethereum",
# Note: go-ethereum is not bazel-friendly with regards to cgo. We have a
# a fork that has resolved these issues by disabling HID/USB support and
@@ -292,12 +292,10 @@ go_repository(
name = "com_github_prysmaticlabs_go_ssz",
commit = "e24db4d9e9637cf88ee9e4a779e339a1686a84ee",
importpath = "github.com/prysmaticlabs/go-ssz",
)
go_repository(
name = "com_github_urfave_cli",
commit = "e6cf83ec39f6e1158ced1927d4ed14578fda8edb", # v1.21.0
importpath = "github.com/urfave/cli",
patch_args = ["-p1"],
patches = [
"//third_party:com_github_prysmaticlabs_go_ssz.patch",
],
)
go_repository(
@@ -756,14 +754,6 @@ go_repository(
importpath = "github.com/matttproud/golang_protobuf_extensions",
)
http_archive(
name = "com_github_boltdb_bolt", # v1.3.1
build_file = "//third_party:boltdb/bolt.BUILD",
sha256 = "95dc5842dab55f7519b7002bbec648321277b5d6f0ad59aab509ee59313b6386",
strip_prefix = "bolt-2f1ce7a837dcb8da3ec595b1dac9d0632f0f99e8",
urls = ["https://github.com/boltdb/bolt/archive/2f1ce7a837dcb8da3ec595b1dac9d0632f0f99e8.tar.gz"],
)
go_repository(
name = "com_github_pborman_uuid",
commit = "8b1b92947f46224e3b97bb1a3a5b0382be00d31e", # v1.2.0
@@ -908,6 +898,13 @@ go_repository(
importpath = "k8s.io/client-go",
)
go_repository(
name = "io_etcd_go_bbolt",
importpath = "go.etcd.io/bbolt",
sum = "h1:hi1bXHMVrlQh6WwxAy+qZCV/SYIlqo+Ushwdpa4tAKg=",
version = "v1.3.4",
)
go_repository(
name = "io_k8s_apimachinery",
build_file_proto_mode = "disable_global",
@@ -1192,7 +1189,7 @@ go_repository(
go_repository(
name = "com_github_prysmaticlabs_go_bitfield",
commit = "dbb55b15e92f897ee230360c8d9695e2f224b117",
commit = "62c2aee7166951c456888f92237aee4303ba1b9d",
importpath = "github.com/prysmaticlabs/go-bitfield",
)
@@ -1296,7 +1293,7 @@ go_repository(
go_repository(
name = "com_github_prysmaticlabs_ethereumapis",
commit = "fca4d6f69bedb8615c2fc916d1a68f2692285caa",
commit = "62fd1d2ec119bc93b0473fde17426c63a85197ed",
importpath = "github.com/prysmaticlabs/ethereumapis",
patch_args = ["-p1"],
patches = [
@@ -1331,13 +1328,6 @@ go_repository(
version = "v0.0.4",
)
go_repository(
name = "com_github_mdlayher_prombolt",
importpath = "github.com/mdlayher/prombolt",
sum = "h1:N257g6TTx0LxYoskSDFxvkSJ3NOZpy9IF1xQ7Gu+K8I=",
version = "v0.0.0-20161005185022-dfcf01d20ee9",
)
go_repository(
name = "com_github_minio_highwayhash",
importpath = "github.com/minio/highwayhash",
@@ -1375,13 +1365,6 @@ go_repository(
version = "v0.10.5",
)
go_repository(
name = "in_gopkg_urfave_cli_v1",
importpath = "gopkg.in/urfave/cli.v1",
sum = "h1:NdAVW6RYxDif9DhDHaAortIu956m2c0v+09AZBPTbE0=",
version = "v1.20.0",
)
go_repository(
name = "com_github_naoina_go_stringutil",
importpath = "github.com/naoina/go-stringutil",
@@ -1598,7 +1581,59 @@ go_repository(
go_repository(
name = "com_github_ferranbt_fastssz",
commit = "06015a5d84f9e4eefe2c21377ca678fa8f1a1b09",
importpath = "github.com/ferranbt/fastssz",
sum = "h1:oUQredbOIzWIMmeGR9dTLzSi4DqRVwxrPzSDiLJBp4Q=",
version = "v0.0.0-20200310214500-3283b9706406",
)
go_repository(
name = "com_github_burntsushi_toml",
importpath = "github.com/BurntSushi/toml",
sum = "h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=",
version = "v0.3.1",
)
go_repository(
name = "com_github_cpuguy83_go_md2man_v2",
importpath = "github.com/cpuguy83/go-md2man/v2",
sum = "h1:EoUDS0afbrsXAZ9YQ9jdu/mZ2sXgT1/2yyNng4PGlyM=",
version = "v2.0.0",
)
go_repository(
name = "com_github_russross_blackfriday_v2",
importpath = "github.com/russross/blackfriday/v2",
sum = "h1:lPqVAte+HuHNfhJ/0LC98ESWRz8afy9tM/0RK8m9o+Q=",
version = "v2.0.1",
)
go_repository(
name = "com_github_shurcool_sanitized_anchor_name",
importpath = "github.com/shurcooL/sanitized_anchor_name",
sum = "h1:PdmoCO6wvbs+7yrJyMORt4/BmY5IYyJwS/kOiWx8mHo=",
version = "v1.0.0",
)
go_repository(
name = "in_gopkg_urfave_cli_v2",
importpath = "gopkg.in/urfave/cli.v2",
sum = "h1:OvXt/p4cdwNl+mwcWMq/AxaKFkhdxcjx+tx+qf4EOvY=",
version = "v2.0.0-20190806201727-b62605953717",
)
go_repository(
name = "in_gopkg_urfave_cli_v1",
importpath = "gopkg.in/urfave/cli.v1",
sum = "h1:NdAVW6RYxDif9DhDHaAortIu956m2c0v+09AZBPTbE0=",
version = "v1.20.0",
)
go_repository(
name = "com_github_prysmaticlabs_prombbolt",
importpath = "github.com/prysmaticlabs/prombbolt",
sum = "h1:bVD46NhbqEE6bsIqj42TCS3ELUdumti3WfAw9DXNtkg=",
version = "v0.0.0-20200324184628-09789ef63796",
)
load("@com_github_prysmaticlabs_prombbolt//:repositories.bzl", "prombbolt_dependencies")
prombbolt_dependencies()
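The dependency changes above, swapping github.com/boltdb/bolt for go.etcd.io/bbolt and wiring in the forked prombbolt, are what the kv-package diffs below consume. A minimal sketch of the new import path in application code, assuming only the stock bbolt API (bbolt is API-compatible with the archived boltdb/bolt):

package main

import (
	"log"
	"time"

	bolt "go.etcd.io/bbolt" // was: "github.com/boltdb/bolt"
)

func main() {
	// bbolt preserves the original bolt API, so callers only change the import path.
	db, err := bolt.Open("example.db", 0600, &bolt.Options{Timeout: time.Second})
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}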

View File

@@ -23,9 +23,10 @@ go_library(
"@com_github_ipfs_go_log//:go_default_library",
"@com_github_joonix_log//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_urfave_cli//:go_default_library",
"@com_github_whyrusleeping_go_logging//:go_default_library",
"@com_github_x_cray_logrus_prefixed_formatter//:go_default_library",
"@in_gopkg_urfave_cli_v2//:go_default_library",
"@in_gopkg_urfave_cli_v2//altsrc:go_default_library",
"@org_uber_go_automaxprocs//:go_default_library",
],
)
@@ -55,9 +56,10 @@ go_image(
"@com_github_ipfs_go_log//:go_default_library",
"@com_github_joonix_log//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_urfave_cli//:go_default_library",
"@com_github_whyrusleeping_go_logging//:go_default_library",
"@com_github_x_cray_logrus_prefixed_formatter//:go_default_library",
"@in_gopkg_urfave_cli_v2//:go_default_library",
"@in_gopkg_urfave_cli_v2//altsrc:go_default_library",
"@org_uber_go_automaxprocs//:go_default_library",
],
)
@@ -111,7 +113,10 @@ go_test(
size = "small",
srcs = ["usage_test.go"],
embed = [":go_default_library"],
deps = ["@com_github_urfave_cli//:go_default_library"],
deps = [
"//shared/featureconfig:go_default_library",
"@in_gopkg_urfave_cli_v2//:go_default_library",
],
)
[go_binary(

View File

@@ -45,6 +45,7 @@ go_library(
"//shared/bytesutil:go_default_library",
"//shared/featureconfig:go_default_library",
"//shared/params:go_default_library",
"//shared/roughtime:go_default_library",
"//shared/slotutil:go_default_library",
"//shared/traceutil:go_default_library",
"@com_github_emicklei_dot//:go_default_library",

View File

@@ -110,7 +110,7 @@ func (s *Service) verifyBeaconBlock(ctx context.Context, data *ethpb.Attestation
return fmt.Errorf("beacon block %#x does not exist", bytesutil.Trunc(data.BeaconBlockRoot))
}
if b.Block.Slot > data.Slot {
return fmt.Errorf("could not process attestation for future block, %d > %d", b.Block.Slot, data.Slot)
return fmt.Errorf("could not process attestation for future block, block.Slot=%d > attestation.Data.Slot=%d", b.Block.Slot, data.Slot)
}
return nil
}
@@ -121,11 +121,7 @@ func (s *Service) verifyAttestation(ctx context.Context, baseState *stateTrie.Be
if err != nil {
return nil, err
}
indexedAtt, err := attestationutil.ConvertToIndexed(ctx, a, committee)
if err != nil {
return nil, errors.Wrap(err, "could not convert attestation to indexed attestation")
}
indexedAtt := attestationutil.ConvertToIndexed(ctx, a, committee)
if err := blocks.VerifyIndexedAttestation(ctx, baseState, indexedAtt); err != nil {
if err == blocks.ErrSigFailedToVerify {
// When sig fails to verify, check if there's a difference in committees due to
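Both hunks in this file follow the #5159/#5160 refactor: attestationutil.AttestingIndices and attestationutil.ConvertToIndexed no longer return an error, because the indices are now built directly from the bitfield's set-bit positions. A hedged sketch of that construction (a standalone illustration, not a verbatim copy of the helper):

package main

import (
	"fmt"

	"github.com/prysmaticlabs/go-bitfield"
)

// attestingIndices selects the committee member at each set-bit position of
// the aggregation bitfield. Nothing here can fail, which is why the refactor
// drops the error return.
func attestingIndices(bits bitfield.Bitlist, committee []uint64) []uint64 {
	indices := make([]uint64, 0, len(committee))
	for _, idx := range bits.BitIndices() { // positions of the set bits
		if idx < len(committee) {
			indices = append(indices, committee[idx])
		}
	}
	return indices
}

func main() {
	bits := bitfield.NewBitlist(4)
	bits.SetBitAt(1, true)
	bits.SetBitAt(3, true)
	fmt.Println(attestingIndices(bits, []uint64{11, 22, 33, 44})) // [22 44]
}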

View File

@@ -349,10 +349,7 @@ func (s *Service) insertBlockToForkChoiceStore(ctx context.Context, blk *ethpb.B
if err != nil {
return err
}
indices, err := attestationutil.AttestingIndices(a.AggregationBits, committee)
if err != nil {
return err
}
indices := attestationutil.AttestingIndices(a.AggregationBits, committee)
s.forkChoiceStore.ProcessAttestation(ctx, indices, bytesutil.ToBytes32(a.Data.BeaconBlockRoot), a.Data.Target.Epoch)
}

View File

@@ -4,7 +4,6 @@ import (
"bytes"
"context"
"fmt"
"time"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
@@ -15,6 +14,7 @@ import (
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/roughtime"
"github.com/prysmaticlabs/prysm/shared/traceutil"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
@@ -22,7 +22,7 @@ import (
// CurrentSlot returns the current slot based on time.
func (s *Service) CurrentSlot() uint64 {
return uint64(time.Now().Unix()-s.genesisTime.Unix()) / params.BeaconConfig().SecondsPerSlot
return uint64(roughtime.Now().Unix()-s.genesisTime.Unix()) / params.BeaconConfig().SecondsPerSlot
}
// getBlockPreState returns the pre state of an incoming block. It uses the parent root of the block

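The CurrentSlot change above swaps time.Now for roughtime.Now so that slot arithmetic tolerates local clock drift. A worked sketch of the computation with hypothetical timestamps (mainnet SecondsPerSlot is 12):

// genesis at t = 0s, roughtime reports now = t + 150s:
//   (150 - 0) / 12 = 12, so CurrentSlot() returns slot 12.
// Integer division truncates, so the result stays 12 until t + 156s.
slot := uint64(roughtime.Now().Unix()-s.genesisTime.Unix()) / params.BeaconConfig().SecondsPerSlot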
View File

@@ -135,10 +135,18 @@ func (s *Service) Start() {
if err != nil {
log.Fatalf("Could not fetch finalized cp: %v", err)
}
if beaconState == nil {
beaconState, err = s.beaconDB.State(ctx, bytesutil.ToBytes32(cp.Root))
if err != nil {
log.Fatalf("Could not fetch beacon state: %v", err)
if featureconfig.Get().NewStateMgmt {
beaconState, err = s.stateGen.StateByRoot(ctx, bytesutil.ToBytes32(cp.Root))
if err != nil {
log.Fatalf("Could not fetch beacon state: %v", err)
}
} else {
beaconState, err = s.beaconDB.State(ctx, bytesutil.ToBytes32(cp.Root))
if err != nil {
log.Fatalf("Could not fetch beacon state: %v", err)
}
}
}
@@ -306,7 +314,7 @@ func (s *Service) saveGenesisValidators(ctx context.Context, state *stateTrie.Be
// This gets called when beacon chain is first initialized to save genesis data (state, block, and more) in db.
func (s *Service) saveGenesisData(ctx context.Context, genesisState *stateTrie.BeaconState) error {
stateRoot, err := genesisState.HashTreeRoot()
stateRoot, err := genesisState.HashTreeRoot(ctx)
if err != nil {
return err
}
@@ -435,7 +443,6 @@ func (s *Service) initializeChainInfo(ctx context.Context) error {
if finalizedState == nil || finalizedBlock == nil {
return errors.New("finalized state and block can't be nil")
}
s.setHead(finalizedRoot, finalizedBlock, finalizedState)
return nil

View File

@@ -219,7 +219,7 @@ func (ms *ChainService) GenesisTime() time.Time {
// CurrentSlot mocks the same method in the chain service.
func (ms *ChainService) CurrentSlot() uint64 {
return ms.HeadSlot()
return uint64(time.Now().Unix()-ms.Genesis.Unix()) / params.BeaconConfig().SecondsPerSlot
}
// Participation mocks the same method in the chain service.

View File

@@ -25,6 +25,7 @@ go_library(
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@io_k8s_client_go//tools/cache:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
)

View File

@@ -96,7 +96,7 @@ func (c *CommitteeCache) Committee(slot uint64, seed [32]byte, index uint64) ([]
indexOffSet := index + (slot%params.BeaconConfig().SlotsPerEpoch)*committeeCountPerSlot
start, end := startEndIndices(item, indexOffSet)
if int(end) > len(item.ShuffledIndices) {
if int(end) > len(item.ShuffledIndices) || end < start {
return nil, errors.New("requested index out of bound")
}
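The added end < start guard covers uint64 wraparound, which the length check alone cannot catch: with a huge requested index (as in the new TestCommitteeCacheOutOfRange below), the computed offsets overflow and end lands numerically below start. A minimal demonstration of the failure mode:

package main

import (
	"fmt"
	"math"
)

func main() {
	// uint64 arithmetic wraps silently, so a huge requested committee index
	// can push "end" past math.MaxUint64 and land below "start".
	var start uint64 = math.MaxUint64
	end := start + 2         // wraps around to 1
	fmt.Println(end < start) // true: exactly the case the new guard rejects
}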

View File

@@ -1,6 +1,7 @@
package cache
import (
"math"
"reflect"
"sort"
"strconv"
@@ -172,3 +173,19 @@ func TestCommitteeCache_CanRotate(t *testing.T) {
t.Error("incorrect key received for slot 199")
}
}
func TestCommitteeCacheOutOfRange(t *testing.T) {
cache := NewCommitteesCache()
seed := bytesutil.ToBytes32([]byte("foo"))
cache.CommitteeCache.Add(&Committees{
CommitteeCount: 1,
Seed: seed,
ShuffledIndices: []uint64{0},
SortedIndices: []uint64{},
ProposerIndices: []uint64{},
})
_, err := cache.Committee(0, seed, math.MaxUint64) // Overflow!
if err == nil {
t.Fatal("Did not fail as expected")
}
}

View File

@@ -11,6 +11,7 @@ import (
"github.com/prometheus/client_golang/prometheus/promauto"
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"go.opencensus.io/trace"
)
var (
@@ -47,6 +48,8 @@ func NewSkipSlotCache() *SkipSlotCache {
// Get waits for any in progress calculation to complete before returning a
// cached response, if any.
func (c *SkipSlotCache) Get(ctx context.Context, slot uint64) (*stateTrie.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "skipSlotCache.Get")
defer span.End()
if !featureconfig.Get().EnableSkipSlotsCache {
// Return a miss result if cache is not enabled.
skipSlotCacheMiss.Inc()
@@ -57,6 +60,7 @@ func (c *SkipSlotCache) Get(ctx context.Context, slot uint64) (*stateTrie.Beacon
// Another identical request may be in progress already. Let's wait until
// any in progress request resolves or our timeout is exceeded.
inProgress := false
for {
if ctx.Err() != nil {
return nil, ctx.Err()
@@ -67,6 +71,7 @@ func (c *SkipSlotCache) Get(ctx context.Context, slot uint64) (*stateTrie.Beacon
c.lock.RUnlock()
break
}
inProgress = true
c.lock.RUnlock()
// This increasing backoff is to decrease the CPU cycles while waiting
@@ -75,14 +80,17 @@ func (c *SkipSlotCache) Get(ctx context.Context, slot uint64) (*stateTrie.Beacon
delay *= delayFactor
delay = math.Min(delay, maxDelay)
}
span.AddAttributes(trace.BoolAttribute("inProgress", inProgress))
item, exists := c.cache.Get(slot)
if exists && item != nil {
skipSlotCacheHit.Inc()
span.AddAttributes(trace.BoolAttribute("hit", true))
return item.(*stateTrie.BeaconState).Copy(), nil
}
skipSlotCacheMiss.Inc()
span.AddAttributes(trace.BoolAttribute("hit", false))
return nil, nil
}
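The Get path above spins on an in-progress computation with an exponentially increasing delay before re-checking the cache, and the new span records whether a wait happened and whether the lookup hit. A self-contained sketch of that wait loop, with assumed delay constants (the real values live elsewhere in this file):

package main

import (
	"context"
	"math"
	"time"
)

// waitWithBackoff polls check() with a growing delay, mirroring the loop in
// SkipSlotCache.Get. The constants are assumptions for illustration.
func waitWithBackoff(ctx context.Context, check func() bool) error {
	delay := 10.0           // starting delay in milliseconds (assumed)
	const delayFactor = 1.1 // growth per iteration (assumed)
	const maxDelay = 1000.0 // cap in milliseconds (assumed)
	for {
		if ctx.Err() != nil {
			return ctx.Err()
		}
		if check() {
			return nil
		}
		time.Sleep(time.Duration(delay) * time.Millisecond)
		delay = math.Min(delay*delayFactor, maxDelay)
	}
}

func main() {
	deadline := time.Now().Add(50 * time.Millisecond)
	_ = waitWithBackoff(context.Background(), func() bool {
		return time.Now().After(deadline) // stand-in for "computation finished"
	})
}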

View File

@@ -856,10 +856,7 @@ func VerifyAttestation(ctx context.Context, beaconState *stateTrie.BeaconState,
if err != nil {
return err
}
indexedAtt, err := attestationutil.ConvertToIndexed(ctx, att, committee)
if err != nil {
return errors.Wrap(err, "could not convert to indexed attestation")
}
indexedAtt := attestationutil.ConvertToIndexed(ctx, att, committee)
return VerifyIndexedAttestation(ctx, beaconState, indexedAtt)
}

View File

@@ -943,7 +943,7 @@ func TestProcessAttestations_OK(t *testing.T) {
if err != nil {
t.Error(err)
}
attestingIndices, err := attestationutil.AttestingIndices(att.AggregationBits, committee)
attestingIndices := attestationutil.AttestingIndices(att.AggregationBits, committee)
if err != nil {
t.Error(err)
}
@@ -1004,7 +1004,7 @@ func TestProcessAggregatedAttestation_OverlappingBits(t *testing.T) {
if err != nil {
t.Error(err)
}
attestingIndices1, err := attestationutil.AttestingIndices(att1.AggregationBits, committee)
attestingIndices1 := attestationutil.AttestingIndices(att1.AggregationBits, committee)
if err != nil {
t.Fatal(err)
}
@@ -1032,7 +1032,7 @@ func TestProcessAggregatedAttestation_OverlappingBits(t *testing.T) {
if err != nil {
t.Error(err)
}
attestingIndices2, err := attestationutil.AttestingIndices(att2.AggregationBits, committee)
attestingIndices2 := attestationutil.AttestingIndices(att2.AggregationBits, committee)
if err != nil {
t.Fatal(err)
}
@@ -1082,7 +1082,7 @@ func TestProcessAggregatedAttestation_NoOverlappingBits(t *testing.T) {
if err != nil {
t.Error(err)
}
attestingIndices1, err := attestationutil.AttestingIndices(att1.AggregationBits, committee)
attestingIndices1 := attestationutil.AttestingIndices(att1.AggregationBits, committee)
if err != nil {
t.Fatal(err)
}
@@ -1109,7 +1109,7 @@ func TestProcessAggregatedAttestation_NoOverlappingBits(t *testing.T) {
if err != nil {
t.Error(err)
}
attestingIndices2, err := attestationutil.AttestingIndices(att2.AggregationBits, committee)
attestingIndices2 := attestationutil.AttestingIndices(att2.AggregationBits, committee)
if err != nil {
t.Fatal(err)
}
@@ -1240,10 +1240,7 @@ func TestConvertToIndexed_OK(t *testing.T) {
if err != nil {
t.Error(err)
}
ia, err := attestationutil.ConvertToIndexed(context.Background(), attestation, committee)
if err != nil {
t.Errorf("failed to convert attestation to indexed attestation: %v", err)
}
ia := attestationutil.ConvertToIndexed(context.Background(), attestation, committee)
if !reflect.DeepEqual(wanted, ia) {
diff, _ := messagediff.PrettyDiff(ia, wanted)
t.Log(diff)

View File

@@ -336,10 +336,7 @@ func unslashedAttestingIndices(state *stateTrie.BeaconState, atts []*pb.PendingA
if err != nil {
return nil, err
}
attestingIndices, err := attestationutil.AttestingIndices(att.AggregationBits, committee)
if err != nil {
return nil, errors.Wrap(err, "could not get attester indices")
}
attestingIndices := attestationutil.AttestingIndices(att.AggregationBits, committee)
// Create a set for attesting indices
set := make([]uint64, 0, len(attestingIndices))
for _, index := range attestingIndices {

View File

@@ -47,10 +47,7 @@ func ProcessAttestations(
if err != nil {
return nil, nil, err
}
indices, err := attestationutil.AttestingIndices(a.AggregationBits, committee)
if err != nil {
return nil, nil, err
}
indices := attestationutil.AttestingIndices(a.AggregationBits, committee)
vp = UpdateValidator(vp, v, indices, a, a.Data.Slot)
}

View File

@@ -219,7 +219,7 @@ func TestProcessAttestations(t *testing.T) {
if err != nil {
t.Error(err)
}
indices, _ := attestationutil.AttestingIndices(att1.AggregationBits, committee)
indices := attestationutil.AttestingIndices(att1.AggregationBits, committee)
for _, i := range indices {
if !vp[i].IsPrevEpochAttester {
t.Error("Not a prev epoch attester")
@@ -229,7 +229,7 @@ func TestProcessAttestations(t *testing.T) {
if err != nil {
t.Error(err)
}
indices, _ = attestationutil.AttestingIndices(att2.AggregationBits, committee)
indices = attestationutil.AttestingIndices(att2.AggregationBits, committee)
for _, i := range indices {
if !vp[i].IsPrevEpochAttester {
t.Error("Not a prev epoch attester")

View File

@@ -134,7 +134,7 @@ func TestAttestationParticipants_NoCommitteeCache(t *testing.T) {
if err != nil {
t.Error(err)
}
result, err := attestationutil.AttestingIndices(tt.bitfield, committee)
result := attestationutil.AttestingIndices(tt.bitfield, committee)
if err != nil {
t.Errorf("Failed to get attestation participants: %v", err)
}
@@ -167,7 +167,7 @@ func TestAttestationParticipants_EmptyBitfield(t *testing.T) {
if err != nil {
t.Fatal(err)
}
indices, err := attestationutil.AttestingIndices(bitfield.NewBitlist(128), committee)
indices := attestationutil.AttestingIndices(bitfield.NewBitlist(128), committee)
if err != nil {
t.Fatalf("attesting indices failed: %v", err)
}

View File

@@ -42,7 +42,6 @@ go_test(
name = "go_default_test",
size = "small",
srcs = [
"benchmarks_test.go",
"skip_slot_cache_test.go",
"state_fuzz_test.go",
"state_test.go",
@@ -51,13 +50,13 @@ go_test(
],
data = ["//shared/benchutil/benchmark_files:benchmark_data"],
embed = [":go_default_library"],
shard_count = 3,
deps = [
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/state:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/attestationutil:go_default_library",
"//shared/benchutil:go_default_library",
"//shared/bls:go_default_library",
"//shared/featureconfig:go_default_library",
"//shared/hashutil:go_default_library",
@@ -72,3 +71,31 @@ go_test(
"@com_github_sirupsen_logrus//:go_default_library",
],
)
go_test(
name = "go_benchmark_test",
size = "large",
srcs = ["benchmarks_test.go"],
args = [
"-test.bench=.",
"-test.benchmem",
"-test.v",
],
local = True,
tags = [
"benchmark",
"manual",
"no-cache",
],
deps = [
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/state:go_default_library",
"//beacon-chain/state:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/benchutil:go_default_library",
"//shared/params:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
],
)
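A usage note: the manual tag keeps go_benchmark_test out of wildcard patterns such as //..., so the benchmarks only run when named explicitly (for example, bazel test //beacon-chain/core/state:go_benchmark_test), while local = True and no-cache force a fresh run on the local machine so the timings are meaningful.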

View File

@@ -1,12 +1,15 @@
package state
package state_benchmark_test
import (
"context"
"testing"
"github.com/gogo/protobuf/proto"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/beacon-chain/core/state"
beaconstate "github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/benchutil"
"github.com/prysmaticlabs/prysm/shared/params"
)
@@ -24,7 +27,7 @@ func TestBenchmarkExecuteStateTransition(t *testing.T) {
t.Fatal(err)
}
if _, err := ExecuteStateTransition(context.Background(), beaconState, block); err != nil {
if _, err := state.ExecuteStateTransition(context.Background(), beaconState, block); err != nil {
t.Fatalf("failed to process block, benchmarks will fail: %v", err)
}
}
@@ -44,7 +47,7 @@ func BenchmarkExecuteStateTransition_FullBlock(b *testing.B) {
b.N = runAmount
b.ResetTimer()
for i := 0; i < b.N; i++ {
if _, err := ExecuteStateTransition(context.Background(), cleanStates[i], block); err != nil {
if _, err := state.ExecuteStateTransition(context.Background(), cleanStates[i], block); err != nil {
b.Fatal(err)
}
}
@@ -72,14 +75,14 @@ func BenchmarkExecuteStateTransition_WithCache(b *testing.B) {
}
beaconState.SetSlot(currentSlot)
// Run the state transition once to populate the cache.
if _, err := ExecuteStateTransition(context.Background(), beaconState, block); err != nil {
if _, err := state.ExecuteStateTransition(context.Background(), beaconState, block); err != nil {
b.Fatalf("failed to process block, benchmarks will fail: %v", err)
}
b.N = runAmount
b.ResetTimer()
for i := 0; i < b.N; i++ {
if _, err := ExecuteStateTransition(context.Background(), cleanStates[i], block); err != nil {
if _, err := state.ExecuteStateTransition(context.Background(), cleanStates[i], block); err != nil {
b.Fatalf("failed to process block, benchmarks will fail: %v", err)
}
}
@@ -105,7 +108,7 @@ func BenchmarkProcessEpoch_2FullEpochs(b *testing.B) {
for i := 0; i < b.N; i++ {
// ProcessEpochPrecompute is the optimized version of process epoch. It's enabled by default
// at run time.
if _, err := ProcessEpochPrecompute(context.Background(), beaconState.Copy()); err != nil {
if _, err := state.ProcessEpochPrecompute(context.Background(), beaconState.Copy()); err != nil {
b.Fatal(err)
}
}
@@ -133,19 +136,88 @@ func BenchmarkHashTreeRootState_FullState(b *testing.B) {
}
// Hydrate the HashTreeRootState cache.
if _, err := beaconState.HashTreeRoot(); err != nil {
if _, err := beaconState.HashTreeRoot(ctx); err != nil {
b.Fatal(err)
}
b.N = 50
b.ResetTimer()
for i := 0; i < b.N; i++ {
if _, err := beaconState.HashTreeRoot(); err != nil {
if _, err := beaconState.HashTreeRoot(ctx); err != nil {
b.Fatal(err)
}
}
}
func BenchmarkMarshalState_FullState(b *testing.B) {
beaconState, err := benchutil.PreGenState2FullEpochs()
if err != nil {
b.Fatal(err)
}
natState := beaconState.InnerStateUnsafe()
b.Run("Proto_Marshal", func(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
b.N = 1000
for i := 0; i < b.N; i++ {
if _, err := proto.Marshal(natState); err != nil {
b.Fatal(err)
}
}
})
b.Run("Fast_SSZ_Marshal", func(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
b.N = 1000
for i := 0; i < b.N; i++ {
if _, err := natState.MarshalSSZ(); err != nil {
b.Fatal(err)
}
}
})
}
func BenchmarkUnmarshalState_FullState(b *testing.B) {
beaconState, err := benchutil.PreGenState2FullEpochs()
if err != nil {
b.Fatal(err)
}
natState := beaconState.InnerStateUnsafe()
protoObject, err := proto.Marshal(natState)
if err != nil {
b.Fatal(err)
}
sszObject, err := natState.MarshalSSZ()
if err != nil {
b.Fatal(err)
}
b.Run("Proto_Unmarshal", func(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
b.N = 1000
for i := 0; i < b.N; i++ {
if err := proto.Unmarshal(protoObject, &pb.BeaconState{}); err != nil {
b.Fatal(err)
}
}
})
b.Run("Fast_SSZ_Unmarshal", func(b *testing.B) {
b.ResetTimer()
b.ReportAllocs()
b.N = 1000
for i := 0; i < b.N; i++ {
sszState := &pb.BeaconState{}
if err := sszState.UnmarshalSSZ(sszObject); err != nil {
b.Fatal(err)
}
}
})
}
func clonedStates(beaconState *beaconstate.BeaconState) []*beaconstate.BeaconState {
clonedStates := make([]*beaconstate.BeaconState, runAmount)
for i := 0; i < runAmount; i++ {

View File

@@ -34,8 +34,8 @@ go_test(
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
"@com_github_urfave_cli//:go_default_library",
"@in_gopkg_d4l3k_messagediff_v1//:go_default_library",
"@in_gopkg_urfave_cli_v2//:go_default_library",
"@io_bazel_rules_go//go/tools/bazel:go_default_library",
],
)
@@ -68,8 +68,8 @@ go_test(
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
"@com_github_urfave_cli//:go_default_library",
"@in_gopkg_d4l3k_messagediff_v1//:go_default_library",
"@in_gopkg_urfave_cli_v2//:go_default_library",
"@io_bazel_rules_go//go/tools/bazel:go_default_library",
],
)

View File

@@ -72,7 +72,7 @@ func ExecuteStateTransition(
interop.WriteBlockToDisk(signed, false)
interop.WriteStateToDisk(state)
postStateRoot, err := state.HashTreeRoot()
postStateRoot, err := state.HashTreeRoot(ctx)
if err != nil {
return nil, err
}
@@ -181,7 +181,7 @@ func CalculateStateRoot(
return [32]byte{}, errors.Wrap(err, "could not process block")
}
return state.HashTreeRoot()
return state.HashTreeRoot(ctx)
}
// ProcessSlot happens every slot and focuses on the slot counter and block roots record updates.
@@ -205,7 +205,7 @@ func ProcessSlot(ctx context.Context, state *stateTrie.BeaconState) (*stateTrie.
defer span.End()
span.AddAttributes(trace.Int64Attribute("slot", int64(state.Slot())))
prevStateRoot, err := state.HashTreeRoot()
prevStateRoot, err := state.HashTreeRoot(ctx)
if err != nil {
return nil, err
}
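HashTreeRoot now accepts a context, which is what lets the HTR span added in #5197 attach to the caller's trace. A standalone sketch of the pattern with a stand-in hasher (the span names here are illustrative):

package main

import (
	"context"

	"go.opencensus.io/trace"
)

// hashTreeRoot stands in for state.HashTreeRoot: taking a context lets it open
// a child span, so its cost shows up nested under the caller's span.
func hashTreeRoot(ctx context.Context) ([32]byte, error) {
	_, span := trace.StartSpan(ctx, "beaconState.HashTreeRoot")
	defer span.End()
	return [32]byte{}, nil // real hashing elided
}

func main() {
	ctx, span := trace.StartSpan(context.Background(), "ProcessSlot")
	defer span.End()
	if _, err := hashTreeRoot(ctx); err != nil {
		panic(err)
	}
}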

View File

@@ -428,7 +428,7 @@ func TestProcessBlock_PassesProcessingConditions(t *testing.T) {
if err != nil {
t.Error(err)
}
attestingIndices, err := attestationutil.AttestingIndices(blockAtt.AggregationBits, committee)
attestingIndices := attestationutil.AttestingIndices(blockAtt.AggregationBits, committee)
if err != nil {
t.Error(err)
}
@@ -743,7 +743,7 @@ func TestProcessBlk_AttsBasedOnValidatorCount(t *testing.T) {
if err != nil {
t.Error(err)
}
attestingIndices, err := attestationutil.AttestingIndices(att.AggregationBits, committee)
attestingIndices := attestationutil.AttestingIndices(att.AggregationBits, committee)
if err != nil {
t.Error(err)
}

View File

@@ -36,17 +36,17 @@ go_library(
"//shared/params:go_default_library",
"//shared/sliceutil:go_default_library",
"//shared/traceutil:go_default_library",
"@com_github_boltdb_bolt//:go_default_library",
"@com_github_dgraph_io_ristretto//:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_golang_snappy//:go_default_library",
"@com_github_mdlayher_prombolt//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
"@com_github_prysmaticlabs_prombbolt//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_etcd_go_bbolt//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
)

View File

@@ -4,9 +4,9 @@ import (
"context"
"encoding/binary"
"github.com/boltdb/bolt"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
bolt "go.etcd.io/bbolt"
"go.opencensus.io/trace"
)

View File

@@ -3,11 +3,11 @@ package kv
import (
"context"
"github.com/boltdb/bolt"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
bolt "go.etcd.io/bbolt"
"go.opencensus.io/trace"
)

View File

@@ -4,7 +4,6 @@ import (
"context"
"fmt"
"github.com/boltdb/bolt"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
@@ -12,6 +11,7 @@ import (
dbpb "github.com/prysmaticlabs/prysm/proto/beacon/db"
"github.com/prysmaticlabs/prysm/shared/sliceutil"
"github.com/prysmaticlabs/prysm/shared/traceutil"
bolt "go.etcd.io/bbolt"
"go.opencensus.io/trace"
)

View File

@@ -6,9 +6,9 @@ import (
"os"
"path"
"github.com/boltdb/bolt"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
bolt "go.etcd.io/bbolt"
"go.opencensus.io/trace"
)

View File

@@ -4,9 +4,9 @@ import (
"bytes"
"context"
"fmt"
"math"
"strconv"
"github.com/boltdb/bolt"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
@@ -17,6 +17,7 @@ import (
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/sliceutil"
log "github.com/sirupsen/logrus"
bolt "go.etcd.io/bbolt"
"go.opencensus.io/trace"
)
@@ -70,50 +71,11 @@ func (k *Store) Blocks(ctx context.Context, f *filters.QueryFilter) ([]*ethpb.Si
err := k.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(blocksBucket)
// If no filter criteria are specified, return an error.
if f == nil {
return errors.New("must specify a filter criteria for retrieving blocks")
}
// Creates a list of indices from the passed in filter values, such as:
// []byte("0x2093923") in the parent root indices bucket to be used for looking up
// block roots that were stored under each of those indices for O(1) lookup.
indicesByBucket, err := createBlockIndicesFromFilters(f)
keys, err := getBlockRootsByFilter(ctx, tx, f)
if err != nil {
return errors.Wrap(err, "could not determine lookup indices")
return err
}
// We retrieve block roots that match a filter criteria of slot ranges, if specified.
filtersMap := f.Filters()
rootsBySlotRange := fetchBlockRootsBySlotRange(
tx.Bucket(blockSlotIndicesBucket),
filtersMap[filters.StartSlot],
filtersMap[filters.EndSlot],
filtersMap[filters.StartEpoch],
filtersMap[filters.EndEpoch],
filtersMap[filters.SlotStep],
)
// Once we have a list of block roots that correspond to each
// lookup index, we find the intersection across all of them and use
// that list of roots to lookup the block. These block will
// meet the filter criteria.
indices := lookupValuesForIndices(indicesByBucket, tx)
keys := rootsBySlotRange
if len(indices) > 0 {
// If we have found indices that meet the filter criteria, and there are also
// block roots that meet the slot range filter criteria, we find the intersection
// between these two sets of roots.
if len(rootsBySlotRange) > 0 {
joined := append([][][]byte{keys}, indices...)
keys = sliceutil.IntersectionByteSlices(joined...)
} else {
// If we have found indices that meet the filter criteria, but there are no block roots
// that meet the slot range filter criteria, we find the intersection
// of the regular filter indices.
keys = sliceutil.IntersectionByteSlices(indices...)
}
}
for i := 0; i < len(keys); i++ {
encoded := bkt.Get(keys[i])
block := &ethpb.SignedBeaconBlock{}
@@ -133,48 +95,11 @@ func (k *Store) BlockRoots(ctx context.Context, f *filters.QueryFilter) ([][32]b
defer span.End()
blockRoots := make([][32]byte, 0)
err := k.db.View(func(tx *bolt.Tx) error {
// If no filter criteria are specified, return an error.
if f == nil {
return errors.New("must specify a filter criteria for retrieving block roots")
}
// Creates a list of indices from the passed in filter values, such as:
// []byte("0x2093923") in the parent root indices bucket to be used for looking up
// block roots that were stored under each of those indices for O(1) lookup.
indicesByBucket, err := createBlockIndicesFromFilters(f)
keys, err := getBlockRootsByFilter(ctx, tx, f)
if err != nil {
return errors.Wrap(err, "could not determine lookup indices")
return err
}
// We retrieve block roots that match a filter criteria of slot ranges, if specified.
filtersMap := f.Filters()
rootsBySlotRange := fetchBlockRootsBySlotRange(
tx.Bucket(blockSlotIndicesBucket),
filtersMap[filters.StartSlot],
filtersMap[filters.EndSlot],
filtersMap[filters.StartEpoch],
filtersMap[filters.EndEpoch],
filtersMap[filters.SlotStep],
)
// Once we have a list of block roots that correspond to each
// lookup index, we find the intersection across all of them.
indices := lookupValuesForIndices(indicesByBucket, tx)
keys := rootsBySlotRange
if len(indices) > 0 {
// If we have found indices that meet the filter criteria, and there are also
// block roots that meet the slot range filter criteria, we find the intersection
// between these two sets of roots.
if len(rootsBySlotRange) > 0 {
joined := append([][][]byte{keys}, indices...)
keys = sliceutil.IntersectionByteSlices(joined...)
} else {
// If we have found indices that meet the filter criteria, but there are no block roots
// that meet the slot range filter criteria, we find the intersection
// of the regular filter indices.
keys = sliceutil.IntersectionByteSlices(indices...)
}
}
for i := 0; i < len(keys); i++ {
blockRoots = append(blockRoots, bytesutil.ToBytes32(keys[i]))
}
@@ -222,6 +147,9 @@ func (k *Store) DeleteBlock(ctx context.Context, blockRoot [32]byte) error {
return errors.Wrap(err, "could not delete root for DB indices")
}
k.blockCache.Del(string(blockRoot[:]))
if err := k.clearBlockSlotBitField(ctx, tx, block.Block.Slot); err != nil {
return err
}
return bkt.Delete(blockRoot[:])
})
}
@@ -247,6 +175,9 @@ func (k *Store) DeleteBlocks(ctx context.Context, blockRoots [][32]byte) error {
return errors.Wrap(err, "could not delete root for DB indices")
}
k.blockCache.Del(string(blockRoot[:]))
if err := k.clearBlockSlotBitField(ctx, tx, block.Block.Slot); err != nil {
return err
}
if err := bkt.Delete(blockRoot[:]); err != nil {
return err
}
@@ -267,6 +198,10 @@ func (k *Store) SaveBlock(ctx context.Context, signed *ethpb.SignedBeaconBlock)
return nil
}
return k.db.Update(func(tx *bolt.Tx) error {
if err := k.setBlockSlotBitField(ctx, tx, signed.Block.Slot); err != nil {
return err
}
bkt := tx.Bucket(blocksBucket)
if existingBlock := bkt.Get(blockRoot[:]); existingBlock != nil {
return nil
@@ -291,6 +226,10 @@ func (k *Store) SaveBlocks(ctx context.Context, blocks []*ethpb.SignedBeaconBloc
return k.db.Update(func(tx *bolt.Tx) error {
for _, block := range blocks {
if err := k.setBlockSlotBitField(ctx, tx, block.Block.Slot); err != nil {
return err
}
blockRoot, err := ssz.HashTreeRoot(block.Block)
if err != nil {
return err
@@ -364,6 +303,177 @@ func (k *Store) SaveGenesisBlockRoot(ctx context.Context, blockRoot [32]byte) er
})
}
// HighestSlotBlocks returns the blocks with the highest slot from the db.
func (k *Store) HighestSlotBlocks(ctx context.Context) ([]*ethpb.SignedBeaconBlock, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.HighestSlotBlocks")
defer span.End()
blocks := make([]*ethpb.SignedBeaconBlock, 0)
err := k.db.View(func(tx *bolt.Tx) error {
sBkt := tx.Bucket(slotsHasObjectBucket)
savedSlots := sBkt.Get(savedBlockSlotsKey)
highestIndex, err := bytesutil.HighestBitIndex(savedSlots)
if err != nil {
return err
}
blocks, err = k.blocksAtSlotBitfieldIndex(ctx, tx, uint64(highestIndex))
if err != nil {
return err
}
return nil
})
return blocks, err
}
// HighestSlotBlocksBelow returns the blocks with the highest slot below the input slot from the db.
func (k *Store) HighestSlotBlocksBelow(ctx context.Context, slot uint64) ([]*ethpb.SignedBeaconBlock, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.HighestSlotBlocksBelow")
defer span.End()
blocks := make([]*ethpb.SignedBeaconBlock, 0)
err := k.db.View(func(tx *bolt.Tx) error {
sBkt := tx.Bucket(slotsHasObjectBucket)
savedSlots := sBkt.Get(savedBlockSlotsKey)
highestIndex, err := bytesutil.HighestBitIndexAt(savedSlots, int(slot))
if err != nil {
return err
}
blocks, err = k.blocksAtSlotBitfieldIndex(ctx, tx, uint64(highestIndex))
if err != nil {
return err
}
return nil
})
return blocks, err
}
// blocksAtSlotBitfieldIndex retrieves the blocks in DB at the given index. The index represents
// the position in the slot bitfield that the saved blocks map to.
func (k *Store) blocksAtSlotBitfieldIndex(ctx context.Context, tx *bolt.Tx, index uint64) ([]*ethpb.SignedBeaconBlock, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.blocksAtSlotBitfieldIndex")
defer span.End()
// Clamp before subtracting so index 0 cannot underflow to MaxUint64.
highestSlot := uint64(math.Max(0, float64(index)-1))
f := filters.NewFilter().SetStartSlot(highestSlot).SetEndSlot(highestSlot)
keys, err := getBlockRootsByFilter(ctx, tx, f)
if err != nil {
return nil, err
}
blocks := make([]*ethpb.SignedBeaconBlock, 0, len(keys))
bBkt := tx.Bucket(blocksBucket)
for i := 0; i < len(keys); i++ {
encoded := bBkt.Get(keys[i])
block := &ethpb.SignedBeaconBlock{}
if err := decode(encoded, block); err != nil {
return nil, err
}
blocks = append(blocks, block)
}
return blocks, err
}
// setBlockSlotBitField sets the block slot bit in DB.
// This helps track which slots have saved blocks in the db.
func (k *Store) setBlockSlotBitField(ctx context.Context, tx *bolt.Tx, slot uint64) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.setBlockSlotBitField")
defer span.End()
k.blockSlotBitLock.Lock()
defer k.blockSlotBitLock.Unlock()
bucket := tx.Bucket(slotsHasObjectBucket)
slotBitfields := bucket.Get(savedBlockSlotsKey)
// Copy is needed to avoid unsafe pointer conversions.
// See: https://github.com/etcd-io/bbolt/pull/201
tmp := make([]byte, len(slotBitfields))
copy(tmp, slotBitfields)
slotBitfields = bytesutil.SetBit(tmp, int(slot))
return bucket.Put(savedBlockSlotsKey, slotBitfields)
}
// clearBlockSlotBitField clears the block slot bit in DB.
// This helps track which slots have saved blocks in the db.
func (k *Store) clearBlockSlotBitField(ctx context.Context, tx *bolt.Tx, slot uint64) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.clearBlockSlotBitField")
defer span.End()
k.blockSlotBitLock.Lock()
defer k.blockSlotBitLock.Unlock()
bucket := tx.Bucket(slotsHasObjectBucket)
slotBitfields := bucket.Get(savedBlockSlotsKey)
// Copy is needed to avoid unsafe pointer conversions.
// See: https://github.com/etcd-io/bbolt/pull/201
tmp := make([]byte, len(slotBitfields))
copy(tmp, slotBitfields)
slotBitfields = bytesutil.ClearBit(tmp, int(slot))
return bucket.Put(savedBlockSlotsKey, slotBitfields)
}
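// Illustrative sketch, not part of this diff: how the bitfield helpers above
// compose. SetBit/ClearBit return the updated bitfield, and HighestBitIndexAt
// appears to return a one-based bit index, which is why blocksAtSlotBitfieldIndex
// maps it back to a slot via index-1. Assumed round trip:
//
//	bits := bytesutil.SetBit(nil, 1)                // block saved at slot 1
//	bits = bytesutil.SetBit(bits, 10)               // block saved at slot 10
//	idx, _ := bytesutil.HighestBitIndexAt(bits, 11) // 11, i.e. slot 10
//	bits = bytesutil.ClearBit(bits, 10)             // slot-10 block deleted
//	idx, _ = bytesutil.HighestBitIndexAt(bits, 11)  // 2, i.e. slot 1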
// getBlockRootsByFilter retrieves the block roots given the filter criteria.
func getBlockRootsByFilter(ctx context.Context, tx *bolt.Tx, f *filters.QueryFilter) ([][]byte, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.getBlockRootsByFilter")
defer span.End()
// If no filter criteria are specified, return an error.
if f == nil {
return nil, errors.New("must specify filter criteria for retrieving blocks")
}
// Creates a list of indices from the passed in filter values, such as:
// []byte("0x2093923") in the parent root indices bucket to be used for looking up
// block roots that were stored under each of those indices for O(1) lookup.
indicesByBucket, err := createBlockIndicesFromFilters(f)
if err != nil {
return nil, errors.Wrap(err, "could not determine lookup indices")
}
// We retrieve block roots that match a filter criteria of slot ranges, if specified.
filtersMap := f.Filters()
rootsBySlotRange := fetchBlockRootsBySlotRange(
tx.Bucket(blockSlotIndicesBucket),
filtersMap[filters.StartSlot],
filtersMap[filters.EndSlot],
filtersMap[filters.StartEpoch],
filtersMap[filters.EndEpoch],
filtersMap[filters.SlotStep],
)
// Once we have a list of block roots that correspond to each
// lookup index, we find the intersection across all of them and use
// that list of roots to look up the blocks. These blocks will
// meet the filter criteria.
indices := lookupValuesForIndices(indicesByBucket, tx)
keys := rootsBySlotRange
if len(indices) > 0 {
// If we have found indices that meet the filter criteria, and there are also
// block roots that meet the slot range filter criteria, we find the intersection
// between these two sets of roots.
if len(rootsBySlotRange) > 0 {
joined := append([][][]byte{keys}, indices...)
keys = sliceutil.IntersectionByteSlices(joined...)
} else {
// If we have found indices that meet the filter criteria, but there are no block roots
// that meet the slot range filter criteria, we find the intersection
// of the regular filter indices.
keys = sliceutil.IntersectionByteSlices(indices...)
}
}
return keys, nil
}
// fetchBlockRootsBySlotRange looks into a boltDB bucket and performs a binary-search
// range scan over sorted left-padded byte keys between a start slot and an end slot.
// If both the start and end slot are the same, and are 0, the function returns nil.
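A hedged usage sketch of the extracted helper from inside package kv, assuming an open read transaction tx (this mirrors how blocksAtSlotBitfieldIndex calls it above):

// Roots of all blocks saved in slots 100 through 200:
f := filters.NewFilter().SetStartSlot(100).SetEndSlot(200)
keys, err := getBlockRootsByFilter(ctx, tx, f)
if err != nil {
	return err
}
// keys now holds the matching block roots, ready for blocksBucket lookups.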

View File

@@ -419,3 +419,193 @@ func TestStore_Blocks_Retrieve_SlotRangeWithStep(t *testing.T) {
}
}
}
func TestStore_SaveBlock_CanGetHighest(t *testing.T) {
db := setupDB(t)
defer teardownDB(t, db)
ctx := context.Background()
block := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 1}}
if err := db.SaveBlock(ctx, block); err != nil {
t.Fatal(err)
}
highestSavedBlock, err := db.HighestSlotBlocks(ctx)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(block, highestSavedBlock[0]) {
t.Errorf("Wanted %v, received %v", block, highestSavedBlock)
}
block = &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 999}}
if err := db.SaveBlock(ctx, block); err != nil {
t.Fatal(err)
}
highestSavedBlock, err = db.HighestSlotBlocks(ctx)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(block, highestSavedBlock[0]) {
t.Errorf("Wanted %v, received %v", block, highestSavedBlock)
}
block = &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 300000000}} // 100 years.
if err := db.SaveBlock(ctx, block); err != nil {
t.Fatal(err)
}
highestSavedBlock, err = db.HighestSlotBlocks(ctx)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(block, highestSavedBlock[0]) {
t.Errorf("Wanted %v, received %v", block, highestSavedBlock)
}
}
func TestStore_SaveBlock_CanGetHighestAt(t *testing.T) {
db := setupDB(t)
defer teardownDB(t, db)
ctx := context.Background()
block1 := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 1}}
db.SaveBlock(ctx, block1)
block2 := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 10}}
db.SaveBlock(ctx, block2)
block3 := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 100}}
db.SaveBlock(ctx, block3)
highestAt, err := db.HighestSlotBlocksBelow(ctx, 2)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(block1, highestAt[0]) {
t.Errorf("Wanted %v, received %v", block1, highestAt)
}
highestAt, err = db.HighestSlotBlocksBelow(ctx, 11)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(block2, highestAt[0]) {
t.Errorf("Wanted %v, received %v", block2, highestAt)
}
highestAt, err = db.HighestSlotBlocksBelow(ctx, 101)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(block3, highestAt[0]) {
t.Errorf("Wanted %v, received %v", block3, highestAt)
}
r3, _ := ssz.HashTreeRoot(block3.Block)
db.DeleteBlock(ctx, r3)
highestAt, err = db.HighestSlotBlocksBelow(ctx, 101)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(block2, highestAt[0]) {
t.Errorf("Wanted %v, received %v", block2, highestAt)
}
}
func TestStore_SaveBlocks_CanGetHighest(t *testing.T) {
db := setupDB(t)
defer teardownDB(t, db)
ctx := context.Background()
b := make([]*ethpb.SignedBeaconBlock, 500)
for i := 0; i < 500; i++ {
b[i] = &ethpb.SignedBeaconBlock{
Block: &ethpb.BeaconBlock{
ParentRoot: []byte("parent"),
Slot: uint64(i),
},
}
}
if err := db.SaveBlocks(ctx, b); err != nil {
t.Fatal(err)
}
highestSavedBlock, err := db.HighestSlotBlocks(ctx)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(b[len(b)-1], highestSavedBlock[0]) {
t.Errorf("Wanted %v, received %v", b[len(b)-1], highestSavedBlock)
}
}
func TestStore_DeleteBlock_CanGetHighest(t *testing.T) {
db := setupDB(t)
defer teardownDB(t, db)
ctx := context.Background()
b50 := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 50}}
if err := db.SaveBlock(ctx, b50); err != nil {
t.Fatal(err)
}
highestSavedBlock, err := db.HighestSlotBlocks(ctx)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(b50, highestSavedBlock[0]) {
t.Errorf("Wanted %v, received %v", b50, highestSavedBlock)
}
b51 := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 51}}
r51, _ := ssz.HashTreeRoot(b51.Block)
if err := db.SaveBlock(ctx, b51); err != nil {
t.Fatal(err)
}
highestSavedBlock, err = db.HighestSlotBlocks(ctx)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(b51, highestSavedBlock[0]) {
t.Errorf("Wanted %v, received %v", b51, highestSavedBlock)
}
if err := db.DeleteBlock(ctx, r51); err != nil {
t.Fatal(err)
}
highestSavedBlock, err = db.HighestSlotBlocks(ctx)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(b50, highestSavedBlock[0]) {
t.Errorf("Wanted %v, received %v", b50, highestSavedBlock)
}
}
func TestStore_DeleteBlocks_CanGetHighest(t *testing.T) {
db := setupDB(t)
defer teardownDB(t, db)
ctx := context.Background()
b := make([]*ethpb.SignedBeaconBlock, 100)
r := make([][32]byte, 100)
for i := 0; i < 100; i++ {
b[i] = &ethpb.SignedBeaconBlock{
Block: &ethpb.BeaconBlock{
ParentRoot: []byte("parent"),
Slot: uint64(i),
},
}
r[i], _ = ssz.HashTreeRoot(b[i].Block)
}
if err := db.SaveBlocks(ctx, b); err != nil {
t.Fatal(err)
}
if err := db.DeleteBlocks(ctx, [][32]byte{r[99], r[98], r[97]}); err != nil {
t.Fatal(err)
}
highestSavedBlock, err := db.HighestSlotBlocks(ctx)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(b[96], highestSavedBlock[0]) {
t.Errorf("Wanted %v, received %v", b[len(b)-1], highestSavedBlock)
}
}
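The tests above encode a simple contract: HighestSlotBlocks returns the block(s) at the highest saved slot, HighestSlotBlocksBelow returns the block(s) at the highest saved slot strictly below the given slot, and deletions move the answer back down. A minimal in-memory sketch of that contract (a standalone illustration, not the bolt-backed implementation):

package main

import "fmt"

// slotIndex is a hypothetical stand-in for the store's slot bitfield:
// saved[slot] == true means at least one block exists at that slot.
type slotIndex map[uint64]bool

// highestBelow mirrors HighestSlotBlocksBelow's lookup: the highest saved
// slot strictly below the given slot (0 if nothing qualifies).
func (s slotIndex) highestBelow(slot uint64) uint64 {
    best := uint64(0)
    for sl := range s {
        if sl < slot && sl > best {
            best = sl
        }
    }
    return best
}

func main() {
    idx := slotIndex{1: true, 10: true, 100: true}
    fmt.Println(idx.highestBelow(2))   // 1
    fmt.Println(idx.highestBelow(11))  // 10
    fmt.Println(idx.highestBelow(101)) // 100
    delete(idx, 100) // analogous to DeleteBlock on the slot-100 block
    fmt.Println(idx.highestBelow(101)) // falls back to 10
}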

View File

@@ -4,10 +4,10 @@ import (
"context"
"errors"
"github.com/boltdb/bolt"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/traceutil"
bolt "go.etcd.io/bbolt"
"go.opencensus.io/trace"
)

View File

@@ -4,8 +4,8 @@ import (
"context"
"fmt"
"github.com/boltdb/bolt"
"github.com/ethereum/go-ethereum/common"
bolt "go.etcd.io/bbolt"
"go.opencensus.io/trace"
)

View File

@@ -5,12 +5,12 @@ import (
"context"
"fmt"
"github.com/boltdb/bolt"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/db/filters"
dbpb "github.com/prysmaticlabs/prysm/proto/beacon/db"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/traceutil"
bolt "go.etcd.io/bbolt"
"go.opencensus.io/trace"
)

View File

@@ -3,14 +3,15 @@ package kv
import (
"os"
"path"
"sync"
"time"
"github.com/boltdb/bolt"
"github.com/dgraph-io/ristretto"
"github.com/mdlayher/prombolt"
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
prombolt "github.com/prysmaticlabs/prombbolt"
"github.com/prysmaticlabs/prysm/beacon-chain/db/iface"
bolt "go.etcd.io/bbolt"
)
var _ = iface.Database(&Store{})
@@ -35,6 +36,8 @@ type Store struct {
databasePath string
blockCache *ristretto.Cache
validatorIndexCache *ristretto.Cache
stateSlotBitLock sync.Mutex
blockSlotBitLock sync.Mutex
}
// NewKVStore initializes a new boltDB key-value store at the directory
@@ -98,6 +101,7 @@ func NewKVStore(dirPath string) (*Store, error) {
stateSummaryBucket,
archivedIndexRootBucket,
archivedIndexStateBucket,
slotsHasObjectBucket,
// Indices buckets.
attestationHeadBlockRootBucket,
attestationSourceRootIndicesBucket,

View File

@@ -3,9 +3,9 @@ package kv
import (
"context"
"github.com/boltdb/bolt"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
bolt "go.etcd.io/bbolt"
"go.opencensus.io/trace"
)

View File

@@ -3,9 +3,9 @@ package kv
import (
"context"
"github.com/boltdb/bolt"
"github.com/gogo/protobuf/proto"
"github.com/prysmaticlabs/prysm/proto/beacon/db"
bolt "go.etcd.io/bbolt"
"go.opencensus.io/trace"
)

View File

@@ -24,6 +24,7 @@ var (
powchainBucket = []byte("powchain")
archivedIndexRootBucket = []byte("archived-index-root")
archivedIndexStateBucket = []byte("archived-index-state")
slotsHasObjectBucket = []byte("slots-has-objects")
// Key indices buckets.
blockParentRootIndicesBucket = []byte("block-parent-root-indices")
@@ -43,6 +44,8 @@ var (
finalizedCheckpointKey = []byte("finalized-checkpoint")
powchainDataKey = []byte("powchain-data")
lastArchivedIndexKey = []byte("last-archived")
savedBlockSlotsKey = []byte("saved-block-slots")
savedStateSlotsKey = []byte("saved-state-slots")
// Migration bucket.
migrationBucket = []byte("migrations")

View File

@@ -3,9 +3,9 @@ package kv
import (
"context"
"github.com/boltdb/bolt"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
bolt "go.etcd.io/bbolt"
"go.opencensus.io/trace"
)

View File

@@ -3,14 +3,16 @@ package kv
import (
"bytes"
"context"
"math"
"github.com/boltdb/bolt"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/db/filters"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
bolt "go.etcd.io/bbolt"
"go.opencensus.io/trace"
)
@@ -118,7 +120,10 @@ func (k *Store) SaveState(ctx context.Context, state *state.BeaconState, blockRo
return k.db.Update(func(tx *bolt.Tx) error {
bucket := tx.Bucket(stateBucket)
return bucket.Put(blockRoot[:], enc)
if err := bucket.Put(blockRoot[:], enc); err != nil {
return err
}
return k.setStateSlotBitField(ctx, tx, state.Slot())
})
}
@@ -141,6 +146,9 @@ func (k *Store) SaveStates(ctx context.Context, states []*state.BeaconState, blo
return k.db.Update(func(tx *bolt.Tx) error {
bucket := tx.Bucket(stateBucket)
for i, rt := range blockRoots {
if err := k.setStateSlotBitField(ctx, tx, states[i].Slot()); err != nil {
return err
}
err = bucket.Put(rt[:], multipleEncs[i])
if err != nil {
return err
@@ -196,6 +204,14 @@ func (k *Store) DeleteState(ctx context.Context, blockRoot [32]byte) error {
}
}
slot, err := slotByBlockRoot(ctx, tx, blockRoot[:])
if err != nil {
return err
}
if err := k.clearStateSlotBitField(ctx, tx, slot); err != nil {
return err
}
bkt = tx.Bucket(stateBucket)
return bkt.Delete(blockRoot[:])
})
@@ -229,8 +245,8 @@ func (k *Store) DeleteStates(ctx context.Context, blockRoots [][32]byte) error {
return err
}
bkt = tx.Bucket(blocksBucket)
headBlkRoot := bkt.Get(headBlockRootKey)
blockBkt := tx.Bucket(blocksBucket)
headBlkRoot := blockBkt.Get(headBlockRootKey)
bkt = tx.Bucket(stateBucket)
c := bkt.Cursor()
@@ -246,6 +262,15 @@ func (k *Store) DeleteStates(ctx context.Context, blockRoots [][32]byte) error {
return errors.New("cannot delete genesis, finalized, or head state")
}
}
slot, err := slotByBlockRoot(ctx, tx, blockRoot)
if err != nil {
return err
}
if err := k.clearStateSlotBitField(ctx, tx, slot); err != nil {
return err
}
if err := c.Delete(); err != nil {
return err
}
@@ -264,3 +289,136 @@ func createState(enc []byte) (*pb.BeaconState, error) {
}
return protoState, nil
}
// slotByBlockRoot retrieves the corresponding slot of the input block root.
func slotByBlockRoot(ctx context.Context, tx *bolt.Tx, blockRoot []byte) (uint64, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.slotByBlockRoot")
defer span.End()
if featureconfig.Get().NewStateMgmt {
bkt := tx.Bucket(stateSummaryBucket)
enc := bkt.Get(blockRoot)
if enc == nil {
return 0, errors.New("state summary enc can't be nil")
}
stateSummary := &pb.StateSummary{}
if err := decode(enc, stateSummary); err != nil {
return 0, err
}
return stateSummary.Slot, nil
}
bkt := tx.Bucket(stateBucket)
enc := bkt.Get(blockRoot)
if enc == nil {
return 0, errors.New("state enc can't be nil")
}
s, err := createState(enc)
if err != nil {
return 0, err
}
if s == nil {
return 0, errors.New("state can't be nil")
}
return s.Slot, nil
}
// HighestSlotStates returns the states with the highest slot from the db.
// Ideally there should be just one state per slot, but since a validator
// can double propose, a single slot could have multiple block roots and
// resulting states. This returns a list of states.
func (k *Store) HighestSlotStates(ctx context.Context) ([]*state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.HighestSlotStates")
defer span.End()
var states []*state.BeaconState
err := k.db.View(func(tx *bolt.Tx) error {
slotBkt := tx.Bucket(slotsHasObjectBucket)
savedSlots := slotBkt.Get(savedStateSlotsKey)
highestIndex, err := bytesutil.HighestBitIndex(savedSlots)
if err != nil {
return err
}
highestSlot := highestIndex - 1
highestSlot = int(math.Max(0, float64(highestSlot)))
f := filters.NewFilter().SetStartSlot(uint64(highestSlot)).SetEndSlot(uint64(highestSlot))
keys, err := getBlockRootsByFilter(ctx, tx, f)
if err != nil {
return err
}
if len(keys) == 0 {
return errors.New("could not get one block root to get state")
}
stateBkt := tx.Bucket(stateBucket)
for i := range keys {
enc := stateBkt.Get(keys[i][:])
if enc == nil {
continue
}
pbState, err := createState(enc)
if err != nil {
return err
}
s, err := state.InitializeFromProtoUnsafe(pbState)
if err != nil {
return err
}
states = append(states, s)
}
return err
})
if err != nil {
return nil, err
}
if len(states) == 0 {
return nil, errors.New("could not get one state")
}
return states, nil
}
// setStateSlotBitField sets the state slot bit in the DB.
// This helps to track which slots have a saved state in the DB.
func (k *Store) setStateSlotBitField(ctx context.Context, tx *bolt.Tx, slot uint64) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.setStateSlotBitField")
defer span.End()
k.stateSlotBitLock.Lock()
defer k.stateSlotBitLock.Unlock()
bucket := tx.Bucket(slotsHasObjectBucket)
slotBitfields := bucket.Get(savedStateSlotsKey)
// Copy is needed to avoid unsafe pointer conversions.
// See: https://github.com/etcd-io/bbolt/pull/201
tmp := make([]byte, len(slotBitfields))
copy(tmp, slotBitfields)
slotBitfields = bytesutil.SetBit(tmp, int(slot))
return bucket.Put(savedStateSlotsKey, slotBitfields)
}
// clearStateSlotBitField clears the state slot bit in the DB.
// This helps to track which slots have a saved state in the DB.
func (k *Store) clearStateSlotBitField(ctx context.Context, tx *bolt.Tx, slot uint64) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.clearStateSlotBitField")
defer span.End()
k.stateSlotBitLock.Lock()
defer k.stateSlotBitLock.Unlock()
bucket := tx.Bucket(slotsHasObjectBucket)
slotBitfields := bucket.Get(savedStateSlotsKey)
// Copy is needed to avoid unsafe pointer conversions.
// See: https://github.com/etcd-io/bbolt/pull/201
tmp := make([]byte, len(slotBitfields))
copy(tmp, slotBitfields)
slotBitfields = bytesutil.ClearBit(tmp, int(slot))
return bucket.Put(savedStateSlotsKey, slotBitfields)
}
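Both helpers above lean on three byte-slice bit primitives: set a bit at an index, clear it, and find the highest set bit. A standalone sketch of those primitives with zero-based indexing (local helpers written for illustration, not the bytesutil implementations; note the store treats HighestBitIndex as one-based and subtracts 1 to recover the slot):

package main

import "fmt"

func setBit(b []byte, i int) []byte {
    // Grow the bitfield as higher slots are saved.
    for len(b) <= i/8 {
        b = append(b, 0)
    }
    b[i/8] |= 1 << uint(i%8)
    return b
}

func clearBit(b []byte, i int) []byte {
    if i/8 < len(b) {
        b[i/8] &^= 1 << uint(i%8)
    }
    return b
}

func highestBitIndex(b []byte) int {
    for i := len(b)*8 - 1; i >= 0; i-- {
        if b[i/8]&(1<<uint(i%8)) != 0 {
            return i
        }
    }
    return -1 // nothing saved yet
}

func main() {
    var bits []byte
    bits = setBit(bits, 1)
    bits = setBit(bits, 999)
    fmt.Println(highestBitIndex(bits)) // 999
    bits = clearBit(bits, 999)         // analogous to DeleteState at slot 999
    fmt.Println(highestBitIndex(bits)) // 1
}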

View File

@@ -3,8 +3,8 @@ package kv
import (
"context"
"github.com/boltdb/bolt"
pb "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
bolt "go.etcd.io/bbolt"
"go.opencensus.io/trace"
)

View File

@@ -5,6 +5,7 @@ import (
"reflect"
"testing"
"github.com/gogo/protobuf/proto"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/go-ssz"
"github.com/prysmaticlabs/prysm/beacon-chain/state"
@@ -286,3 +287,84 @@ func TestStore_DeleteHeadState(t *testing.T) {
t.Error("Did not receive wanted error")
}
}
func TestStore_SaveDeleteState_CanGetHighest(t *testing.T) {
db := setupDB(t)
defer teardownDB(t, db)
s0 := &pb.BeaconState{Slot: 1}
b := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 1}}
r, _ := ssz.HashTreeRoot(b.Block)
if err := db.SaveBlock(context.Background(), b); err != nil {
t.Fatal(err)
}
st, err := state.InitializeFromProto(s0)
if err != nil {
t.Fatal(err)
}
if err := db.SaveState(context.Background(), st, r); err != nil {
t.Fatal(err)
}
s1 := &pb.BeaconState{Slot: 999}
b = &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 999}}
r1, _ := ssz.HashTreeRoot(b.Block)
if err := db.SaveBlock(context.Background(), b); err != nil {
t.Fatal(err)
}
st, err = state.InitializeFromProto(s1)
if err != nil {
t.Fatal(err)
}
if err := db.SaveState(context.Background(), st, r1); err != nil {
t.Fatal(err)
}
highest, err := db.HighestSlotStates(context.Background())
if err != nil {
t.Fatal(err)
}
if !proto.Equal(highest[0].InnerStateUnsafe(), s1) {
t.Errorf("Did not retrieve saved state: %v != %v", highest, s1)
}
s2 := &pb.BeaconState{Slot: 1000}
b = &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Slot: 1000}}
r2, _ := ssz.HashTreeRoot(b.Block)
if err := db.SaveBlock(context.Background(), b); err != nil {
t.Fatal(err)
}
st, err = state.InitializeFromProto(s2)
if err != nil {
t.Fatal(err)
}
if err := db.SaveState(context.Background(), st, r2); err != nil {
t.Fatal(err)
}
highest, err = db.HighestSlotStates(context.Background())
if err != nil {
t.Fatal(err)
}
if !proto.Equal(highest[0].InnerStateUnsafe(), s2) {
t.Errorf("Did not retrieve saved state: %v != %v", highest, s2)
}
if err := db.DeleteState(context.Background(), r2); err != nil {
t.Fatal(err)
}
highest, err = db.HighestSlotStates(context.Background())
if err != nil {
t.Fatal(err)
}
if !proto.Equal(highest[0].InnerStateUnsafe(), s1) {
t.Errorf("Did not retrieve saved state: %v != %v", highest, s1)
}
if err := db.DeleteState(context.Background(), r1); err != nil {
t.Fatal(err)
}
highest, err = db.HighestSlotStates(context.Background())
if err != nil {
t.Fatal(err)
}
if !proto.Equal(highest[0].InnerStateUnsafe(), s0) {
t.Errorf("Did not retrieve saved state: %v != %v", highest, s1)
}
}

View File

@@ -3,7 +3,7 @@ package kv
import (
"bytes"
"github.com/boltdb/bolt"
bolt "go.etcd.io/bbolt"
)
// lookupValuesForIndices takes in a list of indices and looks up

View File

@@ -5,9 +5,9 @@ import (
"encoding/binary"
"fmt"
"github.com/boltdb/bolt"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/shared/params"
bolt "go.etcd.io/bbolt"
"go.opencensus.io/trace"
)

View File

@@ -13,6 +13,6 @@ go_library(
deps = [
"//shared/cmd:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_urfave_cli//:go_default_library",
"@in_gopkg_urfave_cli_v2//:go_default_library",
],
)

View File

@@ -1,31 +1,31 @@
package flags
import (
"github.com/urfave/cli"
"gopkg.in/urfave/cli.v2"
)
var (
// ArchiveEnableFlag defines whether or not the beacon chain should archive
// historical blocks, attestations, and validator set changes.
ArchiveEnableFlag = cli.BoolFlag{
ArchiveEnableFlag = &cli.BoolFlag{
Name: "archive",
Usage: "Whether or not beacon chain should archive historical data including blocks, attestations, and validator set changes",
}
// ArchiveValidatorSetChangesFlag defines whether or not the beacon chain should archive
// historical validator set changes in persistent storage.
ArchiveValidatorSetChangesFlag = cli.BoolFlag{
ArchiveValidatorSetChangesFlag = &cli.BoolFlag{
Name: "archive-validator-set-changes",
Usage: "Whether or not beacon chain should archive historical validator set changes",
}
// ArchiveBlocksFlag defines whether or not the beacon chain should archive
// historical block data in persistent storage.
ArchiveBlocksFlag = cli.BoolFlag{
ArchiveBlocksFlag = &cli.BoolFlag{
Name: "archive-blocks",
Usage: "Whether or not beacon chain should archive historical blocks",
}
// ArchiveAttestationsFlag defines whether or not the beacon chain should archive
// historical attestation data in persistent storage.
ArchiveAttestationsFlag = cli.BoolFlag{
ArchiveAttestationsFlag = &cli.BoolFlag{
Name: "archive-attestations",
Usage: "Whether or not beacon chain should archive historical blocks",
}

View File

@@ -1,102 +1,113 @@
package flags
import (
"github.com/urfave/cli"
"gopkg.in/urfave/cli.v2"
)
var (
// HTTPWeb3ProviderFlag provides an HTTP access endpoint to an ETH 1.0 RPC.
HTTPWeb3ProviderFlag = cli.StringFlag{
HTTPWeb3ProviderFlag = &cli.StringFlag{
Name: "http-web3provider",
Usage: "A mainchain web3 provider string http endpoint",
Value: "https://goerli.prylabs.net",
}
// Web3ProviderFlag defines a flag for a mainchain RPC endpoint.
Web3ProviderFlag = cli.StringFlag{
Web3ProviderFlag = &cli.StringFlag{
Name: "web3provider",
Usage: "A mainchain web3 provider string endpoint. Can either be an IPC file string or a WebSocket endpoint. Cannot be an HTTP endpoint.",
Value: "wss://goerli.prylabs.net/websocket",
}
// DepositContractFlag defines a flag for the deposit contract address.
DepositContractFlag = cli.StringFlag{
DepositContractFlag = &cli.StringFlag{
Name: "deposit-contract",
Usage: "Deposit contract address. Beacon chain node will listen logs coming from the deposit contract to determine when validator is eligible to participate.",
Value: "0x4689a3C63CE249355C8a573B5974db21D2d1b8Ef",
}
// RPCHost defines the host on which the RPC server should listen.
RPCHost = cli.StringFlag{
RPCHost = &cli.StringFlag{
Name: "rpc-host",
Usage: "Host on which the RPC server should listen",
Value: "0.0.0.0",
}
// RPCPort defines a beacon node RPC port to open.
RPCPort = cli.IntFlag{
RPCPort = &cli.IntFlag{
Name: "rpc-port",
Usage: "RPC port exposed by a beacon node",
Value: 4000,
}
// RPCMaxPageSize defines the maximum numbers per page returned in RPC responses from this
// beacon node (default: 500).
RPCMaxPageSize = cli.IntFlag{
RPCMaxPageSize = &cli.IntFlag{
Name: "rpc-max-page-size",
Usage: "Max number of items returned per page in RPC responses for paginated endpoints (default: 500)",
Usage: "Max number of items returned per page in RPC responses for paginated endpoints.",
Value: 500,
}
// CertFlag defines a flag for the node's TLS certificate.
CertFlag = cli.StringFlag{
CertFlag = &cli.StringFlag{
Name: "tls-cert",
Usage: "Certificate for secure gRPC. Pass this and the tls-key flag in order to use gRPC securely.",
}
// KeyFlag defines a flag for the node's TLS key.
KeyFlag = cli.StringFlag{
KeyFlag = &cli.StringFlag{
Name: "tls-key",
Usage: "Key for secure gRPC. Pass this and the tls-cert flag in order to use gRPC securely.",
}
// GRPCGatewayPort enables a gRPC gateway to be exposed for Prysm.
GRPCGatewayPort = cli.IntFlag{
GRPCGatewayPort = &cli.IntFlag{
Name: "grpc-gateway-port",
Usage: "Enable gRPC gateway for JSON requests",
}
// GPRCGatewayCorsDomain serves preflight requests when serving gRPC JSON gateway.
GPRCGatewayCorsDomain = &cli.StringFlag{
Name: "grpc-gateway-corsdomain",
Usage: "Comma separated list of domains from which to accept cross origin requests " +
"(browser enforced). This flag has no effect if not used with --grpc-gateway-port.",
}
// MinSyncPeers specifies the required number of successful peer handshakes in order
// to start syncing with external peers.
MinSyncPeers = cli.IntFlag{
MinSyncPeers = &cli.IntFlag{
Name: "min-sync-peers",
Usage: "The required number of valid peers to connect with before syncing.",
Value: 3,
}
// ContractDeploymentBlock is the block in which the eth1 deposit contract was deployed.
ContractDeploymentBlock = cli.IntFlag{
ContractDeploymentBlock = &cli.IntFlag{
Name: "contract-deployment-block",
Usage: "The eth1 block in which the deposit contract was deployed.",
Value: 1960177,
}
// SetGCPercent is the percentage of current live allocations at which the garbage collector is to run.
SetGCPercent = cli.IntFlag{
SetGCPercent = &cli.IntFlag{
Name: "gc-percent",
Usage: "The percentage of freshly allocated data to live data on which the gc will be run again.",
Value: 100,
}
// UnsafeSync starts the beacon node from the previously saved head state and syncs from there.
UnsafeSync = cli.BoolFlag{
UnsafeSync = &cli.BoolFlag{
Name: "unsafe-sync",
Usage: "Starts the beacon node with the previously saved head state instead of finalized state.",
}
// SlasherCertFlag defines a flag for the slasher TLS certificate.
SlasherCertFlag = cli.StringFlag{
SlasherCertFlag = &cli.StringFlag{
Name: "slasher-tls-cert",
Usage: "Certificate for secure slasher gRPC connection. Pass this in order to use slasher gRPC securely.",
}
// SlasherProviderFlag defines a flag for a slasher RPC provider.
SlasherProviderFlag = cli.StringFlag{
SlasherProviderFlag = &cli.StringFlag{
Name: "slasher-provider",
Usage: "A slasher provider string endpoint. Can either be an grpc server endpoint.",
Value: "127.0.0.1:5000",
}
// SlotsPerArchivedPoint specifies the number of slots between the archived points, to save beacon state in the cold
// section of DB.
SlotsPerArchivedPoint = cli.IntFlag{
SlotsPerArchivedPoint = &cli.IntFlag{
Name: "slots-per-archive-point",
Usage: "The slot durations of when an archived state gets saved in the DB.",
Value: 128,
}
// EnableDiscv5 enables running discv5.
EnableDiscv5 = &cli.BoolFlag{
Name: "enable-discv5",
Usage: "Starts dv5 dht.",
}
)
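This file reflects the repository-wide move from github.com/urfave/cli to gopkg.in/urfave/cli.v2: flags become pointers (&cli.BoolFlag{...}) and the Global* context accessors collapse into plain Bool/String/Int. A minimal sketch of the v2 pattern with a hypothetical flag (not one of Prysm's):

package main

import (
    "fmt"
    "log"
    "os"

    "gopkg.in/urfave/cli.v2"
)

// demoFlag is a hypothetical flag, defined as a pointer per the v2 API.
var demoFlag = &cli.BoolFlag{
    Name:  "demo",
    Usage: "Enable demo mode",
}

func main() {
    app := &cli.App{
        Name:  "example",
        Flags: []cli.Flag{demoFlag},
        Action: func(ctx *cli.Context) error {
            // v2 drops GlobalBool/GlobalString in favor of plain Bool/String.
            fmt.Println("demo enabled:", ctx.Bool(demoFlag.Name))
            return nil
        },
    }
    if err := app.Run(os.Args); err != nil {
        log.Fatal(err)
    }
}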

View File

@@ -3,7 +3,7 @@ package flags
import (
"github.com/prysmaticlabs/prysm/shared/cmd"
log "github.com/sirupsen/logrus"
"github.com/urfave/cli"
"gopkg.in/urfave/cli.v2"
)
// GlobalFlags specifies all the global flags for the
@@ -17,6 +17,7 @@ type GlobalFlags struct {
MaxPageSize int
DeploymentBlock int
UnsafeSync bool
EnableDiscv5 bool
}
var globalConfig *GlobalFlags
@@ -38,31 +39,34 @@ func Init(c *GlobalFlags) {
// based on the provided cli context.
func ConfigureGlobalFlags(ctx *cli.Context) {
cfg := &GlobalFlags{}
if ctx.GlobalBool(ArchiveEnableFlag.Name) {
if ctx.Bool(ArchiveEnableFlag.Name) {
cfg.EnableArchive = true
}
if ctx.GlobalBool(ArchiveValidatorSetChangesFlag.Name) {
if ctx.Bool(ArchiveValidatorSetChangesFlag.Name) {
cfg.EnableArchivedValidatorSetChanges = true
}
if ctx.GlobalBool(ArchiveBlocksFlag.Name) {
if ctx.Bool(ArchiveBlocksFlag.Name) {
cfg.EnableArchivedBlocks = true
}
if ctx.GlobalBool(ArchiveAttestationsFlag.Name) {
if ctx.Bool(ArchiveAttestationsFlag.Name) {
cfg.EnableArchivedAttestations = true
}
if ctx.GlobalBool(UnsafeSync.Name) {
if ctx.Bool(UnsafeSync.Name) {
cfg.UnsafeSync = true
}
cfg.MaxPageSize = ctx.GlobalInt(RPCMaxPageSize.Name)
cfg.DeploymentBlock = ctx.GlobalInt(ContractDeploymentBlock.Name)
if ctx.Bool(EnableDiscv5.Name) {
cfg.EnableDiscv5 = true
}
cfg.MaxPageSize = ctx.Int(RPCMaxPageSize.Name)
cfg.DeploymentBlock = ctx.Int(ContractDeploymentBlock.Name)
configureMinimumPeers(ctx, cfg)
Init(cfg)
}
func configureMinimumPeers(ctx *cli.Context, cfg *GlobalFlags) {
cfg.MinimumSyncPeers = ctx.GlobalInt(MinSyncPeers.Name)
maxPeers := int(ctx.GlobalInt64(cmd.P2PMaxPeers.Name))
cfg.MinimumSyncPeers = ctx.Int(MinSyncPeers.Name)
maxPeers := int(ctx.Int64(cmd.P2PMaxPeers.Name))
if cfg.MinimumSyncPeers > maxPeers {
log.Warnf("Changing Minimum Sync Peers to %d", maxPeers)
cfg.MinimumSyncPeers = maxPeers

View File

@@ -1,29 +1,29 @@
package flags
import (
"github.com/urfave/cli"
"gopkg.in/urfave/cli.v2"
)
var (
// InteropGenesisStateFlag defines a flag for the beacon node to load genesis state via file.
InteropGenesisStateFlag = cli.StringFlag{
InteropGenesisStateFlag = &cli.StringFlag{
Name: "interop-genesis-state",
Usage: "The genesis state file (.SSZ) to load from",
}
// InteropMockEth1DataVotesFlag enables mocking the eth1 proof-of-work chain data put into blocks by proposers.
InteropMockEth1DataVotesFlag = cli.BoolFlag{
InteropMockEth1DataVotesFlag = &cli.BoolFlag{
Name: "interop-eth1data-votes",
Usage: "Enable mocking of eth1 data votes for proposers to package into blocks",
}
// InteropGenesisTimeFlag specifies genesis time for state generation.
InteropGenesisTimeFlag = cli.Uint64Flag{
InteropGenesisTimeFlag = &cli.Uint64Flag{
Name: "interop-genesis-time",
Usage: "Specify the genesis time for interop genesis state generation. Must be used with " +
"--interop-num-validators",
}
// InteropNumValidatorsFlag specifies number of genesis validators for state generation.
InteropNumValidatorsFlag = cli.Uint64Flag{
InteropNumValidatorsFlag = &cli.Uint64Flag{
Name: "interop-num-validators",
Usage: "Specify number of genesis validators to generate for interop. Must be used with --interop-genesis-time",
}

View File

@@ -4,6 +4,7 @@ load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"cors.go",
"gateway.go",
"handlers.go",
"log.go",
@@ -16,6 +17,7 @@ go_library(
deps = [
"//shared:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_grpc_gateway_library",
"@com_github_rs_cors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@grpc_ecosystem_grpc_gateway//runtime:go_default_library",
"@org_golang_google_grpc//:go_default_library",

View File

@@ -0,0 +1,20 @@
package gateway
import (
"net/http"
"github.com/rs/cors"
)
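// newCorsHandler wraps the handler with CORS middleware when allowed origins are configured; with none, the handler is returned unchanged.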
func newCorsHandler(srv http.Handler, allowedOrigins []string) http.Handler {
if len(allowedOrigins) == 0 {
return srv
}
c := cors.New(cors.Options{
AllowedOrigins: allowedOrigins,
AllowedMethods: []string{http.MethodPost, http.MethodGet},
MaxAge: 600,
AllowedHeaders: []string{"*"},
})
return c.Handler(srv)
}
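A hedged usage sketch of the same rs/cors options outside the gateway package (the origin and port below are placeholders, not Prysm defaults):

package main

import (
    "log"
    "net/http"

    "github.com/rs/cors"
)

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("ok"))
    })

    // The same options newCorsHandler uses: GET/POST, any header, 10-minute preflight cache.
    c := cors.New(cors.Options{
        AllowedOrigins: []string{"http://localhost:4242"}, // placeholder origin
        AllowedMethods: []string{http.MethodPost, http.MethodGet},
        MaxAge:         600,
        AllowedHeaders: []string{"*"},
    })
    log.Fatal(http.ListenAndServe(":8080", c.Handler(mux)))
}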

View File

@@ -19,13 +19,14 @@ var _ = shared.Service(&Gateway{})
// Gateway is the gRPC gateway to serve HTTP JSON traffic as a proxy and forward
// it to the beacon-chain gRPC server.
type Gateway struct {
conn *grpc.ClientConn
ctx context.Context
cancel context.CancelFunc
gatewayAddr string
remoteAddr string
server *http.Server
mux *http.ServeMux
conn *grpc.ClientConn
ctx context.Context
cancel context.CancelFunc
gatewayAddr string
remoteAddr string
server *http.Server
mux *http.ServeMux
allowedOrigins []string
startFailure error
}
@@ -64,7 +65,7 @@ func (g *Gateway) Start() {
g.server = &http.Server{
Addr: g.gatewayAddr,
Handler: g.mux,
Handler: newCorsHandler(g.mux, g.allowedOrigins),
}
go func() {
if err := g.server.ListenAndServe(); err != http.ErrServerClosed {
@@ -105,16 +106,17 @@ func (g *Gateway) Stop() error {
// New returns a new gateway server which translates HTTP into gRPC.
// Accepts a context, an optional http.ServeMux, and the allowed CORS origins.
func New(ctx context.Context, remoteAddress, gatewayAddress string, mux *http.ServeMux) *Gateway {
func New(ctx context.Context, remoteAddress, gatewayAddress string, mux *http.ServeMux, allowedOrigins []string) *Gateway {
if mux == nil {
mux = http.NewServeMux()
}
return &Gateway{
remoteAddr: remoteAddress,
gatewayAddr: gatewayAddress,
ctx: ctx,
mux: mux,
remoteAddr: remoteAddress,
gatewayAddr: gatewayAddress,
ctx: ctx,
mux: mux,
allowedOrigins: allowedOrigins,
}
}
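With the new parameter, call sites parse the CORS flag once and hand the list to New. A minimal sketch of the call shape (addresses and origin are placeholders, and the import path is assumed from this diff's package layout):

package main

import (
    "context"
    "net/http"
    "strings"

    "github.com/prysmaticlabs/prysm/beacon-chain/gateway"
)

func main() {
    mux := http.NewServeMux()
    // Placeholder for the --grpc-gateway-corsdomain flag value.
    origins := strings.Split("http://localhost:4242", ",")
    gw := gateway.New(context.Background(), "127.0.0.1:4000", "0.0.0.0:8000", mux, origins)
    gw.Start()
    select {} // block; the real node drives the lifecycle through its service registry
}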

View File

@@ -5,6 +5,7 @@ import (
"flag"
"fmt"
"net/http"
"strings"
joonix "github.com/joonix/log"
"github.com/prysmaticlabs/prysm/beacon-chain/gateway"
@@ -13,9 +14,10 @@ import (
)
var (
beaconRPC = flag.String("beacon-rpc", "localhost:4000", "Beacon chain gRPC endpoint")
port = flag.Int("port", 8000, "Port to serve on")
debug = flag.Bool("debug", false, "Enable debug logging")
beaconRPC = flag.String("beacon-rpc", "localhost:4000", "Beacon chain gRPC endpoint")
port = flag.Int("port", 8000, "Port to serve on")
debug = flag.Bool("debug", false, "Enable debug logging")
allowedOrigins = flag.String("corsdomain", "", "A comma separated list of CORS domains to allow.")
)
func init() {
@@ -31,7 +33,7 @@ func main() {
}
mux := http.NewServeMux()
gw := gateway.New(context.Background(), *beaconRPC, fmt.Sprintf("0.0.0.0:%d", *port), mux)
gw := gateway.New(context.Background(), *beaconRPC, fmt.Sprintf("0.0.0.0:%d", *port), mux, strings.Split(*allowedOrigins, ","))
mux.HandleFunc("/swagger/", gateway.SwaggerServer())
mux.HandleFunc("/healthz", healthzServer(gw))
gw.Start()

View File

@@ -153,7 +153,7 @@ func (s *Service) DepositsNumberAndRootAtHeight(ctx context.Context, blockHeight
func (s *Service) saveGenesisState(ctx context.Context, genesisState *stateTrie.BeaconState) error {
s.chainStartDeposits = make([]*ethpb.Deposit, genesisState.NumValidators())
stateRoot, err := genesisState.HashTreeRoot()
stateRoot, err := genesisState.HashTreeRoot(ctx)
if err != nil {
return err
}

View File

@@ -17,10 +17,11 @@ import (
"github.com/prysmaticlabs/prysm/shared/logutil"
"github.com/prysmaticlabs/prysm/shared/version"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
gologging "github.com/whyrusleeping/go-logging"
prefixed "github.com/x-cray/logrus-prefixed-formatter"
_ "go.uber.org/automaxprocs"
"gopkg.in/urfave/cli.v2"
"gopkg.in/urfave/cli.v2/altsrc"
)
var appFlags = []cli.Flag{
@@ -37,6 +38,7 @@ var appFlags = []cli.Flag{
flags.ContractDeploymentBlock,
flags.SetGCPercent,
flags.UnsafeSync,
flags.EnableDiscv5,
flags.InteropMockEth1DataVotesFlag,
flags.InteropGenesisStateFlag,
flags.InteropNumValidatorsFlag,
@@ -79,15 +81,16 @@ var appFlags = []cli.Flag{
debug.TraceFlag,
cmd.LogFileName,
cmd.EnableUPnPFlag,
cmd.ConfigFileFlag,
}
func init() {
appFlags = append(appFlags, featureconfig.BeaconChainFlags...)
appFlags = cmd.WrapFlags(append(appFlags, featureconfig.BeaconChainFlags...))
}
func main() {
log := logrus.WithField("prefix", "main")
app := cli.NewApp()
app := cli.App{}
app.Name = "beacon-chain"
app.Usage = "this is a beacon chain implementation for Ethereum 2.0"
app.Action = startNode
@@ -96,7 +99,14 @@ func main() {
app.Flags = appFlags
app.Before = func(ctx *cli.Context) error {
format := ctx.GlobalString(cmd.LogFormat.Name)
// Load any flags from file, if specified.
if ctx.IsSet(cmd.ConfigFileFlag.Name) {
if err := altsrc.InitInputSourceWithContext(appFlags, altsrc.NewYamlSourceFromFlagFunc(cmd.ConfigFileFlag.Name))(ctx); err != nil {
return err
}
}
format := ctx.String(cmd.LogFormat.Name)
switch format {
case "text":
formatter := new(prefixed.TextFormatter)
@@ -104,7 +114,7 @@ func main() {
formatter.FullTimestamp = true
// If persistent log files are written - we disable the log messages coloring because
// the colors are ANSI codes and seen as gibberish in the log files.
formatter.DisableColors = ctx.GlobalString(cmd.LogFileName.Name) != ""
formatter.DisableColors = ctx.String(cmd.LogFileName.Name) != ""
logrus.SetFormatter(formatter)
break
case "fluentd":
@@ -121,7 +131,7 @@ func main() {
return fmt.Errorf("unknown log format %s", format)
}
logFileName := ctx.GlobalString(cmd.LogFileName.Name)
logFileName := ctx.String(cmd.LogFileName.Name)
if logFileName != "" {
if err := logutil.ConfigurePersistentLogging(logFileName); err != nil {
log.WithError(err).Error("Failed to configuring logging to disk.")
@@ -129,7 +139,7 @@ func main() {
}
if ctx.IsSet(flags.SetGCPercent.Name) {
runtimeDebug.SetGCPercent(ctx.GlobalInt(flags.SetGCPercent.Name))
runtimeDebug.SetGCPercent(ctx.Int(flags.SetGCPercent.Name))
}
runtime.GOMAXPROCS(runtime.NumCPU())
return debug.Setup(ctx)
@@ -149,7 +159,7 @@ func main() {
}
func startNode(ctx *cli.Context) error {
verbosity := ctx.GlobalString(cmd.VerbosityFlag.Name)
verbosity := ctx.String(cmd.VerbosityFlag.Name)
level, err := logrus.ParseLevel(verbosity)
if err != nil {
return err
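The new --config-file path above loads flag values from YAML before the normal parse; keys match flag names, and only flags wrapped by altsrc (what cmd.WrapFlags presumably does) are filled. A hedged sketch of the mechanism with a hypothetical flag:

package main

import (
    "fmt"
    "log"
    "os"

    "gopkg.in/urfave/cli.v2"
    "gopkg.in/urfave/cli.v2/altsrc"
)

func main() {
    // Only wrapped flags are fed from the YAML source; "demo-port" is hypothetical.
    flags := []cli.Flag{
        altsrc.NewIntFlag(&cli.IntFlag{Name: "demo-port", Value: 4000}),
        &cli.StringFlag{Name: "config-file"},
    }
    app := &cli.App{
        Flags: flags,
        Before: func(ctx *cli.Context) error {
            // Mirrors the diff: only consult the YAML source when the flag is set.
            if ctx.IsSet("config-file") {
                return altsrc.InitInputSourceWithContext(flags,
                    altsrc.NewYamlSourceFromFlagFunc("config-file"))(ctx)
            }
            return nil
        },
        Action: func(ctx *cli.Context) error {
            // With a YAML file containing "demo-port: 9000", this prints 9000.
            fmt.Println("demo-port:", ctx.Int("demo-port"))
            return nil
        },
    }
    if err := app.Run(os.Args); err != nil {
        log.Fatal(err)
    }
}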

View File

@@ -38,7 +38,7 @@ go_library(
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_urfave_cli//:go_default_library",
"@in_gopkg_urfave_cli_v2//:go_default_library",
],
)
@@ -51,6 +51,6 @@ go_test(
"//beacon-chain/core/feed/state:go_default_library",
"//shared/testutil:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@com_github_urfave_cli//:go_default_library",
"@in_gopkg_urfave_cli_v2//:go_default_library",
],
)

View File

@@ -46,7 +46,7 @@ import (
"github.com/prysmaticlabs/prysm/shared/tracing"
"github.com/prysmaticlabs/prysm/shared/version"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
"gopkg.in/urfave/cli.v2"
)
var log = logrus.WithField("prefix", "node")
@@ -79,10 +79,10 @@ type BeaconNode struct {
func NewBeaconNode(ctx *cli.Context) (*BeaconNode, error) {
if err := tracing.Setup(
"beacon-chain", // service name
ctx.GlobalString(cmd.TracingProcessNameFlag.Name),
ctx.GlobalString(cmd.TracingEndpointFlag.Name),
ctx.GlobalFloat64(cmd.TraceSampleFractionFlag.Name),
ctx.GlobalBool(cmd.EnableTracingFlag.Name),
ctx.String(cmd.TracingProcessNameFlag.Name),
ctx.String(cmd.TracingEndpointFlag.Name),
ctx.Float64(cmd.TraceSampleFractionFlag.Name),
ctx.Bool(cmd.EnableTracingFlag.Name),
); err != nil {
return nil, err
}
@@ -151,7 +151,7 @@ func NewBeaconNode(ctx *cli.Context) (*BeaconNode, error) {
return nil, err
}
if !ctx.GlobalBool(cmd.DisableMonitoringFlag.Name) {
if !ctx.Bool(cmd.DisableMonitoringFlag.Name) {
if err := beacon.registerPrometheusService(ctx); err != nil {
return nil, err
}
@@ -228,10 +228,10 @@ func (b *BeaconNode) startForkChoice() {
}
func (b *BeaconNode) startDB(ctx *cli.Context) error {
baseDir := ctx.GlobalString(cmd.DataDirFlag.Name)
baseDir := ctx.String(cmd.DataDirFlag.Name)
dbPath := path.Join(baseDir, beaconChainDBName)
clearDB := ctx.GlobalBool(cmd.ClearDB.Name)
forceClearDB := ctx.GlobalBool(cmd.ForceClearDB.Name)
clearDB := ctx.Bool(cmd.ClearDB.Name)
forceClearDB := ctx.Bool(cmd.ForceClearDB.Name)
d, err := db.NewDB(dbPath)
if err != nil {
@@ -269,7 +269,7 @@ func (b *BeaconNode) startStateGen() {
func (b *BeaconNode) registerP2P(ctx *cli.Context) error {
// Bootnode ENR may be a filepath to an ENR file.
bootnodeAddrs := strings.Split(ctx.GlobalString(cmd.BootstrapNode.Name), ",")
bootnodeAddrs := strings.Split(ctx.String(cmd.BootstrapNode.Name), ",")
for i, addr := range bootnodeAddrs {
if filepath.Ext(addr) == ".enr" {
b, err := ioutil.ReadFile(addr)
@@ -280,22 +280,28 @@ func (b *BeaconNode) registerP2P(ctx *cli.Context) error {
}
}
datadir := ctx.String(cmd.DataDirFlag.Name)
if datadir == "" {
datadir = cmd.DefaultDataDir()
}
svc, err := p2p.NewService(&p2p.Config{
NoDiscovery: ctx.GlobalBool(cmd.NoDiscovery.Name),
StaticPeers: sliceutil.SplitCommaSeparated(ctx.GlobalStringSlice(cmd.StaticPeers.Name)),
NoDiscovery: ctx.Bool(cmd.NoDiscovery.Name),
StaticPeers: sliceutil.SplitCommaSeparated(ctx.StringSlice(cmd.StaticPeers.Name)),
BootstrapNodeAddr: bootnodeAddrs,
RelayNodeAddr: ctx.GlobalString(cmd.RelayNode.Name),
DataDir: ctx.GlobalString(cmd.DataDirFlag.Name),
LocalIP: ctx.GlobalString(cmd.P2PIP.Name),
HostAddress: ctx.GlobalString(cmd.P2PHost.Name),
HostDNS: ctx.GlobalString(cmd.P2PHostDNS.Name),
PrivateKey: ctx.GlobalString(cmd.P2PPrivKey.Name),
TCPPort: ctx.GlobalUint(cmd.P2PTCPPort.Name),
UDPPort: ctx.GlobalUint(cmd.P2PUDPPort.Name),
MaxPeers: ctx.GlobalUint(cmd.P2PMaxPeers.Name),
WhitelistCIDR: ctx.GlobalString(cmd.P2PWhitelist.Name),
EnableUPnP: ctx.GlobalBool(cmd.EnableUPnPFlag.Name),
Encoding: ctx.GlobalString(cmd.P2PEncoding.Name),
RelayNodeAddr: ctx.String(cmd.RelayNode.Name),
DataDir: datadir,
LocalIP: ctx.String(cmd.P2PIP.Name),
HostAddress: ctx.String(cmd.P2PHost.Name),
HostDNS: ctx.String(cmd.P2PHostDNS.Name),
PrivateKey: ctx.String(cmd.P2PPrivKey.Name),
TCPPort: ctx.Uint(cmd.P2PTCPPort.Name),
UDPPort: ctx.Uint(cmd.P2PUDPPort.Name),
MaxPeers: ctx.Uint(cmd.P2PMaxPeers.Name),
WhitelistCIDR: ctx.String(cmd.P2PWhitelist.Name),
EnableUPnP: ctx.Bool(cmd.EnableUPnPFlag.Name),
EnableDiscv5: ctx.Bool(flags.EnableDiscv5.Name),
Encoding: ctx.String(cmd.P2PEncoding.Name),
})
if err != nil {
return err
@@ -332,7 +338,7 @@ func (b *BeaconNode) registerBlockchainService(ctx *cli.Context) error {
return err
}
maxRoutines := ctx.GlobalInt64(cmd.MaxGoroutines.Name)
maxRoutines := ctx.Int64(cmd.MaxGoroutines.Name)
blockchainService, err := blockchain.NewService(context.Background(), &blockchain.Config{
BeaconDB: b.db,
DepositCache: b.depositCache,
@@ -354,10 +360,10 @@ func (b *BeaconNode) registerBlockchainService(ctx *cli.Context) error {
}
func (b *BeaconNode) registerPOWChainService(cliCtx *cli.Context) error {
if cliCtx.GlobalBool(testSkipPowFlag) {
if cliCtx.Bool(testSkipPowFlag) {
return b.services.RegisterService(&powchain.Service{})
}
depAddress := cliCtx.GlobalString(flags.DepositContractFlag.Name)
depAddress := cliCtx.String(flags.DepositContractFlag.Name)
if depAddress == "" {
log.Fatal(fmt.Sprintf("%s is required", flags.DepositContractFlag.Name))
}
@@ -368,8 +374,8 @@ func (b *BeaconNode) registerPOWChainService(cliCtx *cli.Context) error {
ctx := context.Background()
cfg := &powchain.Web3ServiceConfig{
ETH1Endpoint: cliCtx.GlobalString(flags.Web3ProviderFlag.Name),
HTTPEndPoint: cliCtx.GlobalString(flags.HTTPWeb3ProviderFlag.Name),
ETH1Endpoint: cliCtx.String(flags.Web3ProviderFlag.Name),
HTTPEndPoint: cliCtx.String(flags.HTTPWeb3ProviderFlag.Name),
DepositContract: common.HexToAddress(depAddress),
BeaconDB: b.db,
DepositCache: b.depositCache,
@@ -489,8 +495,8 @@ func (b *BeaconNode) registerRPCService(ctx *cli.Context) error {
syncService = initSyncTmp
}
genesisValidators := ctx.GlobalUint64(flags.InteropNumValidatorsFlag.Name)
genesisStatePath := ctx.GlobalString(flags.InteropGenesisStateFlag.Name)
genesisValidators := ctx.Uint64(flags.InteropNumValidatorsFlag.Name)
genesisStatePath := ctx.String(flags.InteropGenesisStateFlag.Name)
var depositFetcher depositcache.DepositFetcher
var chainStartFetcher powchain.ChainStartFetcher
if genesisValidators > 0 || genesisStatePath != "" {
@@ -505,14 +511,14 @@ func (b *BeaconNode) registerRPCService(ctx *cli.Context) error {
chainStartFetcher = web3Service
}
host := ctx.GlobalString(flags.RPCHost.Name)
port := ctx.GlobalString(flags.RPCPort.Name)
cert := ctx.GlobalString(flags.CertFlag.Name)
key := ctx.GlobalString(flags.KeyFlag.Name)
slasherCert := ctx.GlobalString(flags.SlasherCertFlag.Name)
slasherProvider := ctx.GlobalString(flags.SlasherProviderFlag.Name)
host := ctx.String(flags.RPCHost.Name)
port := ctx.String(flags.RPCPort.Name)
cert := ctx.String(flags.CertFlag.Name)
key := ctx.String(flags.KeyFlag.Name)
slasherCert := ctx.String(flags.SlasherCertFlag.Name)
slasherProvider := ctx.String(flags.SlasherProviderFlag.Name)
mockEth1DataVotes := ctx.GlobalBool(flags.InteropMockEth1DataVotesFlag.Name)
mockEth1DataVotes := ctx.Bool(flags.InteropMockEth1DataVotesFlag.Name)
rpcService := rpc.NewService(context.Background(), &rpc.Config{
Host: host,
Port: port,
@@ -568,7 +574,7 @@ func (b *BeaconNode) registerPrometheusService(ctx *cli.Context) error {
additionalHandlers = append(additionalHandlers, prometheus.Handler{Path: "/tree", Handler: c.TreeHandler})
service := prometheus.NewPrometheusService(
fmt.Sprintf(":%d", ctx.GlobalInt64(cmd.MonitoringPortFlag.Name)),
fmt.Sprintf(":%d", ctx.Int64(cmd.MonitoringPortFlag.Name)),
b.services,
additionalHandlers...,
)
@@ -578,19 +584,20 @@ func (b *BeaconNode) registerPrometheusService(ctx *cli.Context) error {
}
func (b *BeaconNode) registerGRPCGateway(ctx *cli.Context) error {
gatewayPort := ctx.GlobalInt(flags.GRPCGatewayPort.Name)
gatewayPort := ctx.Int(flags.GRPCGatewayPort.Name)
if gatewayPort > 0 {
selfAddress := fmt.Sprintf("127.0.0.1:%d", ctx.GlobalInt(flags.RPCPort.Name))
selfAddress := fmt.Sprintf("127.0.0.1:%d", ctx.Int(flags.RPCPort.Name))
gatewayAddress := fmt.Sprintf("0.0.0.0:%d", gatewayPort)
return b.services.RegisterService(gateway.New(context.Background(), selfAddress, gatewayAddress, nil /*optional mux*/))
allowedOrigins := strings.Split(ctx.String(flags.GPRCGatewayCorsDomain.Name), ",")
return b.services.RegisterService(gateway.New(context.Background(), selfAddress, gatewayAddress, nil /*optional mux*/, allowedOrigins))
}
return nil
}
func (b *BeaconNode) registerInteropServices(ctx *cli.Context) error {
genesisTime := ctx.GlobalUint64(flags.InteropGenesisTimeFlag.Name)
genesisValidators := ctx.GlobalUint64(flags.InteropNumValidatorsFlag.Name)
genesisStatePath := ctx.GlobalString(flags.InteropGenesisStateFlag.Name)
genesisTime := ctx.Uint64(flags.InteropGenesisTimeFlag.Name)
genesisValidators := ctx.Uint64(flags.InteropNumValidatorsFlag.Name)
genesisStatePath := ctx.String(flags.InteropGenesisStateFlag.Name)
if genesisValidators > 0 || genesisStatePath != "" {
svc := interopcoldstart.NewColdStartService(context.Background(), &interopcoldstart.Config{

View File

@@ -9,7 +9,7 @@ import (
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/shared/testutil"
logTest "github.com/sirupsen/logrus/hooks/test"
"github.com/urfave/cli"
"gopkg.in/urfave/cli.v2"
)
// Ensure BeaconNode implements interfaces.
@@ -22,7 +22,7 @@ func TestNodeClose_OK(t *testing.T) {
tmp := fmt.Sprintf("%s/datadirtest2", testutil.TempDir())
os.RemoveAll(tmp)
app := cli.NewApp()
app := cli.App{}
set := flag.NewFlagSet("test", 0)
set.String("web3provider", "ws//127.0.0.1:8546", "web3 provider ws or IPC endpoint")
set.Bool("test-skip-pow", true, "skip pow dial")
@@ -31,7 +31,7 @@ func TestNodeClose_OK(t *testing.T) {
set.Bool("demo-config", true, "demo configuration")
set.String("deposit-contract", "0x0000000000000000000000000000000000000000", "deposit contract address")
context := cli.NewContext(app, set, nil)
context := cli.NewContext(&app, set, nil)
node, err := NewBeaconNode(context)
if err != nil {

View File

@@ -29,6 +29,7 @@ go_library(
"//tools:__subpackages__",
],
deps = [
"//beacon-chain/cache:go_default_library",
"//beacon-chain/p2p/connmgr:go_default_library",
"//beacon-chain/p2p/encoder:go_default_library",
"//beacon-chain/p2p/peers:go_default_library",
@@ -70,6 +71,7 @@ go_library(
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
@@ -92,12 +94,14 @@ go_test(
flaky = True,
tags = ["block-network"],
deps = [
"//beacon-chain/cache:go_default_library",
"//beacon-chain/p2p/testing:go_default_library",
"//proto/testing:go_default_library",
"//shared/iputils:go_default_library",
"//shared/testutil:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/discover:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_libp2p_go_libp2p//:go_default_library",
"@com_github_libp2p_go_libp2p_blankhost//:go_default_library",
@@ -109,6 +113,7 @@ go_test(
"@com_github_libp2p_go_libp2p_swarm//testing:go_default_library",
"@com_github_multiformats_go_multiaddr//:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
],
)

View File

@@ -19,5 +19,6 @@ type Config struct {
MaxPeers uint
WhitelistCIDR string
EnableUPnP bool
EnableDiscv5 bool
Encoding string
}

View File

@@ -14,8 +14,12 @@ import (
"github.com/libp2p/go-libp2p-core/peer"
ma "github.com/multiformats/go-multiaddr"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
)
const attestationSubnetCount = 64
const attSubnetEnrKey = "attnets"
// Listener defines the discovery V5 network interface that is used
// to communicate with other peers.
type Listener interface {
@@ -27,6 +31,7 @@ type Listener interface {
LookupRandom() []*enode.Node
Ping(*enode.Node) error
RequestENR(*enode.Node) (*enode.Node, error)
LocalNode() *enode.LocalNode
}
func createListener(ipAddr net.IP, privKey *ecdsa.PrivateKey, cfg *Config) *discover.UDPv5 {
@@ -34,7 +39,14 @@ func createListener(ipAddr net.IP, privKey *ecdsa.PrivateKey, cfg *Config) *disc
IP: ipAddr,
Port: int(cfg.UDPPort),
}
conn, err := net.ListenUDP("udp4", udpAddr)
// assume ip is either ipv4 or ipv6
networkVersion := ""
if ipAddr.To4() != nil {
networkVersion = "udp4"
} else {
networkVersion = "udp6"
}
conn, err := net.ListenUDP(networkVersion, udpAddr)
if err != nil {
log.Fatal(err)
}
@@ -44,7 +56,7 @@ func createListener(ipAddr net.IP, privKey *ecdsa.PrivateKey, cfg *Config) *disc
}
if cfg.HostAddress != "" {
hostIP := net.ParseIP(cfg.HostAddress)
if hostIP.To4() == nil {
if hostIP.To4() == nil && hostIP.To16() == nil {
log.Errorf("Invalid host address given: %s", hostIP.String())
} else {
localNode.SetFallbackIP(hostIP)
@@ -84,13 +96,13 @@ func createLocalNode(privKey *ecdsa.PrivateKey, ipAddr net.IP, udpPort int, tcpP
localNode.SetFallbackIP(ipAddr)
localNode.SetFallbackUDP(udpPort)
return localNode, nil
return intializeAttSubnets(localNode), nil
}
func startDiscoveryV5(addr net.IP, privKey *ecdsa.PrivateKey, cfg *Config) (*discover.UDPv5, error) {
listener := createListener(addr, privKey, cfg)
node := listener.Self()
log.WithField("nodeID", node.ID()).Info("Started discovery v5")
record := listener.Self()
log.WithField("ENR", record.String()).Info("Started discovery v5")
return listener, nil
}
@@ -108,6 +120,29 @@ func startDHTDiscovery(host core.Host, bootstrapAddr string) error {
return err
}
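// intializeAttSubnets seeds an empty attnets bitvector into the local node's ENR so peers can later discover our subscribed subnets.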
func intializeAttSubnets(node *enode.LocalNode) *enode.LocalNode {
bitV := bitfield.NewBitvector64()
entry := enr.WithEntry(attSubnetEnrKey, bitV.Bytes())
node.Set(entry)
return node
}
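// retrieveAttSubnets decodes the attnets bitvector from an ENR record and returns the committee indices whose bits are set.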
func retrieveAttSubnets(record *enr.Record) ([]uint64, error) {
bitV := bitfield.NewBitvector64()
entry := enr.WithEntry(attSubnetEnrKey, &bitV)
err := record.Load(entry)
if err != nil {
return nil, err
}
committeeIdxs := []uint64{}
for i := uint64(0); i < 64; i++ {
if bitV.BitAt(i) {
committeeIdxs = append(committeeIdxs, i)
}
}
return committeeIdxs, nil
}
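The attnets entry is just a 64-bit bitvector, one bit per attestation subnet. A small standalone round-trip over the bitvector itself, using the same go-bitfield calls (the enode/enr plumbing is omitted, since it needs a live LocalNode):

package main

import (
    "fmt"

    "github.com/prysmaticlabs/go-bitfield"
)

func main() {
    // One bit per attestation subnet, 64 subnets total.
    bitV := bitfield.NewBitvector64()
    bitV.SetBitAt(10, true) // hypothetical: we track committee 10 this epoch
    bitV.SetBitAt(42, true)

    // Mirrors retrieveAttSubnets' decode loop.
    var committeeIdxs []uint64
    for i := uint64(0); i < 64; i++ {
        if bitV.BitAt(i) {
            committeeIdxs = append(committeeIdxs, i)
        }
    }
    fmt.Println(committeeIdxs) // [10 42]
}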
func parseBootStrapAddrs(addrs []string) (discv5Nodes []string, kadDHTNodes []string) {
discv5Nodes, kadDHTNodes = parseGenericAddrs(addrs)
if len(discv5Nodes) == 0 && len(kadDHTNodes) == 0 {

View File

@@ -13,7 +13,10 @@ import (
"github.com/ethereum/go-ethereum/p2p/discover"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p-core/host"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/shared/iputils"
"github.com/prysmaticlabs/prysm/shared/testutil"
logTest "github.com/sirupsen/logrus/hooks/test"
@@ -107,6 +110,90 @@ func TestStartDiscV5_DiscoverAllPeers(t *testing.T) {
}
}
func TestStartDiscV5_DiscoverPeersWithSubnets(t *testing.T) {
port := 2000
ipAddr, pkey := createAddrAndPrivKey(t)
bootListener := createListener(ipAddr, pkey, &Config{UDPPort: uint(port)})
defer bootListener.Close()
bootNode := bootListener.Self()
cfg := &Config{
BootstrapNodeAddr: []string{bootNode.String()},
Discv5BootStrapAddr: []string{bootNode.String()},
Encoding: "ssz",
MaxPeers: 30,
}
// Use shorter period for testing.
currentPeriod := pollingPeriod
pollingPeriod = 1 * time.Second
defer func() {
pollingPeriod = currentPeriod
}()
var listeners []*discover.UDPv5
for i := 1; i <= 3; i++ {
port = 3000 + i
cfg.UDPPort = uint(port)
ipAddr, pkey := createAddrAndPrivKey(t)
listener, err := startDiscoveryV5(ipAddr, pkey, cfg)
if err != nil {
t.Errorf("Could not start discovery for node: %v", err)
}
bitV := bitfield.NewBitvector64()
bitV.SetBitAt(uint64(i), true)
entry := enr.WithEntry(attSubnetEnrKey, &bitV)
listener.LocalNode().Set(entry)
listeners = append(listeners, listener)
}
// Make one service on port 4000.
port = 4000
cfg.UDPPort = uint(port)
s, err := NewService(cfg)
if err != nil {
t.Fatal(err)
}
s.Start()
defer s.Stop()
// Wait for the nodes' local routing tables to be populated with the other nodes.
time.Sleep(discoveryWaitTime)
// look up 3 different subnets
exists, err := s.FindPeersWithSubnet(1)
if err != nil {
t.Fatal(err)
}
exists2, err := s.FindPeersWithSubnet(2)
if err != nil {
t.Fatal(err)
}
exists3, err := s.FindPeersWithSubnet(3)
if err != nil {
t.Fatal(err)
}
if !exists || !exists2 || !exists3 {
t.Fatal("Peer with subnet doesn't exist")
}
// update ENR of a peer
testService := &Service{dv5Listener: listeners[0]}
cache.CommitteeIDs.AddIDs([]uint64{10}, 0)
testService.RefreshENR(0)
time.Sleep(2 * time.Second)
exists, err = s.FindPeersWithSubnet(2)
if err != nil {
t.Fatal(err)
}
if !exists {
t.Fatal("Peer with subnet doesn't exist")
}
}
func TestMultiAddrsConversion_InvalidIPAddr(t *testing.T) {
addr := net.ParseIP("invalidIP")
_, pkey := createAddrAndPrivKey(t)

View File

@@ -26,7 +26,7 @@ func (s *Service) AddConnectionHandler(reqFunc func(ctx context.Context, id peer
log.WithField("currentState", peerConnectionState).WithField("reason", "already active").Trace("Ignoring connection request")
return
}
s.peers.Add(conn.RemotePeer(), conn.RemoteMultiaddr(), conn.Stat().Direction)
s.peers.Add(conn.RemotePeer(), conn.RemoteMultiaddr(), conn.Stat().Direction, nil)
if len(s.peers.Active()) >= int(s.cfg.MaxPeers) {
log.WithField("reason", "at peer limit").Trace("Ignoring connection request")
if err := s.Disconnect(conn.RemotePeer()); err != nil {

View File

@@ -53,6 +53,8 @@ type PubSubProvider interface {
type PeerManager interface {
Disconnect(peer.ID) error
PeerID() peer.ID
RefreshENR(epoch uint64)
FindPeersWithSubnet(index uint64) (bool, error)
}
// Sender abstracts the sending functionality from libp2p.

View File

@@ -11,13 +11,14 @@ import (
filter "github.com/libp2p/go-maddr-filter"
"github.com/multiformats/go-multiaddr"
ma "github.com/multiformats/go-multiaddr"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/beacon-chain/p2p/connmgr"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
)
// buildOptions for the libp2p host.
func buildOptions(cfg *Config, ip net.IP, priKey *ecdsa.PrivateKey) []libp2p.Option {
listen, err := ma.NewMultiaddr(fmt.Sprintf("/ip4/%s/tcp/%d", ip, cfg.TCPPort))
listen, err := multiAddressBuilder(ip.String(), cfg.TCPPort)
if err != nil {
log.Fatalf("Failed to p2p listen: %v", err)
}
@@ -42,7 +43,7 @@ func buildOptions(cfg *Config, ip net.IP, priKey *ecdsa.PrivateKey) []libp2p.Opt
}
if cfg.HostAddress != "" {
options = append(options, libp2p.AddrsFactory(func(addrs []multiaddr.Multiaddr) []multiaddr.Multiaddr {
external, err := multiaddr.NewMultiaddr(fmt.Sprintf("/ip4/%s/tcp/%d", cfg.HostAddress, cfg.TCPPort))
external, err := multiAddressBuilder(cfg.HostAddress, cfg.TCPPort)
if err != nil {
log.WithError(err).Error("Unable to create external multiaddress")
} else {
@@ -67,7 +68,7 @@ func buildOptions(cfg *Config, ip net.IP, priKey *ecdsa.PrivateKey) []libp2p.Opt
log.Errorf("Invalid local ip provided: %s", cfg.LocalIP)
return options
}
listen, err = ma.NewMultiaddr(fmt.Sprintf("/ip4/%s/tcp/%d", cfg.LocalIP, cfg.TCPPort))
listen, err = multiAddressBuilder(cfg.LocalIP, cfg.TCPPort)
if err != nil {
log.Fatalf("Failed to p2p listen: %v", err)
}
@@ -76,6 +77,17 @@ func buildOptions(cfg *Config, ip net.IP, priKey *ecdsa.PrivateKey) []libp2p.Opt
return options
}
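// multiAddressBuilder builds a TCP multiaddr from an IP string and port, selecting /ip4 or /ip6 to match the address family.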
func multiAddressBuilder(ipAddr string, port uint) (ma.Multiaddr, error) {
parsedIP := net.ParseIP(ipAddr)
if parsedIP.To4() == nil && parsedIP.To16() == nil {
return nil, errors.Errorf("invalid ip address provided: %s", ipAddr)
}
if parsedIP.To4() != nil {
return ma.NewMultiaddr(fmt.Sprintf("/ip4/%s/tcp/%d", ipAddr, port))
}
return ma.NewMultiaddr(fmt.Sprintf("/ip6/%s/tcp/%d", ipAddr, port))
}
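The family check is the same net.IP.To4 test createListener uses to pick udp4 versus udp6. A stdlib-only sketch of the branch (documentation addresses, no go-multiaddr dependency):

package main

import (
    "fmt"
    "net"
)

// addrString mirrors multiAddressBuilder's format selection without the
// go-multiaddr dependency.
func addrString(ipAddr string, port uint) (string, error) {
    parsedIP := net.ParseIP(ipAddr)
    if parsedIP == nil {
        return "", fmt.Errorf("invalid ip address provided: %s", ipAddr)
    }
    if parsedIP.To4() != nil {
        return fmt.Sprintf("/ip4/%s/tcp/%d", ipAddr, port), nil
    }
    return fmt.Sprintf("/ip6/%s/tcp/%d", ipAddr, port), nil
}

func main() {
    s4, _ := addrString("192.0.2.1", 13000) // documentation IPv4 (RFC 5737)
    s6, _ := addrString("2001:db8::1", 13000)
    fmt.Println(s4) // /ip4/192.0.2.1/tcp/13000
    fmt.Println(s6) // /ip6/2001:db8::1/tcp/13000
}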
// Adds a private key to the libp2p option if the option was provided.
// If the private key file is missing or cannot be read, or if the
// private key contents cannot be marshaled, an exception is thrown.

View File

@@ -68,6 +68,7 @@ type peerStatus struct {
chainState *pb.Status
chainStateLastUpdated time.Time
badResponses int
committeeIndices []uint64
}
// NewStatus creates a new status entity.
@@ -85,7 +86,7 @@ func (p *Status) MaxBadResponses() int {
// Add adds a peer.
// If a peer already exists with this ID, its address, direction, and committee indices are updated with the supplied data.
func (p *Status) Add(pid peer.ID, address ma.Multiaddr, direction network.Direction) {
func (p *Status) Add(pid peer.ID, address ma.Multiaddr, direction network.Direction, indices []uint64) {
p.lock.Lock()
defer p.lock.Unlock()
@@ -93,6 +94,9 @@ func (p *Status) Add(pid peer.ID, address ma.Multiaddr, direction network.Direct
// Peer already exists, just update its address info.
status.address = address
status.direction = direction
if indices != nil {
status.committeeIndices = indices
}
return
}
@@ -100,7 +104,8 @@ func (p *Status) Add(pid peer.ID, address ma.Multiaddr, direction network.Direct
address: address,
direction: direction,
// Peers start disconnected; state will be updated when the handshake process begins.
peerState: PeerDisconnected,
peerState: PeerDisconnected,
committeeIndices: indices,
}
}
@@ -151,6 +156,50 @@ func (p *Status) ChainState(pid peer.ID) (*pb.Status, error) {
return nil, ErrPeerUnknown
}
// IsActive checks whether a given peer is active (connecting or connected).
func (p *Status) IsActive(pid peer.ID) bool {
p.lock.RLock()
defer p.lock.RUnlock()
status, ok := p.status[pid]
return ok && (status.peerState == PeerConnected || status.peerState == PeerConnecting)
}
// CommitteeIndices retrieves the committee subnets the peer is subscribed to.
func (p *Status) CommitteeIndices(pid peer.ID) ([]uint64, error) {
p.lock.RLock()
defer p.lock.RUnlock()
if status, ok := p.status[pid]; ok {
if status.committeeIndices == nil {
return []uint64{}, nil
}
return status.committeeIndices, nil
}
return nil, ErrPeerUnknown
}
// SubscribedToSubnet retrieves the peers subscribed to the given
// committee subnet.
func (p *Status) SubscribedToSubnet(index uint64) []peer.ID {
p.lock.RLock()
defer p.lock.RUnlock()
peers := make([]peer.ID, 0)
for pid, status := range p.status {
// look at active peers
if (status.peerState == PeerConnecting || status.peerState == PeerConnected) &&
status.committeeIndices != nil {
for _, idx := range status.committeeIndices {
if idx == index {
peers = append(peers, pid)
}
}
}
}
return peers
}
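Together with IsActive and CommitteeIndices, this makes the peerstore queryable by subnet. A hedged usage sketch built from the same calls the package's tests use (the peer ID is a placeholder):

package main

import (
    "fmt"

    "github.com/libp2p/go-libp2p-core/network"
    "github.com/libp2p/go-libp2p-core/peer"
    "github.com/prysmaticlabs/prysm/beacon-chain/p2p/peers"
)

func main() {
    p := peers.NewStatus(3 /* maxBadResponses */)

    // Hypothetical peer ID; real ones come from libp2p connections.
    id := peer.ID("peer-a")
    p.Add(id, nil, network.DirOutbound, []uint64{10, 42})
    p.SetConnectionState(id, peers.PeerConnected)

    indices, _ := p.CommitteeIndices(id)
    fmt.Println(indices)                  // [10 42]
    fmt.Println(p.SubscribedToSubnet(42)) // contains id
    fmt.Println(p.IsActive(id))           // true
}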
// SetConnectionState sets the connection state of the given remote peer.
func (p *Status) SetConnectionState(pid peer.ID, state PeerConnectionState) {
p.lock.Lock()

View File

@@ -38,7 +38,7 @@ func TestPeerExplicitAdd(t *testing.T) {
t.Fatalf("Failed to create address: %v", err)
}
direction := network.DirInbound
p.Add(id, address, direction)
p.Add(id, address, direction, []uint64{})
resAddress, err := p.Address(id)
if err != nil {
@@ -62,7 +62,7 @@ func TestPeerExplicitAdd(t *testing.T) {
t.Fatalf("Failed to create address: %v", err)
}
direction2 := network.DirOutbound
p.Add(id, address2, direction2)
p.Add(id, address2, direction2, []uint64{})
resAddress2, err := p.Address(id)
if err != nil {
@@ -156,7 +156,7 @@ func TestPeerChainState(t *testing.T) {
t.Fatalf("Failed to create address: %v", err)
}
direction := network.DirInbound
p.Add(id, address, direction)
p.Add(id, address, direction, []uint64{})
oldChainStartLastUpdated, err := p.ChainStateLastUpdated(id)
if err != nil {
@@ -205,7 +205,7 @@ func TestPeerBadResponses(t *testing.T) {
t.Fatalf("Failed to create address: %v", err)
}
direction := network.DirInbound
p.Add(id, address, direction)
p.Add(id, address, direction, []uint64{})
resBadResponses, err := p.BadResponses(id)
if err != nil {
@@ -458,7 +458,7 @@ func TestBestFinalized_returnsMaxValue(t *testing.T) {
p := peers.NewStatus(maxBadResponses)
for i := 0; i <= maxPeers+100; i++ {
p.Add(peer.ID(i), nil, network.DirOutbound)
p.Add(peer.ID(i), nil, network.DirOutbound, []uint64{})
p.SetConnectionState(peer.ID(i), peers.PeerConnected)
p.SetChainState(peer.ID(i), &pb.Status{
FinalizedEpoch: 10,
@@ -506,7 +506,7 @@ func addPeer(t *testing.T, p *peers.Status, state peers.PeerConnectionState) pee
if err != nil {
t.Fatalf("Unexpected error: %v", err)
}
p.Add(id, nil, network.DirUnknown)
p.Add(id, nil, network.DirUnknown, []uint64{})
p.SetConnectionState(id, state)
return id
}

View File

@@ -9,6 +9,7 @@ import (
"github.com/dgraph-io/ristretto"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
ds "github.com/ipfs/go-datastore"
dsync "github.com/ipfs/go-datastore/sync"
"github.com/libp2p/go-libp2p"
@@ -22,6 +23,8 @@ import (
rhost "github.com/libp2p/go-libp2p/p2p/host/routed"
ma "github.com/multiformats/go-multiaddr"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/beacon-chain/p2p/encoder"
"github.com/prysmaticlabs/prysm/beacon-chain/p2p/peers"
"github.com/prysmaticlabs/prysm/shared"
@@ -34,6 +37,9 @@ var _ = shared.Service(&Service{})
// Check local table every 5 seconds for newly added peers.
var pollingPeriod = 5 * time.Second
// searchLimit caps the number of nodes read per discovery v5 random-node search.
const searchLimit = 100
const prysmProtocolPrefix = "/prysm/0.0.0"
// maxBadResponses is the maximum number of bad responses from a peer before we stop talking to it.
@@ -152,7 +158,7 @@ func (s *Service) Start() {
s.host.ConnManager().Protect(peer.ID, "relay")
}
if len(s.cfg.Discv5BootStrapAddr) != 0 && !s.cfg.NoDiscovery {
if (len(s.cfg.Discv5BootStrapAddr) != 0 && !s.cfg.NoDiscovery) || s.cfg.EnableDiscv5 {
ipAddr := ipAddr()
listener, err := startDiscoveryV5(ipAddr, s.privKey, s.cfg)
if err != nil {
@@ -167,7 +173,6 @@ func (s *Service) Start() {
return
}
s.dv5Listener = listener
go s.listenForNewNodes()
}
@@ -215,13 +220,13 @@ func (s *Service) Start() {
runutil.RunEvery(s.ctx, 10*time.Second, s.updateMetrics)
multiAddrs := s.host.Network().ListenAddresses()
logIP4Addr(s.host.ID(), multiAddrs...)
logIPAddr(s.host.ID(), multiAddrs...)
p2pHostAddress := s.cfg.HostAddress
p2pTCPPort := s.cfg.TCPPort
if p2pHostAddress != "" {
logExternalIP4Addr(s.host.ID(), p2pHostAddress, p2pTCPPort)
logExternalIPAddr(s.host.ID(), p2pHostAddress, p2pTCPPort)
}
p2pHostDNS := s.cfg.HostDNS
@@ -293,11 +298,72 @@ func (s *Service) Peers() *peers.Status {
return s.peers
}
// RefreshENR refreshes our node's ENR entry with the committee IDs
// tracked for the given epoch, allowing our node to be dynamically
// discoverable by others through the subnets we are subscribed to.
func (s *Service) RefreshENR(epoch uint64) {
// Return early if discv5 isn't running.
if s.dv5Listener == nil {
return
}
bitV := bitfield.NewBitvector64()
committees := cache.CommitteeIDs.GetIDs(epoch)
for _, idx := range committees {
bitV.SetBitAt(idx, true)
}
entry := enr.WithEntry(attSubnetEnrKey, &bitV)
s.dv5Listener.LocalNode().Set(entry)
}
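For context, reading this entry back off a discovered node mirrors the write path above; retrieveAttSubnets, used during the subnet search below, presumably decodes the same bitvector. A minimal standalone sketch of such a decoder, assuming the same attSubnetEnrKey used when writing:

package p2psketch

import (
	"github.com/ethereum/go-ethereum/p2p/enr"
	"github.com/prysmaticlabs/go-bitfield"
)

// decodeAttSubnets is a hypothetical decoder for the attnets ENR entry: it
// loads the stored Bitvector64 and returns the indices of all set bits,
// i.e. the committee subnets the node claims to track.
func decodeAttSubnets(record *enr.Record, key string) ([]uint64, error) {
	bitV := bitfield.NewBitvector64()
	if err := record.Load(enr.WithEntry(key, &bitV)); err != nil {
		return nil, err
	}
	var subnets []uint64
	for i := uint64(0); i < bitV.Len(); i++ {
		if bitV.BitAt(i) {
			subnets = append(subnets, i)
		}
	}
	return subnets, nil
}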
// FindPeersWithSubnet performs a network search for peers
// subscribed to a particular subnet and then attempts to
// connect with those peers.
func (s *Service) FindPeersWithSubnet(index uint64) (bool, error) {
nodes := make([]*enode.Node, searchLimit)
num := s.dv5Listener.ReadRandomNodes(nodes)
exists := false
for _, node := range nodes[:num] {
if node.IP() == nil {
continue
}
subnets, err := retrieveAttSubnets(node.Record())
if err != nil {
return false, errors.Wrap(err, "could not retrieve subnets")
}
for _, comIdx := range subnets {
if comIdx == index {
multiAddr, err := convertToSingleMultiAddr(node)
if err != nil {
return false, err
}
info, err := peer.AddrInfoFromP2pAddr(multiAddr)
if err != nil {
return false, err
}
if s.peers.IsActive(info.ID) {
exists = true
continue
}
if s.host.Network().Connectedness(info.ID) == network.Connected {
exists = true
continue
}
s.peers.Add(info.ID, multiAddr, network.DirUnknown, subnets)
if err := s.connectWithPeer(*info); err != nil {
log.Errorf("Could not connect with peer %s: %v", info.String(), err)
}
exists = true
}
}
}
return exists, nil
}
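A caller would typically retry this search until a peer on the wanted subnet is found or the context is cancelled. A hedged usage sketch within the same package; ensurePeerOnSubnet is hypothetical and not part of this change:

// ensurePeerOnSubnet repeatedly samples random discovery v5 nodes via
// FindPeersWithSubnet until a peer subscribed to the given subnet has been
// found and dialed, or until ctx expires.
func ensurePeerOnSubnet(ctx context.Context, s *Service, index uint64) error {
	for {
		if err := ctx.Err(); err != nil {
			return err
		}
		found, err := s.FindPeersWithSubnet(index)
		if err != nil {
			return err
		}
		if found {
			return nil
		}
		time.Sleep(time.Second) // Back off before sampling another batch of nodes.
	}
}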
// listenForNewNodes watches for new nodes in the network and adds them to the peerstore.
func (s *Service) listenForNewNodes() {
runutil.RunEvery(s.ctx, pollingPeriod, func() {
nodes := s.dv5Listener.LookupRandom()
multiAddresses := convertToMultiAddr(nodes)
multiAddresses := s.processPeers(nodes)
s.connectWithAllPeers(multiAddresses)
})
}
@@ -311,25 +377,68 @@ func (s *Service) connectWithAllPeers(multiAddrs []ma.Multiaddr) {
for _, info := range addrInfos {
// make each dial non-blocking
go func(info peer.AddrInfo) {
if len(s.Peers().Active()) >= int(s.cfg.MaxPeers) {
log.WithFields(logrus.Fields{"peer": info.ID.String(),
"reason": "at peer limit"}).Trace("Not dialing peer")
return
}
if info.ID == s.host.ID() {
return
}
if s.Peers().IsBad(info.ID) {
return
}
if err := s.host.Connect(s.ctx, info); err != nil {
if err := s.connectWithPeer(info); err != nil {
log.Errorf("Could not connect with peer %s: %v", info.String(), err)
s.Peers().IncrementBadResponses(info.ID)
}
}(info)
}
}
func (s *Service) connectWithPeer(info peer.AddrInfo) error {
if len(s.Peers().Active()) >= int(s.cfg.MaxPeers) {
log.WithFields(logrus.Fields{"peer": info.ID.String(),
"reason": "at peer limit"}).Trace("Not dialing peer")
return nil
}
if info.ID == s.host.ID() {
return nil
}
if s.Peers().IsBad(info.ID) {
return nil
}
if err := s.host.Connect(s.ctx, info); err != nil {
s.Peers().IncrementBadResponses(info.ID)
return err
}
return nil
}
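Factoring the guard conditions (peer limit, self-dial, known-bad peer) into connectWithPeer keeps the random-discovery path and the subnet-search path on identical admission logic. Note that the guards return nil rather than an error, so a deliberately skipped dial never increments a peer's bad-response count; only a failed host.Connect does.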
// processPeers processes new peers discovered via our DHT, registering them with the peer handler and returning their multiaddrs.
func (s *Service) processPeers(nodes []*enode.Node) []ma.Multiaddr {
var multiAddrs []ma.Multiaddr
for _, node := range nodes {
// Ignore nodes with no IP address stored.
if node.IP() == nil {
continue
}
multiAddr, err := convertToSingleMultiAddr(node)
if err != nil {
log.WithError(err).Error("Could not convert to multiAddr")
continue
}
peerData, err := peer.AddrInfoFromP2pAddr(multiAddr)
if err != nil {
log.WithError(err).Error("Could not get peer id")
continue
}
if s.peers.IsActive(peerData.ID) {
continue
}
if s.host.Network().Connectedness(peerData.ID) == network.Connected {
continue
}
indices, err := retrieveAttSubnets(node.Record())
if err != nil {
log.WithError(err).Error("Could not retrieve attestation subnets")
continue
}
// Add the peer to the peer handler.
s.peers.Add(peerData.ID, multiAddr, network.DirUnknown, indices)
multiAddrs = append(multiAddrs, multiAddr)
}
return multiAddrs
}
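processPeers centralizes the vetting previously inlined in listenForNewNodes: a discovered node must carry an IP, convert cleanly to a multiaddr, and not already be active or connected before it is registered, along with its advertised subnets, ahead of any dial attempt.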
func (s *Service) connectToBootnodes() error {
nodes := make([]*enode.Node, 0, len(s.cfg.Discv5BootStrapAddr))
for _, addr := range s.cfg.Discv5BootStrapAddr {
@@ -358,10 +467,10 @@ func (s *Service) addKadDHTNodesToExclusionList(addr string) error {
return nil
}
func logIP4Addr(id peer.ID, addrs ...ma.Multiaddr) {
func logIPAddr(id peer.ID, addrs ...ma.Multiaddr) {
var correctAddr ma.Multiaddr
for _, addr := range addrs {
if strings.Contains(addr.String(), "/ip4/") {
if strings.Contains(addr.String(), "/ip4/") || strings.Contains(addr.String(), "/ip6/") {
correctAddr = addr
break
}
@@ -374,13 +483,16 @@ func logIP4Addr(id peer.ID, addrs ...ma.Multiaddr) {
}
}
func logExternalIP4Addr(id peer.ID, addr string, port uint) {
func logExternalIPAddr(id peer.ID, addr string, port uint) {
if addr != "" {
p := strconv.FormatUint(uint64(port), 10)
multiAddr, err := multiAddressBuilder(addr, port)
if err != nil {
log.Errorf("Could not create multiaddress: %v", err)
return
}
log.WithField(
"multiAddr",
"/ip4/"+addr+"/tcp/"+p+"/p2p/"+id.String(),
multiAddr.String()+"/p2p/"+id.String(),
).Info("Node started external p2p server")
}
}
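multiAddressBuilder itself lies outside this hunk; presumably it now selects the address family from the parsed IP, which is what makes the logged multiaddr correct for IPv6 hosts. A standalone sketch of that construction, with buildMultiAddress as a hypothetical stand-in:

package p2psketch

import (
	"fmt"
	"net"

	ma "github.com/multiformats/go-multiaddr"
)

// buildMultiAddress chooses the /ip4/ or /ip6/ prefix based on the parsed
// address family before assembling the TCP multiaddr.
func buildMultiAddress(ipAddr string, port uint) (ma.Multiaddr, error) {
	parsedIP := net.ParseIP(ipAddr)
	if parsedIP == nil {
		return nil, fmt.Errorf("invalid ip address provided: %s", ipAddr)
	}
	if parsedIP.To4() != nil {
		return ma.NewMultiaddr(fmt.Sprintf("/ip4/%s/tcp/%d", parsedIP.String(), port))
	}
	return ma.NewMultiaddr(fmt.Sprintf("/ip6/%s/tcp/%d", parsedIP.String(), port))
}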

View File

@@ -52,6 +52,10 @@ func (mockListener) RequestENR(*enode.Node) (*enode.Node, error) {
panic("implement me")
}
func (mockListener) LocalNode() *enode.LocalNode {
panic("implement me")
}
func createPeer(t *testing.T, cfg *Config, port int) (Listener, host.Host) {
h, pkey, ipAddr := createHost(t, port)
cfg.UDPPort = uint(port)

View File

@@ -25,12 +25,12 @@ func (m *MockPeersProvider) Peers() *peers.Status {
// Pretend we are connected to two peers
id0, _ := peer.IDB58Decode("16Uiu2HAkyWZ4Ni1TpvDS8dPxsozmHY85KaiFjodQuV6Tz5tkHVeR")
ma0, _ := ma.NewMultiaddr("/ip4/213.202.254.180/tcp/13000")
m.peers.Add(id0, ma0, network.DirInbound)
m.peers.Add(id0, ma0, network.DirInbound, []uint64{})
m.peers.SetConnectionState(id0, peers.PeerConnected)
m.peers.SetChainState(id0, &pb.Status{FinalizedEpoch: uint64(10)})
id1, _ := peer.IDB58Decode("16Uiu2HAm4HgJ9N1o222xK61o7LSgToYWoAy1wNTJRkh9gLZapVAy")
ma1, _ := ma.NewMultiaddr("/ip4/52.23.23.253/tcp/30000/ipfs/QmfAgkmjiZNZhr2wFN9TwaRgHouMTBT6HELyzE5A3BT2wK/p2p-circuit")
m.peers.Add(id1, ma1, network.DirOutbound)
m.peers.Add(id1, ma1, network.DirOutbound, []uint64{})
m.peers.SetConnectionState(id1, peers.PeerConnected)
m.peers.SetChainState(id1, &pb.Status{FinalizedEpoch: uint64(11)})
}

View File

@@ -159,7 +159,7 @@ func (p *TestP2P) AddConnectionHandler(f func(ctx context.Context, id peer.ID) e
ConnectedF: func(net network.Network, conn network.Conn) {
// Must be handled in a goroutine as this callback cannot be blocking.
go func() {
p.peers.Add(conn.RemotePeer(), conn.RemoteMultiaddr(), conn.Stat().Direction)
p.peers.Add(conn.RemotePeer(), conn.RemoteMultiaddr(), conn.Stat().Direction, []uint64{})
ctx := context.Background()
p.peers.SetConnectionState(conn.RemotePeer(), peers.PeerConnecting)
@@ -227,3 +227,13 @@ func (p *TestP2P) Started() bool {
func (p *TestP2P) Peers() *peers.Status {
return p.peers
}
// FindPeersWithSubnet mocks the p2p func.
func (p *TestP2P) FindPeersWithSubnet(index uint64) (bool, error) {
return false, nil
}
// RefreshENR mocks the p2p func.
func (p *TestP2P) RefreshENR(epoch uint64) {
}

View File

@@ -18,14 +18,12 @@ go_library(
"//beacon-chain/operations/voluntaryexits:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/powchain:go_default_library",
"//beacon-chain/rpc/aggregator:go_default_library",
"//beacon-chain/rpc/beacon:go_default_library",
"//beacon-chain/rpc/node:go_default_library",
"//beacon-chain/rpc/validator:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/sync:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//proto/beacon/rpc/v1:go_default_library",
"//proto/slashing:go_default_library",
"//shared/featureconfig:go_default_library",
"//shared/params:go_default_library",

View File

@@ -1,14 +0,0 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = ["server.go"],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/rpc/aggregator",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//beacon-chain/rpc/validator:go_default_library",
"//proto/beacon/rpc/v1:go_default_library",
"@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
)

View File

@@ -1,40 +0,0 @@
package aggregator
import (
"context"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
"github.com/prysmaticlabs/prysm/beacon-chain/rpc/validator"
pb "github.com/prysmaticlabs/prysm/proto/beacon/rpc/v1"
"go.opencensus.io/trace"
)
// Server defines a server implementation of the gRPC aggregator service.
// Deprecated: Do not use.
type Server struct {
ValidatorServer *validator.Server
}
// SubmitAggregateAndProof is called by a validator when it is assigned to be an aggregator.
// The beacon node will broadcast the aggregated attestation and proof on the aggregator's behalf.
// Deprecated: Use github.com/prysmaticlabs/prysm/beacon-chain/rpc/validator.SubmitAggregateAndProof.
// TODO(4952): Delete this method.
func (as *Server) SubmitAggregateAndProof(ctx context.Context, req *pb.AggregationRequest) (*pb.AggregationResponse, error) {
ctx, span := trace.StartSpan(ctx, "AggregatorServer.SubmitAggregation")
defer span.End()
span.AddAttributes(trace.Int64Attribute("slot", int64(req.Slot)))
request := &ethpb.AggregationRequest{
Slot: req.Slot,
CommitteeIndex: req.CommitteeIndex,
PublicKey: req.PublicKey,
SlotSignature: req.SlotSignature,
}
// Passthrough request to non-deprecated method.
res, err := as.ValidatorServer.SubmitAggregateAndProof(ctx, request)
if err != nil {
return nil, err
}
return &pb.AggregationResponse{Root: res.AttestationDataRoot}, nil
}

View File

@@ -89,8 +89,10 @@ go_test(
"//beacon-chain/state:go_default_library",
"//proto/beacon/p2p/v1:go_default_library",
"//shared/attestationutil:go_default_library",
"//shared/bytesutil:go_default_library",
"//shared/featureconfig:go_default_library",
"//shared/params:go_default_library",
"//shared/roughtime:go_default_library",
"//shared/testutil:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
"@com_github_gogo_protobuf//types:go_default_library",

View File

@@ -45,53 +45,26 @@ func (bs *Server) ListAttestations(
return nil, status.Errorf(codes.InvalidArgument, "Requested page size %d can not be greater than max size %d",
req.PageSize, flags.Get().MaxPageSize)
}
var atts []*ethpb.Attestation
var blocks []*ethpb.SignedBeaconBlock
var err error
switch q := req.QueryFilter.(type) {
case *ethpb.ListAttestationsRequest_Genesis:
genBlk, err := bs.BeaconDB.GenesisBlock(ctx)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get genesis block: %v", err)
}
if genBlk == nil {
return nil, status.Error(codes.Internal, "Could not find genesis block")
}
genesisRoot, err := ssz.HashTreeRoot(genBlk.Block)
if err != nil {
return nil, err
}
atts, err = bs.BeaconDB.Attestations(ctx, filters.NewFilter().SetHeadBlockRoot(genesisRoot[:]))
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not fetch genesis attestations: %v", err)
}
case *ethpb.ListAttestationsRequest_HeadBlockRoot:
atts, err = bs.BeaconDB.Attestations(ctx, filters.NewFilter().SetHeadBlockRoot(q.HeadBlockRoot))
case *ethpb.ListAttestationsRequest_GenesisEpoch:
blocks, err = bs.BeaconDB.Blocks(ctx, filters.NewFilter().SetStartEpoch(0).SetEndEpoch(0))
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not fetch attestations: %v", err)
}
case *ethpb.ListAttestationsRequest_SourceEpoch:
atts, err = bs.BeaconDB.Attestations(ctx, filters.NewFilter().SetSourceEpoch(q.SourceEpoch))
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not fetch attestations: %v", err)
}
case *ethpb.ListAttestationsRequest_SourceRoot:
atts, err = bs.BeaconDB.Attestations(ctx, filters.NewFilter().SetSourceRoot(q.SourceRoot))
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not fetch attestations: %v", err)
}
case *ethpb.ListAttestationsRequest_TargetEpoch:
atts, err = bs.BeaconDB.Attestations(ctx, filters.NewFilter().SetTargetEpoch(q.TargetEpoch))
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not fetch attestations: %v", err)
}
case *ethpb.ListAttestationsRequest_TargetRoot:
atts, err = bs.BeaconDB.Attestations(ctx, filters.NewFilter().SetTargetRoot(q.TargetRoot))
case *ethpb.ListAttestationsRequest_Epoch:
blocks, err = bs.BeaconDB.Blocks(ctx, filters.NewFilter().SetStartEpoch(q.Epoch).SetEndEpoch(q.Epoch))
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not fetch attestations: %v", err)
}
default:
return nil, status.Error(codes.InvalidArgument, "Must specify a filter criterion for fetching attestations")
}
atts := make([]*ethpb.Attestation, 0, params.BeaconConfig().MaxAttestations*uint64(len(blocks)))
for _, block := range blocks {
atts = append(atts, block.Block.Body.Attestations...)
}
// We sort attestations according to the Sortable interface.
sort.Sort(sortableAttestations(atts))
numAttestations := len(atts)
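Attestations are now sourced from the epoch's blocks, flattened, and sorted before paging. The handler presumably shares the project's pagination helper; a hypothetical sketch of the token-based slicing, assuming ethpb, fmt, and strconv are in scope:

// paginateAtts illustrates token-based paging over a sorted slice: page
// token N selects the half-open range [N*size, N*size+size).
func paginateAtts(atts []*ethpb.Attestation, pageSize, pageToken int) ([]*ethpb.Attestation, string, error) {
	start := pageToken * pageSize
	if start >= len(atts) {
		return nil, "", fmt.Errorf("page token %d out of range for %d attestations", pageToken, len(atts))
	}
	end := start + pageSize
	if end > len(atts) {
		end = len(atts)
	}
	nextToken := ""
	if end < len(atts) {
		nextToken = strconv.Itoa(pageToken + 1)
	}
	return atts[start:end], nextToken, nil
}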
@@ -127,25 +100,27 @@ func (bs *Server) ListAttestations(
func (bs *Server) ListIndexedAttestations(
ctx context.Context, req *ethpb.ListIndexedAttestationsRequest,
) (*ethpb.ListIndexedAttestationsResponse, error) {
atts := make([]*ethpb.Attestation, 0)
blocks := make([]*ethpb.SignedBeaconBlock, 0)
var err error
epoch := helpers.SlotToEpoch(bs.GenesisTimeFetcher.CurrentSlot())
switch q := req.QueryFilter.(type) {
case *ethpb.ListIndexedAttestationsRequest_TargetEpoch:
atts, err = bs.BeaconDB.Attestations(ctx, filters.NewFilter().SetTargetEpoch(q.TargetEpoch))
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not fetch attestations: %v", err)
}
epoch = q.TargetEpoch
case *ethpb.ListIndexedAttestationsRequest_GenesisEpoch:
atts, err = bs.BeaconDB.Attestations(ctx, filters.NewFilter().SetTargetEpoch(0))
blocks, err = bs.BeaconDB.Blocks(ctx, filters.NewFilter().SetStartEpoch(0).SetEndEpoch(0))
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not fetch attestations: %v", err)
}
epoch = 0
case *ethpb.ListIndexedAttestationsRequest_Epoch:
blocks, err = bs.BeaconDB.Blocks(ctx, filters.NewFilter().SetStartEpoch(q.Epoch).SetEndEpoch(q.Epoch))
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not fetch attestations: %v", err)
}
epoch = q.Epoch
default:
return nil, status.Error(codes.InvalidArgument, "Must specify a filter criterion for fetching attestations")
}
atts := make([]*ethpb.Attestation, 0, params.BeaconConfig().MaxAttestations*uint64(len(blocks)))
for _, block := range blocks {
atts = append(atts, block.Block.Body.Attestations...)
}
// We sort attestations according to the Sortable interface.
sort.Sort(sortableAttestations(atts))
numAttestations := len(atts)
@@ -185,15 +160,7 @@ func (bs *Server) ListIndexedAttestations(
continue
}
committee := committeesBySlot[att.Data.Slot].Committees[att.Data.CommitteeIndex]
idxAtt, err := attestationutil.ConvertToIndexed(ctx, atts[i], committee.ValidatorIndices)
if err != nil {
return nil, status.Errorf(
codes.Internal,
"Could not convert attestation with slot %d to indexed form: %v",
att.Data.Slot,
err,
)
}
idxAtt := attestationutil.ConvertToIndexed(ctx, atts[i], committee.ValidatorIndices)
indexedAtts[i] = idxAtt
}
@@ -311,15 +278,7 @@ func (bs *Server) StreamIndexedAttestations(
continue
}
committee := committeesForSlot.Committees[att.Data.CommitteeIndex]
idxAtt, err := attestationutil.ConvertToIndexed(stream.Context(), att, committee.ValidatorIndices)
if err != nil {
return status.Errorf(
codes.Internal,
"Could not convert attestation with slot %d to indexed form: %v",
att.Data.Slot,
err,
)
}
idxAtt := attestationutil.ConvertToIndexed(stream.Context(), att, committee.ValidatorIndices)
if err := stream.Send(idxAtt); err != nil {
return status.Errorf(codes.Unavailable, "Could not send over stream: %v", err)
}

View File

@@ -4,6 +4,7 @@ import (
"context"
"fmt"
"reflect"
"sort"
"strconv"
"strings"
"testing"
@@ -26,6 +27,7 @@ import (
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
pbp2p "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/attestationutil"
"github.com/prysmaticlabs/prysm/shared/bytesutil"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/testutil"
)
@@ -53,7 +55,7 @@ func TestServer_ListAttestations_NoResults(t *testing.T) {
NextPageToken: strconv.Itoa(0),
}
res, err := bs.ListAttestations(ctx, &ethpb.ListAttestationsRequest{
QueryFilter: &ethpb.ListAttestationsRequest_SourceEpoch{SourceEpoch: 0},
QueryFilter: &ethpb.ListAttestationsRequest_GenesisEpoch{GenesisEpoch: true},
})
if err != nil {
t.Fatal(err)
@@ -83,17 +85,21 @@ func TestServer_ListAttestations_Genesis(t *testing.T) {
// Should throw an error if no genesis data is found.
if _, err := bs.ListAttestations(ctx, &ethpb.ListAttestationsRequest{
QueryFilter: &ethpb.ListAttestationsRequest_Genesis{
Genesis: true,
QueryFilter: &ethpb.ListAttestationsRequest_GenesisEpoch{
GenesisEpoch: true,
},
}); err != nil && !strings.Contains(err.Error(), "Could not find genesis") {
t.Fatal(err)
}
att := &ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2, CommitteeIndex: 1}}
parentRoot := [32]byte{1, 2, 3}
blk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{
Slot: 0,
ParentRoot: parentRoot[:],
Body: &ethpb.BeaconBlockBody{
Attestations: []*ethpb.Attestation{att},
},
},
}
root, err := ssz.HashTreeRoot(blk.Block)
@@ -106,16 +112,6 @@ func TestServer_ListAttestations_Genesis(t *testing.T) {
if err := db.SaveGenesisBlockRoot(ctx, root); err != nil {
t.Fatal(err)
}
att := &ethpb.Attestation{
AggregationBits: bitfield.Bitlist{0b11},
Data: &ethpb.AttestationData{
Slot: 0,
BeaconBlockRoot: root[:],
},
}
if err := db.SaveAttestation(ctx, att); err != nil {
t.Fatal(err)
}
wanted := &ethpb.ListAttestationsResponse{
Attestations: []*ethpb.Attestation{att},
NextPageToken: "",
@@ -123,8 +119,8 @@ func TestServer_ListAttestations_Genesis(t *testing.T) {
}
res, err := bs.ListAttestations(ctx, &ethpb.ListAttestationsRequest{
QueryFilter: &ethpb.ListAttestationsRequest_Genesis{
Genesis: true,
QueryFilter: &ethpb.ListAttestationsRequest_GenesisEpoch{
GenesisEpoch: true,
},
})
if err != nil {
@@ -140,8 +136,8 @@ func TestServer_ListAttestations_Genesis(t *testing.T) {
t.Fatal(err)
}
if _, err := bs.ListAttestations(ctx, &ethpb.ListAttestationsRequest{
QueryFilter: &ethpb.ListAttestationsRequest_Genesis{
Genesis: true,
QueryFilter: &ethpb.ListAttestationsRequest_GenesisEpoch{
GenesisEpoch: true,
},
}); err != nil && !strings.Contains(err.Error(), "Found more than 1") {
t.Fatal(err)
@@ -153,20 +149,29 @@ func TestServer_ListAttestations_NoPagination(t *testing.T) {
defer dbTest.TeardownDB(t, db)
ctx := context.Background()
count := uint64(10)
count := uint64(8)
atts := make([]*ethpb.Attestation, 0, count)
for i := uint64(0); i < count; i++ {
attExample := &ethpb.Attestation{
Data: &ethpb.AttestationData{
BeaconBlockRoot: []byte("root"),
Slot: i,
blockExample := &ethpb.SignedBeaconBlock{
Block: &ethpb.BeaconBlock{
Slot: i,
Body: &ethpb.BeaconBlockBody{
Attestations: []*ethpb.Attestation{
{
Data: &ethpb.AttestationData{
BeaconBlockRoot: []byte("root"),
Slot: i,
},
AggregationBits: bitfield.Bitlist{0b11},
},
},
},
},
AggregationBits: bitfield.Bitlist{0b11},
}
if err := db.SaveAttestation(ctx, attExample); err != nil {
if err := db.SaveBlock(ctx, blockExample); err != nil {
t.Fatal(err)
}
atts = append(atts, attExample)
atts = append(atts, blockExample.Block.Body.Attestations...)
}
bs := &Server{
@@ -174,8 +179,8 @@ func TestServer_ListAttestations_NoPagination(t *testing.T) {
}
received, err := bs.ListAttestations(ctx, &ethpb.ListAttestationsRequest{
QueryFilter: &ethpb.ListAttestationsRequest_HeadBlockRoot{
HeadBlockRoot: []byte("root"),
QueryFilter: &ethpb.ListAttestationsRequest_GenesisEpoch{
GenesisEpoch: true,
},
})
if err != nil {
@@ -183,7 +188,7 @@ func TestServer_ListAttestations_NoPagination(t *testing.T) {
}
if !reflect.DeepEqual(atts, received.Attestations) {
t.Fatalf("incorrect attestations response: wanted %v, received %v", atts, received.Attestations)
t.Fatalf("incorrect attestations response: wanted \n%v, received \n%v", atts, received.Attestations)
}
}
@@ -198,56 +203,82 @@ func TestServer_ListAttestations_FiltersCorrectly(t *testing.T) {
targetRoot := []byte{7, 8, 9}
targetEpoch := uint64(7)
unknownRoot := []byte{1, 1, 1}
atts := []*ethpb.Attestation{
blocks := []*ethpb.SignedBeaconBlock{
{
Data: &ethpb.AttestationData{
BeaconBlockRoot: someRoot,
Source: &ethpb.Checkpoint{
Root: sourceRoot,
Epoch: sourceEpoch,
},
Target: &ethpb.Checkpoint{
Root: targetRoot,
Epoch: targetEpoch,
},
Slot: 3,
},
AggregationBits: bitfield.Bitlist{0b11},
},
{
Data: &ethpb.AttestationData{
BeaconBlockRoot: unknownRoot,
Source: &ethpb.Checkpoint{
Root: sourceRoot,
Epoch: sourceEpoch,
},
Target: &ethpb.Checkpoint{
Root: targetRoot,
Epoch: targetEpoch,
},
Block: &ethpb.BeaconBlock{
Slot: 4,
Body: &ethpb.BeaconBlockBody{
Attestations: []*ethpb.Attestation{
{
Data: &ethpb.AttestationData{
BeaconBlockRoot: someRoot,
Source: &ethpb.Checkpoint{
Root: sourceRoot,
Epoch: sourceEpoch,
},
Target: &ethpb.Checkpoint{
Root: targetRoot,
Epoch: targetEpoch,
},
Slot: 3,
},
AggregationBits: bitfield.Bitlist{0b11},
},
},
},
},
AggregationBits: bitfield.Bitlist{0b11},
},
{
Data: &ethpb.AttestationData{
BeaconBlockRoot: someRoot,
Source: &ethpb.Checkpoint{
Root: unknownRoot,
Epoch: sourceEpoch,
Block: &ethpb.BeaconBlock{
Slot: 5 + params.BeaconConfig().SlotsPerEpoch,
Body: &ethpb.BeaconBlockBody{
Attestations: []*ethpb.Attestation{
{
Data: &ethpb.AttestationData{
BeaconBlockRoot: someRoot,
Source: &ethpb.Checkpoint{
Root: sourceRoot,
Epoch: sourceEpoch,
},
Target: &ethpb.Checkpoint{
Root: targetRoot,
Epoch: targetEpoch,
},
Slot: 4 + params.BeaconConfig().SlotsPerEpoch,
},
AggregationBits: bitfield.Bitlist{0b11},
},
},
},
},
},
{
Block: &ethpb.BeaconBlock{
Slot: 5,
Body: &ethpb.BeaconBlockBody{
Attestations: []*ethpb.Attestation{
{
Data: &ethpb.AttestationData{
BeaconBlockRoot: someRoot,
Source: &ethpb.Checkpoint{
Root: sourceRoot,
Epoch: sourceEpoch,
},
Target: &ethpb.Checkpoint{
Root: targetRoot,
Epoch: targetEpoch,
},
Slot: 4,
},
AggregationBits: bitfield.Bitlist{0b11},
},
},
},
Target: &ethpb.Checkpoint{
Root: unknownRoot,
Epoch: targetEpoch,
},
Slot: 5,
},
AggregationBits: bitfield.Bitlist{0b11},
},
}
if err := db.SaveAttestations(ctx, atts); err != nil {
if err := db.SaveBlocks(ctx, blocks); err != nil {
t.Fatal(err)
}
@@ -256,49 +287,22 @@ func TestServer_ListAttestations_FiltersCorrectly(t *testing.T) {
}
received, err := bs.ListAttestations(ctx, &ethpb.ListAttestationsRequest{
QueryFilter: &ethpb.ListAttestationsRequest_HeadBlockRoot{HeadBlockRoot: someRoot},
QueryFilter: &ethpb.ListAttestationsRequest_Epoch{Epoch: 1},
})
if err != nil {
t.Fatal(err)
}
if len(received.Attestations) != 1 {
t.Errorf("Wanted 1 matching attestation for epoch %d, received %d", 1, len(received.Attestations))
}
received, err = bs.ListAttestations(ctx, &ethpb.ListAttestationsRequest{
QueryFilter: &ethpb.ListAttestationsRequest_GenesisEpoch{GenesisEpoch: true},
})
if err != nil {
t.Fatal(err)
}
if len(received.Attestations) != 2 {
t.Errorf("Wanted 2 matching attestations for the genesis epoch, received %d", len(received.Attestations))
}
received, err = bs.ListAttestations(ctx, &ethpb.ListAttestationsRequest{
QueryFilter: &ethpb.ListAttestationsRequest_SourceEpoch{SourceEpoch: sourceEpoch},
})
if err != nil {
t.Fatal(err)
}
if len(received.Attestations) != 3 {
t.Errorf("Wanted 3 matching attestations with source epoch %d, received %d", sourceEpoch, len(received.Attestations))
}
received, err = bs.ListAttestations(ctx, &ethpb.ListAttestationsRequest{
QueryFilter: &ethpb.ListAttestationsRequest_SourceRoot{SourceRoot: sourceRoot},
})
if err != nil {
t.Fatal(err)
}
if len(received.Attestations) != 2 {
t.Errorf("Wanted 2 matching attestations with source root %#x, received %d", sourceRoot, len(received.Attestations))
}
received, err = bs.ListAttestations(ctx, &ethpb.ListAttestationsRequest{
QueryFilter: &ethpb.ListAttestationsRequest_TargetEpoch{TargetEpoch: targetEpoch},
})
if err != nil {
t.Fatal(err)
}
if len(received.Attestations) != 3 {
t.Errorf("Wanted 3 matching attestations with target epoch %d, received %d", targetEpoch, len(received.Attestations))
}
received, err = bs.ListAttestations(ctx, &ethpb.ListAttestationsRequest{
QueryFilter: &ethpb.ListAttestationsRequest_TargetRoot{TargetRoot: targetRoot},
})
if err != nil {
t.Fatal(err)
}
if len(received.Attestations) != 2 {
t.Errorf("Wanted 2 matching attestations with target root %#x, received %d", targetRoot, len(received.Attestations))
t.Errorf("Wanted 2 matching attestations for epoch %d, received %d", 0, len(received.Attestations))
}
}
@@ -307,144 +311,113 @@ func TestServer_ListAttestations_Pagination_CustomPageParameters(t *testing.T) {
defer dbTest.TeardownDB(t, db)
ctx := context.Background()
count := uint64(100)
count := params.BeaconConfig().SlotsPerEpoch * 4
atts := make([]*ethpb.Attestation, 0, count)
for i := uint64(0); i < count; i++ {
attExample := &ethpb.Attestation{
Data: &ethpb.AttestationData{
BeaconBlockRoot: []byte("root"),
Slot: i,
},
AggregationBits: bitfield.Bitlist{0b11},
for i := uint64(0); i < params.BeaconConfig().SlotsPerEpoch; i++ {
for s := uint64(0); s < 4; s++ {
blockExample := &ethpb.SignedBeaconBlock{
Block: &ethpb.BeaconBlock{
Slot: i,
Body: &ethpb.BeaconBlockBody{
Attestations: []*ethpb.Attestation{
{
Data: &ethpb.AttestationData{
CommitteeIndex: s,
Slot: i,
},
AggregationBits: bitfield.Bitlist{0b11},
},
},
},
},
}
if err := db.SaveBlock(ctx, blockExample); err != nil {
t.Fatal(err)
}
atts = append(atts, blockExample.Block.Body.Attestations...)
}
if err := db.SaveAttestation(ctx, attExample); err != nil {
t.Fatal(err)
}
atts = append(atts, attExample)
}
sort.Sort(sortableAttestations(atts))
bs := &Server{
BeaconDB: db,
}
tests := []struct {
req *ethpb.ListAttestationsRequest
res *ethpb.ListAttestationsResponse
name string
req *ethpb.ListAttestationsRequest
res *ethpb.ListAttestationsResponse
}{
{
name: "1st of 3 pages",
req: &ethpb.ListAttestationsRequest{
QueryFilter: &ethpb.ListAttestationsRequest_HeadBlockRoot{
HeadBlockRoot: []byte("root"),
QueryFilter: &ethpb.ListAttestationsRequest_GenesisEpoch{
GenesisEpoch: true,
},
PageToken: strconv.Itoa(1),
PageSize: 3,
},
res: &ethpb.ListAttestationsResponse{
Attestations: []*ethpb.Attestation{
{Data: &ethpb.AttestationData{
BeaconBlockRoot: []byte("root"),
Slot: 3,
},
AggregationBits: bitfield.Bitlist{0b11}},
{Data: &ethpb.AttestationData{
BeaconBlockRoot: []byte("root"),
Slot: 4,
},
AggregationBits: bitfield.Bitlist{0b11}},
{Data: &ethpb.AttestationData{
BeaconBlockRoot: []byte("root"),
Slot: 5,
},
AggregationBits: bitfield.Bitlist{0b11}},
atts[3],
atts[4],
atts[5],
},
NextPageToken: strconv.Itoa(2),
TotalSize: int32(count)}},
TotalSize: int32(count),
},
},
{
name: "10 of size 1",
req: &ethpb.ListAttestationsRequest{
QueryFilter: &ethpb.ListAttestationsRequest_HeadBlockRoot{
HeadBlockRoot: []byte("root"),
QueryFilter: &ethpb.ListAttestationsRequest_GenesisEpoch{
GenesisEpoch: true,
},
PageToken: strconv.Itoa(10),
PageSize: 5,
PageSize: 1,
},
res: &ethpb.ListAttestationsResponse{
Attestations: []*ethpb.Attestation{
{Data: &ethpb.AttestationData{
BeaconBlockRoot: []byte("root"),
Slot: 50,
},
AggregationBits: bitfield.Bitlist{0b11}},
{Data: &ethpb.AttestationData{
BeaconBlockRoot: []byte("root"),
Slot: 51,
},
AggregationBits: bitfield.Bitlist{0b11}},
{Data: &ethpb.AttestationData{
BeaconBlockRoot: []byte("root"),
Slot: 52,
},
AggregationBits: bitfield.Bitlist{0b11}},
{Data: &ethpb.AttestationData{
BeaconBlockRoot: []byte("root"),
Slot: 53,
},
AggregationBits: bitfield.Bitlist{0b11}},
{Data: &ethpb.AttestationData{
BeaconBlockRoot: []byte("root"),
Slot: 54,
}, AggregationBits: bitfield.Bitlist{0b11}},
atts[10],
},
NextPageToken: strconv.Itoa(11),
TotalSize: int32(count)}},
TotalSize: int32(count),
},
},
{
name: "2 of size 8",
req: &ethpb.ListAttestationsRequest{
QueryFilter: &ethpb.ListAttestationsRequest_HeadBlockRoot{
HeadBlockRoot: []byte("root"),
QueryFilter: &ethpb.ListAttestationsRequest_GenesisEpoch{
GenesisEpoch: true,
},
PageToken: strconv.Itoa(33),
PageSize: 3,
PageToken: strconv.Itoa(2),
PageSize: 8,
},
res: &ethpb.ListAttestationsResponse{
Attestations: []*ethpb.Attestation{
{Data: &ethpb.AttestationData{
BeaconBlockRoot: []byte("root"),
Slot: 99,
},
AggregationBits: bitfield.Bitlist{0b11}},
atts[16],
atts[17],
atts[18],
atts[19],
atts[20],
atts[21],
atts[22],
atts[23],
},
NextPageToken: "",
TotalSize: int32(count)}},
{
req: &ethpb.ListAttestationsRequest{
QueryFilter: &ethpb.ListAttestationsRequest_HeadBlockRoot{
HeadBlockRoot: []byte("root"),
},
PageSize: 2,
},
res: &ethpb.ListAttestationsResponse{
Attestations: []*ethpb.Attestation{
{Data: &ethpb.AttestationData{
BeaconBlockRoot: []byte("root"),
},
AggregationBits: bitfield.Bitlist{0b11}},
{Data: &ethpb.AttestationData{
BeaconBlockRoot: []byte("root"),
Slot: 1,
},
AggregationBits: bitfield.Bitlist{0b11},
},
},
NextPageToken: strconv.Itoa(1),
TotalSize: int32(count)}},
NextPageToken: strconv.Itoa(3),
TotalSize: int32(count)},
},
}
for _, test := range tests {
res, err := bs.ListAttestations(ctx, test.req)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(res, test.res) {
t.Errorf("Incorrect attestations response, wanted %v, received %v", test.res, res)
}
t.Run(test.name, func(t *testing.T) {
res, err := bs.ListAttestations(ctx, test.req)
if err != nil {
t.Fatal(err)
}
if !proto.Equal(res, test.res) {
t.Errorf("Incorrect attestations response, wanted \n%v, received \n%v", test.res, res)
}
})
}
}
@@ -456,17 +429,25 @@ func TestServer_ListAttestations_Pagination_OutOfRange(t *testing.T) {
count := uint64(1)
atts := make([]*ethpb.Attestation, 0, count)
for i := uint64(0); i < count; i++ {
attExample := &ethpb.Attestation{
Data: &ethpb.AttestationData{
BeaconBlockRoot: []byte("root"),
Slot: i,
blockExample := &ethpb.SignedBeaconBlock{
Block: &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
Attestations: []*ethpb.Attestation{
{
Data: &ethpb.AttestationData{
BeaconBlockRoot: []byte("root"),
Slot: i,
},
AggregationBits: bitfield.Bitlist{0b11},
},
},
},
},
AggregationBits: bitfield.Bitlist{0b11},
}
if err := db.SaveAttestation(ctx, attExample); err != nil {
if err := db.SaveBlock(ctx, blockExample); err != nil {
t.Fatal(err)
}
atts = append(atts, attExample)
atts = append(atts, blockExample.Block.Body.Attestations...)
}
bs := &Server{
@@ -474,8 +455,8 @@ func TestServer_ListAttestations_Pagination_OutOfRange(t *testing.T) {
}
req := &ethpb.ListAttestationsRequest{
QueryFilter: &ethpb.ListAttestationsRequest_HeadBlockRoot{
HeadBlockRoot: []byte("root"),
QueryFilter: &ethpb.ListAttestationsRequest_Epoch{
Epoch: 0,
},
PageToken: strconv.Itoa(1),
PageSize: 100,
@@ -506,17 +487,25 @@ func TestServer_ListAttestations_Pagination_DefaultPageSize(t *testing.T) {
count := uint64(params.BeaconConfig().DefaultPageSize)
atts := make([]*ethpb.Attestation, 0, count)
for i := uint64(0); i < count; i++ {
attExample := &ethpb.Attestation{
Data: &ethpb.AttestationData{
BeaconBlockRoot: []byte("root"),
Slot: i,
blockExample := &ethpb.SignedBeaconBlock{
Block: &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
Attestations: []*ethpb.Attestation{
{
Data: &ethpb.AttestationData{
BeaconBlockRoot: []byte("root"),
Slot: i,
},
AggregationBits: bitfield.Bitlist{0b11},
},
},
},
},
AggregationBits: bitfield.Bitlist{0b11},
}
if err := db.SaveAttestation(ctx, attExample); err != nil {
if err := db.SaveBlock(ctx, blockExample); err != nil {
t.Fatal(err)
}
atts = append(atts, attExample)
atts = append(atts, blockExample.Block.Body.Attestations...)
}
bs := &Server{
@@ -524,8 +513,8 @@ func TestServer_ListAttestations_Pagination_DefaultPageSize(t *testing.T) {
}
req := &ethpb.ListAttestationsRequest{
QueryFilter: &ethpb.ListAttestationsRequest_HeadBlockRoot{
HeadBlockRoot: []byte("root"),
QueryFilter: &ethpb.ListAttestationsRequest_GenesisEpoch{
GenesisEpoch: true,
},
}
res, err := bs.ListAttestations(ctx, req)
@@ -550,22 +539,26 @@ func TestServer_ListIndexedAttestations_GenesisEpoch(t *testing.T) {
count := params.BeaconConfig().SlotsPerEpoch
atts := make([]*ethpb.Attestation, 0, count)
for i := uint64(0); i < count; i++ {
attExample := &ethpb.Attestation{
Data: &ethpb.AttestationData{
BeaconBlockRoot: []byte("root"),
Slot: i,
CommitteeIndex: 0,
Target: &ethpb.Checkpoint{
Epoch: 0,
Root: make([]byte, 32),
blockExample := &ethpb.SignedBeaconBlock{
Block: &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
Attestations: []*ethpb.Attestation{
{
Data: &ethpb.AttestationData{
BeaconBlockRoot: []byte("root"),
Slot: i,
CommitteeIndex: 0,
},
AggregationBits: bitfield.Bitlist{0b11},
},
},
},
},
AggregationBits: bitfield.Bitlist{0b11},
}
atts = append(atts, attExample)
}
if err := db.SaveAttestations(ctx, atts); err != nil {
t.Fatal(err)
if err := db.SaveBlock(ctx, blockExample); err != nil {
t.Fatal(err)
}
atts = append(atts, blockExample.Block.Body.Attestations...)
}
// We setup 128 validators.
@@ -599,7 +592,7 @@ func TestServer_ListIndexedAttestations_GenesisEpoch(t *testing.T) {
for i := 0; i < len(indexedAtts); i++ {
att := atts[i]
committee := committees[att.Data.Slot].Committees[att.Data.CommitteeIndex]
idxAtt, err := attestationutil.ConvertToIndexed(ctx, atts[i], committee.ValidatorIndices)
idxAtt := attestationutil.ConvertToIndexed(ctx, atts[i], committee.ValidatorIndices)
@@ -645,22 +638,30 @@ func TestServer_ListIndexedAttestations_ArchivedEpoch(t *testing.T) {
startSlot := helpers.StartSlot(50)
epoch := uint64(50)
for i := startSlot; i < count; i++ {
attExample := &ethpb.Attestation{
Data: &ethpb.AttestationData{
BeaconBlockRoot: []byte("root"),
Slot: i,
CommitteeIndex: 0,
Target: &ethpb.Checkpoint{
Epoch: epoch,
Root: make([]byte, 32),
blockExample := &ethpb.SignedBeaconBlock{
Block: &ethpb.BeaconBlock{
Body: &ethpb.BeaconBlockBody{
Attestations: []*ethpb.Attestation{
{
Data: &ethpb.AttestationData{
BeaconBlockRoot: []byte("root"),
Slot: i,
CommitteeIndex: 0,
Target: &ethpb.Checkpoint{
Epoch: epoch,
Root: make([]byte, 32),
},
},
AggregationBits: bitfield.Bitlist{0b11},
},
},
},
},
AggregationBits: bitfield.Bitlist{0b11},
}
atts = append(atts, attExample)
}
if err := db.SaveAttestations(ctx, atts); err != nil {
t.Fatal(err)
if err := db.SaveBlock(ctx, blockExample); err != nil {
t.Fatal(err)
}
atts = append(atts, blockExample.Block.Body.Attestations...)
}
// We setup 128 validators.
@@ -696,7 +697,7 @@ func TestServer_ListIndexedAttestations_ArchivedEpoch(t *testing.T) {
for i := 0; i < len(indexedAtts); i++ {
att := atts[i]
committee := committees[att.Data.Slot].Committees[att.Data.CommitteeIndex]
idxAtt, err := attestationutil.ConvertToIndexed(ctx, atts[i], committee.ValidatorIndices)
idxAtt := attestationutil.ConvertToIndexed(ctx, atts[i], committee.ValidatorIndices)
@@ -714,8 +715,8 @@ func TestServer_ListIndexedAttestations_ArchivedEpoch(t *testing.T) {
}
res, err := bs.ListIndexedAttestations(ctx, &ethpb.ListIndexedAttestationsRequest{
QueryFilter: &ethpb.ListIndexedAttestationsRequest_TargetEpoch{
TargetEpoch: epoch,
QueryFilter: &ethpb.ListIndexedAttestationsRequest_Epoch{
Epoch: epoch,
},
})
if err != nil {
@@ -937,7 +938,7 @@ func TestServer_StreamIndexedAttestations_OK(t *testing.T) {
for j := 0; j < numValidators; j++ {
attExample := &ethpb.Attestation{
Data: &ethpb.AttestationData{
BeaconBlockRoot: []byte("root"),
BeaconBlockRoot: bytesutil.PadTo([]byte("root"), 32),
Slot: i,
Target: &ethpb.Checkpoint{
Epoch: 0,
@@ -988,7 +989,7 @@ func TestServer_StreamIndexedAttestations_OK(t *testing.T) {
for i := 0; i < len(indexedAtts); i++ {
att := aggAtts[i]
committee := committees[att.Data.Slot].Committees[att.Data.CommitteeIndex]
idxAtt, err := attestationutil.ConvertToIndexed(ctx, att, committee.ValidatorIndices)
idxAtt := attestationutil.ConvertToIndexed(ctx, att, committee.ValidatorIndices)

View File

@@ -5,6 +5,7 @@ import (
"encoding/binary"
"reflect"
"testing"
"time"
"github.com/gogo/protobuf/proto"
ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
@@ -15,6 +16,7 @@ import (
stateTrie "github.com/prysmaticlabs/prysm/beacon-chain/state"
pbp2p "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/roughtime"
"gopkg.in/d4l3k/messagediff.v1"
)
@@ -35,7 +37,8 @@ func TestServer_ListBeaconCommittees_CurrentEpoch(t *testing.T) {
}
m := &mock.ChainService{
State: headState,
State: headState,
Genesis: roughtime.Now().Add(time.Duration(-1*int64((headState.Slot()*params.BeaconConfig().SecondsPerSlot))) * time.Second),
}
bs := &Server{
HeadFetcher: m,
@@ -87,7 +90,8 @@ func TestServer_ListBeaconCommittees_PreviousEpoch(t *testing.T) {
headState.SetSlot(params.BeaconConfig().SlotsPerEpoch * 2)
m := &mock.ChainService{
State: headState,
State: headState,
Genesis: roughtime.Now().Add(time.Duration(-1*int64((headState.Slot()*params.BeaconConfig().SecondsPerSlot))) * time.Second),
}
bs := &Server{
HeadFetcher: m,

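The added Genesis field backdates the mock's genesis by headState.Slot() slots, so GenesisTimeFetcher.CurrentSlot() agrees with the head state. The same expression recurs throughout the attester tests below; a hypothetical helper capturing the arithmetic:

// genesisForSlot (hypothetical) returns a genesis time such that the current
// wall-clock slot, (now - genesis) / SecondsPerSlot, equals the given slot.
func genesisForSlot(slot uint64) time.Time {
	secs := slot * params.BeaconConfig().SecondsPerSlot
	return roughtime.Now().Add(-time.Duration(secs) * time.Second)
}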
View File

@@ -81,13 +81,10 @@ func (bs *Server) ListValidatorBalances(
if len(pubKey) == 0 {
continue
}
index, ok, err := bs.BeaconDB.ValidatorIndex(ctx, pubKey)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not retrieve validator index: %v", err)
}
pubkeyBytes := bytesutil.ToBytes48(pubKey)
index, ok := headState.ValidatorIndexByPubkey(pubkeyBytes)
if !ok {
return nil, status.Errorf(codes.NotFound, "Could not find validator index for public key %#x", pubKey)
return nil, status.Errorf(codes.NotFound, "Could not find validator index for public key %#x", pubkeyBytes)
}
filtered[index] = true
@@ -124,6 +121,10 @@ func (bs *Server) ListValidatorBalances(
}
balancesCount = len(res)
}
// Depending on the indices and public keys given, results might not be sorted.
sort.Slice(res, func(i, j int) bool {
return res[i].Index < res[j].Index
})
// If there are no balances, we simply return a response specifying this.
// Otherwise, attempting to paginate 0 balances below would result in an error.
@@ -199,16 +200,55 @@ func (bs *Server) ListValidators(
}
validatorList := make([]*ethpb.Validators_ValidatorContainer, 0)
for i := 0; i < headState.NumValidators(); i++ {
val, err := headState.ValidatorAtIndex(uint64(i))
for _, index := range req.Indices {
val, err := headState.ValidatorAtIndex(index)
if err != nil {
return nil, status.Error(codes.Internal, "Could not get validator")
}
validatorList = append(validatorList, &ethpb.Validators_ValidatorContainer{
Index: uint64(i),
Index: index,
Validator: val,
})
}
for _, pubKey := range req.PublicKeys {
// Skip empty public key.
if len(pubKey) == 0 {
continue
}
pubkeyBytes := bytesutil.ToBytes48(pubKey)
index, ok := headState.ValidatorIndexByPubkey(pubkeyBytes)
if !ok {
continue
}
val, err := headState.ValidatorAtIndex(index)
if err != nil {
return nil, status.Error(codes.Internal, "Could not get validator")
}
validatorList = append(validatorList, &ethpb.Validators_ValidatorContainer{
Index: index,
Validator: val,
})
}
// Depending on the indices and public keys given, results might not be sorted.
sort.Slice(validatorList, func(i, j int) bool {
return validatorList[i].Index < validatorList[j].Index
})
if len(req.PublicKeys) == 0 && len(req.Indices) == 0 {
for i := 0; i < headState.NumValidators(); i++ {
val, err := headState.ValidatorAtIndex(uint64(i))
if err != nil {
return nil, status.Error(codes.Internal, "Could not get validator")
}
validatorList = append(validatorList, &ethpb.Validators_ValidatorContainer{
Index: uint64(i),
Validator: val,
})
}
}
if requestedEpoch < currentEpoch {
stopIdx := len(validatorList)
for idx, item := range validatorList {
@@ -611,13 +651,15 @@ func (bs *Server) GetValidatorPerformance(
correctlyVotedHead := make([]bool, 0, reqPubKeysCount)
missingValidators := make([][]byte, 0, reqPubKeysCount)
headState, err := bs.HeadFetcher.HeadState(ctx)
if err != nil {
return nil, status.Error(codes.Internal, "Could not get head state")
}
// Convert the list of validator public keys to list of validator indices.
// Also track missing validators using public keys.
for _, key := range req.PublicKeys {
idx, ok, err := bs.BeaconDB.ValidatorIndex(ctx, key)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not fetch validator idx for public key %#x: %v", key, err)
}
pubkeyBytes := bytesutil.ToBytes48(key)
idx, ok := headState.ValidatorIndexByPubkey(pubkeyBytes)
if !ok {
missingValidators = append(missingValidators, key)
continue

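Both ListValidatorBalances and GetValidatorPerformance now resolve public keys against the head state instead of a database round-trip. A sketch of the shared pattern, assuming the ValidatorIndexByPubkey API shown above; indicesForPubkeys is hypothetical:

// indicesForPubkeys maps each 48-byte public key to its validator index via
// the head state, collecting unknown keys separately for error reporting.
func indicesForPubkeys(headState *stateTrie.BeaconState, pubKeys [][]byte) ([]uint64, [][]byte) {
	var indices []uint64
	var missing [][]byte
	for _, key := range pubKeys {
		idx, ok := headState.ValidatorIndexByPubkey(bytesutil.ToBytes48(key))
		if !ok {
			missing = append(missing, key)
			continue
		}
		indices = append(indices, idx)
	}
	return indices, missing
}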
View File

@@ -5,6 +5,7 @@ import (
"encoding/binary"
"fmt"
"reflect"
"sort"
"strconv"
"strings"
"testing"
@@ -694,6 +695,59 @@ func TestServer_ListValidators_NoPagination(t *testing.T) {
}
}
func TestServer_ListValidators_IndicesPubKeys(t *testing.T) {
db := dbTest.SetupDB(t)
defer dbTest.TeardownDB(t, db)
validators, _ := setupValidators(t, db, 100)
indicesWanted := []uint64{2, 7, 11, 17}
pubkeyIndicesWanted := []uint64{3, 5, 9, 15}
allIndicesWanted := append(indicesWanted, pubkeyIndicesWanted...)
want := make([]*ethpb.Validators_ValidatorContainer, len(allIndicesWanted))
for i, idx := range allIndicesWanted {
want[i] = &ethpb.Validators_ValidatorContainer{
Index: idx,
Validator: validators[idx],
}
}
sort.Slice(want, func(i int, j int) bool {
return want[i].Index < want[j].Index
})
headState, err := db.HeadState(context.Background())
if err != nil {
t.Fatal(err)
}
bs := &Server{
HeadFetcher: &mock.ChainService{
State: headState,
},
FinalizationFetcher: &mock.ChainService{
FinalizedCheckPoint: &ethpb.Checkpoint{
Epoch: 0,
},
},
}
pubKeysWanted := make([][]byte, len(pubkeyIndicesWanted))
for i, indice := range pubkeyIndicesWanted {
pubKeysWanted[i] = pubKey(indice)
}
req := &ethpb.ListValidatorsRequest{
Indices: indicesWanted,
PublicKeys: pubKeysWanted,
}
received, err := bs.ListValidators(context.Background(), req)
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(want, received.ValidatorList) {
t.Fatal("Incorrect validators response")
}
}
func TestServer_ListValidators_Pagination(t *testing.T) {
db := dbTest.SetupDB(t)
defer dbTest.TeardownDB(t, db)

View File

@@ -25,14 +25,12 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/operations/voluntaryexits"
"github.com/prysmaticlabs/prysm/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/beacon-chain/powchain"
"github.com/prysmaticlabs/prysm/beacon-chain/rpc/aggregator"
"github.com/prysmaticlabs/prysm/beacon-chain/rpc/beacon"
"github.com/prysmaticlabs/prysm/beacon-chain/rpc/node"
"github.com/prysmaticlabs/prysm/beacon-chain/rpc/validator"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/beacon-chain/sync"
pbp2p "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
pb "github.com/prysmaticlabs/prysm/proto/beacon/rpc/v1"
slashpb "github.com/prysmaticlabs/prysm/proto/slashing"
"github.com/prysmaticlabs/prysm/shared/featureconfig"
"github.com/prysmaticlabs/prysm/shared/params"
@@ -210,7 +208,6 @@ func (s *Service) Start() {
}
s.grpcServer = grpc.NewServer(opts...)
genesisTime := s.genesisTimeFetcher.GenesisTime()
validatorServer := &validator.Server{
Ctx: s.ctx,
BeaconDB: s.beaconDB,
@@ -235,7 +232,6 @@ func (s *Service) Start() {
MockEth1Votes: s.mockEth1Votes,
Eth1BlockFetcher: s.powChainService,
PendingDepositsFetcher: s.pendingDepositFetcher,
GenesisTime: genesisTime,
SlashingsPool: s.slashingsPool,
StateGen: s.stateGen,
}
@@ -266,8 +262,6 @@ func (s *Service) Start() {
ReceivedAttestationsBuffer: make(chan *ethpb.Attestation, 100),
CollectedAttestationsBuffer: make(chan []*ethpb.Attestation, 100),
}
aggregatorServer := &aggregator.Server{ValidatorServer: validatorServer}
pb.RegisterAggregatorServiceServer(s.grpcServer, aggregatorServer)
ethpb.RegisterNodeServer(s.grpcServer, nodeServer)
ethpb.RegisterBeaconChainServer(s.grpcServer, beaconChainServer)
ethpb.RegisterBeaconNodeValidatorServer(s.grpcServer, validatorServer)

View File

@@ -96,6 +96,7 @@ go_test(
"//shared/event:go_default_library",
"//shared/hashutil:go_default_library",
"//shared/params:go_default_library",
"//shared/roughtime:go_default_library",
"//shared/testutil:go_default_library",
"//shared/trieutil:go_default_library",
"@com_github_gogo_protobuf//proto:go_default_library",
@@ -105,5 +106,6 @@ go_test(
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_prysmaticlabs_go_ssz//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@org_golang_google_grpc//status:go_default_library",
],
)

View File

@@ -221,7 +221,7 @@ func generateAtt(state *beaconstate.BeaconState, index uint64, privKeys []*bls.S
AggregationBits: aggBits,
}
committee, _ := helpers.BeaconCommitteeFromState(state, att.Data.Slot, att.Data.CommitteeIndex)
attestingIndices, _ := attestationutil.AttestingIndices(att.AggregationBits, committee)
attestingIndices := attestationutil.AttestingIndices(att.AggregationBits, committee)
domain, err := helpers.Domain(state.Fork(), 0, params.BeaconConfig().DomainBeaconAttester)
if err != nil {
return nil, err

View File

@@ -24,6 +24,8 @@ import (
"google.golang.org/grpc/status"
)
const msgInvalidAttestationRequest = "Attestation request must be within current or previous epoch"
// GetAttestationData requests that the beacon node produce an attestation data object,
// which the validator acting as an attester will then sign.
func (vs *Server) GetAttestationData(ctx context.Context, req *ethpb.AttestationDataRequest) (*ethpb.AttestationData, error) {
@@ -44,6 +46,11 @@ func (vs *Server) GetAttestationData(ctx context.Context, req *ethpb.Attestation
return nil, status.Errorf(codes.Unavailable, "Syncing to latest head, not ready to respond")
}
currentEpoch := helpers.SlotToEpoch(vs.GenesisTimeFetcher.CurrentSlot())
if currentEpoch > 0 && currentEpoch-1 != helpers.SlotToEpoch(req.Slot) && currentEpoch != helpers.SlotToEpoch(req.Slot) {
return nil, status.Error(codes.InvalidArgument, msgInvalidAttestationRequest)
}
// The attester will either wait until there's a valid block from the expected block proposer for the assigned slot,
// or until one third of the slot has transpired, whichever comes first.
vs.waitToOneThird(ctx, req.Slot)
@@ -84,6 +91,28 @@ func (vs *Server) GetAttestationData(ctx context.Context, req *ethpb.Attestation
return nil, status.Errorf(codes.Internal, "Could not retrieve head root: %v", err)
}
// In the case that we receive an attestation request after a newer state/block has been
// processed, we walk up the chain until state.Slot <= req.Slot to prevent producing an
// attestation that violates processing constraints.
fetchState := vs.BeaconDB.State
if featureconfig.Get().NewStateMgmt {
fetchState = vs.StateGen.StateByRoot
}
for headState.Slot() > req.Slot {
if ctx.Err() != nil {
return nil, status.Errorf(codes.Aborted, ctx.Err().Error())
}
parent := headState.ParentRoot()
headRoot = parent[:]
headState, err = fetchState(ctx, parent)
if err != nil {
return nil, status.Error(codes.Internal, err.Error())
}
if headState == nil {
return nil, status.Error(codes.Internal, "Failed to look up parent state from head.")
}
}
if helpers.CurrentEpoch(headState) < helpers.SlotToEpoch(req.Slot) {
headState, err = state.ProcessSlots(ctx, headState, helpers.StartSlot(helpers.SlotToEpoch(req.Slot)))
if err != nil {
@@ -175,8 +204,8 @@ func (vs *Server) waitToOneThird(ctx context.Context, slot uint64) {
_, span := trace.StartSpan(ctx, "validator.waitToOneThird")
defer span.End()
// Don't need to wait if head slot is already the same as requested slot.
if slot == vs.HeadFetcher.HeadSlot() {
// Don't need to wait if current slot is greater than requested slot.
if slot < vs.GenesisTimeFetcher.CurrentSlot() {
return
}
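The remainder of waitToOneThird, outside this hunk, sleeps until a third of the requested slot has elapsed. A hypothetical sketch of that deadline, assuming the usual genesis-based slot timing:

// oneThirdDeadline (hypothetical) is the wall-clock time at which one third
// of the given slot has transpired.
func oneThirdDeadline(genesis time.Time, slot uint64) time.Time {
	secsPerSlot := params.BeaconConfig().SecondsPerSlot
	slotStart := genesis.Add(time.Duration(slot*secsPerSlot) * time.Second)
	return slotStart.Add(time.Duration(secsPerSlot/3) * time.Second)
}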

View File

@@ -21,6 +21,8 @@ import (
pbp2p "github.com/prysmaticlabs/prysm/proto/beacon/p2p/v1"
"github.com/prysmaticlabs/prysm/shared/bls"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/roughtime"
"google.golang.org/grpc/status"
)
func init() {
@@ -142,9 +144,9 @@ func TestGetAttestationData_OK(t *testing.T) {
if err != nil {
t.Fatalf("Could not get signing root for target block: %v", err)
}
slot := 3*params.BeaconConfig().SlotsPerEpoch + 1
beaconState := &pbp2p.BeaconState{
Slot: 3*params.BeaconConfig().SlotsPerEpoch + 1,
Slot: slot,
BlockRoots: make([][]byte, params.BeaconConfig().SlotsPerHistoricalRoot),
CurrentJustifiedCheckpoint: &ethpb.Checkpoint{
Epoch: 2,
@@ -155,6 +157,9 @@ func TestGetAttestationData_OK(t *testing.T) {
beaconState.BlockRoots[1*params.BeaconConfig().SlotsPerEpoch] = targetRoot[:]
beaconState.BlockRoots[2*params.BeaconConfig().SlotsPerEpoch] = justifiedRoot[:]
s, _ := beaconstate.InitializeFromProto(beaconState)
chainService := &mock.ChainService{
Genesis: time.Now(),
}
attesterServer := &Server{
BeaconDB: db,
P2P: &mockp2p.MockBroadcaster{},
@@ -162,7 +167,8 @@ func TestGetAttestationData_OK(t *testing.T) {
AttestationCache: cache.NewAttestationCache(),
HeadFetcher: &mock.ChainService{State: s, Root: blockRoot[:]},
FinalizationFetcher: &mock.ChainService{CurrentJustifiedCheckPoint: beaconState.CurrentJustifiedCheckpoint},
GenesisTimeFetcher: &mock.ChainService{},
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*int64(slot*params.BeaconConfig().SecondsPerSlot)) * time.Second)},
StateNotifier: chainService.StateNotifier(),
}
if err := db.SaveState(ctx, s, blockRoot); err != nil {
t.Fatal(err)
@@ -252,8 +258,9 @@ func TestAttestationDataAtSlot_HandlesFarAwayJustifiedEpoch(t *testing.T) {
if err != nil {
t.Fatalf("Could not hash justified block: %v", err)
}
slot := uint64(10000)
beaconState := &pbp2p.BeaconState{
Slot: 10000,
Slot: slot,
BlockRoots: make([][]byte, params.BeaconConfig().SlotsPerHistoricalRoot),
CurrentJustifiedCheckpoint: &ethpb.Checkpoint{
Epoch: helpers.SlotToEpoch(1500),
@@ -264,6 +271,9 @@ func TestAttestationDataAtSlot_HandlesFarAwayJustifiedEpoch(t *testing.T) {
beaconState.BlockRoots[1*params.BeaconConfig().SlotsPerEpoch] = epochBoundaryRoot[:]
beaconState.BlockRoots[2*params.BeaconConfig().SlotsPerEpoch] = justifiedBlockRoot[:]
s, _ := beaconstate.InitializeFromProto(beaconState)
chainService := &mock.ChainService{
Genesis: time.Now(),
}
attesterServer := &Server{
BeaconDB: db,
P2P: &mockp2p.MockBroadcaster{},
@@ -271,7 +281,8 @@ func TestAttestationDataAtSlot_HandlesFarAwayJustifiedEpoch(t *testing.T) {
HeadFetcher: &mock.ChainService{State: s, Root: blockRoot[:]},
FinalizationFetcher: &mock.ChainService{CurrentJustifiedCheckPoint: beaconState.CurrentJustifiedCheckpoint},
SyncChecker: &mockSync.Sync{IsSyncing: false},
GenesisTimeFetcher: &mock.ChainService{},
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*int64(slot*params.BeaconConfig().SecondsPerSlot)) * time.Second)},
StateNotifier: chainService.StateNotifier(),
}
if err := db.SaveState(ctx, s, blockRoot); err != nil {
t.Fatal(err)
@@ -317,17 +328,18 @@ func TestAttestationDataSlot_handlesInProgressRequest(t *testing.T) {
chainService := &mock.ChainService{
Genesis: time.Now(),
}
slot := uint64(2)
server := &Server{
HeadFetcher: &mock.ChainService{State: state},
AttestationCache: cache.NewAttestationCache(),
SyncChecker: &mockSync.Sync{IsSyncing: false},
GenesisTimeFetcher: &mock.ChainService{},
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*int64(slot*params.BeaconConfig().SecondsPerSlot)) * time.Second)},
StateNotifier: chainService.StateNotifier(),
}
req := &ethpb.AttestationDataRequest{
CommitteeIndex: 1,
Slot: 2,
Slot: slot,
}
res := &ethpb.AttestationData{
@@ -368,7 +380,7 @@ func TestAttestationDataSlot_handlesInProgressRequest(t *testing.T) {
}
func TestWaitForSlotOneThird_WaitedCorrectly(t *testing.T) {
currentTime := uint64(time.Now().Unix())
currentTime := uint64(roughtime.Now().Unix())
numOfSlots := uint64(4)
genesisTime := currentTime - (numOfSlots * params.BeaconConfig().SecondsPerSlot)
@@ -387,18 +399,18 @@ func TestWaitForSlotOneThird_WaitedCorrectly(t *testing.T) {
oneThird := currentTime + timeToSleep
server.waitToOneThird(context.Background(), numOfSlots)
currentTime = uint64(time.Now().Unix())
currentTime = uint64(roughtime.Now().Unix())
if currentTime != oneThird {
t.Errorf("Wanted %d time for slot one third but got %d", oneThird, currentTime)
}
}
func TestWaitForSlotOneThird_HeadIsHereNoWait(t *testing.T) {
currentTime := uint64(time.Now().Unix())
currentTime := uint64(roughtime.Now().Unix())
numOfSlots := uint64(4)
genesisTime := currentTime - (numOfSlots * params.BeaconConfig().SecondsPerSlot)
s := &pbp2p.BeaconState{Slot: 100}
s := &pbp2p.BeaconState{Slot: 2}
state, _ := beaconstate.InitializeFromProto(s)
server := &Server{
AttestationCache: cache.NewAttestationCache(),
@@ -413,3 +425,221 @@ func TestWaitForSlotOneThird_HeadIsHereNoWait(t *testing.T) {
t.Errorf("Wanted %d time for slot one third but got %d", uint64(roughtime.Now().Unix()), currentTime)
}
}
func TestServer_GetAttestationData_InvalidRequestSlot(t *testing.T) {
ctx := context.Background()
slot := 3*params.BeaconConfig().SlotsPerEpoch + 1
attesterServer := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: false},
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*int64(slot*params.BeaconConfig().SecondsPerSlot)) * time.Second)},
}
req := &ethpb.AttestationDataRequest{
Slot: 1000000000000,
}
_, err := attesterServer.GetAttestationData(ctx, req)
if s, ok := status.FromError(err); !ok || s.Message() != msgInvalidAttestationRequest {
t.Fatalf("Wrong error. Wanted %v, got %v", msgInvalidAttestationRequest, err)
}
}
func TestServer_GetAttestationData_HeadStateSlotGreaterThanRequestSlot(t *testing.T) {
// There exists a rare scenario where the validator may request an attestation for a slot less
// than the head state's slot. The ETH2 spec constrains an attestation to reference a block
// whose slot is less than or equal to the attestation data slot.
// See: https://github.com/prysmaticlabs/prysm/issues/5164
ctx := context.Background()
db := dbutil.SetupDB(t)
defer dbutil.TeardownDB(t, db)
slot := 3*params.BeaconConfig().SlotsPerEpoch + 1
block := &ethpb.BeaconBlock{
Slot: slot,
}
block2 := &ethpb.BeaconBlock{Slot: slot - 1}
targetBlock := &ethpb.BeaconBlock{
Slot: 1 * params.BeaconConfig().SlotsPerEpoch,
}
justifiedBlock := &ethpb.BeaconBlock{
Slot: 2 * params.BeaconConfig().SlotsPerEpoch,
}
blockRoot, err := ssz.HashTreeRoot(block)
if err != nil {
t.Fatalf("Could not hash beacon block: %v", err)
}
blockRoot2, err := ssz.HashTreeRoot(block2)
if err != nil {
t.Fatal(err)
}
if err := db.SaveBlock(ctx, &ethpb.SignedBeaconBlock{Block: block2}); err != nil {
t.Fatal(err)
}
justifiedRoot, err := ssz.HashTreeRoot(justifiedBlock)
if err != nil {
t.Fatalf("Could not get signing root for justified block: %v", err)
}
targetRoot, err := ssz.HashTreeRoot(targetBlock)
if err != nil {
t.Fatalf("Could not get signing root for target block: %v", err)
}
beaconState := &pbp2p.BeaconState{
Slot: slot,
GenesisTime: uint64(time.Now().Unix() - int64((slot * params.BeaconConfig().SecondsPerSlot))),
BlockRoots: make([][]byte, params.BeaconConfig().SlotsPerHistoricalRoot),
LatestBlockHeader: &ethpb.BeaconBlockHeader{
ParentRoot: blockRoot2[:],
},
CurrentJustifiedCheckpoint: &ethpb.Checkpoint{
Epoch: 2,
Root: justifiedRoot[:],
},
}
beaconState.BlockRoots[1] = blockRoot[:]
beaconState.BlockRoots[1*params.BeaconConfig().SlotsPerEpoch] = targetRoot[:]
beaconState.BlockRoots[2*params.BeaconConfig().SlotsPerEpoch] = justifiedRoot[:]
s, _ := beaconstate.InitializeFromProto(beaconState)
beaconState2 := s.CloneInnerState()
beaconState2.Slot--
s2, _ := beaconstate.InitializeFromProto(beaconState2)
if err := db.SaveState(ctx, s2, blockRoot2); err != nil {
t.Fatal(err)
}
chainService := &mock.ChainService{
Genesis: time.Now(),
}
attesterServer := &Server{
BeaconDB: db,
P2P: &mockp2p.MockBroadcaster{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
AttestationCache: cache.NewAttestationCache(),
HeadFetcher: &mock.ChainService{State: s, Root: blockRoot[:]},
FinalizationFetcher: &mock.ChainService{CurrentJustifiedCheckPoint: beaconState.CurrentJustifiedCheckpoint},
GenesisTimeFetcher: &mock.ChainService{Genesis: time.Now().Add(time.Duration(-1*int64(slot*params.BeaconConfig().SecondsPerSlot)) * time.Second)},
StateNotifier: chainService.StateNotifier(),
}
if err := db.SaveState(ctx, s, blockRoot); err != nil {
t.Fatal(err)
}
if err := db.SaveBlock(ctx, &ethpb.SignedBeaconBlock{Block: block}); err != nil {
t.Fatal(err)
}
if err := db.SaveHeadBlockRoot(ctx, blockRoot); err != nil {
t.Fatal(err)
}
req := &ethpb.AttestationDataRequest{
CommitteeIndex: 0,
Slot: slot - 1,
}
res, err := attesterServer.GetAttestationData(ctx, req)
if err != nil {
t.Fatalf("Could not get attestation info at slot: %v", err)
}
expectedInfo := &ethpb.AttestationData{
Slot: slot - 1,
BeaconBlockRoot: blockRoot2[:],
Source: &ethpb.Checkpoint{
Epoch: 2,
Root: justifiedRoot[:],
},
Target: &ethpb.Checkpoint{
Epoch: 3,
Root: blockRoot2[:],
},
}
if !proto.Equal(res, expectedInfo) {
t.Errorf("Expected attestation info to match, received %v, wanted %v", res, expectedInfo)
}
}
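The behavior exercised above hinges on the constraint just cited: the block root an attestation references must come from a slot no later than the attestation slot. A minimal, self-contained sketch of that fallback decision, assuming a simplified headInfo type (the real handler operates on the full beacon state and is not shown here):

package main

import "fmt"

// headInfo captures only what the sketch needs from the head state
// (hypothetical type, not part of the Prysm API).
type headInfo struct {
	slot       uint64
	root       [32]byte
	parentRoot [32]byte
}

// attestationHeadRoot picks a block root whose slot is <= the requested
// attestation slot: the head root when the head has not outpaced the
// request, otherwise the parent root recorded in the latest block header.
func attestationHeadRoot(h headInfo, reqSlot uint64) [32]byte {
	if h.slot <= reqSlot {
		return h.root
	}
	return h.parentRoot
}

func main() {
	h := headInfo{slot: 97, root: [32]byte{1}, parentRoot: [32]byte{2}}
	fmt.Println(attestationHeadRoot(h, 96) == h.parentRoot) // true: falls back to the parent
}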

func TestGetAttestationData_SucceedsInFirstEpoch(t *testing.T) {
	ctx := context.Background()
	db := dbutil.SetupDB(t)
	defer dbutil.TeardownDB(t, db)

	slot := uint64(5)
	block := &ethpb.BeaconBlock{
		Slot: slot,
	}
	targetBlock := &ethpb.BeaconBlock{
		Slot: 0,
	}
	justifiedBlock := &ethpb.BeaconBlock{
		Slot: 0,
	}
	blockRoot, err := ssz.HashTreeRoot(block)
	if err != nil {
		t.Fatalf("Could not hash beacon block: %v", err)
	}
	justifiedRoot, err := ssz.HashTreeRoot(justifiedBlock)
	if err != nil {
		t.Fatalf("Could not get signing root for justified block: %v", err)
	}
	targetRoot, err := ssz.HashTreeRoot(targetBlock)
	if err != nil {
		t.Fatalf("Could not get signing root for target block: %v", err)
	}

	beaconState := &pbp2p.BeaconState{
		Slot:       slot,
		BlockRoots: make([][]byte, params.BeaconConfig().SlotsPerHistoricalRoot),
		CurrentJustifiedCheckpoint: &ethpb.Checkpoint{
			Epoch: 0,
			Root:  justifiedRoot[:],
		},
	}
	beaconState.BlockRoots[1] = blockRoot[:]
	beaconState.BlockRoots[1*params.BeaconConfig().SlotsPerEpoch] = targetRoot[:]
	beaconState.BlockRoots[2*params.BeaconConfig().SlotsPerEpoch] = justifiedRoot[:]
	s, _ := beaconstate.InitializeFromProto(beaconState)

	chainService := &mock.ChainService{
		Genesis: time.Now(),
	}
	attesterServer := &Server{
		BeaconDB:            db,
		P2P:                 &mockp2p.MockBroadcaster{},
		SyncChecker:         &mockSync.Sync{IsSyncing: false},
		AttestationCache:    cache.NewAttestationCache(),
		HeadFetcher:         &mock.ChainService{State: s, Root: blockRoot[:]},
		FinalizationFetcher: &mock.ChainService{CurrentJustifiedCheckPoint: beaconState.CurrentJustifiedCheckpoint},
		GenesisTimeFetcher:  &mock.ChainService{Genesis: roughtime.Now().Add(time.Duration(-1*int64(slot*params.BeaconConfig().SecondsPerSlot)) * time.Second)},
		StateNotifier:       chainService.StateNotifier(),
	}
	if err := db.SaveState(ctx, s, blockRoot); err != nil {
		t.Fatal(err)
	}
	if err := db.SaveBlock(ctx, &ethpb.SignedBeaconBlock{Block: block}); err != nil {
		t.Fatal(err)
	}
	if err := db.SaveHeadBlockRoot(ctx, blockRoot); err != nil {
		t.Fatal(err)
	}

	req := &ethpb.AttestationDataRequest{
		CommitteeIndex: 0,
		Slot:           5,
	}
	res, err := attesterServer.GetAttestationData(ctx, req)
	if err != nil {
		t.Fatalf("Could not get attestation info at slot: %v", err)
	}

	expectedInfo := &ethpb.AttestationData{
		Slot:            slot,
		BeaconBlockRoot: blockRoot[:],
		Source: &ethpb.Checkpoint{
			Epoch: 0,
			Root:  justifiedRoot[:],
		},
		Target: &ethpb.Checkpoint{
			Epoch: 0,
			Root:  blockRoot[:],
		},
	}
	if !proto.Equal(res, expectedInfo) {
		t.Errorf("Expected attestation info to match, received %v, wanted %v", res, expectedInfo)
	}
}


@@ -42,14 +42,14 @@ func TestSub(t *testing.T) {
 	genesisTime := time.Now().Add(time.Duration(-100*int64(params.BeaconConfig().SecondsPerSlot*params.BeaconConfig().SlotsPerEpoch)) * time.Second)
 	mockChainService := &mockChain.ChainService{State: beaconState, Root: genesisRoot[:], Genesis: genesisTime}
 	server := &Server{
-		BeaconDB:          db,
-		HeadFetcher:       mockChainService,
-		SyncChecker:       &mockSync.Sync{IsSyncing: false},
-		GenesisTime:       genesisTime,
-		StateNotifier:     mockChainService.StateNotifier(),
-		OperationNotifier: mockChainService.OperationNotifier(),
-		ExitPool:          voluntaryexits.NewPool(),
-		P2P:               mockp2p.NewTestP2P(t),
+		BeaconDB:           db,
+		HeadFetcher:        mockChainService,
+		SyncChecker:        &mockSync.Sync{IsSyncing: false},
+		GenesisTimeFetcher: mockChainService,
+		StateNotifier:      mockChainService.StateNotifier(),
+		OperationNotifier:  mockChainService.OperationNotifier(),
+		ExitPool:           voluntaryexits.NewPool(),
+		P2P:                mockp2p.NewTestP2P(t),
 	}
 	// Subscribe to operation notifications.
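The hunk above swaps a cached GenesisTime value for a GenesisTimeFetcher dependency, so handlers read genesis from the chain service at call time instead of freezing it at construction. A minimal sketch of that dependency-injection shape, assuming a one-method interface (illustrative names, not the exact Prysm definitions):

package main

import (
	"fmt"
	"time"
)

// GenesisTimeFetcher is the one-method dependency the server holds instead
// of a fixed time.Time (assumed shape for illustration).
type GenesisTimeFetcher interface {
	GenesisTime() time.Time
}

// chainService is a stand-in for the real chain service backing the fetcher.
type chainService struct{ genesis time.Time }

func (c *chainService) GenesisTime() time.Time { return c.genesis }

// server asks its fetcher for genesis on every call rather than caching it.
type server struct{ genesisTimeFetcher GenesisTimeFetcher }

func (s *server) secondsSinceGenesis(now time.Time) float64 {
	return now.Sub(s.genesisTimeFetcher.GenesisTime()).Seconds()
}

func main() {
	svc := &chainService{genesis: time.Now().Add(-24 * time.Second)}
	s := &server{genesisTimeFetcher: svc}
	fmt.Printf("%.0f\n", s.secondsSinceGenesis(time.Now())) // ~24
}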


@@ -46,7 +46,7 @@ func TestGetBlock_OK(t *testing.T) {
 	beaconState, privKeys := testutil.DeterministicGenesisState(t, params.BeaconConfig().MinGenesisActiveValidatorCount)
-	stateRoot, err := beaconState.HashTreeRoot()
+	stateRoot, err := beaconState.HashTreeRoot(ctx)
 	if err != nil {
 		t.Fatalf("Could not hash genesis state: %v", err)
 	}
@@ -151,7 +151,7 @@ func TestGetBlock_AddsUnaggregatedAtts(t *testing.T) {
 	beaconState, privKeys := testutil.DeterministicGenesisState(t, params.BeaconConfig().MinGenesisActiveValidatorCount)
-	stateRoot, err := beaconState.HashTreeRoot()
+	stateRoot, err := beaconState.HashTreeRoot(ctx)
 	if err != nil {
 		t.Fatalf("Could not hash genesis state: %v", err)
 	}
@@ -319,7 +319,7 @@ func TestComputeStateRoot_OK(t *testing.T) {
 	beaconState, privKeys := testutil.DeterministicGenesisState(t, 100)
-	stateRoot, err := beaconState.HashTreeRoot()
+	stateRoot, err := beaconState.HashTreeRoot(ctx)
 	if err != nil {
 		t.Fatalf("Could not hash genesis state: %v", err)
 	}
@@ -1307,7 +1307,7 @@ func TestFilterAttestation_OK(t *testing.T) {
 	if err != nil {
 		t.Error(err)
 	}
-	attestingIndices, err := attestationutil.AttestingIndices(atts[i].AggregationBits, committee)
+	attestingIndices := attestationutil.AttestingIndices(atts[i].AggregationBits, committee)
 	if err != nil {
 		t.Error(err)
 	}


@@ -67,7 +67,6 @@ type Server struct {
 	Eth1BlockFetcher       powchain.POWBlockFetcher
 	PendingDepositsFetcher depositcache.PendingDepositsFetcher
 	OperationNotifier      opfeed.Notifier
-	GenesisTime            time.Time
 	StateGen               *stategen.State
 }


@@ -32,6 +32,7 @@ go_library(
         "@com_github_protolambda_zssz//merkle:go_default_library",
         "@com_github_prysmaticlabs_ethereumapis//eth/v1alpha1:go_default_library",
         "@com_github_prysmaticlabs_go_bitfield//:go_default_library",
+        "@io_opencensus_go//trace:go_default_library",
     ],
 )


@@ -68,6 +68,9 @@ func (f *FieldTrie) RecomputeTrie(indices []uint64, elements interface{}) ([32]b
 	f.Lock()
 	defer f.Unlock()
 	var fieldRoot [32]byte
+	if len(indices) == 0 {
+		return f.TrieRoot()
+	}
 	datType, ok := fieldMap[f.field]
 	if !ok {
 		return [32]byte{}, errors.Errorf("unrecognized field in trie")


@@ -3,6 +3,7 @@ package state
 import (
 	"errors"
 	"fmt"
+	"time"
 
 	ethpb "github.com/prysmaticlabs/ethereumapis/eth/v1alpha1"
 	"github.com/prysmaticlabs/go-bitfield"
@@ -129,6 +130,14 @@ func (b *BeaconState) GenesisTime() uint64 {
 	return b.state.GenesisTime
 }
 
+// GenesisUnixTime returns the genesis time as time.Time.
+func (b *BeaconState) GenesisUnixTime() time.Time {
+	if !b.HasInnerState() {
+		return time.Unix(0, 0)
+	}
+	return time.Unix(int64(b.state.GenesisTime), 0)
+}
+
 // Slot of the current beacon chain state.
 func (b *BeaconState) Slot() uint64 {
 	if !b.HasInnerState() {
@@ -192,6 +201,16 @@ func (b *BeaconState) LatestBlockHeader() *ethpb.BeaconBlockHeader {
 	return hdr
 }
 
+// ParentRoot is a convenience method to access state.LatestBlockHeader.ParentRoot.
+func (b *BeaconState) ParentRoot() [32]byte {
+	if !b.HasInnerState() {
+		return [32]byte{}
+	}
+	parentRoot := [32]byte{}
+	copy(parentRoot[:], b.state.LatestBlockHeader.ParentRoot)
+	return parentRoot
+}
+
 // BlockRoots kept track of in the beacon state.
 func (b *BeaconState) BlockRoots() [][]byte {
 	if !b.HasInnerState() {

@@ -1,12 +1,11 @@
 package state
 
 import (
 	"context"
 	"runtime"
 	"sort"
 	"sync"
 
-	"github.com/prysmaticlabs/prysm/shared/sliceutil"
 	"github.com/gogo/protobuf/proto"
 	"github.com/pkg/errors"
 	"github.com/protolambda/zssz/merkle"
@@ -18,6 +17,8 @@ import (
 	"github.com/prysmaticlabs/prysm/shared/hashutil"
 	"github.com/prysmaticlabs/prysm/shared/memorypool"
 	"github.com/prysmaticlabs/prysm/shared/params"
+	"github.com/prysmaticlabs/prysm/shared/sliceutil"
+	"go.opencensus.io/trace"
 )
 
 // InitializeFromProto the beacon state from a protobuf representation.
@@ -178,7 +179,10 @@ func (b *BeaconState) Copy() *BeaconState {
 
 // HashTreeRoot of the beacon state retrieves the Merkle root of the trie
 // representation of the beacon state based on the eth2 Simple Serialize specification.
-func (b *BeaconState) HashTreeRoot() ([32]byte, error) {
+func (b *BeaconState) HashTreeRoot(ctx context.Context) ([32]byte, error) {
+	_, span := trace.StartSpan(ctx, "beaconState.HashTreeRoot")
+	defer span.End()
+
 	b.lock.Lock()
 	defer b.lock.Unlock()
@@ -376,7 +380,7 @@ func (b *BeaconState) recomputeFieldTrie(index fieldIndex, elements interface{})
 		fTrie = newTrie
 	}
 	// remove duplicate indexes
-	b.dirtyIndices[index] = sliceutil.UnionUint64(b.dirtyIndices[index], []uint64{})
+	b.dirtyIndices[index] = sliceutil.SetUint64(b.dirtyIndices[index])
 	// sort indexes again
 	sort.Slice(b.dirtyIndices[index], func(i int, j int) bool {
 		return b.dirtyIndices[index][i] < b.dirtyIndices[index][j]
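The HashTreeRoot change follows the usual opencensus recipe: open a span at function entry and close it on every return path via defer. A condensed sketch of instrumenting any hot path the same way (the function body here is a stand-in for the real Merkleization work):

package main

import (
	"context"

	"go.opencensus.io/trace"
)

// computeRoot mirrors the span-per-call pattern: the whole invocation shows
// up as one timed "example.computeRoot" unit in the collected trace.
func computeRoot(ctx context.Context, chunks [][]byte) [32]byte {
	_, span := trace.StartSpan(ctx, "example.computeRoot")
	defer span.End()

	var root [32]byte
	for _, c := range chunks {
		for i, b := range c {
			root[i%32] ^= b // stand-in for the real hashing work
		}
	}
	return root
}

func main() {
	_ = computeRoot(context.Background(), [][]byte{{1, 2, 3}})
}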


@@ -78,7 +78,10 @@ func (s *State) MigrateToCold(ctx context.Context, finalizedState *state.BeaconS
 		}).Info("Saved archived point during state migration")
 	}
 
-	if s.beaconDB.HasState(ctx, r) {
+	// Do not delete the current finalized state in case the user wants to
+	// switch back to the old state service; deleting the most recent
+	// finalized state could cause issues when switching back.
+	if s.beaconDB.HasState(ctx, r) && r != finalizedRoot {
 		if err := s.beaconDB.DeleteState(ctx, r); err != nil {
 			return err
 		}
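The extra `r != finalizedRoot` condition keeps exactly one state, the finalized one, out of the deletion sweep so a node retains a rollback point for the old state service. A toy version of that guarded sweep (map-backed store and signatures are hypothetical, not the Prysm DB API):

package main

import "fmt"

// pruneMigrated deletes states that have been migrated to cold storage, but
// never the finalized state, preserving a rollback point.
func pruneMigrated(states map[[32]byte]bool, roots [][32]byte, finalized [32]byte) {
	for _, r := range roots {
		if states[r] && r != finalized {
			delete(states, r)
		}
	}
}

func main() {
	finalized := [32]byte{9}
	states := map[[32]byte]bool{{1}: true, finalized: true}
	pruneMigrated(states, [][32]byte{{1}, finalized}, finalized)
	fmt.Println(len(states), states[finalized]) // 1 true: only the finalized state survives
}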


@@ -1,6 +1,7 @@
 package state_test
 
 import (
+	"context"
 	"reflect"
 	"strconv"
 	"testing"
@@ -18,6 +19,7 @@ import (
 func TestBeaconState_ProtoBeaconStateCompatibility(t *testing.T) {
 	params.UseMinimalConfig()
+	ctx := context.Background()
 	genesis := setupGenesisState(t, 64)
 	customState, err := stateTrie.InitializeFromProto(genesis)
 	if err != nil {
@@ -29,7 +31,7 @@ func TestBeaconState_ProtoBeaconStateCompatibility(t *testing.T) {
 		t.Fatal("Cloned states did not match")
 	}
-	r1, err := customState.HashTreeRoot()
+	r1, err := customState.HashTreeRoot(ctx)
 	if err != nil {
 		t.Fatal(err)
 	}
@@ -47,7 +49,7 @@ func TestBeaconState_ProtoBeaconStateCompatibility(t *testing.T) {
 	if err := customState.SetBalances(balances); err != nil {
 		t.Fatal(err)
 	}
-	r1, err = customState.HashTreeRoot()
+	r1, err = customState.HashTreeRoot(ctx)
 	if err != nil {
 		t.Fatal(err)
 	}


@@ -127,6 +127,7 @@ go_test(
         "//shared/bls:go_default_library",
         "//shared/bytesutil:go_default_library",
         "//shared/params:go_default_library",
+        "//shared/roughtime:go_default_library",
         "//shared/testutil:go_default_library",
         "@com_github_gogo_protobuf//proto:go_default_library",
         "@com_github_kevinms_leakybucket_go//:go_default_library",


@@ -370,7 +370,7 @@ func connectPeers(t *testing.T, host *p2pt.TestP2P, data []*peerData, peerStatus
 	peer.Connect(host)
-	peerStatus.Add(peer.PeerID(), nil, network.DirOutbound)
+	peerStatus.Add(peer.PeerID(), nil, network.DirOutbound, []uint64{})
 	peerStatus.SetConnectionState(peer.PeerID(), peers.PeerConnected)
 	peerStatus.SetChainState(peer.PeerID(), &p2ppb.Status{
 		HeadForkVersion: params.BeaconConfig().GenesisForkVersion,


@@ -72,7 +72,7 @@ func newBlocksFetcher(ctx context.Context, cfg *blocksFetcherConfig) *blocksFetc
 	rateLimiter := leakybucket.NewCollector(
 		allowedBlocksPerSecond, /* rate */
 		allowedBlocksPerSecond, /* capacity */
-		false /* deleteEmptyBuckets */)
+		false /* deleteEmptyBuckets */)
 	return &blocksFetcher{
 		ctx: ctx,


@@ -370,7 +370,7 @@ func connectPeers(t *testing.T, host *p2pt.TestP2P, data []*peerData, peerStatus
 	peer.Connect(host)
-	peerStatus.Add(peer.PeerID(), nil, network.DirOutbound)
+	peerStatus.Add(peer.PeerID(), nil, network.DirOutbound, []uint64{})
 	peerStatus.SetConnectionState(peer.PeerID(), peers.PeerConnected)
 	peerStatus.SetChainState(peer.PeerID(), &p2ppb.Status{
 		HeadForkVersion: params.BeaconConfig().GenesisForkVersion,


@@ -10,6 +10,7 @@ import (
 	"github.com/prysmaticlabs/prysm/beacon-chain/core/helpers"
 	"github.com/prysmaticlabs/prysm/shared/bls"
 	"github.com/prysmaticlabs/prysm/shared/bytesutil"
+	"github.com/prysmaticlabs/prysm/shared/featureconfig"
 	"github.com/prysmaticlabs/prysm/shared/params"
 	"github.com/prysmaticlabs/prysm/shared/runutil"
 	"github.com/prysmaticlabs/prysm/shared/traceutil"
@@ -61,7 +62,8 @@ func (s *Service) processPendingAtts(ctx context.Context) error {
 		attestations := s.blkRootToPendingAtts[bRoot]
 		s.pendingAttsLock.RUnlock()
 		// Has the pending attestation's missing block arrived, and has the node processed the block yet?
-		if s.db.HasBlock(ctx, bRoot) && s.db.HasState(ctx, bRoot) {
+		hasStateSummary := featureconfig.Get().NewStateMgmt && s.db.HasStateSummary(ctx, bRoot)
+		if s.db.HasBlock(ctx, bRoot) && (s.db.HasState(ctx, bRoot) || hasStateSummary) {
 			numberOfBlocksRecoveredFromAtt.Inc()
 			for _, att := range attestations {
 				// The pending attestations can arrive in both aggregated and unaggregated forms,
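The widened predicate treats a saved state summary as sufficient when the NewStateMgmt feature flag is on. Isolated as a pure function, it reads like the sketch below (the store interface is illustrative, not the Prysm DB API):

package main

import "fmt"

// store abstracts the three lookups the readiness check performs.
type store interface {
	HasBlock(root [32]byte) bool
	HasState(root [32]byte) bool
	HasStateSummary(root [32]byte) bool
}

// canProcessPendingAtt reports whether a pending attestation's block has
// arrived with enough state data attached: a full state always suffices,
// and under the new state management a state summary does too.
func canProcessPendingAtt(db store, root [32]byte, newStateMgmt bool) bool {
	hasSummary := newStateMgmt && db.HasStateSummary(root)
	return db.HasBlock(root) && (db.HasState(root) || hasSummary)
}

// memStore is a trivial in-memory implementation for the demo.
type memStore struct{ blocks, states, summaries map[[32]byte]bool }

func (m memStore) HasBlock(r [32]byte) bool        { return m.blocks[r] }
func (m memStore) HasState(r [32]byte) bool        { return m.states[r] }
func (m memStore) HasStateSummary(r [32]byte) bool { return m.summaries[r] }

func main() {
	r := [32]byte{1}
	db := memStore{
		blocks:    map[[32]byte]bool{r: true},
		states:    map[[32]byte]bool{},
		summaries: map[[32]byte]bool{r: true},
	}
	fmt.Println(canProcessPendingAtt(db, r, true))  // true: summary suffices
	fmt.Println(canProcessPendingAtt(db, r, false)) // false: full state required
}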


@@ -21,6 +21,7 @@ import (
"github.com/prysmaticlabs/prysm/shared/attestationutil"
"github.com/prysmaticlabs/prysm/shared/bls"
"github.com/prysmaticlabs/prysm/shared/params"
"github.com/prysmaticlabs/prysm/shared/roughtime"
"github.com/prysmaticlabs/prysm/shared/testutil"
logTest "github.com/sirupsen/logrus/hooks/test"
)
@@ -35,14 +36,14 @@ func TestProcessPendingAtts_NoBlockRequestBlock(t *testing.T) {
if len(p1.Host.Network().Peers()) != 1 {
t.Error("Expected peers to be connected")
}
p1.Peers().Add(p2.PeerID(), nil, network.DirOutbound)
p1.Peers().Add(p2.PeerID(), nil, network.DirOutbound, []uint64{})
p1.Peers().SetConnectionState(p2.PeerID(), peers.PeerConnected)
p1.Peers().SetChainState(p2.PeerID(), &pb.Status{})
r := &Service{
p2p: p1,
db: db,
chain: &mock.ChainService{},
chain: &mock.ChainService{Genesis: roughtime.Now()},
blkRootToPendingAtts: make(map[[32]byte][]*ethpb.AggregateAttestationAndProof),
}
@@ -64,7 +65,7 @@ func TestProcessPendingAtts_HasBlockSaveUnAggregatedAtt(t *testing.T) {
r := &Service{
p2p: p1,
db: db,
chain: &mock.ChainService{},
chain: &mock.ChainService{Genesis: roughtime.Now()},
blkRootToPendingAtts: make(map[[32]byte][]*ethpb.AggregateAttestationAndProof),
attPool: attestations.NewPool(),
}
@@ -128,7 +129,7 @@ func TestProcessPendingAtts_HasBlockSaveAggregatedAtt(t *testing.T) {
if err != nil {
t.Error(err)
}
attestingIndices, err := attestationutil.AttestingIndices(att.AggregationBits, committee)
attestingIndices := attestationutil.AttestingIndices(att.AggregationBits, committee)
if err != nil {
t.Error(err)
}

Some files were not shown because too many files have changed in this diff.