Compare commits

..

28 Commits

Author SHA1 Message Date
nisdas
fac05d4d38 Merge branch 'develop' of https://github.com/prysmaticlabs/geth-sharding into copyOnWrite 2022-04-16 11:37:02 +08:00
Radosław Kapka
a0679c70d3 Service constructors and Start() - better separation of concerns (#10532)
* move waitForStateInitialization to Start

* remove channel

* handle error in test

* fix service tests

* use fatal log

* deterministic-genesis

* sync

* rpc

* monitor

* validator-client

* test fixes

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-04-16 02:45:35 +00:00
Leo Lara
7f53700306 Remove feature and flag Pyrmont testnet (#10522)
* Remove feature and flag Pyrmont testnet

* Remove unused parameter

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-04-15 15:30:32 +00:00
Radosław Kapka
3b69f7a196 Simplify Initial Sync (#10523)
* move waitForStateInitialization to Start

* remove channel

* handle error in test

* fix service tests

* use fatal log

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-04-15 13:32:31 +00:00
Nishant Das
72562dcf3a Fix Another Off By 1 In Our Finalized Trie (#10524)
* fix everything

* fix more

* fix test

* Update beacon-chain/powchain/service.go

* Update beacon-chain/powchain/service.go

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-04-15 12:51:56 +00:00
Preston Van Loon
cc23b8311a Static analysis: gocognit (#10527)
* Add gocognit to static analyzers with a very high threshold

* edit readme and sort analyzers
2022-04-15 06:29:07 +00:00
terence tsao
cbe54fe3f9 Handle nil execution block response when logging TTD (#10502) 2022-04-13 19:47:04 -07:00
Nishant Das
1b6adca3ca Handle Finalized Deposit Insertion Better (#10517) 2022-04-13 10:08:19 +02:00
Nishant Das
1651649e5a Update to 1.17.9 (#10518) 2022-04-13 05:25:13 +00:00
Potuz
56187edb98 Use forkchoice first when checking canonical status (#10516) 2022-04-12 21:34:13 -03:00
Preston Van Loon
ecad5bbffc e2e: Provide e2e config yaml to web3signer (#10123)
* e2e: Provide e2e config yaml to web3signer

* fix build for //testing/endtoend:go_default_test

* Update with web3signer with teku fixes

* buildifier

* Add slasher case for web3signer

* Update testing/endtoend/minimal_e2e_test.go

* Update web3signer to 21.10.6

* Revert "Update web3signer to 21.10.6"

This reverts commit bdf3c408f2.

* Remove slasher part of web3signer e2e tests

* Revert "Remove slasher part of web3signer e2e tests"

This reverts commit 24249802ae.

* fix slasher web3signer test

* fixing build

* updating yaml to match testnet_e2e_config.go

* trying a different order to the e2e test and adding a log

* trying different way to kill process

* handling unhandled error

* testing changes to config WIP

* fixing bazel WIP

* fixing build

* ignoring test for now to test

* fixing bazel

* Test only web3signer e2e

* rolling back some commits to test

* fixing bazel

* trying an updated web3signer version

* changing flag to match version

* trying current version of develop for web3signer

* testing not using the --network property

* addressing build error

* testing config change

* reverting to go back to using the network file

* testing adding epochs per sync committee period

* rolling back configs

* removing check to test

* adding log to get sync committee duties and changing bellatrix fork epoch to something large and altair to epoch 1

* fixing bazel

* updating epoch in config file

* fixing more discrepancies between the configurations

* removing unused yaml

* removing GoLand-added duplicates

* reverting using network minimal

* fixing bug

* rolling back some changes

* rolling back some changes

* rolling back changes

* making sure web3signer test doesn't make it to bellatrix fork yet

* reverting changes I did not touch

* undo comment

* Update testing/endtoend/deps.bzl

* Apply suggestions from code review

* rm nl

* fix //testing/endtoend:go_mainnet_test

* Remove unnecessary dep

* fix //testing/endtoend:go_mainnet_test

* addressing review comments

* fixing build and internal conflict

* removing web3signer slasher test as it's unneeded due to the interface nature of key signing; the regular slashing test is enough

* fix: the validator we fetch from the binary can only run before Altair; if you add that and the web3signer, these things can never run together, as the web3signer sets it to before Bellatrix

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: James He <james@prysmaticlabs.com>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2022-04-12 16:57:46 +00:00
Nishant Das
407182387b fix it all (#10511) 2022-04-12 19:50:29 +08:00
terence tsao
ad0b0b503d Move GetTerminalBlockHash to powchain engine (#10500)
* Move GetTerminalBlockHash to powchain engine

* Update service.go

* Update service.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-04-12 10:19:07 +00:00
terence tsao
58f4ba758c Metrics tracking EE VALID/SYNCING/INVALID response counter (#10504)
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2022-04-12 09:51:13 +00:00
james-prysm
64f64f06bf Remote Key Manager API(web3signer) (#10302)
* removing flag requirement, can run web3signer without predefined public keys

* placeholders for remote-keymanager-api

* adding proto and accountschangedfeed

* updating generated code

* fix imports

* fixing interface

* adding work in progress apimiddleware code

* started implementing functions for remote keymanager api

* fixing generated code from proto

* fixing protos

* fixing import format

* fixing proto generation again; didn't fix the first time

* fixing imports again

* continuing on implementing functions

* implementing add function

* implementing delete API function

* handling errors for API

* removing unusedcode and fixing format

* fixing bazel

* wip enable --web when running web3signer

* fixing wallet check for web3signer

* fixing apis

* adding list remote keys unit test

* import remote keys test

* delete pubkeys tests

* moving location of tests

* adding unit tests

* adding placeholder functions

* adding more unit tests

* fixing bazel

* fixing build

* fixing already slice issue with unit test

* fixing linting

* Update validator/client/validator.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* Update validator/keymanager/types.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* Update validator/node/node.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* Update validator/keymanager/types.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* Update validator/client/validator.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* adding comment on proto based on review

* Update validator/keymanager/remote-web3signer/keymanager.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/keymanager/remote-web3signer/keymanager.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/rpc/standard_api.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/rpc/standard_api.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/rpc/standard_api.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/rpc/standard_api.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* adding generated code based on review

* updating based on feedback

* fixing imports

* fixing formatting

* Update validator/rpc/standard_api.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* fixing event call

* fixing dependency

* updating bazel

* Update validator/rpc/standard_api.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/rpc/standard_api.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/rpc/standard_api.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/rpc/standard_api.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* addressing comment from review

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2022-04-11 16:05:40 -04:00
terence tsao
e70055733f Save state to DB after proposer boost (#10509) 2022-04-11 16:51:49 +00:00
Radosław Kapka
36e4f49af0 Bellatrix evaluators (#10506)
* defensive nil check

* separate ExecutionPayload/Header from codegen

* tell bazel about this new file

* Merge: support terminal difficulty override (#9769)

* Fix finding terminal block hash calculation

* Update mainnet_config.go

* Update beacon_block.pb.go

* Various fixes to pass all spec tests for Merge (#9777)

* Proper upgrade altair to merge state

* Use uint64 for ttd

* Correctly upgrade to merge state + object mapping fixes

* Use proper receive block path for initial syncing

* Disable contract lookback

* Disable deposit contract lookback

* Go fmt

* Merge: switch from go bindings to raw rpc calls (#9803)

* Disable genesis ETH1.0 chain header logging

* Update htrutils.go

* all gossip tests passing

* Remove gas validations

* Update penalty params for Merge

* Fix gossip and tx size limits for the merge part 1

* Remove extraneous p2p condition

* Add and use

* Add and use TBH_ACTIVATION_EPOCH

* Update WORKSPACE

* Update Kintsugi engine API (#9865)

* Kintsugi ssz (#9867)

* All spec tests pass

* Update spec test shas

* Update Kintsugi consensus implementations (#9872)

* Remove secp256k1

* Remove unused merge genesis state gen tool

* Manually override nil transaction field. M2 works

* Fix bad hex conversion

* Change Gossip message size and Chunk SIze from 1 MB t0 10MB (#9860)

* change gossip size and chunk size after merge

* change ssz to accommodate both changes

* gofmt config file

* add testcase for merge MsgId

* Update beacon-chain/p2p/message_id.go

Change MB to Mib in comment

Co-authored-by: terence tsao <terence@prysmaticlabs.com>

* change function name from altairMsgID to postAltairMsgID

Co-authored-by: terence tsao <terence@prysmaticlabs.com>

* Sync with develop

* Merge branch 'develop' of github.com:prysmaticlabs/prysm into kintsugi

* Update state_trie.go

* Clean up conflicts

* Fix build

* Update config to devnet1

* Fix state merge

* Handle merge test case for update balance

* Fix build

* State pkg cleanup

* Fix a bug with loading mainnet state

* Fix transactions root

* Add v2 endpoint for merge blocks (#9802)

* Add V2 blocks endpoint for merge blocks

* Update beacon-chain/rpc/apimiddleware/structs.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* go mod

* fix transactions

* Terence's comments

* add missing file

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Sync

* Go mod tidy

* change EP field names

* latest kintusgi execution api

* fix conflicts

* converting base fee to big endian format (#10018)

* ReverseByteOrder function does not mutate the input

* sync with develop

* use merge gossip sizes

* correct gossip sizes this time

* visibility

* clean ups

* Sync with develop, fix payload nil check bug

* Speed up syncing, hide cosmetic errors

* Sync with develop

* Clean up after sync

* Update generate_keys.go

* sync with develop

* Update mainnet_config.go

* Clean ups

* Sync optimistically candidate blocks (#10193)

* Revert "Sync optimistically candidate blocks (#10193)"

This reverts commit f99a0419ef.

* Sync optimistically candidate blocks (#10193)

* allow optimistic sync

* Fix merge transition block validation

* Update proposer.go

* Sync with develop

* delete deprecated client, update testnet flag

* Change optimistic logic (#10194)

* Logs and err handling

* Fix build

* Clean ups

* Add back get payload

* c

* Done

* Rm uncommented

* Optimistic sync: prysm validator rpcs (#10200)

* Logs to reproduce

* Use pointers

* Use pointers

* Use pointers

* Update json_marshal_unmarshal.go

* Fix marshal

* Update json_marshal_unmarshal.go

* Log

* string total diff

* str

* marshal un

* set string

* json

* gaz

* Comment out optimistic status

* remove kiln flag here (#10269)

* Sync with develop

* Sync with develop

* clean ups

* refactor engine calls

* Update process_block.go

* Fix deadlock, uncomment duty opt sync

* Update proposer_execution_payload.go

* Sync with develop

* Rm post state check

* Bypass eth1 data checks

* Update proposer_execution_payload.go

* Return early if ttd is not reached

* Sync with develop

* Update process_block.go

* Update receive_block.go

* Update bzl

* Revert "Update receive_block.go"

This reverts commit 5b4a87c512.

* Fix run time

* add in all the fixes

* fix evaluator bugs

* latest fixes

* sum

* fix to be configurable

* Update go.mod

* Fix AltairCompatible to account for future state version

* Update proposer_execution_payload.go

* fix broken conditional checks

* fix all issues

* Handle pre state Altair with valid payload

* Handle pre state Altair with valid payload

* Log bellatrix fields

* Update log.go

* Revert "fix broken conditional checks"

This reverts commit e118db6c20.

* LH multiclient working

* Friendly fee recipient log

* Remove extra SetOptimisticToValid

* fix race

* fix test

* Fix base fee per gas

* Fix notifypayload headroot

* tx fuzzer

* clean up with develop branch

* save progress

* 200tx/block

* add LH flags

* Sync with develop

* cleanup

* cleanup

* hash

* fix build

* fix test

* fix go check

* fmt

* gosec

* Blocked stream

(cherry picked from commit f362af9862db680b6352692217ad5c08d44a1e86)

# Conflicts:
#	proto/prysm/v1alpha1/validator.pb.go

* remove duplicate param

* test

* revert some test changes

* Initial version of EE tx count

* evaluate all txs in epoch

* remove logs

* uncomment tests

* remove unwanted change

* parameterize ExpectedExecEngineTxsThreshold

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: Zahoor Mohamed <zahoor@zahoor.in>
Co-authored-by: kasey <489222+kasey@users.noreply.github.com>
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
Co-authored-by: Zahoor Mohamed <zahoor@prysmaticlabs.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2022-04-11 13:45:22 +00:00
terence tsao
d98428dec4 Can prune nodes from canonical and payload maps (#10496)
* Can prune nodes from canonical and payload maps

* Update store_test.go

* prune payload hashes, canonical nodes and better testing

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2022-04-11 11:08:50 +00:00
Nishant Das
00b92e01d3 Fetch Non Finalized Deposits Better (#10505) 2022-04-11 09:59:22 +02:00
Potuz
ca5adbf7e4 Fix optimistic logging (#10503) 2022-04-09 23:09:04 +00:00
Nishant Das
a083b7a0a5 Set Auth Differently In Our Powchain Constructor (#10501) 2022-04-09 11:03:05 +02:00
Raul Jordan
dd5995b665 Proper Connection Management for ETH JSON-RPC Client for Startup and Runtime (#10498)
* begin connection management revamp

* kiln runs

* retry

* reconnect

* add

* rpc connect fix

* remove logging

* logs

* retry

* default value for web3flag

* test pass

* comments

* ensure auth works
2022-04-09 09:28:40 +08:00
nisdas
f5afe81b5d Merge branch 'develop' of https://github.com/prysmaticlabs/geth-sharding into copyOnWrite 2022-03-24 20:35:22 +08:00
nisdas
7ab72d4645 save current changes 2022-03-22 21:55:55 +08:00
nisdas
9eb968d1ca Merge branch 'develop' of https://github.com/prysmaticlabs/geth-sharding into copyOnWrite 2022-03-21 19:17:15 +08:00
nisdas
6bb7617a99 fixes 2022-03-01 14:46:56 +08:00
nisdas
497bbdbeb5 randao mixes 2022-02-28 08:13:57 +08:00
nisdas
5245c7500f add progress 2022-02-26 13:24:07 +08:00
162 changed files with 5403 additions and 3161 deletions

View File

@@ -115,18 +115,19 @@ nogo(
"@org_golang_x_tools//go/analysis/passes/assign:go_default_library",
"@org_golang_x_tools//go/analysis/passes/inspect:go_default_library",
"@org_golang_x_tools//go/analysis/passes/asmdecl:go_default_library",
"//tools/analyzers/maligned:go_default_library",
"//tools/analyzers/comparesame:go_default_library",
"//tools/analyzers/cryptorand:go_default_library",
"//tools/analyzers/errcheck:go_default_library",
"//tools/analyzers/featureconfig:go_default_library",
"//tools/analyzers/comparesame:go_default_library",
"//tools/analyzers/shadowpredecl:go_default_library",
"//tools/analyzers/nop:go_default_library",
"//tools/analyzers/slicedirect:go_default_library",
"//tools/analyzers/interfacechecker:go_default_library",
"//tools/analyzers/gocognit:go_default_library",
"//tools/analyzers/ineffassign:go_default_library",
"//tools/analyzers/interfacechecker:go_default_library",
"//tools/analyzers/maligned:go_default_library",
"//tools/analyzers/nop:go_default_library",
"//tools/analyzers/properpermissions:go_default_library",
"//tools/analyzers/recursivelock:go_default_library",
"//tools/analyzers/shadowpredecl:go_default_library",
"//tools/analyzers/slicedirect:go_default_library",
"//tools/analyzers/uintcast:go_default_library",
] + select({
# nogo checks that fail with coverage enabled.

View File

@@ -176,7 +176,7 @@ load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_depe
go_rules_dependencies()
go_register_toolchains(
go_version = "1.17.6",
go_version = "1.17.9",
nogo = "@//:nogo",
)

View File

@@ -40,15 +40,15 @@ func (od *OriginData) CheckpointString() string {
// SaveBlock saves the downloaded block to a unique file in the given path.
// For readability and collision avoidance, the file name includes: type, config name, slot and root
func (od *OriginData) SaveBlock(dir string) (string, error) {
blockPath := path.Join(dir, fname("block", od.cf, od.b.Block().Slot(), od.wsd.BlockRoot))
return blockPath, file.WriteFile(blockPath, od.BlockBytes())
blockPath := path.Join(dir, fname("state", od.cf, od.st.Slot(), od.wsd.BlockRoot))
return blockPath, file.WriteFile(blockPath, od.sb)
}
// SaveState saves the downloaded state to a unique file in the given path.
// For readability and collision avoidance, the file name includes: type, config name, slot and root
func (od *OriginData) SaveState(dir string) (string, error) {
statePath := path.Join(dir, fname("state", od.cf, od.st.Slot(), od.wsd.StateRoot))
return statePath, file.WriteFile(statePath, od.StateBytes())
return statePath, file.WriteFile(statePath, od.sb)
}
// StateBytes returns the ssz-encoded bytes of the downloaded BeaconState value.

View File

@@ -66,6 +66,7 @@ go_library(
"//config/params:go_default_library",
"//crypto/bls:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//monitoring/tracing:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/eth/v1:go_default_library",

View File

@@ -273,13 +273,13 @@ func (s *Service) CurrentFork() *ethpb.Fork {
// IsCanonical returns true if the input block root is part of the canonical chain.
func (s *Service) IsCanonical(ctx context.Context, blockRoot [32]byte) (bool, error) {
// If the block has been finalized, the block will always be part of the canonical chain.
if s.cfg.BeaconDB.IsFinalizedBlock(ctx, blockRoot) {
return true, nil
// If the block has not been finalized, check fork choice store to see if the block is canonical
if s.cfg.ForkChoiceStore.HasNode(blockRoot) {
return s.cfg.ForkChoiceStore.IsCanonical(blockRoot), nil
}
// If the block has not been finalized, check fork choice store to see if the block is canonical
return s.cfg.ForkChoiceStore.IsCanonical(blockRoot), nil
// If the block has been finalized, the block will always be part of the canonical chain.
return s.cfg.BeaconDB.IsFinalizedBlock(ctx, blockRoot), nil
}
// ChainHeads returns all possible chain heads (leaves of fork choice tree).
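For reference, a minimal Go sketch of the reordered check; the `forkchoiceStore` and `finalizedIndex` interfaces are hypothetical stand-ins for `s.cfg.ForkChoiceStore` and the finalized-block index in `s.cfg.BeaconDB`:

```go
package chain

// Hypothetical stand-ins for the fork choice store and the DB's
// finalized-block index.
type forkchoiceStore interface {
	HasNode(root [32]byte) bool
	IsCanonical(root [32]byte) bool
}

type finalizedIndex interface {
	IsFinalizedBlock(root [32]byte) bool
}

// isCanonical mirrors the new ordering: roots still tracked by fork
// choice are resolved there; anything already pruned from the fork
// choice tree is canonical exactly when it is finalized.
func isCanonical(fc forkchoiceStore, db finalizedIndex, root [32]byte) bool {
	if fc.HasNode(root) {
		return fc.IsCanonical(root)
	}
	return db.IsFinalizedBlock(root)
}
```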

View File

@@ -138,6 +138,26 @@ var (
Name: "state_balance_cache_miss",
Help: "Count the number of state balance cache hits.",
})
newPayloadValidNodeCount = promauto.NewCounter(prometheus.CounterOpts{
Name: "new_payload_valid_node_count",
Help: "Count the number of valid nodes after newPayload EE call",
})
newPayloadOptimisticNodeCount = promauto.NewCounter(prometheus.CounterOpts{
Name: "new_payload_optimistic_node_count",
Help: "Count the number of optimistic nodes after newPayload EE call",
})
newPayloadInvalidNodeCount = promauto.NewCounter(prometheus.CounterOpts{
Name: "new_payload_invalid_node_count",
Help: "Count the number of invalid nodes after newPayload EE call",
})
forkchoiceUpdatedValidNodeCount = promauto.NewCounter(prometheus.CounterOpts{
Name: "forkchoice_updated_valid_node_count",
Help: "Count the number of valid nodes after forkchoiceUpdated EE call",
})
forkchoiceUpdatedOptimisticNodeCount = promauto.NewCounter(prometheus.CounterOpts{
Name: "forkchoice_updated_optimistic_node_count",
Help: "Count the number of optimistic nodes after forkchoiceUpdated EE call",
})
)
// reportSlotMetrics reports slot related metrics.
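A condensed sketch of how one of these counters is wired up, using only the `client_golang` API already visible in the hunk; `onValidResponse` is a hypothetical call site:

```go
package metrics

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// promauto registers the counter with the default registry at package
// init, so it is exported on the node's /metrics endpoint automatically.
var newPayloadValidCount = promauto.NewCounter(prometheus.CounterOpts{
	Name: "new_payload_valid_node_count",
	Help: "Count the number of valid nodes after newPayload EE call",
})

// onValidResponse is a hypothetical call site: each VALID response from
// the execution engine bumps the counter by one.
func onValidResponse() {
	newPayloadValidCount.Inc()
}
```

With one counter per status, dashboards can compare per-status rates to spot an execution engine that is stuck syncing or rejecting payloads.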

View File

@@ -46,15 +46,31 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, headState state.Be
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload")
}
finalizedHash, err := s.getFinalizedPayloadHash(ctx, finalizedRoot)
finalizedBlock, err := s.cfg.BeaconDB.Block(ctx, s.ensureRootNotZeros(finalizedRoot))
if err != nil {
return nil, errors.Wrap(err, "could not get finalized payload hash")
return nil, errors.Wrap(err, "could not get finalized block")
}
if finalizedBlock == nil || finalizedBlock.IsNil() {
finalizedBlock = s.getInitSyncBlock(s.ensureRootNotZeros(finalizedRoot))
if finalizedBlock == nil || finalizedBlock.IsNil() {
return nil, errors.Errorf("finalized block with root %#x does not exist in the db or our cache", s.ensureRootNotZeros(finalizedRoot))
}
}
var finalizedHash []byte
if blocks.IsPreBellatrixVersion(finalizedBlock.Block().Version()) {
finalizedHash = params.BeaconConfig().ZeroHash[:]
} else {
payload, err := finalizedBlock.Block().Body().ExecutionPayload()
if err != nil {
return nil, errors.Wrap(err, "could not get finalized block execution payload")
}
finalizedHash = payload.BlockHash
}
fcs := &enginev1.ForkchoiceState{
HeadBlockHash: headPayload.BlockHash,
SafeBlockHash: headPayload.BlockHash,
FinalizedBlockHash: finalizedHash[:],
FinalizedBlockHash: finalizedHash,
}
nextSlot := s.CurrentSlot() + 1 // Cache payload ID for next slot proposer.
@@ -67,16 +83,18 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, headState state.Be
if err != nil {
switch err {
case powchain.ErrAcceptedSyncingPayloadStatus:
forkchoiceUpdatedOptimisticNodeCount.Inc()
log.WithFields(logrus.Fields{
"headSlot": headBlk.Slot(),
"headPayloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(headPayload.BlockHash)),
"finalizedPayloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(finalizedHash[:])),
"finalizedPayloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(finalizedHash)),
}).Info("Called fork choice updated with optimistic block")
return payloadID, nil
default:
return nil, errors.Wrap(err, "could not notify forkchoice update from execution engine")
}
}
forkchoiceUpdatedValidNodeCount.Inc()
if err := s.cfg.ForkChoiceStore.SetOptimisticToValid(ctx, headRoot); err != nil {
return nil, errors.Wrap(err, "could not set block to valid")
}
@@ -88,55 +106,6 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, headState state.Be
return payloadID, nil
}
// getFinalizedPayloadHash returns the finalized payload hash for the given finalized block root.
// It checks the following in order:
// 1. The finalized block exists in db
// 2. The finalized block exists in initial sync block cache
// 3. The finalized block is the weak subjectivity block and exists in db
// Error is returned if the finalized block is not found from above.
func (s *Service) getFinalizedPayloadHash(ctx context.Context, finalizedRoot [32]byte) ([32]byte, error) {
b, err := s.cfg.BeaconDB.Block(ctx, s.ensureRootNotZeros(finalizedRoot))
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not get finalized block")
}
if b != nil {
return getPayloadHash(b.Block())
}
b = s.getInitSyncBlock(finalizedRoot)
if b != nil {
return getPayloadHash(b.Block())
}
r, err := s.cfg.BeaconDB.OriginCheckpointBlockRoot(ctx)
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not get finalized block")
}
b, err = s.cfg.BeaconDB.Block(ctx, r)
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not get finalized block")
}
if b != nil {
return getPayloadHash(b.Block())
}
return [32]byte{}, errors.Errorf("finalized block with root %#x does not exist in the db or our cache", s.ensureRootNotZeros(finalizedRoot))
}
// getPayloadHash returns the payload hash for the input given block.
// zeros are returned if the block is older than bellatrix.
func getPayloadHash(b block.BeaconBlock) ([32]byte, error) {
if blocks.IsPreBellatrixVersion(b.Version()) {
return params.BeaconConfig().ZeroHash, nil
}
payload, err := b.Body().ExecutionPayload()
if err != nil {
return [32]byte{}, errors.Wrap(err, "could not get finalized block execution payload")
}
return bytesutil.ToBytes32(payload.BlockHash), nil
}
// notifyNewPayload signals the execution engine on a new payload.
// It returns true if the EL has returned VALID for the block
func (s *Service) notifyNewPayload(ctx context.Context, preStateVersion, postStateVersion int,
@@ -168,12 +137,14 @@ func (s *Service) notifyNewPayload(ctx context.Context, preStateVersion, postSta
if err != nil {
switch err {
case powchain.ErrAcceptedSyncingPayloadStatus:
newPayloadOptimisticNodeCount.Inc()
log.WithFields(logrus.Fields{
"slot": blk.Block().Slot(),
"blockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash)),
"slot": blk.Block().Slot(),
"payloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash)),
}).Info("Called new payload with optimistic block")
return false, nil
case powchain.ErrInvalidPayloadStatus:
newPayloadInvalidNodeCount.Inc()
root, err := blk.Block().HashTreeRoot()
if err != nil {
return false, err
@@ -190,7 +161,7 @@ func (s *Service) notifyNewPayload(ctx context.Context, preStateVersion, postSta
return false, errors.Wrap(err, "could not validate execution payload from execution engine")
}
}
newPayloadValidNodeCount.Inc()
// During the transition event, the transition block should be verified for sanity.
if blocks.IsPreBellatrixVersion(preStateVersion) {
// Handle case where pre-state is Altair but block contains payload.
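The finalized-hash decision that used to live in `getFinalizedPayloadHash`/`getPayloadHash` (deleted in this hunk) is now inlined. A reduced sketch of that decision, with `beaconBlock` as a hypothetical stand-in for the wrapped block type:

```go
package chain

// beaconBlock is a hypothetical reduction of the wrapped block;
// PayloadBlockHash stands in for Body().ExecutionPayload().BlockHash.
type beaconBlock interface {
	IsPreBellatrix() bool
	PayloadBlockHash() ([]byte, error)
}

var zeroHash [32]byte

// finalizedPayloadHash mirrors the inlined logic: blocks older than
// Bellatrix carry no execution payload, so the engine is handed the
// zero hash; otherwise the payload's block hash is forwarded unchanged.
func finalizedPayloadHash(b beaconBlock) ([]byte, error) {
	if b.IsPreBellatrix() {
		return zeroHash[:], nil
	}
	return b.PayloadBlockHash()
}
```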

View File

@@ -792,74 +792,3 @@ func TestService_removeInvalidBlockAndState(t *testing.T) {
require.NoError(t, err)
require.Equal(t, false, has)
}
func TestService_getFinalizedPayloadHash(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB)),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
// Use the block in DB
b := util.NewBeaconBlockBellatrix()
b.Block.Body.ExecutionPayload.BlockHash = bytesutil.PadTo([]byte("hi"), 32)
blk, err := wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, blk))
h, err := service.getFinalizedPayloadHash(ctx, r)
require.NoError(t, err)
require.Equal(t, bytesutil.ToBytes32(b.Block.Body.ExecutionPayload.BlockHash), h)
// Use the block in init sync cache
b = util.NewBeaconBlockBellatrix()
b.Block.Body.ExecutionPayload.BlockHash = bytesutil.PadTo([]byte("hello"), 32)
blk, err = wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
r, err = b.Block.HashTreeRoot()
require.NoError(t, err)
service.initSyncBlocks[r] = blk
h, err = service.getFinalizedPayloadHash(ctx, r)
require.NoError(t, err)
require.Equal(t, bytesutil.ToBytes32(b.Block.Body.ExecutionPayload.BlockHash), h)
// Use the weak subjectivity sync block
b = util.NewBeaconBlockBellatrix()
b.Block.Body.ExecutionPayload.BlockHash = bytesutil.PadTo([]byte("howdy"), 32)
blk, err = wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
r, err = b.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, blk))
require.NoError(t, service.cfg.BeaconDB.SaveOriginCheckpointBlockRoot(ctx, r))
h, err = service.getFinalizedPayloadHash(ctx, r)
require.NoError(t, err)
require.Equal(t, bytesutil.ToBytes32(b.Block.Body.ExecutionPayload.BlockHash), h)
// None of the above should error
require.NoError(t, service.cfg.BeaconDB.SaveOriginCheckpointBlockRoot(ctx, [32]byte{'a'}))
_, err = service.getFinalizedPayloadHash(ctx, [32]byte{'a'})
require.ErrorContains(t, "does not exist in the db or our cache", err)
}
func TestService_getPayloadHash(t *testing.T) {
// Pre-bellatrix
blk, err := wrapper.WrappedSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
h, err := getPayloadHash(blk.Block())
require.NoError(t, err)
require.Equal(t, [32]byte{}, h)
// Post bellatrix
b := util.NewBeaconBlockBellatrix()
b.Block.Body.ExecutionPayload.BlockHash = bytesutil.PadTo([]byte("hi"), 32)
blk, err = wrapper.WrappedSignedBeaconBlock(b)
require.NoError(t, err)
h, err = getPayloadHash(blk.Block())
require.NoError(t, err)
require.Equal(t, bytesutil.ToBytes32(bytesutil.PadTo([]byte("hi"), 32)), h)
}

View File

@@ -127,8 +127,8 @@ func (s *Service) onBlock(ctx context.Context, signed block.SignedBeaconBlock, b
}
}
if err := s.savePostStateInfo(ctx, blockRoot, signed, postState, false /* reg sync */); err != nil {
return err
if err := s.insertBlockAndAttestationsToForkChoiceStore(ctx, signed.Block(), blockRoot, postState); err != nil {
return errors.Wrapf(err, "could not insert block %d to fork choice store", signed.Block().Slot())
}
if isValidPayload {
if err := s.cfg.ForkChoiceStore.SetOptimisticToValid(ctx, blockRoot); err != nil {
@@ -148,6 +148,9 @@ func (s *Service) onBlock(ctx context.Context, signed block.SignedBeaconBlock, b
return err
}
if err := s.savePostStateInfo(ctx, blockRoot, signed, postState, false /* reg sync */); err != nil {
return err
}
// If slasher is configured, forward the attestations in the block via
// an event feed for processing.
if features.Get().EnableSlasher {
@@ -506,7 +509,6 @@ func (s *Service) handleEpochBoundary(ctx context.Context, postState state.Beaco
if postState.Slot()+1 == s.nextEpochBoundarySlot {
// Update caches for the next epoch at epoch boundary slot - 1.
log.Infof("UpdateCommitteeCache from handleEpochBoundary (postState.Slot()+1 == s.nextEpochBoundarySlot), slot=%d, epoch=%d", postState.Slot(), coreTime.CurrentEpoch(postState))
if err := helpers.UpdateCommitteeCache(postState, coreTime.NextEpoch(postState)); err != nil {
return err
}
@@ -515,7 +517,6 @@ func (s *Service) handleEpochBoundary(ctx context.Context, postState state.Beaco
if err != nil {
return err
}
log.Info("UpdateProposerIndicesInCache from handleEpochBoundary (postState.Slot()+1 == s.nextEpochBoundarySlot)")
if err := helpers.UpdateProposerIndicesInCache(ctx, copied); err != nil {
return err
}
@@ -531,12 +532,9 @@ func (s *Service) handleEpochBoundary(ctx context.Context, postState state.Beaco
// Update caches at epoch boundary slot.
// The following updates have short cut to return nil cheaply if fulfilled during boundary slot - 1.
log.Info("UpdateCommitteeCache from handleEpochBoundary (postState.Slot() >= s.nextEpochBoundarySlot)")
if err := helpers.UpdateCommitteeCache(postState, coreTime.CurrentEpoch(postState)); err != nil {
return err
}
log.Info("UpdateProposerIndicesInCache from handleEpochBoundary (postState.Slot() >= s.nextEpochBoundarySlot)")
if err := helpers.UpdateProposerIndicesInCache(ctx, postState); err != nil {
return err
}
@@ -614,9 +612,6 @@ func (s *Service) savePostStateInfo(ctx context.Context, r [32]byte, b block.Sig
if err := s.cfg.StateGen.SaveState(ctx, r, st); err != nil {
return errors.Wrap(err, "could not save state")
}
if err := s.insertBlockAndAttestationsToForkChoiceStore(ctx, b.Block(), r, st); err != nil {
return errors.Wrapf(err, "could not insert block %d to fork choice store", b.Block().Slot())
}
return nil
}
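A hypothetical condensation of the reordered `onBlock` steps: the block now enters the fork choice store before the post-state is persisted, so its optimistic status can be flipped as soon as the engine reports VALID:

```go
package chain

type signedBlock struct{}
type beaconState struct{}

// service is a stand-in exposing just the three steps whose order changes.
type service struct {
	insertToForkChoice   func(signedBlock, [32]byte, beaconState) error
	setOptimisticToValid func([32]byte) error
	savePostStateInfo    func([32]byte, signedBlock, beaconState) error
}

func (s *service) onBlockSketch(blk signedBlock, root [32]byte, st beaconState, isValidPayload bool) error {
	// 1. Fork choice must know the block before its status can change.
	if err := s.insertToForkChoice(blk, root, st); err != nil {
		return err
	}
	// 2. De-optimise the node if the execution engine said VALID.
	if isValidPayload {
		if err := s.setOptimisticToValid(root); err != nil {
			return err
		}
	}
	// 3. Only then persist the block and post-state.
	return s.savePostStateInfo(root, blk, st)
}
```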

View File

@@ -11,6 +11,7 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
mathutil "github.com/prysmaticlabs/prysm/math"
"github.com/prysmaticlabs/prysm/monitoring/tracing"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1/block"
@@ -402,10 +403,17 @@ func (s *Service) insertFinalizedDeposits(ctx context.Context, fRoot [32]byte) e
// We update the cache up to the last deposit index in the finalized block's state.
// We can be confident that these deposits will be included in some block
// because the Eth1 follow distance makes such long-range reorgs extremely unlikely.
eth1DepositIndex := int64(finalizedState.Eth1Data().DepositCount - 1)
s.cfg.DepositCache.InsertFinalizedDeposits(ctx, eth1DepositIndex)
eth1DepositIndex, err := mathutil.Int(finalizedState.Eth1DepositIndex())
if err != nil {
return errors.Wrap(err, "could not cast eth1 deposit index")
}
// The deposit index in the state is always the index of the next deposit
// to be included (rather than the last one to be processed). This was most likely
// done as the state cannot represent signed integers.
eth1DepositIndex -= 1
s.cfg.DepositCache.InsertFinalizedDeposits(ctx, int64(eth1DepositIndex))
// Deposit proofs are only used during state transition and can be safely removed to save space.
if err = s.cfg.DepositCache.PruneProofs(ctx, eth1DepositIndex); err != nil {
if err = s.cfg.DepositCache.PruneProofs(ctx, int64(eth1DepositIndex)); err != nil {
return errors.Wrap(err, "could not prune deposit proofs")
}
return nil
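A worked example of the off-by-one this hunk fixes, using the value the updated test sets (`Eth1DepositIndex = 8`); the old code derived the index from `Eth1Data().DepositCount - 1` instead, which here would have finalized up to 9 even though the state had only processed deposits 0 through 7:

```go
package main

import "fmt"

func main() {
	// The state's Eth1DepositIndex always points at the *next* deposit
	// to be included, so the last processed deposit sits one behind it.
	eth1DepositIndex := uint64(8) // value set by the updated test
	lastFinalized := int64(eth1DepositIndex) - 1
	fmt.Println(lastFinalized) // 7 — the MerkleTrieIndex the test expects
}
```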

View File

@@ -1516,6 +1516,7 @@ func TestInsertFinalizedDeposits(t *testing.T) {
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: gRoot[:]})
gs = gs.Copy()
assert.NoError(t, gs.SetEth1Data(&ethpb.Eth1Data{DepositCount: 10}))
assert.NoError(t, gs.SetEth1DepositIndex(8))
assert.NoError(t, service.cfg.StateGen.SaveState(ctx, [32]byte{'m', 'o', 'c', 'k'}, gs))
zeroSig := [96]byte{}
for i := uint64(0); i < uint64(4*params.BeaconConfig().SlotsPerEpoch); i++ {
@@ -1529,8 +1530,64 @@ func TestInsertFinalizedDeposits(t *testing.T) {
}
assert.NoError(t, service.insertFinalizedDeposits(ctx, [32]byte{'m', 'o', 'c', 'k'}))
fDeposits := depositCache.FinalizedDeposits(ctx)
assert.Equal(t, 9, int(fDeposits.MerkleTrieIndex), "Finalized deposits not inserted correctly")
deps := depositCache.AllDeposits(ctx, big.NewInt(109))
assert.Equal(t, 7, int(fDeposits.MerkleTrieIndex), "Finalized deposits not inserted correctly")
deps := depositCache.AllDeposits(ctx, big.NewInt(107))
for _, d := range deps {
assert.DeepEqual(t, [][]byte(nil), d.Proof, "Proofs are not empty")
}
}
func TestInsertFinalizedDeposits_MultipleFinalizedRoutines(t *testing.T) {
ctx := context.Background()
opts := testServiceOptsWithDB(t)
depositCache, err := depositcache.New()
require.NoError(t, err)
opts = append(opts, WithDepositCache(depositCache))
service, err := NewService(ctx, opts...)
require.NoError(t, err)
gs, _ := util.DeterministicGenesisState(t, 32)
require.NoError(t, service.saveGenesisData(ctx, gs))
gBlk, err := service.cfg.BeaconDB.GenesisBlock(ctx)
require.NoError(t, err)
gRoot, err := gBlk.Block().HashTreeRoot()
require.NoError(t, err)
service.store.SetFinalizedCheckpt(&ethpb.Checkpoint{Root: gRoot[:]})
gs = gs.Copy()
assert.NoError(t, gs.SetEth1Data(&ethpb.Eth1Data{DepositCount: 7}))
assert.NoError(t, gs.SetEth1DepositIndex(6))
assert.NoError(t, service.cfg.StateGen.SaveState(ctx, [32]byte{'m', 'o', 'c', 'k'}, gs))
gs2 := gs.Copy()
assert.NoError(t, gs2.SetEth1Data(&ethpb.Eth1Data{DepositCount: 15}))
assert.NoError(t, gs2.SetEth1DepositIndex(13))
assert.NoError(t, service.cfg.StateGen.SaveState(ctx, [32]byte{'m', 'o', 'c', 'k', '2'}, gs2))
zeroSig := [96]byte{}
for i := uint64(0); i < uint64(4*params.BeaconConfig().SlotsPerEpoch); i++ {
root := []byte(strconv.Itoa(int(i)))
assert.NoError(t, depositCache.InsertDeposit(ctx, &ethpb.Deposit{Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.FromBytes48([fieldparams.BLSPubkeyLength]byte{}),
WithdrawalCredentials: params.BeaconConfig().ZeroHash[:],
Amount: 0,
Signature: zeroSig[:],
}, Proof: [][]byte{root}}, 100+i, int64(i), bytesutil.ToBytes32(root)))
}
// Insert 3 deposits beforehand.
depositCache.InsertFinalizedDeposits(ctx, 2)
assert.NoError(t, service.insertFinalizedDeposits(ctx, [32]byte{'m', 'o', 'c', 'k'}))
fDeposits := depositCache.FinalizedDeposits(ctx)
assert.Equal(t, 5, int(fDeposits.MerkleTrieIndex), "Finalized deposits not inserted correctly")
deps := depositCache.AllDeposits(ctx, big.NewInt(105))
for _, d := range deps {
assert.DeepEqual(t, [][]byte(nil), d.Proof, "Proofs are not empty")
}
// Insert New Finalized State with higher deposit count.
assert.NoError(t, service.insertFinalizedDeposits(ctx, [32]byte{'m', 'o', 'c', 'k', '2'}))
fDeposits = depositCache.FinalizedDeposits(ctx)
assert.Equal(t, 12, int(fDeposits.MerkleTrieIndex), "Finalized deposits not inserted correctly")
deps = depositCache.AllDeposits(ctx, big.NewInt(112))
for _, d := range deps {
assert.DeepEqual(t, [][]byte(nil), d.Proof, "Proofs are not empty")
}

View File

@@ -83,7 +83,8 @@ func (s *Service) VerifyLmdFfgConsistency(ctx context.Context, a *ethpb.Attestat
func (s *Service) VerifyFinalizedConsistency(ctx context.Context, root []byte) error {
// A canonical root implies that the root has an ancestor that aligns with the finalized checkpoint.
// In this case, we could exit early to save on additional computation.
if s.cfg.ForkChoiceStore.IsCanonical(bytesutil.ToBytes32(root)) {
blockRoot := bytesutil.ToBytes32(root)
if s.cfg.ForkChoiceStore.HasNode(blockRoot) && s.cfg.ForkChoiceStore.IsCanonical(blockRoot) {
return nil
}

View File

@@ -440,11 +440,9 @@ func (s *Service) initializeBeaconChain(
s.cfg.ChainStartFetcher.ClearPreGenesisData()
// Update committee shuffled indices for genesis epoch.
log.Infof("UpdateCommitteeCache from initializeBeaconChain, slot=%d", genesisState.Slot())
if err := helpers.UpdateCommitteeCache(genesisState, 0 /* genesis epoch */); err != nil {
return nil, err
}
log.Info("UpdateProposerIndicesInCache from initializeBeaconChain")
if err := helpers.UpdateProposerIndicesInCache(ctx, genesisState); err != nil {
return nil, err
}

View File

@@ -36,7 +36,7 @@ type DepositFetcher interface {
DepositByPubkey(ctx context.Context, pubKey []byte) (*ethpb.Deposit, *big.Int)
DepositsNumberAndRootAtHeight(ctx context.Context, blockHeight *big.Int) (uint64, [32]byte)
FinalizedDeposits(ctx context.Context) *FinalizedDeposits
NonFinalizedDeposits(ctx context.Context, untilBlk *big.Int) []*ethpb.Deposit
NonFinalizedDeposits(ctx context.Context, lastFinalizedIndex int64, untilBlk *big.Int) []*ethpb.Deposit
}
// FinalizedDeposits stores the trie of deposits that have been included
@@ -137,6 +137,22 @@ func (dc *DepositCache) InsertFinalizedDeposits(ctx context.Context, eth1Deposit
depositTrie := dc.finalizedDeposits.Deposits
insertIndex := int(dc.finalizedDeposits.MerkleTrieIndex + 1)
// Don't insert into finalized trie if there is no deposit to
// insert.
if len(dc.deposits) == 0 {
return
}
// In the event we have fewer deposits than we need to
// finalize, we finalize up to the last index we do have.
if len(dc.deposits) <= int(eth1DepositIndex) {
eth1DepositIndex = int64(len(dc.deposits)) - 1
}
// If we finalize to some lower deposit index, we
// ignore it.
if int(eth1DepositIndex) < insertIndex {
return
}
for _, d := range dc.deposits {
if d.Index <= dc.finalizedDeposits.MerkleTrieIndex {
continue
@@ -246,7 +262,7 @@ func (dc *DepositCache) FinalizedDeposits(ctx context.Context) *FinalizedDeposit
// NonFinalizedDeposits returns the list of non-finalized deposits until the given block number (inclusive).
// If no block is specified then this method returns all non-finalized deposits.
func (dc *DepositCache) NonFinalizedDeposits(ctx context.Context, untilBlk *big.Int) []*ethpb.Deposit {
func (dc *DepositCache) NonFinalizedDeposits(ctx context.Context, lastFinalizedIndex int64, untilBlk *big.Int) []*ethpb.Deposit {
ctx, span := trace.StartSpan(ctx, "DepositsCache.NonFinalizedDeposits")
defer span.End()
dc.depositsLock.RLock()
@@ -256,10 +272,9 @@ func (dc *DepositCache) NonFinalizedDeposits(ctx context.Context, untilBlk *big.
return dc.allDeposits(untilBlk)
}
lastFinalizedDepositIndex := dc.finalizedDeposits.MerkleTrieIndex
var deposits []*ethpb.Deposit
for _, d := range dc.deposits {
if (d.Index > lastFinalizedDepositIndex) && (untilBlk == nil || untilBlk.Uint64() >= d.Eth1BlockHeight) {
if (d.Index > lastFinalizedIndex) && (untilBlk == nil || untilBlk.Uint64() >= d.Eth1BlockHeight) {
deposits = append(deposits, d.Deposit)
}
}
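The three new guards in `InsertFinalizedDeposits` condense into a small pure function. The sketch below is hypothetical, but it reproduces the expected values from the new `HandleZeroDeposits`, `HandleSmallerThanExpectedDeposits` and `HandleLowerEth1DepositIndex` tests further down:

```go
package main

import "fmt"

// clampFinalizedIndex condenses the new guards: no-op on an empty cache,
// clamp the requested index to the highest deposit actually held, and
// ignore requests below what is already finalized. It returns the
// resulting trie index and whether anything new was finalized.
func clampFinalizedIndex(numDeposits int, alreadyFinalized, requested int64) (int64, bool) {
	if numDeposits == 0 {
		return alreadyFinalized, false
	}
	if numDeposits <= int(requested) {
		requested = int64(numDeposits) - 1
	}
	if requested < alreadyFinalized+1 {
		return alreadyFinalized, false
	}
	return requested, true
}

func main() {
	fmt.Println(clampFinalizedIndex(0, -1, 2)) // -1 false: empty cache
	fmt.Println(clampFinalizedIndex(3, -1, 5)) // 2 true: clamped to last held deposit
	fmt.Println(clampFinalizedIndex(6, 5, 2))  // 5 false: lower index ignored
}
```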

View File

@@ -459,7 +459,7 @@ func TestFinalizedDeposits_UtilizesPreviouslyCachedDeposits(t *testing.T) {
Index: 1,
},
}
newFinalizedDeposit := ethpb.DepositContainer{
newFinalizedDeposit := &ethpb.DepositContainer{
Deposit: &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte{2}, 48),
@@ -471,17 +471,17 @@ func TestFinalizedDeposits_UtilizesPreviouslyCachedDeposits(t *testing.T) {
}
dc.deposits = oldFinalizedDeposits
dc.InsertFinalizedDeposits(context.Background(), 1)
// Artificially exclude old deposits so that they can only be retrieved from previously finalized deposits.
dc.deposits = []*ethpb.DepositContainer{&newFinalizedDeposit}
dc.InsertFinalizedDeposits(context.Background(), 2)
dc.deposits = append(dc.deposits, []*ethpb.DepositContainer{newFinalizedDeposit}...)
cachedDeposits := dc.FinalizedDeposits(context.Background())
require.NotNil(t, cachedDeposits, "Deposits not cached")
assert.Equal(t, int64(2), cachedDeposits.MerkleTrieIndex)
assert.Equal(t, int64(1), cachedDeposits.MerkleTrieIndex)
var deps [][]byte
for _, d := range append(oldFinalizedDeposits, &newFinalizedDeposit) {
for _, d := range oldFinalizedDeposits {
hash, err := d.Deposit.Data.HashTreeRoot()
require.NoError(t, err, "Could not hash deposit data")
deps = append(deps, hash[:])
@@ -491,6 +491,140 @@ func TestFinalizedDeposits_UtilizesPreviouslyCachedDeposits(t *testing.T) {
assert.Equal(t, trie.HashTreeRoot(), cachedDeposits.Deposits.HashTreeRoot())
}
func TestFinalizedDeposits_HandleZeroDeposits(t *testing.T) {
dc, err := New()
require.NoError(t, err)
dc.InsertFinalizedDeposits(context.Background(), 2)
cachedDeposits := dc.FinalizedDeposits(context.Background())
require.NotNil(t, cachedDeposits, "Deposits not cached")
assert.Equal(t, int64(-1), cachedDeposits.MerkleTrieIndex)
}
func TestFinalizedDeposits_HandleSmallerThanExpectedDeposits(t *testing.T) {
dc, err := New()
require.NoError(t, err)
finalizedDeposits := []*ethpb.DepositContainer{
{
Deposit: &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte{0}, 48),
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, 96),
},
},
Index: 0,
},
{
Deposit: &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte{1}, 48),
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, 96),
},
},
Index: 1,
},
{
Deposit: &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte{2}, 48),
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, 96),
},
},
Index: 2,
},
}
dc.deposits = finalizedDeposits
dc.InsertFinalizedDeposits(context.Background(), 5)
cachedDeposits := dc.FinalizedDeposits(context.Background())
require.NotNil(t, cachedDeposits, "Deposits not cached")
assert.Equal(t, int64(2), cachedDeposits.MerkleTrieIndex)
}
func TestFinalizedDeposits_HandleLowerEth1DepositIndex(t *testing.T) {
dc, err := New()
require.NoError(t, err)
finalizedDeposits := []*ethpb.DepositContainer{
{
Deposit: &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte{0}, 48),
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, 96),
},
},
Index: 0,
},
{
Deposit: &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte{1}, 48),
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, 96),
},
},
Index: 1,
},
{
Deposit: &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte{2}, 48),
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, 96),
},
},
Index: 2,
},
{
Deposit: &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte{3}, 48),
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, 96),
},
},
Index: 3,
},
{
Deposit: &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte{4}, 48),
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, 96),
},
},
Index: 4,
},
{
Deposit: &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte{5}, 48),
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, 96),
},
},
Index: 5,
},
}
dc.deposits = finalizedDeposits
dc.InsertFinalizedDeposits(context.Background(), 5)
// Reinsert finalized deposits with a lower index.
dc.InsertFinalizedDeposits(context.Background(), 2)
cachedDeposits := dc.FinalizedDeposits(context.Background())
require.NotNil(t, cachedDeposits, "Deposits not cached")
assert.Equal(t, int64(5), cachedDeposits.MerkleTrieIndex)
}
func TestFinalizedDeposits_InitializedCorrectly(t *testing.T) {
dc, err := New()
require.NoError(t, err)
@@ -554,7 +688,7 @@ func TestNonFinalizedDeposits_ReturnsAllNonFinalizedDeposits(t *testing.T) {
})
dc.InsertFinalizedDeposits(context.Background(), 1)
deps := dc.NonFinalizedDeposits(context.Background(), nil)
deps := dc.NonFinalizedDeposits(context.Background(), 1, nil)
assert.Equal(t, 2, len(deps))
}
@@ -611,10 +745,89 @@ func TestNonFinalizedDeposits_ReturnsNonFinalizedDepositsUpToBlockNumber(t *test
})
dc.InsertFinalizedDeposits(context.Background(), 1)
deps := dc.NonFinalizedDeposits(context.Background(), big.NewInt(10))
deps := dc.NonFinalizedDeposits(context.Background(), 1, big.NewInt(10))
assert.Equal(t, 1, len(deps))
}
func TestFinalizedDeposits_ReturnsTrieCorrectly(t *testing.T) {
dc, err := New()
require.NoError(t, err)
generateCtr := func(height uint64, index int64) *ethpb.DepositContainer {
return &ethpb.DepositContainer{
Eth1BlockHeight: height,
Deposit: &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte{uint8(index)}, 48),
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, 96),
},
},
Index: index,
}
}
finalizedDeposits := []*ethpb.DepositContainer{
generateCtr(10, 0),
generateCtr(11, 1),
generateCtr(12, 2),
generateCtr(12, 3),
generateCtr(13, 4),
generateCtr(13, 5),
generateCtr(13, 6),
generateCtr(14, 7),
}
dc.deposits = append(finalizedDeposits,
generateCtr(15, 8),
generateCtr(15, 9),
generateCtr(30, 10))
trieItems := make([][]byte, 0, len(dc.deposits))
for _, dep := range dc.allDeposits(big.NewInt(30)) {
depHash, err := dep.Data.HashTreeRoot()
assert.NoError(t, err)
trieItems = append(trieItems, depHash[:])
}
depositTrie, err := trie.GenerateTrieFromItems(trieItems, params.BeaconConfig().DepositContractTreeDepth)
assert.NoError(t, err)
// Perform this in a nonsensical ordering
dc.InsertFinalizedDeposits(context.Background(), 10)
dc.InsertFinalizedDeposits(context.Background(), 2)
dc.InsertFinalizedDeposits(context.Background(), 3)
dc.InsertFinalizedDeposits(context.Background(), 4)
// Mimic finalized deposit trie fetch.
fd := dc.FinalizedDeposits(context.Background())
deps := dc.NonFinalizedDeposits(context.Background(), fd.MerkleTrieIndex, big.NewInt(14))
insertIndex := fd.MerkleTrieIndex + 1
for _, dep := range deps {
depHash, err := dep.Data.HashTreeRoot()
assert.NoError(t, err)
if err = fd.Deposits.Insert(depHash[:], int(insertIndex)); err != nil {
assert.NoError(t, err)
}
insertIndex++
}
dc.InsertFinalizedDeposits(context.Background(), 15)
dc.InsertFinalizedDeposits(context.Background(), 15)
dc.InsertFinalizedDeposits(context.Background(), 14)
fd = dc.FinalizedDeposits(context.Background())
deps = dc.NonFinalizedDeposits(context.Background(), fd.MerkleTrieIndex, big.NewInt(30))
insertIndex = fd.MerkleTrieIndex + 1
for _, dep := range deps {
depHash, err := dep.Data.HashTreeRoot()
assert.NoError(t, err)
if err = fd.Deposits.Insert(depHash[:], int(insertIndex)); err != nil {
assert.NoError(t, err)
}
insertIndex++
}
assert.Equal(t, fd.Deposits.NumOfItems(), depositTrie.NumOfItems())
}
func TestPruneProofs_Ok(t *testing.T) {
dc, err := New()
require.NoError(t, err)

View File

@@ -6,7 +6,6 @@ import (
"bytes"
"context"
"fmt"
log "github.com/sirupsen/logrus"
"sort"
"github.com/pkg/errors"
@@ -287,44 +286,41 @@ func ShuffledIndices(s state.ReadOnlyBeaconState, epoch types.Epoch) ([]types.Va
// UpdateCommitteeCache gets called at the beginning of every epoch to cache the committee shuffled indices
// list with committee index and epoch number. It caches the shuffled indices for current epoch and next epoch.
func UpdateCommitteeCache(state state.ReadOnlyBeaconState, epoch types.Epoch) error {
//for _, e := range []types.Epoch{epoch, epoch + 1} {
seed, err := Seed(state, epoch, params.BeaconConfig().DomainBeaconAttester)
if err != nil {
return err
}
log.Infof("computed seed=%#x for slot=%d", seed, state.Slot())
if committeeCache.HasEntry(string(seed[:])) {
log.Infof("UpdateCommitteeCache: seed=%#x already in cache at slot=%d", seed, state.Slot())
return nil
}
for _, e := range []types.Epoch{epoch, epoch + 1} {
seed, err := Seed(state, e, params.BeaconConfig().DomainBeaconAttester)
if err != nil {
return err
}
if committeeCache.HasEntry(string(seed[:])) {
return nil
}
shuffledIndices, err := ShuffledIndices(state, epoch)
if err != nil {
return err
shuffledIndices, err := ShuffledIndices(state, e)
if err != nil {
return err
}
count := SlotCommitteeCount(uint64(len(shuffledIndices)))
// Store the sorted indices as well as shuffled indices. In current spec,
// sorted indices is required to retrieve proposer index. This is also
// used for failing verify signature fallback.
sortedIndices := make([]types.ValidatorIndex, len(shuffledIndices))
copy(sortedIndices, shuffledIndices)
sort.Slice(sortedIndices, func(i, j int) bool {
return sortedIndices[i] < sortedIndices[j]
})
if err := committeeCache.AddCommitteeShuffledList(&cache.Committees{
ShuffledIndices: shuffledIndices,
CommitteeCount: uint64(params.BeaconConfig().SlotsPerEpoch.Mul(count)),
Seed: seed,
SortedIndices: sortedIndices,
}); err != nil {
return err
}
}
count := SlotCommitteeCount(uint64(len(shuffledIndices)))
// Store the sorted indices as well as shuffled indices. In current spec,
// sorted indices is required to retrieve proposer index. This is also
// used for failing verify signature fallback.
sortedIndices := make([]types.ValidatorIndex, len(shuffledIndices))
copy(sortedIndices, shuffledIndices)
sort.Slice(sortedIndices, func(i, j int) bool {
return sortedIndices[i] < sortedIndices[j]
})
log.Infof("UpdateCommitteeCache: epoch=%d, state.slot=%d, indices=%v, seed=%#x", epoch, state.Slot(), sortedIndices, seed)
if err := committeeCache.AddCommitteeShuffledList(&cache.Committees{
ShuffledIndices: shuffledIndices,
CommitteeCount: uint64(params.BeaconConfig().SlotsPerEpoch.Mul(count)),
Seed: seed,
SortedIndices: sortedIndices,
}); err != nil {
return err
}
//}
return nil
}
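A reduced sketch of the new two-epoch warm-up; the `committeeCacher` interface and the `seedFor`/`shuffle` callbacks are hypothetical stand-ins for `committeeCache`, `Seed` and `ShuffledIndices`:

```go
package cache

type epoch uint64

type committeeCacher interface {
	HasEntry(seed string) bool
	Add(seed string, shuffled []uint64) error
}

func updateCommitteeCache(cc committeeCacher, seedFor func(epoch) ([32]byte, error),
	shuffle func(epoch) ([]uint64, error), e epoch) error {
	for _, ep := range []epoch{e, e + 1} { // warm current and next epoch
		seed, err := seedFor(ep)
		if err != nil {
			return err
		}
		// As in the new code, a hit returns from the whole function: the
		// next-epoch entry was already written at the previous boundary.
		if cc.HasEntry(string(seed[:])) {
			return nil
		}
		shuffled, err := shuffle(ep)
		if err != nil {
			return err
		}
		if err := cc.Add(string(seed[:]), shuffled); err != nil {
			return err
		}
	}
	return nil
}
```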
@@ -367,7 +363,6 @@ func UpdateProposerIndicesInCache(ctx context.Context, state state.ReadOnlyBeaco
if err != nil {
return err
}
log.Infof("UpdateProposerIndicesInCache: state.slot=%d, slot=%d, root=%#x, indices=%v", state.Slot(), s, r, indices)
return proposerIndicesCache.AddProposerIndices(&cache.ProposerIndices{
BlockRoot: bytesutil.ToBytes32(r),
ProposerIndices: proposerIndices,

View File

@@ -7,7 +7,6 @@ import (
"github.com/prysmaticlabs/prysm/crypto/bls"
"github.com/prysmaticlabs/prysm/crypto/hash"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
log "github.com/sirupsen/logrus"
)
// Seed returns the randao seed used for shuffling of a given epoch.
@@ -34,8 +33,6 @@ func Seed(state state.ReadOnlyBeaconState, epoch types.Epoch, domain [bls.Domain
seed32 := hash.Hash(seed)
log.Infof("seed computation params for slot=%d: domain=%#x, epoch=%d, randaoMix=%#x", state.Slot(), domain, epoch, randaoMix)
return seed32, nil
}

View File

@@ -3,8 +3,6 @@ package helpers
import (
"bytes"
"context"
"fmt"
"github.com/prysmaticlabs/prysm/time/slots"
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
@@ -17,6 +15,7 @@ import (
"github.com/prysmaticlabs/prysm/crypto/hash"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/time/slots"
log "github.com/sirupsen/logrus"
)
@@ -89,27 +88,11 @@ func ActiveValidatorIndices(ctx context.Context, s state.ReadOnlyBeaconState, ep
if err != nil {
return nil, errors.Wrap(err, "could not get seed")
}
var ci []types.ValidatorIndex
if s.Slot() == 78 {
if err := s.ReadFromEveryValidator(func(idx int, val state.ReadOnlyValidator) error {
if IsActiveValidatorUsingTrie(val, epoch) {
ci = append(ci, types.ValidatorIndex(idx))
}
return nil
}); err != nil {
log.Errorf("got error doing double-check validator index computation=%v", err)
return nil, err
}
}
activeIndices, err := committeeCache.ActiveIndices(ctx, seed)
if err != nil {
return nil, errors.Wrap(err, "could not interface with committee cache")
}
if activeIndices != nil {
if s.Slot() == 78 {
log.Infof("double check indices for 78, len=%d, low=%d, high=%d", len(ci), ci[0], ci[len(ci)-1])
}
log.Infof("found indices in cache for slot=%d, len=%d, low=%d, high=%d", s.Slot(), len(activeIndices), activeIndices[0], activeIndices[len(activeIndices)-1])
return activeIndices, nil
}
@@ -123,7 +106,6 @@ func ActiveValidatorIndices(ctx context.Context, s state.ReadOnlyBeaconState, ep
return nil, errors.New("nil active indices")
}
CommitteeCacheInProgressHit.Inc()
log.Infof("found indices in in-progress cache for slot=%d, len=%d, low=%d, high=%d", s.Slot(), len(activeIndices), activeIndices[0], activeIndices[len(activeIndices)-1])
return activeIndices, nil
}
return nil, errors.Wrap(err, "could not mark committee cache as in progress")
@@ -144,16 +126,9 @@ func ActiveValidatorIndices(ctx context.Context, s state.ReadOnlyBeaconState, ep
return nil, err
}
log.Infof("computed indices slot=%d, len=%d, low=%d, high=%d", s.Slot(), len(indices), indices[0], indices[len(indices)-1])
log.Infof("UpdateCommitteeCache from ActiveValidatorIndices, slot=%d", s.Slot())
if err := UpdateCommitteeCache(s, epoch); err != nil {
return nil, errors.Wrap(err, "could not update committee cache")
}
/*
if err := UpdateProposerIndicesInCache(ctx, s); err != nil {
return nil, errors.Wrap(err, "failed to update proposer indices cache in ActiveValidatorIndices")
}
*/
return indices, nil
}
@@ -200,7 +175,6 @@ func ActiveValidatorCount(ctx context.Context, s state.ReadOnlyBeaconState, epoc
return 0, err
}
log.Infof("UpdateCommitteeCache from ActiveValidatorCount, slot=%d", s.Slot())
if err := UpdateCommitteeCache(s, epoch); err != nil {
return 0, errors.Wrap(err, "could not update committee cache")
}
@@ -275,7 +249,6 @@ func BeaconProposerIndex(ctx context.Context, state state.ReadOnlyBeaconState) (
}
return proposerIndices[state.Slot()%params.BeaconConfig().SlotsPerEpoch], nil
}
log.Info("UpdateProposerIndicesInCache from BeaconProposerIndex")
if err := UpdateProposerIndicesInCache(ctx, state); err != nil {
return 0, errors.Wrap(err, "could not update committee cache")
}
@@ -286,19 +259,15 @@ func BeaconProposerIndex(ctx context.Context, state state.ReadOnlyBeaconState) (
if err != nil {
return 0, errors.Wrap(err, "could not generate seed")
}
fmt.Printf("BeaconProposerIndex:seed=%#x", seed)
seedWithSlot := append(seed[:], bytesutil.Bytes8(uint64(state.Slot()))...)
fmt.Printf("BeaconProposerIndex:seedWithSlot=%#x", seed)
seedWithSlotHash := hash.Hash(seedWithSlot)
fmt.Printf("BeaconProposerIndex:seedWithSlotHash=%#x", seed)
indices, err := ActiveValidatorIndices(ctx, state, e)
if err != nil {
return 0, errors.Wrap(err, "could not get active indices")
}
log.Infof("validator index length=%d, low=%d, high=%d", len(indices), indices[0], indices[len(indices)-1])
return ComputeProposerIndex(state, indices, seedWithSlotHash)
}

View File

@@ -138,7 +138,6 @@ func ProcessSlotsUsingNextSlotCache(
ctx, span := trace.StartSpan(ctx, "core.state.ProcessSlotsUsingNextSlotCache")
defer span.End()
/*
// Check whether the parent state has been advanced by 1 slot in next slot cache.
nextSlotState, err := NextSlotState(ctx, parentRoot)
if err != nil {
@@ -149,11 +148,6 @@ func ProcessSlotsUsingNextSlotCache(
// We replace next slot state with parent state.
if cachedStateExists {
parentState = nextSlotState
root, err := parentState.HashTreeRoot(ctx)
if err != nil {
log.Errorf("weird, got an error calling HTR for the state=%v where root should be=%#x", err, parentRoot)
}
log.Infof("found state in NextSlotCache at slot=%d with root=%#x (parentRoot=%#x)", parentState.Slot(), root, parentRoot)
}
// In the event our cached state has advanced our
@@ -161,12 +155,9 @@ func ProcessSlotsUsingNextSlotCache(
if cachedStateExists && parentState.Slot() == slot {
return parentState, nil
}
*/
log.Infof("process_slots being called up to slot=%d where state.slot=%d", slot, parentState.Slot())
// Since next slot cache only advances state by 1 slot,
// we check if there's more slots that need to process.
parentState, err := ProcessSlots(ctx, parentState, slot)
parentState, err = ProcessSlots(ctx, parentState, slot)
if err != nil {
return nil, errors.Wrap(err, "could not process slots")
}
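
Context for the hunk above: the next-slot cache can only hold a state advanced exactly one slot past its parent, which is why ProcessSlots is still called afterwards to cover any remaining distance. A rough sketch of that fallthrough, with a hypothetical State type standing in for the beacon state:

type State struct{ Slot uint64 }

func ProcessSlots(s *State, target uint64) (*State, error) {
	for s.Slot < target {
		s.Slot++ // real per-slot processing elided
	}
	return s, nil
}

func processWithCache(parent, cached *State, haveCached bool, target uint64) (*State, error) {
	st := parent
	if haveCached {
		st = cached // cached state sits at parent.Slot + 1
	}
	if st.Slot == target {
		return st, nil // the single cached slot covered the whole distance
	}
	return ProcessSlots(st, target) // advance any remaining slots
}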

View File

@@ -282,11 +282,7 @@ func ProcessBlockForStateRoot(
state, err = b.ProcessBlockHeaderNoVerify(ctx, state, blk.Slot(), blk.ProposerIndex(), blk.ParentRoot(), bodyRoot[:])
if err != nil {
tracing.AnnotateError(span, err)
r, err := signed.Block().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "could not process block header, also failed to compute its htr")
}
return nil, errors.Wrapf(err, "could not process block header, state slot=%d, root=%#x", state.Slot(), r)
return nil, errors.Wrap(err, "could not process block header")
}
enabled, err := b.IsExecutionEnabled(state, blk.Body())

View File

@@ -105,7 +105,6 @@ type HeadAccessDatabase interface {
// initialization method needed for origin checkpoint sync
SaveOrigin(ctx context.Context, serState, serBlock []byte) error
SaveOriginCheckpointBlockRoot(ctx context.Context, blockRoot [32]byte) error
SaveBackfillBlockRoot(ctx context.Context, blockRoot [32]byte) error
}

View File

@@ -23,7 +23,7 @@ var previousFinalizedCheckpointKey = []byte("previous-finalized-checkpoint")
var containerFinalizedButNotCanonical = []byte("recent block needs reindexing to determine canonical")
// The finalized block roots index tracks beacon blocks which are finalized in the canonical chain.
// The finalized checkpoint contains the epoch which was finalized and the highest beacon block
// root where block.slot <= start_slot(epoch). As a result, we cannot index the finalized canonical
// beacon block chain using the finalized root alone as this would exclude all other blocks in the
// finalized epoch from being indexed as "final and canonical".
@@ -75,7 +75,7 @@ func (s *Store) updateFinalizedBlockRoots(ctx context.Context, tx *bolt.Tx, chec
// Walk up the ancestry chain until we reach a block root present in the finalized block roots
// index bucket or genesis block root.
for {
if bytes.Equal(root, genesisRoot) {
if bytes.Equal(root, genesisRoot) || bytes.Equal(root, initCheckpointRoot) {
break
}
@@ -105,12 +105,6 @@ func (s *Store) updateFinalizedBlockRoots(ctx context.Context, tx *bolt.Tx, chec
return err
}
// breaking here allows the initial checkpoint root to be correctly inserted,
// but stops the loop from trying to search for its parent.
if bytes.Equal(root, initCheckpointRoot) {
break
}
// Found parent, loop exit condition.
if parentBytes := bkt.Get(block.ParentRoot()); parentBytes != nil {
parent := &ethpb.FinalizedBlockRootContainer{}
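
After this change the ancestry walk above has its exit conditions consolidated: stop at genesis, at the checkpoint-sync origin root, or at the first ancestor already present in the finalized index. A sketch with in-memory maps standing in for the bolt bucket (hypothetical types, not the DB code itself):

func walkUnindexedAncestry(
	root, genesis, origin [32]byte,
	parentOf map[[32]byte][32]byte, // block root -> parent root
	indexed map[[32]byte]bool, // roots already in the finalized index
) [][32]byte {
	var path [][32]byte
	for {
		if root == genesis || root == origin {
			break // origin is now handled alongside genesis
		}
		path = append(path, root)
		parent := parentOf[root]
		if indexed[parent] {
			break // found an indexed parent: loop exit condition
		}
		root = parent
	}
	return path
}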

View File

@@ -42,8 +42,4 @@ func TestSaveOrigin(t *testing.T) {
cbb, err := scb.MarshalSSZ()
require.NoError(t, err)
require.NoError(t, db.SaveOrigin(ctx, csb, cbb))
broot, err := scb.Block().HashTreeRoot()
require.NoError(t, err)
require.Equal(t, true, db.IsFinalizedBlock(ctx, broot))
}

View File

@@ -47,14 +47,18 @@ type Config struct {
// into the beacon chain database and running services at start up. This service should not be used in production
// as it does not have any value other than ease of use for testing purposes.
func NewService(ctx context.Context, cfg *Config) *Service {
log.Warn("Saving generated genesis state in database for interop testing")
ctx, cancel := context.WithCancel(ctx)
s := &Service{
return &Service{
cfg: cfg,
ctx: ctx,
cancel: cancel,
}
}
// Start initializes the genesis state from configured flags.
func (s *Service) Start() {
log.Warn("Saving generated genesis state in database for interop testing")
if s.cfg.GenesisPath != "" {
data, err := ioutil.ReadFile(s.cfg.GenesisPath)
@@ -69,14 +73,14 @@ func NewService(ctx context.Context, cfg *Config) *Service {
if err != nil {
log.Fatalf("Could not get state trie: %v", err)
}
if err := s.saveGenesisState(ctx, genesisTrie); err != nil {
if err := s.saveGenesisState(s.ctx, genesisTrie); err != nil {
log.Fatalf("Could not save interop genesis state %v", err)
}
return s
return
}
// Save genesis state in db
genesisState, _, err := interop.GenerateGenesisState(ctx, s.cfg.GenesisTime, s.cfg.NumValidators)
genesisState, _, err := interop.GenerateGenesisState(s.ctx, s.cfg.GenesisTime, s.cfg.NumValidators)
if err != nil {
log.Fatalf("Could not generate interop genesis state: %v", err)
}
@@ -92,17 +96,11 @@ func NewService(ctx context.Context, cfg *Config) *Service {
if err != nil {
log.Fatalf("Could not hash tree root genesis state: %v", err)
}
go slots.CountdownToGenesis(ctx, time.Unix(int64(s.cfg.GenesisTime), 0), s.cfg.NumValidators, gRoot)
go slots.CountdownToGenesis(s.ctx, time.Unix(int64(s.cfg.GenesisTime), 0), s.cfg.NumValidators, gRoot)
if err := s.saveGenesisState(ctx, genesisTrie); err != nil {
if err := s.saveGenesisState(s.ctx, genesisTrie); err != nil {
log.Fatalf("Could not save interop genesis state %v", err)
}
return s
}
// Start initializes the genesis state from configured flags.
func (_ *Service) Start() {
}
// Stop does nothing.
@@ -155,7 +153,7 @@ func (_ *Service) FinalizedDeposits(_ context.Context) *depositcache.FinalizedDe
}
// NonFinalizedDeposits mocks out the deposit cache functionality for interop.
func (_ *Service) NonFinalizedDeposits(_ context.Context, _ *big.Int) []*ethpb.Deposit {
func (_ *Service) NonFinalizedDeposits(_ context.Context, _ int64, _ *big.Int) []*ethpb.Deposit {
return []*ethpb.Deposit{}
}
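
The refactor in this file moves all work out of the constructor: NewService only wires config and context, while reading the genesis file, saving state, and spawning the countdown goroutine now live in Start. A bare-bones sketch of the pattern, with hypothetical trimmed-down fields:

package interop // hypothetical; mirrors the deterministic-genesis service

import "context"

type Config struct{ GenesisPath string } // trimmed stand-in

type Service struct {
	ctx    context.Context
	cancel context.CancelFunc
	cfg    *Config
}

// NewService stays cheap and side-effect free: it only wires dependencies.
func NewService(ctx context.Context, cfg *Config) *Service {
	ctx, cancel := context.WithCancel(ctx)
	return &Service{ctx: ctx, cancel: cancel, cfg: cfg}
}

// Start owns the side effects (reading the genesis file, saving state,
// spawning the countdown goroutine), so callers control when they run.
func (s *Service) Start() {}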

View File

@@ -51,10 +51,4 @@ var (
Help: "The number of times pruning happened.",
},
)
validatedCount = promauto.NewGauge(
prometheus.GaugeOpts{
Name: "doublylinkedtree_validated_count",
Help: "The number of blocks that have been fully validated.",
},
)
)

View File

@@ -120,7 +120,6 @@ func (n *Node) setNodeAndParentValidated(ctx context.Context) error {
}
n.optimistic = false
validatedCount.Inc()
return n.parent.setNodeAndParentValidated(ctx)
}

View File

@@ -51,10 +51,4 @@ var (
Help: "The number of times pruning happened.",
},
)
validatedNodesCount = promauto.NewCounter(
prometheus.CounterOpts{
Name: "proto_array_validated_nodes_count",
Help: "The number of nodes that have been fully validated.",
},
)
)

View File

@@ -43,7 +43,6 @@ func (f *ForkChoice) SetOptimisticToValid(ctx context.Context, root [32]byte) er
if index == NonExistentNode {
break
}
validatedNodesCount.Inc()
}
return nil
}

View File

@@ -614,16 +614,22 @@ func (s *Store) prune(ctx context.Context, finalizedRoot [32]byte) error {
node := copyNode(s.nodes[idx])
parentIdx, ok := canonicalNodesMap[node.parent]
if ok {
s.nodesIndices[node.root] = uint64(len(canonicalNodes))
canonicalNodesMap[idx] = uint64(len(canonicalNodes))
currentIndex := uint64(len(canonicalNodes))
s.nodesIndices[node.root] = currentIndex
s.payloadIndices[node.payloadHash] = currentIndex
canonicalNodesMap[idx] = currentIndex
node.parent = parentIdx
canonicalNodes = append(canonicalNodes, node)
} else {
// Remove node and synced tip that is not part of finalized branch.
// Remove node that is not part of finalized branch.
delete(s.nodesIndices, node.root)
delete(s.canonicalNodes, node.root)
delete(s.payloadIndices, node.payloadHash)
}
}
s.nodesIndices[finalizedRoot] = uint64(0)
s.canonicalNodes[finalizedRoot] = true
s.payloadIndices[finalizedNode.payloadHash] = uint64(0)
// Recompute the best child and descendant for each canonical nodes.
for _, node := range canonicalNodes {

View File

@@ -375,7 +375,7 @@ func TestStore_Prune_MoreThanThreshold(t *testing.T) {
parent: uint64(numOfNodes - 2),
})
indices[indexToHash(uint64(numOfNodes-1))] = uint64(numOfNodes - 1)
s := &Store{nodes: nodes, nodesIndices: indices}
s := &Store{nodes: nodes, nodesIndices: indices, canonicalNodes: map[[32]byte]bool{}, payloadIndices: map[[32]byte]uint64{}}
// Finalized root is at index 99 so everything before 99 should be pruned.
require.NoError(t, s.prune(context.Background(), indexToHash(99)))
@@ -413,7 +413,7 @@ func TestStore_Prune_MoreThanOnce(t *testing.T) {
parent: uint64(numOfNodes - 2),
})
s := &Store{nodes: nodes, nodesIndices: indices}
s := &Store{nodes: nodes, nodesIndices: indices, canonicalNodes: map[[32]byte]bool{}, payloadIndices: map[[32]byte]uint64{}}
// Finalized root is at index 11 so everything before 11 should be pruned.
require.NoError(t, s.prune(context.Background(), indexToHash(10)))
@@ -441,6 +441,7 @@ func TestStore_Prune_NoDanglingBranch(t *testing.T) {
bestDescendant: 1,
root: indexToHash(uint64(0)),
parent: NonExistentNode,
payloadHash: [32]byte{'A'},
},
{
slot: 101,
@@ -448,6 +449,7 @@ func TestStore_Prune_NoDanglingBranch(t *testing.T) {
bestChild: NonExistentNode,
bestDescendant: NonExistentNode,
parent: 0,
payloadHash: [32]byte{'B'},
},
{
slot: 101,
@@ -455,6 +457,7 @@ func TestStore_Prune_NoDanglingBranch(t *testing.T) {
parent: 0,
bestChild: NonExistentNode,
bestDescendant: NonExistentNode,
payloadHash: [32]byte{'C'},
},
}
s := &Store{
@@ -465,9 +468,22 @@ func TestStore_Prune_NoDanglingBranch(t *testing.T) {
indexToHash(uint64(1)): 1,
indexToHash(uint64(2)): 2,
},
canonicalNodes: map[[32]byte]bool{
indexToHash(uint64(0)): true,
indexToHash(uint64(1)): true,
indexToHash(uint64(2)): true,
},
payloadIndices: map[[32]byte]uint64{
[32]byte{'A'}: 0,
[32]byte{'B'}: 1,
[32]byte{'C'}: 2,
},
}
require.NoError(t, s.prune(context.Background(), indexToHash(uint64(1))))
require.Equal(t, len(s.nodes), 1)
require.Equal(t, 1, len(s.nodes))
require.Equal(t, 1, len(s.nodesIndices))
require.Equal(t, 1, len(s.canonicalNodes))
require.Equal(t, 1, len(s.payloadIndices))
}
// This test starts with the following branching diagram
@@ -482,25 +498,74 @@ func TestStore_Prune_NoDanglingBranch(t *testing.T) {
// J -- K -- L
//
//
func TestStore_PruneSyncedTips(t *testing.T) {
func TestStore_PruneBranched(t *testing.T) {
ctx := context.Background()
f := setup(1, 1)
require.NoError(t, f.InsertOptimisticBlock(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 102, [32]byte{'j'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 103, [32]byte{'d'}, [32]byte{'c'}, params.BeaconConfig().ZeroHash, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 104, [32]byte{'e'}, [32]byte{'d'}, params.BeaconConfig().ZeroHash, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 104, [32]byte{'g'}, [32]byte{'d'}, params.BeaconConfig().ZeroHash, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 105, [32]byte{'f'}, [32]byte{'e'}, params.BeaconConfig().ZeroHash, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 105, [32]byte{'h'}, [32]byte{'g'}, params.BeaconConfig().ZeroHash, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 105, [32]byte{'k'}, [32]byte{'g'}, params.BeaconConfig().ZeroHash, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 106, [32]byte{'i'}, [32]byte{'h'}, params.BeaconConfig().ZeroHash, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 106, [32]byte{'l'}, [32]byte{'k'}, params.BeaconConfig().ZeroHash, 1, 1))
f.store.pruneThreshold = 0
require.NoError(t, f.Prune(ctx, [32]byte{'f'}))
require.Equal(t, 1, f.NodeCount())
tests := []struct {
finalizedRoot [32]byte
wantedCanonical [32]byte
wantedNonCanonical [32]byte
canonicalCount int
payloadHash [32]byte
payloadIndex uint64
nonExistentPayload [32]byte
}{
{
[32]byte{'f'},
[32]byte{'f'},
[32]byte{'a'},
1,
[32]byte{'F'},
0,
[32]byte{'H'},
},
{
[32]byte{'d'},
[32]byte{'e'},
[32]byte{'i'},
3,
[32]byte{'E'},
1,
[32]byte{'C'},
},
{
[32]byte{'b'},
[32]byte{'f'},
[32]byte{'h'},
5,
[32]byte{'D'},
3,
[32]byte{'A'},
},
}
for _, tc := range tests {
f := setup(1, 1)
require.NoError(t, f.InsertOptimisticBlock(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{'A'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 102, [32]byte{'j'}, [32]byte{'b'}, [32]byte{'J'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 103, [32]byte{'d'}, [32]byte{'c'}, [32]byte{'D'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 104, [32]byte{'e'}, [32]byte{'d'}, [32]byte{'E'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 104, [32]byte{'g'}, [32]byte{'d'}, [32]byte{'G'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 105, [32]byte{'f'}, [32]byte{'e'}, [32]byte{'F'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 105, [32]byte{'h'}, [32]byte{'g'}, [32]byte{'H'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 105, [32]byte{'k'}, [32]byte{'g'}, [32]byte{'K'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 106, [32]byte{'i'}, [32]byte{'h'}, [32]byte{'I'}, 1, 1))
require.NoError(t, f.InsertOptimisticBlock(ctx, 106, [32]byte{'l'}, [32]byte{'k'}, [32]byte{'L'}, 1, 1))
f.store.pruneThreshold = 0
require.NoError(t, f.store.updateCanonicalNodes(ctx, [32]byte{'f'}))
require.Equal(t, true, f.IsCanonical([32]byte{'a'}))
require.Equal(t, true, f.IsCanonical([32]byte{'f'}))
require.NoError(t, f.Prune(ctx, tc.finalizedRoot))
require.Equal(t, tc.canonicalCount, len(f.store.canonicalNodes))
require.Equal(t, true, f.IsCanonical(tc.wantedCanonical))
require.Equal(t, false, f.IsCanonical(tc.wantedNonCanonical))
require.Equal(t, tc.payloadIndex, f.store.payloadIndices[tc.payloadHash])
_, ok := f.store.payloadIndices[tc.nonExistentPayload]
require.Equal(t, false, ok)
}
}
func TestStore_LeadsToViableHead(t *testing.T) {

View File

@@ -95,6 +95,7 @@ func NewService(ctx context.Context, config *ValidatorMonitorConfig, tracked []t
latestPerformance: make(map[types.ValidatorIndex]ValidatorLatestPerformance),
aggregatedPerformance: make(map[types.ValidatorIndex]ValidatorAggregatedPerformance),
trackedSyncCommitteeIndices: make(map[types.ValidatorIndex][]types.CommitteeIndex),
isLogging: false,
}
for _, idx := range tracked {
r.TrackedValidators[idx] = true
@@ -117,7 +118,6 @@ func (s *Service) Start() {
"ValidatorIndices": tracked,
}).Info("Starting service")
s.isLogging = false
stateChannel := make(chan *feed.Event, 1)
stateSub := s.config.StateNotifier.StateFeed().Subscribe(stateChannel)

View File

@@ -906,6 +906,7 @@ func (b *BeaconNode) registerDeterminsticGenesisService() error {
DepositCache: b.depositCache,
GenesisPath: genesisStatePath,
})
svc.Start()
// Register genesis state as start-up state when interop mode.
// The start-up state gets reused across services.

View File

@@ -3,7 +3,6 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"auth.go",
"block_cache.go",
"block_reader.go",
"check_transition_config.go",
@@ -16,6 +15,7 @@ go_library(
"options.go",
"prometheus.go",
"provider.go",
"rpc_connection.go",
"service.go",
],
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/powchain",
@@ -59,7 +59,6 @@ go_library(
"@com_github_ethereum_go_ethereum//core/types:go_default_library",
"@com_github_ethereum_go_ethereum//ethclient:go_default_library",
"@com_github_ethereum_go_ethereum//rpc:go_default_library",
"@com_github_golang_jwt_jwt_v4//:go_default_library",
"@com_github_holiman_uint256//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
@@ -74,7 +73,6 @@ go_test(
name = "go_default_test",
size = "medium",
srcs = [
"auth_test.go",
"block_cache_test.go",
"block_reader_test.go",
"check_transition_config_test.go",
@@ -125,7 +123,6 @@ go_test(
"@com_github_ethereum_go_ethereum//core/types:go_default_library",
"@com_github_ethereum_go_ethereum//rpc:go_default_library",
"@com_github_ethereum_go_ethereum//trie:go_default_library",
"@com_github_golang_jwt_jwt_v4//:go_default_library",
"@com_github_holiman_uint256//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",

View File

@@ -13,6 +13,7 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/network"
pb "github.com/prysmaticlabs/prysm/proto/engine/v1"
"github.com/sirupsen/logrus"
)
@@ -81,7 +82,7 @@ func (s *Service) checkTransitionConfiguration(
return
}
case tm := <-ticker.C:
ctx, cancel := context.WithDeadline(ctx, tm.Add(DefaultRPCHTTPTimeout))
ctx, cancel := context.WithDeadline(ctx, tm.Add(network.DefaultRPCHTTPTimeout))
err = s.ExchangeTransitionConfiguration(ctx, cfg)
s.handleExchangeConfigurationError(err)
if !hasTtdReached {
@@ -119,11 +120,16 @@ func (s *Service) handleExchangeConfigurationError(err error) {
// Logs the terminal total difficulty status.
func (s *Service) logTtdStatus(ctx context.Context, ttd *uint256.Int) (bool, error) {
latest, err := s.LatestExecutionBlock(ctx)
if err != nil {
switch {
case errors.Is(err, hexutil.ErrEmptyString):
return false, nil
case err != nil:
return false, err
}
if latest == nil {
case latest == nil:
return false, errors.New("latest block is nil")
case latest.TotalDifficulty == "":
return false, nil
default:
}
latestTtd, err := hexutil.DecodeBig(latest.TotalDifficulty)
if err != nil {
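
The switch above separates "not synced yet" from real failures: an empty total-difficulty string or a nil block means the TTD simply cannot be evaluated yet, so the caller gets (false, nil) rather than an error. A condensed, self-contained sketch of that tri-state handling (stdlib errors; simplified inputs):

package main

import (
	"errors"
	"fmt"

	"github.com/ethereum/go-ethereum/common/hexutil"
)

func ttdEvaluable(fetchErr error, blockIsNil bool, td string) (bool, error) {
	switch {
	case errors.Is(fetchErr, hexutil.ErrEmptyString):
		return false, nil // client responded but has no block to report yet
	case fetchErr != nil:
		return false, fetchErr // genuine transport/RPC failure
	case blockIsNil:
		return false, errors.New("latest block is nil")
	case td == "":
		return false, nil // no total difficulty reported yet
	}
	return true, nil // safe to decode td and compare against the TTD
}

func main() {
	ok, err := ttdEvaluable(hexutil.ErrEmptyString, false, "")
	fmt.Println(ok, err) // false <nil>: treated as "not synced", not an error
}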

View File

@@ -190,6 +190,38 @@ func TestService_logTtdStatus(t *testing.T) {
require.Equal(t, false, reached)
}
func TestService_logTtdStatus_NotSyncedClient(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
resp := (*pb.ExecutionBlock)(nil) // Nil response when a client is not synced
respJSON := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": resp,
}
require.NoError(t, json.NewEncoder(w).Encode(respJSON))
}))
defer srv.Close()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
defer rpcClient.Close()
service := &Service{
cfg: &config{},
}
service.rpcClient = rpcClient
ttd := new(uint256.Int)
reached, err := service.logTtdStatus(context.Background(), ttd.SetUint64(24343))
require.NoError(t, err)
require.Equal(t, false, reached)
}
func emptyPayload() *pb.ExecutionPayload {
return &pb.ExecutionPayload{
ParentHash: make([]byte, fieldparams.RootLength),

View File

@@ -10,9 +10,12 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/rpc"
"github.com/holiman/uint256"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
pb "github.com/prysmaticlabs/prysm/proto/engine/v1"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
@@ -49,8 +52,8 @@ type EngineCaller interface {
ExchangeTransitionConfiguration(
ctx context.Context, cfg *pb.TransitionConfiguration,
) error
LatestExecutionBlock(ctx context.Context) (*pb.ExecutionBlock, error)
ExecutionBlockByHash(ctx context.Context, hash common.Hash) (*pb.ExecutionBlock, error)
GetTerminalBlockHash(ctx context.Context) ([]byte, bool, error)
}
// NewPayload calls the engine_newPayloadV1 method via JSON-RPC.
@@ -174,6 +177,78 @@ func (s *Service) ExchangeTransitionConfiguration(
return nil
}
// GetTerminalBlockHash returns the valid terminal block hash based on total difficulty.
//
// Spec code:
// def get_pow_block_at_terminal_total_difficulty(pow_chain: Dict[Hash32, PowBlock]) -> Optional[PowBlock]:
// # `pow_chain` abstractly represents all blocks in the PoW chain
// for block in pow_chain:
// parent = pow_chain[block.parent_hash]
// block_reached_ttd = block.total_difficulty >= TERMINAL_TOTAL_DIFFICULTY
// parent_reached_ttd = parent.total_difficulty >= TERMINAL_TOTAL_DIFFICULTY
// if block_reached_ttd and not parent_reached_ttd:
// return block
//
// return None
func (s *Service) GetTerminalBlockHash(ctx context.Context) ([]byte, bool, error) {
ttd := new(big.Int)
ttd.SetString(params.BeaconConfig().TerminalTotalDifficulty, 10)
terminalTotalDifficulty, overflows := uint256.FromBig(ttd)
if overflows {
return nil, false, errors.New("could not convert terminal total difficulty to uint256")
}
blk, err := s.LatestExecutionBlock(ctx)
if err != nil {
return nil, false, errors.Wrap(err, "could not get latest execution block")
}
if blk == nil {
return nil, false, errors.New("latest execution block is nil")
}
for {
if ctx.Err() != nil {
return nil, false, ctx.Err()
}
currentTotalDifficulty, err := tDStringToUint256(blk.TotalDifficulty)
if err != nil {
return nil, false, errors.Wrap(err, "could not convert total difficulty to uint256")
}
blockReachedTTD := currentTotalDifficulty.Cmp(terminalTotalDifficulty) >= 0
parentHash := bytesutil.ToBytes32(blk.ParentHash)
if len(blk.ParentHash) == 0 || parentHash == params.BeaconConfig().ZeroHash {
return nil, false, nil
}
parentBlk, err := s.ExecutionBlockByHash(ctx, parentHash)
if err != nil {
return nil, false, errors.Wrap(err, "could not get parent execution block")
}
if parentBlk == nil {
return nil, false, errors.New("parent execution block is nil")
}
if blockReachedTTD {
parentTotalDifficulty, err := tDStringToUint256(parentBlk.TotalDifficulty)
if err != nil {
return nil, false, errors.Wrap(err, "could not convert total difficulty to uint256")
}
parentReachedTTD := parentTotalDifficulty.Cmp(terminalTotalDifficulty) >= 0
if !parentReachedTTD {
log.WithFields(logrus.Fields{
"number": blk.Number,
"hash": fmt.Sprintf("%#x", bytesutil.Trunc(blk.Hash)),
"td": blk.TotalDifficulty,
"parentTd": parentBlk.TotalDifficulty,
"ttd": terminalTotalDifficulty,
}).Info("Retrieved terminal block hash")
return blk.Hash, true, nil
}
} else {
return nil, false, nil
}
blk = parentBlk
}
}
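
A quick numeric reading of the loop above: with TTD = 2, a block whose td is 0x3 and whose parent's td is 0x1 is terminal, because the block crossed the threshold and its parent did not (this matches the "happy case" test later in this diff). A hypothetical call-site sketch, where EngineCaller is the interface extended above and log is the package logger:

func logTerminalBlock(ctx context.Context, engine EngineCaller) error {
	hash, exists, err := engine.GetTerminalBlockHash(ctx)
	if err != nil {
		return err
	}
	if exists {
		log.Infof("Terminal PoW block found: %#x", hash)
	}
	// exists == false just means the TTD has not been crossed yet.
	return nil
}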
// LatestExecutionBlock fetches the latest execution engine block by calling
// eth_blockByNumber via JSON-RPC.
func (s *Service) LatestExecutionBlock(ctx context.Context) (*pb.ExecutionBlock, error) {
@@ -251,3 +326,15 @@ func isTimeout(e error) bool {
t, ok := e.(httpTimeoutError)
return ok && t.Timeout()
}
func tDStringToUint256(td string) (*uint256.Int, error) {
b, err := hexutil.DecodeBig(td)
if err != nil {
return nil, err
}
i, overflows := uint256.FromBig(b)
if overflows {
return nil, errors.New("total difficulty overflowed")
}
return i, nil
}

View File

@@ -418,6 +418,155 @@ func TestClient_HTTP(t *testing.T) {
})
}
func TestServer_getPowBlockHashAtTerminalTotalDifficulty(t *testing.T) {
tests := []struct {
name string
paramsTd string
currentPowBlock *pb.ExecutionBlock
parentPowBlock *pb.ExecutionBlock
errLatestExecutionBlk error
wantTerminalBlockHash []byte
wantExists bool
errString string
}{
{
name: "config td overflows",
paramsTd: "1115792089237316195423570985008687907853269984665640564039457584007913129638912",
errString: "could not convert terminal total difficulty to uint256",
},
{
name: "could not get latest execution block",
paramsTd: "1",
errLatestExecutionBlk: errors.New("blah"),
errString: "could not get latest execution block",
},
{
name: "nil latest execution block",
paramsTd: "1",
errString: "latest execution block is nil",
},
{
name: "current execution block invalid TD",
paramsTd: "1",
currentPowBlock: &pb.ExecutionBlock{
Hash: []byte{'a'},
TotalDifficulty: "1115792089237316195423570985008687907853269984665640564039457584007913129638912",
},
errString: "could not convert total difficulty to uint256",
},
{
name: "current execution block has zero hash parent",
paramsTd: "2",
currentPowBlock: &pb.ExecutionBlock{
Hash: []byte{'a'},
ParentHash: params.BeaconConfig().ZeroHash[:],
TotalDifficulty: "0x3",
},
},
{
name: "could not get parent block",
paramsTd: "2",
currentPowBlock: &pb.ExecutionBlock{
Hash: []byte{'a'},
ParentHash: []byte{'b'},
TotalDifficulty: "0x3",
},
errString: "could not get parent execution block",
},
{
name: "parent execution block invalid TD",
paramsTd: "2",
currentPowBlock: &pb.ExecutionBlock{
Hash: []byte{'a'},
ParentHash: []byte{'b'},
TotalDifficulty: "0x3",
},
parentPowBlock: &pb.ExecutionBlock{
Hash: []byte{'b'},
ParentHash: []byte{'c'},
TotalDifficulty: "1",
},
errString: "could not convert total difficulty to uint256",
},
{
name: "happy case",
paramsTd: "2",
currentPowBlock: &pb.ExecutionBlock{
Hash: []byte{'a'},
ParentHash: []byte{'b'},
TotalDifficulty: "0x3",
},
parentPowBlock: &pb.ExecutionBlock{
Hash: []byte{'b'},
ParentHash: []byte{'c'},
TotalDifficulty: "0x1",
},
wantExists: true,
wantTerminalBlockHash: []byte{'a'},
},
{
name: "ttd not reached",
paramsTd: "3",
currentPowBlock: &pb.ExecutionBlock{
Hash: []byte{'a'},
ParentHash: []byte{'b'},
TotalDifficulty: "0x2",
},
parentPowBlock: &pb.ExecutionBlock{
Hash: []byte{'b'},
ParentHash: []byte{'c'},
TotalDifficulty: "0x1",
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
cfg := params.BeaconConfig()
cfg.TerminalTotalDifficulty = tt.paramsTd
params.OverrideBeaconConfig(cfg)
var m map[[32]byte]*pb.ExecutionBlock
if tt.parentPowBlock != nil {
m = map[[32]byte]*pb.ExecutionBlock{
bytesutil.ToBytes32(tt.parentPowBlock.Hash): tt.parentPowBlock,
}
}
client := mocks.EngineClient{
ErrLatestExecBlock: tt.errLatestExecutionBlk,
ExecutionBlock: tt.currentPowBlock,
BlockByHashMap: m,
}
b, e, err := client.GetTerminalBlockHash(context.Background())
if tt.errString != "" {
require.ErrorContains(t, tt.errString, err)
} else {
require.NoError(t, err)
require.DeepEqual(t, tt.wantExists, e)
require.DeepEqual(t, tt.wantTerminalBlockHash, b)
}
})
}
}
func Test_tDStringToUint256(t *testing.T) {
i, err := tDStringToUint256("0x0")
require.NoError(t, err)
require.DeepEqual(t, uint256.NewInt(0), i)
i, err = tDStringToUint256("0x10000")
require.NoError(t, err)
require.DeepEqual(t, uint256.NewInt(65536), i)
_, err = tDStringToUint256("100")
require.ErrorContains(t, "hex string without 0x prefix", err)
_, err = tDStringToUint256("0xzzzzzz")
require.ErrorContains(t, "invalid hex string", err)
_, err = tDStringToUint256("0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF" +
"FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF")
require.ErrorContains(t, "hex number > 256 bits", err)
}
func TestExchangeTransitionConfiguration(t *testing.T) {
fix := fixtures()
ctx := context.Background()

View File

@@ -1,9 +1,6 @@
package powchain
import (
"net/http"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/prysmaticlabs/prysm/beacon-chain/cache/depositcache"
statefeed "github.com/prysmaticlabs/prysm/beacon-chain/core/feed/state"
@@ -11,11 +8,9 @@ import (
"github.com/prysmaticlabs/prysm/beacon-chain/state"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/network"
"github.com/prysmaticlabs/prysm/network/authorization"
)
// DefaultRPCHTTPTimeout for HTTP requests via an RPC connection to an execution node.
const DefaultRPCHTTPTimeout = time.Second * 6
type Option func(s *Service) error
// WithHttpEndpoints deduplicates and parses http endpoints for the powchain service to use,
@@ -38,20 +33,29 @@ func WithHttpEndpoints(endpointStrings []string) Option {
}
}
// WithJWTSecret for authenticating the execution node JSON-RPC endpoint.
func WithJWTSecret(secret []byte) Option {
return func(c *Service) error {
// WithHttpEndpointsAndJWTSecret for authenticating the execution node JSON-RPC endpoint.
func WithHttpEndpointsAndJWTSecret(endpointStrings []string, secret []byte) Option {
return func(s *Service) error {
if len(secret) == 0 {
return nil
}
authTransport := &jwtTransport{
underlyingTransport: http.DefaultTransport,
jwtSecret: secret,
stringEndpoints := dedupEndpoints(endpointStrings)
endpoints := make([]network.Endpoint, len(stringEndpoints))
// Overwrite authorization type for all endpoints to be of a bearer
// type.
for i, e := range stringEndpoints {
hEndpoint := HttpEndpoint(e)
hEndpoint.Auth.Method = authorization.Bearer
hEndpoint.Auth.Value = string(secret)
endpoints[i] = hEndpoint
}
c.cfg.httpRPCClient = &http.Client{
Timeout: DefaultRPCHTTPTimeout,
Transport: authTransport,
// Select first http endpoint in the provided list.
var currEndpoint network.Endpoint
if len(endpointStrings) > 0 {
currEndpoint = endpoints[0]
}
s.cfg.httpEndpoints = endpoints
s.cfg.currHttpEndpoint = currEndpoint
return nil
}
}
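
A hypothetical wiring sketch for the option above; the endpoint URL and secret are placeholders, and NewService/Start come from the powchain service shown later in this diff:

secret := []byte("placeholder 32-byte shared JWT secret")
svc, err := powchain.NewService(ctx,
	powchain.WithHttpEndpointsAndJWTSecret([]string{"http://127.0.0.1:8551"}, secret),
)
if err != nil {
	return err
}
svc.Start() // requests to the endpoint now carry an Authorization bearer header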

View File

@@ -0,0 +1,176 @@
package powchain
import (
"context"
"fmt"
"net/url"
"time"
"github.com/ethereum/go-ethereum/ethclient"
gethRPC "github.com/ethereum/go-ethereum/rpc"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/config/params"
contracts "github.com/prysmaticlabs/prysm/contracts/deposit"
"github.com/prysmaticlabs/prysm/io/logs"
"github.com/prysmaticlabs/prysm/network"
"github.com/prysmaticlabs/prysm/network/authorization"
)
func (s *Service) setupExecutionClientConnections(ctx context.Context, currEndpoint network.Endpoint) error {
client, err := s.newRPCClientWithAuth(ctx, currEndpoint)
if err != nil {
return errors.Wrap(err, "could not dial execution node")
}
// Attach the clients to the service struct.
fetcher := ethclient.NewClient(client)
s.rpcClient = client
s.httpLogger = fetcher
s.eth1DataFetcher = fetcher
depositContractCaller, err := contracts.NewDepositContractCaller(s.cfg.depositContractAddr, fetcher)
if err != nil {
client.Close()
return errors.Wrap(err, "could not initialize deposit contract caller")
}
s.depositContractCaller = depositContractCaller
// Ensure we have the correct chain and deposit IDs.
if err := ensureCorrectExecutionChain(ctx, fetcher); err != nil {
client.Close()
return errors.Wrap(err, "could not make initial request to verify execution chain ID")
}
s.updateConnectedETH1(true)
s.runError = nil
return nil
}
// Every N seconds (the backOffPeriod), this routine attempts to re-establish an execution
// client connection; if that fails, it falls back to the next endpoint, if one is defined.
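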
func (s *Service) pollConnectionStatus(ctx context.Context) {
// Use a custom logger to only log errors
logCounter := 0
errorLogger := func(err error, msg string) {
if logCounter > logThreshold {
log.Errorf("%s: %v", msg, err)
logCounter = 0
}
logCounter++
}
ticker := time.NewTicker(backOffPeriod)
defer ticker.Stop()
for {
select {
case <-ticker.C:
log.Debugf("Trying to dial endpoint: %s", logs.MaskCredentialsLogging(s.cfg.currHttpEndpoint.Url))
if err := s.setupExecutionClientConnections(ctx, s.cfg.currHttpEndpoint); err != nil {
errorLogger(err, "Could not connect to execution client endpoint")
s.runError = err
s.fallbackToNextEndpoint()
}
case <-s.ctx.Done():
log.Debug("Received cancelled context,closing existing powchain service")
return
}
}
}
// Forces to retry an execution client connection.
func (s *Service) retryExecutionClientConnection(ctx context.Context, err error) {
s.runError = err
s.updateConnectedETH1(false)
// Back off for a while before redialing.
time.Sleep(backOffPeriod)
if err := s.setupExecutionClientConnections(ctx, s.cfg.currHttpEndpoint); err != nil {
s.runError = err
return
}
// Reset run error in the event of a successful connection.
s.runError = nil
}
// This performs a health check on our primary endpoint, and if it
// is ready to serve we connect to it again. This method is only
// relevant if we are on our backup endpoint.
func (s *Service) checkDefaultEndpoint(ctx context.Context) {
primaryEndpoint := s.cfg.httpEndpoints[0]
// Return early if we are running on our primary
// endpoint.
if s.cfg.currHttpEndpoint.Equals(primaryEndpoint) {
return
}
if err := s.setupExecutionClientConnections(ctx, primaryEndpoint); err != nil {
log.Debugf("Primary endpoint not ready: %v", err)
return
}
s.updateCurrHttpEndpoint(primaryEndpoint)
}
// This is an inefficient way to search for the next endpoint, but given N is
// expected to be small, it is fine to search this way.
func (s *Service) fallbackToNextEndpoint() {
currEndpoint := s.cfg.currHttpEndpoint
currIndex := 0
totalEndpoints := len(s.cfg.httpEndpoints)
for i, endpoint := range s.cfg.httpEndpoints {
if endpoint.Equals(currEndpoint) {
currIndex = i
break
}
}
nextIndex := currIndex + 1
if nextIndex >= totalEndpoints {
nextIndex = 0
}
s.updateCurrHttpEndpoint(s.cfg.httpEndpoints[nextIndex])
if nextIndex != currIndex {
log.Infof("Falling back to alternative endpoint: %s", logs.MaskCredentialsLogging(s.cfg.currHttpEndpoint.Url))
}
}
// Initializes an RPC connection with authentication headers.
func (s *Service) newRPCClientWithAuth(ctx context.Context, endpoint network.Endpoint) (*gethRPC.Client, error) {
// Need to handle ipc and http
var client *gethRPC.Client
u, err := url.Parse(endpoint.Url)
if err != nil {
return nil, err
}
switch u.Scheme {
case "http", "https":
client, err = gethRPC.DialHTTPWithClient(endpoint.Url, endpoint.HttpClient())
if err != nil {
return nil, err
}
case "":
client, err = gethRPC.DialIPC(ctx, endpoint.Url)
if err != nil {
return nil, err
}
default:
return nil, fmt.Errorf("no known transport for URL scheme %q", u.Scheme)
}
if endpoint.Auth.Method != authorization.None {
header, err := endpoint.Auth.ToHeaderValue()
if err != nil {
return nil, err
}
client.SetHeader("Authorization", header)
}
return client, nil
}
// Checks the chain ID of the execution client to ensure
// it matches local parameters of what Prysm expects.
func ensureCorrectExecutionChain(ctx context.Context, client *ethclient.Client) error {
cID, err := client.ChainID(ctx)
if err != nil {
return err
}
wantChainID := params.BeaconConfig().DepositChainID
if cID.Uint64() != wantChainID {
return fmt.Errorf("wanted chain ID %d, got %d", wantChainID, cID.Uint64())
}
return nil
}
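
The fallback logic in this new file rotates through endpoints with wrap-around. Reduced to its arithmetic (a sketch, not the service code):

// nextEndpointIndex returns (curr+1) wrapped to 0, so a single configured
// endpoint simply falls back to itself. E.g. nextEndpointIndex(2, 3) == 0.
func nextEndpointIndex(curr, total int) int {
	next := curr + 1
	if next >= total {
		next = 0
	}
	return next
}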

View File

@@ -7,7 +7,6 @@ import (
"context"
"fmt"
"math/big"
"net/http"
"reflect"
"runtime/debug"
"sort"
@@ -39,10 +38,8 @@ import (
"github.com/prysmaticlabs/prysm/container/trie"
contracts "github.com/prysmaticlabs/prysm/contracts/deposit"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/io/logs"
"github.com/prysmaticlabs/prysm/monitoring/clientstats"
"github.com/prysmaticlabs/prysm/network"
"github.com/prysmaticlabs/prysm/network/authorization"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
prysmTime "github.com/prysmaticlabs/prysm/time"
"github.com/prysmaticlabs/prysm/time/slots"
@@ -114,6 +111,7 @@ type Chain interface {
// RPCDataFetcher defines a subset of methods conformed to by ETH1.0 RPC clients for
// fetching eth1 data from the clients.
type RPCDataFetcher interface {
Close()
HeaderByNumber(ctx context.Context, number *big.Int) (*gethTypes.Header, error)
HeaderByHash(ctx context.Context, hash common.Hash) (*gethTypes.Header, error)
SyncProgress(ctx context.Context) (*ethereum.SyncProgress, error)
@@ -121,6 +119,7 @@ type RPCDataFetcher interface {
// RPCClient defines the rpc methods required to interact with the eth1 node.
type RPCClient interface {
Close()
BatchCall(b []gethRPC.BatchElem) error
CallContext(ctx context.Context, result interface{}, method string, args ...interface{}) error
}
@@ -135,7 +134,6 @@ type config struct {
eth1HeaderReqLimit uint64
beaconNodeStatsUpdater BeaconNodeStatsUpdater
httpEndpoints []network.Endpoint
httpRPCClient *http.Client
currHttpEndpoint network.Endpoint
finalizedStateAtStartup state.BeaconState
}
@@ -228,14 +226,9 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
// Start a web3 service's main event loop.
func (s *Service) Start() {
if err := s.connectToPowChain(); err != nil {
log.WithError(err).Fatal("Could not connect to execution endpoint")
if err := s.setupExecutionClientConnections(s.ctx, s.cfg.currHttpEndpoint); err != nil {
log.WithError(err).Error("Could not connect to execution endpoint")
}
log.WithFields(logrus.Fields{
"endpoint": logs.MaskCredentialsLogging(s.cfg.currHttpEndpoint.Url),
}).Info("Connected to Ethereum execution client RPC")
// If the chain has not started already and we don't have access to eth1 nodes, we will not be
// able to generate the genesis state.
if !s.chainStartData.Chainstarted && s.cfg.currHttpEndpoint.Url == "" {
@@ -253,7 +246,7 @@ func (s *Service) Start() {
s.isRunning = true
// Poll the execution client connection and fallback if errors occur.
go s.pollConnectionStatus()
go s.pollConnectionStatus(s.ctx)
// Check transition configuration for the engine API client in the background.
go s.checkTransitionConfiguration(s.ctx, make(chan *feed.Event, 1))
@@ -266,7 +259,12 @@ func (s *Service) Stop() error {
if s.cancel != nil {
defer s.cancel()
}
s.closeClients()
if s.rpcClient != nil {
s.rpcClient.Close()
}
if s.eth1DataFetcher != nil {
s.eth1DataFetcher.Close()
}
return nil
}
@@ -338,10 +336,7 @@ func (s *Service) CurrentETH1Endpoint() string {
// CurrentETH1ConnectionError returns the error (if any) of the current connection.
func (s *Service) CurrentETH1ConnectionError() error {
httpClient, rpcClient, err := s.dialETH1Nodes(s.cfg.currHttpEndpoint)
httpClient.Close()
rpcClient.Close()
return err
return s.runError
}
// ETH1Endpoints returns the slice of HTTP endpoint URLs (default is 0th element).
@@ -358,10 +353,17 @@ func (s *Service) ETH1Endpoints() []string {
func (s *Service) ETH1ConnectionErrors() []error {
var errs []error
for _, ep := range s.cfg.httpEndpoints {
httpClient, rpcClient, err := s.dialETH1Nodes(ep)
httpClient.Close()
rpcClient.Close()
errs = append(errs, err)
client, err := s.newRPCClientWithAuth(s.ctx, ep)
if err != nil {
errs = append(errs, err)
continue
}
if err := ensureCorrectExecutionChain(s.ctx, ethclient.NewClient(client)); err != nil {
client.Close()
errs = append(errs, err)
continue
}
client.Close()
}
return errs
}
@@ -376,146 +378,6 @@ func (s *Service) followBlockHeight(_ context.Context) (uint64, error) {
return latestValidBlock, nil
}
func (s *Service) connectToPowChain() error {
httpClient, rpcClient, err := s.dialETH1Nodes(s.cfg.currHttpEndpoint)
if err != nil {
return errors.Wrap(err, "could not dial execution node")
}
depositContractCaller, err := contracts.NewDepositContractCaller(s.cfg.depositContractAddr, httpClient)
if err != nil {
return errors.Wrap(err, "could not initialize deposit contract caller")
}
if httpClient == nil || rpcClient == nil || depositContractCaller == nil {
return errors.New("execution client RPC is nil")
}
s.httpLogger = httpClient
s.eth1DataFetcher = httpClient
s.depositContractCaller = depositContractCaller
s.rpcClient = rpcClient
s.updateConnectedETH1(true)
s.runError = nil
return nil
}
func (s *Service) dialETH1Nodes(endpoint network.Endpoint) (*ethclient.Client, *gethRPC.Client, error) {
httpRPCClient, err := gethRPC.Dial(endpoint.Url)
if err != nil {
return nil, nil, err
}
if endpoint.Auth.Method != authorization.None {
header, err := endpoint.Auth.ToHeaderValue()
if err != nil {
return nil, nil, err
}
httpRPCClient.SetHeader("Authorization", header)
}
httpClient := ethclient.NewClient(httpRPCClient)
// Add a method to clean-up and close clients in the event
// of any connection failure.
closeClients := func() {
httpRPCClient.Close()
httpClient.Close()
}
// Make a simple call to ensure we are actually connected to a working node.
cID, err := httpClient.ChainID(s.ctx)
if err != nil {
closeClients()
return nil, nil, err
}
nID, err := httpClient.NetworkID(s.ctx)
if err != nil {
closeClients()
return nil, nil, err
}
if cID.Uint64() != params.BeaconConfig().DepositChainID {
closeClients()
return nil, nil, fmt.Errorf("eth1 node using incorrect chain id, %d != %d", cID.Uint64(), params.BeaconConfig().DepositChainID)
}
if nID.Uint64() != params.BeaconConfig().DepositNetworkID {
closeClients()
return nil, nil, fmt.Errorf("eth1 node using incorrect network id, %d != %d", nID.Uint64(), params.BeaconConfig().DepositNetworkID)
}
return httpClient, httpRPCClient, nil
}
// closes down our active eth1 clients.
func (s *Service) closeClients() {
gethClient, ok := s.rpcClient.(*gethRPC.Client)
if ok {
gethClient.Close()
}
httpClient, ok := s.eth1DataFetcher.(*ethclient.Client)
if ok {
httpClient.Close()
}
}
func (s *Service) pollConnectionStatus() {
// Use a custom logger to only log errors
logCounter := 0
errorLogger := func(err error, msg string) {
if logCounter > logThreshold {
log.Errorf("%s: %v", msg, err)
logCounter = 0
}
logCounter++
}
ticker := time.NewTicker(backOffPeriod)
defer ticker.Stop()
for {
select {
case <-ticker.C:
log.Debugf("Trying to dial endpoint: %s", logs.MaskCredentialsLogging(s.cfg.currHttpEndpoint.Url))
errConnect := s.connectToPowChain()
if errConnect != nil {
errorLogger(errConnect, "Could not connect to powchain endpoint")
s.runError = errConnect
s.fallbackToNextEndpoint()
continue
}
case <-s.ctx.Done():
log.Debug("Received cancelled context,closing existing powchain service")
return
}
}
}
// checks if the eth1 node is healthy and ready to serve before
// fetching data from it.
func (s *Service) isEth1NodeSynced() (bool, error) {
syncProg, err := s.eth1DataFetcher.SyncProgress(s.ctx)
if err != nil {
return false, err
}
if syncProg != nil {
return false, nil
}
head, err := s.eth1DataFetcher.HeaderByNumber(s.ctx, nil)
if err != nil {
return false, err
}
return !eth1HeadIsBehind(head.Time), nil
}
// Reconnect to eth1 node in case of any failure.
func (s *Service) retryETH1Node(err error) {
s.runError = err
s.updateConnectedETH1(false)
// Back off for a while before
// resuming dialing the eth1 node.
time.Sleep(backOffPeriod)
if err := s.connectToPowChain(); err != nil {
s.runError = err
return
}
// Reset run error in the event of a successful connection.
s.runError = nil
}
func (s *Service) initDepositCaches(ctx context.Context, ctrs []*ethpb.DepositContainer) error {
if len(ctrs) == 0 {
return nil
@@ -551,11 +413,14 @@ func (s *Service) initDepositCaches(ctx context.Context, ctrs []*ethpb.DepositCo
// accumulates. We finalize them here before we are ready to receive a block.
// Otherwise, the first few blocks will be slower to compute as we will
// hold the lock and be busy finalizing the deposits.
s.cfg.depositCache.InsertFinalizedDeposits(ctx, int64(currIndex)) // lint:ignore uintcast -- deposit index will not exceed int64 in your lifetime.
// The deposit index in the state is always the index of the next deposit
// to be included (rather than the last one to be processed). This was most likely
// done as the state cannot represent signed integers.
actualIndex := int64(currIndex) - 1 // lint:ignore uintcast -- deposit index will not exceed int64 in your lifetime.
s.cfg.depositCache.InsertFinalizedDeposits(ctx, actualIndex)
// Deposit proofs are only used during state transition and can be safely removed to save space.
// lint:ignore uintcast -- deposit index will not exceed int64 in your lifetime.
if err = s.cfg.depositCache.PruneProofs(ctx, int64(currIndex)); err != nil {
if err = s.cfg.depositCache.PruneProofs(ctx, actualIndex); err != nil {
return errors.Wrap(err, "could not prune deposit proofs")
}
}
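
A worked example of the off-by-one fixed above: if the state's eth1 deposit index is 3, deposits 0..2 have been processed, so the last finalizable deposit index is 2, and finalizing at 3 would be one too many. As a one-liner (hypothetical helper name):

// lastProcessedDepositIndex converts the state's "next deposit to include"
// index into the index of the last processed deposit.
func lastProcessedDepositIndex(stateEth1DepositIndex uint64) int64 {
	return int64(stateEth1DepositIndex) - 1 // e.g. state index 3 -> deposit 2
}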
@@ -650,7 +515,7 @@ func (s *Service) handleETH1FollowDistance() {
fiveMinutesTimeout := prysmTime.Now().Add(-5 * time.Minute)
// check that web3 client is syncing
if time.Unix(int64(s.latestEth1Data.BlockTime), 0).Before(fiveMinutesTimeout) {
log.Warn("eth1 client is not syncing")
log.Warn("Execution client is not syncing")
}
if !s.chainStartData.Chainstarted {
if err := s.checkBlockNumberForChainStart(ctx, big.NewInt(int64(s.latestEth1Data.LastRequestedBlock))); err != nil {
@@ -680,6 +545,15 @@ func (s *Service) handleETH1FollowDistance() {
}
func (s *Service) initPOWService() {
// Use a custom logger to only log errors
logCounter := 0
errorLogger := func(err error, msg string) {
if logCounter > logThreshold {
log.Errorf("%s: %v", msg, err)
logCounter = 0
}
logCounter++
}
// Run in a select loop to retry in the event of any failures.
for {
@@ -690,8 +564,8 @@ func (s *Service) initPOWService() {
ctx := s.ctx
header, err := s.eth1DataFetcher.HeaderByNumber(ctx, nil)
if err != nil {
log.Errorf("Unable to retrieve latest ETH1.0 chain header: %v", err)
s.retryETH1Node(err)
s.retryExecutionClientConnection(ctx, err)
errorLogger(err, "Unable to retrieve latest execution client header")
continue
}
@@ -700,14 +574,14 @@ func (s *Service) initPOWService() {
s.latestEth1Data.BlockTime = header.Time
if err := s.processPastLogs(ctx); err != nil {
log.Errorf("Unable to process past logs %v", err)
s.retryETH1Node(err)
s.retryExecutionClientConnection(ctx, err)
errorLogger(err, "Unable to process past deposit contract logs")
continue
}
// Cache eth1 headers from our voting period.
if err := s.cacheHeadersForEth1DataVote(ctx); err != nil {
log.Errorf("Unable to process past headers %v", err)
s.retryETH1Node(err)
s.retryExecutionClientConnection(ctx, err)
errorLogger(err, "Unable to cache headers for execution client votes")
continue
}
// Handle edge case with embedded genesis state by fetching genesis header to determine
@@ -720,15 +594,15 @@ func (s *Service) initPOWService() {
if genHash != [32]byte{} {
genHeader, err := s.eth1DataFetcher.HeaderByHash(ctx, genHash)
if err != nil {
log.Errorf("Unable to retrieve genesis ETH1.0 chain header: %v", err)
s.retryETH1Node(err)
s.retryExecutionClientConnection(ctx, err)
errorLogger(err, "Unable to retrieve proof-of-stake genesis block data")
continue
}
genBlock = genHeader.Number.Uint64()
}
s.chainStartData.GenesisBlock = genBlock
if err := s.savePowchainData(ctx); err != nil {
log.Errorf("Unable to save powchain data: %v", err)
errorLogger(err, "Unable to save execution client data")
}
}
return
@@ -757,17 +631,16 @@ func (s *Service) run(done <-chan struct{}) {
head, err := s.eth1DataFetcher.HeaderByNumber(s.ctx, nil)
if err != nil {
log.WithError(err).Debug("Could not fetch latest eth1 header")
s.retryETH1Node(err)
continue
}
if eth1HeadIsBehind(head.Time) {
s.retryExecutionClientConnection(s.ctx, err)
log.WithError(errFarBehind).Debug("Could not get an up to date eth1 header")
s.retryETH1Node(errFarBehind)
continue
}
s.processBlockHeader(head)
s.handleETH1FollowDistance()
s.checkDefaultEndpoint()
s.checkDefaultEndpoint(s.ctx)
case <-chainstartTicker.C:
if s.chainStartData.Chainstarted {
chainstartTicker.Stop()
@@ -854,59 +727,6 @@ func (s *Service) determineEarliestVotingBlock(ctx context.Context, followBlock
return hdr.Number.Uint64(), nil
}
// This performs a health check on our primary endpoint, and if it
// is ready to serve we connect to it again. This method is only
// relevant if we are on our backup endpoint.
func (s *Service) checkDefaultEndpoint() {
primaryEndpoint := s.cfg.httpEndpoints[0]
// Return early if we are running on our primary
// endpoint.
if s.cfg.currHttpEndpoint.Equals(primaryEndpoint) {
return
}
httpClient, rpcClient, err := s.dialETH1Nodes(primaryEndpoint)
if err != nil {
log.Debugf("Primary endpoint not ready: %v", err)
return
}
log.Info("Primary endpoint ready again, switching back to it")
// Close the clients and let our main connection routine
// properly connect with it.
httpClient.Close()
rpcClient.Close()
// Close current active clients.
s.closeClients()
// Switch back to primary endpoint and try connecting
// to it again.
s.updateCurrHttpEndpoint(primaryEndpoint)
s.retryETH1Node(nil)
}
// This is an inefficient way to search for the next endpoint, but given N is expected to be
// small ( < 25), it is fine to search this way.
func (s *Service) fallbackToNextEndpoint() {
currEndpoint := s.cfg.currHttpEndpoint
currIndex := 0
totalEndpoints := len(s.cfg.httpEndpoints)
for i, endpoint := range s.cfg.httpEndpoints {
if endpoint.Equals(currEndpoint) {
currIndex = i
break
}
}
nextIndex := currIndex + 1
if nextIndex >= totalEndpoints {
nextIndex = 0
}
s.updateCurrHttpEndpoint(s.cfg.httpEndpoints[nextIndex])
if nextIndex != currIndex {
log.Infof("Falling back to alternative endpoint: %s", logs.MaskCredentialsLogging(s.cfg.currHttpEndpoint.Url))
}
}
// initializes our service from the provided eth1data object by initializing all the relevant
// fields and data.
func (s *Service) initializeEth1Data(ctx context.Context, eth1DataInDB *ethpb.ETH1ChainData) error {

View File

@@ -42,6 +42,8 @@ type goodLogger struct {
backend *backends.SimulatedBackend
}
func (_ *goodLogger) Close() {}
func (g *goodLogger) SubscribeFilterLogs(ctx context.Context, q ethereum.FilterQuery, ch chan<- gethTypes.Log) (ethereum.Subscription, error) {
if g.backend == nil {
return new(event.Feed).Subscribe(ch), nil
@@ -80,6 +82,8 @@ type goodFetcher struct {
backend *backends.SimulatedBackend
}
func (_ *goodFetcher) Close() {}
func (g *goodFetcher) HeaderByHash(_ context.Context, hash common.Hash) (*gethTypes.Header, error) {
if bytes.Equal(hash.Bytes(), common.BytesToHash([]byte{0}).Bytes()) {
return nil, fmt.Errorf("expected block hash to be nonzero %v", hash)
@@ -225,10 +229,6 @@ func TestService_Eth1Synced(t *testing.T) {
now := time.Now()
assert.NoError(t, testAcc.Backend.AdjustTime(now.Sub(time.Unix(int64(currTime), 0))))
testAcc.Backend.Commit()
synced, err := web3Service.isEth1NodeSynced()
require.NoError(t, err)
assert.Equal(t, true, synced, "Expected eth1 nodes to be synced")
}
func TestFollowBlock_OK(t *testing.T) {
@@ -472,7 +472,7 @@ func TestInitDepositCacheWithFinalization_OK(t *testing.T) {
require.NoError(t, s.cfg.beaconDB.SaveState(context.Background(), emptyState, headRoot))
require.NoError(t, stateGen.SaveState(context.Background(), headRoot, emptyState))
s.cfg.stateGen = stateGen
require.NoError(t, emptyState.SetEth1DepositIndex(2))
require.NoError(t, emptyState.SetEth1DepositIndex(3))
ctx := context.Background()
require.NoError(t, beaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Epoch: slots.ToEpoch(0), Root: headRoot[:]}))
@@ -480,8 +480,8 @@ func TestInitDepositCacheWithFinalization_OK(t *testing.T) {
s.chainStartData.Chainstarted = true
require.NoError(t, s.initDepositCaches(context.Background(), ctrs))
deps := s.cfg.depositCache.NonFinalizedDeposits(context.Background(), nil)
fDeposits := s.cfg.depositCache.FinalizedDeposits(ctx)
deps := s.cfg.depositCache.NonFinalizedDeposits(context.Background(), fDeposits.MerkleTrieIndex, nil)
assert.Equal(t, 0, len(deps))
}

View File

@@ -26,6 +26,7 @@ go_library(
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_ethereum_go_ethereum//core/types:go_default_library",
"@com_github_ethereum_go_ethereum//rpc:go_default_library",
"@com_github_holiman_uint256//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)

View File

@@ -2,25 +2,32 @@ package testing
import (
"context"
"math/big"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/holiman/uint256"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/config/params"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
pb "github.com/prysmaticlabs/prysm/proto/engine/v1"
)
// EngineClient --
type EngineClient struct {
NewPayloadResp []byte
PayloadIDBytes *pb.PayloadIDBytes
ForkChoiceUpdatedResp []byte
ExecutionPayload *pb.ExecutionPayload
ExecutionBlock *pb.ExecutionBlock
Err error
ErrLatestExecBlock error
ErrExecBlockByHash error
ErrForkchoiceUpdated error
ErrNewPayload error
BlockByHashMap map[[32]byte]*pb.ExecutionBlock
NewPayloadResp []byte
PayloadIDBytes *pb.PayloadIDBytes
ForkChoiceUpdatedResp []byte
ExecutionPayload *pb.ExecutionPayload
ExecutionBlock *pb.ExecutionBlock
Err error
ErrLatestExecBlock error
ErrExecBlockByHash error
ErrForkchoiceUpdated error
ErrNewPayload error
BlockByHashMap map[[32]byte]*pb.ExecutionBlock
TerminalBlockHash []byte
TerminalBlockHashExists bool
}
// NewPayload --
@@ -58,3 +65,52 @@ func (e *EngineClient) ExecutionBlockByHash(_ context.Context, h common.Hash) (*
}
return b, e.ErrExecBlockByHash
}
// GetTerminalBlockHash --
func (e *EngineClient) GetTerminalBlockHash(ctx context.Context) ([]byte, bool, error) {
ttd := new(big.Int)
ttd.SetString(params.BeaconConfig().TerminalTotalDifficulty, 10)
terminalTotalDifficulty, overflows := uint256.FromBig(ttd)
if overflows {
return nil, false, errors.New("could not convert terminal total difficulty to uint256")
}
blk, err := e.LatestExecutionBlock(ctx)
if err != nil {
return nil, false, errors.Wrap(err, "could not get latest execution block")
}
if blk == nil {
return nil, false, errors.New("latest execution block is nil")
}
for {
b, err := hexutil.DecodeBig(blk.TotalDifficulty)
if err != nil {
return nil, false, errors.Wrap(err, "could not convert total difficulty to uint256")
}
currentTotalDifficulty, _ := uint256.FromBig(b)
blockReachedTTD := currentTotalDifficulty.Cmp(terminalTotalDifficulty) >= 0
parentHash := bytesutil.ToBytes32(blk.ParentHash)
if len(blk.ParentHash) == 0 || parentHash == params.BeaconConfig().ZeroHash {
return nil, false, nil
}
parentBlk, err := e.ExecutionBlockByHash(ctx, parentHash)
if err != nil {
return nil, false, errors.Wrap(err, "could not get parent execution block")
}
if blockReachedTTD {
b, err := hexutil.DecodeBig(parentBlk.TotalDifficulty)
if err != nil {
return nil, false, errors.Wrap(err, "could not convert total difficulty to uint256")
}
parentTotalDifficulty, _ := uint256.FromBig(b)
parentReachedTTD := parentTotalDifficulty.Cmp(terminalTotalDifficulty) >= 0
if !parentReachedTTD {
return blk.Hash, true, nil
}
} else {
return nil, false, nil
}
blk = parentBlk
}
}

View File

@@ -144,6 +144,8 @@ type RPCClient struct {
Backend *backends.SimulatedBackend
}
func (_ *RPCClient) Close() {}
func (*RPCClient) CallContext(_ context.Context, _ interface{}, _ string, _ ...interface{}) error {
return nil
}

View File

@@ -77,7 +77,6 @@ go_library(
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_ferranbt_fastssz//:go_default_library",
"@com_github_holiman_uint256//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
@@ -162,7 +161,6 @@ go_test(
"@com_github_d4l3k_messagediff//:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_golang_mock//gomock:go_default_library",
"@com_github_holiman_uint256//:go_default_library",
"@com_github_prysmaticlabs_eth2_types//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",

View File

@@ -37,17 +37,24 @@ func (vs *Server) StreamBlocksAltair(req *ethpb.StreamBlocksRequest, stream ethp
case version.Phase0:
phBlk, ok := data.SignedBlock.Proto().(*ethpb.SignedBeaconBlock)
if !ok {
log.Warn("Mismatch between version and block type, was expecting *ethpb.SignedBeaconBlock")
log.Warn("Mismatch between version and block type, was expecting SignedBeaconBlock")
continue
}
b.Block = &ethpb.StreamBlocksResponse_Phase0Block{Phase0Block: phBlk}
case version.Altair:
phBlk, ok := data.SignedBlock.Proto().(*ethpb.SignedBeaconBlockAltair)
if !ok {
log.Warn("Mismatch between version and block type, was expecting *v2.SignedBeaconBlockAltair")
log.Warn("Mismatch between version and block type, was expecting SignedBeaconBlockAltair")
continue
}
b.Block = &ethpb.StreamBlocksResponse_AltairBlock{AltairBlock: phBlk}
case version.Bellatrix:
phBlk, ok := data.SignedBlock.Proto().(*ethpb.SignedBeaconBlockBellatrix)
if !ok {
log.Warn("Mismatch between version and block type, was expecting SignedBeaconBlockBellatrix")
continue
}
b.Block = &ethpb.StreamBlocksResponse_BellatrixBlock{BellatrixBlock: phBlk}
}
if err := stream.Send(b); err != nil {

View File

@@ -161,7 +161,7 @@ func (vs *Server) depositTrie(ctx context.Context, canonicalEth1Data *ethpb.Eth1
finalizedDeposits := vs.DepositFetcher.FinalizedDeposits(ctx)
depositTrie = finalizedDeposits.Deposits
upToEth1DataDeposits := vs.DepositFetcher.NonFinalizedDeposits(ctx, canonicalEth1DataHeight)
upToEth1DataDeposits := vs.DepositFetcher.NonFinalizedDeposits(ctx, finalizedDeposits.MerkleTrieIndex, canonicalEth1DataHeight)
insertIndex := finalizedDeposits.MerkleTrieIndex + 1
for _, dep := range upToEth1DataDeposits {
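
The signature change above lets the fetcher skip deposits the finalized trie already covers: leaves 0..MerkleTrieIndex live in the finalized trie, so pending deposits are appended from MerkleTrieIndex+1 onward. A schematic sketch with a hypothetical trie interface:

type leafInserter interface {
	Insert(leaf [32]byte, index int64) // assumed signature, for illustration
}

func appendPending(tr leafInserter, merkleTrieIndex int64, pending [][32]byte) {
	insert := merkleTrieIndex + 1 // finalized trie already holds 0..merkleTrieIndex
	for _, leaf := range pending {
		tr.Insert(leaf, insert)
		insert++
	}
}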

View File
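The one-line change above threads the finalized trie's last insert index into NonFinalizedDeposits, so the fetcher returns only deposits that are not already part of the finalized trie. A sketch of that contract under hypothetical types:

package main

import "fmt"

// deposit is a hypothetical stand-in carrying only its Merkle trie index.
type deposit struct{ index int64 }

// nonFinalized returns deposits strictly after the finalized trie's last
// inserted index, matching the intent of the new NonFinalizedDeposits signature.
func nonFinalized(all []deposit, finalizedIndex int64) []deposit {
	out := make([]deposit, 0, len(all))
	for _, d := range all {
		if d.index > finalizedIndex {
			out = append(out, d)
		}
	}
	return out
}

func main() {
	all := []deposit{{0}, {1}, {2}, {3}}
	fmt.Println(len(nonFinalized(all, 1))) // 2: only deposits 2 and 3 remain
}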

@@ -3,11 +3,7 @@ package validator
import (
"bytes"
"context"
"fmt"
"math/big"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/holiman/uint256"
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
@@ -181,79 +177,7 @@ func (vs *Server) getTerminalBlockHashIfExists(ctx context.Context) ([]byte, boo
return terminalBlockHash.Bytes(), true, nil
}
return vs.getPowBlockHashAtTerminalTotalDifficulty(ctx)
}
// This returns the valid terminal block hash based on total difficulty.
//
// Spec code:
// def get_pow_block_at_terminal_total_difficulty(pow_chain: Dict[Hash32, PowBlock]) -> Optional[PowBlock]:
// # `pow_chain` abstractly represents all blocks in the PoW chain
// for block in pow_chain:
// parent = pow_chain[block.parent_hash]
// block_reached_ttd = block.total_difficulty >= TERMINAL_TOTAL_DIFFICULTY
// parent_reached_ttd = parent.total_difficulty >= TERMINAL_TOTAL_DIFFICULTY
// if block_reached_ttd and not parent_reached_ttd:
// return block
//
// return None
func (vs *Server) getPowBlockHashAtTerminalTotalDifficulty(ctx context.Context) ([]byte, bool, error) {
ttd := new(big.Int)
ttd.SetString(params.BeaconConfig().TerminalTotalDifficulty, 10)
terminalTotalDifficulty, overflows := uint256.FromBig(ttd)
if overflows {
return nil, false, errors.New("could not convert terminal total difficulty to uint256")
}
blk, err := vs.ExecutionEngineCaller.LatestExecutionBlock(ctx)
if err != nil {
return nil, false, errors.Wrap(err, "could not get latest execution block")
}
if blk == nil {
return nil, false, errors.New("latest execution block is nil")
}
for {
if ctx.Err() != nil {
return nil, false, ctx.Err()
}
currentTotalDifficulty, err := tDStringToUint256(blk.TotalDifficulty)
if err != nil {
return nil, false, errors.Wrap(err, "could not convert total difficulty to uint256")
}
blockReachedTTD := currentTotalDifficulty.Cmp(terminalTotalDifficulty) >= 0
parentHash := bytesutil.ToBytes32(blk.ParentHash)
if len(blk.ParentHash) == 0 || parentHash == params.BeaconConfig().ZeroHash {
return nil, false, nil
}
parentBlk, err := vs.ExecutionEngineCaller.ExecutionBlockByHash(ctx, parentHash)
if err != nil {
return nil, false, errors.Wrap(err, "could not get parent execution block")
}
if parentBlk == nil {
return nil, false, errors.New("parent execution block is nil")
}
if blockReachedTTD {
parentTotalDifficulty, err := tDStringToUint256(parentBlk.TotalDifficulty)
if err != nil {
return nil, false, errors.Wrap(err, "could not convert total difficulty to uint256")
}
parentReachedTTD := parentTotalDifficulty.Cmp(terminalTotalDifficulty) >= 0
if !parentReachedTTD {
log.WithFields(logrus.Fields{
"number": blk.Number,
"hash": fmt.Sprintf("%#x", bytesutil.Trunc(blk.Hash)),
"td": blk.TotalDifficulty,
"parentTd": parentBlk.TotalDifficulty,
"ttd": terminalTotalDifficulty,
}).Info("Retrieved terminal block hash")
return blk.Hash, true, nil
}
} else {
return nil, false, nil
}
blk = parentBlk
}
return vs.ExecutionEngineCaller.GetTerminalBlockHash(ctx)
}
// activationEpochNotReached returns true if the activation epoch has not been reached.
@@ -270,18 +194,6 @@ func activationEpochNotReached(slot types.Slot) bool {
return false
}
func tDStringToUint256(td string) (*uint256.Int, error) {
b, err := hexutil.DecodeBig(td)
if err != nil {
return nil, err
}
i, overflows := uint256.FromBig(b)
if overflows {
return nil, errors.New("total difficulty overflowed")
}
return i, nil
}
func emptyPayload() *enginev1.ExecutionPayload {
return &enginev1.ExecutionPayload{
ParentHash: make([]byte, fieldparams.RootLength),

View File
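The removed tDStringToUint256 helper decodes a 0x-prefixed hex total difficulty and rejects values wider than 256 bits. A standalone sketch of the same conversion, for reference:

package main

import (
	"errors"
	"fmt"

	"github.com/ethereum/go-ethereum/common/hexutil"
	"github.com/holiman/uint256"
)

// hexTDToUint256 mirrors the removed helper: DecodeBig enforces the 0x
// prefix and the 256-bit bound, FromBig reports any remaining overflow.
func hexTDToUint256(td string) (*uint256.Int, error) {
	b, err := hexutil.DecodeBig(td)
	if err != nil {
		return nil, err
	}
	i, overflows := uint256.FromBig(b)
	if overflows {
		return nil, errors.New("total difficulty overflowed")
	}
	return i, nil
}

func main() {
	i, err := hexTDToUint256("0x10000")
	fmt.Println(i.Uint64(), err) // 65536 <nil>
}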

@@ -6,7 +6,6 @@ import (
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/holiman/uint256"
types "github.com/prysmaticlabs/eth2-types"
chainMock "github.com/prysmaticlabs/prysm/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/beacon-chain/cache"
@@ -22,26 +21,6 @@ import (
"github.com/prysmaticlabs/prysm/testing/util"
)
func Test_tDStringToUint256(t *testing.T) {
i, err := tDStringToUint256("0x0")
require.NoError(t, err)
require.DeepEqual(t, uint256.NewInt(0), i)
i, err = tDStringToUint256("0x10000")
require.NoError(t, err)
require.DeepEqual(t, uint256.NewInt(65536), i)
_, err = tDStringToUint256("100")
require.ErrorContains(t, "hex string without 0x prefix", err)
_, err = tDStringToUint256("0xzzzzzz")
require.ErrorContains(t, "invalid hex string", err)
_, err = tDStringToUint256("0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF" +
"FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF")
require.ErrorContains(t, "hex number > 256 bits", err)
}
func TestServer_activationEpochNotReached(t *testing.T) {
require.Equal(t, false, activationEpochNotReached(0))
@@ -154,137 +133,6 @@ func TestServer_getExecutionPayload(t *testing.T) {
}
}
func TestServer_getPowBlockHashAtTerminalTotalDifficulty(t *testing.T) {
tests := []struct {
name string
paramsTd string
currentPowBlock *pb.ExecutionBlock
parentPowBlock *pb.ExecutionBlock
errLatestExecutionBlk error
wantTerminalBlockHash []byte
wantExists bool
errString string
}{
{
name: "config td overflows",
paramsTd: "1115792089237316195423570985008687907853269984665640564039457584007913129638912",
errString: "could not convert terminal total difficulty to uint256",
},
{
name: "could not get latest execution block",
paramsTd: "1",
errLatestExecutionBlk: errors.New("blah"),
errString: "could not get latest execution block",
},
{
name: "nil latest execution block",
paramsTd: "1",
errString: "latest execution block is nil",
},
{
name: "current execution block invalid TD",
paramsTd: "1",
currentPowBlock: &pb.ExecutionBlock{
Hash: []byte{'a'},
TotalDifficulty: "1115792089237316195423570985008687907853269984665640564039457584007913129638912",
},
errString: "could not convert total difficulty to uint256",
},
{
name: "current execution block has zero hash parent",
paramsTd: "2",
currentPowBlock: &pb.ExecutionBlock{
Hash: []byte{'a'},
ParentHash: params.BeaconConfig().ZeroHash[:],
TotalDifficulty: "0x3",
},
},
{
name: "could not get parent block",
paramsTd: "2",
currentPowBlock: &pb.ExecutionBlock{
Hash: []byte{'a'},
ParentHash: []byte{'b'},
TotalDifficulty: "0x3",
},
errString: "could not get parent execution block",
},
{
name: "parent execution block invalid TD",
paramsTd: "2",
currentPowBlock: &pb.ExecutionBlock{
Hash: []byte{'a'},
ParentHash: []byte{'b'},
TotalDifficulty: "0x3",
},
parentPowBlock: &pb.ExecutionBlock{
Hash: []byte{'b'},
ParentHash: []byte{'c'},
TotalDifficulty: "1",
},
errString: "could not convert total difficulty to uint256",
},
{
name: "happy case",
paramsTd: "2",
currentPowBlock: &pb.ExecutionBlock{
Hash: []byte{'a'},
ParentHash: []byte{'b'},
TotalDifficulty: "0x3",
},
parentPowBlock: &pb.ExecutionBlock{
Hash: []byte{'b'},
ParentHash: []byte{'c'},
TotalDifficulty: "0x1",
},
wantExists: true,
wantTerminalBlockHash: []byte{'a'},
},
{
name: "ttd not reached",
paramsTd: "3",
currentPowBlock: &pb.ExecutionBlock{
Hash: []byte{'a'},
ParentHash: []byte{'b'},
TotalDifficulty: "0x2",
},
parentPowBlock: &pb.ExecutionBlock{
Hash: []byte{'b'},
ParentHash: []byte{'c'},
TotalDifficulty: "0x1",
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
cfg := params.BeaconConfig()
cfg.TerminalTotalDifficulty = tt.paramsTd
params.OverrideBeaconConfig(cfg)
var m map[[32]byte]*pb.ExecutionBlock
if tt.parentPowBlock != nil {
m = map[[32]byte]*pb.ExecutionBlock{
bytesutil.ToBytes32(tt.parentPowBlock.Hash): tt.parentPowBlock,
}
}
vs := &Server{
ExecutionEngineCaller: &powtesting.EngineClient{
ErrLatestExecBlock: tt.errLatestExecutionBlk,
ExecutionBlock: tt.currentPowBlock,
BlockByHashMap: m,
},
}
b, e, err := vs.getPowBlockHashAtTerminalTotalDifficulty(context.Background())
if tt.errString != "" {
require.ErrorContains(t, tt.errString, err)
} else {
require.NoError(t, err)
require.DeepEqual(t, tt.wantExists, e)
require.DeepEqual(t, tt.wantTerminalBlockHash, b)
}
})
}
}
func TestServer_getTerminalBlockHashIfExists(t *testing.T) {
tests := []struct {
name string

View File

@@ -116,21 +116,14 @@ type Config struct {
// be registered into a running beacon node.
func NewService(ctx context.Context, cfg *Config) *Service {
ctx, cancel := context.WithCancel(ctx)
return &Service{
s := &Service{
cfg: cfg,
ctx: ctx,
cancel: cancel,
incomingAttestation: make(chan *ethpbv1alpha1.Attestation, params.BeaconConfig().DefaultBufferSize),
connectedRPCClients: make(map[net.Addr]bool),
}
}
// paranoid build time check to ensure ChainInfoFetcher implements required interfaces
var _ stategen.CanonicalChecker = blockchain.ChainInfoFetcher(nil)
var _ stategen.CurrentSlotter = blockchain.ChainInfoFetcher(nil)
// Start the gRPC server.
func (s *Service) Start() {
address := fmt.Sprintf("%s:%s", s.cfg.Host, s.cfg.Port)
lis, err := net.Listen("tcp", address)
if err != nil {
@@ -159,7 +152,6 @@ func (s *Service) Start() {
)),
grpc.MaxRecvMsgSize(s.cfg.MaxMsgSize),
}
grpc_prometheus.EnableHandlingTimeHistogram()
if s.cfg.CertFlag != "" && s.cfg.KeyFlag != "" {
creds, err := credentials.NewServerTLSFromFile(s.cfg.CertFlag, s.cfg.KeyFlag)
if err != nil {
@@ -173,6 +165,17 @@ func (s *Service) Start() {
}
s.grpcServer = grpc.NewServer(opts...)
return s
}
// paranoid build time check to ensure ChainInfoFetcher implements required interfaces
var _ stategen.CanonicalChecker = blockchain.ChainInfoFetcher(nil)
var _ stategen.CurrentSlotter = blockchain.ChainInfoFetcher(nil)
// Start the gRPC server.
func (s *Service) Start() {
grpc_prometheus.EnableHandlingTimeHistogram()
var stateCache stategen.CachedGetter
if s.cfg.StateGen != nil {
stateCache = s.cfg.StateGen.CombinedCache()

View File
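The service refactor above moves all fallible, blocking work out of the constructor: NewService only wires fields and returns the value, while Start opens the listener and serves. A minimal sketch of that separation with a hypothetical service:

package main

import (
	"fmt"
	"net"
)

// svc follows the constructor/Start split: no I/O in the constructor.
type svc struct {
	addr string
	lis  net.Listener
}

func newSvc(addr string) *svc {
	return &svc{addr: addr} // field wiring only, cannot fail
}

func (s *svc) start() error {
	lis, err := net.Listen("tcp", s.addr) // fallible work deferred to Start
	if err != nil {
		return err
	}
	s.lis = lis
	return nil
}

func main() {
	s := newSvc("127.0.0.1:0")
	if err := s.start(); err != nil {
		fmt.Println("start failed:", err)
		return
	}
	fmt.Println("listening on", s.lis.Addr())
	_ = s.lis.Close()
}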

@@ -56,7 +56,7 @@ func NewFieldTrie(field types.FieldIndex, dataType types.DataType, elements inte
reference: stateutil.NewRef(1),
RWMutex: new(sync.RWMutex),
length: length,
numOfElems: reflect.Indirect(reflect.ValueOf(elements)).Len(),
numOfElems: retrieveLength(elements),
}, nil
case types.CompositeArray, types.CompressedArray:
return &FieldTrie{
@@ -66,7 +66,7 @@ func NewFieldTrie(field types.FieldIndex, dataType types.DataType, elements inte
reference: stateutil.NewRef(1),
RWMutex: new(sync.RWMutex),
length: length,
numOfElems: reflect.Indirect(reflect.ValueOf(elements)).Len(),
numOfElems: retrieveLength(elements),
}, nil
default:
return nil, errors.Errorf("unrecognized data type in field map: %v", reflect.TypeOf(dataType).Name())
@@ -97,14 +97,14 @@ func (f *FieldTrie) RecomputeTrie(indices []uint64, elements interface{}) ([32]b
if err != nil {
return [32]byte{}, err
}
f.numOfElems = reflect.Indirect(reflect.ValueOf(elements)).Len()
f.numOfElems = retrieveLength(elements)
return fieldRoot, nil
case types.CompositeArray:
fieldRoot, f.fieldLayers, err = stateutil.RecomputeFromLayerVariable(fieldRoots, indices, f.fieldLayers)
if err != nil {
return [32]byte{}, err
}
f.numOfElems = reflect.Indirect(reflect.ValueOf(elements)).Len()
f.numOfElems = retrieveLength(elements)
return stateutil.AddInMixin(fieldRoot, uint64(len(f.fieldLayers[0])))
case types.CompressedArray:
numOfElems, err := f.field.ElemsInChunk()
@@ -133,7 +133,7 @@ func (f *FieldTrie) RecomputeTrie(indices []uint64, elements interface{}) ([32]b
if err != nil {
return [32]byte{}, err
}
f.numOfElems = reflect.Indirect(reflect.ValueOf(elements)).Len()
f.numOfElems = retrieveLength(elements)
return stateutil.AddInMixin(fieldRoot, uint64(f.numOfElems))
default:
return [32]byte{}, errors.Errorf("unrecognized data type in field map: %v", reflect.TypeOf(f.dataType).Name())

View File

@@ -57,9 +57,11 @@ func validateElements(field types.FieldIndex, dataType types.DataType, elements
}
length *= comLength
}
val := reflect.Indirect(reflect.ValueOf(elements))
if uint64(val.Len()) > length {
return errors.Errorf("elements length is larger than expected for field %s: %d > %d", field.String(version.Phase0), val.Len(), length)
elemLen := retrieveLength(elements)
castedLen := int(length) // lint:ignore uintcast -- length is derived from fixed field parameters and fits in an int
if elemLen > castedLen {
return errors.Errorf("elements length is larger than expected for field %s: %d > %d", field.String(version.Phase0), elemLen, length)
}
return nil
}
@@ -72,7 +74,7 @@ func fieldConverters(field types.FieldIndex, indices []uint64, elements interfac
case [][]byte:
return handleByteArrays(val, indices, convertAll)
case *customtypes.BlockRoots:
return handle32ByteArrays(val[:], indices, convertAll)
return handleIndexer(val, indices, convertAll)
default:
return nil, errors.Errorf("Incorrect type used for block roots")
}
@@ -90,7 +92,7 @@ func fieldConverters(field types.FieldIndex, indices []uint64, elements interfac
case [][]byte:
return handleByteArrays(val, indices, convertAll)
case *customtypes.RandaoMixes:
return handle32ByteArrays(val[:], indices, convertAll)
return handleIndexer(val, indices, convertAll)
default:
return nil, errors.Errorf("Incorrect type used for randao mixes")
}
@@ -182,6 +184,34 @@ func handle32ByteArrays(val [][32]byte, indices []uint64, convertAll bool) ([][3
return roots, nil
}
// handleIndexer computes and returns 32 byte arrays in a slice of root format, reading each root through the Indexer interface.
func handleIndexer(indexer customtypes.Indexer, indices []uint64, convertAll bool) ([][32]byte, error) {
length := len(indices)
totalLength := indexer.TotalLength()
if convertAll {
length = int(totalLength) // lint:ignore uintcast -- total length is a fixed field parameter and fits in an int
}
roots := make([][32]byte, 0, length)
rootCreator := func(input [32]byte) {
roots = append(roots, input)
}
if convertAll {
for i := uint64(0); i < uint64(length); i++ {
rootCreator(indexer.RootAtIndex(i))
}
return roots, nil
}
if totalLength > 0 {
for _, idx := range indices {
if idx > totalLength-1 {
return nil, fmt.Errorf("index %d greater than number of byte arrays %d", idx, totalLength)
}
rootCreator(indexer.RootAtIndex(idx))
}
}
return roots, nil
}
// handleValidatorSlice returns the validator indices in a slice of root format.
func handleValidatorSlice(val []*ethpb.Validator, indices []uint64, convertAll bool) ([][32]byte, error) {
length := len(indices)
@@ -348,3 +378,17 @@ func handleBalanceSlice(val, indices []uint64, convertAll bool) ([][32]byte, err
}
return [][32]byte{}, nil
}
func retrieveLength(elements interface{}) int {
elemLen := 0
elemVal := reflect.ValueOf(elements)
if reflect.Indirect(elemVal).Kind() == reflect.Struct {
meth := elemVal.MethodByName("TotalLength")
ret := meth.Call([]reflect.Value{})
elemLen = int(ret[0].Uint()) // lint:ignore uintcast -- TotalLength is a fixed field parameter and fits in an int
} else {
val := reflect.Indirect(elemVal)
elemLen = val.Len()
}
return elemLen
}

View File
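retrieveLength above accepts either a plain slice or a struct exposing TotalLength(), and handleIndexer reads roots through the new Indexer interface rather than copying the backing array. A sketch of the reflection dispatch, assuming a hypothetical roots type:

package main

import (
	"fmt"
	"reflect"
)

// roots is a hypothetical Indexer-style type: TotalLength reports the
// fixed capacity instead of a slice length.
type roots struct{ n uint64 }

func (r *roots) TotalLength() uint64 { return r.n }

// length mirrors retrieveLength: structs are asked via TotalLength,
// everything else falls back to reflect's Len.
func length(elements interface{}) int {
	v := reflect.ValueOf(elements)
	if reflect.Indirect(v).Kind() == reflect.Struct {
		ret := v.MethodByName("TotalLength").Call(nil)
		return int(ret[0].Uint())
	}
	return reflect.Indirect(v).Len()
}

func main() {
	fmt.Println(length([][]byte{{1}, {2}})) // 2, via reflect Len
	fmt.Println(length(&roots{n: 8192}))    // 8192, via TotalLength
}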

@@ -12,6 +12,7 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/beacon-chain/state/state-native/custom-types",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/state/stateutil:go_default_library",
"//config/fieldparams:go_default_library",
"@com_github_ferranbt_fastssz//:go_default_library",
],

View File

@@ -2,28 +2,162 @@ package customtypes
import (
"fmt"
"reflect"
"runtime"
"sort"
"sync"
"unsafe"
fssz "github.com/ferranbt/fastssz"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stateutil"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
)
var _ fssz.HashRoot = (BlockRoots)([fieldparams.BlockRootsLength][32]byte{})
var _ fssz.HashRoot = (*BlockRoots)(nil)
var _ fssz.Marshaler = (*BlockRoots)(nil)
var _ fssz.Unmarshaler = (*BlockRoots)(nil)
type Indexer interface {
RootAtIndex(idx uint64) [32]byte
TotalLength() uint64
}
// BlockRoots represents block roots of the beacon state.
type BlockRoots [fieldparams.BlockRootsLength][32]byte
type BlockRoots struct {
baseArray *baseArrayBlockRoots
fieldJournal map[uint64][32]byte
generation uint64
*stateutil.Reference
}
type baseArrayBlockRoots struct {
baseArray *[fieldparams.BlockRootsLength][32]byte
descendantMap map[uint64][]uintptr
*sync.RWMutex
*stateutil.Reference
}
type sorter struct {
objs [][]uintptr
generations []uint64
}
func (s sorter) Len() int {
return len(s.generations)
}
func (s sorter) Swap(i, j int) {
s.objs[i], s.objs[j] = s.objs[j], s.objs[i]
s.generations[i], s.generations[j] = s.generations[j], s.generations[i]
}
func (s sorter) Less(i, j int) bool {
return s.generations[i] < s.generations[j]
}
func (b *baseArrayBlockRoots) RootAtIndex(idx uint64) [32]byte {
b.RWMutex.RLock()
defer b.RWMutex.RUnlock()
return b.baseArray[idx]
}
func (b *baseArrayBlockRoots) TotalLength() uint64 {
return fieldparams.BlockRootsLength
}
func (b *baseArrayBlockRoots) addGeneration(generation uint64, descendant uintptr) {
b.RWMutex.Lock()
defer b.RWMutex.Unlock()
b.descendantMap[generation] = append(b.descendantMap[generation], descendant)
}
func (b *baseArrayBlockRoots) removeGeneration(generation uint64, descendant uintptr) {
b.RWMutex.Lock()
defer b.RWMutex.Unlock()
ptrVals := b.descendantMap[generation]
newVals := []uintptr{}
for _, v := range ptrVals {
if v == descendant {
continue
}
newVals = append(newVals, v)
}
b.descendantMap[generation] = newVals
}
func (b *baseArrayBlockRoots) numOfDescendants() uint64 {
b.RWMutex.RLock()
defer b.RWMutex.RUnlock()
return uint64(len(b.descendantMap))
}
func (b *baseArrayBlockRoots) cleanUp() {
b.RWMutex.Lock()
defer b.RWMutex.Unlock()
fmt.Printf("\n cleaning up block roots %d \n ", len(b.descendantMap))
listOfObjs := [][]uintptr{}
generations := []uint64{}
for g, objs := range b.descendantMap {
generations = append(generations, g)
listOfObjs = append(listOfObjs, objs)
}
sortedObj := sorter{
objs: listOfObjs,
generations: generations,
}
sort.Sort(sortedObj)
lastReferencedGen := 0
lastReferencedIdx := 0
lastRefPointer := 0
for i, g := range sortedObj.generations {
for j, o := range sortedObj.objs[i] {
x := (*BlockRoots)(unsafe.Pointer(o))
if x == nil {
continue
}
lastReferencedGen = int(g) // lint:ignore uintcast -- generation counters stay well below the int limit
lastReferencedIdx = i
lastRefPointer = j
break
}
if lastReferencedGen != 0 {
break
}
}
fmt.Printf("\n block root map %d, %d, %d \n ", lastReferencedGen, lastRefrencedIdx, lastRefPointer)
br := (*BlockRoots)(unsafe.Pointer(sortedObj.objs[lastRefrencedIdx][lastRefPointer]))
for k, v := range br.fieldJournal {
b.baseArray[k] = v
}
sortedObj.generations = sortedObj.generations[lastReferencedIdx:]
sortedObj.objs = sortedObj.objs[lastReferencedIdx:]
newMap := make(map[uint64][]uintptr)
for i, g := range sortedObj.generations {
newMap[g] = sortedObj.objs[i]
}
b.descendantMap = newMap
}
// HashTreeRoot returns calculated hash root.
func (r BlockRoots) HashTreeRoot() ([32]byte, error) {
func (r *BlockRoots) HashTreeRoot() ([32]byte, error) {
return fssz.HashWithDefaultHasher(r)
}
// HashTreeRootWith hashes a BlockRoots object with a Hasher from the default HasherPool.
func (r BlockRoots) HashTreeRootWith(hh *fssz.Hasher) error {
func (r *BlockRoots) HashTreeRootWith(hh *fssz.Hasher) error {
index := hh.Index()
for _, sRoot := range r {
hh.Append(sRoot[:])
for i := uint64(0); i < r.baseArray.TotalLength(); i++ {
if val, ok := r.fieldJournal[i]; ok {
hh.Append(val[:])
continue
}
rt := r.baseArray.RootAtIndex(i)
hh.Append(rt[:])
}
hh.Merkleize(index)
return nil
@@ -34,12 +168,13 @@ func (r *BlockRoots) UnmarshalSSZ(buf []byte) error {
if len(buf) != r.SizeSSZ() {
return fmt.Errorf("expected buffer of length %d received %d", r.SizeSSZ(), len(buf))
}
r.baseArray.Lock()
defer r.baseArray.Unlock()
var roots BlockRoots
for i := range roots {
copy(roots[i][:], buf[i*32:(i+1)*32])
for i := range r.baseArray.baseArray {
copy(r.baseArray.baseArray[i][:], buf[i*32:(i+1)*32])
}
*r = roots
return nil
}
@@ -55,10 +190,13 @@ func (r *BlockRoots) MarshalSSZTo(dst []byte) ([]byte, error) {
// MarshalSSZ marshals BlockRoots into a serialized object.
func (r *BlockRoots) MarshalSSZ() ([]byte, error) {
marshalled := make([]byte, fieldparams.BlockRootsLength*32)
for i, r32 := range r {
for j, rr := range r32 {
marshalled[i*32+j] = rr
for i := uint64(0); i < r.baseArray.TotalLength(); i++ {
if val, ok := r.fieldJournal[i]; ok {
copy(marshalled[i*32:], val[:])
continue
}
rt := r.baseArray.RootAtIndex(i)
copy(marshalled[i*32:], rt[:])
}
return marshalled, nil
}
@@ -73,10 +211,152 @@ func (r *BlockRoots) Slice() [][]byte {
if r == nil {
return nil
}
bRoots := make([][]byte, len(r))
for i, root := range r {
tmp := root
bRoots[i] = tmp[:]
bRoots := make([][]byte, r.baseArray.TotalLength())
for i := uint64(0); i < r.baseArray.TotalLength(); i++ {
if val, ok := r.fieldJournal[i]; ok {
bRoots[i] = val[:]
continue
}
rt := r.baseArray.RootAtIndex(i)
bRoots[i] = rt[:]
}
return bRoots
}
// Array converts a customtypes.BlockRoots object into a fixed-size array of 32-byte roots.
func (r *BlockRoots) Array() [fieldparams.BlockRootsLength][32]byte {
if r == nil {
return [fieldparams.BlockRootsLength][32]byte{}
}
bRoots := [fieldparams.BlockRootsLength][32]byte{}
for i := uint64(0); i < r.baseArray.TotalLength(); i++ {
if val, ok := r.fieldJournal[i]; ok {
bRoots[i] = val
continue
}
rt := r.baseArray.RootAtIndex(i)
bRoots[i] = rt
}
return bRoots
}
func SetFromSlice(slice [][]byte) *BlockRoots {
br := &BlockRoots{
baseArray: &baseArrayBlockRoots{
baseArray: new([fieldparams.BlockRootsLength][32]byte),
descendantMap: map[uint64][]uintptr{},
RWMutex: new(sync.RWMutex),
Reference: stateutil.NewRef(1),
},
fieldJournal: map[uint64][32]byte{},
Reference: stateutil.NewRef(1),
}
for i, rt := range slice {
copy(br.baseArray.baseArray[i][:], rt)
}
runtime.SetFinalizer(br, blockRootsFinalizer)
return br
}
func (r *BlockRoots) SetFromBaseField(field [fieldparams.BlockRootsLength][32]byte) {
r.baseArray = &baseArrayBlockRoots{
baseArray: &field,
descendantMap: map[uint64][]uintptr{},
RWMutex: new(sync.RWMutex),
Reference: stateutil.NewRef(1),
}
r.fieldJournal = map[uint64][32]byte{}
r.Reference = stateutil.NewRef(1)
r.baseArray.addGeneration(0, reflect.ValueOf(r).Pointer())
runtime.SetFinalizer(r, blockRootsFinalizer)
}
func (r *BlockRoots) RootAtIndex(idx uint64) [32]byte {
if val, ok := r.fieldJournal[idx]; ok {
return val
}
return r.baseArray.RootAtIndex(idx)
}
func (r *BlockRoots) SetRootAtIndex(idx uint64, val [32]byte) {
if r.Refs() <= 1 && r.baseArray.Refs() <= 1 {
r.baseArray.Lock()
r.baseArray.baseArray[idx] = val
r.baseArray.Unlock()
return
}
if r.Refs() <= 1 {
r.fieldJournal[idx] = val
r.baseArray.removeGeneration(r.generation, reflect.ValueOf(r).Pointer())
r.generation++
r.baseArray.addGeneration(r.generation, reflect.ValueOf(r).Pointer())
return
}
newJournal := make(map[uint64][32]byte)
for k, val := range r.fieldJournal {
newJournal[k] = val
}
r.fieldJournal = newJournal
r.MinusRef()
r.Reference = stateutil.NewRef(1)
r.fieldJournal[idx] = val
r.baseArray.removeGeneration(r.generation, reflect.ValueOf(r).Pointer())
r.generation++
r.baseArray.addGeneration(r.generation, reflect.ValueOf(r).Pointer())
}
func (r *BlockRoots) Copy() *BlockRoots {
r.baseArray.AddRef()
r.Reference.AddRef()
br := &BlockRoots{
baseArray: r.baseArray,
fieldJournal: r.fieldJournal,
Reference: r.Reference,
generation: r.generation,
}
r.baseArray.addGeneration(r.generation, reflect.ValueOf(br).Pointer())
if r.baseArray.numOfDescendants() > 20 {
r.baseArray.cleanUp()
}
runtime.SetFinalizer(br, blockRootsFinalizer)
return br
}
func (r *BlockRoots) TotalLength() uint64 {
return fieldparams.BlockRootsLength
}
func (r *BlockRoots) IncreaseRef() {
r.Reference.AddRef()
r.baseArray.Reference.AddRef()
}
func (r *BlockRoots) DecreaseRef() {
r.Reference.MinusRef()
r.baseArray.Reference.MinusRef()
}
func blockRootsFinalizer(br *BlockRoots) {
br.baseArray.Lock()
defer br.baseArray.Unlock()
ptrVal := reflect.ValueOf(br).Pointer()
vals, ok := br.baseArray.descendantMap[br.generation]
if !ok {
return
}
exists := false
wantedVals := []uintptr{}
for _, v := range vals {
if v == ptrVal {
exists = true
continue
}
newV := v
wantedVals = append(wantedVals, newV)
}
if !exists {
return
}
br.baseArray.descendantMap[br.generation] = wantedVals
}

View File
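To make the copy-on-write mechanics above concrete: Copy shares the base array and journal while bumping both reference counters, and a write on a shared copy detaches its journal instead of mutating the base array; the same journal scheme is reused for RandaoMixes further below. A stripped-down, hypothetical model (generations, finalizers and descendant cleanup omitted):

package main

import "fmt"

// cowRoots is a hypothetical reduction of the BlockRoots type above: a base
// array shared between copies, a per-copy journal of overrides, and two
// counters standing in for the stateutil.Reference pair.
type cowRoots struct {
	base     *[4][32]byte        // shared among copies
	baseRefs *int                // how many copies share base
	journal  map[uint64][32]byte // per-copy overrides
	refs     *int                // how many copies share this journal
}

func newCOW() *cowRoots {
	r, br := 1, 1
	return &cowRoots{base: new([4][32]byte), baseRefs: &br, journal: map[uint64][32]byte{}, refs: &r}
}

func (c *cowRoots) copy() *cowRoots {
	*c.refs++
	*c.baseRefs++
	return &cowRoots{base: c.base, baseRefs: c.baseRefs, journal: c.journal, refs: c.refs}
}

func (c *cowRoots) set(idx uint64, val [32]byte) {
	if *c.refs <= 1 && *c.baseRefs <= 1 {
		c.base[idx] = val // sole owner: write through to the base array
		return
	}
	if *c.refs <= 1 {
		c.journal[idx] = val // journal is ours alone: record the override
		return
	}
	nj := make(map[uint64][32]byte, len(c.journal)+1) // journal is shared: detach first
	for k, v := range c.journal {
		nj[k] = v
	}
	nj[idx] = val
	*c.refs--
	one := 1
	c.journal, c.refs = nj, &one
}

func (c *cowRoots) at(idx uint64) [32]byte {
	if v, ok := c.journal[idx]; ok {
		return v
	}
	return c.base[idx]
}

func main() {
	a := newCOW()
	b := a.copy()
	b.set(0, [32]byte{'x'})             // journaled: a's view stays untouched
	fmt.Println(a.at(0)[0], b.at(0)[0]) // 0 120
}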

@@ -1,24 +1,25 @@
package customtypes
import (
"bytes"
"reflect"
"testing"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/testing/assert"
)
func TestBlockRoots_Casting(t *testing.T) {
var b [fieldparams.BlockRootsLength][32]byte
d := BlockRoots(b)
if !reflect.DeepEqual([fieldparams.BlockRootsLength][32]byte(d), b) {
t.Errorf("Unequal: %v = %v", d, b)
f := SetFromSlice([][]byte{})
f.SetFromBaseField(b)
if !reflect.DeepEqual(f.Array(), b) {
t.Errorf("Unequal: %v = %v", f.Array(), b)
}
}
func TestBlockRoots_UnmarshalSSZ(t *testing.T) {
t.Run("Ok", func(t *testing.T) {
d := BlockRoots{}
d := SetFromSlice([][]byte{})
var b [fieldparams.BlockRootsLength][32]byte
b[0] = [32]byte{'f', 'o', 'o'}
b[1] = [32]byte{'b', 'a', 'r'}
@@ -32,8 +33,8 @@ func TestBlockRoots_UnmarshalSSZ(t *testing.T) {
if err != nil {
t.Fatalf("Unexpected error: %v", err)
}
if !reflect.DeepEqual(b, [fieldparams.BlockRootsLength][32]byte(d)) {
t.Errorf("Unequal: %v = %v", b, [fieldparams.BlockRootsLength][32]byte(d))
if !reflect.DeepEqual(b, d.Array()) {
t.Errorf("Unequal: %v = %v", b, d.Array())
}
})
@@ -70,28 +71,47 @@ func TestBlockRoots_MarshalSSZTo(t *testing.T) {
}
func TestBlockRoots_MarshalSSZ(t *testing.T) {
d := BlockRoots{}
d[0] = [32]byte{'f', 'o', 'o'}
d[1] = [32]byte{'b', 'a', 'r'}
d := SetFromSlice([][]byte{})
d.IncreaseRef()
d.SetRootAtIndex(0, [32]byte{'f', 'o', 'o'})
d.IncreaseRef()
d.IncreaseRef()
d.SetRootAtIndex(1, [32]byte{'b', 'a', 'r'})
b, err := d.MarshalSSZ()
if err != nil {
t.Fatalf("Unexpected error: %v", err)
}
if !reflect.DeepEqual(d[0][:], b[0:32]) {
t.Errorf("Unequal: %v = %v", d[0], b[0:32])
rt := d.RootAtIndex(0)
if !reflect.DeepEqual(rt[:], b[0:32]) {
t.Errorf("Unequal: %v = %v", rt, b[0:32])
}
if !reflect.DeepEqual(d[1][:], b[32:64]) {
t.Errorf("Unequal: %v = %v", d[0], b[32:64])
rt = d.RootAtIndex(1)
if !reflect.DeepEqual(rt[:], b[32:64]) {
t.Errorf("Unequal: %v = %v", rt, b[32:64])
}
d2 := SetFromSlice([][]byte{})
err = d2.UnmarshalSSZ(b)
if err != nil {
t.Error(err)
}
res, err := d2.MarshalSSZ()
if err != nil {
t.Error(err)
}
if !bytes.Equal(res, b) {
t.Error("unequal")
}
}
func TestBlockRoots_SizeSSZ(t *testing.T) {
d := BlockRoots{}
d := SetFromSlice([][]byte{})
if d.SizeSSZ() != fieldparams.BlockRootsLength*32 {
t.Errorf("Wrong SSZ size. Expected %v vs actual %v", fieldparams.BlockRootsLength*32, d.SizeSSZ())
}
}
/*
func TestBlockRoots_Slice(t *testing.T) {
a, b, c := [32]byte{'a'}, [32]byte{'b'}, [32]byte{'c'}
roots := BlockRoots{}
@@ -103,3 +123,4 @@ func TestBlockRoots_Slice(t *testing.T) {
assert.DeepEqual(t, b[:], slice[10])
assert.DeepEqual(t, c[:], slice[100])
}
*/

View File

@@ -2,48 +2,77 @@ package customtypes
import (
"fmt"
"sync"
fssz "github.com/ferranbt/fastssz"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stateutil"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
)
var _ fssz.HashRoot = (RandaoMixes)([fieldparams.RandaoMixesLength][32]byte{})
var _ fssz.HashRoot = (*RandaoMixes)(nil)
var _ fssz.Marshaler = (*RandaoMixes)(nil)
var _ fssz.Unmarshaler = (*RandaoMixes)(nil)
// RandaoMixes represents RANDAO mixes of the beacon state.
type RandaoMixes [fieldparams.RandaoMixesLength][32]byte
// RandaoMixes represents RANDAO mixes of the beacon state.
type RandaoMixes struct {
baseArray *baseArrayRandaoMixes
fieldJournal map[uint64][32]byte
*stateutil.Reference
}
type baseArrayRandaoMixes struct {
baseArray *[fieldparams.RandaoMixesLength][32]byte
*sync.RWMutex
*stateutil.Reference
}
func (b *baseArrayRandaoMixes) RootAtIndex(idx uint64) [32]byte {
b.RWMutex.RLock()
defer b.RWMutex.RUnlock()
return b.baseArray[idx]
}
func (b *baseArrayRandaoMixes) TotalLength() uint64 {
return fieldparams.RandaoMixesLength
}
// HashTreeRoot returns calculated hash root.
func (r RandaoMixes) HashTreeRoot() ([32]byte, error) {
func (r *RandaoMixes) HashTreeRoot() ([32]byte, error) {
return fssz.HashWithDefaultHasher(r)
}
// HashTreeRootWith hashes a RandaoMixes object with a Hasher from the default HasherPool.
func (r RandaoMixes) HashTreeRootWith(hh *fssz.Hasher) error {
// HashTreeRootWith hashes a RandaoMixes object with a Hasher from the default HasherPool.
func (r *RandaoMixes) HashTreeRootWith(hh *fssz.Hasher) error {
index := hh.Index()
for _, sRoot := range r {
hh.Append(sRoot[:])
for i := uint64(0); i < r.baseArray.TotalLength(); i++ {
if val, ok := r.fieldJournal[i]; ok {
hh.Append(val[:])
continue
}
rt := r.baseArray.RootAtIndex(i)
hh.Append(rt[:])
}
hh.Merkleize(index)
return nil
}
// UnmarshalSSZ deserializes the provided bytes buffer into the RandaoMixes object.
// UnmarshalSSZ deserializes the provided bytes buffer into the RandaoMixes object.
func (r *RandaoMixes) UnmarshalSSZ(buf []byte) error {
if len(buf) != r.SizeSSZ() {
return fmt.Errorf("expected buffer of length %d received %d", r.SizeSSZ(), len(buf))
}
r.baseArray.Lock()
defer r.baseArray.Unlock()
var roots RandaoMixes
for i := range roots {
copy(roots[i][:], buf[i*32:(i+1)*32])
for i := range r.baseArray.baseArray {
copy(r.baseArray.baseArray[i][:], buf[i*32:(i+1)*32])
}
*r = roots
return nil
}
// MarshalSSZTo marshals RandaoMixes with the provided byte slice.
// MarshalSSZTo marshals RandaoMixes with the provided byte slice.
func (r *RandaoMixes) MarshalSSZTo(dst []byte) ([]byte, error) {
marshalled, err := r.MarshalSSZ()
if err != nil {
@@ -52,13 +81,16 @@ func (r *RandaoMixes) MarshalSSZTo(dst []byte) ([]byte, error) {
return append(dst, marshalled...), nil
}
// MarshalSSZ marshals RandaoMixes into a serialized object.
// MarshalSSZ marshals RandaoMixes into a serialized object.
func (r *RandaoMixes) MarshalSSZ() ([]byte, error) {
marshalled := make([]byte, fieldparams.RandaoMixesLength*32)
for i, r32 := range r {
for j, rr := range r32 {
marshalled[i*32+j] = rr
for i := uint64(0); i < r.baseArray.TotalLength(); i++ {
if val, ok := r.fieldJournal[i]; ok {
copy(marshalled[i*32:], val[:])
continue
}
rt := r.baseArray.RootAtIndex(i)
copy(marshalled[i*32:], rt[:])
}
return marshalled, nil
}
@@ -68,15 +100,90 @@ func (_ *RandaoMixes) SizeSSZ() int {
return fieldparams.RandaoMixesLength * 32
}
// Slice converts a customtypes.RandaoMixes object into a 2D byte slice.
// Slice converts a customtypes.RandaoMixes object into a 2D byte slice.
func (r *RandaoMixes) Slice() [][]byte {
if r == nil {
return nil
}
mixes := make([][]byte, len(r))
for i, root := range r {
tmp := root
mixes[i] = tmp[:]
bRoots := make([][]byte, r.baseArray.TotalLength())
for i := uint64(0); i < r.baseArray.TotalLength(); i++ {
if val, ok := r.fieldJournal[i]; ok {
bRoots[i] = val[:]
continue
}
rt := r.baseArray.RootAtIndex(i)
bRoots[i] = rt[:]
}
return mixes
return bRoots
}
func SetFromSliceRandao(slice [][]byte) *RandaoMixes {
br := &RandaoMixes{
baseArray: &baseArrayRandaoMixes{
baseArray: new([fieldparams.RandaoMixesLength][32]byte),
RWMutex: new(sync.RWMutex),
Reference: stateutil.NewRef(1),
},
fieldJournal: map[uint64][32]byte{},
Reference: stateutil.NewRef(1),
}
for i, rt := range slice {
copy(br.baseArray.baseArray[i][:], rt)
}
return br
}
func (r *RandaoMixes) SetFromBaseField(field [fieldparams.RandaoMixesLength][32]byte) {
// Mirror BlockRoots.SetFromBaseField: initialize references and the journal so a zero-value RandaoMixes is safe to mutate.
r.baseArray = &baseArrayRandaoMixes{baseArray: &field, RWMutex: new(sync.RWMutex), Reference: stateutil.NewRef(1)}
r.fieldJournal = map[uint64][32]byte{}
r.Reference = stateutil.NewRef(1)
}
func (r *RandaoMixes) RootAtIndex(idx uint64) [32]byte {
if val, ok := r.fieldJournal[idx]; ok {
return val
}
return r.baseArray.RootAtIndex(idx)
}
func (r *RandaoMixes) SetRootAtIndex(idx uint64, val [32]byte) {
if r.Refs() <= 1 && r.baseArray.Refs() <= 1 {
r.baseArray.baseArray[idx] = val
return
}
if r.Refs() <= 1 {
r.fieldJournal[idx] = val
return
}
newJournal := make(map[uint64][32]byte)
for k, val := range r.fieldJournal {
newJournal[k] = val
}
r.fieldJournal = newJournal
r.MinusRef()
r.Reference = stateutil.NewRef(1)
r.fieldJournal[idx] = val
}
func (r *RandaoMixes) Copy() *RandaoMixes {
r.baseArray.AddRef()
r.Reference.AddRef()
rm := &RandaoMixes{
baseArray: r.baseArray,
fieldJournal: r.fieldJournal,
Reference: r.Reference,
}
return rm
}
func (r *RandaoMixes) TotalLength() uint64 {
return fieldparams.RandaoMixesLength
}
func (r *RandaoMixes) IncreaseRef() {
r.Reference.AddRef()
r.baseArray.Reference.AddRef()
}
func (r *RandaoMixes) DecreaseRef() {
r.Reference.MinusRef()
r.baseArray.Reference.MinusRef()
}

View File

@@ -76,8 +76,8 @@ func (b *BeaconState) BlockRootAtIndex(idx uint64) ([]byte, error) {
// input index value.
// This assumes that a lock is already held on BeaconState.
func (b *BeaconState) blockRootAtIndex(idx uint64) ([32]byte, error) {
if uint64(len(b.blockRoots)) <= idx {
if b.blockRoots.TotalLength() <= idx {
return [32]byte{}, fmt.Errorf("index %d out of range", idx)
}
return b.blockRoots[idx], nil
return b.blockRoots.RootAtIndex(idx), nil
}

View File

@@ -37,10 +37,10 @@ func (b *BeaconState) RandaoMixAtIndex(idx uint64) ([]byte, error) {
// input index value.
// This assumes that a lock is already held on BeaconState.
func (b *BeaconState) randaoMixAtIndex(idx uint64) ([32]byte, error) {
if uint64(len(b.randaoMixes)) <= idx {
if b.randaoMixes.TotalLength() <= idx {
return [32]byte{}, fmt.Errorf("index %d out of range", idx)
}
return b.randaoMixes[idx], nil
return b.randaoMixes.RootAtIndex(idx), nil
}
// RandaoMixesLength returns the length of the randao mixes slice.
@@ -62,5 +62,5 @@ func (b *BeaconState) randaoMixesLength() int {
return 0
}
return len(b.randaoMixes)
return int(b.randaoMixes.TotalLength()) // lint:ignore uintcast -- RandaoMixesLength is a fixed parameter that fits in an int
}

View File

@@ -4,7 +4,6 @@ import (
"fmt"
customtypes "github.com/prysmaticlabs/prysm/beacon-chain/state/state-native/custom-types"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stateutil"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
)
@@ -25,14 +24,12 @@ func (b *BeaconState) SetBlockRoots(val [][]byte) error {
b.lock.Lock()
defer b.lock.Unlock()
b.sharedFieldReferences[blockRoots].MinusRef()
b.sharedFieldReferences[blockRoots] = stateutil.NewRef(1)
var rootsArr [fieldparams.BlockRootsLength][32]byte
for i := 0; i < len(rootsArr); i++ {
copy(rootsArr[i][:], val[i])
}
roots := customtypes.BlockRoots(rootsArr)
roots := customtypes.BlockRoots{}
roots.SetFromBaseField(rootsArr)
b.blockRoots = &roots
b.markFieldAsDirty(blockRoots)
b.rebuildTrie[blockRoots] = true
@@ -42,24 +39,13 @@ func (b *BeaconState) SetBlockRoots(val [][]byte) error {
// UpdateBlockRootAtIndex for the beacon state. Updates the block root
// at a specific index to a new value.
func (b *BeaconState) UpdateBlockRootAtIndex(idx uint64, blockRoot [32]byte) error {
if uint64(len(b.blockRoots)) <= idx {
if b.blockRoots.TotalLength() <= idx {
return fmt.Errorf("invalid index provided %d", idx)
}
b.lock.Lock()
defer b.lock.Unlock()
r := b.blockRoots
if ref := b.sharedFieldReferences[blockRoots]; ref.Refs() > 1 {
// Copy elements in underlying array by reference.
roots := *b.blockRoots
rootsCopy := roots
r = &rootsCopy
ref.MinusRef()
b.sharedFieldReferences[blockRoots] = stateutil.NewRef(1)
}
r[idx] = blockRoot
b.blockRoots = r
b.blockRoots.SetRootAtIndex(idx, blockRoot)
b.markFieldAsDirty(blockRoots)
b.addDirtyIndices(blockRoots, []uint64{idx})

View File

@@ -3,7 +3,6 @@ package v1
import (
"github.com/pkg/errors"
customtypes "github.com/prysmaticlabs/prysm/beacon-chain/state/state-native/custom-types"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stateutil"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
)
@@ -14,14 +13,12 @@ func (b *BeaconState) SetRandaoMixes(val [][]byte) error {
b.lock.Lock()
defer b.lock.Unlock()
b.sharedFieldReferences[randaoMixes].MinusRef()
b.sharedFieldReferences[randaoMixes] = stateutil.NewRef(1)
var mixesArr [fieldparams.RandaoMixesLength][32]byte
for i := 0; i < len(mixesArr); i++ {
copy(mixesArr[i][:], val[i])
}
mixes := customtypes.RandaoMixes(mixesArr)
mixes := customtypes.RandaoMixes{}
mixes.SetFromBaseField(mixesArr)
b.randaoMixes = &mixes
b.markFieldAsDirty(randaoMixes)
b.rebuildTrie[randaoMixes] = true
@@ -31,24 +28,13 @@ func (b *BeaconState) SetRandaoMixes(val [][]byte) error {
// UpdateRandaoMixesAtIndex for the beacon state. Updates the randao mixes
// at a specific index to a new value.
func (b *BeaconState) UpdateRandaoMixesAtIndex(idx uint64, val []byte) error {
if uint64(len(b.randaoMixes)) <= idx {
if b.randaoMixes.TotalLength() <= idx {
return errors.Errorf("invalid index provided %d", idx)
}
b.lock.Lock()
defer b.lock.Unlock()
mixes := b.randaoMixes
if refs := b.sharedFieldReferences[randaoMixes].Refs(); refs > 1 {
// Copy elements in underlying array by reference.
m := *b.randaoMixes
mCopy := m
mixes = &mCopy
b.sharedFieldReferences[randaoMixes].MinusRef()
b.sharedFieldReferences[randaoMixes] = stateutil.NewRef(1)
}
mixes[idx] = bytesutil.ToBytes32(val)
b.randaoMixes = mixes
b.randaoMixes.SetRootAtIndex(idx, bytesutil.ToBytes32(val))
b.markFieldAsDirty(randaoMixes)
b.addDirtyIndices(randaoMixes, []uint64{idx})

View File

@@ -36,10 +36,8 @@ func InitializeFromProtoUnsafe(st *ethpb.BeaconState) (state.BeaconState, error)
return nil, errors.New("received nil state")
}
var bRoots customtypes.BlockRoots
for i, r := range st.BlockRoots {
copy(bRoots[i][:], r)
}
bRoots := customtypes.SetFromSlice(st.BlockRoots)
var sRoots customtypes.StateRoots
for i, r := range st.StateRoots {
copy(sRoots[i][:], r)
@@ -48,10 +46,7 @@ func InitializeFromProtoUnsafe(st *ethpb.BeaconState) (state.BeaconState, error)
for i, r := range st.HistoricalRoots {
copy(hRoots[i][:], r)
}
var mixes customtypes.RandaoMixes
for i, m := range st.RandaoMixes {
copy(mixes[i][:], m)
}
mixes := customtypes.SetFromSliceRandao(st.RandaoMixes)
fieldCount := params.BeaconConfig().BeaconStateFieldCount
b := &BeaconState{
@@ -60,7 +55,7 @@ func InitializeFromProtoUnsafe(st *ethpb.BeaconState) (state.BeaconState, error)
slot: st.Slot,
fork: st.Fork,
latestBlockHeader: st.LatestBlockHeader,
blockRoots: &bRoots,
blockRoots: bRoots,
stateRoots: &sRoots,
historicalRoots: hRoots,
eth1Data: st.Eth1Data,
@@ -68,7 +63,7 @@ func InitializeFromProtoUnsafe(st *ethpb.BeaconState) (state.BeaconState, error)
eth1DepositIndex: st.Eth1DepositIndex,
validators: st.Validators,
balances: st.Balances,
randaoMixes: &mixes,
randaoMixes: mixes,
slashings: st.Slashings,
previousEpochAttestations: st.PreviousEpochAttestations,
currentEpochAttestations: st.CurrentEpochAttestations,
@@ -99,7 +94,6 @@ func InitializeFromProtoUnsafe(st *ethpb.BeaconState) (state.BeaconState, error)
// Initialize field reference tracking for shared data.
b.sharedFieldReferences[randaoMixes] = stateutil.NewRef(1)
b.sharedFieldReferences[stateRoots] = stateutil.NewRef(1)
b.sharedFieldReferences[blockRoots] = stateutil.NewRef(1)
b.sharedFieldReferences[previousEpochAttestations] = stateutil.NewRef(1)
b.sharedFieldReferences[currentEpochAttestations] = stateutil.NewRef(1)
b.sharedFieldReferences[slashings] = stateutil.NewRef(1)
@@ -127,9 +121,9 @@ func (b *BeaconState) Copy() state.BeaconState {
slashings: b.slashings,
// Large arrays, infrequently changed, constant size.
blockRoots: b.blockRoots,
blockRoots: b.blockRoots.Copy(),
stateRoots: b.stateRoots,
randaoMixes: b.randaoMixes,
randaoMixes: b.randaoMixes.Copy(),
previousEpochAttestations: b.previousEpochAttestations,
currentEpochAttestations: b.currentEpochAttestations,
eth1DataVotes: b.eth1DataVotes,
@@ -211,6 +205,9 @@ func (b *BeaconState) Copy() state.BeaconState {
}
}
b.blockRoots.MinusRef()
b.randaoMixes.MinusRef()
for i := 0; i < fieldCount; i++ {
field := types.FieldIndex(i)
delete(b.stateFieldLeaves, field)

View File

@@ -76,9 +76,8 @@ func (b *BeaconState) BlockRootAtIndex(idx uint64) ([]byte, error) {
// input index value.
// This assumes that a lock is already held on BeaconState.
func (b *BeaconState) blockRootAtIndex(idx uint64) ([32]byte, error) {
if uint64(len(b.blockRoots)) <= idx {
if b.blockRoots.TotalLength() <= idx {
return [32]byte{}, fmt.Errorf("index %d out of range", idx)
}
return b.blockRoots[idx], nil
return b.blockRoots.RootAtIndex(idx), nil
}

View File

@@ -37,11 +37,10 @@ func (b *BeaconState) RandaoMixAtIndex(idx uint64) ([]byte, error) {
// input index value.
// This assumes that a lock is already held on BeaconState.
func (b *BeaconState) randaoMixAtIndex(idx uint64) ([32]byte, error) {
if uint64(len(b.randaoMixes)) <= idx {
if b.randaoMixes.TotalLength() <= idx {
return [32]byte{}, fmt.Errorf("index %d out of range", idx)
}
return b.randaoMixes[idx], nil
return b.randaoMixes.RootAtIndex(idx), nil
}
// RandaoMixesLength returns the length of the randao mixes slice.
@@ -63,5 +62,5 @@ func (b *BeaconState) randaoMixesLength() int {
return 0
}
return len(b.randaoMixes)
return int(b.randaoMixes.TotalLength()) // lint:ignore uintcast -- RandaoMixesLength is a fixed parameter that fits in an int
}

View File

@@ -4,7 +4,6 @@ import (
"fmt"
customtypes "github.com/prysmaticlabs/prysm/beacon-chain/state/state-native/custom-types"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stateutil"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
)
@@ -25,14 +24,12 @@ func (b *BeaconState) SetBlockRoots(val [][]byte) error {
b.lock.Lock()
defer b.lock.Unlock()
b.sharedFieldReferences[blockRoots].MinusRef()
b.sharedFieldReferences[blockRoots] = stateutil.NewRef(1)
var rootsArr [fieldparams.BlockRootsLength][32]byte
for i := 0; i < len(rootsArr); i++ {
copy(rootsArr[i][:], val[i])
}
roots := customtypes.BlockRoots(rootsArr)
roots := customtypes.BlockRoots{}
roots.SetFromBaseField(rootsArr)
b.blockRoots = &roots
b.markFieldAsDirty(blockRoots)
b.rebuildTrie[blockRoots] = true
@@ -42,24 +39,13 @@ func (b *BeaconState) SetBlockRoots(val [][]byte) error {
// UpdateBlockRootAtIndex for the beacon state. Updates the block root
// at a specific index to a new value.
func (b *BeaconState) UpdateBlockRootAtIndex(idx uint64, blockRoot [32]byte) error {
if uint64(len(b.blockRoots)) <= idx {
if b.blockRoots.TotalLength() <= idx {
return fmt.Errorf("invalid index provided %d", idx)
}
b.lock.Lock()
defer b.lock.Unlock()
r := b.blockRoots
if ref := b.sharedFieldReferences[blockRoots]; ref.Refs() > 1 {
// Copy elements in underlying array by reference.
roots := *b.blockRoots
rootsCopy := roots
r = &rootsCopy
ref.MinusRef()
b.sharedFieldReferences[blockRoots] = stateutil.NewRef(1)
}
r[idx] = blockRoot
b.blockRoots = r
b.blockRoots.SetRootAtIndex(idx, blockRoot)
b.markFieldAsDirty(blockRoots)
b.addDirtyIndices(blockRoots, []uint64{idx})

View File

@@ -3,7 +3,6 @@ package v2
import (
"github.com/pkg/errors"
customtypes "github.com/prysmaticlabs/prysm/beacon-chain/state/state-native/custom-types"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stateutil"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
)
@@ -14,14 +13,12 @@ func (b *BeaconState) SetRandaoMixes(val [][]byte) error {
b.lock.Lock()
defer b.lock.Unlock()
b.sharedFieldReferences[randaoMixes].MinusRef()
b.sharedFieldReferences[randaoMixes] = stateutil.NewRef(1)
var mixesArr [fieldparams.RandaoMixesLength][32]byte
for i := 0; i < len(mixesArr); i++ {
copy(mixesArr[i][:], val[i])
}
mixes := customtypes.RandaoMixes(mixesArr)
mixes := customtypes.RandaoMixes{}
mixes.SetFromBaseField(mixesArr)
b.randaoMixes = &mixes
b.markFieldAsDirty(randaoMixes)
b.rebuildTrie[randaoMixes] = true
@@ -31,24 +28,13 @@ func (b *BeaconState) SetRandaoMixes(val [][]byte) error {
// UpdateRandaoMixesAtIndex for the beacon state. Updates the randao mixes
// at a specific index to a new value.
func (b *BeaconState) UpdateRandaoMixesAtIndex(idx uint64, val []byte) error {
if uint64(len(b.randaoMixes)) <= idx {
if b.randaoMixes.TotalLength() <= idx {
return errors.Errorf("invalid index provided %d", idx)
}
b.lock.Lock()
defer b.lock.Unlock()
mixes := b.randaoMixes
if refs := b.sharedFieldReferences[randaoMixes].Refs(); refs > 1 {
// Copy elements in underlying array by reference.
m := *b.randaoMixes
mCopy := m
mixes = &mCopy
b.sharedFieldReferences[randaoMixes].MinusRef()
b.sharedFieldReferences[randaoMixes] = stateutil.NewRef(1)
}
mixes[idx] = bytesutil.ToBytes32(val)
b.randaoMixes = mixes
b.randaoMixes.SetRootAtIndex(idx, bytesutil.ToBytes32(val))
b.markFieldAsDirty(randaoMixes)
b.addDirtyIndices(randaoMixes, []uint64{idx})

View File

@@ -35,10 +35,8 @@ func InitializeFromProtoUnsafe(st *ethpb.BeaconStateAltair) (*BeaconState, error
return nil, errors.New("received nil state")
}
var bRoots customtypes.BlockRoots
for i, r := range st.BlockRoots {
bRoots[i] = bytesutil.ToBytes32(r)
}
bRoots := customtypes.SetFromSlice(st.BlockRoots)
var sRoots customtypes.StateRoots
for i, r := range st.StateRoots {
sRoots[i] = bytesutil.ToBytes32(r)
@@ -47,10 +45,7 @@ func InitializeFromProtoUnsafe(st *ethpb.BeaconStateAltair) (*BeaconState, error
for i, r := range st.HistoricalRoots {
hRoots[i] = bytesutil.ToBytes32(r)
}
var mixes customtypes.RandaoMixes
for i, m := range st.RandaoMixes {
mixes[i] = bytesutil.ToBytes32(m)
}
mixes := customtypes.SetFromSliceRandao(st.RandaoMixes)
fieldCount := params.BeaconConfig().BeaconStateAltairFieldCount
b := &BeaconState{
@@ -59,7 +54,7 @@ func InitializeFromProtoUnsafe(st *ethpb.BeaconStateAltair) (*BeaconState, error
slot: st.Slot,
fork: st.Fork,
latestBlockHeader: st.LatestBlockHeader,
blockRoots: &bRoots,
blockRoots: bRoots,
stateRoots: &sRoots,
historicalRoots: hRoots,
eth1Data: st.Eth1Data,
@@ -67,7 +62,7 @@ func InitializeFromProtoUnsafe(st *ethpb.BeaconStateAltair) (*BeaconState, error
eth1DepositIndex: st.Eth1DepositIndex,
validators: st.Validators,
balances: st.Balances,
randaoMixes: &mixes,
randaoMixes: mixes,
slashings: st.Slashings,
previousEpochParticipation: st.PreviousEpochParticipation,
currentEpochParticipation: st.CurrentEpochParticipation,
@@ -101,7 +96,6 @@ func InitializeFromProtoUnsafe(st *ethpb.BeaconStateAltair) (*BeaconState, error
// Initialize field reference tracking for shared data.
b.sharedFieldReferences[randaoMixes] = stateutil.NewRef(1)
b.sharedFieldReferences[stateRoots] = stateutil.NewRef(1)
b.sharedFieldReferences[blockRoots] = stateutil.NewRef(1)
b.sharedFieldReferences[previousEpochParticipationBits] = stateutil.NewRef(1) // New in Altair.
b.sharedFieldReferences[currentEpochParticipationBits] = stateutil.NewRef(1) // New in Altair.
b.sharedFieldReferences[slashings] = stateutil.NewRef(1)
@@ -128,9 +122,9 @@ func (b *BeaconState) Copy() state.BeaconState {
eth1DepositIndex: b.eth1DepositIndex,
// Large arrays, infrequently changed, constant size.
blockRoots: b.blockRoots,
blockRoots: b.blockRoots.Copy(),
stateRoots: b.stateRoots,
randaoMixes: b.randaoMixes,
randaoMixes: b.randaoMixes.Copy(),
slashings: b.slashings,
eth1DataVotes: b.eth1DataVotes,
@@ -215,6 +209,8 @@ func (b *BeaconState) Copy() state.BeaconState {
b.stateFieldLeaves[field].FieldReference().MinusRef()
}
}
b.blockRoots.DecreaseRef()
b.randaoMixes.DecreaseRef()
for i := 0; i < fieldCount; i++ {
field := types.FieldIndex(i)
delete(b.stateFieldLeaves, field)

View File

@@ -76,9 +76,8 @@ func (b *BeaconState) BlockRootAtIndex(idx uint64) ([]byte, error) {
// input index value.
// This assumes that a lock is already held on BeaconState.
func (b *BeaconState) blockRootAtIndex(idx uint64) ([32]byte, error) {
if uint64(len(b.blockRoots)) <= idx {
if b.blockRoots.TotalLength() <= idx {
return [32]byte{}, fmt.Errorf("index %d out of range", idx)
}
return b.blockRoots[idx], nil
return b.blockRoots.RootAtIndex(idx), nil
}

View File

@@ -37,11 +37,10 @@ func (b *BeaconState) RandaoMixAtIndex(idx uint64) ([]byte, error) {
// input index value.
// This assumes that a lock is already held on BeaconState.
func (b *BeaconState) randaoMixAtIndex(idx uint64) ([32]byte, error) {
if uint64(len(b.randaoMixes)) <= idx {
if b.randaoMixes.TotalLength() <= idx {
return [32]byte{}, fmt.Errorf("index %d out of range", idx)
}
return b.randaoMixes[idx], nil
return b.randaoMixes.RootAtIndex(idx), nil
}
// RandaoMixesLength returns the length of the randao mixes slice.
@@ -63,5 +62,5 @@ func (b *BeaconState) randaoMixesLength() int {
return 0
}
return len(b.randaoMixes)
return int(b.randaoMixes.TotalLength()) // lint:ignore uintcast -- RandaoMixesLength is a fixed parameter that fits in an int
}

View File

@@ -4,7 +4,6 @@ import (
"fmt"
customtypes "github.com/prysmaticlabs/prysm/beacon-chain/state/state-native/custom-types"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stateutil"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
ethpb "github.com/prysmaticlabs/prysm/proto/prysm/v1alpha1"
)
@@ -25,14 +24,12 @@ func (b *BeaconState) SetBlockRoots(val [][]byte) error {
b.lock.Lock()
defer b.lock.Unlock()
b.sharedFieldReferences[blockRoots].MinusRef()
b.sharedFieldReferences[blockRoots] = stateutil.NewRef(1)
var rootsArr [fieldparams.BlockRootsLength][fieldparams.RootLength]byte
for i := 0; i < len(rootsArr); i++ {
copy(rootsArr[i][:], val[i])
}
roots := customtypes.BlockRoots(rootsArr)
roots := customtypes.BlockRoots{}
roots.SetFromBaseField(rootsArr)
b.blockRoots = &roots
b.markFieldAsDirty(blockRoots)
b.rebuildTrie[blockRoots] = true
@@ -42,24 +39,13 @@ func (b *BeaconState) SetBlockRoots(val [][]byte) error {
// UpdateBlockRootAtIndex for the beacon state. Updates the block root
// at a specific index to a new value.
func (b *BeaconState) UpdateBlockRootAtIndex(idx uint64, blockRoot [32]byte) error {
if uint64(len(b.blockRoots)) <= idx {
if b.blockRoots.TotalLength() <= idx {
return fmt.Errorf("invalid index provided %d", idx)
}
b.lock.Lock()
defer b.lock.Unlock()
r := b.blockRoots
if ref := b.sharedFieldReferences[blockRoots]; ref.Refs() > 1 {
// Copy elements in underlying array by reference.
roots := *b.blockRoots
rootsCopy := roots
r = &rootsCopy
ref.MinusRef()
b.sharedFieldReferences[blockRoots] = stateutil.NewRef(1)
}
r[idx] = blockRoot
b.blockRoots = r
b.blockRoots.SetRootAtIndex(idx, blockRoot)
b.markFieldAsDirty(blockRoots)
b.addDirtyIndices(blockRoots, []uint64{idx})

View File

@@ -3,7 +3,6 @@ package v3
import (
"github.com/pkg/errors"
customtypes "github.com/prysmaticlabs/prysm/beacon-chain/state/state-native/custom-types"
"github.com/prysmaticlabs/prysm/beacon-chain/state/stateutil"
fieldparams "github.com/prysmaticlabs/prysm/config/fieldparams"
"github.com/prysmaticlabs/prysm/encoding/bytesutil"
)
@@ -14,14 +13,12 @@ func (b *BeaconState) SetRandaoMixes(val [][]byte) error {
b.lock.Lock()
defer b.lock.Unlock()
b.sharedFieldReferences[randaoMixes].MinusRef()
b.sharedFieldReferences[randaoMixes] = stateutil.NewRef(1)
var mixesArr [fieldparams.RandaoMixesLength][fieldparams.RootLength]byte
var mixesArr [fieldparams.RandaoMixesLength][32]byte
for i := 0; i < len(mixesArr); i++ {
copy(mixesArr[i][:], val[i])
}
mixes := customtypes.RandaoMixes(mixesArr)
mixes := customtypes.RandaoMixes{}
mixes.SetFromBaseField(mixesArr)
b.randaoMixes = &mixes
b.markFieldAsDirty(randaoMixes)
b.rebuildTrie[randaoMixes] = true
@@ -31,24 +28,13 @@ func (b *BeaconState) SetRandaoMixes(val [][]byte) error {
// UpdateRandaoMixesAtIndex for the beacon state. Updates the randao mixes
// at a specific index to a new value.
func (b *BeaconState) UpdateRandaoMixesAtIndex(idx uint64, val []byte) error {
if uint64(len(b.randaoMixes)) <= idx {
if b.randaoMixes.TotalLength() <= idx {
return errors.Errorf("invalid index provided %d", idx)
}
b.lock.Lock()
defer b.lock.Unlock()
mixes := b.randaoMixes
if refs := b.sharedFieldReferences[randaoMixes].Refs(); refs > 1 {
// Copy elements in underlying array by reference.
m := *b.randaoMixes
mCopy := m
mixes = &mCopy
b.sharedFieldReferences[randaoMixes].MinusRef()
b.sharedFieldReferences[randaoMixes] = stateutil.NewRef(1)
}
mixes[idx] = bytesutil.ToBytes32(val)
b.randaoMixes = mixes
b.randaoMixes.SetRootAtIndex(idx, bytesutil.ToBytes32(val))
b.markFieldAsDirty(randaoMixes)
b.addDirtyIndices(randaoMixes, []uint64{idx})

View File

@@ -36,10 +36,8 @@ func InitializeFromProtoUnsafe(st *ethpb.BeaconStateBellatrix) (state.BeaconStat
return nil, errors.New("received nil state")
}
var bRoots customtypes.BlockRoots
for i, r := range st.BlockRoots {
bRoots[i] = bytesutil.ToBytes32(r)
}
bRoots := customtypes.SetFromSlice(st.BlockRoots)
var sRoots customtypes.StateRoots
for i, r := range st.StateRoots {
sRoots[i] = bytesutil.ToBytes32(r)
@@ -48,10 +46,7 @@ func InitializeFromProtoUnsafe(st *ethpb.BeaconStateBellatrix) (state.BeaconStat
for i, r := range st.HistoricalRoots {
hRoots[i] = bytesutil.ToBytes32(r)
}
var mixes customtypes.RandaoMixes
for i, m := range st.RandaoMixes {
mixes[i] = bytesutil.ToBytes32(m)
}
mixes := customtypes.SetFromSliceRandao(st.RandaoMixes)
fieldCount := params.BeaconConfig().BeaconStateBellatrixFieldCount
b := &BeaconState{
@@ -60,7 +55,7 @@ func InitializeFromProtoUnsafe(st *ethpb.BeaconStateBellatrix) (state.BeaconStat
slot: st.Slot,
fork: st.Fork,
latestBlockHeader: st.LatestBlockHeader,
blockRoots: &bRoots,
blockRoots: bRoots,
stateRoots: &sRoots,
historicalRoots: hRoots,
eth1Data: st.Eth1Data,
@@ -68,7 +63,7 @@ func InitializeFromProtoUnsafe(st *ethpb.BeaconStateBellatrix) (state.BeaconStat
eth1DepositIndex: st.Eth1DepositIndex,
validators: st.Validators,
balances: st.Balances,
randaoMixes: &mixes,
randaoMixes: mixes,
slashings: st.Slashings,
previousEpochParticipation: st.PreviousEpochParticipation,
currentEpochParticipation: st.CurrentEpochParticipation,
@@ -101,9 +96,7 @@ func InitializeFromProtoUnsafe(st *ethpb.BeaconStateBellatrix) (state.BeaconStat
}
// Initialize field reference tracking for shared data.
b.sharedFieldReferences[randaoMixes] = stateutil.NewRef(1)
b.sharedFieldReferences[stateRoots] = stateutil.NewRef(1)
b.sharedFieldReferences[blockRoots] = stateutil.NewRef(1)
b.sharedFieldReferences[previousEpochParticipationBits] = stateutil.NewRef(1) // New in Altair.
b.sharedFieldReferences[currentEpochParticipationBits] = stateutil.NewRef(1) // New in Altair.
b.sharedFieldReferences[slashings] = stateutil.NewRef(1)
@@ -130,9 +123,9 @@ func (b *BeaconState) Copy() state.BeaconState {
eth1DepositIndex: b.eth1DepositIndex,
// Large arrays, infrequently changed, constant size.
randaoMixes: b.randaoMixes,
randaoMixes: b.randaoMixes.Copy(),
stateRoots: b.stateRoots,
blockRoots: b.blockRoots,
blockRoots: b.blockRoots.Copy(),
slashings: b.slashings,
eth1DataVotes: b.eth1DataVotes,
@@ -208,6 +201,7 @@ func (b *BeaconState) Copy() state.BeaconState {
}
}
}
state.StateCount.Inc()
// Finalizer runs when dst is being destroyed in garbage collection.
runtime.SetFinalizer(dst, func(b *BeaconState) {
@@ -217,6 +211,9 @@ func (b *BeaconState) Copy() state.BeaconState {
b.stateFieldLeaves[field].FieldReference().MinusRef()
}
}
b.blockRoots.DecreaseRef()
b.randaoMixes.DecreaseRef()
for i := 0; i < fieldCount; i++ {
field := types.FieldIndex(i)
delete(b.stateFieldLeaves, field)

View File

@@ -45,7 +45,7 @@ func (f *blocksFetcher) nonSkippedSlotAfter(ctx context.Context, slot types.Slot
// Exit early if no peers with epoch higher than our known head are found.
if targetEpoch <= headEpoch {
return 0, errors.Wrapf(errSlotIsTooHigh, "no peers with epoch higher than our known head, peer epoch=%d, head=%d", targetEpoch, headEpoch)
return 0, errSlotIsTooHigh
}
// Transform peer list to avoid eclipsing (filter, shuffle, trim).

View File

@@ -2,7 +2,7 @@ package initialsync
import (
"context"
"github.com/pkg/errors"
"errors"
"time"
"github.com/libp2p/go-libp2p-core/peer"
@@ -285,7 +285,7 @@ func (q *blocksQueue) onScheduleEvent(ctx context.Context) eventHandlerFn {
}
if m.start > q.highestExpectedSlot {
m.setState(stateSkipped)
return m.state, errors.Wrapf(errSlotIsTooHigh, "slot=%d", m.start)
return m.state, errSlotIsTooHigh
}
blocksPerRequest := q.blocksFetcher.blocksPerSecond
if err := q.blocksFetcher.scheduleRequest(ctx, m.start, blocksPerRequest); err != nil {

View File
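Both hunks above stop wrapping errSlotIsTooHigh, so the sentinel travels to callers unmodified and equality checks stay trivial. A minimal sketch of the sentinel pattern, with a toy function body:

package main

import (
	"errors"
	"fmt"
)

// errSlotIsTooHigh is a sentinel error, as in the sync package above.
var errSlotIsTooHigh = errors.New("slot is too high")

func nonSkippedSlotAfter(target, head uint64) (uint64, error) {
	if target <= head {
		return 0, errSlotIsTooHigh // bare sentinel: callers can compare directly
	}
	return target, nil
}

func main() {
	_, err := nonSkippedSlotAfter(3, 5)
	fmt.Println(errors.Is(err, errSlotIsTooHigh)) // true
}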

@@ -49,30 +49,29 @@ type Service struct {
synced *abool.AtomicBool
chainStarted *abool.AtomicBool
counter *ratecounter.RateCounter
genesisChan chan time.Time
}
// NewService configures the initial sync service responsible for bringing the node up to the
// latest head of the blockchain.
func NewService(ctx context.Context, cfg *Config) *Service {
ctx, cancel := context.WithCancel(ctx)
s := &Service{
return &Service{
cfg: cfg,
ctx: ctx,
cancel: cancel,
synced: abool.New(),
chainStarted: abool.New(),
counter: ratecounter.NewRateCounter(counterSeconds * time.Second),
genesisChan: make(chan time.Time),
}
go s.waitForStateInitialization()
return s
}
// Start the initial sync service.
func (s *Service) Start() {
// Wait for state initialized event.
genesis := <-s.genesisChan
genesis, err := s.waitForStateInitialization()
if err != nil {
log.WithError(err).Fatal("Failed to wait for state initialization.")
return
}
if genesis.IsZero() {
log.Debug("Exiting Initial Sync Service")
return
@@ -180,9 +179,10 @@ func (s *Service) waitForMinimumPeers() {
}
}
// waitForStateInitialization makes sure the beacon node is ready to be accessed: either it is
// already properly configured, or it waits until the state-initialized event is triggered.
func (s *Service) waitForStateInitialization() {
func (s *Service) waitForStateInitialization() (time.Time, error) {
// Wait for state to be initialized.
stateChannel := make(chan *feed.Event, 1)
stateSub := s.cfg.StateNotifier.StateFeed().Subscribe(stateChannel)
@@ -198,19 +198,14 @@ func (s *Service) waitForStateInitialization() {
continue
}
log.WithField("starttime", data.StartTime).Debug("Received state initialized event")
s.genesisChan <- data.StartTime
return
return data.StartTime, nil
}
case <-s.ctx.Done():
log.Debug("Context closed, exiting goroutine")
// Send a zero time in the event we are exiting.
s.genesisChan <- time.Time{}
return
return time.Time{}, errors.New("context closed, exiting goroutine")
case err := <-stateSub.Err():
log.WithError(err).Error("Subscription to state notifier failed")
// Send a zero time in the event we are exiting.
s.genesisChan <- time.Time{}
return
return time.Time{}, errors.Wrap(err, "subscription to state notifier failed")
}
}
}
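The refactor above trades the constructor-spawned goroutine and genesisChan handshake for a plain blocking call that returns (time.Time, error). A stripped-down sketch of the resulting control flow, with a bare channel standing in for Prysm's state feed (all names here are illustrative):

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

type service struct {
	ctx       context.Context
	initEvent <-chan time.Time // stands in for the state-initialized feed
}

// waitForStateInitialization blocks until genesis time arrives or the context closes.
func (s *service) waitForStateInitialization() (time.Time, error) {
	select {
	case genesis := <-s.initEvent:
		return genesis, nil
	case <-s.ctx.Done():
		return time.Time{}, errors.New("context closed, exiting goroutine")
	}
}

// Start mirrors the refactored flow: wait synchronously, then proceed or bail.
func (s *service) Start() {
	genesis, err := s.waitForStateInitialization()
	if err != nil {
		fmt.Println("failed to wait for state initialization:", err)
		return
	}
	fmt.Println("genesis received:", genesis)
}

func main() {
	events := make(chan time.Time, 1)
	events <- time.Now() // simulate the state-initialized event
	s := &service{ctx: context.Background(), initEvent: events}
	s.Start()
}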

View File

@@ -168,11 +168,7 @@ func TestService_InitStartStop(t *testing.T) {
Chain: mc,
StateNotifier: notifier,
})
time.Sleep(500 * time.Millisecond)
assert.NotNil(t, s)
if tt.methodRuns != nil {
tt.methodRuns(notifier.StateFeed())
}
wg := &sync.WaitGroup{}
wg.Add(1)
@@ -181,6 +177,11 @@ func TestService_InitStartStop(t *testing.T) {
wg.Done()
}()
time.Sleep(500 * time.Millisecond)
if tt.methodRuns != nil {
tt.methodRuns(notifier.StateFeed())
}
go func() {
// Allow the test to exit (when no head exists, the wait-for-head loop is started).
// In most tests, this is redundant, as Start() already exited.
@@ -207,7 +208,6 @@ func TestService_waitForStateInitialization(t *testing.T) {
synced: abool.New(),
chainStarted: abool.New(),
counter: ratecounter.NewRateCounter(counterSeconds * time.Second),
genesisChan: make(chan time.Time),
}
return s
}
@@ -221,9 +221,8 @@ func TestService_waitForStateInitialization(t *testing.T) {
wg := &sync.WaitGroup{}
wg.Add(1)
go func() {
go s.waitForStateInitialization()
currTime := <-s.genesisChan
assert.Equal(t, true, currTime.IsZero())
_, err := s.waitForStateInitialization()
assert.ErrorContains(t, "context closed", err)
wg.Done()
}()
go func() {
@@ -236,8 +235,6 @@ func TestService_waitForStateInitialization(t *testing.T) {
t.Fatalf("Test should have exited by now, timed out")
}
assert.LogsContain(t, hook, "Waiting for state to be initialized")
assert.LogsContain(t, hook, "Context closed, exiting goroutine")
assert.LogsDoNotContain(t, hook, "Subscription to state notifier failed")
})
t.Run("no state and state init event received", func(t *testing.T) {
@@ -251,8 +248,9 @@ func TestService_waitForStateInitialization(t *testing.T) {
wg := &sync.WaitGroup{}
wg.Add(1)
go func() {
go s.waitForStateInitialization()
receivedGenesisTime = <-s.genesisChan
var err error
receivedGenesisTime, err = s.waitForStateInitialization()
require.NoError(t, err)
assert.Equal(t, false, receivedGenesisTime.IsZero())
wg.Done()
}()
@@ -281,7 +279,6 @@ func TestService_waitForStateInitialization(t *testing.T) {
assert.LogsContain(t, hook, "Event feed data is not type *statefeed.InitializedData")
assert.LogsContain(t, hook, "Waiting for state to be initialized")
assert.LogsContain(t, hook, "Received state initialized event")
assert.LogsDoNotContain(t, hook, "Context closed, exiting goroutine")
})
t.Run("no state and state init event received and service start", func(t *testing.T) {
@@ -296,7 +293,8 @@ func TestService_waitForStateInitialization(t *testing.T) {
wg := &sync.WaitGroup{}
wg.Add(1)
go func() {
s.waitForStateInitialization()
_, err := s.waitForStateInitialization()
require.NoError(t, err)
wg.Done()
}()
@@ -321,7 +319,6 @@ func TestService_waitForStateInitialization(t *testing.T) {
}
assert.LogsContain(t, hook, "Waiting for state to be initialized")
assert.LogsContain(t, hook, "Received state initialized event")
assert.LogsDoNotContain(t, hook, "Context closed, exiting goroutine")
})
}
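The tests above bound their asynchronous assertions with a recurring pattern: run the blocking call in a goroutine, signal completion through a WaitGroup, and fail if a timeout fires first. A generic sketch of that pattern (place it in a _test.go file; the sleep stands in for the function under test):

package example

import (
	"sync"
	"testing"
	"time"
)

func TestBoundedWait(t *testing.T) {
	wg := &sync.WaitGroup{}
	wg.Add(1)
	go func() {
		defer wg.Done()
		// Call the blocking function under test here.
		time.Sleep(100 * time.Millisecond)
	}()

	// Convert wg.Wait() into a channel so it can race against a timeout.
	done := make(chan struct{})
	go func() {
		wg.Wait()
		close(done)
	}()

	select {
	case <-done:
		// Completed in time; run assertions here.
	case <-time.After(2 * time.Second):
		t.Fatalf("Test should have exited by now, timed out")
	}
}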

View File

@@ -3,7 +3,6 @@ package sync
import (
"bytes"
"context"
"fmt"
"sync"
"time"
@@ -308,7 +307,7 @@ func (s *Service) validateStatusMessage(ctx context.Context, msg *pb.Status) err
return nil
}
if !s.cfg.beaconDB.IsFinalizedBlock(ctx, bytesutil.ToBytes32(msg.FinalizedRoot)) {
return errors.Wrap(p2ptypes.ErrInvalidFinalizedRoot, fmt.Sprintf("root=%#x", msg.FinalizedRoot))
return p2ptypes.ErrInvalidFinalizedRoot
}
blk, err := s.cfg.beaconDB.Block(ctx, bytesutil.ToBytes32(msg.FinalizedRoot))
if err != nil {

View File

@@ -154,6 +154,7 @@ func NewService(ctx context.Context, opts ...Option) *Service {
}
r.subHandler = newSubTopicHandler()
r.rateLimiter = newRateLimiter(r.cfg.p2p)
r.initCaches()
go r.registerHandlers()
go r.verifierRoutine()
@@ -163,8 +164,6 @@ func NewService(ctx context.Context, opts ...Option) *Service {
// Start the regular sync service.
func (s *Service) Start() {
s.initCaches()
s.cfg.p2p.AddConnectionHandler(s.reValidatePeer, s.sendGoodbye)
s.cfg.p2p.AddDisconnectionHandler(func(_ context.Context, _ peer.ID) error {
// no-op

View File

@@ -97,15 +97,10 @@ func (s *Service) validateAggregateAndProof(ctx context.Context, pid peer.ID, ms
return pubsub.ValidationIgnore, nil
}
// Check that the block being voted on isn't invalid.
errBadBlockRef := errors.New("bad block referenced in attestation data")
if s.hasBadBlock(bytesutil.ToBytes32(m.Message.Aggregate.Data.BeaconBlockRoot)) {
return pubsub.ValidationReject, errors.Wrapf(errBadBlockRef, "block=BeaconBlockRoot, root=%#x", m.Message.Aggregate.Data.BeaconBlockRoot)
}
if s.hasBadBlock(bytesutil.ToBytes32(m.Message.Aggregate.Data.Target.Root)) {
return pubsub.ValidationReject, errors.Wrapf(errBadBlockRef, "block=Target, root=%#x", m.Message.Aggregate.Data.Target.Root)
}
if s.hasBadBlock(bytesutil.ToBytes32(m.Message.Aggregate.Data.Source.Root)) {
return pubsub.ValidationReject, errors.Wrapf(errBadBlockRef, "block=Source, root=%#x", m.Message.Aggregate.Data.Source.Root)
if s.hasBadBlock(bytesutil.ToBytes32(m.Message.Aggregate.Data.BeaconBlockRoot)) ||
s.hasBadBlock(bytesutil.ToBytes32(m.Message.Aggregate.Data.Target.Root)) ||
s.hasBadBlock(bytesutil.ToBytes32(m.Message.Aggregate.Data.Source.Root)) {
return pubsub.ValidationReject, errors.New("bad block referenced in attestation data")
}
// Verify the aggregate attestation has not already been seen via aggregate gossip, within a block, or through local creation.
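Collapsing the three root checks into a single condition trades per-root error context for brevity. If the offending root ever needs to be reported again, a small variadic helper keeps the call site compact; this is a hypothetical sketch, not the service's actual method:

package main

import "fmt"

// hasBadBlock is a stand-in for the service's bad-block cache lookup.
func hasBadBlock(root [32]byte) bool {
	return root == [32]byte{0xff}
}

// firstBadBlock returns the first known-bad root, if any.
func firstBadBlock(roots ...[32]byte) ([32]byte, bool) {
	for _, r := range roots {
		if hasBadBlock(r) {
			return r, true
		}
	}
	return [32]byte{}, false
}

func main() {
	beaconRoot, targetRoot, sourceRoot := [32]byte{}, [32]byte{0xff}, [32]byte{}
	if root, bad := firstBadBlock(beaconRoot, targetRoot, sourceRoot); bad {
		fmt.Printf("bad block referenced in attestation data: root=%#x\n", root)
	}
}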

View File

@@ -189,8 +189,6 @@ func (s *Service) validateBeaconBlockPubSub(ctx context.Context, pid peer.ID, ms
return pubsub.ValidationAccept, nil
}
var errIncorrectProposerIndex = errors.New("incorrect proposer index")
func (s *Service) validateBeaconBlock(ctx context.Context, blk block.SignedBeaconBlock, blockRoot [32]byte) error {
ctx, span := trace.StartSpan(ctx, "sync.validateBeaconBlock")
defer span.End()
@@ -222,19 +220,13 @@ func (s *Service) validateBeaconBlock(ctx context.Context, blk block.SignedBeaco
if err != nil {
return err
}
sRoot, err := parentState.HashTreeRoot(ctx)
if err != nil {
log.Errorf("that's weird, htr fail")
}
log.Infof("validating block with slot=%d, state.slot=%d, block_root=%#x, state_root=%#x", blk.Block().Slot(), parentState.Slot(), blockRoot, sRoot)
idx, err := helpers.BeaconProposerIndex(ctx, parentState)
if err != nil {
return err
}
log.Infof("got BeaconProposerIndex=%d, block proposer index=%d", idx, blk.Block().ProposerIndex())
if blk.Block().ProposerIndex() != idx {
s.setBadBlock(ctx, blockRoot)
return errors.Wrapf(errIncorrectProposerIndex, "state slot=%d, root=%#x, block_root=%#x", parentState.Slot(), sRoot, blockRoot)
return errors.New("incorrect proposer index")
}
if err = s.validateBellatrixBeaconBlock(ctx, parentState, blk.Block()); err != nil {

View File

@@ -15,7 +15,7 @@ var (
HTTPWeb3ProviderFlag = &cli.StringFlag{
Name: "http-web3provider",
Usage: "A mainchain web3 provider string http endpoint. Can contain auth header as well in the format --http-web3provider=\"https://goerli.infura.io/v3/xxxx,Basic xxx\" for project secret (base64 encoded) and --http-web3provider=\"https://goerli.infura.io/v3/xxxx,Bearer xxx\" for jwt use",
Value: "",
Value: "http://localhost:8545",
}
// ExecutionJWTSecretFlag provides a path to a file containing a hex-encoded string representing a 32 byte secret
// used to authenticate with an execution node via HTTP. This is required if using an HTTP connection, otherwise all requests

View File

@@ -27,7 +27,7 @@ func FlagOptions(c *cli.Context) ([]powchain.Option, error) {
powchain.WithEth1HeaderRequestLimit(c.Uint64(flags.Eth1HeaderReqLimit.Name)),
}
if len(jwtSecret) > 0 {
opts = append(opts, powchain.WithJWTSecret(jwtSecret))
opts = append(opts, powchain.WithHttpEndpointsAndJWTSecret(endpoints, jwtSecret))
}
return opts, nil
}
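WithHttpEndpointsAndJWTSecret follows the functional-options pattern the powchain service is built around: each Option is a function applied to the service during construction, so optional wiring such as JWT auth stays out of the constructor signature. A generic sketch of the pattern (the Service fields and option body are illustrative, not the real powchain types):

package main

import "fmt"

type Service struct {
	endpoints []string
	jwtSecret []byte
}

// Option mutates a Service during construction.
type Option func(*Service) error

// WithHttpEndpointsAndJWTSecret wires authenticated endpoints in one option.
func WithHttpEndpointsAndJWTSecret(endpoints []string, secret []byte) Option {
	return func(s *Service) error {
		s.endpoints = endpoints
		s.jwtSecret = secret
		return nil
	}
}

func NewService(opts ...Option) (*Service, error) {
	s := &Service{}
	for _, o := range opts {
		if err := o(s); err != nil {
			return nil, err
		}
	}
	return s, nil
}

func main() {
	s, err := NewService(WithHttpEndpointsAndJWTSecret([]string{"http://localhost:8545"}, []byte("secret")))
	if err != nil {
		panic(err)
	}
	fmt.Println(s.endpoints, len(s.jwtSecret))
}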

View File

@@ -28,7 +28,6 @@ var Commands = &cli.Command{
flags.WalletPasswordFileFlag,
flags.DeletePublicKeysFlag,
features.Mainnet,
features.PyrmontTestnet,
features.PraterTestnet,
cmd.AcceptTosFlag,
}),
@@ -62,7 +61,6 @@ var Commands = &cli.Command{
flags.GrpcRetriesFlag,
flags.GrpcRetryDelayFlag,
features.Mainnet,
features.PyrmontTestnet,
features.PraterTestnet,
cmd.AcceptTosFlag,
}),
@@ -93,7 +91,6 @@ var Commands = &cli.Command{
flags.BackupPublicKeysFlag,
flags.BackupPasswordFile,
features.Mainnet,
features.PyrmontTestnet,
features.PraterTestnet,
cmd.AcceptTosFlag,
}),
@@ -121,7 +118,6 @@ var Commands = &cli.Command{
flags.AccountPasswordFileFlag,
flags.ImportPrivateKeyFileFlag,
features.Mainnet,
features.PyrmontTestnet,
features.PraterTestnet,
cmd.AcceptTosFlag,
}),
@@ -155,7 +151,6 @@ var Commands = &cli.Command{
flags.GrpcRetryDelayFlag,
flags.ExitAllFlag,
features.Mainnet,
features.PyrmontTestnet,
features.PraterTestnet,
cmd.AcceptTosFlag,
}),

View File

@@ -22,7 +22,6 @@ var Commands = &cli.Command{
cmd.DataDirFlag,
flags.SlashingProtectionExportDirFlag,
features.Mainnet,
features.PyrmontTestnet,
features.PraterTestnet,
cmd.AcceptTosFlag,
}),
@@ -47,7 +46,6 @@ var Commands = &cli.Command{
cmd.DataDirFlag,
flags.SlashingProtectionJSONFileFlag,
features.Mainnet,
features.PyrmontTestnet,
features.PraterTestnet,
cmd.AcceptTosFlag,
}),

View File

@@ -34,7 +34,6 @@ var Commands = &cli.Command{
flags.Mnemonic25thWordFileFlag,
flags.SkipMnemonic25thWordCheckFlag,
features.Mainnet,
features.PyrmontTestnet,
features.PraterTestnet,
cmd.AcceptTosFlag,
}),
@@ -64,7 +63,6 @@ var Commands = &cli.Command{
flags.RemoteSignerKeyPathFlag,
flags.RemoteSignerCACertPathFlag,
features.Mainnet,
features.PyrmontTestnet,
features.PraterTestnet,
cmd.AcceptTosFlag,
}),
@@ -93,7 +91,6 @@ var Commands = &cli.Command{
flags.Mnemonic25thWordFileFlag,
flags.SkipMnemonic25thWordCheckFlag,
features.Mainnet,
features.PyrmontTestnet,
features.PraterTestnet,
cmd.AcceptTosFlag,
}),

View File

@@ -35,9 +35,6 @@ const disabledFeatureFlag = "Disabled feature flag"
// Flags is a struct to represent which features the client will perform on runtime.
type Flags struct {
// Testnet Flags.
PyrmontTestnet bool // PyrmontTestnet defines the flag through which we can enable the node to run on the Pyrmont testnet.
// Feature related flags.
RemoteSlasherProtection bool // RemoteSlasherProtection utilizes a beacon node with --slasher mode for validator slashing protection.
WriteSSZStateTransitions bool // WriteSSZStateTransitions to tmp directory.
@@ -121,13 +118,8 @@ func InitWithReset(c *Flags) func() {
}
// configureTestnet sets the config according to the specified testnet flag.
func configureTestnet(ctx *cli.Context, cfg *Flags) {
if ctx.Bool(PyrmontTestnet.Name) {
log.Warn("Running on Pyrmont Testnet")
params.UsePyrmontConfig()
params.UsePyrmontNetworkConfig()
cfg.PyrmontTestnet = true
} else if ctx.Bool(PraterTestnet.Name) {
func configureTestnet(ctx *cli.Context) {
if ctx.Bool(PraterTestnet.Name) {
log.Warn("Running on the Prater Testnet")
params.UsePraterConfig()
params.UsePraterNetworkConfig()
@@ -145,7 +137,7 @@ func ConfigureBeaconChain(ctx *cli.Context) {
if ctx.Bool(devModeFlag.Name) {
enableDevModeFlags(ctx)
}
configureTestnet(ctx, cfg)
configureTestnet(ctx)
if ctx.Bool(writeSSZStateTransitionsFlag.Name) {
logEnabled(writeSSZStateTransitionsFlag)
@@ -240,7 +232,7 @@ func ConfigureBeaconChain(ctx *cli.Context) {
func ConfigureValidator(ctx *cli.Context) {
complainOnDeprecatedFlags(ctx)
cfg := &Flags{}
configureTestnet(ctx, cfg)
configureTestnet(ctx)
if ctx.Bool(enableExternalSlasherProtectionFlag.Name) {
log.Fatal(
"Remote slashing protection has currently been disabled in Prysm due to safety concerns. " +

View File

@@ -11,41 +11,41 @@ import (
func TestInitFeatureConfig(t *testing.T) {
defer Init(&Flags{})
cfg := &Flags{
PyrmontTestnet: true,
EnablePeerScorer: true,
}
Init(cfg)
c := Get()
assert.Equal(t, true, c.PyrmontTestnet)
assert.Equal(t, true, c.EnablePeerScorer)
// Reset back to false for the follow up tests.
cfg = &Flags{PyrmontTestnet: false}
cfg = &Flags{RemoteSlasherProtection: false}
Init(cfg)
}
func TestInitWithReset(t *testing.T) {
defer Init(&Flags{})
Init(&Flags{
PyrmontTestnet: true,
EnablePeerScorer: true,
})
assert.Equal(t, true, Get().PyrmontTestnet)
assert.Equal(t, true, Get().EnablePeerScorer)
// Overwrite previously set value (value that didn't come by default).
resetCfg := InitWithReset(&Flags{
PyrmontTestnet: false,
EnablePeerScorer: false,
})
assert.Equal(t, false, Get().PyrmontTestnet)
assert.Equal(t, false, Get().EnablePeerScorer)
// Reset must get to previously set configuration (not to default config values).
resetCfg()
assert.Equal(t, true, Get().PyrmontTestnet)
assert.Equal(t, true, Get().EnablePeerScorer)
}
func TestConfigureBeaconConfig(t *testing.T) {
app := cli.App{}
set := flag.NewFlagSet("test", 0)
set.Bool(PyrmontTestnet.Name, true, "test")
set.Bool(enablePeerScorer.Name, true, "test")
context := cli.NewContext(&app, set, nil)
ConfigureBeaconChain(context)
c := Get()
assert.Equal(t, true, c.PyrmontTestnet)
assert.Equal(t, true, c.EnablePeerScorer)
}

View File

@@ -70,6 +70,11 @@ var (
Usage: deprecatedUsage,
Hidden: true,
}
deprecatedPyrmontTestnet = &cli.BoolFlag{
Name: "pyrmont",
Usage: deprecatedUsage,
Hidden: true,
}
)
var deprecatedFlags = []cli.Flag{
@@ -84,4 +89,5 @@ var deprecatedFlags = []cli.Flag{
deprecatedDisableNextSlotStateCache,
deprecatedAttestationAggregationStrategy,
deprecatedForceOptMaxCoverAggregationStategy,
deprecatedPyrmontTestnet,
}

View File

@@ -7,11 +7,6 @@ import (
)
var (
// PyrmontTestnet flag for the multiclient Ethereum consensus testnet.
PyrmontTestnet = &cli.BoolFlag{
Name: "pyrmont",
Usage: "This defines the flag through which we can run on the Pyrmont Multiclient Testnet",
}
// PraterTestnet flag for the multiclient Ethereum consensus testnet.
PraterTestnet = &cli.BoolFlag{
Name: "prater",
@@ -149,6 +144,7 @@ var devModeFlags = []cli.Flag{
enablePeerScorer,
enableVecHTR,
enableForkChoiceDoublyLinkedTree,
enableNativeState,
}
// ValidatorFlags contains a list of all the feature flags that apply to the validator client.
@@ -156,7 +152,6 @@ var ValidatorFlags = append(deprecatedFlags, []cli.Flag{
writeWalletPasswordOnWebOnboarding,
enableExternalSlasherProtectionFlag,
disableAttestingHistoryDBCache,
PyrmontTestnet,
PraterTestnet,
Mainnet,
dynamicKeyReloadDebounceInterval,
@@ -175,7 +170,6 @@ var BeaconChainFlags = append(deprecatedFlags, []cli.Flag{
devModeFlag,
writeSSZStateTransitionsFlag,
disableGRPCConnectionLogging,
PyrmontTestnet,
PraterTestnet,
Mainnet,
enablePeerScorer,

View File

@@ -13,7 +13,6 @@ go_library(
"network_config.go",
"testnet_e2e_config.go",
"testnet_prater_config.go",
"testnet_pyrmont_config.go",
"testutils.go",
"values.go",
],
@@ -48,6 +47,7 @@ go_test(
"@consensus_spec_tests_mainnet//:test_data",
"@consensus_spec_tests_minimal//:test_data",
"@eth2_networks//:configs",
"testdata/e2e_config.yaml",
],
gotags = ["develop"],
race = "on",
@@ -61,3 +61,9 @@ go_test(
"@io_bazel_rules_go//go/tools/bazel:go_default_library",
],
)
filegroup(
name = "custom_configs",
srcs = glob(["testdata/*.yaml"]),
visibility = ["//testing:__subpackages__"],
)

View File

@@ -26,15 +26,6 @@ func BeaconConfig() *BeaconChainConfig {
func OverrideBeaconConfig(c *BeaconChainConfig) {
beaconConfigLock.Lock()
defer beaconConfigLock.Unlock()
c.InitializeForkSchedule()
name, ok := reverseConfigNames[c.ConfigName]
// if name collides with an existing config name, override it, because the fork versions probably conflict
if !ok {
// otherwise define it as the special "Dynamic" name, ie for a config loaded from a file at runtime
name = Dynamic
}
KnownConfigs[name] = func() *BeaconChainConfig { return c }
rebuildKnownForkVersions()
beaconConfig = c
}

View File

@@ -19,15 +19,6 @@ func BeaconConfig() *BeaconChainConfig {
// OverrideBeaconConfig(c). Any subsequent calls to params.BeaconConfig() will
// return this new configuration.
func OverrideBeaconConfig(c *BeaconChainConfig) {
c.InitializeForkSchedule()
name, ok := reverseConfigNames[c.ConfigName]
// if name collides with an existing config name, override it, because the fork versions probably conflict
if !ok {
// otherwise define it as the special "Dynamic" name, ie for a config loaded from a file at runtime
name = Dynamic
}
KnownConfigs[name] = func() *BeaconChainConfig { return c }
rebuildKnownForkVersions()
beaconConfig = c
}

View File

@@ -70,6 +70,7 @@ func LoadChainConfigFile(chainConfigFileName string, conf *BeaconChainConfig) {
// recompute SqrRootSlotsPerEpoch constant to handle non-standard values of SlotsPerEpoch
conf.SqrRootSlotsPerEpoch = types.Slot(math.IntegerSquareRoot(uint64(conf.SlotsPerEpoch)))
log.Debugf("Config file values: %+v", conf)
conf.InitializeForkSchedule()
OverrideBeaconConfig(conf)
}
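With the registry bookkeeping removed from OverrideBeaconConfig, LoadChainConfigFile now calls InitializeForkSchedule itself before installing the config; callers that override a config directly must do the same. A hedged sketch of the resulting call order, reduced to the essentials:

package main

import "fmt"

type BeaconChainConfig struct {
	forkScheduleReady bool
}

// InitializeForkSchedule must run before the config is installed globally.
func (c *BeaconChainConfig) InitializeForkSchedule() { c.forkScheduleReady = true }

var beaconConfig *BeaconChainConfig

// OverrideBeaconConfig now only swaps the pointer; no registry bookkeeping.
func OverrideBeaconConfig(c *BeaconChainConfig) { beaconConfig = c }

// LoadChainConfigFile mirrors the new ordering: parse, initialize, then override.
func LoadChainConfigFile(conf *BeaconChainConfig) {
	// ... parse the YAML file into conf ...
	conf.InitializeForkSchedule()
	OverrideBeaconConfig(conf)
}

func main() {
	LoadChainConfigFile(&BeaconChainConfig{})
	fmt.Println(beaconConfig.forkScheduleReady) // true
}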

View File

@@ -18,94 +18,94 @@ import (
var placeholderFields = []string{"UPDATE_TIMEOUT", "INTERVALS_PER_SLOT"}
func TestLoadConfigFileMainnet(t *testing.T) {
func TestLoadConfigFile(t *testing.T) {
// See https://media.githubusercontent.com/media/ethereum/consensus-spec-tests/master/tests/minimal/config/phase0.yaml
assertVals := func(name string, fields []string, c1, c2 *params.BeaconChainConfig) {
assertVals := func(name string, fields []string, expected, actual *params.BeaconChainConfig) {
// Misc params.
assert.Equal(t, c1.MaxCommitteesPerSlot, c2.MaxCommitteesPerSlot, "%s: MaxCommitteesPerSlot", name)
assert.Equal(t, c1.TargetCommitteeSize, c2.TargetCommitteeSize, "%s: TargetCommitteeSize", name)
assert.Equal(t, c1.MaxValidatorsPerCommittee, c2.MaxValidatorsPerCommittee, "%s: MaxValidatorsPerCommittee", name)
assert.Equal(t, c1.MinPerEpochChurnLimit, c2.MinPerEpochChurnLimit, "%s: MinPerEpochChurnLimit", name)
assert.Equal(t, c1.ChurnLimitQuotient, c2.ChurnLimitQuotient, "%s: ChurnLimitQuotient", name)
assert.Equal(t, c1.ShuffleRoundCount, c2.ShuffleRoundCount, "%s: ShuffleRoundCount", name)
assert.Equal(t, c1.MinGenesisActiveValidatorCount, c2.MinGenesisActiveValidatorCount, "%s: MinGenesisActiveValidatorCount", name)
assert.Equal(t, c1.MinGenesisTime, c2.MinGenesisTime, "%s: MinGenesisTime", name)
assert.Equal(t, c1.HysteresisQuotient, c2.HysteresisQuotient, "%s: HysteresisQuotient", name)
assert.Equal(t, c1.HysteresisDownwardMultiplier, c2.HysteresisDownwardMultiplier, "%s: HysteresisDownwardMultiplier", name)
assert.Equal(t, c1.HysteresisUpwardMultiplier, c2.HysteresisUpwardMultiplier, "%s: HysteresisUpwardMultiplier", name)
assert.Equal(t, expected.MaxCommitteesPerSlot, actual.MaxCommitteesPerSlot, "%s: MaxCommitteesPerSlot", name)
assert.Equal(t, expected.TargetCommitteeSize, actual.TargetCommitteeSize, "%s: TargetCommitteeSize", name)
assert.Equal(t, expected.MaxValidatorsPerCommittee, actual.MaxValidatorsPerCommittee, "%s: MaxValidatorsPerCommittee", name)
assert.Equal(t, expected.MinPerEpochChurnLimit, actual.MinPerEpochChurnLimit, "%s: MinPerEpochChurnLimit", name)
assert.Equal(t, expected.ChurnLimitQuotient, actual.ChurnLimitQuotient, "%s: ChurnLimitQuotient", name)
assert.Equal(t, expected.ShuffleRoundCount, actual.ShuffleRoundCount, "%s: ShuffleRoundCount", name)
assert.Equal(t, expected.MinGenesisActiveValidatorCount, actual.MinGenesisActiveValidatorCount, "%s: MinGenesisActiveValidatorCount", name)
assert.Equal(t, expected.MinGenesisTime, actual.MinGenesisTime, "%s: MinGenesisTime", name)
assert.Equal(t, expected.HysteresisQuotient, actual.HysteresisQuotient, "%s: HysteresisQuotient", name)
assert.Equal(t, expected.HysteresisDownwardMultiplier, actual.HysteresisDownwardMultiplier, "%s: HysteresisDownwardMultiplier", name)
assert.Equal(t, expected.HysteresisUpwardMultiplier, actual.HysteresisUpwardMultiplier, "%s: HysteresisUpwardMultiplier", name)
// Fork Choice params.
assert.Equal(t, c1.SafeSlotsToUpdateJustified, c2.SafeSlotsToUpdateJustified, "%s: SafeSlotsToUpdateJustified", name)
assert.Equal(t, expected.SafeSlotsToUpdateJustified, actual.SafeSlotsToUpdateJustified, "%s: SafeSlotsToUpdateJustified", name)
// Validator params.
assert.Equal(t, c1.Eth1FollowDistance, c2.Eth1FollowDistance, "%s: Eth1FollowDistance", name)
assert.Equal(t, c1.TargetAggregatorsPerCommittee, c2.TargetAggregatorsPerCommittee, "%s: TargetAggregatorsPerCommittee", name)
assert.Equal(t, c1.RandomSubnetsPerValidator, c2.RandomSubnetsPerValidator, "%s: RandomSubnetsPerValidator", name)
assert.Equal(t, c1.EpochsPerRandomSubnetSubscription, c2.EpochsPerRandomSubnetSubscription, "%s: EpochsPerRandomSubnetSubscription", name)
assert.Equal(t, c1.SecondsPerETH1Block, c2.SecondsPerETH1Block, "%s: SecondsPerETH1Block", name)
assert.Equal(t, expected.Eth1FollowDistance, actual.Eth1FollowDistance, "%s: Eth1FollowDistance", name)
assert.Equal(t, expected.TargetAggregatorsPerCommittee, actual.TargetAggregatorsPerCommittee, "%s: TargetAggregatorsPerCommittee", name)
assert.Equal(t, expected.RandomSubnetsPerValidator, actual.RandomSubnetsPerValidator, "%s: RandomSubnetsPerValidator", name)
assert.Equal(t, expected.EpochsPerRandomSubnetSubscription, actual.EpochsPerRandomSubnetSubscription, "%s: EpochsPerRandomSubnetSubscription", name)
assert.Equal(t, expected.SecondsPerETH1Block, actual.SecondsPerETH1Block, "%s: SecondsPerETH1Block", name)
// Deposit contract.
assert.Equal(t, c1.DepositChainID, c2.DepositChainID, "%s: DepositChainID", name)
assert.Equal(t, c1.DepositNetworkID, c2.DepositNetworkID, "%s: DepositNetworkID", name)
assert.Equal(t, c1.DepositContractAddress, c2.DepositContractAddress, "%s: DepositContractAddress", name)
assert.Equal(t, expected.DepositChainID, actual.DepositChainID, "%s: DepositChainID", name)
assert.Equal(t, expected.DepositNetworkID, actual.DepositNetworkID, "%s: DepositNetworkID", name)
assert.Equal(t, expected.DepositContractAddress, actual.DepositContractAddress, "%s: DepositContractAddress", name)
// Gwei values.
assert.Equal(t, c1.MinDepositAmount, c2.MinDepositAmount, "%s: MinDepositAmount", name)
assert.Equal(t, c1.MaxEffectiveBalance, c2.MaxEffectiveBalance, "%s: MaxEffectiveBalance", name)
assert.Equal(t, c1.EjectionBalance, c2.EjectionBalance, "%s: EjectionBalance", name)
assert.Equal(t, c1.EffectiveBalanceIncrement, c2.EffectiveBalanceIncrement, "%s: EffectiveBalanceIncrement", name)
assert.Equal(t, expected.MinDepositAmount, actual.MinDepositAmount, "%s: MinDepositAmount", name)
assert.Equal(t, expected.MaxEffectiveBalance, actual.MaxEffectiveBalance, "%s: MaxEffectiveBalance", name)
assert.Equal(t, expected.EjectionBalance, actual.EjectionBalance, "%s: EjectionBalance", name)
assert.Equal(t, expected.EffectiveBalanceIncrement, actual.EffectiveBalanceIncrement, "%s: EffectiveBalanceIncrement", name)
// Initial values.
assert.DeepEqual(t, c1.GenesisForkVersion, c2.GenesisForkVersion, "%s: GenesisForkVersion", name)
assert.DeepEqual(t, c1.BLSWithdrawalPrefixByte, c2.BLSWithdrawalPrefixByte, "%s: BLSWithdrawalPrefixByte", name)
assert.DeepEqual(t, expected.GenesisForkVersion, actual.GenesisForkVersion, "%s: GenesisForkVersion", name)
assert.DeepEqual(t, expected.BLSWithdrawalPrefixByte, actual.BLSWithdrawalPrefixByte, "%s: BLSWithdrawalPrefixByte", name)
// Time parameters.
assert.Equal(t, c1.GenesisDelay, c2.GenesisDelay, "%s: GenesisDelay", name)
assert.Equal(t, c1.SecondsPerSlot, c2.SecondsPerSlot, "%s: SecondsPerSlot", name)
assert.Equal(t, c1.MinAttestationInclusionDelay, c2.MinAttestationInclusionDelay, "%s: MinAttestationInclusionDelay", name)
assert.Equal(t, c1.SlotsPerEpoch, c2.SlotsPerEpoch, "%s: SlotsPerEpoch", name)
assert.Equal(t, c1.MinSeedLookahead, c2.MinSeedLookahead, "%s: MinSeedLookahead", name)
assert.Equal(t, c1.MaxSeedLookahead, c2.MaxSeedLookahead, "%s: MaxSeedLookahead", name)
assert.Equal(t, c1.EpochsPerEth1VotingPeriod, c2.EpochsPerEth1VotingPeriod, "%s: EpochsPerEth1VotingPeriod", name)
assert.Equal(t, c1.SlotsPerHistoricalRoot, c2.SlotsPerHistoricalRoot, "%s: SlotsPerHistoricalRoot", name)
assert.Equal(t, c1.MinValidatorWithdrawabilityDelay, c2.MinValidatorWithdrawabilityDelay, "%s: MinValidatorWithdrawabilityDelay", name)
assert.Equal(t, c1.ShardCommitteePeriod, c2.ShardCommitteePeriod, "%s: ShardCommitteePeriod", name)
assert.Equal(t, c1.MinEpochsToInactivityPenalty, c2.MinEpochsToInactivityPenalty, "%s: MinEpochsToInactivityPenalty", name)
assert.Equal(t, expected.GenesisDelay, actual.GenesisDelay, "%s: GenesisDelay", name)
assert.Equal(t, expected.SecondsPerSlot, actual.SecondsPerSlot, "%s: SecondsPerSlot", name)
assert.Equal(t, expected.MinAttestationInclusionDelay, actual.MinAttestationInclusionDelay, "%s: MinAttestationInclusionDelay", name)
assert.Equal(t, expected.SlotsPerEpoch, actual.SlotsPerEpoch, "%s: SlotsPerEpoch", name)
assert.Equal(t, expected.MinSeedLookahead, actual.MinSeedLookahead, "%s: MinSeedLookahead", name)
assert.Equal(t, expected.MaxSeedLookahead, actual.MaxSeedLookahead, "%s: MaxSeedLookahead", name)
assert.Equal(t, expected.EpochsPerEth1VotingPeriod, actual.EpochsPerEth1VotingPeriod, "%s: EpochsPerEth1VotingPeriod", name)
assert.Equal(t, expected.SlotsPerHistoricalRoot, actual.SlotsPerHistoricalRoot, "%s: SlotsPerHistoricalRoot", name)
assert.Equal(t, expected.MinValidatorWithdrawabilityDelay, actual.MinValidatorWithdrawabilityDelay, "%s: MinValidatorWithdrawabilityDelay", name)
assert.Equal(t, expected.ShardCommitteePeriod, actual.ShardCommitteePeriod, "%s: ShardCommitteePeriod", name)
assert.Equal(t, expected.MinEpochsToInactivityPenalty, actual.MinEpochsToInactivityPenalty, "%s: MinEpochsToInactivityPenalty", name)
// State vector lengths.
assert.Equal(t, c1.EpochsPerHistoricalVector, c2.EpochsPerHistoricalVector, "%s: EpochsPerHistoricalVector", name)
assert.Equal(t, c1.EpochsPerSlashingsVector, c2.EpochsPerSlashingsVector, "%s: EpochsPerSlashingsVector", name)
assert.Equal(t, c1.HistoricalRootsLimit, c2.HistoricalRootsLimit, "%s: HistoricalRootsLimit", name)
assert.Equal(t, c1.ValidatorRegistryLimit, c2.ValidatorRegistryLimit, "%s: ValidatorRegistryLimit", name)
assert.Equal(t, expected.EpochsPerHistoricalVector, actual.EpochsPerHistoricalVector, "%s: EpochsPerHistoricalVector", name)
assert.Equal(t, expected.EpochsPerSlashingsVector, actual.EpochsPerSlashingsVector, "%s: EpochsPerSlashingsVector", name)
assert.Equal(t, expected.HistoricalRootsLimit, actual.HistoricalRootsLimit, "%s: HistoricalRootsLimit", name)
assert.Equal(t, expected.ValidatorRegistryLimit, actual.ValidatorRegistryLimit, "%s: ValidatorRegistryLimit", name)
// Reward and penalty quotients.
assert.Equal(t, c1.BaseRewardFactor, c2.BaseRewardFactor, "%s: BaseRewardFactor", name)
assert.Equal(t, c1.WhistleBlowerRewardQuotient, c2.WhistleBlowerRewardQuotient, "%s: WhistleBlowerRewardQuotient", name)
assert.Equal(t, c1.ProposerRewardQuotient, c2.ProposerRewardQuotient, "%s: ProposerRewardQuotient", name)
assert.Equal(t, c1.InactivityPenaltyQuotient, c2.InactivityPenaltyQuotient, "%s: InactivityPenaltyQuotient", name)
assert.Equal(t, c1.InactivityPenaltyQuotientAltair, c2.InactivityPenaltyQuotientAltair, "%s: InactivityPenaltyQuotientAltair", name)
assert.Equal(t, c1.MinSlashingPenaltyQuotient, c2.MinSlashingPenaltyQuotient, "%s: MinSlashingPenaltyQuotient", name)
assert.Equal(t, c1.MinSlashingPenaltyQuotientAltair, c2.MinSlashingPenaltyQuotientAltair, "%s: MinSlashingPenaltyQuotientAltair", name)
assert.Equal(t, c1.ProportionalSlashingMultiplier, c2.ProportionalSlashingMultiplier, "%s: ProportionalSlashingMultiplier", name)
assert.Equal(t, c1.ProportionalSlashingMultiplierAltair, c2.ProportionalSlashingMultiplierAltair, "%s: ProportionalSlashingMultiplierAltair", name)
assert.Equal(t, expected.BaseRewardFactor, actual.BaseRewardFactor, "%s: BaseRewardFactor", name)
assert.Equal(t, expected.WhistleBlowerRewardQuotient, actual.WhistleBlowerRewardQuotient, "%s: WhistleBlowerRewardQuotient", name)
assert.Equal(t, expected.ProposerRewardQuotient, actual.ProposerRewardQuotient, "%s: ProposerRewardQuotient", name)
assert.Equal(t, expected.InactivityPenaltyQuotient, actual.InactivityPenaltyQuotient, "%s: InactivityPenaltyQuotient", name)
assert.Equal(t, expected.InactivityPenaltyQuotientAltair, actual.InactivityPenaltyQuotientAltair, "%s: InactivityPenaltyQuotientAltair", name)
assert.Equal(t, expected.MinSlashingPenaltyQuotient, actual.MinSlashingPenaltyQuotient, "%s: MinSlashingPenaltyQuotient", name)
assert.Equal(t, expected.MinSlashingPenaltyQuotientAltair, actual.MinSlashingPenaltyQuotientAltair, "%s: MinSlashingPenaltyQuotientAltair", name)
assert.Equal(t, expected.ProportionalSlashingMultiplier, actual.ProportionalSlashingMultiplier, "%s: ProportionalSlashingMultiplier", name)
assert.Equal(t, expected.ProportionalSlashingMultiplierAltair, actual.ProportionalSlashingMultiplierAltair, "%s: ProportionalSlashingMultiplierAltair", name)
// Max operations per block.
assert.Equal(t, c1.MaxProposerSlashings, c2.MaxProposerSlashings, "%s: MaxProposerSlashings", name)
assert.Equal(t, c1.MaxAttesterSlashings, c2.MaxAttesterSlashings, "%s: MaxAttesterSlashings", name)
assert.Equal(t, c1.MaxAttestations, c2.MaxAttestations, "%s: MaxAttestations", name)
assert.Equal(t, c1.MaxDeposits, c2.MaxDeposits, "%s: MaxDeposits", name)
assert.Equal(t, c1.MaxVoluntaryExits, c2.MaxVoluntaryExits, "%s: MaxVoluntaryExits", name)
assert.Equal(t, expected.MaxProposerSlashings, actual.MaxProposerSlashings, "%s: MaxProposerSlashings", name)
assert.Equal(t, expected.MaxAttesterSlashings, actual.MaxAttesterSlashings, "%s: MaxAttesterSlashings", name)
assert.Equal(t, expected.MaxAttestations, actual.MaxAttestations, "%s: MaxAttestations", name)
assert.Equal(t, expected.MaxDeposits, actual.MaxDeposits, "%s: MaxDeposits", name)
assert.Equal(t, expected.MaxVoluntaryExits, actual.MaxVoluntaryExits, "%s: MaxVoluntaryExits", name)
// Signature domains.
assert.Equal(t, c1.DomainBeaconProposer, c2.DomainBeaconProposer, "%s: DomainBeaconProposer", name)
assert.Equal(t, c1.DomainBeaconAttester, c2.DomainBeaconAttester, "%s: DomainBeaconAttester", name)
assert.Equal(t, c1.DomainRandao, c2.DomainRandao, "%s: DomainRandao", name)
assert.Equal(t, c1.DomainDeposit, c2.DomainDeposit, "%s: DomainDeposit", name)
assert.Equal(t, c1.DomainVoluntaryExit, c2.DomainVoluntaryExit, "%s: DomainVoluntaryExit", name)
assert.Equal(t, c1.DomainSelectionProof, c2.DomainSelectionProof, "%s: DomainSelectionProof", name)
assert.Equal(t, c1.DomainAggregateAndProof, c2.DomainAggregateAndProof, "%s: DomainAggregateAndProof", name)
assert.Equal(t, expected.DomainBeaconProposer, actual.DomainBeaconProposer, "%s: DomainBeaconProposer", name)
assert.Equal(t, expected.DomainBeaconAttester, actual.DomainBeaconAttester, "%s: DomainBeaconAttester", name)
assert.Equal(t, expected.DomainRandao, actual.DomainRandao, "%s: DomainRandao", name)
assert.Equal(t, expected.DomainDeposit, actual.DomainDeposit, "%s: DomainDeposit", name)
assert.Equal(t, expected.DomainVoluntaryExit, actual.DomainVoluntaryExit, "%s: DomainVoluntaryExit", name)
assert.Equal(t, expected.DomainSelectionProof, actual.DomainSelectionProof, "%s: DomainSelectionProof", name)
assert.Equal(t, expected.DomainAggregateAndProof, actual.DomainAggregateAndProof, "%s: DomainAggregateAndProof", name)
assertYamlFieldsMatch(t, name, fields, c1, c2)
assertYamlFieldsMatch(t, name, fields, expected, actual)
}
t.Run("mainnet", func(t *testing.T) {
@@ -129,6 +129,17 @@ func TestLoadConfigFileMainnet(t *testing.T) {
fields := fieldsFromYamls(t, append(minimalPresetsFiles, minimalConfigFile))
assertVals("minimal", fields, params.MinimalSpecConfig(), params.BeaconConfig())
})
t.Run("e2e", func(t *testing.T) {
minimalPresetsFiles := presetsFilePath(t, "minimal")
for _, fp := range minimalPresetsFiles {
params.LoadChainConfigFile(fp, nil)
}
configFile := "testdata/e2e_config.yaml"
params.LoadChainConfigFile(configFile, nil)
fields := fieldsFromYamls(t, append(minimalPresetsFiles, configFile))
assertVals("e2e", fields, params.E2ETestConfig(), params.BeaconConfig())
})
}
func TestLoadConfigFile_OverwriteCorrectly(t *testing.T) {

config/params/testdata/e2e_config.yaml (vendored, new file, 118 lines)
View File

@@ -0,0 +1,118 @@
# e2e config
# Extends the minimal preset
PRESET_BASE: 'minimal'
# Transition
# ---------------------------------------------------------------
# TBD upstream (2**256-2**10 is a placeholder); e2e uses 600
TERMINAL_TOTAL_DIFFICULTY: 600
# By default, don't use these params
#TERMINAL_BLOCK_HASH: 0x0000000000000000000000000000000000000000000000000000000000000000
#TERMINAL_BLOCK_HASH_ACTIVATION_EPOCH: 18446744073709551615
# Genesis
# ---------------------------------------------------------------
# [customized]
MIN_GENESIS_ACTIVE_VALIDATOR_COUNT: 256 # Override for e2e tests
# Jan 3, 2020
MIN_GENESIS_TIME: 1578009600
# Highest byte set to 0x01 to avoid collisions with mainnet versioning
GENESIS_FORK_VERSION: 0x000000fd
# [customized] Faster to spin up testnets, but does not give validator reasonable warning time for genesis
GENESIS_DELAY: 10 # Override for e2e tests
# Forking
# ---------------------------------------------------------------
# Values provided for illustrative purposes.
# Individual tests/testnets may set different values.
# Altair
ALTAIR_FORK_VERSION: 0x010000fd
ALTAIR_FORK_EPOCH: 6 # Override for e2e
# Bellatrix
BELLATRIX_FORK_VERSION: 0x020000fd
BELLATRIX_FORK_EPOCH: 8
# Sharding
SHARDING_FORK_VERSION: 0x030000fd
SHARDING_FORK_EPOCH: 18446744073709551615
# Time parameters
# ---------------------------------------------------------------
# [customized] Faster for testing purposes
SECONDS_PER_SLOT: 10 # Override for e2e tests
# 14 (estimate from Eth1 mainnet)
SECONDS_PER_ETH1_BLOCK: 2 # Override for e2e tests
# 2**8 (= 256) epochs
MIN_VALIDATOR_WITHDRAWABILITY_DELAY: 256
# [customized] higher frequency of committee turnover and faster time to acceptable voluntary exit
SHARD_COMMITTEE_PERIOD: 4 # Override for e2e tests
# [customized] process deposits more quickly, but insecure
ETH1_FOLLOW_DISTANCE: 4 # Override for e2e tests
# Validator cycle
# ---------------------------------------------------------------
# 2**2 (= 4)
INACTIVITY_SCORE_BIAS: 4
# 2**4 (= 16)
INACTIVITY_SCORE_RECOVERY_RATE: 16
# 2**4 * 10**9 (= 16,000,000,000) Gwei
EJECTION_BALANCE: 16000000000
# 2**2 (= 4)
MIN_PER_EPOCH_CHURN_LIMIT: 4
# [customized] scale queue churn at much lower validator counts for testing
CHURN_LIMIT_QUOTIENT: 65536
# Fork choice
# ---------------------------------------------------------------
# 70%
PROPOSER_SCORE_BOOST: 70
# Deposit contract
# ---------------------------------------------------------------
# Ethereum Goerli testnet
DEPOSIT_CHAIN_ID: 1337 # Override for e2e tests
DEPOSIT_NETWORK_ID: 1337 # Override for e2e tests
# Configured on a per testnet basis
DEPOSIT_CONTRACT_ADDRESS: 0x1234567890123456789012345678901234567890
# Updated penalty values
# ---------------------------------------------------------------
# 3 * 2**24 (= 50,331,648)
INACTIVITY_PENALTY_QUOTIENT_ALTAIR: 50331648
# 2**6 (= 64)
MIN_SLASHING_PENALTY_QUOTIENT_ALTAIR: 64
# 2
PROPORTIONAL_SLASHING_MULTIPLIER_ALTAIR: 2
# Sync committee
# ---------------------------------------------------------------
# [customized]
SYNC_COMMITTEE_SIZE: 32
# [customized]
EPOCHS_PER_SYNC_COMMITTEE_PERIOD: 8
# Sync protocol
# ---------------------------------------------------------------
# 1
MIN_SYNC_COMMITTEE_PARTICIPANTS: 1
# Other e2e overrides
# ---------------------------------------------------------------
CONFIG_NAME: "end-to-end"
SLOTS_PER_EPOCH: 6
EPOCHS_PER_ETH1_VOTING_PERIOD: 2
MAX_SEED_LOOKAHEAD: 1

View File

@@ -45,7 +45,7 @@ func E2ETestConfig() *BeaconChainConfig {
e2eConfig.DepositChainID = 1337 // Chain ID of eth1 dev net.
e2eConfig.DepositNetworkID = 1337 // Network ID of eth1 dev net.
// Altair Fork Parameters.
// Fork Parameters.
e2eConfig.AltairForkEpoch = altairE2EForkEpoch
e2eConfig.BellatrixForkEpoch = bellatrixE2EForkEpoch

View File

@@ -1,43 +0,0 @@
package params
import "math"
// UsePyrmontNetworkConfig uses the Pyrmont specific
// network config.
func UsePyrmontNetworkConfig() {
cfg := BeaconNetworkConfig().Copy()
cfg.ContractDeploymentBlock = 3743587
cfg.BootstrapNodes = []string{
"enr:-Ku4QOA5OGWObY8ep_x35NlGBEj7IuQULTjkgxC_0G1AszqGEA0Wn2RNlyLFx9zGTNB1gdFBA6ZDYxCgIza1uJUUOj4Dh2F0dG5ldHOIAAAAAAAAAACEZXRoMpDVTPWXAAAgCf__________gmlkgnY0gmlwhDQPSjiJc2VjcDI1NmsxoQM6yTQB6XGWYJbI7NZFBjp4Yb9AYKQPBhVrfUclQUobb4N1ZHCCIyg",
"enr:-Ku4QOksdA2tabOGrfOOr6NynThMoio6Ggka2oDPqUuFeWCqcRM2alNb8778O_5bK95p3EFt0cngTUXm2H7o1jkSJ_8Dh2F0dG5ldHOIAAAAAAAAAACEZXRoMpDVTPWXAAAgCf__________gmlkgnY0gmlwhDaa13aJc2VjcDI1NmsxoQKdNQJvnohpf0VO0ZYCAJxGjT0uwJoAHbAiBMujGjK0SoN1ZHCCIyg",
}
OverrideBeaconNetworkConfig(cfg)
}
// UsePyrmontConfig sets the main beacon chain
// config for Pyrmont.
func UsePyrmontConfig() {
beaconConfig = PyrmontConfig()
}
// PyrmontConfig defines the config for the
// Pyrmont testnet.
func PyrmontConfig() *BeaconChainConfig {
cfg := MainnetConfig().Copy()
cfg.MinGenesisTime = 1605700800
cfg.GenesisDelay = 432000
cfg.ConfigName = ConfigNames[Pyrmont]
cfg.GenesisForkVersion = []byte{0x00, 0x00, 0x20, 0x09}
cfg.AltairForkVersion = []byte{0x01, 0x00, 0x20, 0x09}
cfg.AltairForkEpoch = 61650
cfg.BellatrixForkVersion = []byte{0x02, 0x00, 0x20, 0x09}
cfg.BellatrixForkEpoch = math.MaxUint64
cfg.ShardingForkVersion = []byte{0x03, 0x00, 0x20, 0x09}
cfg.ShardingForkEpoch = math.MaxUint64
cfg.SecondsPerETH1Block = 14
cfg.DepositChainID = 5
cfg.DepositNetworkID = 5
cfg.DepositContractAddress = "0x8c5fecdC472E27Bc447696F431E425D02dd46a8c"
cfg.InitializeForkSchedule()
return cfg
}

View File

@@ -12,10 +12,8 @@ const (
Mainnet ConfigName = iota
Minimal
EndToEnd
Pyrmont
Prater
EndToEndMainnet
Dynamic
)
// ConfigName enum describes the type of known network in use.
@@ -34,18 +32,14 @@ var ConfigNames = map[ConfigName]string{
Mainnet: "mainnet",
Minimal: "minimal",
EndToEnd: "end-to-end",
Pyrmont: "pyrmont",
Prater: "prater",
EndToEndMainnet: "end-to-end-mainnet",
Dynamic: "dynamic",
}
var reverseConfigNames map[string]ConfigName
// KnownConfigs provides an index of all known BeaconChainConfig values.
var KnownConfigs = map[ConfigName]func() *BeaconChainConfig{
Mainnet: MainnetConfig,
Prater: PraterConfig,
Pyrmont: PyrmontConfig,
Minimal: MinimalSpecConfig,
EndToEnd: E2ETestConfig,
EndToEndMainnet: E2EMainnetTestConfig,
@@ -66,18 +60,6 @@ func ConfigForVersion(version [fieldparams.VersionLength]byte) (*BeaconChainConf
}
func init() {
rebuildKnownForkVersions()
buildReverseConfigName()
}
func buildReverseConfigName() {
reverseConfigNames = make(map[string]ConfigName)
for cn, s := range ConfigNames {
reverseConfigNames[s] = cn
}
}
func rebuildKnownForkVersions() {
knownForkVersions = make(map[[fieldparams.VersionLength]byte]ConfigName)
for n, cfunc := range KnownConfigs {
cfg := cfunc()

View File

@@ -1274,12 +1274,6 @@ def prysm_deps():
sum = "h1:utua3L2IbQJmauC5IXdEA547bcoU5dozgQAfc8Onsg4=",
version = "v0.0.0-20181222135242-d2cdd8c08219",
)
go_repository(
name = "com_github_gomarkdown_markdown",
importpath = "github.com/gomarkdown/markdown",
sum = "h1:YVvt637ygnOO9qjLBVmPOvrUmCz/i8YECSu/8UlOQW0=",
version = "v0.0.0-20220310201231-552c6011c0b8",
)
go_repository(
name = "com_github_google_btree",
@@ -3762,6 +3756,13 @@ def prysm_deps():
sum = "h1:qph92Y649prgesehzOrQjdWyxFOp/QVM+6imKHad91M=",
version = "v2.3.0",
)
go_repository(
name = "com_github_uudashr_gocognit",
importpath = "github.com/uudashr/gocognit",
sum = "h1:rrSex7oHr3/pPLQ0xoWq108XMU8s678FJcQ+aSfOHa4=",
version = "v1.0.5",
)
go_repository(
name = "com_github_valyala_bytebufferpool",
importpath = "github.com/valyala/bytebufferpool",

View File

@@ -81,40 +81,40 @@ func FromForkVersion(cv [fieldparams.VersionLength]byte) (*VersionedUnmarshaler,
// UnmarshalBeaconState uses internal knowledge in the VersionedUnmarshaler to pick the right concrete BeaconState type,
// then Unmarshal()s the type and returns an instance of state.BeaconState if successful.
func (cf *VersionedUnmarshaler) UnmarshalBeaconState(marshaled []byte) (s state.BeaconState, err error) {
info := fmt.Sprintf("fork=%s, config=%s", version.String(cf.Fork), cf.Config.ConfigName)
forkName := version.String(cf.Fork)
switch fork := cf.Fork; fork {
case version.Phase0:
st := &ethpb.BeaconState{}
err = st.UnmarshalSSZ(marshaled)
if err != nil {
return nil, errors.Wrapf(err, "failed to unmarshal state, detected info=%s", info)
return nil, errors.Wrapf(err, "failed to unmarshal state, detected fork=%s", forkName)
}
s, err = v1.InitializeFromProtoUnsafe(st)
if err != nil {
return nil, errors.Wrapf(err, "failed to init state trie from state, detected info=%s", info)
return nil, errors.Wrapf(err, "failed to init state trie from state, detected fork=%s", forkName)
}
case version.Altair:
st := &ethpb.BeaconStateAltair{}
err = st.UnmarshalSSZ(marshaled)
if err != nil {
return nil, errors.Wrapf(err, "failed to unmarshal state, detected info=%s", info)
return nil, errors.Wrapf(err, "failed to unmarshal state, detected fork=%s", forkName)
}
s, err = v2.InitializeFromProtoUnsafe(st)
if err != nil {
return nil, errors.Wrapf(err, "failed to init state trie from state, detected info=%s", info)
return nil, errors.Wrapf(err, "failed to init state trie from state, detected fork=%s", forkName)
}
case version.Bellatrix:
st := &ethpb.BeaconStateBellatrix{}
err = st.UnmarshalSSZ(marshaled)
if err != nil {
return nil, errors.Wrapf(err, "failed to unmarshal state, detected info=%s", info)
return nil, errors.Wrapf(err, "failed to unmarshal state, detected fork=%s", forkName)
}
s, err = v3.InitializeFromProtoUnsafe(st)
if err != nil {
return nil, errors.Wrapf(err, "failed to init state trie from state, detected info=%s", info)
return nil, errors.Wrapf(err, "failed to init state trie from state, detected fork=%s", forkName)
}
default:
return nil, fmt.Errorf("unable to initialize BeaconState for info=%s", info)
return nil, fmt.Errorf("unable to initialize BeaconState for fork version=%s", forkName)
}
return s, nil

go.mod (2 lines changed)
View File

@@ -77,6 +77,7 @@ require (
github.com/trailofbits/go-mutexasserts v0.0.0-20200708152505-19999e7d3cef
github.com/tyler-smith/go-bip39 v1.1.0
github.com/urfave/cli/v2 v2.3.0
github.com/uudashr/gocognit v1.0.5
github.com/wealdtech/go-bytesutil v1.1.1
github.com/wealdtech/go-eth2-util v1.6.3
github.com/wealdtech/go-eth2-wallet-encryptor-keystorev4 v1.1.3
@@ -255,7 +256,6 @@ require (
github.com/go-logr/logr v0.2.1 // indirect
github.com/go-ole/go-ole v1.2.5 // indirect
github.com/go-playground/validator/v10 v10.10.0
github.com/gomarkdown/markdown v0.0.0-20220310201231-552c6011c0b8
github.com/peterh/liner v1.2.0 // indirect
github.com/prometheus/tsdb v0.10.0 // indirect
github.com/prysmaticlabs/gohashtree v0.0.1-alpha.0.20220303211031-f753e083138c

Some files were not shown because too many files have changed in this diff Show More