Compare commits

...

40 Commits

Author SHA1 Message Date
Kasey Kirkham
05d977a8fc minimize syscalls in pruning routine 2024-01-05 12:30:52 -06:00
terence tsao
099a01a63b More err to the end 2024-01-04 07:10:04 -08:00
terence tsao
3f7c54c563 Merge branch 'use-afero-walk' of github.com:prysmaticlabs/prysm into use-afero-walk 2024-01-04 07:05:41 -08:00
terence tsao
f6e641f3fc Use wrap 2024-01-04 07:05:34 -08:00
terence
423089aa0a Merge branch 'develop' into use-afero-walk 2024-01-04 06:51:26 -08:00
terence tsao
59d4f61ebb Return err 2024-01-04 06:42:33 -08:00
terence tsao
e8cada76b6 Merge branch 'develop' of github.com:prysmaticlabs/prysm into use-afero-walk 2024-01-04 06:34:48 -08:00
james-prysm
7781eb60f4 Add rpc trigger for blob sidecar event (#13411)
* adding missed rpc trigger for blob sidecar event

* fixing unit tests

* moving event feed to after receive blob call to prioritize db
2024-01-04 14:22:24 +00:00
Potuz
396b8bf970 Simplify fcu 4 (#13403)
* send two FCU when proposing

* compute voting window at runtime
2024-01-04 13:43:57 +00:00
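One of the bullets above, "compute voting window at runtime", appears to correspond to the slots.WithinVotingWindow call that shows up in the postBlockProcess diff further down. A minimal sketch of such a check, assuming the window is the first third of the slot (the usual attestation deadline) and a simplified signature; this is illustrative, not the actual Prysm helper:

package slots

import "time"

// WithinVotingWindow reports whether the wall clock is still inside the
// voting window of the current slot, computed at runtime from the genesis
// time. The one-third-of-a-slot cutoff is an assumption for illustration.
func WithinVotingWindow(genesisTime, secondsPerSlot uint64) bool {
	sinceGenesis := uint64(time.Now().Unix()) - genesisTime
	return sinceGenesis%secondsPerSlot < secondsPerSlot/3
}
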
Nishant Das
d5107942a1 update it (#13415) 2024-01-04 11:22:23 +00:00
terence
bd4a520013 Initialize blob storage without pruning (#13412) 2024-01-04 05:56:38 +00:00
terence tsao
4a4dbba067 Merge branch 'develop' of github.com:prysmaticlabs/prysm into use-afero-walk 2024-01-03 14:42:27 -08:00
Sammy Rosso
a0ff1351a0 Fix batch pruning errors (#13355)
* Add compareAndSwap

* Update lastPrunedEpoch before prune

* Fix and test

* Remove debug log

* Kasey's review

* Fix tests

* Address Kasey's comments

* Fix prune before slot

* Rename

* Fix bad test

---------

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2024-01-03 20:52:07 +00:00
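A minimal sketch of the "Add compareAndSwap" and "Update lastPrunedEpoch before prune" bullets above: record the target epoch atomically before doing any work, so concurrent or repeated prune calls for the same range back off early. Names and placement are assumptions for illustration, not the actual blob pruner code:

import "sync/atomic"

// lastPrunedEpoch is the highest epoch for which pruning has already started.
// Illustrative only; in the real code this state lives on the pruner.
var lastPrunedEpoch atomic.Uint64

// tryStartPrune marks target as pruned before pruning begins, so a second
// caller for the same (or an older) epoch returns immediately.
func tryStartPrune(target uint64) bool {
	for {
		prev := lastPrunedEpoch.Load()
		if target <= prev {
			return false
		}
		if lastPrunedEpoch.CompareAndSwap(prev, target) {
			return true
		}
	}
}
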
terence tsao
42309121da Use Afero walk 2024-01-03 10:04:19 -08:00
Nishant Das
7e6fd5fd8b Make Reorging Of Late Blocks The Permanent Default (#13405)
* make it the permanent default

* gaz

* fix merge conflicts
2024-01-03 14:46:58 +00:00
Nishant Das
d984210baa fix it (#13404) 2024-01-03 13:54:18 +00:00
Potuz
31c72672d7 Remove the getPayloadAttribute call from updateForkchoiceWithExecution (#13402)
* Remove the getPayloadAttribute call from updateForkchoiceWithExecution

* Move log
2024-01-03 12:43:40 +00:00
Potuz
8c1e180dd1 Simplify fcu 2 (#13400)
* change getPayloadAttribute signature

* Unify different FCU arguments
2024-01-02 22:45:55 +00:00
Manu NALEPA
886d76fe7c Refactor validator client help. (#13401)
* Define `cli.App` without mutation.

No functional change.

* `usage.go`:  Clean `appHelpTemplate`.

No functional change is added.
Modifications consist of adding the prefix/suffix `-` to improve readability of
the template without adding new lines when the template is rendered.

We now see some inconsistencies in the template:
- `if .App.Version` is around the `AUTHOR` section.
- `if .App.Copyright` is around both `COPYRIGHT` and `VERSION` sections.
- `if len .App.Authors` is around nothing.

* `usage.go`: Surround version and author correctly.

* `usage.go`: `AUTHOR` ==> `AUTHORS`

* `usage.go`: `GLOBAL` --> `global`.

* `--grpc-max-msg-size`: Remove double default.

* VC: Standardize help message.

- Flag help messages begin with a capital letter and end with a period.
- If a flag help message begins with a verb, it is conjugated.
- Experimental, danger, etc. mentions are in parentheses.

* VC help message: Wrap lines that are too long.
2024-01-02 18:02:28 +00:00
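As a hedged illustration of the help-message conventions listed above (capitalized, period-terminated, conjugated verbs, parenthesized experimental notes), a flag in the urfave/cli style used by the validator client might be declared as follows; the flag itself is invented for the example:

import "github.com/urfave/cli/v2"

// Example flag only; not an actual Prysm flag.
var exampleEndpointFlag = &cli.StringFlag{
	Name:  "example-endpoint",
	Usage: "Sets the endpoint used by the example feature. (Experimental)",
	Value: "http://localhost:8080",
}
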
Potuz
a602acf492 Remove getPayloadAttributes from FCU call (#13399) 2024-01-02 17:37:18 +00:00
terence
1b6547de6a Add Goerli Deneb Fork Epoch (#13390)
* Add deneb fork epoch

* Fix test
2024-01-02 15:31:57 +00:00
Nishant Das
88685bb3bd Fix Up Builder Evaluator (#13395)
* fix it up

* fix evaluator

* fix evaluator again

* fix it

* gaz
2024-01-02 10:40:26 +00:00
Nishant Das
2319b7d4bd increase params (#13398) 2024-01-02 10:19:59 +00:00
Manu NALEPA
82b2840d68 --validatorS-registration-batch-size (add s) (#13396) 2024-01-02 09:52:14 +00:00
Manu NALEPA
cf221d0f4c Validator client: Always use the --datadir value. (#13392)
Fix https://github.com/prysmaticlabs/prysm/issues/13391
2024-01-02 09:24:24 +00:00
Preston Van Loon
0956e3a657 Update libp2p/go-libp2p-asn-util to v0.4.1 (#13370)
* Update go-libp2p-asn-util to v0.4.1

* fix go mod

---------

Co-authored-by: nisdas <nishdas93@gmail.com>
2024-01-02 07:30:50 +00:00
terence
351ed1c511 Check kzg commitment count from builder (#13394) 2024-01-02 06:50:23 +00:00
Potuz
9809f5ac77 Simplify fcu 1 (#13387)
* Remove unsafe proposer indices cache

* Simplify FCU #1

This PR starts the process of gradually simplifying FCU.
It removes from this function the responsibility of getting the state and
block and of reporting whether the head has changed. The function is now
only called when the imported block has actually become head.

* Add a call to FCU in edge cases
2023-12-30 12:20:20 +00:00
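The diffs further down show what this simplification looks like in practice: callers assemble an fcuConfig (head state, head block, head root, optional attributes) and invoke the update only when the imported block actually became head. A condensed, self-contained sketch of that calling pattern, with stub types standing in for the Prysm ones:

package main

import "context"

// Stub types standing in for state.BeaconState and
// interfaces.ReadOnlySignedBeaconBlock; illustrative only.
type beaconState struct{}
type signedBlock struct{}

// fcuConfig mirrors the argument struct introduced by this series of changes.
type fcuConfig struct {
	headState *beaconState
	headBlock *signedBlock
	headRoot  [32]byte
}

type service struct{ headRoot [32]byte }

func (s *service) isNewHead(r [32]byte) bool { return r != s.headRoot }

// forkchoiceUpdateWithExecution only notifies the engine and saves the head;
// the caller has already fetched the state and block and decided the head changed.
func (s *service) forkchoiceUpdateWithExecution(_ context.Context, args *fcuConfig) error {
	s.headRoot = args.headRoot // notifyForkchoiceUpdate + saveHead would go here
	return nil
}

func main() {
	s := &service{}
	args := &fcuConfig{headState: &beaconState{}, headBlock: &signedBlock{}, headRoot: [32]byte{1}}
	if s.isNewHead(args.headRoot) { // the caller, not the helper, checks for a head change
		_ = s.forkchoiceUpdateWithExecution(context.Background(), args)
	}
}
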
Potuz
cff5e2b5fe Remove unsafe proposer indices cache (#13385) 2023-12-30 12:20:02 +00:00
terence
dd15f9e0cc Rewrite ProposeBlock endpoint (#13380)
* Init

* Tests

* Init

* Tests

* Radek's feedback

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* More Radek's feedback

* Potuz feedback

* Use inline copy

* Fix conflict

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-12-29 23:32:58 +00:00
terence
1c9ded4684 Remove blind field from block type (#13389)
* Init

* Init

* Fix tests
2023-12-29 21:28:19 +00:00
Potuz
d4cc6fcf4a update shuffling caches before calling FCU on epoch boundaries (#13383)
* update shuffling caches before calling FCU on epoch boundaries

* Terence's review
2023-12-28 15:19:09 +00:00
Radosław Kapka
49c16f1a71 Return SignedBeaconBlock from ReadOnlySignedBeaconBlock.Copy (#13386) 2023-12-28 08:55:45 +00:00
terence
e70b606e78 Replace validator count with validator indices in update fee recipient log (#13384)
* Add validator count to updated fee recipient address log

* Add validator count to updated fee recipient address log

* Replace
2023-12-27 16:46:15 +00:00
Potuz
0e8b37c317 Log value of local payload when proposing (#13381) 2023-12-27 14:43:32 +00:00
Potuz
e80db9554d Use advanced epoch cache when preparing proposals (#13377) 2023-12-27 12:42:51 +00:00
Radosław Kapka
d0bf03e863 Simplify error handling for JsonRestHandler (#13369)
* Simplify error handling for `JsonRestHandler`

* POST

* reduce complexity

* review feedback

* uncomment route

* fix rest of tests
2023-12-22 22:39:20 +00:00
Potuz
b7e0819f00 refactor Payload Id caches (#12987)
* init

- getLocalPayload does not use the proposer ID from the cache but takes
  it from the block

- Fixed tests in blockchain package
- Fixed tests in the RPC package
- Fixed spectests

EpochProposers copies 256 bytes that could be avoided, but it is not
clear that this optimization is worth it.

assignmentStatus can be optimized to use the cached version from the
TrackedValidatorsCache

We shouldn't cache the proposer duties when calling getDuties; we should
cache them when we update the epoch boundary instead.

* track validators on prepare proposers

* more rpc tests

* more rpc tests

* initialize grpc caches

* Add back fcu log

Also fixes two existing bugs: a wrong parent hash on pre-Capella blocks and
wrong block hashes on Altair.

* use beacon default fee recipient if there is none in the vc

* fix validator test

* radek's review

* always push proposer settings even if no flag is specified in the VC

* Only register with the builder if the VC flag is set

Great find by @terencechain

* add regression test

* Radek's review

* change signature of registration builder
2023-12-22 18:47:51 +00:00
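The diffs below swap ProposerSlotIndexCache.SetProposerAndPayloadIDs for a PayloadIDCache keyed only by slot and head root (Set(slot, root, pid) / PayloadID(slot, root)). A minimal sketch of such a cache, assuming a plain mutex-guarded map; the real implementation presumably bounds its size and differs in detail:

import "sync"

type payloadID [8]byte

type key struct {
	slot uint64
	root [32]byte
}

// payloadIDCache maps (slot, head root) to the payload ID handed back by the
// execution engine, so the proposer for that slot can later fetch the built payload.
type payloadIDCache struct {
	mu    sync.RWMutex
	items map[key]payloadID
}

func newPayloadIDCache() *payloadIDCache {
	return &payloadIDCache{items: make(map[key]payloadID)}
}

func (c *payloadIDCache) Set(slot uint64, root [32]byte, pid payloadID) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.items[key{slot, root}] = pid
}

func (c *payloadIDCache) PayloadID(slot uint64, root [32]byte) (payloadID, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	pid, ok := c.items[key{slot, root}]
	return pid, ok
}
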
Radosław Kapka
7d64104003 block publishing (#13376) 2023-12-22 18:15:00 +00:00
Nishant Das
b1e8a9ea3d fix it with regression (#13375) 2023-12-22 12:33:23 +00:00
169 changed files with 3098 additions and 2958 deletions

View File

@@ -322,6 +322,22 @@ filegroup(
url = "https://github.com/eth-clients/eth2-networks/archive/7b4897888cebef23801540236f73123e21774954.tar.gz",
)
http_archive(
name = "goerli_testnet",
build_file_content = """
filegroup(
name = "configs",
srcs = [
"prater/config.yaml",
],
visibility = ["//visibility:public"],
)
""",
sha256 = "43fc0f55ddff7b511713e2de07aa22846a67432df997296fb4fc09cd8ed1dcdb",
strip_prefix = "goerli-6522ac6684693740cd4ddcc2a0662e03702aa4a1",
url = "https://github.com/eth-clients/goerli/archive/6522ac6684693740cd4ddcc2a0662e03702aa4a1.tar.gz",
)
http_archive(
name = "holesky_testnet",
build_file_content = """

View File

@@ -275,7 +275,24 @@ func TestClient_GetHeader(t *testing.T) {
require.Equal(t, len(kcgCommitments[i]) == 48, true)
}
})
t.Run("deneb, too many kzg commitments", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, expectedPath, r.URL.Path)
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBufferString(testExampleHeaderResponseDenebTooManyBlobs)),
Request: r.Clone(ctx),
}, nil
}),
}
c := &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
_, err := c.GetHeader(ctx, slot, bytesutil.ToBytes32(parentHash), bytesutil.ToBytes48(pubkey))
require.ErrorContains(t, "could not extract proto message from header: too many blob commitments: 7", err)
})
t.Run("unsupported version", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
@@ -412,7 +429,7 @@ func TestSubmitBlindedBlock(t *testing.T) {
require.ErrorContains(t, "not a bellatrix payload", err)
})
t.Run("not blinded", func(t *testing.T) {
sbb, err := blocks.NewSignedBeaconBlock(&eth.SignedBeaconBlockBellatrix{Block: &eth.BeaconBlockBellatrix{Body: &eth.BeaconBlockBodyBellatrix{}}})
sbb, err := blocks.NewSignedBeaconBlock(&eth.SignedBeaconBlockBellatrix{Block: &eth.BeaconBlockBellatrix{Body: &eth.BeaconBlockBodyBellatrix{ExecutionPayload: &v1.ExecutionPayload{}}}})
require.NoError(t, err)
_, _, err = (&Client{}).SubmitBlindedBlock(ctx, sbb)
require.ErrorIs(t, err, errNotBlinded)

View File

@@ -908,6 +908,9 @@ func (bb *BuilderBidDeneb) ToProto() (*eth.BuilderBidDeneb, error) {
if err != nil {
return nil, err
}
if len(bb.BlobKzgCommitments) > fieldparams.MaxBlobsPerBlock {
return nil, fmt.Errorf("too many blob commitments: %d", len(bb.BlobKzgCommitments))
}
kzgCommitments := make([][]byte, len(bb.BlobKzgCommitments))
for i, commit := range bb.BlobKzgCommitments {
if len(commit) != fieldparams.BLSPubkeyLength {

View File

@@ -209,6 +209,39 @@ var testExampleHeaderResponseUnknownVersion = `{
}
}`
var testExampleHeaderResponseDenebTooManyBlobs = `{
"version": "deneb",
"data": {
"message": {
"header": {
"parent_hash": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"fee_recipient": "0xabcf8e0d4e9587369b2301d0790347320302cc09",
"state_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"receipts_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"logs_bloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"prev_randao": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"block_number": "1",
"gas_limit": "1",
"gas_used": "1",
"timestamp": "1",
"extra_data": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"base_fee_per_gas": "452312848583266388373324160190187140051835877600158453279131187530910662656",
"block_hash": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"transactions_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"withdrawals_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"blob_gas_used": "1",
"excess_blob_gas": "2"
},
"blob_kzg_commitments": [
"","","","","","",""
],
"value": "652312848583266388373324160190187140051835877600158453279131187530910662656",
"pubkey": "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"
},
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
}
}`
func TestExecutionHeaderResponseUnmarshal(t *testing.T) {
hr := &ExecHeaderResponse{}
require.NoError(t, json.Unmarshal([]byte(testExampleHeaderResponse), hr))

View File

@@ -26,6 +26,7 @@ go_library(
"receive_blob.go",
"receive_block.go",
"service.go",
"tracked_proposer.go",
"weak_subjectivity_checks.go",
],
importpath = "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain",
@@ -52,7 +53,6 @@ go_library(
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/db/filters:go_default_library",
"//beacon-chain/db/kv:go_default_library",
"//beacon-chain/execution:go_default_library",
"//beacon-chain/forkchoice:go_default_library",
"//beacon-chain/forkchoice/doubly-linked-tree:go_default_library",

View File

@@ -7,11 +7,11 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/kv"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/execution"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/features"
@@ -32,21 +32,18 @@ const blobCommitmentVersionKZG uint8 = 0x01
var defaultLatestValidHash = bytesutil.PadTo([]byte{0xff}, 32)
// notifyForkchoiceUpdateArg is the argument for the forkchoice update notification `notifyForkchoiceUpdate`.
type notifyForkchoiceUpdateArg struct {
headState state.BeaconState
headRoot [32]byte
headBlock interfaces.ReadOnlyBeaconBlock
}
// notifyForkchoiceUpdate signals execution engine the fork choice updates. Execution engine should:
// 1. Re-organizes the execution payload chain and corresponding state to make head_block_hash the head.
// 2. Applies finality to the execution state: it irreversibly persists the chain of all execution payloads and corresponding state, up to and including finalized_block_hash.
func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *notifyForkchoiceUpdateArg) (*enginev1.PayloadIDBytes, error) {
func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*enginev1.PayloadIDBytes, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.notifyForkchoiceUpdate")
defer span.End()
headBlk := arg.headBlock
if arg.headBlock.IsNil() {
log.Error("Head block is nil")
return nil, nil
}
headBlk := arg.headBlock.Block()
if headBlk == nil || headBlk.IsNil() || headBlk.Body().IsNil() {
log.Error("Head block is nil")
return nil, nil
@@ -72,11 +69,10 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *notifyForkcho
SafeBlockHash: justifiedHash[:],
FinalizedBlockHash: finalizedHash[:],
}
nextSlot := s.CurrentSlot() + 1 // Cache payload ID for next slot proposer.
hasAttr, attr, proposerId := s.getPayloadAttribute(ctx, arg.headState, nextSlot, arg.headRoot[:])
payloadID, lastValidHash, err := s.cfg.ExecutionEngineCaller.ForkchoiceUpdated(ctx, fcs, attr)
if arg.attributes == nil {
arg.attributes = payloadattribute.EmptyWithVersion(headBlk.Version())
}
payloadID, lastValidHash, err := s.cfg.ExecutionEngineCaller.ForkchoiceUpdated(ctx, fcs, arg.attributes)
if err != nil {
switch err {
case execution.ErrAcceptedSyncingPayloadStatus:
@@ -122,10 +118,11 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *notifyForkcho
log.WithError(err).Error("Could not get head state")
return nil, nil
}
pid, err := s.notifyForkchoiceUpdate(ctx, &notifyForkchoiceUpdateArg{
headState: st,
headRoot: r,
headBlock: b.Block(),
pid, err := s.notifyForkchoiceUpdate(ctx, &fcuConfig{
headState: st,
headRoot: r,
headBlock: b,
attributes: arg.attributes,
})
if err != nil {
return nil, err // Returning err because it's recursive here.
@@ -153,7 +150,9 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *notifyForkcho
log.WithError(err).Error("Could not set head root to valid")
return nil, nil
}
// If the forkchoice update call has an attribute, update the proposer payload ID cache.
// If the forkchoice update call has an attribute, update the payload ID cache.
hasAttr := arg.attributes != nil && !arg.attributes.IsEmpty()
nextSlot := s.CurrentSlot() + 1
if hasAttr && payloadID != nil {
var pId [8]byte
copy(pId[:], payloadID[:])
@@ -162,7 +161,7 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *notifyForkcho
"headSlot": headBlk.Slot(),
"payloadID": fmt.Sprintf("%#x", bytesutil.Trunc(payloadID[:])),
}).Info("Forkchoice updated with payload attributes for proposal")
s.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(nextSlot, proposerId, pId, arg.headRoot)
s.cfg.PayloadIDCache.Set(nextSlot, arg.headRoot, pId)
} else if hasAttr && payloadID == nil && !features.Get().PrepareAllPayloads {
log.WithFields(logrus.Fields{
"blockHash": fmt.Sprintf("%#x", headPayload.BlockHash()),
@@ -277,56 +276,50 @@ func (s *Service) pruneInvalidBlock(ctx context.Context, root, parentRoot, lvh [
// getPayloadAttributes returns the payload attributes for the given state and slot.
// The attribute is required to initiate a payload build process in the context of an `engine_forkchoiceUpdated` call.
func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState, slot primitives.Slot, headRoot []byte) (bool, payloadattribute.Attributer, primitives.ValidatorIndex) {
func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState, slot primitives.Slot, headRoot []byte) payloadattribute.Attributer {
emptyAttri := payloadattribute.EmptyWithVersion(st.Version())
// Root is `[32]byte{}` since we are retrieving proposer ID of a given slot. During insertion at assignment the root was not known.
proposerID, _, ok := s.cfg.ProposerSlotIndexCache.GetProposerPayloadIDs(slot, [32]byte{} /* root */)
if !ok && !features.Get().PrepareAllPayloads { // There's no need to build attribute if there is no proposer for slot.
return false, emptyAttri, 0
}
// Get previous randao.
// If it is an epoch boundary then process slots to get the right
// shuffling before checking if the proposer is tracked. Otherwise
// perform this check before. This is cheap as the NSC has already been updated.
var val cache.TrackedValidator
var ok bool
e := slots.ToEpoch(slot)
stateEpoch := slots.ToEpoch(st.Slot())
if e == stateEpoch {
val, ok = s.trackedProposer(st, slot)
if !ok {
return emptyAttri
}
}
st = st.Copy()
if slot > st.Slot() {
var err error
st, err = transition.ProcessSlotsUsingNextSlotCache(ctx, st, headRoot, slot)
if err != nil {
log.WithError(err).Error("Could not process slots to get payload attribute")
return false, emptyAttri, 0
return emptyAttri
}
}
if e > stateEpoch {
emptyAttri := payloadattribute.EmptyWithVersion(st.Version())
val, ok = s.trackedProposer(st, slot)
if !ok {
return emptyAttri
}
}
// Get previous randao.
prevRando, err := helpers.RandaoMix(st, time.CurrentEpoch(st))
if err != nil {
log.WithError(err).Error("Could not get randao mix to get payload attribute")
return false, emptyAttri, 0
}
// Get fee recipient.
feeRecipient := params.BeaconConfig().DefaultFeeRecipient
recipient, err := s.cfg.BeaconDB.FeeRecipientByValidatorID(ctx, proposerID)
switch {
case errors.Is(err, kv.ErrNotFoundFeeRecipient):
if feeRecipient.String() == params.BeaconConfig().EthBurnAddressHex {
logrus.WithFields(logrus.Fields{
"validatorIndex": proposerID,
"burnAddress": params.BeaconConfig().EthBurnAddressHex,
}).Warn("Fee recipient is currently using the burn address, " +
"you will not be rewarded transaction fees on this setting. " +
"Please set a different eth address as the fee recipient. " +
"Please refer to our documentation for instructions")
}
case err != nil:
log.WithError(err).Error("Could not get fee recipient to get payload attribute")
return false, emptyAttri, 0
default:
feeRecipient = recipient
return emptyAttri
}
// Get timestamp.
t, err := slots.ToTime(uint64(s.genesisTime.Unix()), slot)
if err != nil {
log.WithError(err).Error("Could not get timestamp to get payload attribute")
return false, emptyAttri, 0
return emptyAttri
}
var attr payloadattribute.Attributer
@@ -335,51 +328,51 @@ func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState,
withdrawals, err := st.ExpectedWithdrawals()
if err != nil {
log.WithError(err).Error("Could not get expected withdrawals to get payload attribute")
return false, emptyAttri, 0
return emptyAttri
}
attr, err = payloadattribute.New(&enginev1.PayloadAttributesV3{
Timestamp: uint64(t.Unix()),
PrevRandao: prevRando,
SuggestedFeeRecipient: feeRecipient.Bytes(),
SuggestedFeeRecipient: val.FeeRecipient[:],
Withdrawals: withdrawals,
ParentBeaconBlockRoot: headRoot,
})
if err != nil {
log.WithError(err).Error("Could not get payload attribute")
return false, emptyAttri, 0
return emptyAttri
}
case version.Capella:
withdrawals, err := st.ExpectedWithdrawals()
if err != nil {
log.WithError(err).Error("Could not get expected withdrawals to get payload attribute")
return false, emptyAttri, 0
return emptyAttri
}
attr, err = payloadattribute.New(&enginev1.PayloadAttributesV2{
Timestamp: uint64(t.Unix()),
PrevRandao: prevRando,
SuggestedFeeRecipient: feeRecipient.Bytes(),
SuggestedFeeRecipient: val.FeeRecipient[:],
Withdrawals: withdrawals,
})
if err != nil {
log.WithError(err).Error("Could not get payload attribute")
return false, emptyAttri, 0
return emptyAttri
}
case version.Bellatrix:
attr, err = payloadattribute.New(&enginev1.PayloadAttributes{
Timestamp: uint64(t.Unix()),
PrevRandao: prevRando,
SuggestedFeeRecipient: feeRecipient.Bytes(),
SuggestedFeeRecipient: val.FeeRecipient[:],
})
if err != nil {
log.WithError(err).Error("Could not get payload attribute")
return false, emptyAttri, 0
return emptyAttri
}
default:
log.WithField("version", st.Version()).Error("Could not get payload attribute due to unknown state version")
return false, emptyAttri, 0
return emptyAttri
}
return true, attr, proposerID
return attr
}
// removeInvalidBlockAndState removes the invalid block, blob and its corresponding state from the cache and DB.

View File

@@ -26,11 +26,10 @@ import (
"github.com/prysmaticlabs/prysm/v4/testing/assert"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/prysmaticlabs/prysm/v4/testing/util"
logTest "github.com/sirupsen/logrus/hooks/test"
)
func Test_NotifyForkchoiceUpdate_GetPayloadAttrErrorCanContinue(t *testing.T) {
service, tr := minimalTestService(t, WithProposerIdsCache(cache.NewProposerPayloadIDsCache()))
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx, beaconDB, fcs := tr.ctx, tr.db, tr.fcs
altairBlk := util.SaveBlock(t, ctx, beaconDB, util.NewBeaconBlockAltair())
@@ -57,11 +56,14 @@ func Test_NotifyForkchoiceUpdate_GetPayloadAttrErrorCanContinue(t *testing.T) {
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
b, err := consensusblocks.NewBeaconBlock(&ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{},
sb := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{},
},
},
})
}
b, err := consensusblocks.NewSignedBeaconBlock(sb)
require.NoError(t, err)
pid := &v1.PayloadIDBytes{1}
@@ -73,20 +75,20 @@ func Test_NotifyForkchoiceUpdate_GetPayloadAttrErrorCanContinue(t *testing.T) {
// Intentionally generate a bad state such that `hash_tree_root` fails during `process_slot`
s, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{})
require.NoError(t, err)
arg := &notifyForkchoiceUpdateArg{
arg := &fcuConfig{
headState: s,
headRoot: [32]byte{},
headBlock: b,
}
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(1, 0, [8]byte{}, [32]byte{})
service.cfg.PayloadIDCache.Set(1, [32]byte{}, [8]byte{})
got, err := service.notifyForkchoiceUpdate(ctx, arg)
require.NoError(t, err)
require.DeepEqual(t, got, pid) // We still get a payload ID even though the state is bad. This means it returns until the end.
}
func Test_NotifyForkchoiceUpdate(t *testing.T) {
service, tr := minimalTestService(t, WithProposerIdsCache(cache.NewProposerPayloadIDsCache()))
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx, beaconDB, fcs := tr.ctx, tr.db, tr.fcs
altairBlk := util.SaveBlock(t, ctx, beaconDB, util.NewBeaconBlockAltair())
@@ -114,7 +116,7 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
tests := []struct {
name string
blk interfaces.ReadOnlyBeaconBlock
blk interfaces.ReadOnlySignedBeaconBlock
headRoot [32]byte
finalizedRoot [32]byte
justifiedRoot [32]byte
@@ -123,24 +125,24 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
}{
{
name: "phase0 block",
blk: func() interfaces.ReadOnlyBeaconBlock {
b, err := consensusblocks.NewBeaconBlock(&ethpb.BeaconBlock{Body: &ethpb.BeaconBlockBody{}})
blk: func() interfaces.ReadOnlySignedBeaconBlock {
b, err := consensusblocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Body: &ethpb.BeaconBlockBody{}}})
require.NoError(t, err)
return b
}(),
},
{
name: "altair block",
blk: func() interfaces.ReadOnlyBeaconBlock {
b, err := consensusblocks.NewBeaconBlock(&ethpb.BeaconBlockAltair{Body: &ethpb.BeaconBlockBodyAltair{}})
blk: func() interfaces.ReadOnlySignedBeaconBlock {
b, err := consensusblocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockAltair{Block: &ethpb.BeaconBlockAltair{Body: &ethpb.BeaconBlockBodyAltair{}}})
require.NoError(t, err)
return b
}(),
},
{
name: "not execution block",
blk: func() interfaces.ReadOnlyBeaconBlock {
b, err := consensusblocks.NewBeaconBlock(&ethpb.BeaconBlockBellatrix{
blk: func() interfaces.ReadOnlySignedBeaconBlock {
b, err := consensusblocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockBellatrix{Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{
ParentHash: make([]byte, fieldparams.RootLength),
@@ -153,19 +155,19 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
BlockHash: make([]byte, fieldparams.RootLength),
},
},
})
}})
require.NoError(t, err)
return b
}(),
},
{
name: "happy case: finalized root is altair block",
blk: func() interfaces.ReadOnlyBeaconBlock {
b, err := consensusblocks.NewBeaconBlock(&ethpb.BeaconBlockBellatrix{
blk: func() interfaces.ReadOnlySignedBeaconBlock {
b, err := consensusblocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockBellatrix{Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{},
},
})
}})
require.NoError(t, err)
return b
}(),
@@ -174,12 +176,12 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
},
{
name: "happy case: finalized root is bellatrix block",
blk: func() interfaces.ReadOnlyBeaconBlock {
b, err := consensusblocks.NewBeaconBlock(&ethpb.BeaconBlockBellatrix{
blk: func() interfaces.ReadOnlySignedBeaconBlock {
b, err := consensusblocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockBellatrix{Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{},
},
})
}})
require.NoError(t, err)
return b
}(),
@@ -188,12 +190,12 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
},
{
name: "forkchoice updated with optimistic block",
blk: func() interfaces.ReadOnlyBeaconBlock {
b, err := consensusblocks.NewBeaconBlock(&ethpb.BeaconBlockBellatrix{
blk: func() interfaces.ReadOnlySignedBeaconBlock {
b, err := consensusblocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockBellatrix{Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{},
},
})
}})
require.NoError(t, err)
return b
}(),
@@ -203,12 +205,12 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
},
{
name: "forkchoice updated with invalid block",
blk: func() interfaces.ReadOnlyBeaconBlock {
b, err := consensusblocks.NewBeaconBlock(&ethpb.BeaconBlockBellatrix{
blk: func() interfaces.ReadOnlySignedBeaconBlock {
b, err := consensusblocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockBellatrix{Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{},
},
})
}})
require.NoError(t, err)
return b
}(),
@@ -226,7 +228,7 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
st, _ := util.DeterministicGenesisState(t, 1)
require.NoError(t, beaconDB.SaveState(ctx, st, tt.finalizedRoot))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, tt.finalizedRoot))
arg := &notifyForkchoiceUpdateArg{
arg := &fcuConfig{
headState: st,
headRoot: tt.headRoot,
headBlock: tt.blk,
@@ -246,7 +248,7 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
}
func Test_NotifyForkchoiceUpdate_NIlLVH(t *testing.T) {
service, tr := minimalTestService(t, WithProposerIdsCache(cache.NewProposerPayloadIDsCache()))
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx, beaconDB, fcs := tr.ctx, tr.db, tr.fcs
// Prepare blocks
@@ -306,9 +308,9 @@ func Test_NotifyForkchoiceUpdate_NIlLVH(t *testing.T) {
require.NoError(t, beaconDB.SaveState(ctx, st, bra))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, bra))
a := &notifyForkchoiceUpdateArg{
a := &fcuConfig{
headState: st,
headBlock: wbd.Block(),
headBlock: wbd,
headRoot: brd,
}
_, err = service.notifyForkchoiceUpdate(ctx, a)
@@ -334,7 +336,7 @@ func Test_NotifyForkchoiceUpdate_NIlLVH(t *testing.T) {
// 3. the blockchain package calls fcu to obtain heads G -> F -> D.
func Test_NotifyForkchoiceUpdateRecursive_DoublyLinkedTree(t *testing.T) {
service, tr := minimalTestService(t, WithProposerIdsCache(cache.NewProposerPayloadIDsCache()))
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx, beaconDB, fcs := tr.ctx, tr.db, tr.fcs
// Prepare blocks
@@ -443,9 +445,9 @@ func Test_NotifyForkchoiceUpdateRecursive_DoublyLinkedTree(t *testing.T) {
require.NoError(t, beaconDB.SaveState(ctx, st, bra))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, bra))
a := &notifyForkchoiceUpdateArg{
a := &fcuConfig{
headState: st,
headBlock: wbg.Block(),
headBlock: wbg,
headRoot: brg,
}
_, err = service.notifyForkchoiceUpdate(ctx, a)
@@ -467,7 +469,7 @@ func Test_NotifyNewPayload(t *testing.T) {
cfg := params.BeaconConfig()
cfg.TerminalTotalDifficulty = "2"
params.OverrideBeaconConfig(cfg)
service, tr := minimalTestService(t, WithProposerIdsCache(cache.NewProposerPayloadIDsCache()))
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx, fcs := tr.ctx, tr.fcs
phase0State, _ := util.DeterministicGenesisState(t, 1)
@@ -709,7 +711,7 @@ func Test_NotifyNewPayload_SetOptimisticToValid(t *testing.T) {
cfg.TerminalTotalDifficulty = "2"
params.OverrideBeaconConfig(cfg)
service, tr := minimalTestService(t, WithProposerIdsCache(cache.NewProposerPayloadIDsCache()))
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx := tr.ctx
bellatrixState, _ := util.DeterministicGenesisStateBellatrix(t, 2)
@@ -777,83 +779,70 @@ func Test_reportInvalidBlock(t *testing.T) {
}
func Test_GetPayloadAttribute(t *testing.T) {
service, tr := minimalTestService(t, WithProposerIdsCache(cache.NewProposerPayloadIDsCache()))
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx := tr.ctx
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
hasPayload, _, vId := service.getPayloadAttribute(ctx, st, 0, []byte{})
require.Equal(t, false, hasPayload)
require.Equal(t, primitives.ValidatorIndex(0), vId)
attr := service.getPayloadAttribute(ctx, st, 0, []byte{})
require.Equal(t, true, attr.IsEmpty())
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, Index: 0})
// Cache hit, advance state, no fee recipient
suggestedVid := primitives.ValidatorIndex(1)
slot := primitives.Slot(1)
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(slot, suggestedVid, [8]byte{}, [32]byte{})
hook := logTest.NewGlobal()
hasPayload, attr, vId := service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
require.Equal(t, true, hasPayload)
require.Equal(t, suggestedVid, vId)
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
attr = service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
require.Equal(t, false, attr.IsEmpty())
require.Equal(t, params.BeaconConfig().EthBurnAddressHex, common.BytesToAddress(attr.SuggestedFeeRecipient()).String())
require.LogsContain(t, hook, "Fee recipient is currently using the burn address")
// Cache hit, advance state, has fee recipient
suggestedAddr := common.HexToAddress("123")
require.NoError(t, service.cfg.BeaconDB.SaveFeeRecipientsByValidatorIDs(ctx, []primitives.ValidatorIndex{suggestedVid}, []common.Address{suggestedAddr}))
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(slot, suggestedVid, [8]byte{}, [32]byte{})
hasPayload, attr, vId = service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
require.Equal(t, true, hasPayload)
require.Equal(t, suggestedVid, vId)
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, FeeRecipient: primitives.ExecutionAddress(suggestedAddr), Index: 0})
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
attr = service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
require.Equal(t, false, attr.IsEmpty())
require.Equal(t, suggestedAddr, common.BytesToAddress(attr.SuggestedFeeRecipient()))
}
func Test_GetPayloadAttribute_PrepareAllPayloads(t *testing.T) {
hook := logTest.NewGlobal()
resetCfg := features.InitWithReset(&features.Flags{
PrepareAllPayloads: true,
})
defer resetCfg()
service, tr := minimalTestService(t, WithProposerIdsCache(cache.NewProposerPayloadIDsCache()))
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx := tr.ctx
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
hasPayload, attr, vId := service.getPayloadAttribute(ctx, st, 0, []byte{})
require.Equal(t, true, hasPayload)
require.Equal(t, primitives.ValidatorIndex(0), vId)
attr := service.getPayloadAttribute(ctx, st, 0, []byte{})
require.Equal(t, false, attr.IsEmpty())
require.Equal(t, params.BeaconConfig().EthBurnAddressHex, common.BytesToAddress(attr.SuggestedFeeRecipient()).String())
require.LogsContain(t, hook, "Fee recipient is currently using the burn address")
}
func Test_GetPayloadAttributeV2(t *testing.T) {
service, tr := minimalTestService(t, WithProposerIdsCache(cache.NewProposerPayloadIDsCache()))
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx := tr.ctx
st, _ := util.DeterministicGenesisStateCapella(t, 1)
hasPayload, _, vId := service.getPayloadAttribute(ctx, st, 0, []byte{})
require.Equal(t, false, hasPayload)
require.Equal(t, primitives.ValidatorIndex(0), vId)
attr := service.getPayloadAttribute(ctx, st, 0, []byte{})
require.Equal(t, true, attr.IsEmpty())
// Cache hit, advance state, no fee recipient
suggestedVid := primitives.ValidatorIndex(1)
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, Index: 0})
slot := primitives.Slot(1)
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(slot, suggestedVid, [8]byte{}, [32]byte{})
hook := logTest.NewGlobal()
hasPayload, attr, vId := service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
require.Equal(t, true, hasPayload)
require.Equal(t, suggestedVid, vId)
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
attr = service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
require.Equal(t, false, attr.IsEmpty())
require.Equal(t, params.BeaconConfig().EthBurnAddressHex, common.BytesToAddress(attr.SuggestedFeeRecipient()).String())
require.LogsContain(t, hook, "Fee recipient is currently using the burn address")
a, err := attr.Withdrawals()
require.NoError(t, err)
require.Equal(t, 0, len(a))
// Cache hit, advance state, has fee recipient
suggestedAddr := common.HexToAddress("123")
require.NoError(t, service.cfg.BeaconDB.SaveFeeRecipientsByValidatorIDs(ctx, []primitives.ValidatorIndex{suggestedVid}, []common.Address{suggestedAddr}))
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(slot, suggestedVid, [8]byte{}, [32]byte{})
hasPayload, attr, vId = service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
require.Equal(t, true, hasPayload)
require.Equal(t, suggestedVid, vId)
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, FeeRecipient: primitives.ExecutionAddress(suggestedAddr), Index: 0})
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
attr = service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
require.Equal(t, false, attr.IsEmpty())
require.Equal(t, suggestedAddr, common.BytesToAddress(attr.SuggestedFeeRecipient()))
a, err = attr.Withdrawals()
require.NoError(t, err)
@@ -861,35 +850,30 @@ func Test_GetPayloadAttributeV2(t *testing.T) {
}
func Test_GetPayloadAttributeDeneb(t *testing.T) {
service, tr := minimalTestService(t, WithProposerIdsCache(cache.NewProposerPayloadIDsCache()))
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx := tr.ctx
st, _ := util.DeterministicGenesisStateDeneb(t, 1)
hasPayload, _, vId := service.getPayloadAttribute(ctx, st, 0, []byte{})
require.Equal(t, false, hasPayload)
require.Equal(t, primitives.ValidatorIndex(0), vId)
attr := service.getPayloadAttribute(ctx, st, 0, []byte{})
require.Equal(t, true, attr.IsEmpty())
// Cache hit, advance state, no fee recipient
suggestedVid := primitives.ValidatorIndex(1)
slot := primitives.Slot(1)
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(slot, suggestedVid, [8]byte{}, [32]byte{})
hook := logTest.NewGlobal()
hasPayload, attr, vId := service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
require.Equal(t, true, hasPayload)
require.Equal(t, suggestedVid, vId)
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, Index: 0})
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
attr = service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
require.Equal(t, false, attr.IsEmpty())
require.Equal(t, params.BeaconConfig().EthBurnAddressHex, common.BytesToAddress(attr.SuggestedFeeRecipient()).String())
require.LogsContain(t, hook, "Fee recipient is currently using the burn address")
a, err := attr.Withdrawals()
require.NoError(t, err)
require.Equal(t, 0, len(a))
// Cache hit, advance state, has fee recipient
suggestedAddr := common.HexToAddress("123")
require.NoError(t, service.cfg.BeaconDB.SaveFeeRecipientsByValidatorIDs(ctx, []primitives.ValidatorIndex{suggestedVid}, []common.Address{suggestedAddr}))
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(slot, suggestedVid, [8]byte{}, [32]byte{})
hasPayload, attr, vId = service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
require.Equal(t, true, hasPayload)
require.Equal(t, suggestedVid, vId)
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, FeeRecipient: primitives.ExecutionAddress(suggestedAddr), Index: 0})
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
attr = service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
require.Equal(t, false, attr.IsEmpty())
require.Equal(t, suggestedAddr, common.BytesToAddress(attr.SuggestedFeeRecipient()))
a, err = attr.Withdrawals()
require.NoError(t, err)

View File

@@ -8,20 +8,15 @@ import (
"github.com/pkg/errors"
doublylinkedtree "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
payloadattribute "github.com/prysmaticlabs/prysm/v4/consensus-types/payload-attribute"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
func (s *Service) isNewProposer(slot primitives.Slot) bool {
_, _, ok := s.cfg.ProposerSlotIndexCache.GetProposerPayloadIDs(slot, [32]byte{} /* root */)
return ok || features.Get().PrepareAllPayloads
}
func (s *Service) isNewHead(r [32]byte) bool {
s.headLock.RLock()
defer s.headLock.RUnlock()
@@ -49,48 +44,34 @@ func (s *Service) getStateAndBlock(ctx context.Context, r [32]byte) (state.Beaco
return headState, newHeadBlock, nil
}
type fcuConfig struct {
headState state.BeaconState
headBlock interfaces.ReadOnlySignedBeaconBlock
headRoot [32]byte
proposingSlot primitives.Slot
attributes payloadattribute.Attributer
}
// forkchoiceUpdateWithExecution is a wrapper around notifyForkchoiceUpdate. It decides whether a new call to FCU should be made.
// it returns true if the new head is updated
func (s *Service) forkchoiceUpdateWithExecution(ctx context.Context, newHeadRoot [32]byte, proposingSlot primitives.Slot) (bool, error) {
func (s *Service) forkchoiceUpdateWithExecution(ctx context.Context, args *fcuConfig) error {
_, span := trace.StartSpan(ctx, "beacon-chain.blockchain.forkchoiceUpdateWithExecution")
defer span.End()
// Note: Use the service context here to avoid the parent context being ended during a forkchoice update.
ctx = trace.NewContext(s.ctx, span)
isNewHead := s.isNewHead(newHeadRoot)
if !isNewHead {
return false, nil
}
isNewProposer := s.isNewProposer(proposingSlot)
if isNewProposer && !features.Get().DisableReorgLateBlocks {
if s.shouldOverrideFCU(newHeadRoot, proposingSlot) {
return false, nil
}
}
headState, headBlock, err := s.getStateAndBlock(ctx, newHeadRoot)
_, err := s.notifyForkchoiceUpdate(ctx, args)
if err != nil {
log.WithError(err).Error("Could not get forkchoice update argument")
return false, nil
return errors.Wrap(err, "could not notify forkchoice update")
}
_, err = s.notifyForkchoiceUpdate(ctx, &notifyForkchoiceUpdateArg{
headState: headState,
headRoot: newHeadRoot,
headBlock: headBlock.Block(),
})
if err != nil {
return false, errors.Wrap(err, "could not notify forkchoice update")
}
if err := s.saveHead(ctx, newHeadRoot, headBlock, headState); err != nil {
if err := s.saveHead(ctx, args.headRoot, args.headBlock, args.headState); err != nil {
log.WithError(err).Error("could not save head")
}
// Only need to prune attestations from pool if the head has changed.
if err := s.pruneAttsFromPool(headBlock); err != nil {
if err := s.pruneAttsFromPool(args.headBlock); err != nil {
log.WithError(err).Error("could not prune attestations from pool")
}
return true, nil
return nil
}
// shouldOverrideFCU checks whether the incoming block is still subject to being

View File

@@ -17,15 +17,6 @@ import (
logTest "github.com/sirupsen/logrus/hooks/test"
)
func TestService_isNewProposer(t *testing.T) {
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
require.Equal(t, false, service.isNewProposer(service.CurrentSlot()+1))
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(service.CurrentSlot()+1, 0, [8]byte{}, [32]byte{} /* root */)
require.Equal(t, true, service.isNewProposer(service.CurrentSlot()+1))
}
func TestService_isNewHead(t *testing.T) {
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
@@ -67,33 +58,14 @@ func TestService_getHeadStateAndBlock(t *testing.T) {
}
func TestService_forkchoiceUpdateWithExecution_exceptionalCases(t *testing.T) {
hook := logTest.NewGlobal()
ctx := context.Background()
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg.ProposerSlotIndexCache = cache.NewProposerPayloadIDsCache()
_, err = service.forkchoiceUpdateWithExecution(ctx, service.headRoot(), service.CurrentSlot()+1)
require.NoError(t, err)
hookErr := "could not notify forkchoice update"
invalidStateErr := "could not get state summary: could not find block in DB"
require.LogsDoNotContain(t, hook, invalidStateErr)
require.LogsDoNotContain(t, hook, hookErr)
gb, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
require.NoError(t, service.saveInitSyncBlock(ctx, [32]byte{'a'}, gb))
_, err = service.forkchoiceUpdateWithExecution(ctx, [32]byte{'a'}, service.CurrentSlot()+1)
require.NoError(t, err)
require.LogsContain(t, hook, invalidStateErr)
service.cfg.PayloadIDCache = cache.NewPayloadIDCache()
service.cfg.TrackedValidatorsCache = cache.NewTrackedValidatorsCache()
hook.Reset()
service.head = &head{
root: [32]byte{'a'},
block: nil, /* should not panic if notify head uses correct head */
}
// Block in Cache
b := util.NewBeaconBlock()
b.Block.Slot = 2
wsb, err := blocks.NewSignedBeaconBlock(b)
@@ -107,13 +79,7 @@ func TestService_forkchoiceUpdateWithExecution_exceptionalCases(t *testing.T) {
block: wsb,
state: st,
}
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(2, 1, [8]byte{1}, [32]byte{2})
_, err = service.forkchoiceUpdateWithExecution(ctx, r1, service.CurrentSlot())
require.NoError(t, err)
require.LogsDoNotContain(t, hook, invalidStateErr)
require.LogsDoNotContain(t, hook, hookErr)
// Block in DB
service.cfg.PayloadIDCache.Set(2, [32]byte{2}, [8]byte{1})
b = util.NewBeaconBlock()
b.Block.Slot = 3
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b)
@@ -125,25 +91,22 @@ func TestService_forkchoiceUpdateWithExecution_exceptionalCases(t *testing.T) {
block: wsb,
state: st,
}
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(2, 1, [8]byte{1}, [32]byte{2})
_, err = service.forkchoiceUpdateWithExecution(ctx, r1, service.CurrentSlot()+1)
require.NoError(t, err)
require.LogsDoNotContain(t, hook, invalidStateErr)
require.LogsDoNotContain(t, hook, hookErr)
vId, payloadID, has := service.cfg.ProposerSlotIndexCache.GetProposerPayloadIDs(2, [32]byte{2})
require.Equal(t, true, has)
require.Equal(t, primitives.ValidatorIndex(1), vId)
require.Equal(t, [8]byte{1}, payloadID)
service.cfg.PayloadIDCache.Set(2, [32]byte{2}, [8]byte{1})
args := &fcuConfig{
headState: st,
headRoot: r1,
headBlock: wsb,
proposingSlot: service.CurrentSlot() + 1,
}
require.NoError(t, service.forkchoiceUpdateWithExecution(ctx, args))
// Test zero headRoot returns immediately.
headRoot := service.headRoot()
_, err = service.forkchoiceUpdateWithExecution(ctx, [32]byte{}, service.CurrentSlot()+1)
require.NoError(t, err)
require.Equal(t, service.headRoot(), headRoot)
payloadID, has := service.cfg.PayloadIDCache.PayloadID(2, [32]byte{2})
require.Equal(t, true, has)
require.Equal(t, primitives.PayloadID{1}, payloadID)
}
func TestService_forkchoiceUpdateWithExecution_SameHeadRootNewProposer(t *testing.T) {
service, tr := minimalTestService(t)
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx, beaconDB, fcs := tr.ctx, tr.db, tr.fcs
altairBlk := util.SaveBlock(t, ctx, beaconDB, util.NewBeaconBlockAltair())
@@ -182,10 +145,14 @@ func TestService_forkchoiceUpdateWithExecution_SameHeadRootNewProposer(t *testin
service.head.root = r
service.head.block = sb
service.head.state = st
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(service.CurrentSlot()+1, 0, [8]byte{}, [32]byte{} /* root */)
_, err = service.forkchoiceUpdateWithExecution(ctx, r, service.CurrentSlot()+1)
require.NoError(t, err)
service.cfg.PayloadIDCache.Set(service.CurrentSlot()+1, [32]byte{} /* root */, [8]byte{})
args := &fcuConfig{
headState: st,
headBlock: sb,
headRoot: r,
proposingSlot: service.CurrentSlot() + 1,
}
require.NoError(t, service.forkchoiceUpdateWithExecution(ctx, args))
}
func TestShouldOverrideFCU(t *testing.T) {

View File

@@ -69,10 +69,18 @@ func WithDepositCache(c cache.DepositCache) Option {
}
}
// WithProposerIdsCache for proposer id cache.
func WithProposerIdsCache(c *cache.ProposerPayloadIDsCache) Option {
// WithPayloadIDCache for payload ID cache.
func WithPayloadIDCache(c *cache.PayloadIDCache) Option {
return func(s *Service) error {
s.cfg.ProposerSlotIndexCache = c
s.cfg.PayloadIDCache = c
return nil
}
}
// WithTrackedValidatorsCache for tracked validators cache.
func WithTrackedValidatorsCache(c *cache.TrackedValidatorsCache) Option {
return func(s *Service) error {
s.cfg.TrackedValidatorsCache = c
return nil
}
}

View File

@@ -73,6 +73,9 @@ func (s *Service) postBlockProcess(ctx context.Context, signed interfaces.ReadOn
if err != nil {
log.WithError(err).Warn("Could not update head")
}
newBlockHeadElapsedTime.Observe(float64(time.Since(start).Milliseconds()))
proposingSlot := s.CurrentSlot() + 1
var fcuArgs *fcuConfig
if blockRoot != headRoot {
receivedWeight, err := s.cfg.ForkChoiceStore.Weight(blockRoot)
if err != nil {
@@ -88,21 +91,63 @@ func (s *Service) postBlockProcess(ctx context.Context, signed interfaces.ReadOn
"headRoot": fmt.Sprintf("%#x", headRoot),
"headWeight": headWeight,
}).Debug("Head block is not the received block")
headState, headBlock, err := s.getStateAndBlock(ctx, headRoot)
if err != nil {
log.WithError(err).Error("Could not get forkchoice update argument")
return nil
}
fcuArgs = &fcuConfig{
headState: headState,
headBlock: headBlock,
headRoot: headRoot,
proposingSlot: proposingSlot,
}
} else {
fcuArgs = &fcuConfig{
headState: postState,
headBlock: signed,
headRoot: headRoot,
proposingSlot: proposingSlot,
}
}
newBlockHeadElapsedTime.Observe(float64(time.Since(start).Milliseconds()))
// verify conditions for FCU, notifies FCU, and saves the new head.
// This function also prunes attestations, other similar operations happen in prunePostBlockOperationPools.
if _, err := s.forkchoiceUpdateWithExecution(ctx, headRoot, s.CurrentSlot()+1); err != nil {
return err
isEarly := slots.WithinVotingWindow(uint64(s.genesisTime.Unix()))
shouldOverrideFCU := false
slot := postState.Slot()
if s.isNewHead(headRoot) {
// if the block is early send FCU without any payload attributes
if isEarly {
if err := s.forkchoiceUpdateWithExecution(ctx, fcuArgs); err != nil {
return err
}
} else {
// if the block is late lock and update the caches
if blockRoot == headRoot {
if err := transition.UpdateNextSlotCache(ctx, blockRoot[:], postState); err != nil {
return errors.Wrap(err, "could not update next slot state cache")
}
if slots.IsEpochEnd(slot) {
if err := s.handleEpochBoundary(ctx, slot, postState, blockRoot[:]); err != nil {
return errors.Wrap(err, "could not handle epoch boundary")
}
}
}
_, tracked := s.trackedProposer(fcuArgs.headState, proposingSlot)
if tracked {
shouldOverrideFCU = s.shouldOverrideFCU(headRoot, proposingSlot)
fcuArgs.attributes = s.getPayloadAttribute(ctx, fcuArgs.headState, proposingSlot, headRoot[:])
}
if !shouldOverrideFCU {
if err := s.forkchoiceUpdateWithExecution(ctx, fcuArgs); err != nil {
return err
}
}
}
}
optimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(blockRoot)
if err != nil {
log.WithError(err).Debug("Could not check if block is optimistic")
optimistic = true
}
// Send notification of the processed block to the state feed.
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.BlockProcessed,
@@ -114,32 +159,30 @@ func (s *Service) postBlockProcess(ctx context.Context, signed interfaces.ReadOn
Optimistic: optimistic,
},
})
defer reportAttestationInclusion(b)
if headRoot == blockRoot {
// Updating next slot state cache can happen in the background
// except in the epoch boundary in which case we lock to handle
// the shuffling and proposer caches updates.
// We handle these caches only on canonical
// blocks, otherwise this will be handled by lateBlockTasks
slot := postState.Slot()
if slots.IsEpochEnd(slot) {
if err := transition.UpdateNextSlotCache(ctx, blockRoot[:], postState); err != nil {
return errors.Wrap(err, "could not update next slot state cache")
if blockRoot == headRoot && isEarly {
go func() {
slotCtx, cancel := context.WithTimeout(context.Background(), slotDeadline)
defer cancel()
if err := transition.UpdateNextSlotCache(slotCtx, blockRoot[:], postState); err != nil {
log.WithError(err).Error("could not update next slot state cache")
}
if err := s.handleEpochBoundary(ctx, slot, postState, blockRoot[:]); err != nil {
return errors.Wrap(err, "could not handle epoch boundary")
}
} else {
go func() {
slotCtx, cancel := context.WithTimeout(context.Background(), slotDeadline)
defer cancel()
if err := transition.UpdateNextSlotCache(slotCtx, blockRoot[:], postState); err != nil {
log.WithError(err).Error("could not update next slot state cache")
if slots.IsEpochEnd(slot) {
if err := s.handleEpochBoundary(ctx, slot, postState, blockRoot[:]); err != nil {
log.WithError(err).Error("could not handle epoch boundary")
}
}()
}
}
if _, tracked := s.trackedProposer(fcuArgs.headState, proposingSlot); !tracked {
return
}
fcuArgs.attributes = s.getPayloadAttribute(ctx, fcuArgs.headState, proposingSlot, headRoot[:])
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
if _, err := s.notifyForkchoiceUpdate(ctx, fcuArgs); err != nil {
log.WithError(err).Error("could not update forkchoice with payload attributes for proposal")
}
}()
}
defer reportAttestationInclusion(b)
onBlockProcessingTime.Observe(float64(time.Since(startTime).Milliseconds()))
return nil
}
@@ -322,10 +365,10 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
return errors.Wrap(err, "could not set optimistic block to valid")
}
}
arg := &notifyForkchoiceUpdateArg{
arg := &fcuConfig{
headState: preState,
headRoot: lastBR,
headBlock: lastB.Block(),
headBlock: lastB,
}
if _, err := s.notifyForkchoiceUpdate(ctx, arg); err != nil {
return err
@@ -381,9 +424,6 @@ func (s *Service) updateEpochBoundaryCaches(ctx context.Context, st state.Beacon
if err := helpers.UpdateCommitteeCache(slotCtx, st, e+1); err != nil {
log.WithError(err).Warn("Could not update committee cache")
}
if err := helpers.UpdateUnsafeProposerIndicesInCache(slotCtx, st, e+1); err != nil {
log.WithError(err).Warn("Failed to cache next epoch proposers")
}
}()
// The latest block header is from the previous epoch
r, err := st.LatestBlockHeader().HashTreeRoot()
@@ -674,13 +714,17 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
log.WithError(err).Error("lateBlockTasks: could not update epoch boundary caches")
}
s.cfg.ForkChoiceStore.RUnlock()
// Head root should be empty when retrieving proposer index for the next slot.
_, id, has := s.cfg.ProposerSlotIndexCache.GetProposerPayloadIDs(s.CurrentSlot()+1, [32]byte{} /* head root */)
// There exists proposer for next slot, but we haven't called fcu w/ payload attribute yet.
if (!has && !features.Get().PrepareAllPayloads) || id != [8]byte{} {
_, tracked := s.trackedProposer(headState, s.CurrentSlot()+1)
// return early if we are not proposing next slot.
if !tracked {
return
}
// return early if we already started building a block for the current
// head root
_, has := s.cfg.PayloadIDCache.PayloadID(s.CurrentSlot()+1, headRoot)
if has {
return
}
s.headLock.RLock()
headBlock, err := s.headBlock()
if err != nil {
@@ -690,11 +734,13 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
}
s.headLock.RUnlock()
s.cfg.ForkChoiceStore.RLock()
_, err = s.notifyForkchoiceUpdate(ctx, &notifyForkchoiceUpdateArg{
fcuArgs := &fcuConfig{
headState: headState,
headRoot: headRoot,
headBlock: headBlock.Block(),
})
headBlock: headBlock,
}
fcuArgs.attributes = s.getPayloadAttribute(ctx, headState, s.CurrentSlot()+1, headRoot[:])
_, err = s.notifyForkchoiceUpdate(ctx, fcuArgs)
s.cfg.ForkChoiceStore.RUnlock()
if err != nil {
log.WithError(err).Debug("could not perform late block tasks: failed to update forkchoice with engine")

View File

@@ -895,7 +895,7 @@ func Test_validateMergeTransitionBlock(t *testing.T) {
cfg.TerminalBlockHash = params.BeaconConfig().ZeroHash
params.OverrideBeaconConfig(cfg)
service, tr := minimalTestService(t, WithProposerIdsCache(cache.NewProposerPayloadIDsCache()))
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx := tr.ctx
aHash := common.BytesToHash([]byte("a"))
@@ -2045,7 +2045,7 @@ func TestFillMissingBlockPayloadId_PrepareAllPayloads(t *testing.T) {
// Helper function to simulate the block being on time or delayed for proposer
// boost. It alters the genesisTime tracked by the store.
func driftGenesisTime(s *Service, slot, delay int64) {
offset := slot*int64(params.BeaconConfig().SecondsPerSlot) - delay
offset := slot*int64(params.BeaconConfig().SecondsPerSlot) + delay
s.SetGenesisTime(time.Unix(time.Now().Unix()-offset, 0))
}

View File

@@ -9,7 +9,6 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
@@ -122,35 +121,44 @@ func (s *Service) UpdateHead(ctx context.Context, proposingSlot primitives.Slot)
defer s.cfg.ForkChoiceStore.Unlock()
// This function is only called at 10 seconds or 0 seconds into the slot
disparity := params.BeaconConfig().MaximumGossipClockDisparityDuration()
if !features.Get().DisableReorgLateBlocks {
disparity += reorgLateBlockCountAttestations
}
disparity += reorgLateBlockCountAttestations
s.processAttestations(ctx, disparity)
processAttsElapsedTime.Observe(float64(time.Since(start).Milliseconds()))
start = time.Now()
// return early if we haven't changed head
newHeadRoot, err := s.cfg.ForkChoiceStore.Head(ctx)
if err != nil {
log.WithError(err).Error("Could not compute head from new attestations")
// Fallback to our current head root in the event of a failure.
s.headLock.RLock()
newHeadRoot = s.headRoot()
s.headLock.RUnlock()
return
}
if !s.isNewHead(newHeadRoot) {
return
}
log.WithField("newHeadRoot", fmt.Sprintf("%#x", newHeadRoot)).Debug("Head changed due to attestations")
headState, headBlock, err := s.getStateAndBlock(ctx, newHeadRoot)
if err != nil {
log.WithError(err).Error("could not get head block")
return
}
newAttHeadElapsedTime.Observe(float64(time.Since(start).Milliseconds()))
changed, err := s.forkchoiceUpdateWithExecution(s.ctx, newHeadRoot, proposingSlot)
if err != nil {
log.WithError(err).Error("could not update forkchoice")
fcuArgs := &fcuConfig{
headState: headState,
headRoot: newHeadRoot,
headBlock: headBlock,
proposingSlot: proposingSlot,
}
if changed {
s.headLock.RLock()
log.WithFields(logrus.Fields{
"oldHeadRoot": fmt.Sprintf("%#x", s.headRoot()),
"newHeadRoot": fmt.Sprintf("%#x", newHeadRoot),
}).Debug("Head changed due to attestations")
s.headLock.RUnlock()
_, tracked := s.trackedProposer(headState, proposingSlot)
if tracked {
if s.shouldOverrideFCU(newHeadRoot, proposingSlot) {
return
}
fcuArgs.attributes = s.getPayloadAttribute(ctx, headState, proposingSlot, newHeadRoot[:])
}
if err := s.forkchoiceUpdateWithExecution(s.ctx, fcuArgs); err != nil {
log.WithError(err).Error("could not update forkchoice")
}
}

View File

@@ -7,6 +7,7 @@ import (
"time"
blockchainTesting "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/operations/voluntaryexits"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
@@ -130,7 +131,9 @@ func TestService_ReceiveBlock(t *testing.T) {
s, tr := minimalTestService(t,
WithFinalizedStateAtStartUp(genesis),
WithExitPool(voluntaryexits.NewPool()),
WithStateNotifier(&blockchainTesting.MockStateNotifier{RecordEvents: true}))
WithStateNotifier(&blockchainTesting.MockStateNotifier{RecordEvents: true}),
WithTrackedValidatorsCache(cache.NewTrackedValidatorsCache()),
)
beaconDB := tr.db
genesisBlockRoot := bytesutil.ToBytes32(nil)

View File

@@ -73,7 +73,8 @@ type config struct {
ChainStartFetcher execution.ChainStartFetcher
BeaconDB db.HeadAccessDatabase
DepositCache cache.DepositCache
ProposerSlotIndexCache *cache.ProposerPayloadIDsCache
PayloadIDCache *cache.PayloadIDCache
TrackedValidatorsCache *cache.TrackedValidatorsCache
AttPool attestations.Pool
ExitPool voluntaryexits.PoolManager
SlashingPool slashings.PoolManager
@@ -167,7 +168,7 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
checkpointStateCache: cache.NewCheckpointStateCache(),
initSyncBlocks: make(map[[32]byte]interfaces.ReadOnlySignedBeaconBlock),
blobNotifiers: bn,
cfg: &config{ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache()},
cfg: &config{},
blockBeingSynced: &currentlySyncingBlock{roots: make(map[[32]byte]struct{})},
}
for _, opt := range opts {

View File

@@ -100,7 +100,7 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
WithForkChoiceStore(fc),
WithAttestationService(attService),
WithStateGen(stateGen),
WithProposerIdsCache(cache.NewProposerPayloadIDsCache()),
WithPayloadIDCache(cache.NewPayloadIDCache()),
WithClockSynchronizer(startup.NewClockSynchronizer()),
}

View File

@@ -6,6 +6,7 @@ import (
"testing"
"github.com/prysmaticlabs/prysm/v4/async/event"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache/depositcache"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
@@ -114,6 +115,7 @@ func minimalTestService(t *testing.T, opts ...Option) (*Service, *testServiceReq
WithAttestationService(req.attSrv),
WithBLSToExecPool(req.blsPool),
WithDepositCache(dc),
WithTrackedValidatorsCache(cache.NewTrackedValidatorsCache()),
}
// append the variadic opts so they override the defaults by being processed afterwards
opts = append(defOpts, opts...)

View File

@@ -0,0 +1,27 @@
package blockchain
import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
)
// trackedProposer returns whether the beacon node was informed, via the
// validators/prepare_proposer endpoint, of the proposer at the given slot.
// It only returns true if the tracked proposer is present and active.
func (s *Service) trackedProposer(st state.ReadOnlyBeaconState, slot primitives.Slot) (cache.TrackedValidator, bool) {
if features.Get().PrepareAllPayloads {
return cache.TrackedValidator{Active: true}, true
}
id, err := helpers.BeaconProposerIndexAtSlot(s.ctx, st, slot)
if err != nil {
return cache.TrackedValidator{}, false
}
val, ok := s.cfg.TrackedValidatorsCache.Validator(id)
if !ok {
return cache.TrackedValidator{}, false
}
return val, val.Active
}
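Note that with the prepare-all-payloads feature flag enabled the helper reports every slot as tracked, so payload attributes are computed even when no validator registered through the prepare-proposer endpoint. A condensed view of how the call sites in this diff consume it; the surrounding variables are assumed from the caller's scope:

    // Gate proposer-only work (payload attributes, FCU override checks) on the result.
    if _, tracked := s.trackedProposer(headState, proposingSlot); !tracked {
        return // not our proposal next slot, nothing to prepare
    }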

View File

@@ -25,6 +25,7 @@ go_library(
"sync_committee_disabled.go", # keep
"sync_committee_head_state.go",
"sync_subnet_ids.go",
"tracked_validators.go",
],
importpath = "github.com/prysmaticlabs/prysm/v4/beacon-chain/cache",
visibility = [

View File

@@ -1,94 +1,63 @@
package cache
import (
"bytes"
"sync"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
)
const keyLength = 40
const vIdLength = 8
const pIdLength = 8
const vpIdsLength = vIdLength + pIdLength
// RootToPayloadIDMap maps a head root to its corresponding payload ID.
type RootToPayloadIDMap map[[32]byte]primitives.PayloadID
// ProposerPayloadIDsCache is a cache of proposer payload IDs.
// The key is the concatenation of the slot and the block root.
// The value is the concatenation of the proposer and payload IDs, 8 bytes each.
type ProposerPayloadIDsCache struct {
slotToProposerAndPayloadIDs map[[keyLength]byte][vpIdsLength]byte
sync.RWMutex
// PayloadIDCache is a cache that keeps track of the prepared payload ID for the
// given slot and head root.
type PayloadIDCache struct {
slotToPayloadID map[primitives.Slot]RootToPayloadIDMap
sync.Mutex
}
// NewProposerPayloadIDsCache creates a new proposer payload IDs cache.
func NewProposerPayloadIDsCache() *ProposerPayloadIDsCache {
return &ProposerPayloadIDsCache{
slotToProposerAndPayloadIDs: make(map[[keyLength]byte][vpIdsLength]byte),
}
// NewPayloadIDCache returns a new payload ID cache
func NewPayloadIDCache() *PayloadIDCache {
return &PayloadIDCache{slotToPayloadID: make(map[primitives.Slot]RootToPayloadIDMap)}
}
// GetProposerPayloadIDs returns the proposer and payload IDs for the given slot and head root to build the block.
func (f *ProposerPayloadIDsCache) GetProposerPayloadIDs(
slot primitives.Slot,
r [fieldparams.RootLength]byte,
) (primitives.ValidatorIndex, [pIdLength]byte, bool) {
f.RLock()
defer f.RUnlock()
ids, ok := f.slotToProposerAndPayloadIDs[idKey(slot, r)]
// PayloadID returns the payload ID for the given slot and parent block root
func (p *PayloadIDCache) PayloadID(slot primitives.Slot, root [32]byte) (primitives.PayloadID, bool) {
p.Lock()
defer p.Unlock()
inner, ok := p.slotToPayloadID[slot]
if !ok {
return 0, [pIdLength]byte{}, false
return primitives.PayloadID{}, false
}
vId := ids[:vIdLength]
b := ids[vIdLength:]
var pId [pIdLength]byte
copy(pId[:], b)
return primitives.ValidatorIndex(bytesutil.BytesToUint64BigEndian(vId)), pId, true
pid, ok := inner[root]
if !ok {
return primitives.PayloadID{}, false
}
return pid, true
}
// SetProposerAndPayloadIDs sets the proposer and payload IDs for the given slot and head root to build block.
func (f *ProposerPayloadIDsCache) SetProposerAndPayloadIDs(
slot primitives.Slot,
vId primitives.ValidatorIndex,
pId [pIdLength]byte,
r [fieldparams.RootLength]byte,
) {
f.Lock()
defer f.Unlock()
var vIdBytes [vIdLength]byte
copy(vIdBytes[:], bytesutil.Uint64ToBytesBigEndian(uint64(vId)))
var bs [vpIdsLength]byte
copy(bs[:], append(vIdBytes[:], pId[:]...))
k := idKey(slot, r)
ids, ok := f.slotToProposerAndPayloadIDs[k]
// Ok to overwrite if the slot is already set but the cached payload ID is not set.
// This combats the re-org case where payload assignment could change at the start of the epoch.
var byte8 [vIdLength]byte
if !ok || (ok && bytes.Equal(ids[vIdLength:], byte8[:])) {
f.slotToProposerAndPayloadIDs[k] = bs
// Set updates the payload ID for the given slot and head root.
// It also prunes entries more than two slots older than the given slot.
func (p *PayloadIDCache) Set(slot primitives.Slot, root [32]byte, pid primitives.PayloadID) {
p.Lock()
defer p.Unlock()
if slot > 1 {
p.prune(slot - 2)
}
inner, ok := p.slotToPayloadID[slot]
if !ok {
inner = make(RootToPayloadIDMap)
p.slotToPayloadID[slot] = inner
}
inner[root] = pid
}
// PrunePayloadIDs removes the payload ID entries older than input slot.
func (f *ProposerPayloadIDsCache) PrunePayloadIDs(slot primitives.Slot) {
f.Lock()
defer f.Unlock()
for k := range f.slotToProposerAndPayloadIDs {
s := primitives.Slot(bytesutil.BytesToUint64BigEndian(k[:8]))
if slot > s {
delete(f.slotToProposerAndPayloadIDs, k)
// prune removes payload ID entries for slots older than the given slot.
// The caller must hold the cache lock.
func (p *PayloadIDCache) prune(slot primitives.Slot) {
for key := range p.slotToPayloadID {
if key < slot {
delete(p.slotToPayloadID, key)
}
}
}
func idKey(slot primitives.Slot, r [fieldparams.RootLength]byte) [keyLength]byte {
var k [keyLength]byte
copy(k[:], append(bytesutil.Uint64ToBytesBigEndian(uint64(slot)), r[:]...))
return k
}
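A minimal usage sketch of the new cache, using only the methods shown above (Set and PayloadID); the slots, roots and IDs are illustrative:

    c := NewPayloadIDCache()
    root := [32]byte{1, 2, 3}
    c.Set(primitives.Slot(100), root, primitives.PayloadID{0xaa})
    if pid, ok := c.PayloadID(primitives.Slot(100), root); ok {
        _ = pid // a payload is already being built for this slot and head root
    }
    // Setting slot 102 implicitly prunes entries for slots older than 100 (slot - 2).
    c.Set(primitives.Slot(102), [32]byte{4, 5, 6}, primitives.PayloadID{0xbb})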

View File

@@ -8,65 +8,54 @@ import (
)
func TestValidatorPayloadIDsCache_GetAndSaveValidatorPayloadIDs(t *testing.T) {
cache := NewProposerPayloadIDsCache()
cache := NewPayloadIDCache()
var r [32]byte
i, p, ok := cache.GetProposerPayloadIDs(0, r)
p, ok := cache.PayloadID(0, r)
require.Equal(t, false, ok)
require.Equal(t, primitives.ValidatorIndex(0), i)
require.Equal(t, [pIdLength]byte{}, p)
require.Equal(t, primitives.PayloadID{}, p)
slot := primitives.Slot(1234)
vid := primitives.ValidatorIndex(34234324)
pid := [8]byte{1, 2, 3, 3, 7, 8, 7, 8}
pid := primitives.PayloadID{1, 2, 3, 3, 7, 8, 7, 8}
r = [32]byte{1, 2, 3}
cache.SetProposerAndPayloadIDs(slot, vid, pid, r)
i, p, ok = cache.GetProposerPayloadIDs(slot, r)
cache.Set(slot, r, pid)
p, ok = cache.PayloadID(slot, r)
require.Equal(t, true, ok)
require.Equal(t, vid, i)
require.Equal(t, pid, p)
slot = primitives.Slot(9456456)
vid = primitives.ValidatorIndex(6786745)
r = [32]byte{4, 5, 6}
cache.SetProposerAndPayloadIDs(slot, vid, [pIdLength]byte{}, r)
i, p, ok = cache.GetProposerPayloadIDs(slot, r)
cache.Set(slot, r, primitives.PayloadID{})
p, ok = cache.PayloadID(slot, r)
require.Equal(t, true, ok)
require.Equal(t, vid, i)
require.Equal(t, [pIdLength]byte{}, p)
require.Equal(t, primitives.PayloadID{}, p)
// reset cache without pid
slot = primitives.Slot(9456456)
vid = primitives.ValidatorIndex(11111)
r = [32]byte{7, 8, 9}
pid = [8]byte{3, 2, 3, 33, 72, 8, 7, 8}
cache.SetProposerAndPayloadIDs(slot, vid, pid, r)
i, p, ok = cache.GetProposerPayloadIDs(slot, r)
cache.Set(slot, r, pid)
p, ok = cache.PayloadID(slot, r)
require.Equal(t, true, ok)
require.Equal(t, vid, i)
require.Equal(t, pid, p)
// Forked chain
r = [32]byte{1, 2, 3}
i, p, ok = cache.GetProposerPayloadIDs(slot, r)
p, ok = cache.PayloadID(slot, r)
require.Equal(t, false, ok)
require.Equal(t, primitives.ValidatorIndex(0), i)
require.Equal(t, [pIdLength]byte{}, p)
require.Equal(t, primitives.PayloadID{}, p)
// existing pid - no change in cache
// existing pid - change the cache
slot = primitives.Slot(9456456)
vid = primitives.ValidatorIndex(11111)
r = [32]byte{7, 8, 9}
newPid := [8]byte{1, 2, 3, 33, 72, 8, 7, 1}
cache.SetProposerAndPayloadIDs(slot, vid, newPid, r)
i, p, ok = cache.GetProposerPayloadIDs(slot, r)
newPid := primitives.PayloadID{1, 2, 3, 33, 72, 8, 7, 1}
cache.Set(slot, r, newPid)
p, ok = cache.PayloadID(slot, r)
require.Equal(t, true, ok)
require.Equal(t, vid, i)
require.Equal(t, pid, p)
require.Equal(t, newPid, p)
// remove cache entry
cache.PrunePayloadIDs(slot + 1)
i, p, ok = cache.GetProposerPayloadIDs(slot, r)
cache.prune(slot + 1)
p, ok = cache.PayloadID(slot, r)
require.Equal(t, false, ok)
require.Equal(t, primitives.ValidatorIndex(0), i)
require.Equal(t, [pIdLength]byte{}, p)
require.Equal(t, primitives.PayloadID{}, p)
}

View File

@@ -42,17 +42,15 @@ var (
// root would be for slot 32 if present.
type ProposerIndicesCache struct {
sync.Mutex
indices map[primitives.Epoch]map[[32]byte][fieldparams.SlotsPerEpoch]primitives.ValidatorIndex
unsafeIndices map[primitives.Epoch]map[[32]byte][fieldparams.SlotsPerEpoch]primitives.ValidatorIndex
rootMap map[forkchoicetypes.Checkpoint][32]byte // A map from checkpoint root to state root
indices map[primitives.Epoch]map[[32]byte][fieldparams.SlotsPerEpoch]primitives.ValidatorIndex
rootMap map[forkchoicetypes.Checkpoint][32]byte // A map from checkpoint root to state root
}
// NewProposerIndicesCache returns a newly created cache
func NewProposerIndicesCache() *ProposerIndicesCache {
return &ProposerIndicesCache{
indices: make(map[primitives.Epoch]map[[32]byte][fieldparams.SlotsPerEpoch]primitives.ValidatorIndex),
unsafeIndices: make(map[primitives.Epoch]map[[32]byte][fieldparams.SlotsPerEpoch]primitives.ValidatorIndex),
rootMap: make(map[forkchoicetypes.Checkpoint][32]byte),
indices: make(map[primitives.Epoch]map[[32]byte][fieldparams.SlotsPerEpoch]primitives.ValidatorIndex),
rootMap: make(map[forkchoicetypes.Checkpoint][32]byte),
}
}
@@ -74,18 +72,6 @@ func (p *ProposerIndicesCache) ProposerIndices(epoch primitives.Epoch, root [32]
return indices, exists
}
// UnsafeProposerIndices returns the proposer indices (unsafe) for the given root
func (p *ProposerIndicesCache) UnsafeProposerIndices(epoch primitives.Epoch, root [32]byte) ([fieldparams.SlotsPerEpoch]primitives.ValidatorIndex, bool) {
p.Lock()
defer p.Unlock()
inner, ok := p.unsafeIndices[epoch]
if !ok {
return [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex{}, false
}
indices, exists := inner[root]
return indices, exists
}
// Prune resets the ProposerIndicesCache to its initial state
func (p *ProposerIndicesCache) Prune(epoch primitives.Epoch) {
p.Lock()
@@ -95,11 +81,6 @@ func (p *ProposerIndicesCache) Prune(epoch primitives.Epoch) {
delete(p.indices, key)
}
}
for key := range p.unsafeIndices {
if key < epoch {
delete(p.unsafeIndices, key)
}
}
for key := range p.rootMap {
if key.Epoch+1 < epoch {
delete(p.rootMap, key)
@@ -120,18 +101,6 @@ func (p *ProposerIndicesCache) Set(epoch primitives.Epoch, root [32]byte, indice
inner[root] = indices
}
// SetUnsafe sets the unsafe proposer indices for the given root as key
func (p *ProposerIndicesCache) SetUnsafe(epoch primitives.Epoch, root [32]byte, indices [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex) {
p.Lock()
defer p.Unlock()
inner, ok := p.unsafeIndices[epoch]
if !ok {
inner = make(map[[32]byte][fieldparams.SlotsPerEpoch]primitives.ValidatorIndex)
p.unsafeIndices[epoch] = inner
}
inner[root] = indices
}
// SetCheckpoint updates the map from checkpoints to state roots
func (p *ProposerIndicesCache) SetCheckpoint(c forkchoicetypes.Checkpoint, root [32]byte) {
p.Lock()

View File

@@ -4,11 +4,26 @@
package cache
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
forkchoicetypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/types"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
)
var (
// ProposerIndicesCacheMiss tracks the number of proposerIndices requests that aren't present in the cache.
ProposerIndicesCacheMiss = promauto.NewCounter(prometheus.CounterOpts{
Name: "proposer_indices_cache_miss",
Help: "The number of proposer indices requests that aren't present in the cache.",
})
// ProposerIndicesCacheHit tracks the number of proposerIndices requests that are in the cache.
ProposerIndicesCacheHit = promauto.NewCounter(prometheus.CounterOpts{
Name: "proposer_indices_cache_hit",
Help: "The number of proposer indices requests that are present in the cache.",
})
)
// FakeProposerIndicesCache is a struct with 1 queue for looking up proposer indices by root.
type FakeProposerIndicesCache struct {
}

View File

@@ -34,29 +34,6 @@ func TestProposerCache_Set(t *testing.T) {
require.Equal(t, emptyIndices, received)
}
func TestProposerCache_SetUnsafe(t *testing.T) {
cache := NewProposerIndicesCache()
bRoot := [32]byte{'A'}
indices, ok := cache.UnsafeProposerIndices(0, bRoot)
require.Equal(t, false, ok)
emptyIndices := [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex{}
require.Equal(t, indices, emptyIndices, "Expected committee count not to exist in empty cache")
emptyIndices[0] = 1
cache.SetUnsafe(0, bRoot, emptyIndices)
received, ok := cache.UnsafeProposerIndices(0, bRoot)
require.Equal(t, true, ok)
require.Equal(t, received, emptyIndices)
newRoot := [32]byte{'B'}
copy(emptyIndices[3:], []primitives.ValidatorIndex{1, 2, 3, 4, 5, 6})
cache.SetUnsafe(0, newRoot, emptyIndices)
received, ok = cache.UnsafeProposerIndices(0, newRoot)
require.Equal(t, true, ok)
require.Equal(t, emptyIndices, received)
}
func TestProposerCache_CheckpointAndPrune(t *testing.T) {
cache := NewProposerIndicesCache()
indices := [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex{}
@@ -65,7 +42,6 @@ func TestProposerCache_CheckpointAndPrune(t *testing.T) {
copy(indices[3:], []primitives.ValidatorIndex{1, 2, 3, 4, 5, 6})
for i := 1; i < 10; i++ {
cache.Set(primitives.Epoch(i), root, indices)
cache.SetUnsafe(primitives.Epoch(i), root, indices)
cache.SetCheckpoint(forkchoicetypes.Checkpoint{Epoch: primitives.Epoch(i - 1), Root: cpRoot}, root)
}
received, ok := cache.ProposerIndices(1, root)
@@ -80,18 +56,6 @@ func TestProposerCache_CheckpointAndPrune(t *testing.T) {
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.UnsafeProposerIndices(1, root)
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.UnsafeProposerIndices(4, root)
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.UnsafeProposerIndices(9, root)
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 0, Root: cpRoot})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
@@ -123,18 +87,6 @@ func TestProposerCache_CheckpointAndPrune(t *testing.T) {
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.UnsafeProposerIndices(1, root)
require.Equal(t, false, ok)
require.Equal(t, emptyIndices, received)
received, ok = cache.UnsafeProposerIndices(4, root)
require.Equal(t, false, ok)
require.Equal(t, emptyIndices, received)
received, ok = cache.UnsafeProposerIndices(9, root)
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 0, Root: cpRoot})
require.Equal(t, false, ok)
require.Equal(t, emptyIndices, received)

View File

@@ -0,0 +1,43 @@
package cache
import (
"sync"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
)
type TrackedValidator struct {
Active bool
FeeRecipient primitives.ExecutionAddress
Index primitives.ValidatorIndex
}
type TrackedValidatorsCache struct {
sync.Mutex
trackedValidators map[primitives.ValidatorIndex]TrackedValidator
}
func NewTrackedValidatorsCache() *TrackedValidatorsCache {
return &TrackedValidatorsCache{
trackedValidators: make(map[primitives.ValidatorIndex]TrackedValidator),
}
}
func (t *TrackedValidatorsCache) Validator(index primitives.ValidatorIndex) (TrackedValidator, bool) {
t.Lock()
defer t.Unlock()
val, ok := t.trackedValidators[index]
return val, ok
}
func (t *TrackedValidatorsCache) Set(val TrackedValidator) {
t.Lock()
defer t.Unlock()
t.trackedValidators[val.Index] = val
}
func (t *TrackedValidatorsCache) Prune() {
t.Lock()
defer t.Unlock()
t.trackedValidators = make(map[primitives.ValidatorIndex]TrackedValidator)
}
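The new cache carries no doc comments in this diff, so here is a brief usage sketch of the intended flow. The assumption that the prepare-proposer RPC path is what populates it comes from the trackedProposer helper earlier in this diff, not from code shown here:

    tvc := NewTrackedValidatorsCache()
    // Presumably invoked when a validator client registers via prepare_beacon_proposer.
    tvc.Set(TrackedValidator{
        Active:       true,
        Index:        primitives.ValidatorIndex(17),
        FeeRecipient: primitives.ExecutionAddress{},
    })
    if val, ok := tvc.Validator(primitives.ValidatorIndex(17)); ok && val.Active {
        _ = val // the beacon node will prepare payload attributes for this proposer
    }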

View File

@@ -346,7 +346,7 @@ func UpdateProposerIndicesInCache(ctx context.Context, state state.ReadOnlyBeaco
if err != nil {
return err
}
root, err := state.StateRootAtIndex(uint64(slot % params.BeaconConfig().SlotsPerHistoricalRoot))
root, err := StateRootAtSlot(state, slot)
if err != nil {
return err
}
@@ -391,49 +391,6 @@ func UpdateCachedCheckpointToStateRoot(state state.ReadOnlyBeaconState, cp *fork
return nil
}
// UpdateUnsafeProposerIndicesInCache updates proposer indices entry of the
// cache one epoch in advance.
// Input state is used to retrieve active validator indices.
// Input root is to use as key in the cache.
// Input epoch is the epoch to retrieve proposer indices for.
func UpdateUnsafeProposerIndicesInCache(ctx context.Context, state state.ReadOnlyBeaconState, epoch primitives.Epoch) error {
// The cache uses the state root at the end of (current epoch - 2) as key.
// (e.g. for epoch 2, the key is root at slot 31)
if epoch <= params.BeaconConfig().GenesisEpoch+2*params.BeaconConfig().MinSeedLookahead {
return nil
}
slot, err := slots.EpochEnd(epoch - 2)
if err != nil {
return err
}
root, err := state.StateRootAtIndex(uint64(slot % params.BeaconConfig().SlotsPerHistoricalRoot))
if err != nil {
return err
}
// Skip cache update if the key already exists
_, ok := proposerIndicesCache.UnsafeProposerIndices(epoch, [32]byte(root))
if ok {
return nil
}
indices, err := ActiveValidatorIndices(ctx, state, epoch)
if err != nil {
return err
}
proposerIndices, err := precomputeProposerIndices(state, indices, epoch)
if err != nil {
return err
}
if len(proposerIndices) != int(params.BeaconConfig().SlotsPerEpoch) {
return errors.New("invalid proposer length returned from state")
}
// This is here to deal with tests only
var indicesArray [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex
copy(indicesArray[:], proposerIndices)
proposerIndicesCache.Prune(epoch - 2)
proposerIndicesCache.SetUnsafe(epoch, [32]byte(root), indicesArray)
return nil
}
// ClearCache clears the beacon committee cache and sync committee cache.
func ClearCache() {
committeeCache.Clear()

View File

@@ -20,10 +20,14 @@ import (
"go.opencensus.io/trace"
)
var CommitteeCacheInProgressHit = promauto.NewCounter(prometheus.CounterOpts{
Name: "committee_cache_in_progress_hit",
Help: "The number of committee requests that are present in the cache.",
})
var (
CommitteeCacheInProgressHit = promauto.NewCounter(prometheus.CounterOpts{
Name: "committee_cache_in_progress_hit",
Help: "The number of committee requests that are present in the cache.",
})
errProposerIndexMiss = errors.New("proposer index not found in cache")
)
// IsActiveValidator returns the boolean value on whether the validator
// is active or not.
@@ -259,10 +263,32 @@ func ValidatorActivationChurnLimitDeneb(activeValidatorCount uint64) uint64 {
// indices = get_active_validator_indices(state, epoch)
// return compute_proposer_index(state, indices, seed)
func BeaconProposerIndex(ctx context.Context, state state.ReadOnlyBeaconState) (primitives.ValidatorIndex, error) {
e := time.CurrentEpoch(state)
return BeaconProposerIndexAtSlot(ctx, state, state.Slot())
}
// cachedProposerIndexAtSlot returns the proposer index at the given slot from
// the cache at the given root key.
func cachedProposerIndexAtSlot(slot primitives.Slot, root [32]byte) (primitives.ValidatorIndex, error) {
proposerIndices, has := proposerIndicesCache.ProposerIndices(slots.ToEpoch(slot), root)
if !has {
cache.ProposerIndicesCacheMiss.Inc()
return 0, errProposerIndexMiss
}
if len(proposerIndices) != int(params.BeaconConfig().SlotsPerEpoch) {
cache.ProposerIndicesCacheMiss.Inc()
return 0, errProposerIndexMiss
}
return proposerIndices[slot%params.BeaconConfig().SlotsPerEpoch], nil
}
// BeaconProposerIndexAtSlot returns the proposer index at the given slot from the
// point of view of the given state, treated as the head state.
func BeaconProposerIndexAtSlot(ctx context.Context, state state.ReadOnlyBeaconState, slot primitives.Slot) (primitives.ValidatorIndex, error) {
e := slots.ToEpoch(slot)
// The cache key is the state root at the last slot of the previous epoch
// (e.g. starting at epoch 1, slot 32, the key is the root at slot 31).
// For simplicity, the node skips caching for the genesis epoch.
if e > params.BeaconConfig().GenesisEpoch+params.BeaconConfig().MinSeedLookahead {
wantedEpoch := time.PrevEpoch(state)
s, err := slots.EpochEnd(wantedEpoch)
s, err := slots.EpochEnd(e - 1)
if err != nil {
return 0, err
}
@@ -271,12 +297,16 @@ func BeaconProposerIndex(ctx context.Context, state state.ReadOnlyBeaconState) (
return 0, err
}
if r != nil && !bytes.Equal(r, params.BeaconConfig().ZeroHash[:]) {
proposerIndices, ok := proposerIndicesCache.ProposerIndices(wantedEpoch+1, bytesutil.ToBytes32(r))
if ok {
return proposerIndices[state.Slot()%params.BeaconConfig().SlotsPerEpoch], nil
pid, err := cachedProposerIndexAtSlot(slot, [32]byte(r))
if err == nil {
return pid, nil
}
if err := UpdateProposerIndicesInCache(ctx, state, time.CurrentEpoch(state)); err != nil {
return 0, errors.Wrap(err, "could not update committee cache")
if err := UpdateProposerIndicesInCache(ctx, state, e); err != nil {
return 0, errors.Wrap(err, "could not update proposer index cache")
}
pid, err = cachedProposerIndexAtSlot(slot, [32]byte(r))
if err == nil {
return pid, nil
}
}
}
@@ -286,7 +316,7 @@ func BeaconProposerIndex(ctx context.Context, state state.ReadOnlyBeaconState) (
return 0, errors.Wrap(err, "could not generate seed")
}
seedWithSlot := append(seed[:], bytesutil.Bytes8(uint64(state.Slot()))...)
seedWithSlot := append(seed[:], bytesutil.Bytes8(uint64(slot))...)
seedWithSlotHash := hash.Hash(seedWithSlot)
indices, err := ActiveValidatorIndices(ctx, state, e)
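To make the cache-key comment above concrete: with mainnet parameters (32 slots per epoch, MIN_SEED_LOOKAHEAD of 1), a proposer lookup for any slot in epoch 1 (slots 32-63) is keyed by the root stored for slot 31. A worked example using only helpers visible in this hunk; the slot value is illustrative:

    // For slot 40 (epoch 1):
    e := slots.ToEpoch(primitives.Slot(40)) // epoch 1
    keySlot, _ := slots.EpochEnd(e - 1)     // slot 31, the last slot of epoch 0
    // The cached entry holds one index per slot of the epoch,
    // so slot 40 maps to position 40 % 32 = 8 within it.
    _ = keySlot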

View File

@@ -6,6 +6,7 @@ go_library(
"blob.go",
"ephemeral.go",
"metrics.go",
"pruner.go",
],
importpath = "github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filesystem",
visibility = ["//visibility:public"],
@@ -19,7 +20,6 @@ go_library(
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/logging:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
@@ -40,7 +40,6 @@ go_test(
"//proto/prysm/v1alpha1:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_spf13_afero//:go_default_library",

View File

@@ -1,27 +1,21 @@
package filesystem
import (
"encoding/binary"
"fmt"
"io"
"os"
"path"
"path/filepath"
"strconv"
"strings"
"time"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/verification"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/io/file"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/runtime/logging"
"github.com/prysmaticlabs/prysm/v4/time/slots"
log "github.com/sirupsen/logrus"
"github.com/spf13/afero"
)
@@ -34,18 +28,21 @@ const (
sszExt = "ssz"
partExt = "part"
firstPruneEpoch = 0
bufferEpochs = 2
directoryPermissions = 0700
)
// BlobStorageOption is a functional option for configuring a BlobStorage.
type BlobStorageOption func(*BlobStorage)
type BlobStorageOption func(*BlobStorage) error
// WithBlobRetentionEpochs is an option that changes the number of epochs blobs will be persisted.
func WithBlobRetentionEpochs(e primitives.Epoch) BlobStorageOption {
return func(b *BlobStorage) {
b.retentionEpochs = e
return func(b *BlobStorage) error {
pruner, err := newblobPruner(b.fs, e)
if err != nil {
return err
}
b.pruner = pruner
return nil
}
}
@@ -59,21 +56,20 @@ func NewBlobStorage(base string, opts ...BlobStorageOption) (*BlobStorage, error
}
fs := afero.NewBasePathFs(afero.NewOsFs(), base)
b := &BlobStorage{
fs: fs,
retentionEpochs: params.BeaconConfig().MinEpochsForBlobsSidecarsRequest,
lastPrunedEpoch: firstPruneEpoch,
fs: fs,
}
for _, o := range opts {
o(b)
if err := o(b); err != nil {
return nil, fmt.Errorf("failed to create blob storage at %s: %w", base, err)
}
}
return b, nil
}
// BlobStorage is the concrete implementation of the filesystem backend for saving and retrieving BlobSidecars.
type BlobStorage struct {
fs afero.Fs
retentionEpochs primitives.Epoch
lastPrunedEpoch primitives.Epoch
fs afero.Fs
pruner *blobPruner
}
// Save saves blobs given a list of sidecars.
@@ -89,13 +85,8 @@ func (bs *BlobStorage) Save(sidecar blocks.VerifiedROBlob) error {
log.WithFields(logging.BlobFields(sidecar.ROBlob)).Debug("ignoring a duplicate blob sidecar Save attempt")
return nil
}
if bs.shouldPrune(sidecar.Slot()) {
go func() {
err := bs.pruneOlderThan(sidecar.Slot())
if err != nil {
log.WithError(err).Errorf("failed to prune blobs from slot %d", sidecar.Slot())
}
}()
if bs.pruner != nil {
bs.pruner.try(sidecar.BlockRoot(), sidecar.Slot())
}
// Serialize the ethpb.BlobSidecar to binary data using SSZ.
@@ -220,7 +211,7 @@ func namerForSidecar(sc blocks.VerifiedROBlob) blobNamer {
}
func (p blobNamer) dir() string {
return fmt.Sprintf("%#x", p.root)
return rootString(p.root)
}
func (p blobNamer) fname(ext string) string {
@@ -235,117 +226,6 @@ func (p blobNamer) path() string {
return p.fname(sszExt)
}
// Prune prunes blobs in the base directory based on the retention epoch.
// It deletes blobs older than currentEpoch - (retentionEpochs+bufferEpochs).
// This is so that we keep a slight buffer and blobs are deleted after n+2 epochs.
func (bs *BlobStorage) Prune(currentSlot primitives.Slot) error {
t := time.Now()
retentionSlots, err := slots.EpochStart(bs.retentionEpochs + bufferEpochs)
if err != nil {
return err
}
if currentSlot < retentionSlots {
return nil // Overflow would occur
}
log.Debug("Pruning old blobs")
folders, err := afero.ReadDir(bs.fs, ".")
if err != nil {
return err
}
var totalPruned int
for _, folder := range folders {
if folder.IsDir() {
num, err := bs.processFolder(folder, currentSlot, retentionSlots)
if err != nil {
return err
}
blobsPrunedCounter.Add(float64(num))
blobsTotalGauge.Add(-float64(num))
totalPruned += num
}
}
pruneTime := time.Since(t)
log.WithFields(log.Fields{
"lastPrunedEpoch": slots.ToEpoch(currentSlot - retentionSlots),
"pruneTime": pruneTime,
"numberBlobsPruned": totalPruned,
}).Debug("Pruned old blobs")
return nil
}
// processFolder will delete the folder of blobs if the blob slot is outside the
// retention period. We determine the slot by looking at the first blob in the folder.
func (bs *BlobStorage) processFolder(folder os.FileInfo, currentSlot, retentionSlots primitives.Slot) (int, error) {
f, err := bs.fs.Open(filepath.Join(folder.Name(), "0."+sszExt))
if err != nil {
return 0, err
}
defer func() {
if err := f.Close(); err != nil {
log.WithError(err).Errorf("Could not close blob file")
}
}()
slot, err := slotFromBlob(f)
if err != nil {
return 0, err
}
var num int
if slot < (currentSlot - retentionSlots) {
num, err = bs.countFiles(folder.Name())
if err != nil {
return 0, err
}
if err = bs.fs.RemoveAll(folder.Name()); err != nil {
return 0, errors.Wrapf(err, "failed to delete blob %s", f.Name())
}
}
return num, nil
}
// slotFromBlob reads the ssz data of a file at the specified offset (8 + 131072 + 48 + 48 = 131176 bytes),
// which is calculated based on the size of the BlobSidecar struct and is based on the size of the fields
// preceding the slot information within SignedBeaconBlockHeader.
func slotFromBlob(at io.ReaderAt) (primitives.Slot, error) {
b := make([]byte, 8)
_, err := at.ReadAt(b, 131176)
if err != nil {
return 0, err
}
rawSlot := binary.LittleEndian.Uint64(b)
return primitives.Slot(rawSlot), nil
}
// Delete removes the directory matching the provided block root and all the blobs it contains.
func (bs *BlobStorage) Delete(root [32]byte) error {
if err := bs.fs.RemoveAll(hexutil.Encode(root[:])); err != nil {
return fmt.Errorf("failed to delete blobs for root %#x: %w", root, err)
}
return nil
}
// shouldPrune checks whether pruning should be triggered based on the given slot.
func (bs *BlobStorage) shouldPrune(slot primitives.Slot) bool {
if slots.SinceEpochStarts(slot) < params.BeaconConfig().SlotsPerEpoch/2 {
return false
}
if slots.ToEpoch(slot) == bs.lastPrunedEpoch {
return false
}
return true
}
// pruneOlderThan prunes blobs in the base directory based on the retention epoch and current slot.
func (bs *BlobStorage) pruneOlderThan(slot primitives.Slot) error {
err := bs.Prune(slot)
if err != nil {
return err
}
// Update lastPrunedEpoch to the current epoch.
bs.lastPrunedEpoch = slots.ToEpoch(slot)
return nil
func rootString(root [32]byte) string {
return fmt.Sprintf("%#x", root)
}
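Because WithBlobRetentionEpochs now constructs the pruner, the option can fail and NewBlobStorage surfaces that error. A small usage sketch, assuming only the constructor and option shown above; the directory path is illustrative:

    store, err := NewBlobStorage(
        "/var/lib/prysm/blobs", // illustrative path
        WithBlobRetentionEpochs(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest),
    )
    if err != nil {
        log.WithError(err).Fatal("could not initialize blob storage")
    }
    _ = store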

View File

@@ -7,7 +7,6 @@ import (
"testing"
"time"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/verification"
@@ -79,7 +78,7 @@ func TestBlobStorage_SaveBlobData(t *testing.T) {
require.NoError(t, err)
// Slot in first half of epoch therefore should not prune
require.Equal(t, false, bs.shouldPrune(testSidecars[0].Slot()))
bs.pruner.try(testSidecars[0].BlockRoot(), testSidecars[0].Slot())
err = bs.Save(testSidecars[0])
require.NoError(t, err)
actual, err := bs.Get(testSidecars[0].BlockRoot(), testSidecars[0].Index)
@@ -92,7 +91,7 @@ func TestBlobStorage_SaveBlobData(t *testing.T) {
testSidecars1, err := verification.BlobSidecarSliceNoop(sidecars)
require.NoError(t, err)
// Slot in first half of epoch therefore should not prune
require.Equal(t, false, bs.shouldPrune(testSidecars1[0].Slot()))
bs.pruner.try(testSidecars1[0].BlockRoot(), testSidecars1[0].Slot())
err = bs.Save(testSidecars1[0])
require.NoError(t, err)
// Check previous saved sidecar was not pruned
@@ -107,13 +106,13 @@ func TestBlobStorage_SaveBlobData(t *testing.T) {
require.NoError(t, err)
_, sidecars = util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 131187, fieldparams.MaxBlobsPerBlock)
testSidecars, err = verification.BlobSidecarSliceNoop(sidecars)
testSidecars2, err := verification.BlobSidecarSliceNoop(sidecars)
// Slot in second half of epoch therefore should prune
require.Equal(t, true, bs.shouldPrune(testSidecars[0].Slot()))
bs.pruner.try(testSidecars2[0].BlockRoot(), testSidecars2[0].Slot())
require.NoError(t, err)
err = bs.Save(testSidecars[0])
err = bs.Save(testSidecars2[0])
require.NoError(t, err)
err = pollUntil(t, fs, 1)
err = pollUntil(t, fs, 3)
require.NoError(t, err)
})
}
@@ -143,26 +142,6 @@ func pollUntil(t *testing.T, fs afero.Fs, expected int) error {
return nil
}
func TestShouldPrune(t *testing.T) {
bs := NewEphemeralBlobStorage(t)
bs.lastPrunedEpoch = 5
// Slot is before the midpoint of the epoch
slot1 := primitives.Slot(100)
p1 := bs.shouldPrune(slot1)
require.Equal(t, false, p1)
// Slot is after the midpoint of the epoch, but same epoch as last pruning
slot2 := primitives.Slot(178)
p2 := bs.shouldPrune(slot2)
require.Equal(t, false, p2)
// Slot is after the midpoint of the epoch
slot3 := primitives.Slot(8018)
p3 := bs.shouldPrune(slot3)
require.Equal(t, true, p3)
}
func TestBlobIndicesBounds(t *testing.T) {
fs, bs, err := NewEphemeralBlobStorageWithFs(t)
require.NoError(t, err)
@@ -208,7 +187,7 @@ func TestBlobStoragePrune(t *testing.T) {
require.NoError(t, bs.Save(sidecar))
}
require.NoError(t, bs.Prune(currentSlot))
require.NoError(t, bs.pruner.prune(currentSlot-bs.pruner.retain))
remainingFolders, err := afero.ReadDir(fs, ".")
require.NoError(t, err)
@@ -228,7 +207,7 @@ func TestBlobStoragePrune(t *testing.T) {
slot += 10000
}
require.NoError(t, bs.Prune(currentSlot))
require.NoError(t, bs.pruner.prune(currentSlot-bs.pruner.retain))
remainingFolders, err := afero.ReadDir(fs, ".")
require.NoError(t, err)
@@ -257,41 +236,11 @@ func BenchmarkPruning(b *testing.B) {
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := bs.Prune(currentSlot)
err := bs.pruner.prune(currentSlot)
require.NoError(b, err)
}
}
func TestBlobStorageDelete(t *testing.T) {
fs, bs, err := NewEphemeralBlobStorageWithFs(t)
require.NoError(t, err)
rawRoot := "0xcf9bb70c98f58092c9d6459227c9765f984d240be9690e85179bc5a6f60366ad"
blockRoot, err := hexutil.Decode(rawRoot)
require.NoError(t, err)
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, fieldparams.MaxBlobsPerBlock)
testSidecars, err := verification.BlobSidecarSliceNoop(sidecars)
require.NoError(t, err)
for _, sidecar := range testSidecars {
require.NoError(t, bs.Save(sidecar))
}
exists, err := afero.DirExists(fs, hexutil.Encode(blockRoot))
require.NoError(t, err)
require.Equal(t, true, exists)
// Delete the directory corresponding to the block root
require.NoError(t, bs.Delete(bytesutil.ToBytes32(blockRoot)))
// Ensure that the directory no longer exists after deletion
exists, err = afero.DirExists(fs, hexutil.Encode(blockRoot))
require.NoError(t, err)
require.Equal(t, false, exists)
// Deleting a non-existent root does not return an error.
require.NoError(t, bs.Delete(bytesutil.ToBytes32([]byte{0x1})))
}
func TestNewBlobStorage(t *testing.T) {
_, err := NewBlobStorage(path.Join(t.TempDir(), "good"))
require.NoError(t, err)

View File

@@ -10,16 +10,24 @@ import (
// NewEphemeralBlobStorage should only be used for tests.
// The instance of BlobStorage returned is backed by an in-memory virtual filesystem,
// improving test performance and simplifying cleanup.
func NewEphemeralBlobStorage(_ testing.TB) *BlobStorage {
return &BlobStorage{fs: afero.NewMemMapFs()}
func NewEphemeralBlobStorage(t testing.TB) *BlobStorage {
fs := afero.NewMemMapFs()
pruner, err := newblobPruner(fs, params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
if err != nil {
t.Fatal("test setup issue", err)
}
return &BlobStorage{fs: fs, pruner: pruner}
}
// NewEphemeralBlobStorageWithFs can be used by tests that want access to the virtual filesystem
// in order to interact with it outside the parameters of the BlobStorage api.
func NewEphemeralBlobStorageWithFs(_ testing.TB) (afero.Fs, *BlobStorage, error) {
func NewEphemeralBlobStorageWithFs(t testing.TB) (afero.Fs, *BlobStorage, error) {
fs := afero.NewMemMapFs()
retentionEpoch := params.BeaconConfig().MinEpochsForBlobsSidecarsRequest
return fs, &BlobStorage{fs: fs, retentionEpochs: retentionEpoch}, nil
pruner, err := newblobPruner(fs, params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
if err != nil {
t.Fatal("test setup issue", err)
}
return fs, &BlobStorage{fs: fs, pruner: pruner}, nil
}
type BlobMocker struct {

View File

@@ -5,7 +5,6 @@ import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/spf13/afero"
)
@@ -31,10 +30,7 @@ var (
})
)
func (bs *BlobStorage) Initialize(lastFinalizedSlot primitives.Slot) error {
if err := bs.Prune(lastFinalizedSlot); err != nil {
return fmt.Errorf("failed to prune from finalized slot %d: %w", lastFinalizedSlot, err)
}
func (bs *BlobStorage) Initialize() error {
if err := bs.collectTotalBlobMetric(); err != nil {
return fmt.Errorf("failed to initialize blob metrics: %w", err)
}

View File

@@ -0,0 +1,248 @@
package filesystem
import (
"encoding/binary"
"io"
"path/filepath"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/pkg/errors"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/time/slots"
log "github.com/sirupsen/logrus"
"github.com/spf13/afero"
)
const retentionBuffer primitives.Epoch = 2
var (
errBlobSlotUnknown = errors.New("could not determine blob slot from files in storage")
errPruningFailures = errors.New("blobs could not be pruned for some roots")
)
type blobPruner struct {
sync.Mutex
slotMap *slotForRoot
retain primitives.Slot
fs afero.Fs
prunedBefore atomic.Uint64
}
func newblobPruner(fs afero.Fs, retain primitives.Epoch) (*blobPruner, error) {
r, err := slots.EpochStart(retain + retentionBuffer)
if err != nil {
return nil, errors.Wrap(err, "could not set retentionSlots")
}
return &blobPruner{fs: fs, retain: r, slotMap: newSlotForRoot()}, nil
}
// try records the slot for the given root and, if the prune target slot has
// advanced, kicks off prune in a goroutine.
func (p *blobPruner) try(root [32]byte, latest primitives.Slot) {
p.slotMap.ensure(rootString(root), latest)
pruned := uint64(pruneBefore(latest, p.retain))
if p.prunedBefore.Swap(pruned) == pruned {
return
}
go func() {
if err := p.prune(primitives.Slot(pruned)); err != nil {
log.WithError(err).Errorf("failed to prune blobs from slot %d", latest)
}
}()
}
func pruneBefore(latest primitives.Slot, offset primitives.Slot) primitives.Slot {
// Safely compute the first slot in the epoch for the latest slot
latest = latest - latest%params.BeaconConfig().SlotsPerEpoch
if latest < offset {
return 0
}
return latest - offset
}
// prune removes blobs for slots older than the given pruneBefore slot.
// Callers compute pruneBefore as the current epoch start minus
// (retentionEpochs+retentionBuffer) slots, keeping a slight buffer so blobs
// are deleted only after n+2 epochs.
func (p *blobPruner) prune(pruneBefore primitives.Slot) error {
log.Debug("Pruning old blobs")
start := time.Now()
totalPruned, totalErr := 0, 0
defer func() {
log.WithFields(log.Fields{
"lastPrunedEpoch": slots.ToEpoch(pruneBefore),
"pruneTime": time.Since(start).String(),
"numberBlobsPruned": totalPruned,
}).Debug("Pruned old blobs")
blobsPrunedCounter.Add(float64(totalPruned))
}()
dirs, err := p.listDir(".", filterRoot)
if err != nil {
return errors.Wrap(err, "unable to list root blobs directory")
}
for _, dir := range dirs {
pruned, err := p.tryPruneDir(dir, pruneBefore)
if err != nil {
totalErr += 1
log.WithError(err).WithField("directory", dir).Error("unable to prune directory")
}
totalPruned += pruned
}
if totalErr > 0 {
return errors.Wrapf(errPruningFailures, "pruning failed for %d root directories", totalErr)
}
return nil
}
// directoryMeta tries a few different ways to determine the slot for the given directory.
// The second return value will be nil if the function did not need to list the directory, or
// non-nil with the list of files if it did.
func (p *blobPruner) directoryMeta(dir string) (primitives.Slot, []string, error) {
root := filepath.Base(dir) // end of the path should be the blob directory, named by hex encoding of root
// First try the cheap map lookup.
slot, ok := p.slotMap.slot(root)
if ok {
return slot, nil, nil
}
// Next try constructing the path to the zero index blob, which will always be present unless
// the blob directory has been damaged by something like a restart during RemoveAll.
slot, err := slotFromFile(filepath.Join(dir, "0."+sszExt), p.fs)
if err == nil {
p.slotMap.ensure(root, slot)
return slot, nil, nil
}
// Fall back if getting the slot from index zero failed -- look for any ssz file.
files, err := p.listDir(dir, filterSsz)
if err != nil {
return 0, nil, errors.Wrapf(err, "failed to list blobs in directory %s", dir)
}
if len(files) == 0 {
return 0, files, errors.Wrapf(errBlobSlotUnknown, "contained no blob files")
}
slot, err = slotFromFile(files[0], p.fs)
if err != nil {
return 0, nil, errors.Wrapf(err, "slot could not be read from blob file %s", files[0])
}
p.slotMap.ensure(root, slot)
return slot, files, nil
}
// tryPruneDir will delete the directory of blobs if the blob slot is outside the
// retention period. We determine the slot by looking at the first blob in the directory.
func (p *blobPruner) tryPruneDir(dir string, pruneBefore primitives.Slot) (int, error) {
slot, files, err := p.directoryMeta(dir)
if err != nil {
return 0, errors.Wrapf(err, "could not determine slot for directory %s", dir)
}
if slot >= pruneBefore {
return 0, nil
}
if len(files) == 0 {
files, err = p.listDir(dir, filterSsz)
if err != nil {
return 0, errors.Wrapf(err, "failed to list blobs in directory %s", dir)
}
}
if err = p.fs.RemoveAll(dir); err != nil {
return 0, errors.Wrapf(err, "failed to delete blobs in %s", dir)
}
return len(files), nil
}
func slotFromFile(file string, fs afero.Fs) (primitives.Slot, error) {
f, err := fs.Open(file)
if err != nil {
return 0, err
}
defer func() {
if err := f.Close(); err != nil {
log.WithError(err).Errorf("Could not close blob file")
}
}()
return slotFromBlob(f)
}
// slotFromBlob reads the ssz data of a file at offset 131176 (8 + 131072 + 48 + 48),
// which is the combined size of the BlobSidecar fields preceding the slot field
// of the embedded SignedBeaconBlockHeader.
func slotFromBlob(at io.ReaderAt) (primitives.Slot, error) {
b := make([]byte, 8)
_, err := at.ReadAt(b, 131176)
if err != nil {
return 0, err
}
rawSlot := binary.LittleEndian.Uint64(b)
return primitives.Slot(rawSlot), nil
}
func (p *blobPruner) listDir(dir string, filter func(string) bool) ([]string, error) {
top, err := p.fs.Open(dir)
if err != nil {
return nil, errors.Wrap(err, "failed to open directory descriptor")
}
// Close is deferred only after the error check so a failed Open cannot leave
// top nil when the deferred Close runs.
defer func() {
if err := top.Close(); err != nil {
log.WithError(err).Errorf("Could not close file %s", dir)
}
}()
dirs, err := top.Readdirnames(-1)
if err != nil {
return nil, errors.Wrap(err, "failed to read directory listing")
}
if filter != nil {
filtered := make([]string, 0, len(dirs))
for i := range dirs {
if filter(dirs[i]) {
filtered = append(filtered, dirs[i])
}
}
return filtered, nil
}
return dirs, nil
}
func filterRoot(s string) bool {
return strings.HasPrefix(s, "0x")
}
func filterSsz(s string) bool {
return filepath.Ext(s) == sszExt
}
func newSlotForRoot() *slotForRoot {
return &slotForRoot{
cache: make(map[string]primitives.Slot, params.BeaconConfig().MinEpochsForBlobsSidecarsRequest*fieldparams.SlotsPerEpoch),
}
}
type slotForRoot struct {
sync.RWMutex
cache map[string]primitives.Slot
}
func (s *slotForRoot) ensure(key string, slot primitives.Slot) {
s.Lock()
defer s.Unlock()
s.cache[key] = slot
}
func (s *slotForRoot) slot(key string) (primitives.Slot, bool) {
s.RLock()
defer s.RUnlock()
slot, ok := s.cache[key]
return slot, ok
}
func (s *slotForRoot) evict(key string) {
s.Lock()
defer s.Unlock()
delete(s.cache, key)
}
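Two of the constants above are easy to verify by hand. The 131176-byte offset read by slotFromBlob is the combined size of the BlobSidecar fields that precede the slot of the embedded header, and the prune target computed by pruneBefore works out as below with mainnet parameters; this is a worked example, not additional code in this change:

    // Offset of the slot field inside an SSZ-encoded BlobSidecar.
    const (
        indexSize      = 8      // uint64 index
        blobSize       = 131072 // 4096 field elements of 32 bytes each
        commitmentSize = 48     // KZG commitment
        proofSize      = 48     // KZG proof
        slotOffset     = indexSize + blobSize + commitmentSize + proofSize // 131176
    )
    // pruneBefore with MinEpochsForBlobsSidecarsRequest = 4096 and retentionBuffer = 2:
    // retain = EpochStart(4098) = 4098 * 32 = 131136 slots, so for latest slot 200000
    // everything before (200000 - 200000%32) - 131136 = 68864 is pruned.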

View File

@@ -28,7 +28,6 @@ go_library(
"//beacon-chain/forkchoice:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/forkchoice:go_default_library",

View File

@@ -3,7 +3,6 @@ package doublylinkedtree
import (
"time"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/time/slots"
)
@@ -98,9 +97,6 @@ func (f *ForkChoice) ShouldOverrideFCU() (override bool) {
// This function needs to be called only when proposing a block and all
// attestation processing has already happened.
func (f *ForkChoice) GetProposerHead() [32]byte {
if features.Get().DisableReorgLateBlocks {
return f.CachedHeadRoot()
}
head := f.store.headNode
if head == nil {
return [32]byte{}

View File

@@ -98,7 +98,8 @@ type BeaconNode struct {
syncCommitteePool synccommittee.Pool
blsToExecPool blstoexec.PoolManager
depositCache cache.DepositCache
proposerIdsCache *cache.ProposerPayloadIDsCache
trackedValidatorsCache *cache.TrackedValidatorsCache
payloadIDCache *cache.PayloadIDCache
stateFeed *event.Feed
blockFeed *event.Feed
opFeed *event.Feed
@@ -179,10 +180,11 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
slashingsPool: slashings.NewPool(),
syncCommitteePool: synccommittee.NewPool(),
blsToExecPool: blstoexec.NewPool(),
trackedValidatorsCache: cache.NewTrackedValidatorsCache(),
payloadIDCache: cache.NewPayloadIDCache(),
slasherBlockHeadersFeed: new(event.Feed),
slasherAttestationsFeed: new(event.Feed),
serviceFlagOpts: &serviceFlagOpts{},
proposerIdsCache: cache.NewProposerPayloadIDsCache(),
}
beacon.initialSyncComplete = make(chan struct{})
@@ -226,12 +228,8 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
return nil, err
}
if beacon.finalizedStateAtStartUp != nil {
if err := beacon.BlobStorage.Initialize(beacon.finalizedStateAtStartUp.Slot()); err != nil {
return nil, fmt.Errorf("failed to initialize blob storage: %w", err)
}
} else {
log.Warn("No finalized beacon state at startup, cannot prune blobs")
if err := beacon.BlobStorage.Initialize(); err != nil {
return nil, fmt.Errorf("failed to initialize blob storage: %w", err)
}
log.Debugln("Registering P2P Service")
@@ -659,10 +657,11 @@ func (b *BeaconNode) registerBlockchainService(fc forkchoice.ForkChoicer, gs *st
blockchain.WithStateGen(b.stateGen),
blockchain.WithSlasherAttestationsFeed(b.slasherAttestationsFeed),
blockchain.WithFinalizedStateAtStartUp(b.finalizedStateAtStartUp),
blockchain.WithProposerIdsCache(b.proposerIdsCache),
blockchain.WithClockSynchronizer(gs),
blockchain.WithSyncComplete(syncComplete),
blockchain.WithBlobStorage(b.BlobStorage),
blockchain.WithTrackedValidatorsCache(b.trackedValidatorsCache),
blockchain.WithPayloadIDCache(b.payloadIDCache),
)
blockchainService, err := blockchain.NewService(b.ctx, opts...)
@@ -893,11 +892,12 @@ func (b *BeaconNode) registerRPCService(router *mux.Router) error {
StateGen: b.stateGen,
EnableDebugRPCEndpoints: enableDebugRPCEndpoints,
MaxMsgSize: maxMsgSize,
ProposerIdsCache: b.proposerIdsCache,
BlockBuilder: b.fetchBuilderService(),
Router: router,
ClockWaiter: b.clockWaiter,
BlobStorage: b.BlobStorage,
TrackedValidatorsCache: b.trackedValidatorsCache,
PayloadIDCache: b.payloadIDCache,
})
return b.services.RegisterService(rpcService)

View File

@@ -23,6 +23,7 @@ go_library(
"//consensus-types/primitives:go_default_library",
"//consensus-types/wrapper:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/metadata:go_default_library",
"@com_github_pkg_errors//:go_default_library",

View File

@@ -6,6 +6,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/v4/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1/metadata"
)
@@ -45,17 +46,17 @@ func InitializeDataMaps() {
},
bytesutil.ToBytes4(params.BeaconConfig().BellatrixForkVersion): func() (interfaces.ReadOnlySignedBeaconBlock, error) {
return blocks.NewSignedBeaconBlock(
&ethpb.SignedBeaconBlockBellatrix{Block: &ethpb.BeaconBlockBellatrix{Body: &ethpb.BeaconBlockBodyBellatrix{}}},
&ethpb.SignedBeaconBlockBellatrix{Block: &ethpb.BeaconBlockBellatrix{Body: &ethpb.BeaconBlockBodyBellatrix{ExecutionPayload: &enginev1.ExecutionPayload{}}}},
)
},
bytesutil.ToBytes4(params.BeaconConfig().CapellaForkVersion): func() (interfaces.ReadOnlySignedBeaconBlock, error) {
return blocks.NewSignedBeaconBlock(
&ethpb.SignedBeaconBlockCapella{Block: &ethpb.BeaconBlockCapella{Body: &ethpb.BeaconBlockBodyCapella{}}},
&ethpb.SignedBeaconBlockCapella{Block: &ethpb.BeaconBlockCapella{Body: &ethpb.BeaconBlockBodyCapella{ExecutionPayload: &enginev1.ExecutionPayloadCapella{}}}},
)
},
bytesutil.ToBytes4(params.BeaconConfig().DenebForkVersion): func() (interfaces.ReadOnlySignedBeaconBlock, error) {
return blocks.NewSignedBeaconBlock(
&ethpb.SignedBeaconBlockDeneb{Block: &ethpb.BeaconBlockDeneb{Body: &ethpb.BeaconBlockBodyDeneb{}}},
&ethpb.SignedBeaconBlockDeneb{Block: &ethpb.BeaconBlockDeneb{Body: &ethpb.BeaconBlockBodyDeneb{ExecutionPayload: &enginev1.ExecutionPayloadDeneb{}}}},
)
},
}

View File

@@ -955,9 +955,9 @@ func (s *Server) PublishBlindedBlock(w http.ResponseWriter, r *http.Request) {
}
isSSZ := httputil.SszRequested(r)
if isSSZ {
s.publishBlindedBlockSSZ(ctx, w, r)
s.publishBlindedBlockSSZ(ctx, w, r, false)
} else {
s.publishBlindedBlock(ctx, w, r)
s.publishBlindedBlock(ctx, w, r, false)
}
}
@@ -980,20 +980,25 @@ func (s *Server) PublishBlindedBlockV2(w http.ResponseWriter, r *http.Request) {
}
isSSZ := httputil.SszRequested(r)
if isSSZ {
s.publishBlindedBlockSSZ(ctx, w, r)
s.publishBlindedBlockSSZ(ctx, w, r, true)
} else {
s.publishBlindedBlock(ctx, w, r)
s.publishBlindedBlock(ctx, w, r, true)
}
}
func (s *Server) publishBlindedBlockSSZ(ctx context.Context, w http.ResponseWriter, r *http.Request) {
func (s *Server) publishBlindedBlockSSZ(ctx context.Context, w http.ResponseWriter, r *http.Request, versionRequired bool) {
body, err := io.ReadAll(r.Body)
if err != nil {
httputil.HandleError(w, "Could not read request body: "+err.Error(), http.StatusInternalServerError)
return
}
versionHeader := r.Header.Get(api.VersionHeader)
if versionRequired && versionHeader == "" {
httputil.HandleError(w, api.VersionHeader+" header is required", http.StatusBadRequest)
return
}
denebBlock := &eth.SignedBlindedBeaconBlockDeneb{}
if err := denebBlock.UnmarshalSSZ(body); err == nil {
if err = denebBlock.UnmarshalSSZ(body); err == nil {
genericBlock := &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_BlindedDeneb{
BlindedDeneb: denebBlock,
@@ -1006,8 +1011,17 @@ func (s *Server) publishBlindedBlockSSZ(ctx context.Context, w http.ResponseWrit
s.proposeBlock(ctx, w, genericBlock)
return
}
if versionHeader == version.String(version.Deneb) {
httputil.HandleError(
w,
fmt.Sprintf("Could not decode request body into %s consensus block: %v", version.String(version.Deneb), err.Error()),
http.StatusBadRequest,
)
return
}
capellaBlock := &eth.SignedBlindedBeaconBlockCapella{}
if err := capellaBlock.UnmarshalSSZ(body); err == nil {
if err = capellaBlock.UnmarshalSSZ(body); err == nil {
genericBlock := &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_BlindedCapella{
BlindedCapella: capellaBlock,
@@ -1020,8 +1034,17 @@ func (s *Server) publishBlindedBlockSSZ(ctx context.Context, w http.ResponseWrit
s.proposeBlock(ctx, w, genericBlock)
return
}
if versionHeader == version.String(version.Capella) {
httputil.HandleError(
w,
fmt.Sprintf("Could not decode request body into %s consensus block: %v", version.String(version.Capella), err.Error()),
http.StatusBadRequest,
)
return
}
bellatrixBlock := &eth.SignedBlindedBeaconBlockBellatrix{}
if err := bellatrixBlock.UnmarshalSSZ(body); err == nil {
if err = bellatrixBlock.UnmarshalSSZ(body); err == nil {
genericBlock := &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_BlindedBellatrix{
BlindedBellatrix: bellatrixBlock,
@@ -1034,10 +1057,17 @@ func (s *Server) publishBlindedBlockSSZ(ctx context.Context, w http.ResponseWrit
s.proposeBlock(ctx, w, genericBlock)
return
}
if versionHeader == version.String(version.Bellatrix) {
httputil.HandleError(
w,
fmt.Sprintf("Could not decode request body into %s consensus block: %v", version.String(version.Bellatrix), err.Error()),
http.StatusBadRequest,
)
return
}
// Blinded blocks are not supported before the Bellatrix hard fork.
altairBlock := &eth.SignedBeaconBlockAltair{}
if err := altairBlock.UnmarshalSSZ(body); err == nil {
if err = altairBlock.UnmarshalSSZ(body); err == nil {
genericBlock := &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_Altair{
Altair: altairBlock,
@@ -1050,8 +1080,17 @@ func (s *Server) publishBlindedBlockSSZ(ctx context.Context, w http.ResponseWrit
s.proposeBlock(ctx, w, genericBlock)
return
}
if versionHeader == version.String(version.Altair) {
httputil.HandleError(
w,
fmt.Sprintf("Could not decode request body into %s consensus block: %v", version.String(version.Altair), err.Error()),
http.StatusBadRequest,
)
return
}
phase0Block := &eth.SignedBeaconBlock{}
if err := phase0Block.UnmarshalSSZ(body); err == nil {
if err = phase0Block.UnmarshalSSZ(body); err == nil {
genericBlock := &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_Phase0{
Phase0: phase0Block,
@@ -1064,20 +1103,34 @@ func (s *Server) publishBlindedBlockSSZ(ctx context.Context, w http.ResponseWrit
s.proposeBlock(ctx, w, genericBlock)
return
}
if versionHeader == version.String(version.Phase0) {
httputil.HandleError(
w,
fmt.Sprintf("Could not decode request body into %s consensus block: %v", version.String(version.Phase0), err.Error()),
http.StatusBadRequest,
)
return
}
httputil.HandleError(w, "Body does not represent a valid block type", http.StatusBadRequest)
}
func (s *Server) publishBlindedBlock(ctx context.Context, w http.ResponseWriter, r *http.Request) {
func (s *Server) publishBlindedBlock(ctx context.Context, w http.ResponseWriter, r *http.Request, versionRequired bool) {
body, err := io.ReadAll(r.Body)
if err != nil {
httputil.HandleError(w, "Could not read request body", http.StatusInternalServerError)
return
}
versionHeader := r.Header.Get(api.VersionHeader)
var blockVersionError string
if versionRequired && versionHeader == "" {
httputil.HandleError(w, api.VersionHeader+" header is required", http.StatusBadRequest)
}
var consensusBlock *eth.GenericSignedBeaconBlock
var denebBlock *shared.SignedBlindedBeaconBlockDeneb
if err = unmarshalStrict(body, &denebBlock); err == nil {
consensusBlock, err := denebBlock.ToGeneric()
consensusBlock, err = denebBlock.ToGeneric()
if err == nil {
if err = s.validateBroadcast(ctx, r, consensusBlock); err != nil {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
@@ -1086,14 +1139,19 @@ func (s *Server) publishBlindedBlock(ctx context.Context, w http.ResponseWriter,
s.proposeBlock(ctx, w, consensusBlock)
return
}
if versionHeader == version.String(version.Deneb) {
blockVersionError = fmt.Sprintf("could not decode %s request body into consensus block: %v", version.String(version.Deneb), err.Error())
}
}
if versionHeader == version.String(version.Deneb) {
httputil.HandleError(
w,
fmt.Sprintf("Could not decode request body into %s consensus block: %v", version.String(version.Deneb), err.Error()),
http.StatusBadRequest,
)
return
}
var capellaBlock *shared.SignedBlindedBeaconBlockCapella
if err = unmarshalStrict(body, &capellaBlock); err == nil {
consensusBlock, err := capellaBlock.ToGeneric()
consensusBlock, err = capellaBlock.ToGeneric()
if err == nil {
if err = s.validateBroadcast(ctx, r, consensusBlock); err != nil {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
@@ -1102,14 +1160,19 @@ func (s *Server) publishBlindedBlock(ctx context.Context, w http.ResponseWriter,
s.proposeBlock(ctx, w, consensusBlock)
return
}
if versionHeader == version.String(version.Capella) {
blockVersionError = fmt.Sprintf("could not decode %s request body into consensus block: %v", version.String(version.Capella), err.Error())
}
}
if versionHeader == version.String(version.Capella) {
httputil.HandleError(
w,
fmt.Sprintf("Could not decode request body into %s consensus block: %v", version.String(version.Capella), err.Error()),
http.StatusBadRequest,
)
return
}
var bellatrixBlock *shared.SignedBlindedBeaconBlockBellatrix
if err = unmarshalStrict(body, &bellatrixBlock); err == nil {
consensusBlock, err := bellatrixBlock.ToGeneric()
consensusBlock, err = bellatrixBlock.ToGeneric()
if err == nil {
if err = s.validateBroadcast(ctx, r, consensusBlock); err != nil {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
@@ -1118,13 +1181,19 @@ func (s *Server) publishBlindedBlock(ctx context.Context, w http.ResponseWriter,
s.proposeBlock(ctx, w, consensusBlock)
return
}
if versionHeader == version.String(version.Bellatrix) {
blockVersionError = fmt.Sprintf("could not decode %s request body into consensus block: %v", version.String(version.Bellatrix), err.Error())
}
}
if versionHeader == version.String(version.Bellatrix) {
httputil.HandleError(
w,
fmt.Sprintf("Could not decode request body into %s consensus block: %v", version.String(version.Bellatrix), err.Error()),
http.StatusBadRequest,
)
return
}
var altairBlock *shared.SignedBeaconBlockAltair
if err = unmarshalStrict(body, &altairBlock); err == nil {
consensusBlock, err := altairBlock.ToGeneric()
consensusBlock, err = altairBlock.ToGeneric()
if err == nil {
if err = s.validateBroadcast(ctx, r, consensusBlock); err != nil {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
@@ -1133,13 +1202,19 @@ func (s *Server) publishBlindedBlock(ctx context.Context, w http.ResponseWriter,
s.proposeBlock(ctx, w, consensusBlock)
return
}
if versionHeader == version.String(version.Altair) {
blockVersionError = fmt.Sprintf("could not decode %s request body into consensus block: %v", version.String(version.Altair), err.Error())
}
}
if versionHeader == version.String(version.Altair) {
httputil.HandleError(
w,
fmt.Sprintf("Could not decode request body into %s consensus block: %v", version.String(version.Altair), err.Error()),
http.StatusBadRequest,
)
return
}
var phase0Block *shared.SignedBeaconBlock
if err = unmarshalStrict(body, &phase0Block); err == nil {
consensusBlock, err := phase0Block.ToGeneric()
consensusBlock, err = phase0Block.ToGeneric()
if err == nil {
if err = s.validateBroadcast(ctx, r, consensusBlock); err != nil {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
@@ -1148,14 +1223,17 @@ func (s *Server) publishBlindedBlock(ctx context.Context, w http.ResponseWriter,
s.proposeBlock(ctx, w, consensusBlock)
return
}
if versionHeader == version.String(version.Phase0) {
blockVersionError = fmt.Sprintf("could not decode %s request body into consensus block: %v", version.String(version.Phase0), err.Error())
}
}
if versionHeader == "" {
blockVersionError = fmt.Sprintf("please add the api header %s to see specific type errors", api.VersionHeader)
if versionHeader == version.String(version.Phase0) {
httputil.HandleError(
w,
fmt.Sprintf("Could not decode request body into %s consensus block: %v", version.String(version.Phase0), err.Error()),
http.StatusBadRequest,
)
return
}
httputil.HandleError(w, "Body does not represent a valid block type: "+blockVersionError, http.StatusBadRequest)
httputil.HandleError(w, "Body does not represent a valid block type", http.StatusBadRequest)
}
// PublishBlock instructs the beacon node to broadcast a newly signed beacon block to the beacon network,
@@ -1174,9 +1252,9 @@ func (s *Server) PublishBlock(w http.ResponseWriter, r *http.Request) {
}
isSSZ := httputil.SszRequested(r)
if isSSZ {
s.publishBlockSSZ(ctx, w, r)
s.publishBlockSSZ(ctx, w, r, false)
} else {
s.publishBlock(ctx, w, r)
s.publishBlock(ctx, w, r, false)
}
}
@@ -1197,20 +1275,26 @@ func (s *Server) PublishBlockV2(w http.ResponseWriter, r *http.Request) {
}
isSSZ := httputil.SszRequested(r)
if isSSZ {
s.publishBlockSSZ(ctx, w, r)
s.publishBlockSSZ(ctx, w, r, true)
} else {
s.publishBlock(ctx, w, r)
s.publishBlock(ctx, w, r, true)
}
}
func (s *Server) publishBlockSSZ(ctx context.Context, w http.ResponseWriter, r *http.Request) {
func (s *Server) publishBlockSSZ(ctx context.Context, w http.ResponseWriter, r *http.Request, versionRequired bool) {
body, err := io.ReadAll(r.Body)
if err != nil {
httputil.HandleError(w, "Could not read request body", http.StatusInternalServerError)
return
}
versionHeader := r.Header.Get(api.VersionHeader)
if versionRequired && versionHeader == "" {
httputil.HandleError(w, api.VersionHeader+" header is required", http.StatusBadRequest)
return
}
denebBlock := &eth.SignedBeaconBlockContentsDeneb{}
if err := denebBlock.UnmarshalSSZ(body); err == nil {
if err = denebBlock.UnmarshalSSZ(body); err == nil {
genericBlock := &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_Deneb{
Deneb: denebBlock,
@@ -1223,8 +1307,17 @@ func (s *Server) publishBlockSSZ(ctx context.Context, w http.ResponseWriter, r *
s.proposeBlock(ctx, w, genericBlock)
return
}
if versionHeader == version.String(version.Deneb) {
httputil.HandleError(
w,
fmt.Sprintf("Could not decode request body into %s consensus block: %v", version.String(version.Deneb), err.Error()),
http.StatusBadRequest,
)
return
}
capellaBlock := &eth.SignedBeaconBlockCapella{}
if err := capellaBlock.UnmarshalSSZ(body); err == nil {
if err = capellaBlock.UnmarshalSSZ(body); err == nil {
genericBlock := &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_Capella{
Capella: capellaBlock,
@@ -1237,8 +1330,17 @@ func (s *Server) publishBlockSSZ(ctx context.Context, w http.ResponseWriter, r *
s.proposeBlock(ctx, w, genericBlock)
return
}
if versionHeader == version.String(version.Capella) {
httputil.HandleError(
w,
fmt.Sprintf("Could not decode request body into %s consensus block: %v", version.String(version.Capella), err.Error()),
http.StatusBadRequest,
)
return
}
bellatrixBlock := &eth.SignedBeaconBlockBellatrix{}
if err := bellatrixBlock.UnmarshalSSZ(body); err == nil {
if err = bellatrixBlock.UnmarshalSSZ(body); err == nil {
genericBlock := &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_Bellatrix{
Bellatrix: bellatrixBlock,
@@ -1251,8 +1353,17 @@ func (s *Server) publishBlockSSZ(ctx context.Context, w http.ResponseWriter, r *
s.proposeBlock(ctx, w, genericBlock)
return
}
if versionHeader == version.String(version.Bellatrix) {
httputil.HandleError(
w,
fmt.Sprintf("Could not decode request body into %s consensus block: %v", version.String(version.Bellatrix), err.Error()),
http.StatusBadRequest,
)
return
}
altairBlock := &eth.SignedBeaconBlockAltair{}
if err := altairBlock.UnmarshalSSZ(body); err == nil {
if err = altairBlock.UnmarshalSSZ(body); err == nil {
genericBlock := &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_Altair{
Altair: altairBlock,
@@ -1265,8 +1376,17 @@ func (s *Server) publishBlockSSZ(ctx context.Context, w http.ResponseWriter, r *
s.proposeBlock(ctx, w, genericBlock)
return
}
if versionHeader == version.String(version.Altair) {
httputil.HandleError(
w,
fmt.Sprintf("Could not decode request body into %s consensus block: %v", version.String(version.Altair), err.Error()),
http.StatusBadRequest,
)
return
}
phase0Block := &eth.SignedBeaconBlock{}
if err := phase0Block.UnmarshalSSZ(body); err == nil {
if err = phase0Block.UnmarshalSSZ(body); err == nil {
genericBlock := &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_Phase0{
Phase0: phase0Block,
@@ -1279,20 +1399,35 @@ func (s *Server) publishBlockSSZ(ctx context.Context, w http.ResponseWriter, r *
s.proposeBlock(ctx, w, genericBlock)
return
}
if versionHeader == version.String(version.Phase0) {
httputil.HandleError(
w,
fmt.Sprintf("Could not decode request body into %s consensus block: %v", version.String(version.Phase0), err.Error()),
http.StatusBadRequest,
)
return
}
httputil.HandleError(w, "Body does not represent a valid block type", http.StatusBadRequest)
}
func (s *Server) publishBlock(ctx context.Context, w http.ResponseWriter, r *http.Request) {
func (s *Server) publishBlock(ctx context.Context, w http.ResponseWriter, r *http.Request, versionRequired bool) {
body, err := io.ReadAll(r.Body)
if err != nil {
httputil.HandleError(w, "Could not read request body", http.StatusInternalServerError)
return
}
versionHeader := r.Header.Get(api.VersionHeader)
var blockVersionError string
if versionRequired && versionHeader == "" {
httputil.HandleError(w, api.VersionHeader+" header is required", http.StatusBadRequest)
return
}
var consensusBlock *eth.GenericSignedBeaconBlock
var denebBlockContents *shared.SignedBeaconBlockContentsDeneb
if err = unmarshalStrict(body, &denebBlockContents); err == nil {
consensusBlock, err := denebBlockContents.ToGeneric()
consensusBlock, err = denebBlockContents.ToGeneric()
if err == nil {
if err = s.validateBroadcast(ctx, r, consensusBlock); err != nil {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
@@ -1301,13 +1436,19 @@ func (s *Server) publishBlock(ctx context.Context, w http.ResponseWriter, r *htt
s.proposeBlock(ctx, w, consensusBlock)
return
}
if versionHeader == version.String(version.Deneb) {
blockVersionError = fmt.Sprintf(": could not decode %s request body into consensus block: %v", version.String(version.Deneb), err.Error())
}
}
if versionHeader == version.String(version.Deneb) {
httputil.HandleError(
w,
fmt.Sprintf("Could not decode request body into %s consensus block: %v", version.String(version.Deneb), err.Error()),
http.StatusBadRequest,
)
return
}
var capellaBlock *shared.SignedBeaconBlockCapella
if err = unmarshalStrict(body, &capellaBlock); err == nil {
consensusBlock, err := capellaBlock.ToGeneric()
consensusBlock, err = capellaBlock.ToGeneric()
if err == nil {
if err = s.validateBroadcast(ctx, r, consensusBlock); err != nil {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
@@ -1316,13 +1457,19 @@ func (s *Server) publishBlock(ctx context.Context, w http.ResponseWriter, r *htt
s.proposeBlock(ctx, w, consensusBlock)
return
}
if versionHeader == version.String(version.Capella) {
blockVersionError = fmt.Sprintf(": could not decode %s request body into consensus block: %v", version.String(version.Capella), err.Error())
}
}
if versionHeader == version.String(version.Capella) {
httputil.HandleError(
w,
fmt.Sprintf("Could not decode request body into %s consensus block: %v", version.String(version.Capella), err.Error()),
http.StatusBadRequest,
)
return
}
var bellatrixBlock *shared.SignedBeaconBlockBellatrix
if err = unmarshalStrict(body, &bellatrixBlock); err == nil {
consensusBlock, err := bellatrixBlock.ToGeneric()
consensusBlock, err = bellatrixBlock.ToGeneric()
if err == nil {
if err = s.validateBroadcast(ctx, r, consensusBlock); err != nil {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
@@ -1331,13 +1478,19 @@ func (s *Server) publishBlock(ctx context.Context, w http.ResponseWriter, r *htt
s.proposeBlock(ctx, w, consensusBlock)
return
}
if versionHeader == version.String(version.Bellatrix) {
blockVersionError = fmt.Sprintf(": could not decode %s request body into consensus block: %v", version.String(version.Bellatrix), err.Error())
}
}
if versionHeader == version.String(version.Bellatrix) {
httputil.HandleError(
w,
fmt.Sprintf("Could not decode request body into %s consensus block: %v", version.String(version.Bellatrix), err.Error()),
http.StatusBadRequest,
)
return
}
var altairBlock *shared.SignedBeaconBlockAltair
if err = unmarshalStrict(body, &altairBlock); err == nil {
consensusBlock, err := altairBlock.ToGeneric()
consensusBlock, err = altairBlock.ToGeneric()
if err == nil {
if err = s.validateBroadcast(ctx, r, consensusBlock); err != nil {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
@@ -1346,13 +1499,19 @@ func (s *Server) publishBlock(ctx context.Context, w http.ResponseWriter, r *htt
s.proposeBlock(ctx, w, consensusBlock)
return
}
if versionHeader == version.String(version.Altair) {
blockVersionError = fmt.Sprintf(": could not decode %s request body into consensus block: %v", version.String(version.Altair), err.Error())
}
}
if versionHeader == version.String(version.Altair) {
httputil.HandleError(
w,
fmt.Sprintf("Could not decode request body into %s consensus block: %v", version.String(version.Altair), err.Error()),
http.StatusBadRequest,
)
return
}
var phase0Block *shared.SignedBeaconBlock
if err = unmarshalStrict(body, &phase0Block); err == nil {
consensusBlock, err := phase0Block.ToGeneric()
consensusBlock, err = phase0Block.ToGeneric()
if err == nil {
if err = s.validateBroadcast(ctx, r, consensusBlock); err != nil {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
@@ -1361,14 +1520,17 @@ func (s *Server) publishBlock(ctx context.Context, w http.ResponseWriter, r *htt
s.proposeBlock(ctx, w, consensusBlock)
return
}
if versionHeader == version.String(version.Phase0) {
blockVersionError = fmt.Sprintf(": Could not decode %s request body into consensus block: %v", version.String(version.Phase0), err.Error())
}
}
if versionHeader == "" {
blockVersionError = fmt.Sprintf(": please add the api header %s to see specific type errors", api.VersionHeader)
if versionHeader == version.String(version.Phase0) {
httputil.HandleError(
w,
fmt.Sprintf("Could not decode request body into %s consensus block: %v", version.String(version.Phase0), err.Error()),
http.StatusBadRequest,
)
return
}
httputil.HandleError(w, "Body does not represent a valid block type"+blockVersionError, http.StatusBadRequest)
httputil.HandleError(w, "Body does not represent a valid block type", http.StatusBadRequest)
}
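With this change the V2 publish endpoints pass versionRequired = true, so a request that omits the consensus-version header is rejected up front instead of falling back to probing every block type. A minimal client-side sketch; the route, port, and literal header name are assumptions based on the standard beacon-API convention (the handler itself only references api.VersionHeader):

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	sszBody := []byte{} // placeholder for an SSZ-encoded signed block (contents elided)

	// Assumed default Prysm REST port and standard beacon-API route.
	req, err := http.NewRequest(http.MethodPost, "http://localhost:3500/eth/v2/beacon/blocks", bytes.NewReader(sszBody))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/octet-stream")
	// api.VersionHeader in the handler; assumed to be the standard Eth-Consensus-Version header.
	// The V2 handlers now require it (versionRequired == true).
	req.Header.Set("Eth-Consensus-Version", "deneb")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}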
func (s *Server) proposeBlock(ctx context.Context, w http.ResponseWriter, blk *eth.GenericSignedBeaconBlock) {

File diff suppressed because it is too large


@@ -19,7 +19,6 @@ go_library(
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/kv:go_default_library",
"//beacon-chain/operations/attestations:go_default_library",
"//beacon-chain/operations/synccommittee:go_default_library",
"//beacon-chain/p2p:go_default_library",
@@ -41,7 +40,6 @@ go_library(
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
@@ -91,7 +89,6 @@ go_test(
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_golang_mock//gomock:go_default_library",
"@com_github_gorilla_mux//:go_default_library",


@@ -11,14 +11,12 @@ import (
"strconv"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/builder"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/kv"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/core"
rpchelpers "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
@@ -543,9 +541,6 @@ func (s *Server) RegisterValidator(w http.ResponseWriter, r *http.Request) {
// PrepareBeaconProposer endpoint saves the fee recipient given a validator index, this is used when proposing a block.
func (s *Server) PrepareBeaconProposer(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "validator.PrepareBeaconProposer")
defer span.End()
var jsonFeeRecipients []*shared.FeeRecipient
err := json.NewDecoder(r.Body).Decode(&jsonFeeRecipients)
switch {
@@ -556,7 +551,6 @@ func (s *Server) PrepareBeaconProposer(w http.ResponseWriter, r *http.Request) {
httputil.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
return
}
var feeRecipients []common.Address
var validatorIndices []primitives.ValidatorIndex
// filter for found fee recipients
for _, r := range jsonFeeRecipients {
@@ -568,28 +562,25 @@ func (s *Server) PrepareBeaconProposer(w http.ResponseWriter, r *http.Request) {
if !valid {
return
}
f, err := s.BeaconDB.FeeRecipientByValidatorID(ctx, primitives.ValidatorIndex(validatorIndex))
switch {
case errors.Is(err, kv.ErrNotFoundFeeRecipient):
feeRecipients = append(feeRecipients, common.BytesToAddress(bytesutil.SafeCopyBytes(feeRecipientBytes)))
validatorIndices = append(validatorIndices, primitives.ValidatorIndex(validatorIndex))
case err != nil:
httputil.HandleError(w, fmt.Sprintf("Could not get fee recipient by validator index: %v", err), http.StatusInternalServerError)
return
default:
if common.BytesToAddress(feeRecipientBytes) != f {
feeRecipients = append(feeRecipients, common.BytesToAddress(bytesutil.SafeCopyBytes(feeRecipientBytes)))
validatorIndices = append(validatorIndices, primitives.ValidatorIndex(validatorIndex))
// Use the default address if the burn address is returned
feeRecipient := primitives.ExecutionAddress(feeRecipientBytes)
if feeRecipient == primitives.ExecutionAddress([20]byte{}) {
feeRecipient = primitives.ExecutionAddress(params.BeaconConfig().DefaultFeeRecipient)
if feeRecipient == primitives.ExecutionAddress([20]byte{}) {
log.WithField("validatorIndex", validatorIndex).Warn("fee recipient is the burn address")
}
}
val := cache.TrackedValidator{
Active: true, // TODO: either check or add the field in the request
Index: primitives.ValidatorIndex(validatorIndex),
FeeRecipient: feeRecipient,
}
s.TrackedValidatorsCache.Set(val)
validatorIndices = append(validatorIndices, primitives.ValidatorIndex(validatorIndex))
}
if len(validatorIndices) == 0 {
return
}
if err := s.BeaconDB.SaveFeeRecipientsByValidatorIDs(ctx, validatorIndices, feeRecipients); err != nil {
httputil.HandleError(w, fmt.Sprintf("Could not save fee recipients: %v", err), http.StatusInternalServerError)
return
}
log.WithFields(log.Fields{
"validatorIndices": validatorIndices,
}).Info("Updated fee recipient addresses")
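The handler now substitutes the node's configured default when a validator registers the zero (burn) address as its fee recipient, and only warns if no default is configured either. A simplified, self-contained illustration of that fallback (local types, not the Prysm API):

package main

import "fmt"

type executionAddress [20]byte

// effectiveFeeRecipient mirrors the fallback in this change: a zero/burn
// address falls back to a configured default, which may itself be zero.
func effectiveFeeRecipient(requested, configuredDefault executionAddress) (executionAddress, bool) {
	if requested != (executionAddress{}) {
		return requested, true
	}
	if configuredDefault != (executionAddress{}) {
		return configuredDefault, true
	}
	return executionAddress{}, false // still the burn address; the caller logs a warning
}

func main() {
	def := executionAddress{0xde, 0xad}
	got, ok := effectiveFeeRecipient(executionAddress{}, def)
	fmt.Println(got[:2], ok) // [222 173] true
}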
@@ -812,7 +803,6 @@ func (s *Server) GetProposerDuties(w http.ResponseWriter, r *http.Request) {
pubkey48 := val.PublicKey()
pubkey := pubkey48[:]
for _, slot := range proposalSlots {
s.ProposerSlotIndexCache.SetProposerAndPayloadIDs(slot, index, [8]byte{} /* payloadID */, [32]byte{} /* head root */)
duties = append(duties, &ProposerDuty{
Pubkey: hexutil.Encode(pubkey),
ValidatorIndex: strconv.FormatUint(uint64(index), 10),
@@ -821,8 +811,6 @@ func (s *Server) GetProposerDuties(w http.ResponseWriter, r *http.Request) {
}
}
s.ProposerSlotIndexCache.PrunePayloadIDs(epochStartSlot)
dependentRoot, err := proposalDependentRoot(st, requestedEpoch)
if err != nil {
httputil.HandleError(w, "Could not get dependent root: "+err.Error(), http.StatusInternalServerError)


@@ -12,7 +12,6 @@ import (
"testing"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/gorilla/mux"
"github.com/pkg/errors"
@@ -1660,7 +1659,8 @@ func TestGetProposerDuties(t *testing.T) {
TimeFetcher: chain,
OptimisticModeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
PayloadIDCache: cache.NewPayloadIDCache(),
TrackedValidatorsCache: cache.NewTrackedValidatorsCache(),
}
request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/proposer/{epoch}", nil)
@@ -1681,9 +1681,6 @@ func TestGetProposerDuties(t *testing.T) {
expectedDuty = duty
}
}
vid, _, has := s.ProposerSlotIndexCache.GetProposerPayloadIDs(11, [32]byte{})
require.Equal(t, true, has)
require.Equal(t, primitives.ValidatorIndex(12289), vid)
require.NotNil(t, expectedDuty, "Expected duty for slot 11 not found")
assert.Equal(t, "12289", expectedDuty.ValidatorIndex)
assert.Equal(t, hexutil.Encode(pubKeys[12289]), expectedDuty.Pubkey)
@@ -1702,7 +1699,8 @@ func TestGetProposerDuties(t *testing.T) {
TimeFetcher: chain,
OptimisticModeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
PayloadIDCache: cache.NewPayloadIDCache(),
TrackedValidatorsCache: cache.NewTrackedValidatorsCache(),
}
request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/proposer/{epoch}", nil)
@@ -1723,52 +1721,10 @@ func TestGetProposerDuties(t *testing.T) {
expectedDuty = duty
}
}
vid, _, has := s.ProposerSlotIndexCache.GetProposerPayloadIDs(43, [32]byte{})
require.Equal(t, true, has)
require.Equal(t, primitives.ValidatorIndex(1360), vid)
require.NotNil(t, expectedDuty, "Expected duty for slot 43 not found")
assert.Equal(t, "1360", expectedDuty.ValidatorIndex)
assert.Equal(t, hexutil.Encode(pubKeys[1360]), expectedDuty.Pubkey)
})
t.Run("prune payload ID cache", func(t *testing.T) {
bs, err := transition.GenesisBeaconState(context.Background(), deposits, 0, eth1Data)
require.NoError(t, err, "Could not set up genesis state")
require.NoError(t, bs.SetSlot(params.BeaconConfig().SlotsPerEpoch))
require.NoError(t, bs.SetBlockRoots(roots))
chainSlot := params.BeaconConfig().SlotsPerEpoch
chain := &mockChain.ChainService{
State: bs, Root: genesisRoot[:], Slot: &chainSlot,
}
s := &Server{
Stater: &testutil.MockStater{StatesBySlot: map[primitives.Slot]state.BeaconState{params.BeaconConfig().SlotsPerEpoch: bs}},
HeadFetcher: chain,
TimeFetcher: chain,
OptimisticModeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
}
s.ProposerSlotIndexCache.SetProposerAndPayloadIDs(1, 1, [8]byte{1}, [32]byte{2})
s.ProposerSlotIndexCache.SetProposerAndPayloadIDs(31, 2, [8]byte{2}, [32]byte{3})
s.ProposerSlotIndexCache.SetProposerAndPayloadIDs(32, 4309, [8]byte{3}, [32]byte{4})
request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/proposer/{epoch}", nil)
request = mux.SetURLVars(request, map[string]string{"epoch": "1"})
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetProposerDuties(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
vid, _, has := s.ProposerSlotIndexCache.GetProposerPayloadIDs(1, [32]byte{})
require.Equal(t, false, has)
require.Equal(t, primitives.ValidatorIndex(0), vid)
vid, _, has = s.ProposerSlotIndexCache.GetProposerPayloadIDs(2, [32]byte{})
require.Equal(t, false, has)
require.Equal(t, primitives.ValidatorIndex(0), vid)
vid, _, has = s.ProposerSlotIndexCache.GetProposerPayloadIDs(32, [32]byte{})
require.Equal(t, true, has)
require.Equal(t, primitives.ValidatorIndex(10565), vid)
})
t.Run("epoch out of bounds", func(t *testing.T) {
bs, err := transition.GenesisBeaconState(context.Background(), deposits, 0, eth1Data)
require.NoError(t, err, "Could not set up genesis state")
@@ -1785,7 +1741,8 @@ func TestGetProposerDuties(t *testing.T) {
TimeFetcher: chain,
OptimisticModeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
PayloadIDCache: cache.NewPayloadIDCache(),
TrackedValidatorsCache: cache.NewTrackedValidatorsCache(),
}
currentEpoch := slots.ToEpoch(bs.Slot())
@@ -1828,7 +1785,8 @@ func TestGetProposerDuties(t *testing.T) {
TimeFetcher: chain,
OptimisticModeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
PayloadIDCache: cache.NewPayloadIDCache(),
TrackedValidatorsCache: cache.NewTrackedValidatorsCache(),
}
request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/proposer/{epoch}", nil)
@@ -2267,21 +2225,24 @@ func TestPrepareBeaconProposer(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, url, &body)
writer := httptest.NewRecorder()
db := dbutil.SetupDB(t)
ctx := context.Background()
server := &Server{
BeaconDB: db,
BeaconDB: db,
TrackedValidatorsCache: cache.NewTrackedValidatorsCache(),
PayloadIDCache: cache.NewPayloadIDCache(),
}
server.PrepareBeaconProposer(writer, request)
require.Equal(t, tt.code, writer.Code)
if tt.wantErr != "" {
require.Equal(t, strings.Contains(writer.Body.String(), tt.wantErr), true)
} else {
require.NoError(t, err)
address, err := server.BeaconDB.FeeRecipientByValidatorID(ctx, 1)
require.NoError(t, err)
feebytes, err := hexutil.Decode(tt.request[0].FeeRecipient)
require.NoError(t, err)
require.Equal(t, common.BytesToAddress(feebytes), address)
index, err := strconv.ParseUint(tt.request[0].ValidatorIndex, 10, 64)
require.NoError(t, err)
val, tracked := server.TrackedValidatorsCache.Validator(primitives.ValidatorIndex(index))
require.Equal(t, true, tracked)
require.Equal(t, primitives.ExecutionAddress(feebytes), val.FeeRecipient)
}
})
}
@@ -2292,7 +2253,11 @@ func TestProposer_PrepareBeaconProposerOverlapping(t *testing.T) {
db := dbutil.SetupDB(t)
// New validator
proposerServer := &Server{BeaconDB: db}
proposerServer := &Server{
BeaconDB: db,
TrackedValidatorsCache: cache.NewTrackedValidatorsCache(),
PayloadIDCache: cache.NewPayloadIDCache(),
}
req := []*shared.FeeRecipient{{
FeeRecipient: hexutil.Encode(bytesutil.PadTo([]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF}, fieldparams.FeeRecipientLength)),
ValidatorIndex: "1",
@@ -2318,7 +2283,7 @@ func TestProposer_PrepareBeaconProposerOverlapping(t *testing.T) {
writer = httptest.NewRecorder()
proposerServer.PrepareBeaconProposer(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
require.LogsDoNotContain(t, hook, "Updated fee recipient addresses")
require.LogsContain(t, hook, "Updated fee recipient addresses")
// Same validator with different fee recipient
hook.Reset()
@@ -2368,13 +2333,16 @@ func TestProposer_PrepareBeaconProposerOverlapping(t *testing.T) {
writer = httptest.NewRecorder()
proposerServer.PrepareBeaconProposer(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
require.LogsDoNotContain(t, hook, "Updated fee recipient addresses")
require.LogsContain(t, hook, "Updated fee recipient addresses")
}
func BenchmarkServer_PrepareBeaconProposer(b *testing.B) {
db := dbutil.SetupDB(b)
proposerServer := &Server{BeaconDB: db}
proposerServer := &Server{
BeaconDB: db,
TrackedValidatorsCache: cache.NewTrackedValidatorsCache(),
PayloadIDCache: cache.NewPayloadIDCache(),
}
f := bytesutil.PadTo([]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF}, fieldparams.FeeRecipientLength)
recipients := make([]*shared.FeeRecipient, 0)
for i := 0; i < 10000; i++ {


@@ -29,11 +29,12 @@ type Server struct {
OptimisticModeFetcher blockchain.OptimisticModeFetcher
SyncCommitteePool synccommittee.Pool
V1Alpha1Server eth.BeaconNodeValidatorServer
ProposerSlotIndexCache *cache.ProposerPayloadIDsCache
ChainInfoFetcher blockchain.ChainInfoFetcher
BeaconDB db.HeadAccessDatabase
BlockBuilder builder.BlockBuilder
OperationNotifier operation.Notifier
CoreService *core.Service
BlockRewardFetcher rewards.BlockRewardsFetcher
TrackedValidatorsCache *cache.TrackedValidatorsCache
PayloadIDCache *cache.PayloadIDCache
}


@@ -201,7 +201,6 @@ go_test(
"status_mainnet_test.go",
"status_test.go",
"sync_committee_test.go",
"unblinder_test.go",
"validator_test.go",
],
embed = [":go_default_library"],


@@ -135,7 +135,7 @@ func (vs *Server) duties(ctx context.Context, req *ethpb.DutiesRequest) (*ethpb.
return nil, status.Errorf(codes.Internal, "Could not compute committee assignments: %v", err)
}
// Query the next epoch assignments for committee subnet subscriptions.
nextCommitteeAssignments, nextProposerIndexToSlots, err := helpers.CommitteeAssignments(ctx, s, req.Epoch+1)
nextCommitteeAssignments, _, err := helpers.CommitteeAssignments(ctx, s, req.Epoch+1)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not compute next committee assignments: %v", err)
}
@@ -178,15 +178,6 @@ func (vs *Server) duties(ctx context.Context, req *ethpb.DutiesRequest) (*ethpb.
nextAssignment.AttesterSlot = ca.AttesterSlot
nextAssignment.CommitteeIndex = ca.CommitteeIndex
}
// Cache proposer assignment for the current epoch.
for _, slot := range proposerIndexToSlots[idx] {
// Head root is empty because it can't be known until slot - 1. Same with payload id.
vs.ProposerSlotIndexCache.SetProposerAndPayloadIDs(slot, idx, [8]byte{} /* payloadID */, [32]byte{} /* head root */)
}
// Cache proposer assignment for the next epoch.
for _, slot := range nextProposerIndexToSlots[idx] {
vs.ProposerSlotIndexCache.SetProposerAndPayloadIDs(slot, idx, [8]byte{} /* payloadID */, [32]byte{} /* head root */)
}
} else {
// If the validator isn't in the beacon state, try finding their deposit to determine their status.
// We don't need the lastActiveValidatorFn because we don't use the response in this.
@@ -228,9 +219,6 @@ func (vs *Server) duties(ctx context.Context, req *ethpb.DutiesRequest) (*ethpb.
validatorAssignments = append(validatorAssignments, assignment)
nextValidatorAssignments = append(nextValidatorAssignments, nextAssignment)
}
// Prune payload ID cache for any slots before request slot.
vs.ProposerSlotIndexCache.PrunePayloadIDs(epochStartSlot)
return &ethpb.DutiesResponse{
Duties: validatorAssignments,
CurrentEpochDuties: validatorAssignments,


@@ -60,10 +60,10 @@ func TestGetDuties_OK(t *testing.T) {
State: bs, Root: genesisRoot[:], Genesis: time.Now(),
}
vs := &Server{
HeadFetcher: chain,
TimeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
HeadFetcher: chain,
TimeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
PayloadIDCache: cache.NewPayloadIDCache(),
}
// Test the first validator in registry.
@@ -145,11 +145,11 @@ func TestGetAltairDuties_SyncCommitteeOK(t *testing.T) {
State: bs, Root: genesisRoot[:], Genesis: time.Now().Add(time.Duration(-1*int64(slot-1)) * time.Second),
}
vs := &Server{
HeadFetcher: chain,
TimeFetcher: chain,
Eth1InfoFetcher: &mockExecution.Chain{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
HeadFetcher: chain,
TimeFetcher: chain,
Eth1InfoFetcher: &mockExecution.Chain{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
PayloadIDCache: cache.NewPayloadIDCache(),
}
// Test the first validator in registry.
@@ -251,11 +251,11 @@ func TestGetBellatrixDuties_SyncCommitteeOK(t *testing.T) {
State: bs, Root: genesisRoot[:], Genesis: time.Now().Add(time.Duration(-1*int64(slot-1)) * time.Second),
}
vs := &Server{
HeadFetcher: chain,
TimeFetcher: chain,
Eth1InfoFetcher: &mockExecution.Chain{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
HeadFetcher: chain,
TimeFetcher: chain,
Eth1InfoFetcher: &mockExecution.Chain{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
PayloadIDCache: cache.NewPayloadIDCache(),
}
// Test the first validator in registry.
@@ -343,12 +343,12 @@ func TestGetAltairDuties_UnknownPubkey(t *testing.T) {
require.NoError(t, err)
vs := &Server{
HeadFetcher: chain,
TimeFetcher: chain,
Eth1InfoFetcher: &mockExecution.Chain{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
DepositFetcher: depositCache,
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
HeadFetcher: chain,
TimeFetcher: chain,
Eth1InfoFetcher: &mockExecution.Chain{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
DepositFetcher: depositCache,
PayloadIDCache: cache.NewPayloadIDCache(),
}
unknownPubkey := bytesutil.PadTo([]byte{'u'}, 48)
@@ -401,10 +401,10 @@ func TestGetDuties_CurrentEpoch_ShouldNotFail(t *testing.T) {
State: bState, Root: genesisRoot[:], Genesis: time.Now(),
}
vs := &Server{
HeadFetcher: chain,
TimeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
HeadFetcher: chain,
TimeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
PayloadIDCache: cache.NewPayloadIDCache(),
}
// Test the first validator in registry.
@@ -440,10 +440,10 @@ func TestGetDuties_MultipleKeys_OK(t *testing.T) {
State: bs, Root: genesisRoot[:], Genesis: time.Now(),
}
vs := &Server{
HeadFetcher: chain,
TimeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
HeadFetcher: chain,
TimeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
PayloadIDCache: cache.NewPayloadIDCache(),
}
pubkey0 := deposits[0].Data.PublicKey
@@ -507,12 +507,12 @@ func TestStreamDuties_OK(t *testing.T) {
Genesis: time.Now(),
}
vs := &Server{
Ctx: ctx,
HeadFetcher: &mockChain.ChainService{State: bs, Root: genesisRoot[:]},
SyncChecker: &mockSync.Sync{IsSyncing: false},
TimeFetcher: c,
StateNotifier: &mockChain.MockStateNotifier{},
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
Ctx: ctx,
HeadFetcher: &mockChain.ChainService{State: bs, Root: genesisRoot[:]},
SyncChecker: &mockSync.Sync{IsSyncing: false},
TimeFetcher: c,
StateNotifier: &mockChain.MockStateNotifier{},
PayloadIDCache: cache.NewPayloadIDCache(),
}
// Test the first validator in registry.
@@ -565,12 +565,12 @@ func TestStreamDuties_OK_ChainReorg(t *testing.T) {
Genesis: time.Now(),
}
vs := &Server{
Ctx: ctx,
HeadFetcher: &mockChain.ChainService{State: bs, Root: genesisRoot[:]},
SyncChecker: &mockSync.Sync{IsSyncing: false},
TimeFetcher: c,
StateNotifier: &mockChain.MockStateNotifier{},
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
Ctx: ctx,
HeadFetcher: &mockChain.ChainService{State: bs, Root: genesisRoot[:]},
SyncChecker: &mockSync.Sync{IsSyncing: false},
TimeFetcher: c,
StateNotifier: &mockChain.MockStateNotifier{},
PayloadIDCache: cache.NewPayloadIDCache(),
}
// Test the first validator in registry.
@@ -625,10 +625,9 @@ func BenchmarkCommitteeAssignment(b *testing.B) {
chain := &mockChain.ChainService{State: bs, Root: genesisRoot[:]}
vs := &Server{
HeadFetcher: chain,
TimeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
HeadFetcher: chain,
TimeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
// Create request for all validators in the system.


@@ -2,7 +2,6 @@ package validator
import (
"context"
"encoding/hex"
"fmt"
"strings"
"sync"
@@ -14,8 +13,10 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/builder"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
blockfeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/block"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/operation"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/kv"
@@ -37,9 +38,7 @@ import (
var eth1DataNotification bool
const (
// CouldNotDecodeBlock means that a signed beacon block couldn't be created from the block present in the request.
CouldNotDecodeBlock = "Could not decode block"
eth1dataTimeout = 2 * time.Second
eth1dataTimeout = 2 * time.Second
)
// GetBeaconBlock is called by a proposer during its assigned slot to request a block to sign
@@ -69,6 +68,11 @@ func (vs *Server) GetBeaconBlock(ctx context.Context, req *ethpb.BlockRequest) (
parentRoot := vs.ForkchoiceFetcher.GetProposerHead()
if parentRoot != headRoot {
blockchain.LateBlockAttemptedReorgCount.Inc()
log.WithFields(logrus.Fields{
"slot": req.Slot,
"parentRoot": fmt.Sprintf("%#x", parentRoot),
"headRoot": fmt.Sprintf("%#x", headRoot),
}).Warn("late block attempted reorg failed")
}
// An optimistic validator MUST NOT produce a block (i.e., sign across the DOMAIN_BEACON_PROPOSER domain).
@@ -191,129 +195,171 @@ func (vs *Server) BuildBlockParallel(ctx context.Context, sBlk interfaces.Signed
return nil
}
// ProposeBeaconBlock is called by a proposer during its assigned slot to create a block in an attempt
// to get it processed by the beacon node as the canonical head.
// ProposeBeaconBlock handles the proposal of beacon blocks.
func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSignedBeaconBlock) (*ethpb.ProposeResponse, error) {
ctx, span := trace.StartSpan(ctx, "ProposerServer.ProposeBeaconBlock")
defer span.End()
blk, err := blocks.NewSignedBeaconBlock(req.Block)
if err != nil {
return nil, status.Errorf(codes.InvalidArgument, "%s: %v", CouldNotDecodeBlock, err)
if req == nil {
return nil, status.Errorf(codes.InvalidArgument, "empty request")
}
unblinder, err := newUnblinder(blk, vs.BlockBuilder)
block, err := blocks.NewSignedBeaconBlock(req.Block)
if err != nil {
return nil, errors.Wrap(err, "could not create unblinder")
}
blinded := unblinder.b.IsBlinded() //
var scs []*ethpb.BlobSidecar
blk, scs, err = unblinder.unblindBuilderBlock(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not unblind builder block")
return nil, status.Errorf(codes.InvalidArgument, "%s: %v", "decode block failed", err)
}
// Broadcast the new block to the network.
blkPb, err := blk.Proto()
if err != nil {
return nil, errors.Wrap(err, "could not get protobuf block")
var sidecars []*ethpb.BlobSidecar
if block.IsBlinded() {
block, sidecars, err = vs.handleBlindedBlock(ctx, block)
} else {
sidecars, err = vs.handleUnblindedBlock(block, req)
}
if err := vs.P2P.Broadcast(ctx, blkPb); err != nil {
return nil, fmt.Errorf("could not broadcast block: %v", err)
if err != nil {
return nil, status.Errorf(codes.Internal, "%s: %v", "handle block failed", err)
}
root, err := blk.Block().HashTreeRoot()
root, err := block.Block().HashTreeRoot()
if err != nil {
return nil, fmt.Errorf("could not tree hash block: %v", err)
return nil, status.Errorf(codes.Internal, "Could not hash tree root: %v", err)
}
log.WithFields(logrus.Fields{
"blockRoot": hex.EncodeToString(root[:]),
}).Debug("Broadcasting block")
if blk.Version() >= version.Deneb {
if !blinded {
dbBlockContents := req.GetDeneb()
if dbBlockContents == nil {
return nil, errors.New("signed beacon block contents is empty")
}
scs, err = buildBlobSidecars(blk, dbBlockContents.Blobs, dbBlockContents.KzgProofs)
if err != nil {
return nil, fmt.Errorf("could not build blob sidecars: %v", err)
}
}
for i, sc := range scs {
if err := vs.P2P.BroadcastBlob(ctx, uint64(i), sc); err != nil {
log.WithError(err).Error("Could not broadcast blob")
}
readOnlySc, err := blocks.NewROBlobWithRoot(sc, root)
if err != nil {
return nil, fmt.Errorf("could not create ROBlob: %v", err)
}
verifiedSc := blocks.NewVerifiedROBlob(readOnlySc)
if err := vs.BlobReceiver.ReceiveBlob(ctx, verifiedSc); err != nil {
log.WithError(err).Error("Could not receive blob")
}
var wg sync.WaitGroup
errChan := make(chan error, 1)
wg.Add(1)
go func() {
defer wg.Done()
if err := vs.broadcastReceiveBlock(ctx, block, root); err != nil {
errChan <- errors.Wrap(err, "broadcast/receive block failed")
return
}
errChan <- nil
}()
if err := vs.broadcastAndReceiveBlobs(ctx, sidecars, root); err != nil {
return nil, status.Errorf(codes.Internal, "Could not broadcast/receive blobs: %v", err)
}
if err := vs.BlockReceiver.ReceiveBlock(ctx, blk, root); err != nil {
return nil, fmt.Errorf("could not process beacon block: %v", err)
wg.Wait()
if err := <-errChan; err != nil {
return nil, status.Errorf(codes.Internal, "Could not broadcast/receive block: %v", err)
}
log.WithField("slot", blk.Block().Slot()).Debugf(
"Block proposal received via RPC")
return &ethpb.ProposeResponse{BlockRoot: root[:]}, nil
}
// handleBlindedBlock processes blinded beacon blocks.
func (vs *Server) handleBlindedBlock(ctx context.Context, block interfaces.SignedBeaconBlock) (interfaces.SignedBeaconBlock, []*ethpb.BlobSidecar, error) {
if block.Version() < version.Bellatrix {
return nil, nil, errors.New("pre-Bellatrix blinded block")
}
if vs.BlockBuilder == nil || !vs.BlockBuilder.Configured() {
return nil, nil, errors.New("unconfigured block builder")
}
copiedBlock, err := block.Copy()
if err != nil {
return nil, nil, err
}
payload, bundle, err := vs.BlockBuilder.SubmitBlindedBlock(ctx, block)
if err != nil {
return nil, nil, errors.Wrap(err, "submit blinded block failed")
}
if err := copiedBlock.Unblind(payload); err != nil {
return nil, nil, errors.Wrap(err, "unblind failed")
}
sidecars, err := unblindBlobsSidecars(copiedBlock, bundle)
if err != nil {
return nil, nil, errors.Wrap(err, "unblind sidecars failed")
}
return copiedBlock, sidecars, nil
}
// handleUnblindedBlock processes unblinded beacon blocks.
func (vs *Server) handleUnblindedBlock(block interfaces.SignedBeaconBlock, req *ethpb.GenericSignedBeaconBlock) ([]*ethpb.BlobSidecar, error) {
dbBlockContents := req.GetDeneb()
if dbBlockContents == nil {
return nil, nil
}
return buildBlobSidecars(block, dbBlockContents.Blobs, dbBlockContents.KzgProofs)
}
// broadcastReceiveBlock broadcasts a block and handles its reception.
func (vs *Server) broadcastReceiveBlock(ctx context.Context, block interfaces.SignedBeaconBlock, root [32]byte) error {
protoBlock, err := block.Proto()
if err != nil {
return errors.Wrap(err, "protobuf conversion failed")
}
if err := vs.P2P.Broadcast(ctx, protoBlock); err != nil {
return errors.Wrap(err, "broadcast failed")
}
vs.BlockNotifier.BlockFeed().Send(&feed.Event{
Type: blockfeed.ReceivedBlock,
Data: &blockfeed.ReceivedBlockData{SignedBlock: blk},
Data: &blockfeed.ReceivedBlockData{SignedBlock: block},
})
return vs.BlockReceiver.ReceiveBlock(ctx, block, root)
}
return &ethpb.ProposeResponse{
BlockRoot: root[:],
}, nil
// broadcastAndReceiveBlobs handles the broadcasting and reception of blob sidecars.
func (vs *Server) broadcastAndReceiveBlobs(ctx context.Context, sidecars []*ethpb.BlobSidecar, root [32]byte) error {
for i, sc := range sidecars {
if err := vs.P2P.BroadcastBlob(ctx, uint64(i), sc); err != nil {
return errors.Wrap(err, "broadcast blob failed")
}
readOnlySc, err := blocks.NewROBlobWithRoot(sc, root)
if err != nil {
return errors.Wrap(err, "ROBlob creation failed")
}
verifiedBlob := blocks.NewVerifiedROBlob(readOnlySc)
if err := vs.BlobReceiver.ReceiveBlob(ctx, verifiedBlob); err != nil {
return errors.Wrap(err, "receive blob failed")
}
vs.OperationNotifier.OperationFeed().Send(&feed.Event{
Type: operation.BlobSidecarReceived,
Data: &operation.BlobSidecarReceivedData{Blob: &verifiedBlob},
})
}
return nil
}
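ProposeBeaconBlock now broadcasts and receives the block on a separate goroutine while the blob sidecars are handled on the calling goroutine, then joins on a one-element error channel; a blob failure returns before the join. A self-contained sketch of that coordination pattern (the work functions are placeholders, not the Prysm API):

package main

import (
	"errors"
	"fmt"
	"sync"
)

func broadcastBlock() error { return nil }                       // stand-in for broadcastReceiveBlock
func broadcastBlobs() error { return errors.New("blob failed") } // stand-in for broadcastAndReceiveBlobs

func main() {
	var wg sync.WaitGroup
	errChan := make(chan error, 1) // buffered so the goroutine never blocks on send

	wg.Add(1)
	go func() {
		defer wg.Done()
		errChan <- broadcastBlock()
	}()

	// Blobs are handled on the calling goroutine; a failure here returns immediately,
	// as in the diffed code.
	if err := broadcastBlobs(); err != nil {
		fmt.Println("blobs:", err)
		return
	}

	wg.Wait()
	if err := <-errChan; err != nil {
		fmt.Println("block:", err)
	}
}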
// PrepareBeaconProposer caches and updates the fee recipient for the given proposer.
func (vs *Server) PrepareBeaconProposer(
ctx context.Context, request *ethpb.PrepareBeaconProposerRequest,
_ context.Context, request *ethpb.PrepareBeaconProposerRequest,
) (*emptypb.Empty, error) {
ctx, span := trace.StartSpan(ctx, "validator.PrepareBeaconProposer")
defer span.End()
var feeRecipients []common.Address
var validatorIndices []primitives.ValidatorIndex
newRecipients := make([]*ethpb.PrepareBeaconProposerRequest_FeeRecipientContainer, 0, len(request.Recipients))
for _, r := range request.Recipients {
f, err := vs.BeaconDB.FeeRecipientByValidatorID(ctx, r.ValidatorIndex)
switch {
case errors.Is(err, kv.ErrNotFoundFeeRecipient):
newRecipients = append(newRecipients, r)
case err != nil:
return nil, status.Errorf(codes.Internal, "Could not get fee recipient by validator index: %v", err)
default:
if common.BytesToAddress(r.FeeRecipient) != f {
newRecipients = append(newRecipients, r)
}
}
}
if len(newRecipients) == 0 {
return &emptypb.Empty{}, nil
}
for _, recipientContainer := range newRecipients {
recipient := hexutil.Encode(recipientContainer.FeeRecipient)
recipient := hexutil.Encode(r.FeeRecipient)
if !common.IsHexAddress(recipient) {
return nil, status.Errorf(codes.InvalidArgument, fmt.Sprintf("Invalid fee recipient address: %v", recipient))
}
feeRecipients = append(feeRecipients, common.BytesToAddress(recipientContainer.FeeRecipient))
validatorIndices = append(validatorIndices, recipientContainer.ValidatorIndex)
// Use the default address if the burn address is returned
feeRecipient := primitives.ExecutionAddress(r.FeeRecipient)
if feeRecipient == primitives.ExecutionAddress([20]byte{}) {
feeRecipient = primitives.ExecutionAddress(params.BeaconConfig().DefaultFeeRecipient)
if feeRecipient == primitives.ExecutionAddress([20]byte{}) {
log.WithField("validatorIndex", r.ValidatorIndex).Warn("fee recipient is the burn address")
}
}
val := cache.TrackedValidator{
Active: true, // TODO: either check or add the field in the request
Index: r.ValidatorIndex,
FeeRecipient: feeRecipient,
}
vs.TrackedValidatorsCache.Set(val)
validatorIndices = append(validatorIndices, r.ValidatorIndex)
}
if err := vs.BeaconDB.SaveFeeRecipientsByValidatorIDs(ctx, validatorIndices, feeRecipients); err != nil {
return nil, status.Errorf(codes.Internal, "Could not save fee recipients: %v", err)
if len(validatorIndices) != 0 {
log.WithFields(logrus.Fields{
"validatorCount": len(validatorIndices),
}).Info("Updated fee recipient addresses for validator indices")
}
log.WithFields(logrus.Fields{
"validatorIndices": validatorIndices,
}).Info("Updated fee recipient addresses for validator indices")
return &emptypb.Empty{}, nil
}
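Both PrepareBeaconProposer paths now feed a TrackedValidatorsCache instead of persisting fee recipients to the database, and the payload-building path reads the recipient back from that cache. A simplified, self-contained stand-in for such a cache (illustrative only, not the Prysm implementation):

package main

import (
	"fmt"
	"sync"
)

type trackedValidator struct {
	Active       bool
	Index        uint64
	FeeRecipient [20]byte
}

// trackedValidatorsCache is a concurrency-safe map keyed by validator index,
// mirroring the role the real cache plays in this change.
type trackedValidatorsCache struct {
	mu   sync.RWMutex
	vals map[uint64]trackedValidator
}

func newTrackedValidatorsCache() *trackedValidatorsCache {
	return &trackedValidatorsCache{vals: make(map[uint64]trackedValidator)}
}

func (c *trackedValidatorsCache) Set(v trackedValidator) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.vals[v.Index] = v
}

func (c *trackedValidatorsCache) Validator(idx uint64) (trackedValidator, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.vals[idx]
	return v, ok
}

func main() {
	c := newTrackedValidatorsCache()
	c.Set(trackedValidator{Active: true, Index: 1, FeeRecipient: [20]byte{0xaa}})
	v, ok := c.Validator(1)
	fmt.Println(ok, v.FeeRecipient[0]) // true 170
}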


@@ -86,10 +86,8 @@ func setExecutionData(ctx context.Context, blk interfaces.SignedBeaconBlock, loc
// If we can't get the builder value, just use local block.
if higherValueBuilder && withdrawalsMatched { // Builder value is higher and withdrawals match.
blk.SetBlinded(true)
if err := setBuilderExecution(blk, builderPayload, builderKzgCommitments); err != nil {
log.WithError(err).Warn("Proposer: failed to set builder payload")
blk.SetBlinded(false)
return setLocalExecution(blk, localPayload)
} else {
return nil
@@ -110,10 +108,8 @@ func setExecutionData(ctx context.Context, blk interfaces.SignedBeaconBlock, loc
)
return setLocalExecution(blk, localPayload)
default: // Bellatrix case.
blk.SetBlinded(true)
if err := setBuilderExecution(blk, builderPayload, builderKzgCommitments); err != nil {
log.WithError(err).Warn("Proposer: failed to set builder payload")
blk.SetBlinded(false)
return setLocalExecution(blk, localPayload)
} else {
return nil
@@ -315,9 +311,6 @@ func setExecution(blk interfaces.SignedBeaconBlock, execution interfaces.Executi
return errors.New("execution is nil")
}
// Set the blinded status of the block
blk.SetBlinded(isBlinded)
// Set the execution data for the block
errMessage := "failed to set local execution"
if isBlinded {


@@ -81,9 +81,10 @@ func TestServer_setExecutionData(t *testing.T) {
HeadFetcher: &blockchainTest.ChainService{State: capellaTransitionState},
FinalizationFetcher: &blockchainTest.ChainService{},
BeaconDB: beaconDB,
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
PayloadIDCache: cache.NewPayloadIDCache(),
BlockBuilder: &builderTest.MockBuilderService{HasConfigured: true, Cfg: &builderTest.Config{BeaconDB: beaconDB}},
ForkchoiceFetcher: &blockchainTest.ChainService{},
TrackedValidatorsCache: cache.NewTrackedValidatorsCache(),
}
t.Run("No builder configured. Use local block", func(t *testing.T) {


@@ -57,7 +57,7 @@ func (c *blobsBundleCache) prune(minSlot primitives.Slot) {
}
// buildBlobSidecars given a block, builds the blob sidecars for the block.
func buildBlobSidecars(blk interfaces.SignedBeaconBlock, blobs [][]byte, kzgproofs [][]byte) ([]*ethpb.BlobSidecar, error) {
func buildBlobSidecars(blk interfaces.SignedBeaconBlock, blobs [][]byte, kzgProofs [][]byte) ([]*ethpb.BlobSidecar, error) {
if blk.Version() < version.Deneb {
return nil, nil // No blobs before deneb.
}
@@ -66,7 +66,7 @@ func buildBlobSidecars(blk interfaces.SignedBeaconBlock, blobs [][]byte, kzgproo
return nil, err
}
cLen := len(denebBlk.Block.Body.BlobKzgCommitments)
if cLen != len(blobs) || cLen != len(kzgproofs) {
if cLen != len(blobs) || cLen != len(kzgProofs) {
return nil, errors.New("blob KZG commitments don't match number of blobs or KZG proofs")
}
blobSidecars := make([]*ethpb.BlobSidecar, cLen)
@@ -84,7 +84,7 @@ func buildBlobSidecars(blk interfaces.SignedBeaconBlock, blobs [][]byte, kzgproo
Index: uint64(i),
Blob: blobs[i],
KzgCommitment: denebBlk.Block.Body.BlobKzgCommitments[i],
KzgProof: kzgproofs[i],
KzgProof: kzgProofs[i],
SignedBlockHeader: header,
CommitmentInclusionProof: proof,
}
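buildBlobSidecars requires the block's KZG commitments, the supplied blobs, and the supplied proofs to all have equal length, pairing them index by index into sidecars. A tiny illustration of that invariant and pairing (local types only, not the Prysm structures):

package main

import (
	"errors"
	"fmt"
)

type sidecar struct {
	Index      uint64
	Blob       []byte
	Commitment []byte
	Proof      []byte
}

// pairSidecars mirrors the length check and index-by-index pairing above.
func pairSidecars(commitments, blobs, proofs [][]byte) ([]sidecar, error) {
	if len(commitments) != len(blobs) || len(commitments) != len(proofs) {
		return nil, errors.New("blob KZG commitments don't match number of blobs or KZG proofs")
	}
	out := make([]sidecar, len(commitments))
	for i := range commitments {
		out[i] = sidecar{Index: uint64(i), Blob: blobs[i], Commitment: commitments[i], Proof: proofs[i]}
	}
	return out, nil
}

func main() {
	scs, err := pairSidecars([][]byte{{1}}, [][]byte{{2}}, [][]byte{{3}})
	fmt.Println(len(scs), err) // 1 <nil>
}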


@@ -5,14 +5,12 @@ import (
"context"
"fmt"
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/kv"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
@@ -53,44 +51,36 @@ func (vs *Server) getLocalPayload(ctx context.Context, blk interfaces.ReadOnlyBe
slot := blk.Slot()
vIdx := blk.ProposerIndex()
headRoot := blk.ParentRoot()
proposerID, payloadId, ok := vs.ProposerSlotIndexCache.GetProposerPayloadIDs(slot, headRoot)
feeRecipient := params.BeaconConfig().DefaultFeeRecipient
recipient, err := vs.BeaconDB.FeeRecipientByValidatorID(ctx, vIdx)
switch err == nil {
case true:
feeRecipient = recipient
case errors.As(err, kv.ErrNotFoundFeeRecipient):
// If fee recipient is not found in DB and not set from beacon node CLI,
// use the burn address.
if feeRecipient.String() == params.BeaconConfig().EthBurnAddressHex {
logrus.WithFields(logrus.Fields{
"validatorIndex": vIdx,
"burnAddress": params.BeaconConfig().EthBurnAddressHex,
}).Warn("Fee recipient is currently using the burn address, " +
"you will not be rewarded transaction fees on this setting. " +
"Please set a different eth address as the fee recipient. " +
"Please refer to our documentation for instructions")
}
default:
return nil, false, errors.Wrap(err, "could not get fee recipient in db")
logFields := logrus.Fields{
"validatorIndex": vIdx,
"slot": slot,
"headRoot": fmt.Sprintf("%#x", headRoot),
}
payloadId, ok := vs.PayloadIDCache.PayloadID(slot, headRoot)
val, tracked := vs.TrackedValidatorsCache.Validator(vIdx)
if !tracked {
logrus.WithFields(logFields).Warn("could not find tracked proposer index")
}
if ok && proposerID == vIdx && payloadId != [8]byte{} { // Payload ID is cache hit. Return the cached payload ID.
var pid [8]byte
var err error
if ok && payloadId != [8]byte{} {
// Payload ID is cache hit. Return the cached payload ID.
var pid primitives.PayloadID
copy(pid[:], payloadId[:])
payloadIDCacheHit.Inc()
payload, bundle, overrideBuilder, err := vs.ExecutionEngineCaller.GetPayload(ctx, pid, slot)
switch {
case err == nil:
bundleCache.add(slot, bundle)
warnIfFeeRecipientDiffers(payload, feeRecipient)
warnIfFeeRecipientDiffers(payload, val.FeeRecipient)
return payload, overrideBuilder, nil
case errors.Is(err, context.DeadlineExceeded):
default:
return nil, false, errors.Wrap(err, "could not get cached payload from execution client")
}
}
log.WithFields(logFields).Debug("payload ID cache miss")
parentHash, err := vs.getParentBlockHash(ctx, st, slot)
switch {
case errors.Is(err, errActivationNotReached) || errors.Is(err, errNoTerminalBlockHash):
@@ -112,7 +102,7 @@ func (vs *Server) getLocalPayload(ctx context.Context, blk interfaces.ReadOnlyBe
finalizedBlockHash := [32]byte{}
justifiedBlockHash := [32]byte{}
// Blocks before Bellatrix don't have execution payloads. Use zeros as the hash.
if st.Version() >= version.Altair {
if st.Version() >= version.Bellatrix {
finalizedBlockHash = vs.FinalizationFetcher.FinalizedBlockHash()
justifiedBlockHash = vs.FinalizationFetcher.UnrealizedJustifiedPayloadBlockHash()
}
@@ -137,7 +127,7 @@ func (vs *Server) getLocalPayload(ctx context.Context, blk interfaces.ReadOnlyBe
attr, err = payloadattribute.New(&enginev1.PayloadAttributesV3{
Timestamp: uint64(t.Unix()),
PrevRandao: random,
SuggestedFeeRecipient: feeRecipient.Bytes(),
SuggestedFeeRecipient: val.FeeRecipient[:],
Withdrawals: withdrawals,
ParentBeaconBlockRoot: headRoot[:],
})
@@ -152,7 +142,7 @@ func (vs *Server) getLocalPayload(ctx context.Context, blk interfaces.ReadOnlyBe
attr, err = payloadattribute.New(&enginev1.PayloadAttributesV2{
Timestamp: uint64(t.Unix()),
PrevRandao: random,
SuggestedFeeRecipient: feeRecipient.Bytes(),
SuggestedFeeRecipient: val.FeeRecipient[:],
Withdrawals: withdrawals,
})
if err != nil {
@@ -162,7 +152,7 @@ func (vs *Server) getLocalPayload(ctx context.Context, blk interfaces.ReadOnlyBe
attr, err = payloadattribute.New(&enginev1.PayloadAttributes{
Timestamp: uint64(t.Unix()),
PrevRandao: random,
SuggestedFeeRecipient: feeRecipient.Bytes(),
SuggestedFeeRecipient: val.FeeRecipient[:],
})
if err != nil {
return nil, false, err
@@ -182,13 +172,17 @@ func (vs *Server) getLocalPayload(ctx context.Context, blk interfaces.ReadOnlyBe
return nil, false, err
}
bundleCache.add(slot, bundle)
warnIfFeeRecipientDiffers(payload, feeRecipient)
warnIfFeeRecipientDiffers(payload, val.FeeRecipient)
localValueGwei, err := payload.ValueInGwei()
if err == nil {
log.WithField("value", localValueGwei).Debug("received execution payload from local engine")
}
return payload, overrideBuilder, nil
}
// warnIfFeeRecipientDiffers logs a warning if the fee recipient in the included payload does not
// match the requested one.
func warnIfFeeRecipientDiffers(payload interfaces.ExecutionData, feeRecipient common.Address) {
func warnIfFeeRecipientDiffers(payload interfaces.ExecutionData, feeRecipient primitives.ExecutionAddress) {
// Warn if the fee recipient is not the value we expect.
if payload != nil && !bytes.Equal(payload.FeeRecipient(), feeRecipient[:]) {
logrus.WithFields(logrus.Fields{
@@ -299,7 +293,7 @@ func getParentBlockHashPostMerge(st state.BeaconState) ([]byte, error) {
if err != nil {
return nil, errors.Wrap(err, "could not get post merge payload header")
}
return header.ParentHash(), nil
return header.BlockHash(), nil
}
// getParentBlockHashPreMerge retrieves the parent block hash before the merge has completed.

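The net effect of the changes above: getLocalPayload no longer pairs the ProposerSlotIndexCache with a fee-recipient lookup in the beacon DB; it asks a payload ID cache keyed by (slot, head root) for a cached build and takes the fee recipient from a tracked validators cache keyed by validator index, so the payload attributes use val.FeeRecipient throughout. Below is a minimal, runnable Go sketch of that lookup order; the types and method bodies are simplified stand-ins, not Prysm's actual cache package.

// Simplified stand-ins for Prysm's PayloadIDCache and TrackedValidatorsCache;
// field and method names here are illustrative only.
package main

import "fmt"

type payloadID [8]byte

type payloadIDCache struct {
	ids map[uint64]map[[32]byte]payloadID // slot -> head root -> payload ID
}

func (c *payloadIDCache) PayloadID(slot uint64, root [32]byte) (payloadID, bool) {
	id, ok := c.ids[slot][root]
	return id, ok
}

type trackedValidator struct {
	Active       bool
	FeeRecipient [20]byte
}

type trackedValidatorsCache struct {
	vals map[uint64]trackedValidator // validator index -> tracked data
}

func (c *trackedValidatorsCache) Validator(idx uint64) (trackedValidator, bool) {
	v, ok := c.vals[idx]
	return v, ok
}

// getLocalPayloadSketch mirrors the new flow: look up a cached payload ID for
// (slot, head root), look up the proposer in the tracked validators cache, and
// branch on the cache hit; the fee recipient always comes from the cache entry.
func getLocalPayloadSketch(pc *payloadIDCache, tc *trackedValidatorsCache, slot uint64, headRoot [32]byte, vIdx uint64) string {
	pid, ok := pc.PayloadID(slot, headRoot)
	val, tracked := tc.Validator(vIdx)
	if !tracked {
		fmt.Println("warn: could not find tracked proposer index")
	}
	if ok && pid != (payloadID{}) {
		return fmt.Sprintf("cache hit: engine getPayload(%x), fee recipient %x", pid, val.FeeRecipient)
	}
	return fmt.Sprintf("cache miss: build new payload attributes, fee recipient %x", val.FeeRecipient)
}

func main() {
	pc := &payloadIDCache{ids: map[uint64]map[[32]byte]payloadID{
		7: {{0xaa}: {0x01}},
	}}
	tc := &trackedValidatorsCache{vals: map[uint64]trackedValidator{
		2: {Active: true, FeeRecipient: [20]byte{0xde, 0xad}},
	}}
	fmt.Println(getLocalPayloadSketch(pc, tc, 7, [32]byte{0xaa}, 2)) // hit
	fmt.Println(getLocalPayloadSketch(pc, tc, 8, [32]byte{0xbb}, 2)) // miss
}

On a cache hit the server can go straight to the engine's getPayload with the stored ID; on a miss it falls back to building new payload attributes, and in both paths the fee recipient now comes from the tracked validator entry rather than a DB read.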

@@ -71,8 +71,6 @@ func TestServer_getExecutionPayload(t *testing.T) {
Root: b2rCapella[:],
}))
require.NoError(t, beaconDB.SaveFeeRecipientsByValidatorIDs(context.Background(), []primitives.ValidatorIndex{0}, []common.Address{{}}))
tests := []struct {
name string
st state.BeaconState
@@ -86,9 +84,10 @@ func TestServer_getExecutionPayload(t *testing.T) {
wantedOverride bool
}{
{
name: "transition completed, nil payload id",
st: transitionSt,
errString: "nil payload with block hash",
name: "transition completed, nil payload id",
st: transitionSt,
validatorIndx: 2,
errString: "nil payload with block hash",
},
{
name: "transition completed, happy case (has fee recipient in Db)",
@@ -110,6 +109,7 @@ func TestServer_getExecutionPayload(t *testing.T) {
{
name: "transition completed, happy case, (payload ID cached)",
st: transitionSt,
payloadID: &pb.PayloadIDBytes{0x1},
validatorIndx: 100,
},
{
@@ -134,6 +134,7 @@ func TestServer_getExecutionPayload(t *testing.T) {
st: transitionSt,
validatorIndx: 100,
override: true,
payloadID: &pb.PayloadIDBytes{0x1},
wantedOverride: true,
},
}
@@ -149,9 +150,13 @@ func TestServer_getExecutionPayload(t *testing.T) {
HeadFetcher: &chainMock.ChainService{State: tt.st},
FinalizationFetcher: &chainMock.ChainService{},
BeaconDB: beaconDB,
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
PayloadIDCache: cache.NewPayloadIDCache(),
TrackedValidatorsCache: cache.NewTrackedValidatorsCache(),
}
vs.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, Index: tt.validatorIndx})
if tt.payloadID != nil {
vs.PayloadIDCache.Set(tt.st.Slot(), [32]byte{'a'}, [8]byte(*tt.payloadID))
}
vs.ProposerSlotIndexCache.SetProposerAndPayloadIDs(tt.st.Slot(), 100, [8]byte{100}, [32]byte{'a'})
blk := util.NewBeaconBlockBellatrix()
blk.Block.Slot = tt.st.Slot()
blk.Block.ProposerIndex = tt.validatorIndx
@@ -192,9 +197,10 @@ func TestServer_getExecutionPayloadContextTimeout(t *testing.T) {
ExecutionEngineCaller: &powtesting.EngineClient{PayloadIDBytes: &pb.PayloadIDBytes{}, ErrGetPayload: context.DeadlineExceeded, ExecutionPayload: &pb.ExecutionPayload{}},
HeadFetcher: &chainMock.ChainService{State: nonTransitionSt},
BeaconDB: beaconDB,
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
PayloadIDCache: cache.NewPayloadIDCache(),
TrackedValidatorsCache: cache.NewTrackedValidatorsCache(),
}
vs.ProposerSlotIndexCache.SetProposerAndPayloadIDs(nonTransitionSt.Slot(), 100, [8]byte{100}, [32]byte{'a'})
vs.PayloadIDCache.Set(nonTransitionSt.Slot(), [32]byte{'a'}, [8]byte{100})
blk := util.NewBeaconBlockBellatrix()
blk.Block.Slot = nonTransitionSt.Slot()
@@ -231,10 +237,6 @@ func TestServer_getExecutionPayload_UnexpectedFeeRecipient(t *testing.T) {
}))
feeRecipient := common.BytesToAddress([]byte("a"))
require.NoError(t, beaconDB.SaveFeeRecipientsByValidatorIDs(context.Background(), []primitives.ValidatorIndex{0}, []common.Address{
feeRecipient,
}))
payloadID := &pb.PayloadIDBytes{0x1}
payload := emptyPayload()
payload.FeeRecipient = feeRecipient[:]
@@ -246,8 +248,15 @@ func TestServer_getExecutionPayload_UnexpectedFeeRecipient(t *testing.T) {
HeadFetcher: &chainMock.ChainService{State: transitionSt},
FinalizationFetcher: &chainMock.ChainService{},
BeaconDB: beaconDB,
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
PayloadIDCache: cache.NewPayloadIDCache(),
TrackedValidatorsCache: cache.NewTrackedValidatorsCache(),
}
val := cache.TrackedValidator{
Active: true,
FeeRecipient: primitives.ExecutionAddress(feeRecipient),
Index: 0,
}
vs.TrackedValidatorsCache.Set(val)
blk := util.NewBeaconBlockBellatrix()
blk.Block.Slot = transitionSt.Slot()
@@ -257,6 +266,7 @@ func TestServer_getExecutionPayload_UnexpectedFeeRecipient(t *testing.T) {
gotPayload, _, err := vs.getLocalPayload(context.Background(), b.Block(), transitionSt)
require.NoError(t, err)
require.NotNil(t, gotPayload)
require.Equal(t, common.Address(gotPayload.FeeRecipient()), feeRecipient)
// We should NOT be getting the warning.
require.LogsDoNotContain(t, hook, "Fee recipient address from execution client is not what was expected")
@@ -264,7 +274,7 @@ func TestServer_getExecutionPayload_UnexpectedFeeRecipient(t *testing.T) {
evilRecipientAddress := common.BytesToAddress([]byte("evil"))
payload.FeeRecipient = evilRecipientAddress[:]
vs.ProposerSlotIndexCache = cache.NewProposerPayloadIDsCache()
vs.PayloadIDCache = cache.NewPayloadIDCache()
gotPayload, _, err = vs.getLocalPayload(context.Background(), b.Block(), transitionSt)
require.NoError(t, err)


@@ -195,7 +195,6 @@ func TestServer_GetBeaconBlock_Altair(t *testing.T) {
func TestServer_GetBeaconBlock_Bellatrix(t *testing.T) {
db := dbutil.SetupDB(t)
ctx := context.Background()
hook := logTest.NewGlobal()
terminalBlockHash := bytesutil.PadTo([]byte{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, 32)
@@ -306,7 +305,6 @@ func TestServer_GetBeaconBlock_Bellatrix(t *testing.T) {
assert.DeepEqual(t, randaoReveal, bellatrixBlk.Bellatrix.Body.RandaoReveal, "Expected block to have correct randao reveal")
assert.DeepEqual(t, req.Graffiti, bellatrixBlk.Bellatrix.Body.Graffiti, "Expected block to have correct Graffiti")
require.LogsContain(t, hook, "Fee recipient is currently using the burn address")
require.DeepEqual(t, payload, bellatrixBlk.Bellatrix.Body.ExecutionPayload) // Payload should equal.
// Operator sets default fee recipient to not be burned through beacon node cli.
@@ -599,7 +597,8 @@ func getProposerServer(db db.HeadAccessDatabase, headState state.BeaconState, he
TimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
PayloadIDCache: cache.NewPayloadIDCache(),
TrackedValidatorsCache: cache.NewTrackedValidatorsCache(),
BeaconDB: db,
BLSChangesPool: blstoexec.NewPool(),
BlockBuilder: &builderTest.MockBuilderService{HasConfigured: true},
@@ -629,9 +628,10 @@ func injectSlashings(t *testing.T, st state.BeaconState, keys []bls.SecretKey, s
func TestProposer_ProposeBlock_OK(t *testing.T) {
tests := []struct {
name string
block func([32]byte) *ethpb.GenericSignedBeaconBlock
err string
name string
block func([32]byte) *ethpb.GenericSignedBeaconBlock
err string
useBuilder bool
}{
{
name: "phase0",
@@ -678,6 +678,24 @@ func TestProposer_ProposeBlock_OK(t *testing.T) {
blk := &ethpb.GenericSignedBeaconBlock_BlindedCapella{BlindedCapella: blockToPropose}
return &ethpb.GenericSignedBeaconBlock{Block: blk}
},
useBuilder: true,
},
{
name: "blind capella no builder",
block: func(parent [32]byte) *ethpb.GenericSignedBeaconBlock {
blockToPropose := util.NewBlindedBeaconBlockCapella()
blockToPropose.Block.Slot = 5
blockToPropose.Block.ParentRoot = parent[:]
txRoot, err := ssz.TransactionsRoot([][]byte{})
require.NoError(t, err)
withdrawalsRoot, err := ssz.WithdrawalSliceRoot([]*enginev1.Withdrawal{}, fieldparams.MaxWithdrawalsPerPayload)
require.NoError(t, err)
blockToPropose.Block.Body.ExecutionPayloadHeader.TransactionsRoot = txRoot[:]
blockToPropose.Block.Body.ExecutionPayloadHeader.WithdrawalsRoot = withdrawalsRoot[:]
blk := &ethpb.GenericSignedBeaconBlock_BlindedCapella{BlindedCapella: blockToPropose}
return &ethpb.GenericSignedBeaconBlock{Block: blk}
},
err: "unconfigured block builder",
},
{
name: "bellatrix",
@@ -699,6 +717,69 @@ func TestProposer_ProposeBlock_OK(t *testing.T) {
return &ethpb.GenericSignedBeaconBlock{Block: blk}
},
},
{
name: "deneb block some blobs",
block: func(parent [32]byte) *ethpb.GenericSignedBeaconBlock {
blockToPropose := util.NewBeaconBlockContentsDeneb()
blockToPropose.Block.Block.Slot = 5
blockToPropose.Block.Block.ParentRoot = parent[:]
blockToPropose.Blobs = [][]byte{{0x01}, {0x02}, {0x03}}
blockToPropose.KzgProofs = [][]byte{{0x01}, {0x02}, {0x03}}
blockToPropose.Block.Block.Body.BlobKzgCommitments = [][]byte{bytesutil.PadTo([]byte("kc"), 48), bytesutil.PadTo([]byte("kc1"), 48), bytesutil.PadTo([]byte("kc2"), 48)}
blk := &ethpb.GenericSignedBeaconBlock_Deneb{Deneb: blockToPropose}
return &ethpb.GenericSignedBeaconBlock{Block: blk}
},
},
{
name: "deneb block some blobs (kzg and blob count missmatch)",
block: func(parent [32]byte) *ethpb.GenericSignedBeaconBlock {
blockToPropose := util.NewBeaconBlockContentsDeneb()
blockToPropose.Block.Block.Slot = 5
blockToPropose.Block.Block.ParentRoot = parent[:]
blockToPropose.Blobs = [][]byte{{0x01}, {0x02}, {0x03}}
blockToPropose.KzgProofs = [][]byte{{0x01}, {0x02}, {0x03}}
blk := &ethpb.GenericSignedBeaconBlock_Deneb{Deneb: blockToPropose}
return &ethpb.GenericSignedBeaconBlock{Block: blk}
},
err: "blob KZG commitments don't match number of blobs or KZG proofs",
},
{
name: "blind deneb block some blobs",
block: func(parent [32]byte) *ethpb.GenericSignedBeaconBlock {
blockToPropose := util.NewBlindedBeaconBlockDeneb()
blockToPropose.Message.Slot = 5
blockToPropose.Message.ParentRoot = parent[:]
txRoot, err := ssz.TransactionsRoot([][]byte{})
require.NoError(t, err)
withdrawalsRoot, err := ssz.WithdrawalSliceRoot([]*enginev1.Withdrawal{}, fieldparams.MaxWithdrawalsPerPayload)
require.NoError(t, err)
blockToPropose.Message.Body.ExecutionPayloadHeader.TransactionsRoot = txRoot[:]
blockToPropose.Message.Body.ExecutionPayloadHeader.WithdrawalsRoot = withdrawalsRoot[:]
blockToPropose.Message.Body.BlobKzgCommitments = [][]byte{bytesutil.PadTo([]byte{0x01}, 48)}
blk := &ethpb.GenericSignedBeaconBlock_BlindedDeneb{BlindedDeneb: blockToPropose}
return &ethpb.GenericSignedBeaconBlock{Block: blk}
},
useBuilder: true,
},
{
name: "blind deneb block some blobs (commitment value does not match blob)",
block: func(parent [32]byte) *ethpb.GenericSignedBeaconBlock {
blockToPropose := util.NewBlindedBeaconBlockDeneb()
blockToPropose.Message.Slot = 5
blockToPropose.Message.ParentRoot = parent[:]
txRoot, err := ssz.TransactionsRoot([][]byte{})
require.NoError(t, err)
withdrawalsRoot, err := ssz.WithdrawalSliceRoot([]*enginev1.Withdrawal{}, fieldparams.MaxWithdrawalsPerPayload)
require.NoError(t, err)
blockToPropose.Message.Body.ExecutionPayloadHeader.TransactionsRoot = txRoot[:]
blockToPropose.Message.Body.ExecutionPayloadHeader.WithdrawalsRoot = withdrawalsRoot[:]
blockToPropose.Message.Body.BlobKzgCommitments = [][]byte{bytesutil.PadTo([]byte("kc"), 48)}
blk := &ethpb.GenericSignedBeaconBlock_BlindedDeneb{BlindedDeneb: blockToPropose}
return &ethpb.GenericSignedBeaconBlock{Block: blk}
},
useBuilder: true,
err: "unblind sidecars failed: commitment value doesn't match block",
},
}
for _, tt := range tests {
@@ -716,8 +797,11 @@ func TestProposer_ProposeBlock_OK(t *testing.T) {
BlockReceiver: c,
BlockNotifier: c.BlockNotifier(),
P2P: mockp2p.NewTestP2P(t),
BlockBuilder: &builderTest.MockBuilderService{HasConfigured: true, PayloadCapella: emptyPayloadCapella(), PayloadDeneb: emptyPayloadDeneb(), BlobBundle: &enginev1.BlobsBundle{KzgCommitments: [][]byte{{0x01}}, Proofs: [][]byte{{0x02}}, Blobs: [][]byte{{0x03}}}},
BeaconDB: db,
BlockBuilder: &builderTest.MockBuilderService{HasConfigured: tt.useBuilder, PayloadCapella: emptyPayloadCapella(), PayloadDeneb: emptyPayloadDeneb(),
BlobBundle: &enginev1.BlobsBundle{KzgCommitments: [][]byte{bytesutil.PadTo([]byte{0x01}, 48)}, Proofs: [][]byte{{0x02}}, Blobs: [][]byte{{0x03}}}},
BeaconDB: db,
BlobReceiver: c,
OperationNotifier: c.OperationNotifier(),
}
blockToPropose := tt.block(bsRoot)
res, err := proposerServer.ProposeBeaconBlock(context.Background(), blockToPropose)
@@ -2609,16 +2693,19 @@ func TestProposer_PrepareBeaconProposer(t *testing.T) {
t.Run(tt.name, func(t *testing.T) {
db := dbutil.SetupDB(t)
ctx := context.Background()
proposerServer := &Server{BeaconDB: db}
proposerServer := &Server{
BeaconDB: db,
TrackedValidatorsCache: cache.NewTrackedValidatorsCache(),
}
_, err := proposerServer.PrepareBeaconProposer(ctx, tt.args.request)
if tt.wantErr != "" {
require.ErrorContains(t, tt.wantErr, err)
return
}
require.NoError(t, err)
address, err := proposerServer.BeaconDB.FeeRecipientByValidatorID(ctx, 1)
require.NoError(t, err)
require.Equal(t, common.BytesToAddress(tt.args.request.Recipients[0].FeeRecipient), address)
val, tracked := proposerServer.TrackedValidatorsCache.Validator(1)
require.Equal(t, true, tracked)
require.Equal(t, primitives.ExecutionAddress(tt.args.request.Recipients[0].FeeRecipient), val.FeeRecipient)
})
}
@@ -2628,7 +2715,10 @@ func TestProposer_PrepareBeaconProposerOverlapping(t *testing.T) {
hook := logTest.NewGlobal()
db := dbutil.SetupDB(t)
ctx := context.Background()
proposerServer := &Server{BeaconDB: db}
proposerServer := &Server{
BeaconDB: db,
TrackedValidatorsCache: cache.NewTrackedValidatorsCache(),
}
// New validator
f := bytesutil.PadTo([]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF}, fieldparams.FeeRecipientLength)
@@ -2645,7 +2735,7 @@ func TestProposer_PrepareBeaconProposerOverlapping(t *testing.T) {
hook.Reset()
_, err = proposerServer.PrepareBeaconProposer(ctx, req)
require.NoError(t, err)
require.LogsDoNotContain(t, hook, "Updated fee recipient addresses for validator indices")
require.LogsContain(t, hook, "Updated fee recipient addresses for validator indices")
// Same validator with different fee recipient
hook.Reset()
@@ -2676,14 +2766,16 @@ func TestProposer_PrepareBeaconProposerOverlapping(t *testing.T) {
hook.Reset()
_, err = proposerServer.PrepareBeaconProposer(ctx, req)
require.NoError(t, err)
require.LogsDoNotContain(t, hook, "Updated fee recipient addresses for validator indices")
require.LogsContain(t, hook, "Updated fee recipient addresses for validator indices")
}
func BenchmarkServer_PrepareBeaconProposer(b *testing.B) {
db := dbutil.SetupDB(b)
ctx := context.Background()
proposerServer := &Server{BeaconDB: db}
proposerServer := &Server{
BeaconDB: db,
TrackedValidatorsCache: cache.NewTrackedValidatorsCache(),
}
f := bytesutil.PadTo([]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF}, fieldparams.FeeRecipientLength)
recipients := make([]*ethpb.PrepareBeaconProposerRequest_FeeRecipientContainer, 0)
for i := 0; i < 10000; i++ {

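The PrepareBeaconProposer tests above now assert against the TrackedValidatorsCache rather than the beacon DB, and the "Updated fee recipient addresses" log is expected on every call, including repeated identical requests. A minimal sketch of that behavior follows, with hypothetical simplified types; the real handler takes an ethpb.PrepareBeaconProposerRequest and may persist recipients elsewhere as well.

// Hypothetical, simplified types; the real request and cache types live in
// Prysm's ethpb and cache packages.
package main

import "fmt"

type trackedValidator struct {
	Active       bool
	Index        uint64
	FeeRecipient [20]byte
}

type trackedValidatorsCache map[uint64]trackedValidator

type feeRecipientContainer struct {
	ValidatorIndex uint64
	FeeRecipient   [20]byte
}

// prepareBeaconProposerSketch records each requested (index, fee recipient)
// pair in the tracked validators cache, which is what the updated tests check.
func prepareBeaconProposerSketch(c trackedValidatorsCache, recipients []feeRecipientContainer) {
	for _, r := range recipients {
		c[r.ValidatorIndex] = trackedValidator{Active: true, Index: r.ValidatorIndex, FeeRecipient: r.FeeRecipient}
	}
	fmt.Printf("Updated fee recipient addresses for %d validator indices\n", len(recipients))
}

func main() {
	tvc := trackedValidatorsCache{}
	prepareBeaconProposerSketch(tvc, []feeRecipientContainer{
		{ValidatorIndex: 1, FeeRecipient: [20]byte{0xff, 0x01}},
	})
	val, tracked := tvc[1]
	fmt.Println(tracked)                 // true
	fmt.Printf("%x\n", val.FeeRecipient) // ff01...
}

The flipped log expectation in the overlapping-request test is consistent with a handler that unconditionally refreshes the cache on each call instead of skipping validators whose recipient is unchanged.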

@@ -44,7 +44,8 @@ import (
// and more.
type Server struct {
Ctx context.Context
ProposerSlotIndexCache *cache.ProposerPayloadIDsCache
PayloadIDCache *cache.PayloadIDCache
TrackedValidatorsCache *cache.TrackedValidatorsCache
HeadFetcher blockchain.HeadFetcher
ForkFetcher blockchain.ForkFetcher
ForkchoiceFetcher blockchain.ForkchoiceFetcher


@@ -2,125 +2,18 @@ package validator
import (
"bytes"
"context"
"fmt"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/builder"
consensus_types "github.com/prysmaticlabs/prysm/v4/consensus-types"
consensusblocks "github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/v4/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
"github.com/sirupsen/logrus"
"google.golang.org/protobuf/proto"
)
type unblinder struct {
b interfaces.SignedBeaconBlock
builder builder.BlockBuilder
}
func newUnblinder(b interfaces.SignedBeaconBlock, builder builder.BlockBuilder) (*unblinder, error) {
if err := consensusblocks.BeaconBlockIsNil(b); err != nil {
return nil, err
}
if builder == nil {
return nil, errors.New("nil builder provided")
}
return &unblinder{
b: b,
builder: builder,
}, nil
}
func (u *unblinder) unblindBuilderBlock(ctx context.Context) (interfaces.SignedBeaconBlock, []*ethpb.BlobSidecar, error) {
if !u.b.IsBlinded() || u.b.Version() < version.Bellatrix {
return u.b, nil, nil
}
if u.b.IsBlinded() && !u.builder.Configured() {
return nil, nil, errors.New("builder not configured")
}
psb, err := u.blindedProtoBlock()
if err != nil {
return nil, nil, errors.Wrap(err, "could not get blinded proto block")
}
sb, err := consensusblocks.NewSignedBeaconBlock(psb)
if err != nil {
return nil, nil, errors.Wrap(err, "could not create signed block")
}
if err = copyBlockData(u.b, sb); err != nil {
return nil, nil, errors.Wrap(err, "could not copy block data")
}
h, err := u.b.Block().Body().Execution()
if err != nil {
return nil, nil, errors.Wrap(err, "could not get execution")
}
if err = sb.SetExecution(h); err != nil {
return nil, nil, errors.Wrap(err, "could not set execution")
}
payload, blobsBundle, err := u.builder.SubmitBlindedBlock(ctx, sb)
if err != nil {
return nil, nil, errors.Wrap(err, "could not submit blinded block")
}
headerRoot, err := h.HashTreeRoot()
if err != nil {
return nil, nil, errors.Wrap(err, "could not get header root")
}
payloadRoot, err := payload.HashTreeRoot()
if err != nil {
return nil, nil, errors.Wrap(err, "could not get payload root")
}
if headerRoot != payloadRoot {
return nil, nil, fmt.Errorf("header and payload root do not match, consider disconnect from relay to avoid further issues, "+
"%#x != %#x", headerRoot, payloadRoot)
}
bb, err := u.protoBlock()
if err != nil {
return nil, nil, errors.Wrap(err, "could not get proto block")
}
wb, err := consensusblocks.NewSignedBeaconBlock(bb)
if err != nil {
return nil, nil, errors.Wrap(err, "could not create signed block")
}
if err = copyBlockData(sb, wb); err != nil {
return nil, nil, errors.Wrap(err, "could not copy block data")
}
if err = wb.SetExecution(payload); err != nil {
return nil, nil, errors.Wrap(err, "could not set execution")
}
txs, err := payload.Transactions()
if err != nil {
return nil, nil, errors.Wrap(err, "could not get transactions from payload")
}
if wb.Version() >= version.Deneb && blobsBundle != nil {
log.WithField("blobCount", len(blobsBundle.Blobs))
}
log.WithFields(logrus.Fields{
"blockHash": fmt.Sprintf("%#x", h.BlockHash()),
"feeRecipient": fmt.Sprintf("%#x", h.FeeRecipient()),
"gasUsed": h.GasUsed(),
"slot": u.b.Block().Slot(),
"txs": len(txs),
}).Info("Retrieved full payload from builder")
sidecars, err := unblindBlobsSidecars(u.b, blobsBundle)
if err != nil {
return nil, nil, errors.Wrap(err, "could not unblind blobs sidecars")
}
return wb, sidecars, nil
}
func unblindBlobsSidecars(block interfaces.SignedBeaconBlock, bundle *enginev1.BlobsBundle) ([]*ethpb.BlobSidecar, error) {
if bundle == nil {
if block.Version() < version.Deneb || bundle == nil {
return nil, nil
}
header, err := block.Header()
@@ -168,98 +61,3 @@ func unblindBlobsSidecars(block interfaces.SignedBeaconBlock, bundle *enginev1.B
}
return sidecars, nil
}
func copyBlockData(src interfaces.SignedBeaconBlock, dst interfaces.SignedBeaconBlock) error {
agg, err := src.Block().Body().SyncAggregate()
if err != nil {
return errors.Wrap(err, "could not get sync aggregate")
}
parentRoot := src.Block().ParentRoot()
stateRoot := src.Block().StateRoot()
randaoReveal := src.Block().Body().RandaoReveal()
graffiti := src.Block().Body().Graffiti()
sig := src.Signature()
blsToExecChanges, err := src.Block().Body().BLSToExecutionChanges()
if err != nil && !errors.Is(err, consensus_types.ErrUnsupportedField) {
return errors.Wrap(err, "could not get bls to execution changes")
}
kzgCommitments, err := src.Block().Body().BlobKzgCommitments()
if err != nil && !errors.Is(err, consensus_types.ErrUnsupportedField) {
return errors.Wrap(err, "could not get blob kzg commitments")
}
dst.SetSlot(src.Block().Slot())
dst.SetProposerIndex(src.Block().ProposerIndex())
dst.SetParentRoot(parentRoot[:])
dst.SetStateRoot(stateRoot[:])
dst.SetRandaoReveal(randaoReveal[:])
dst.SetEth1Data(src.Block().Body().Eth1Data())
dst.SetGraffiti(graffiti[:])
dst.SetProposerSlashings(src.Block().Body().ProposerSlashings())
dst.SetAttesterSlashings(src.Block().Body().AttesterSlashings())
dst.SetAttestations(src.Block().Body().Attestations())
dst.SetDeposits(src.Block().Body().Deposits())
dst.SetVoluntaryExits(src.Block().Body().VoluntaryExits())
if err = dst.SetSyncAggregate(agg); err != nil {
return errors.Wrap(err, "could not set sync aggregate")
}
dst.SetSignature(sig[:])
if err = dst.SetBLSToExecutionChanges(blsToExecChanges); err != nil && !errors.Is(err, consensus_types.ErrUnsupportedField) {
return errors.Wrap(err, "could not set bls to execution changes")
}
if err = dst.SetBlobKzgCommitments(kzgCommitments); err != nil && !errors.Is(err, consensus_types.ErrUnsupportedField) {
return errors.Wrap(err, "could not set bls to execution changes")
}
return nil
}
func (u *unblinder) blindedProtoBlock() (proto.Message, error) {
switch u.b.Version() {
case version.Bellatrix:
return &ethpb.SignedBlindedBeaconBlockBellatrix{
Block: &ethpb.BlindedBeaconBlockBellatrix{
Body: &ethpb.BlindedBeaconBlockBodyBellatrix{},
},
}, nil
case version.Capella:
return &ethpb.SignedBlindedBeaconBlockCapella{
Block: &ethpb.BlindedBeaconBlockCapella{
Body: &ethpb.BlindedBeaconBlockBodyCapella{},
},
}, nil
case version.Deneb:
return &ethpb.SignedBlindedBeaconBlockDeneb{
Message: &ethpb.BlindedBeaconBlockDeneb{
Body: &ethpb.BlindedBeaconBlockBodyDeneb{},
},
}, nil
default:
return nil, fmt.Errorf("invalid version %s", version.String(u.b.Version()))
}
}
func (u *unblinder) protoBlock() (proto.Message, error) {
switch u.b.Version() {
case version.Bellatrix:
return &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{},
},
}, nil
case version.Capella:
return &ethpb.SignedBeaconBlockCapella{
Block: &ethpb.BeaconBlockCapella{
Body: &ethpb.BeaconBlockBodyCapella{},
},
}, nil
case version.Deneb:
return &ethpb.SignedBeaconBlockDeneb{
Block: &ethpb.BeaconBlockDeneb{
Body: &ethpb.BeaconBlockBodyDeneb{},
},
}, nil
default:
return nil, fmt.Errorf("invalid version %s", version.String(u.b.Version()))
}
}


@@ -1,409 +0,0 @@
package validator
import (
"context"
"testing"
"github.com/pkg/errors"
builderTest "github.com/prysmaticlabs/prysm/v4/beacon-chain/builder/testing"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/encoding/ssz"
v1 "github.com/prysmaticlabs/prysm/v4/proto/engine/v1"
eth "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/prysmaticlabs/prysm/v4/testing/util"
)
func Test_unblindBuilderBlock(t *testing.T) {
p := emptyPayload()
p.GasLimit = 123
pCapella := emptyPayloadCapella()
pCapella.GasLimit = 123
pDeneb := emptyPayloadDeneb()
pDeneb.GasLimit = 123
pDeneb.ExcessBlobGas = 456
pDeneb.BlobGasUsed = 789
denebblk, denebsidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 0, fieldparams.MaxBlobsPerBlock)
denebCommitments, err := denebblk.Block().Body().BlobKzgCommitments()
require.NoError(t, err)
execution, err := denebblk.Block().Body().Execution()
require.NoError(t, err)
denebPayload, err := execution.PbDeneb()
require.NoError(t, err)
blobs := make([][]byte, len(denebsidecars))
for i, sidecar := range denebsidecars {
blobs[i] = sidecar.BlobSidecar.Blob
}
tests := []struct {
name string
blk interfaces.SignedBeaconBlock
mock *builderTest.MockBuilderService
err string
returnedBlk interfaces.SignedBeaconBlock
returnedBlobSidecars []blocks.ROBlob
}{
{
name: "old block version",
blk: func() interfaces.SignedBeaconBlock {
wb, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
return wb
}(),
returnedBlk: func() interfaces.SignedBeaconBlock {
wb, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
return wb
}(),
},
{
name: "blinded without configured builder",
blk: func() interfaces.SignedBeaconBlock {
wb, err := blocks.NewSignedBeaconBlock(util.NewBlindedBeaconBlockBellatrix())
require.NoError(t, err)
return wb
}(),
mock: &builderTest.MockBuilderService{
HasConfigured: false,
},
err: "builder not configured",
},
{
name: "non-blinded without configured builder",
blk: func() interfaces.SignedBeaconBlock {
b := util.NewBeaconBlockBellatrix()
b.Block.Slot = 1
b.Block.ProposerIndex = 2
b.Block.Body.ExecutionPayload = &v1.ExecutionPayload{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
GasLimit: 123,
}
wb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
return wb
}(),
mock: &builderTest.MockBuilderService{
HasConfigured: false,
Payload: p,
},
returnedBlk: func() interfaces.SignedBeaconBlock {
b := util.NewBeaconBlockBellatrix()
b.Block.Slot = 1
b.Block.ProposerIndex = 2
b.Block.Body.ExecutionPayload = p
wb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
return wb
}(),
},
{
name: "submit blind block error",
blk: func() interfaces.SignedBeaconBlock {
b := util.NewBlindedBeaconBlockBellatrix()
b.Block.Slot = 1
b.Block.ProposerIndex = 2
wb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
return wb
}(),
mock: &builderTest.MockBuilderService{
Payload: &v1.ExecutionPayload{},
HasConfigured: true,
ErrSubmitBlindedBlock: errors.New("can't submit"),
},
err: "can't submit",
},
{
name: "head and payload root mismatch",
blk: func() interfaces.SignedBeaconBlock {
b := util.NewBlindedBeaconBlockBellatrix()
b.Block.Slot = 1
b.Block.ProposerIndex = 2
wb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
return wb
}(),
mock: &builderTest.MockBuilderService{
HasConfigured: true,
Payload: p,
},
returnedBlk: func() interfaces.SignedBeaconBlock {
b := util.NewBeaconBlockBellatrix()
b.Block.Slot = 1
b.Block.ProposerIndex = 2
b.Block.Body.ExecutionPayload = p
wb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
return wb
}(),
err: "header and payload root do not match",
},
{
name: "can get payload Bellatrix",
blk: func() interfaces.SignedBeaconBlock {
b := util.NewBlindedBeaconBlockBellatrix()
b.Block.Slot = 1
b.Block.ProposerIndex = 2
txRoot, err := ssz.TransactionsRoot([][]byte{})
require.NoError(t, err)
b.Block.Body.ExecutionPayloadHeader = &v1.ExecutionPayloadHeader{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: txRoot[:],
GasLimit: 123,
}
wb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
return wb
}(),
mock: &builderTest.MockBuilderService{
HasConfigured: true,
Payload: p,
},
returnedBlk: func() interfaces.SignedBeaconBlock {
b := util.NewBeaconBlockBellatrix()
b.Block.Slot = 1
b.Block.ProposerIndex = 2
b.Block.Body.ExecutionPayload = p
wb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
return wb
}(),
},
{
name: "can get payload Capella",
blk: func() interfaces.SignedBeaconBlock {
b := util.NewBlindedBeaconBlockCapella()
b.Block.Slot = 1
b.Block.ProposerIndex = 2
b.Block.Body.BlsToExecutionChanges = []*eth.SignedBLSToExecutionChange{
{
Message: &eth.BLSToExecutionChange{
ValidatorIndex: 123,
FromBlsPubkey: []byte{'a'},
ToExecutionAddress: []byte{'a'},
},
Signature: []byte("sig123"),
},
{
Message: &eth.BLSToExecutionChange{
ValidatorIndex: 456,
FromBlsPubkey: []byte{'b'},
ToExecutionAddress: []byte{'b'},
},
Signature: []byte("sig456"),
},
}
txRoot, err := ssz.TransactionsRoot([][]byte{})
require.NoError(t, err)
withdrawalsRoot, err := ssz.WithdrawalSliceRoot([]*v1.Withdrawal{}, fieldparams.MaxWithdrawalsPerPayload)
require.NoError(t, err)
b.Block.Body.ExecutionPayloadHeader = &v1.ExecutionPayloadHeaderCapella{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: txRoot[:],
WithdrawalsRoot: withdrawalsRoot[:],
GasLimit: 123,
}
wb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
return wb
}(),
mock: &builderTest.MockBuilderService{
HasConfigured: true,
PayloadCapella: pCapella,
},
returnedBlk: func() interfaces.SignedBeaconBlock {
b := util.NewBeaconBlockCapella()
b.Block.Slot = 1
b.Block.ProposerIndex = 2
b.Block.Body.BlsToExecutionChanges = []*eth.SignedBLSToExecutionChange{
{
Message: &eth.BLSToExecutionChange{
ValidatorIndex: 123,
FromBlsPubkey: []byte{'a'},
ToExecutionAddress: []byte{'a'},
},
Signature: []byte("sig123"),
},
{
Message: &eth.BLSToExecutionChange{
ValidatorIndex: 456,
FromBlsPubkey: []byte{'b'},
ToExecutionAddress: []byte{'b'},
},
Signature: []byte("sig456"),
},
}
b.Block.Body.ExecutionPayload = pCapella
wb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
return wb
}(),
},
{
name: "can get payload and blobs Deneb",
blk: func() interfaces.SignedBeaconBlock {
blindedBlock, err := denebblk.ToBlinded()
require.NoError(t, err)
b, err := blindedBlock.PbBlindedDenebBlock()
require.NoError(t, err)
wb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
return wb
}(),
mock: &builderTest.MockBuilderService{
HasConfigured: true,
PayloadDeneb: denebPayload,
BlobBundle: &v1.BlobsBundle{
KzgCommitments: denebCommitments,
Proofs: [][]byte{{'d', 0}, {'d', 1}, {'d', 2}, {'d', 3}, {'d', 4}, {'d', 5}},
Blobs: blobs,
},
},
returnedBlk: func() interfaces.SignedBeaconBlock {
b, err := denebblk.PbDenebBlock()
require.NoError(t, err)
wb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
return wb
}(),
returnedBlobSidecars: denebsidecars,
},
{
name: "deneb mismatch commitments count",
blk: func() interfaces.SignedBeaconBlock {
blindedBlock, err := denebblk.ToBlinded()
require.NoError(t, err)
b, err := blindedBlock.PbBlindedDenebBlock()
require.NoError(t, err)
wb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
return wb
}(),
mock: &builderTest.MockBuilderService{
HasConfigured: true,
PayloadDeneb: denebPayload,
BlobBundle: &v1.BlobsBundle{
KzgCommitments: [][]byte{{'c', 0}, {'c', 1}, {'c', 2}, {'c', 3}, {'c', 4}},
Proofs: [][]byte{{'d', 0}, {'d', 1}, {'d', 2}, {'d', 3}, {'d', 4}, {'d', 5}},
Blobs: blobs,
},
},
err: "mismatch commitments count",
},
{
name: "deneb mismatch proofs count",
blk: func() interfaces.SignedBeaconBlock {
blindedBlock, err := denebblk.ToBlinded()
require.NoError(t, err)
b, err := blindedBlock.PbBlindedDenebBlock()
require.NoError(t, err)
wb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
return wb
}(),
mock: &builderTest.MockBuilderService{
HasConfigured: true,
PayloadDeneb: denebPayload,
BlobBundle: &v1.BlobsBundle{
KzgCommitments: [][]byte{{'c', 0}, {'c', 1}, {'c', 2}, {'c', 3}, {'c', 4}, {'c', 5}},
Proofs: [][]byte{{'d', 0}, {'d', 1}, {'d', 2}, {'d', 3}, {'d', 4}},
Blobs: blobs,
},
},
err: "mismatch proofs count",
},
{
name: "deneb different count commitments bundle vs block",
blk: func() interfaces.SignedBeaconBlock {
blindedBlock, err := denebblk.ToBlinded()
require.NoError(t, err)
b, err := blindedBlock.PbBlindedDenebBlock()
require.NoError(t, err)
wb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
return wb
}(),
mock: &builderTest.MockBuilderService{
HasConfigured: true,
PayloadDeneb: denebPayload,
BlobBundle: &v1.BlobsBundle{
KzgCommitments: [][]byte{{'c', 0}, {'c', 1}, {'c', 2}, {'c', 3}, {'c', 4}},
Proofs: [][]byte{{'d', 0}, {'d', 1}, {'d', 2}, {'d', 3}, {'d', 4}},
Blobs: blobs[:5],
},
},
err: "commitment count doesn't match block",
},
{
name: "deneb different value commitments bundle vs block",
blk: func() interfaces.SignedBeaconBlock {
blindedBlock, err := denebblk.ToBlinded()
require.NoError(t, err)
b, err := blindedBlock.PbBlindedDenebBlock()
require.NoError(t, err)
wb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
return wb
}(),
mock: &builderTest.MockBuilderService{
HasConfigured: true,
PayloadDeneb: denebPayload,
BlobBundle: &v1.BlobsBundle{
KzgCommitments: [][]byte{{'c', 0}, {'c', 1}, {'c', 2}, {'c', 3}, {'c', 4}, {'c', 5}},
Proofs: [][]byte{{'d', 0}, {'d', 1}, {'d', 2}, {'d', 3}, {'d', 4}, {'d', 5}},
Blobs: blobs,
},
},
err: "commitment value doesn't match block",
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
unblinder, err := newUnblinder(tc.blk, tc.mock)
require.NoError(t, err)
gotBlk, gotBlobs, err := unblinder.unblindBuilderBlock(context.Background())
if tc.err != "" {
require.ErrorContains(t, tc.err, err)
} else {
require.NoError(t, err)
require.DeepEqual(t, tc.returnedBlk, gotBlk)
if tc.returnedBlobSidecars != nil {
blobs := make([]blocks.ROBlob, len(gotBlobs))
for i := range gotBlobs {
blobs[i], err = blocks.NewROBlob(gotBlobs[i])
require.NoError(t, err)
// TODO: update this check when generate function is updated for inclusion proofs
// require.DeepEqual(t, tc.returnedBlobSidecars[i].CommitmentInclusionProof, blobs[i].CommitmentInclusionProof)
require.DeepEqual(t, tc.returnedBlobSidecars[i].SignedBlockHeader, blobs[i].SignedBlockHeader)
require.Equal(t, len(tc.returnedBlobSidecars[i].Blob), len(blobs[i].Blob))
}
}
}
})
}
}

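The deleted unblinder test table above still documents the consistency checks the Deneb unblinding path enforces between a block's KZG commitments and the builder's blobs bundle, and the same error strings ("mismatch commitments count", "commitment value doesn't match block", and so on) reappear in the proposer tests earlier in this diff. Below is a small, self-contained sketch of those checks, inferred from the error strings and using local stand-in types rather than Prysm's interfaces.

// Stand-in for a builder blobs bundle; the real type is enginev1.BlobsBundle.
package main

import (
	"bytes"
	"errors"
	"fmt"
)

type blobsBundle struct {
	KzgCommitments [][]byte
	Proofs         [][]byte
	Blobs          [][]byte
}

// checkBundleAgainstBlock validates the bundle's internal counts and then the
// commitment count and values against the block's own KZG commitments.
func checkBundleAgainstBlock(blockCommitments [][]byte, bundle *blobsBundle) error {
	if bundle == nil {
		return nil // nothing to unblind
	}
	if len(bundle.KzgCommitments) != len(bundle.Blobs) {
		return errors.New("mismatch commitments count")
	}
	if len(bundle.Proofs) != len(bundle.Blobs) {
		return errors.New("mismatch proofs count")
	}
	if len(bundle.KzgCommitments) != len(blockCommitments) {
		return errors.New("commitment count doesn't match block")
	}
	for i := range blockCommitments {
		if !bytes.Equal(blockCommitments[i], bundle.KzgCommitments[i]) {
			return errors.New("commitment value doesn't match block")
		}
	}
	return nil
}

func main() {
	block := [][]byte{{0x01}, {0x02}}
	ok := &blobsBundle{KzgCommitments: [][]byte{{0x01}, {0x02}}, Proofs: [][]byte{{0xaa}, {0xbb}}, Blobs: [][]byte{{0x10}, {0x20}}}
	bad := &blobsBundle{KzgCommitments: [][]byte{{0x01}, {0xff}}, Proofs: [][]byte{{0xaa}, {0xbb}}, Blobs: [][]byte{{0x10}, {0x20}}}
	fmt.Println(checkBundleAgainstBlock(block, ok))  // <nil>
	fmt.Println(checkBundleAgainstBlock(block, bad)) // commitment value doesn't match block
}

Checking the bundle's internal counts first, then the commitment count and values against the block, matches the order in which the deleted table's failure cases trigger their respective errors.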

@@ -128,12 +128,13 @@ type Config struct {
StateGen *stategen.State
MaxMsgSize int
ExecutionEngineCaller execution.EngineCaller
ProposerIdsCache *cache.ProposerPayloadIDsCache
OptimisticModeFetcher blockchain.OptimisticModeFetcher
BlockBuilder builder.BlockBuilder
Router *mux.Router
ClockWaiter startup.ClockWaiter
BlobStorage *filesystem.BlobStorage
TrackedValidatorsCache *cache.TrackedValidatorsCache
PayloadIDCache *cache.PayloadIDCache
}
// NewService instantiates a new RPC service instance that will
@@ -401,11 +402,12 @@ func (s *Service) Start() {
ReplayerBuilder: ch,
ExecutionEngineCaller: s.cfg.ExecutionEngineCaller,
BeaconDB: s.cfg.BeaconDB,
ProposerSlotIndexCache: s.cfg.ProposerIdsCache,
BlockBuilder: s.cfg.BlockBuilder,
BLSChangesPool: s.cfg.BLSChangesPool,
ClockWaiter: s.cfg.ClockWaiter,
CoreService: coreService,
TrackedValidatorsCache: s.cfg.TrackedValidatorsCache,
PayloadIDCache: s.cfg.PayloadIDCache,
}
s.initializeValidatorServerRoutes(&validator.Server{
HeadFetcher: s.cfg.HeadFetcher,
@@ -418,11 +420,12 @@ func (s *Service) Start() {
V1Alpha1Server: validatorServer,
Stater: stater,
SyncCommitteePool: s.cfg.SyncCommitteeObjectPool,
ProposerSlotIndexCache: s.cfg.ProposerIdsCache,
ChainInfoFetcher: s.cfg.ChainInfoFetcher,
BeaconDB: s.cfg.BeaconDB,
BlockBuilder: s.cfg.BlockBuilder,
OperationNotifier: s.cfg.OperationNotifier,
TrackedValidatorsCache: s.cfg.TrackedValidatorsCache,
PayloadIDCache: s.cfg.PayloadIDCache,
CoreService: coreService,
BlockRewardFetcher: rewardFetcher,
})


@@ -164,7 +164,7 @@ func (b *BeaconState) ToProtoUnsafe() interface{} {
PreviousJustifiedCheckpoint: b.previousJustifiedCheckpoint,
CurrentJustifiedCheckpoint: b.currentJustifiedCheckpoint,
FinalizedCheckpoint: b.finalizedCheckpoint,
InactivityScores: b.inactivityScores,
InactivityScores: b.inactivityScoresVal(),
CurrentSyncCommittee: b.currentSyncCommittee,
NextSyncCommittee: b.nextSyncCommittee,
LatestExecutionPayloadHeader: b.latestExecutionPayloadHeaderDeneb,


@@ -7,6 +7,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
statenative "github.com/prysmaticlabs/prysm/v4/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
@@ -733,3 +734,41 @@ func TestBeaconState_ValidatorMutation_Bellatrix(t *testing.T) {
assert.Equal(t, rt, rt2)
}
func TestBeaconState_InitializeInactivityScoresCorrectly_Deneb(t *testing.T) {
resetCfg := features.InitWithReset(&features.Flags{
EnableExperimentalState: true,
})
defer resetCfg()
st, _ := util.DeterministicGenesisStateDeneb(t, 200)
_, err := st.HashTreeRoot(context.Background())
require.NoError(t, err)
ic, err := st.InactivityScores()
require.NoError(t, err)
ic[10] = 10000
ic[100] = 1000
err = st.SetInactivityScores(ic)
require.NoError(t, err)
_, err = st.HashTreeRoot(context.Background())
require.NoError(t, err)
ic[150] = 2390239
err = st.SetInactivityScores(ic)
require.NoError(t, err)
rt, err := st.HashTreeRoot(context.Background())
require.NoError(t, err)
copiedSt, ok := st.ToProtoUnsafe().(*ethpb.BeaconStateDeneb)
if !ok {
t.Error("not ok")
}
newSt, err := statenative.InitializeFromProtoUnsafeDeneb(copiedSt)
require.NoError(t, err)
newRt, err := newSt.HashTreeRoot(context.Background())
require.NoError(t, err)
require.DeepSSZEqual(t, rt, newRt)
}

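Taken together, the ToProtoUnsafe change (inactivityScores replaced by inactivityScoresVal()) and the EnableExperimentalState round-trip test above point at the same issue: under the experimental state, a field like the inactivity scores is not a plain slice owned by a single state, so the proto conversion has to go through an accessor that materializes this state's view. The following is a rough, self-contained sketch of that idea, assuming a shared-base-plus-overrides layout similar in spirit to Prysm's multi-value slices; the type and method names are hypothetical.

// Hypothetical miniature of a multi-value slice: one shared backing slice plus
// per-state overrides, so copies of a state stay cheap until they diverge.
package main

import "fmt"

type multiValueScores struct {
	shared    []uint64
	overrides map[int]map[int]uint64 // state id -> slice index -> value
}

func (m *multiValueScores) set(stateID, i int, v uint64) {
	if m.overrides[stateID] == nil {
		m.overrides[stateID] = map[int]uint64{}
	}
	m.overrides[stateID][i] = v
}

// value materializes the slice as one particular state instance sees it, which
// is what an accessor like inactivityScoresVal() must do before the data can be
// copied into a flat proto message.
func (m *multiValueScores) value(stateID int) []uint64 {
	out := make([]uint64, len(m.shared))
	copy(out, m.shared)
	for i, v := range m.overrides[stateID] {
		out[i] = v
	}
	return out
}

func main() {
	scores := &multiValueScores{shared: make([]uint64, 4), overrides: map[int]map[int]uint64{}}
	scores.set(1, 2, 10000)      // state 1 diverges at index 2
	fmt.Println(scores.value(0)) // [0 0 0 0]: another state still sees the shared values
	fmt.Println(scores.value(1)) // [0 0 10000 0]: the accessor reflects this state's own view
}

Returning the raw field would expose the shared representation rather than this state's view, which would explain why the test compares hash tree roots before and after the ToProtoUnsafe/InitializeFromProtoUnsafeDeneb round trip.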

@@ -10,6 +10,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
enginev1 "github.com/prysmaticlabs/prysm/v4/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/testing/require"
)
@@ -98,7 +99,7 @@ func TestExtractBlockDataType(t *testing.T) {
chain: &mock.ChainService{ValidatorsRoot: [32]byte{}},
},
want: func() interfaces.ReadOnlySignedBeaconBlock {
wsb, err := blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockBellatrix{Block: &ethpb.BeaconBlockBellatrix{Body: &ethpb.BeaconBlockBodyBellatrix{}}})
wsb, err := blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockBellatrix{Block: &ethpb.BeaconBlockBellatrix{Body: &ethpb.BeaconBlockBodyBellatrix{ExecutionPayload: &enginev1.ExecutionPayload{}}}})
require.NoError(t, err)
return wsb
}(),


@@ -14,7 +14,7 @@ var log = logrus.WithField("prefix", "db")
var Commands = &cli.Command{
Name: "db",
Category: "db",
Usage: "defines commands for interacting with the Ethereum Beacon Node database",
Usage: "Defines commands for interacting with the Ethereum Beacon Node database",
Subcommands: []*cli.Command{
{
Name: "restore",


@@ -80,7 +80,7 @@ var (
// MonitoringPortFlag defines the http port used to serve prometheus metrics.
MonitoringPortFlag = &cli.IntFlag{
Name: "monitoring-port",
Usage: "Port used to listening and respond metrics for prometheus.",
Usage: "Port used to listening and respond metrics for Prometheus.",
Value: 8080,
}
// CertFlag defines a flag for the node's TLS certificate.


@@ -15,12 +15,12 @@ var (
// MinimalConfigFlag declares to use the minimal config for running Ethereum consensus.
MinimalConfigFlag = &cli.BoolFlag{
Name: "minimal-config",
Usage: "Use minimal config with parameters as defined in the spec.",
Usage: "Uses minimal config with parameters as defined in the spec.",
}
// E2EConfigFlag declares to use a testing specific config for running Ethereum consensus in end-to-end testing.
E2EConfigFlag = &cli.BoolFlag{
Name: "e2e-config",
Usage: "Use the E2E testing config, only for use within end-to-end testing.",
Usage: "Enables the E2E testing config, only for use within end-to-end testing.",
}
// RPCMaxPageSizeFlag defines the maximum numbers per page returned in RPC responses from this
// beacon node (default: 500).
@@ -31,34 +31,35 @@ var (
// VerbosityFlag defines the logrus configuration.
VerbosityFlag = &cli.StringFlag{
Name: "verbosity",
Usage: "Logging verbosity (trace, debug, info=default, warn, error, fatal, panic)",
Usage: "Logging verbosity. (trace, debug, info, warn, error, fatal, panic)",
Value: "info",
}
// DataDirFlag defines a path on disk where Prysm databases are stored.
DataDirFlag = &cli.StringFlag{
Name: "datadir",
Usage: "Data directory for the databases",
Usage: "Data directory for the databases.",
Value: DefaultDataDir(),
}
// EnableBackupWebhookFlag for users to trigger db backups via an HTTP webhook.
EnableBackupWebhookFlag = &cli.BoolFlag{
Name: "enable-db-backup-webhook",
Usage: "Serve HTTP handler to initiate database backups. The handler is served on the monitoring port at path /db/backup.",
Name: "enable-db-backup-webhook",
Usage: `Serves HTTP handler to initiate database backups.
The handler is served on the monitoring port at path /db/backup.`,
}
// BackupWebhookOutputDir to customize the output directory for db backups.
BackupWebhookOutputDir = &cli.StringFlag{
Name: "db-backup-output-dir",
Usage: "Output directory for db backups",
Usage: "Output directory for db backups.",
}
// EnableTracingFlag defines a flag to enable p2p message tracing.
EnableTracingFlag = &cli.BoolFlag{
Name: "enable-tracing",
Usage: "Enable request tracing.",
Usage: "Enables request tracing.",
}
// TracingProcessNameFlag defines a flag to specify a process name.
TracingProcessNameFlag = &cli.StringFlag{
Name: "tracing-process-name",
Usage: "The name to apply to tracing tag \"process_name\"",
Usage: "Name to apply to tracing tag `process_name`.",
}
// TracingEndpointFlag flag defines the http endpoint for serving traces to Jaeger.
TracingEndpointFlag = &cli.StringFlag{
@@ -70,7 +71,7 @@ var (
// messages are sampled for tracing.
TraceSampleFractionFlag = &cli.Float64Flag{
Name: "trace-sample-fraction",
Usage: "Indicate what fraction of p2p messages are sampled for tracing.",
Usage: "Indicates what fraction of p2p messages are sampled for tracing.",
Value: 0.20,
}
// MonitoringHostFlag defines the host used to serve prometheus metrics.
@@ -82,13 +83,13 @@ var (
// DisableMonitoringFlag defines a flag to disable the metrics collection.
DisableMonitoringFlag = &cli.BoolFlag{
Name: "disable-monitoring",
Usage: "Disable monitoring service.",
Usage: "Disables monitoring service.",
}
// NoDiscovery specifies whether we are running a local network and have no need for connecting
// to the bootstrap nodes in the cloud
NoDiscovery = &cli.BoolFlag{
Name: "no-discovery",
Usage: "Enable only local network p2p and do not connect to cloud bootstrap nodes.",
Usage: "Enable only local network p2p and do not connect to cloud bootstrap nodes",
}
// StaticPeers specifies a set of peers to connect to explicitly.
StaticPeers = &cli.StringSliceFlag{
@@ -185,17 +186,17 @@ var (
// ForceClearDB removes any previously stored data at the data directory.
ForceClearDB = &cli.BoolFlag{
Name: "force-clear-db",
Usage: "Clear any previously stored data at the data directory",
Usage: "Clears any previously stored data at the data directory.",
}
// ClearDB prompts user to see if they want to remove any previously stored data at the data directory.
ClearDB = &cli.BoolFlag{
Name: "clear-db",
Usage: "Prompt for clearing any previously stored data at the data directory",
Usage: "Prompt for clearing any previously stored data at the data directory.",
}
// LogFormat specifies the log output format.
LogFormat = &cli.StringFlag{
Name: "log-format",
Usage: "Specify log formatting. Supports: text, json, fluentd, journald.",
Usage: "Specifies log formatting. Supports: text, json, fluentd, journald.",
Value: "text",
}
// MaxGoroutines specifies the maximum amount of goroutines tolerated, before a status check fails.
@@ -207,7 +208,7 @@ var (
// LogFileName specifies the log output file name.
LogFileName = &cli.StringFlag{
Name: "log-file",
Usage: "Specify log file name, relative or absolute",
Usage: "Specifies log file name, relative or absolute.",
}
// EnableUPnPFlag specifies if UPnP should be enabled or not. The default value is false.
EnableUPnPFlag = &cli.BoolFlag{
@@ -217,27 +218,27 @@ var (
// ConfigFileFlag specifies the filepath to load flag values.
ConfigFileFlag = &cli.StringFlag{
Name: "config-file",
Usage: "The filepath to a yaml file with flag values",
Usage: "Filepath to a yaml file with flag values.",
}
// ChainConfigFileFlag specifies the filepath to load flag values.
ChainConfigFileFlag = &cli.StringFlag{
Name: "chain-config-file",
Usage: "The path to a YAML file with chain config values",
Usage: "Path to a YAML file with chain config values.",
}
// GrpcMaxCallRecvMsgSizeFlag defines the max call message size for GRPC
GrpcMaxCallRecvMsgSizeFlag = &cli.IntFlag{
Name: "grpc-max-msg-size",
Usage: "Integer to define max receive message call size. If serving a public gRPC server, " +
"set this to a more reasonable size to avoid resource exhaustion from large messages. " +
"Validators with as many as 10000 keys can be run with a max message size of less than " +
"50Mb. The default here is set to a very high value for local users. " +
"(default: 2147483647 (2Gi)).",
Usage: `Integer to define max receive message call size (in bytes).
If serving a public gRPC server, set this to a more reasonable size to avoid
resource exhaustion from large messages.
Validators with as many as 10000 keys can be run with a max message size of less than
50Mb. The default here is set to a very high value for local users.`,
Value: math.MaxInt32,
}
// AcceptTosFlag specifies user acceptance of ToS for non-interactive environments.
AcceptTosFlag = &cli.BoolFlag{
Name: "accept-terms-of-use",
Usage: "Accept Terms and Conditions (for non-interactive environments)",
Usage: "Accepts Terms and Conditions (for non-interactive environments).",
}
// ValidatorMonitorIndicesFlag specifies a list of validator indices to
// track for performance updates
@@ -261,7 +262,7 @@ var (
// ApiTimeoutFlag specifies the timeout value for API requests in seconds. A timeout of zero means no timeout.
ApiTimeoutFlag = &cli.IntFlag{
Name: "api-timeout",
Usage: "Specifies the timeout value for API requests in seconds",
Usage: "Specifies the timeout value for API requests in seconds.",
Value: 120,
}
// JwtOutputFileFlag specifies the JWT file path that gets generated into when invoked by generate-jwt-secret.


@@ -17,7 +17,7 @@ var log = logrus.WithField("prefix", "accounts")
var Commands = &cli.Command{
Name: "accounts",
Category: "accounts",
Usage: "defines commands for interacting with Ethereum validator accounts",
Usage: "Defines commands for interacting with Ethereum validator accounts.",
Subcommands: []*cli.Command{
{
Name: "delete",


@@ -14,7 +14,7 @@ var log = logrus.WithField("prefix", "db")
var Commands = &cli.Command{
Name: "db",
Category: "db",
Usage: "defines commands for interacting with the Prysm validator database",
Usage: "Defines commands for interacting with the Prysm validator database.",
Subcommands: []*cli.Command{
{
Name: "restore",


@@ -24,26 +24,26 @@ var (
// DisableAccountMetricsFlag disables the prometheus metrics for validator accounts, default false.
DisableAccountMetricsFlag = &cli.BoolFlag{
Name: "disable-account-metrics",
Usage: "Disable prometheus metrics for validator accounts. Operators with high volumes " +
"of validating keys may wish to disable granular prometheus metrics as it increases " +
"the data cardinality.",
Usage: `Disables prometheus metrics for validator accounts. Operators with high volumes
of validating keys may wish to disable granular prometheus metrics as it increases
the data cardinality.`,
}
// BeaconRPCProviderFlag defines a beacon node RPC endpoint.
BeaconRPCProviderFlag = &cli.StringFlag{
Name: "beacon-rpc-provider",
Usage: "Beacon node RPC provider endpoint",
Usage: "Beacon node RPC provider endpoint.",
Value: "127.0.0.1:4000",
}
// BeaconRPCGatewayProviderFlag defines a beacon node JSON-RPC endpoint.
BeaconRPCGatewayProviderFlag = &cli.StringFlag{
Name: "beacon-rpc-gateway-provider",
Usage: "Beacon node RPC gateway provider endpoint",
Usage: "Beacon node RPC gateway provider endpoint.",
Value: "127.0.0.1:3500",
}
// BeaconRESTApiProviderFlag defines a beacon node REST API endpoint.
BeaconRESTApiProviderFlag = &cli.StringFlag{
Name: "beacon-rest-api-provider",
Usage: "Beacon node REST API provider endpoint",
Usage: "Beacon node REST API provider endpoint.",
Value: "http://127.0.0.1:3500",
}
// CertFlag defines a flag for the node's TLS certificate.
@@ -54,25 +54,25 @@ var (
// EnableRPCFlag enables controlling the validator client via gRPC (without web UI).
EnableRPCFlag = &cli.BoolFlag{
Name: "rpc",
Usage: "Enables the RPC server for the validator client (without Web UI)",
Usage: "Enables the RPC server for the validator client (without Web UI).",
Value: false,
}
// RPCHost defines the host on which the RPC server should listen.
RPCHost = &cli.StringFlag{
Name: "rpc-host",
Usage: "Host on which the RPC server should listen",
Usage: "Host on which the RPC server should listen.",
Value: "127.0.0.1",
}
// RPCPort defines a validator client RPC port to open.
RPCPort = &cli.IntFlag{
Name: "rpc-port",
Usage: "RPC port exposed by a validator client",
Usage: "RPC port exposed by a validator client.",
Value: 7000,
}
// SlasherRPCProviderFlag defines a slasher node RPC endpoint.
SlasherRPCProviderFlag = &cli.StringFlag{
Name: "slasher-rpc-provider",
Usage: "Slasher node RPC provider endpoint",
Usage: "Slasher node RPC provider endpoint.",
Value: "127.0.0.1:4002",
}
// SlasherCertFlag defines a flag for the slasher node's TLS certificate.
@@ -83,85 +83,86 @@ var (
// DisablePenaltyRewardLogFlag defines the ability to not log reward/penalty information during deployment
DisablePenaltyRewardLogFlag = &cli.BoolFlag{
Name: "disable-rewards-penalties-logging",
Usage: "Disable reward/penalty logging during cluster deployment",
Usage: "Disables reward/penalty logging during cluster deployment.",
}
// GraffitiFlag defines the graffiti value included in proposed blocks
GraffitiFlag = &cli.StringFlag{
Name: "graffiti",
Usage: "String to include in proposed blocks",
Usage: "String to include in proposed blocks.",
}
// GrpcRetriesFlag defines the number of times to retry a failed gRPC request.
GrpcRetriesFlag = &cli.UintFlag{
Name: "grpc-retries",
Usage: "Number of attempts to retry gRPC requests",
Usage: "Number of attempts to retry gRPC requests.",
Value: 5,
}
// GrpcRetryDelayFlag defines the interval to retry a failed gRPC request.
GrpcRetryDelayFlag = &cli.DurationFlag{
Name: "grpc-retry-delay",
Usage: "The amount of time between gRPC retry requests.",
Usage: "Amount of time between gRPC retry requests.",
Value: 1 * time.Second,
}
// GrpcHeadersFlag defines a list of headers to send with all gRPC requests.
GrpcHeadersFlag = &cli.StringFlag{
Name: "grpc-headers",
Usage: "A comma separated list of key value pairs to pass as gRPC headers for all gRPC " +
"calls. Example: --grpc-headers=key=value",
Usage: `Comma separated list of key value pairs to pass as gRPC headers for all gRPC calls.
Example: --grpc-headers=key=value`,
}
// GRPCGatewayHost specifies a gRPC gateway host for the validator client.
GRPCGatewayHost = &cli.StringFlag{
Name: "grpc-gateway-host",
Usage: "The host on which the gateway server runs on",
Usage: "Host on which the gateway server runs on.",
Value: DefaultGatewayHost,
}
// GRPCGatewayPort enables a gRPC gateway to be exposed for the validator client.
GRPCGatewayPort = &cli.IntFlag{
Name: "grpc-gateway-port",
Usage: "Enable gRPC gateway for JSON requests",
Usage: "Enables gRPC gateway for JSON requests.",
Value: 7500,
}
// GPRCGatewayCorsDomain serves preflight requests when serving gRPC JSON gateway.
GPRCGatewayCorsDomain = &cli.StringFlag{
Name: "grpc-gateway-corsdomain",
Usage: "Comma separated list of domains from which to accept cross origin requests " +
"(browser enforced). This flag has no effect if not used with --grpc-gateway-port.",
Usage: `Comma separated list of domains from which to accept cross origin requests (browser enforced).
This flag has no effect if not used with --grpc-gateway-port.
`,
Value: "http://localhost:7500,http://127.0.0.1:7500,http://0.0.0.0:7500,http://localhost:4242,http://127.0.0.1:4242,http://localhost:4200,http://0.0.0.0:4242,http://127.0.0.1:4200,http://0.0.0.0:4200,http://localhost:3000,http://0.0.0.0:3000,http://127.0.0.1:3000"}
// MonitoringPortFlag defines the http port used to serve prometheus metrics.
MonitoringPortFlag = &cli.IntFlag{
Name: "monitoring-port",
Usage: "Port used to listening and respond metrics for prometheus.",
Usage: "Port used to listening and respond metrics for Prometheus.",
Value: 8081,
}
// WalletDirFlag defines the path to a wallet directory for Prysm accounts.
WalletDirFlag = &cli.StringFlag{
Name: "wallet-dir",
Usage: "Path to a wallet directory on-disk for Prysm validator accounts",
Usage: "Path to a wallet directory on-disk for Prysm validator accounts.",
Value: filepath.Join(DefaultValidatorDir(), WalletDefaultDirName),
}
// AccountPasswordFileFlag is path to a file containing a password for a validator account.
AccountPasswordFileFlag = &cli.StringFlag{
Name: "account-password-file",
Usage: "Path to a plain-text, .txt file containing a password for a validator account",
Usage: "Path to a plain-text, .txt file containing a password for a validator account.",
}
// WalletPasswordFileFlag is the path to a file containing your wallet password.
WalletPasswordFileFlag = &cli.StringFlag{
Name: "wallet-password-file",
Usage: "Path to a plain-text, .txt file containing your wallet password",
Usage: "Path to a plain-text, .txt file containing your wallet password.",
}
// Mnemonic25thWordFileFlag defines a path to a file containing a "25th" word mnemonic passphrase for advanced users.
Mnemonic25thWordFileFlag = &cli.StringFlag{
Name: "mnemonic-25th-word-file",
Usage: "(Advanced) Path to a plain-text, .txt file containing a 25th word passphrase for your mnemonic for HD wallets",
Usage: "(Advanced) Path to a plain-text, `.txt` file containing a 25th word passphrase for your mnemonic for HD wallets.",
}
// SkipMnemonic25thWordCheckFlag allows for skipping a check for mnemonic 25th word passphrases for HD wallets.
SkipMnemonic25thWordCheckFlag = &cli.StringFlag{
Name: "skip-mnemonic-25th-word-check",
Usage: "Allows for skipping the check for a mnemonic 25th word passphrase for HD wallets",
Usage: "Allows for skipping the check for a mnemonic 25th word passphrase for HD wallets.",
}
// ImportPrivateKeyFileFlag allows for directly importing a private key hex string as an account.
ImportPrivateKeyFileFlag = &cli.StringFlag{
Name: "import-private-key-file",
Usage: "Path to a plain-text, .txt file containing a hex string representation of a private key to import",
Usage: "Path to a plain-text, .txt file containing a hex string representation of a private key to import.",
}
// MnemonicFileFlag is used to enter a file to mnemonic phrase for new wallet creation, non-interactively.
MnemonicFileFlag = &cli.StringFlag{
@@ -171,113 +172,113 @@ var (
// MnemonicLanguageFlag is used to specify the language of the mnemonic.
MnemonicLanguageFlag = &cli.StringFlag{
Name: "mnemonic-language",
Usage: "Allows specifying mnemonic language. Supported languages are: english|chinese_traditional|chinese_simplified|czech|french|japanese|korean|italian|spanish",
Usage: "Allows specifying mnemonic language. Supported languages are: english|chinese_traditional|chinese_simplified|czech|french|japanese|korean|italian|spanish.",
}
// ShowDepositDataFlag for accounts.
ShowDepositDataFlag = &cli.BoolFlag{
Name: "show-deposit-data",
Usage: "Display raw eth1 tx deposit data for validator accounts",
Usage: "Displays raw eth1 tx deposit data for validator accounts.",
Value: false,
}
// ShowPrivateKeysFlag for accounts.
ShowPrivateKeysFlag = &cli.BoolFlag{
Name: "show-private-keys",
Usage: "Display the private keys for validator accounts",
Usage: "Displays the private keys for validator accounts.",
Value: false,
}
// ListValidatorIndices for accounts.
ListValidatorIndices = &cli.BoolFlag{
Name: "list-validator-indices",
Usage: "List validator indices",
Usage: "Lists validator indices.",
Value: false,
}
// NumAccountsFlag defines the amount of accounts to generate for derived wallets.
NumAccountsFlag = &cli.IntFlag{
Name: "num-accounts",
Usage: "Number of accounts to generate for derived wallets",
Usage: "Number of accounts to generate for derived wallets.",
Value: 1,
}
// DeletePublicKeysFlag defines a comma-separated list of hex string public keys
// for accounts which a user desires to delete from their wallet.
DeletePublicKeysFlag = &cli.StringFlag{
Name: "delete-public-keys",
Usage: "Comma-separated list of public key hex strings to specify which validator accounts to delete",
Usage: "Comma separated list of public key hex strings to specify which validator accounts to delete.",
Value: "",
}
// BackupPublicKeysFlag defines a comma-separated list of hex string public keys
// for accounts which a user desires to backup from their wallet.
BackupPublicKeysFlag = &cli.StringFlag{
Name: "backup-public-keys",
Usage: "Comma-separated list of public key hex strings to specify which validator accounts to backup",
Usage: "Comma separated list of public key hex strings to specify which validator accounts to backup.",
Value: "",
}
// VoluntaryExitPublicKeysFlag defines a comma-separated list of hex string public keys
// for accounts on which a user wants to perform a voluntary exit.
VoluntaryExitPublicKeysFlag = &cli.StringFlag{
Name: "public-keys",
Usage: "Comma-separated list of public key hex strings to specify on which validator accounts to perform " +
"a voluntary exit",
Usage: "Comma separated list of public key hex strings to specify on which validator accounts to perform " +
"a voluntary exit.",
Value: "",
}
// ExitAllFlag allows stakers to select all validating keys for exit. This will still require the staker
// to confirm a userprompt for this action given it is a dangerous one.
ExitAllFlag = &cli.BoolFlag{
Name: "exit-all",
Usage: "Exit all validators. This will still require the staker to confirm a userprompt for the action",
Usage: "Exits all validators. This will still require the staker to confirm a userprompt for the action.",
}
// ForceExitFlag to exit without displaying the confirmation prompt.
ForceExitFlag = &cli.BoolFlag{
Name: "force-exit",
Usage: "Exit without displaying the confirmation prompt",
Usage: "Exits without displaying the confirmation prompt.",
}
VoluntaryExitJSONOutputPath = &cli.StringFlag{
Name: "exit-json-output-dir",
Usage: "The output directory to write voluntary exits as individual unencrypted JSON " +
Usage: "Output directory to write voluntary exits as individual unencrypted JSON " +
"files. If this flag is provided, voluntary exits will be written to the provided " +
"directory and will not be broadcasted.",
}
// BackupPasswordFile for encrypting accounts a user wishes to back up.
BackupPasswordFile = &cli.StringFlag{
Name: "backup-password-file",
Usage: "Path to a plain-text, .txt file containing the desired password for your backed up accounts",
Usage: "Path to a plain-text, .txt file containing the desired password for your backed up accounts.",
Value: "",
}
// BackupDirFlag defines the path for the zip backup of the wallet will be created.
BackupDirFlag = &cli.StringFlag{
Name: "backup-dir",
Usage: "Path to a directory where accounts will be backed up into a zip file",
Usage: "Path to a directory where accounts will be backed up into a zip file.",
Value: DefaultValidatorDir(),
}
// SlashingProtectionJSONFileFlag is used to enter the file path of the slashing protection JSON.
SlashingProtectionJSONFileFlag = &cli.StringFlag{
Name: "slashing-protection-json-file",
Usage: "Path to an EIP-3076 compliant JSON file containing a user's slashing protection history",
Usage: "Path to an EIP-3076 compliant JSON file containing a user's slashing protection history.",
}
// KeysDirFlag defines the path for a directory where keystores to be imported at stored.
KeysDirFlag = &cli.StringFlag{
Name: "keys-dir",
Usage: "Path to a directory where keystores to be imported are stored",
Usage: "Path to a directory where keystores to be imported are stored.",
}
// RemoteSignerCertPathFlag defines the path to a client.crt file for a wallet to connect to
// a secure signer via TLS and gRPC.
RemoteSignerCertPathFlag = &cli.StringFlag{
Name: "remote-signer-crt-path",
Usage: "/path/to/client.crt for establishing a secure, TLS gRPC connection to a remote signer server",
Usage: "/path/to/client.crt for establishing a secure, TLS gRPC connection to a remote signer server.",
Value: "",
}
// RemoteSignerKeyPathFlag defines the path to a client.key file for a wallet to connect to
// a secure signer via TLS and gRPC.
RemoteSignerKeyPathFlag = &cli.StringFlag{
Name: "remote-signer-key-path",
Usage: "/path/to/client.key for establishing a secure, TLS gRPC connection to a remote signer server",
Usage: "/path/to/client.key for establishing a secure, TLS gRPC connection to a remote signer server.",
Value: "",
}
// RemoteSignerCACertPathFlag defines the path to a ca.crt file for a wallet to connect to
// a secure signer via TLS and gRPC.
RemoteSignerCACertPathFlag = &cli.StringFlag{
Name: "remote-signer-ca-crt-path",
Usage: "/path/to/ca.crt for establishing a secure, TLS gRPC connection to a remote signer server",
Usage: "/path/to/ca.crt for establishing a secure, TLS gRPC connection to a remote signer server.",
Value: "",
}
// Web3SignerURLFlag defines the URL for a web3signer to connect to.
@@ -285,7 +286,7 @@ var (
// web3signer documentation can be found in Consensys' web3signer project docs
Web3SignerURLFlag = &cli.StringFlag{
Name: "validators-external-signer-url",
Usage: "URL for consensys' web3signer software to use with the Prysm validator client",
Usage: "URL for consensys' web3signer software to use with the Prysm validator client.",
Value: "",
}
@@ -295,65 +296,70 @@ var (
// web3signer documentation can be found in Consensys' web3signer project docs
Web3SignerPublicValidatorKeysFlag = &cli.StringSliceFlag{
Name: "validators-external-signer-public-keys",
Usage: "comma separated list of public keys OR an external url endpoint for the validator to retrieve public keys from for usage with web3signer",
Usage: "Comma separated list of public keys OR an external url endpoint for the validator to retrieve public keys from for usage with web3signer.",
}
// KeymanagerKindFlag defines the kind of keymanager desired by a user during wallet creation.
KeymanagerKindFlag = &cli.StringFlag{
Name: "keymanager-kind",
Usage: "Kind of keymanager, either imported, derived, or remote, specified during wallet creation",
Usage: "Kind of keymanager, either imported, derived, or remote, specified during wallet creation.",
Value: "",
}
// SkipDepositConfirmationFlag skips the y/n confirmation userprompt for sending a deposit to the deposit contract.
SkipDepositConfirmationFlag = &cli.BoolFlag{
Name: "skip-deposit-confirmation",
Usage: "Skips the y/n confirmation userprompt for sending a deposit to the deposit contract",
Usage: "Skips the y/n confirmation userprompt for sending a deposit to the deposit contract.",
Value: false,
}
// EnableWebFlag enables controlling the validator client via the Prysm web ui. This is a work in progress.
EnableWebFlag = &cli.BoolFlag{
Name: "web",
Usage: "Enables the web portal for the validator client (work in progress)",
Usage: "(Work in progress): Enables the web portal for the validator client.",
Value: false,
}
// SlashingProtectionExportDirFlag allows specifying the outpt directory
// for a validator's slashing protection history.
SlashingProtectionExportDirFlag = &cli.StringFlag{
Name: "slashing-protection-export-dir",
Usage: "Allows users to specify the output directory to export their slashing protection EIP-3076 standard JSON File",
Usage: "Allows users to specify the output directory to export their slashing protection EIP-3076 standard JSON File.",
Value: "",
}
// GraffitiFileFlag specifies the file path to load graffiti values.
GraffitiFileFlag = &cli.StringFlag{
Name: "graffiti-file",
Usage: "The path to a YAML file with graffiti values",
Usage: "Path to a YAML file with graffiti values.",
}
// ProposerSettingsFlag defines the path or URL to a file with proposer config.
ProposerSettingsFlag = &cli.StringFlag{
Name: "proposer-settings-file",
Usage: "Set path to a YAML or JSON file containing validator settings used when proposing blocks such as (fee recipient and gas limit) (i.e. --proposer-settings-file=/path/to/proposer.json). File format found in docs",
Name: "proposer-settings-file",
Usage: `Sets path to a YAML or JSON file containing validator settings used when proposing blocks such as
fee recipient and gas limit. File format found in docs.`,
Value: "",
}
// ProposerSettingsURLFlag defines the path or URL to a file with proposer config.
ProposerSettingsURLFlag = &cli.StringFlag{
Name: "proposer-settings-url",
Usage: "Set URL to a REST endpoint containing validator settings used when proposing blocks such as (fee recipient) (i.e. --proposer-settings-url=https://example.com/api/getConfig). File format found in docs",
Name: "proposer-settings-url",
Usage: `Sets URL to a REST endpoint containing validator settings used when proposing blocks such as
fee recipient and gas limit. File format found in docs.`,
Value: "",
}
// SuggestedFeeRecipientFlag defines the address of the fee recipient.
SuggestedFeeRecipientFlag = &cli.StringFlag{
Name: "suggested-fee-recipient",
Usage: "Sets ALL validators' mapping to a suggested eth address to receive gas fees when proposing a block." +
" note that this is only a suggestion when integrating with a Builder API, which may choose to specify a different fee recipient as payment for the blocks it builds." +
" For additional setting overrides use the --" + ProposerSettingsFlag.Name + " or --" + ProposerSettingsURLFlag.Name + " flags. ",
Usage: `Sets ALL validators' mapping to a suggested eth address to receive gas fees when proposing a block.
Note that this is only a suggestion when integrating with a Builder API, which may choose to specify
a different fee recipient as payment for the blocks it builds. For additional setting overrides use the
--` + ProposerSettingsFlag.Name + " or --" + ProposerSettingsURLFlag.Name + " flags.",
Value: params.BeaconConfig().EthBurnAddressHex,
}
// EnableBuilderFlag enables the periodic validator registration API calls that will update the custom builder with validator settings.
EnableBuilderFlag = &cli.BoolFlag{
Name: "enable-builder",
Usage: "Enables Builder validator registration APIs for the validator client to update settings such as fee recipient and gas limit. Note* this flag is not required if using proposer settings config file",
Name: "enable-builder",
Usage: `Enables builder validator registration APIs for the validator client to update settings
such as fee recipient and gas limit. This flag is not required if using proposer
settings config file.`,
Value: false,
Aliases: []string{"enable-validator-registration"},
}
@@ -361,13 +367,13 @@ var (
// BuilderGasLimitFlag defines the gas limit for the builder to use for constructing a payload.
BuilderGasLimitFlag = &cli.StringFlag{
Name: "suggested-gas-limit",
Usage: "Sets gas limit for the builder to use for constructing a payload for all the validators",
Usage: "Sets gas limit for the builder to use for constructing a payload for all the validators.",
Value: fmt.Sprint(params.BeaconConfig().DefaultBuilderGasLimit),
}
// ValidatorRegistrationBatchSizeFlag sets the maximum size for one batch of validator registrations. Use a non-positive value to disable batching.
ValidatorRegistrationBatchSizeFlag = &cli.IntFlag{
Name: "validator-registration-batch-size",
// ValidatorsRegistrationBatchSizeFlag sets the maximum size for one batch of validator registrations. Use a non-positive value to disable batching.
ValidatorsRegistrationBatchSizeFlag = &cli.IntFlag{
Name: "validators-registration-batch-size",
Usage: "Sets the maximum size for one batch of validator registrations. Use a non-positive value to disable batching.",
Value: 0,
}
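For orientation, a minimal sketch of how these urfave/cli flag definitions are typically read back at startup; readFeeRecipient is a hypothetical helper added purely for illustration, not code from this changeset.
// Illustrative only: flag values are fetched from the cli context by flag name,
// falling back to the flag's declared Value when the user passes nothing.
func readFeeRecipient(ctx *cli.Context) string {
	return ctx.String(SuggestedFeeRecipientFlag.Name)
}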


@@ -8,14 +8,12 @@ import (
var (
InteropStartIndex = &cli.Uint64Flag{
Name: "interop-start-index",
Usage: "The start index to deterministically generate validator keys when used in combination with " +
"--interop-num-validators. Example: --interop-start-index=5 --interop-num-validators=3 would generate " +
"keys from index 5 to 7.",
Usage: `Start index to deterministically generate validator keys when used in combination with --interop-num-validators.
Example: --interop-start-index=5 --interop-num-validators=3 would generate keys from index 5 to 7.`,
}
InteropNumValidators = &cli.Uint64Flag{
Name: "interop-num-validators",
Usage: "The number of validators to deterministically generate. " +
"Example: --interop-start-index=5 --interop-num-validators=3 would generate " +
"keys from index 5 to 7.",
Usage: `Number of validators to deterministically generate.
Example: --interop-start-index=5 --interop-num-validators=3 would generate keys from index 5 to 7.`,
}
)


@@ -79,7 +79,7 @@ var appFlags = []cli.Flag{
flags.ProposerSettingsFlag,
flags.EnableBuilderFlag,
flags.BuilderGasLimitFlag,
flags.ValidatorRegistrationBatchSizeFlag,
flags.ValidatorsRegistrationBatchSizeFlag,
////////////////////
cmd.DisableMonitoringFlag,
cmd.MonitoringHostFlag,
@@ -117,81 +117,79 @@ func init() {
}
func main() {
app := cli.App{}
app.Name = "validator"
app.Usage = `launches an Ethereum validator client that interacts with a beacon chain, starts proposer and attester services, p2p connections, and more`
app.Version = version.Version()
app.Action = func(ctx *cli.Context) error {
if err := startNode(ctx); err != nil {
return cli.Exit(err.Error(), 1)
}
return nil
}
app.Commands = []*cli.Command{
walletcommands.Commands,
accountcommands.Commands,
slashingprotectioncommands.Commands,
dbcommands.Commands,
web.Commands,
}
app.Flags = appFlags
app.Before = func(ctx *cli.Context) error {
// Load flags from config file, if specified.
if err := cmd.LoadFlagsFromConfig(ctx, app.Flags); err != nil {
return err
}
format := ctx.String(cmd.LogFormat.Name)
switch format {
case "text":
formatter := new(prefixed.TextFormatter)
formatter.TimestampFormat = "2006-01-02 15:04:05"
formatter.FullTimestamp = true
// If persistent log files are written - we disable the log messages coloring because
// the colors are ANSI codes and seen as Gibberish in the log files.
formatter.DisableColors = ctx.String(cmd.LogFileName.Name) != ""
logrus.SetFormatter(formatter)
case "fluentd":
f := joonix.NewFormatter()
if err := joonix.DisableTimestampFormat(f); err != nil {
panic(err)
app := cli.App{
Name: "validator",
Usage: "Launches an Ethereum validator client that interacts with a beacon chain, starts proposer and attester services, p2p connections, and more.",
Version: version.Version(),
Action: func(ctx *cli.Context) error {
if err := startNode(ctx); err != nil {
return cli.Exit(err.Error(), 1)
}
logrus.SetFormatter(f)
case "json":
logrus.SetFormatter(&logrus.JSONFormatter{})
case "journald":
if err := journald.Enable(); err != nil {
return nil
},
Commands: []*cli.Command{
walletcommands.Commands,
accountcommands.Commands,
slashingprotectioncommands.Commands,
dbcommands.Commands,
web.Commands,
},
Flags: appFlags,
Before: func(ctx *cli.Context) error {
// Load flags from config file, if specified.
if err := cmd.LoadFlagsFromConfig(ctx, appFlags); err != nil {
return err
}
default:
return fmt.Errorf("unknown log format %s", format)
}
logFileName := ctx.String(cmd.LogFileName.Name)
if logFileName != "" {
if err := logs.ConfigurePersistentLogging(logFileName); err != nil {
log.WithError(err).Error("Failed to configuring logging to disk.")
format := ctx.String(cmd.LogFormat.Name)
switch format {
case "text":
formatter := new(prefixed.TextFormatter)
formatter.TimestampFormat = "2006-01-02 15:04:05"
formatter.FullTimestamp = true
// If persistent log files are written - we disable the log messages coloring because
// the colors are ANSI codes and seen as Gibberish in the log files.
formatter.DisableColors = ctx.String(cmd.LogFileName.Name) != ""
logrus.SetFormatter(formatter)
case "fluentd":
f := joonix.NewFormatter()
if err := joonix.DisableTimestampFormat(f); err != nil {
panic(err)
}
logrus.SetFormatter(f)
case "json":
logrus.SetFormatter(&logrus.JSONFormatter{})
case "journald":
if err := journald.Enable(); err != nil {
return err
}
default:
return fmt.Errorf("unknown log format %s", format)
}
}
// Fix data dir for Windows users.
outdatedDataDir := filepath.Join(file.HomeDir(), "AppData", "Roaming", "Eth2Validators")
currentDataDir := flags.DefaultValidatorDir()
if err := cmd.FixDefaultDataDir(outdatedDataDir, currentDataDir); err != nil {
log.WithError(err).Error("Cannot update data directory")
}
logFileName := ctx.String(cmd.LogFileName.Name)
if logFileName != "" {
if err := logs.ConfigurePersistentLogging(logFileName); err != nil {
log.WithError(err).Error("Failed to configuring logging to disk.")
}
}
if err := debug.Setup(ctx); err != nil {
return err
}
return cmd.ValidateNoArgs(ctx)
}
// Fix data dir for Windows users.
outdatedDataDir := filepath.Join(file.HomeDir(), "AppData", "Roaming", "Eth2Validators")
currentDataDir := flags.DefaultValidatorDir()
if err := cmd.FixDefaultDataDir(outdatedDataDir, currentDataDir); err != nil {
log.WithError(err).Error("Cannot update data directory")
}
app.After = func(ctx *cli.Context) error {
debug.Exit(ctx)
return nil
if err := debug.Setup(ctx); err != nil {
return err
}
return cmd.ValidateNoArgs(ctx)
},
After: func(ctx *cli.Context) error {
debug.Exit(ctx)
return nil
},
}
defer func() {


@@ -13,7 +13,7 @@ import (
var Commands = &cli.Command{
Name: "slashing-protection-history",
Category: "slashing-protection-history",
Usage: "defines commands for interacting your validator's slashing protection history",
Usage: "Defines commands for interacting your validator's slashing protection history.",
Subcommands: []*cli.Command{
{
Name: "export",


@@ -14,25 +14,33 @@ import (
var appHelpTemplate = `NAME:
{{.App.Name}} - {{.App.Usage}}
USAGE:
{{.App.HelpName}} [options]{{if .App.Commands}} command [command options]{{end}} {{if .App.ArgsUsage}}{{.App.ArgsUsage}}{{else}}[arguments...]{{end}}
{{if .App.Version}}
AUTHOR:
{{range .App.Authors}}{{ . }}{{end}}
{{end}}{{if .App.Commands}}
GLOBAL OPTIONS:
{{if .App.Version}}
VERSION:
{{.App.Version}}
{{end -}}
{{if len .App.Authors}}
AUTHORS:
{{range .App.Authors}}{{ . }}
{{end -}}
{{end -}}
{{if .App.Commands}}
global OPTIONS:
{{range .App.Commands}}{{join .Names ", "}}{{ "\t" }}{{.Usage}}
{{end}}{{end}}{{if .FlagGroups}}
{{end -}}
{{end -}}
{{if .FlagGroups}}
{{range .FlagGroups}}{{.Name}} OPTIONS:
{{range .Flags}}{{.}}
{{end}}
{{end}}{{end}}{{if .App.Copyright }}
{{end -}}
{{end -}}
{{if .App.Copyright }}
COPYRIGHT:
{{.App.Copyright}}
VERSION:
{{.App.Version}}
{{end}}{{if len .App.Authors}}
{{end}}
{{end -}}
`
type flagGroup struct {
@@ -113,7 +121,7 @@ var appHelpFlagGroups = []flagGroup{
flags.SuggestedFeeRecipientFlag,
flags.EnableBuilderFlag,
flags.BuilderGasLimitFlag,
flags.ValidatorRegistrationBatchSizeFlag,
flags.ValidatorsRegistrationBatchSizeFlag,
},
},
{


@@ -15,7 +15,7 @@ var log = logrus.WithField("prefix", "wallet")
var Commands = &cli.Command{
Name: "wallet",
Category: "wallet",
Usage: "defines commands for interacting with Ethereum validator wallets",
Usage: "Defines commands for interacting with Ethereum validator wallets.",
Subcommands: []*cli.Command{
{
Name: "create",


@@ -15,7 +15,7 @@ import (
var Commands = &cli.Command{
Name: "web",
Category: "web",
Usage: "defines commands for interacting with the Prysm web interface",
Usage: "Defines commands for interacting with the Prysm web interface.",
Subcommands: []*cli.Command{
{
Name: "generate-auth-token",


@@ -40,7 +40,6 @@ type Flags struct {
EnableExperimentalState bool // EnableExperimentalState turns on the latest and greatest (but potentially unstable) changes to the beacon state.
WriteSSZStateTransitions bool // WriteSSZStateTransitions to tmp directory.
EnablePeerScorer bool // EnablePeerScorer enables experimental peer scoring in p2p.
DisableReorgLateBlocks bool // DisableReorgLateBlocks disables reorgs of late blocks.
WriteWalletPasswordOnWebOnboarding bool // WriteWalletPasswordOnWebOnboarding writes the password to disk after Prysm web signup.
EnableDoppelGanger bool // EnableDoppelGanger enables doppelganger protection on startup for the validator.
EnableHistoricalSpaceRepresentation bool // EnableHistoricalSpaceRepresentation enables the saving of registry validators in separate buckets to save space
@@ -192,10 +191,6 @@ func ConfigureBeaconChain(ctx *cli.Context) error {
cfg.DisableGRPCConnectionLogs = true
}
if ctx.Bool(disableReorgLateBlocks.Name) {
logEnabled(disableReorgLateBlocks)
cfg.DisableReorgLateBlocks = true
}
cfg.EnablePeerScorer = true
if ctx.Bool(disablePeerScorer.Name) {
logDisabled(disablePeerScorer)


@@ -53,6 +53,11 @@ var (
Usage: deprecatedUsage,
Hidden: true,
}
deprecatedDisableReorgLateBlocks = &cli.BoolFlag{
Name: "disable-reorg-late-blocks",
Usage: deprecatedUsage,
Hidden: true,
}
)
// Deprecated flags for both the beacon node and validator client.
@@ -66,6 +71,7 @@ var deprecatedFlags = []cli.Flag{
deprecatedAggregateParallel,
deprecatedEnableOptionalEngineMethods,
deprecatedDisableBuildBlockParallel,
deprecatedDisableReorgLateBlocks,
}
// deprecatedBeaconFlags contains flags that are still used by other components


@@ -10,44 +10,40 @@ var (
// PraterTestnet flag for the multiclient Ethereum consensus testnet.
PraterTestnet = &cli.BoolFlag{
Name: "prater",
Usage: "Run Prysm configured for the Prater / Goerli test network",
Usage: "Runs Prysm configured for the Prater / Goerli test network.",
Aliases: []string{"goerli"},
}
// SepoliaTestnet flag for the multiclient Ethereum consensus testnet.
SepoliaTestnet = &cli.BoolFlag{
Name: "sepolia",
Usage: "Run Prysm configured for the Sepolia beacon chain test network",
Usage: "Runs Prysm configured for the Sepolia test network.",
}
// HoleskyTestnet flag for the multiclient Ethereum consensus testnet.
HoleskyTestnet = &cli.BoolFlag{
Name: "holesky",
Usage: "Run Prysm configured for the Holesky beacon chain test network",
Usage: "Runs Prysm configured for the Holesky test network.",
}
// Mainnet flag for easier tooling, no-op
Mainnet = &cli.BoolFlag{
Value: true,
Name: "mainnet",
Usage: "Run on Ethereum Beacon Chain Main Net. This is the default and can be omitted.",
Usage: "Runs on Ethereum main network. This is the default and can be omitted.",
}
devModeFlag = &cli.BoolFlag{
Name: "dev",
Usage: "Enable experimental features still in development. These features may not be stable.",
Usage: "Enables experimental features still in development. These features may not be stable.",
}
enableExperimentalState = &cli.BoolFlag{
Name: "enable-experimental-state",
Usage: "Turn on the latest and greatest (but potentially unstable) changes to the beacon state",
Usage: "Turns on the latest and greatest (but potentially unstable) changes to the beacon state.",
}
writeSSZStateTransitionsFlag = &cli.BoolFlag{
Name: "interop-write-ssz-state-transitions",
Usage: "Write ssz states to disk after attempted state transition",
Usage: "Writes SSZ states to disk after attempted state transitio.",
}
disableGRPCConnectionLogging = &cli.BoolFlag{
Name: "disable-grpc-connection-logging",
Usage: "Disables displaying logs for newly connected grpc clients",
}
disableReorgLateBlocks = &cli.BoolFlag{
Name: "disable-reorg-late-blocks",
Usage: "Disables reorgs of late blocks",
Usage: "Disables displaying logs for newly connected grpc clients.",
}
disablePeerScorer = &cli.BoolFlag{
Name: "disable-peer-scorer",
@@ -55,31 +51,31 @@ var (
}
writeWalletPasswordOnWebOnboarding = &cli.BoolFlag{
Name: "write-wallet-password-on-web-onboarding",
Usage: "(Danger): Writes the wallet password to the wallet directory on completing Prysm web onboarding. " +
"We recommend against this flag unless you are an advanced user.",
Usage: `(Danger): Writes the wallet password to the wallet directory on completing Prysm web onboarding.
We recommend against this flag unless you are an advanced user.`,
}
aggregateFirstInterval = &cli.DurationFlag{
Name: "aggregate-first-interval",
Usage: "(Advanced): Specifies the first interval in which attestations are aggregated in the slot (typically unnaggregated attestations are aggregated in this interval)",
Usage: "(Advanced): Specifies the first interval in which attestations are aggregated in the slot (typically unnaggregated attestations are aggregated in this interval).",
Value: 7000 * time.Millisecond,
Hidden: true,
}
aggregateSecondInterval = &cli.DurationFlag{
Name: "aggregate-second-interval",
Usage: "(Advanced): Specifies the second interval in which attestations are aggregated in the slot",
Usage: "(Advanced): Specifies the second interval in which attestations are aggregated in the slot.",
Value: 9500 * time.Millisecond,
Hidden: true,
}
aggregateThirdInterval = &cli.DurationFlag{
Name: "aggregate-third-interval",
Usage: "(Advanced): Specifies the third interval in which attestations are aggregated in the slot",
Usage: "(Advanced): Specifies the third interval in which attestations are aggregated in the slot.",
Value: 11800 * time.Millisecond,
Hidden: true,
}
dynamicKeyReloadDebounceInterval = &cli.DurationFlag{
Name: "dynamic-key-reload-debounce-interval",
Usage: "(Advanced): Specifies the time duration the validator waits to reload new keys if they have " +
"changed on disk. Default 1s, can be any type of duration such as 1.5s, 1000ms, 1m.",
Usage: `(Advanced): Specifies the time duration the validator waits to reload new keys if they have changed on disk.
Can be any type of duration such as 1.5s, 1000ms, 1m.`,
Value: time.Second,
}
disableBroadcastSlashingFlag = &cli.BoolFlag{
@@ -88,25 +84,25 @@ var (
}
attestTimely = &cli.BoolFlag{
Name: "attest-timely",
Usage: "Fixes validator can attest timely after current block processes. See #8185 for more details",
Usage: "Fixes validator can attest timely after current block processes. See #8185 for more details.",
}
enableSlasherFlag = &cli.BoolFlag{
Name: "slasher",
Usage: "Enables a slasher in the beacon node for detecting slashable offenses",
Usage: "Enables a slasher in the beacon node for detecting slashable offenses.",
}
enableSlashingProtectionPruning = &cli.BoolFlag{
Name: "enable-slashing-protection-history-pruning",
Usage: "Enables the pruning of the validator client's slashing protection database",
Usage: "Enables the pruning of the validator client's slashing protection database.",
}
enableDoppelGangerProtection = &cli.BoolFlag{
Name: "enable-doppelganger",
Usage: "Enables the validator to perform a doppelganger check. (Warning): This is not " +
"a foolproof method to find duplicate instances in the network. Your validator will still be" +
" vulnerable if it is being run in unsafe configurations.",
Usage: `Enables the validator to perform a doppelganger check.
This is not "a foolproof method to find duplicate instances in the network.
Your validator will still be vulnerable if it is being run in unsafe configurations.`,
}
disableStakinContractCheck = &cli.BoolFlag{
Name: "disable-staking-contract-check",
Usage: "Disables checking of staking contract deposits when proposing blocks, useful for devnets",
Usage: "Disables checking of staking contract deposits when proposing blocks, useful for devnets.",
}
enableHistoricalSpaceRepresentation = &cli.BoolFlag{
Name: "enable-historical-state-representation",
@@ -116,52 +112,52 @@ var (
}
enableStartupOptimistic = &cli.BoolFlag{
Name: "startup-optimistic",
Usage: "Treats every block as optimistically synced at launch. Use with caution",
Usage: "Treats every block as optimistically synced at launch. Use with caution.",
Value: false,
Hidden: true,
}
enableFullSSZDataLogging = &cli.BoolFlag{
Name: "enable-full-ssz-data-logging",
Usage: "Enables displaying logs for full ssz data on rejected gossip messages",
Usage: "Enables displaying logs for full ssz data on rejected gossip messages.",
}
SaveFullExecutionPayloads = &cli.BoolFlag{
Name: "save-full-execution-payloads",
Usage: "Saves beacon blocks with full execution payloads instead of execution payload headers in the database",
Usage: "Saves beacon blocks with full execution payloads instead of execution payload headers in the database.",
}
EnableBeaconRESTApi = &cli.BoolFlag{
Name: "enable-beacon-rest-api",
Usage: "Experimental enable of the beacon REST API when querying a beacon node",
Usage: "(Experimental): Enables of the beacon REST API when querying a beacon node.",
}
enableVerboseSigVerification = &cli.BoolFlag{
Name: "enable-verbose-sig-verification",
Usage: "Enables identifying invalid signatures if batch verification fails when processing block",
Usage: "Enables identifying invalid signatures if batch verification fails when processing block.",
}
disableOptionalEngineMethods = &cli.BoolFlag{
Name: "disable-optional-engine-methods",
Usage: "Disables the optional engine methods",
Usage: "Disables the optional engine methods.",
}
prepareAllPayloads = &cli.BoolFlag{
Name: "prepare-all-payloads",
Usage: "Informs the engine to prepare all local payloads. Useful for relayers and builders",
Usage: "Informs the engine to prepare all local payloads. Useful for relayers and builders.",
}
EnableEIP4881 = &cli.BoolFlag{
Name: "enable-eip-4881",
Usage: "Enables the deposit tree specified in EIP4881",
Usage: "Enables the deposit tree specified in EIP-4881.",
}
disableResourceManager = &cli.BoolFlag{
Name: "disable-resource-manager",
Usage: "Disables running the libp2p resource manager",
Usage: "Disables running the libp2p resource manager.",
}
// DisableRegistrationCache a flag for disabling the validator registration cache and use db instead.
DisableRegistrationCache = &cli.BoolFlag{
Name: "disable-registration-cache",
Usage: "A temporary flag for disabling the validator registration cache instead of using the db. note: registrations do not clear on restart while using the db",
Usage: "Temporary flag for disabling the validator registration cache instead of using the DB. Note: registrations do not clear on restart while using the DB.",
}
disableAggregateParallel = &cli.BoolFlag{
Name: "disable-aggregate-parallel",
Usage: "Disables parallel aggregation of attestations",
Usage: "Disables parallel aggregation of attestations.",
}
)
@@ -206,7 +202,6 @@ var BeaconChainFlags = append(deprecatedBeaconFlags, append(deprecatedFlags, []c
enableSlasherFlag,
enableHistoricalSpaceRepresentation,
disableStakinContractCheck,
disableReorgLateBlocks,
SaveFullExecutionPayloads,
enableStartupOptimistic,
enableFullSSZDataLogging,


@@ -58,12 +58,14 @@ go_test(
"@consensus_spec_tests_mainnet//:test_data",
"@consensus_spec_tests_minimal//:test_data",
"@eth2_networks//:configs",
"@goerli_testnet//:configs",
"@holesky_testnet//:configs",
],
embed = [":go_default_library"],
gotags = ["develop"],
tags = ["CI_race_detection"],
deps = [
"//build/bazel:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//io/file:go_default_library",


@@ -1,7 +1,6 @@
package params_test
import (
"path"
"path/filepath"
"testing"
@@ -12,13 +11,6 @@ import (
"github.com/prysmaticlabs/prysm/v4/testing/require"
)
func testnetConfigFilePath(t *testing.T, network string) string {
fPath, err := bazel.Runfile("external/eth2_networks")
require.NoError(t, err)
configFilePath := path.Join(fPath, "shared", network, "config.yaml")
return configFilePath
}
func TestE2EConfigParity(t *testing.T) {
params.SetupTestConfigCleanup(t)
testDir := bazel.TestTmpDir()


@@ -1,8 +1,6 @@
package params
import (
"math"
eth1Params "github.com/ethereum/go-ethereum/params"
)
@@ -42,7 +40,7 @@ func PraterConfig() *BeaconChainConfig {
cfg.BellatrixForkVersion = []byte{0x2, 0x0, 0x10, 0x20}
cfg.CapellaForkEpoch = 162304
cfg.CapellaForkVersion = []byte{0x3, 0x0, 0x10, 0x20}
cfg.DenebForkEpoch = math.MaxUint64
cfg.DenebForkEpoch = 231680 // 2024-01-17 06:32:00 (UTC)
cfg.DenebForkVersion = []byte{0x4, 0x0, 0x10, 0x20}
cfg.TerminalTotalDifficulty = "10790000"
cfg.DepositContractAddress = "0xff50ed3d0ec03aC01D4C79aAd74928BFF48a7b2b"


@@ -1,8 +1,10 @@
package params_test
import (
"path"
"testing"
"github.com/prysmaticlabs/prysm/v4/build/bazel"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/testing/require"
)
@@ -16,7 +18,9 @@ func TestPraterConfigMatchesUpstreamYaml(t *testing.T) {
cfg, err = params.UnmarshalConfigFile(fp, cfg)
require.NoError(t, err)
}
configFP := testnetConfigFilePath(t, "prater")
fPath, err := bazel.Runfile("external/goerli_testnet")
require.NoError(t, err)
configFP := path.Join(fPath, "prater", "config.yaml")
pcfg, err := params.UnmarshalConfigFile(configFP, nil)
require.NoError(t, err)
fields := fieldsFromYamls(t, append(presetFPs, configFP))


@@ -318,7 +318,7 @@ func Test_NewBeaconBlockBody(t *testing.T) {
b, ok := i.(*BeaconBlockBody)
require.Equal(t, true, ok)
assert.Equal(t, version.Bellatrix, b.version)
assert.Equal(t, true, b.isBlinded)
assert.Equal(t, true, b.IsBlinded())
})
t.Run("BeaconBlockBodyCapella", func(t *testing.T) {
pb := &eth.BeaconBlockBodyCapella{}
@@ -335,7 +335,7 @@ func Test_NewBeaconBlockBody(t *testing.T) {
b, ok := i.(*BeaconBlockBody)
require.Equal(t, true, ok)
assert.Equal(t, version.Capella, b.version)
assert.Equal(t, true, b.isBlinded)
assert.Equal(t, true, b.IsBlinded())
})
t.Run("BeaconBlockBodyDeneb", func(t *testing.T) {
pb := &eth.BeaconBlockBodyDeneb{}
@@ -352,7 +352,7 @@ func Test_NewBeaconBlockBody(t *testing.T) {
b, ok := i.(*BeaconBlockBody)
require.Equal(t, true, ok)
assert.Equal(t, version.Deneb, b.version)
assert.Equal(t, true, b.isBlinded)
assert.Equal(t, true, b.IsBlinded())
})
t.Run("nil", func(t *testing.T) {
_, err := NewBeaconBlockBody(nil)
@@ -388,7 +388,7 @@ func Test_BuildSignedBeaconBlock(t *testing.T) {
assert.Equal(t, version.Bellatrix, sb.Version())
})
t.Run("BellatrixBlind", func(t *testing.T) {
b := &BeaconBlock{version: version.Bellatrix, body: &BeaconBlockBody{version: version.Bellatrix, isBlinded: true}}
b := &BeaconBlock{version: version.Bellatrix, body: &BeaconBlockBody{version: version.Bellatrix}}
sb, err := BuildSignedBeaconBlock(b, sig[:])
require.NoError(t, err)
assert.DeepEqual(t, sig, sb.Signature())
@@ -403,7 +403,7 @@ func Test_BuildSignedBeaconBlock(t *testing.T) {
assert.Equal(t, version.Capella, sb.Version())
})
t.Run("CapellaBlind", func(t *testing.T) {
b := &BeaconBlock{version: version.Capella, body: &BeaconBlockBody{version: version.Capella, isBlinded: true}}
b := &BeaconBlock{version: version.Capella, body: &BeaconBlockBody{version: version.Capella}}
sb, err := BuildSignedBeaconBlock(b, sig[:])
require.NoError(t, err)
assert.DeepEqual(t, sig, sb.Signature())
@@ -418,7 +418,7 @@ func Test_BuildSignedBeaconBlock(t *testing.T) {
assert.Equal(t, version.Deneb, sb.Version())
})
t.Run("DenebBlind", func(t *testing.T) {
b := &BeaconBlock{version: version.Deneb, body: &BeaconBlockBody{version: version.Deneb, isBlinded: true}}
b := &BeaconBlock{version: version.Deneb, body: &BeaconBlockBody{version: version.Deneb}}
sb, err := BuildSignedBeaconBlock(b, sig[:])
require.NoError(t, err)
assert.DeepEqual(t, sig, sb.Signature())


@@ -42,7 +42,7 @@ func (b *SignedBeaconBlock) IsNil() bool {
}
// Copy performs a deep copy of the signed beacon block object.
func (b *SignedBeaconBlock) Copy() (interfaces.ReadOnlySignedBeaconBlock, error) {
func (b *SignedBeaconBlock) Copy() (interfaces.SignedBeaconBlock, error) {
if b == nil {
return nil, nil
}
@@ -333,6 +333,34 @@ func (b *SignedBeaconBlock) ToBlinded() (interfaces.ReadOnlySignedBeaconBlock, e
}
}
func (b *SignedBeaconBlock) Unblind(e interfaces.ExecutionData) error {
if e.IsNil() {
return errors.New("cannot unblind with nil execution data")
}
if !b.IsBlinded() {
return errors.New("cannot unblind if the block is already unblinded")
}
payloadRoot, err := e.HashTreeRoot()
if err != nil {
return err
}
header, err := b.Block().Body().Execution()
if err != nil {
return err
}
headerRoot, err := header.HashTreeRoot()
if err != nil {
return err
}
if payloadRoot != headerRoot {
return errors.New("cannot unblind with different execution data")
}
if err := b.SetExecution(e); err != nil {
return err
}
return nil
}
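As context, a minimal sketch of how a caller might use the new Unblind method once a builder reveals the full payload; unblindForBroadcast is a hypothetical wrapper for illustration only, not part of this diff.
// Illustrative only: swap the revealed payload into a blinded block before broadcasting.
func unblindForBroadcast(blk interfaces.SignedBeaconBlock, payload interfaces.ExecutionData) error {
	if !blk.IsBlinded() {
		return nil // already a full block, nothing to unblind
	}
	// Unblind checks that the payload's hash tree root matches the header
	// committed in the block, then swaps the payload in via SetExecution.
	return blk.Unblind(payload)
}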
// Version of the underlying protobuf object.
func (b *SignedBeaconBlock) Version() int {
return b.version
@@ -340,7 +368,7 @@ func (b *SignedBeaconBlock) Version() int {
// IsBlinded metadata on whether a block is blinded
func (b *SignedBeaconBlock) IsBlinded() bool {
return b.block.body.isBlinded
return b.version >= version.Bellatrix && b.block.body.executionPayload == nil
}
// ValueInGwei metadata on the payload value returned by the builder. Value is 0 by default if local.
@@ -612,7 +640,7 @@ func (b *BeaconBlock) IsNil() bool {
// IsBlinded checks if the beacon block is a blinded block.
func (b *BeaconBlock) IsBlinded() bool {
return b.body.isBlinded
return b.version >= version.Bellatrix && b.body.executionPayload == nil
}
// Version of the underlying protobuf object.
@@ -1011,7 +1039,7 @@ func (b *BeaconBlockBody) Execution() (interfaces.ExecutionData, error) {
case version.Phase0, version.Altair:
return nil, consensus_types.ErrNotSupported("Execution", b.version)
case version.Bellatrix:
if b.isBlinded {
if b.IsBlinded() {
var ph *enginev1.ExecutionPayloadHeader
var ok bool
if b.executionPayloadHeader != nil {
@@ -1032,7 +1060,7 @@ func (b *BeaconBlockBody) Execution() (interfaces.ExecutionData, error) {
}
return WrappedExecutionPayload(p)
case version.Capella:
if b.isBlinded {
if b.IsBlinded() {
var ph *enginev1.ExecutionPayloadHeaderCapella
var ok bool
if b.executionPayloadHeader != nil {
@@ -1053,7 +1081,7 @@ func (b *BeaconBlockBody) Execution() (interfaces.ExecutionData, error) {
}
return WrappedExecutionPayloadCapella(p, 0)
case version.Deneb:
if b.isBlinded {
if b.IsBlinded() {
var ph *enginev1.ExecutionPayloadHeaderDeneb
var ok bool
if b.executionPayloadHeader != nil {
@@ -1114,17 +1142,17 @@ func (b *BeaconBlockBody) HashTreeRoot() ([field_params.RootLength]byte, error)
case version.Altair:
return pb.(*eth.BeaconBlockBodyAltair).HashTreeRoot()
case version.Bellatrix:
if b.isBlinded {
if b.IsBlinded() {
return pb.(*eth.BlindedBeaconBlockBodyBellatrix).HashTreeRoot()
}
return pb.(*eth.BeaconBlockBodyBellatrix).HashTreeRoot()
case version.Capella:
if b.isBlinded {
if b.IsBlinded() {
return pb.(*eth.BlindedBeaconBlockBodyCapella).HashTreeRoot()
}
return pb.(*eth.BeaconBlockBodyCapella).HashTreeRoot()
case version.Deneb:
if b.isBlinded {
if b.IsBlinded() {
return pb.(*eth.BlindedBeaconBlockBodyDeneb).HashTreeRoot()
}
return pb.(*eth.BeaconBlockBodyDeneb).HashTreeRoot()
@@ -1132,3 +1160,8 @@ func (b *BeaconBlockBody) HashTreeRoot() ([field_params.RootLength]byte, error)
return [field_params.RootLength]byte{}, errIncorrectBodyVersion
}
}
// IsBlinded checks if the beacon block body is a blinded block body.
func (b *BeaconBlockBody) IsBlinded() bool {
return b.version >= version.Bellatrix && b.executionPayload == nil
}
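A short sketch of the invariant this refactor relies on: for Bellatrix and later bodies, blindedness is now derived from whether a full execution payload is present rather than from a stored flag. The construction below mirrors the updated tests and is illustrative only.
// Illustrative only: blindedness is implied by which payload field is populated.
body := &BeaconBlockBody{version: version.Deneb, executionPayloadHeader: executionPayloadHeader{}}
_ = body.IsBlinded() // true: a Bellatrix+ body with no full executionPayload set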


@@ -200,7 +200,6 @@ func Test_BeaconBlock_Copy(t *testing.T) {
b.version = version.Bellatrix
b.body.version = b.version
b.body.isBlinded = true
cp, err = b.Copy()
require.NoError(t, err)
assert.NotEqual(t, cp, b)
@@ -232,17 +231,6 @@ func Test_BeaconBlock_Copy(t *testing.T) {
gas, err := e.ExcessBlobGas()
require.NoError(t, err)
require.DeepEqual(t, gas, uint64(123))
b.body.isBlinded = true
cp, err = b.Copy()
require.NoError(t, err)
assert.NotEqual(t, cp, b)
assert.NotEqual(t, cp.Body(), bb)
e, err = cp.Body().Execution()
require.NoError(t, err)
gas, err = e.ExcessBlobGas()
require.NoError(t, err)
require.DeepEqual(t, gas, uint64(223))
}
func Test_BeaconBlock_IsNil(t *testing.T) {
@@ -263,8 +251,9 @@ func Test_BeaconBlock_IsNil(t *testing.T) {
func Test_BeaconBlock_IsBlinded(t *testing.T) {
b := &SignedBeaconBlock{block: &BeaconBlock{body: &BeaconBlockBody{}}}
assert.Equal(t, false, b.IsBlinded())
b.SetBlinded(true)
assert.Equal(t, true, b.IsBlinded())
b1 := &SignedBeaconBlock{version: version.Bellatrix, block: &BeaconBlock{body: &BeaconBlockBody{executionPayloadHeader: executionPayloadHeader{}}}}
assert.Equal(t, true, b1.IsBlinded())
}
func Test_BeaconBlock_Version(t *testing.T) {
@@ -433,7 +422,7 @@ func Test_BeaconBlockBody_Execution(t *testing.T) {
executionCapellaHeader := &pb.ExecutionPayloadHeaderCapella{BlockNumber: 1}
eCapellaHeader, err := WrappedExecutionPayloadHeaderCapella(executionCapellaHeader, 0)
require.NoError(t, err)
bb = &SignedBeaconBlock{version: version.Capella, block: &BeaconBlock{version: version.Capella, body: &BeaconBlockBody{version: version.Capella, isBlinded: true}}}
bb = &SignedBeaconBlock{version: version.Capella, block: &BeaconBlock{version: version.Capella, body: &BeaconBlockBody{version: version.Capella}}}
require.NoError(t, bb.SetExecution(eCapellaHeader))
result, err = bb.Block().Body().Execution()
require.NoError(t, err)
@@ -454,7 +443,7 @@ func Test_BeaconBlockBody_Execution(t *testing.T) {
executionDenebHeader := &pb.ExecutionPayloadHeaderDeneb{BlockNumber: 1, ExcessBlobGas: 223}
eDenebHeader, err := WrappedExecutionPayloadHeaderDeneb(executionDenebHeader, 0)
require.NoError(t, err)
bb = &SignedBeaconBlock{version: version.Deneb, block: &BeaconBlock{version: version.Deneb, body: &BeaconBlockBody{version: version.Deneb, isBlinded: true}}}
bb = &SignedBeaconBlock{version: version.Deneb, block: &BeaconBlock{version: version.Deneb, body: &BeaconBlockBody{version: version.Deneb}}}
require.NoError(t, bb.SetExecution(eDenebHeader))
result, err = bb.Block().Body().Execution()
require.NoError(t, err)


@@ -313,7 +313,7 @@ func (b *BeaconBlockBody) Proto() (proto.Message, error) {
SyncAggregate: b.syncAggregate,
}, nil
case version.Bellatrix:
if b.isBlinded {
if b.IsBlinded() {
var ph *enginev1.ExecutionPayloadHeader
var ok bool
if b.executionPayloadHeader != nil {
@@ -356,7 +356,7 @@ func (b *BeaconBlockBody) Proto() (proto.Message, error) {
ExecutionPayload: p,
}, nil
case version.Capella:
if b.isBlinded {
if b.IsBlinded() {
var ph *enginev1.ExecutionPayloadHeaderCapella
var ok bool
if b.executionPayloadHeader != nil {
@@ -401,7 +401,7 @@ func (b *BeaconBlockBody) Proto() (proto.Message, error) {
BlsToExecutionChanges: b.blsToExecutionChanges,
}, nil
case version.Deneb:
if b.isBlinded {
if b.IsBlinded() {
var ph *enginev1.ExecutionPayloadHeaderDeneb
var ok bool
if b.executionPayloadHeader != nil {
@@ -755,7 +755,6 @@ func initBlockBodyFromProtoPhase0(pb *eth.BeaconBlockBody) (*BeaconBlockBody, er
b := &BeaconBlockBody{
version: version.Phase0,
isBlinded: false,
randaoReveal: bytesutil.ToBytes96(pb.RandaoReveal),
eth1Data: pb.Eth1Data,
graffiti: bytesutil.ToBytes32(pb.Graffiti),
@@ -775,7 +774,6 @@ func initBlockBodyFromProtoAltair(pb *eth.BeaconBlockBodyAltair) (*BeaconBlockBo
b := &BeaconBlockBody{
version: version.Altair,
isBlinded: false,
randaoReveal: bytesutil.ToBytes96(pb.RandaoReveal),
eth1Data: pb.Eth1Data,
graffiti: bytesutil.ToBytes32(pb.Graffiti),
@@ -801,7 +799,6 @@ func initBlockBodyFromProtoBellatrix(pb *eth.BeaconBlockBodyBellatrix) (*BeaconB
}
b := &BeaconBlockBody{
version: version.Bellatrix,
isBlinded: false,
randaoReveal: bytesutil.ToBytes96(pb.RandaoReveal),
eth1Data: pb.Eth1Data,
graffiti: bytesutil.ToBytes32(pb.Graffiti),
@@ -828,7 +825,6 @@ func initBlindedBlockBodyFromProtoBellatrix(pb *eth.BlindedBeaconBlockBodyBellat
}
b := &BeaconBlockBody{
version: version.Bellatrix,
isBlinded: true,
randaoReveal: bytesutil.ToBytes96(pb.RandaoReveal),
eth1Data: pb.Eth1Data,
graffiti: bytesutil.ToBytes32(pb.Graffiti),
@@ -855,7 +851,6 @@ func initBlockBodyFromProtoCapella(pb *eth.BeaconBlockBodyCapella) (*BeaconBlock
}
b := &BeaconBlockBody{
version: version.Capella,
isBlinded: false,
randaoReveal: bytesutil.ToBytes96(pb.RandaoReveal),
eth1Data: pb.Eth1Data,
graffiti: bytesutil.ToBytes32(pb.Graffiti),
@@ -883,7 +878,6 @@ func initBlindedBlockBodyFromProtoCapella(pb *eth.BlindedBeaconBlockBodyCapella)
}
b := &BeaconBlockBody{
version: version.Capella,
isBlinded: true,
randaoReveal: bytesutil.ToBytes96(pb.RandaoReveal),
eth1Data: pb.Eth1Data,
graffiti: bytesutil.ToBytes32(pb.Graffiti),
@@ -911,7 +905,6 @@ func initBlockBodyFromProtoDeneb(pb *eth.BeaconBlockBodyDeneb) (*BeaconBlockBody
}
b := &BeaconBlockBody{
version: version.Deneb,
isBlinded: false,
randaoReveal: bytesutil.ToBytes96(pb.RandaoReveal),
eth1Data: pb.Eth1Data,
graffiti: bytesutil.ToBytes32(pb.Graffiti),
@@ -940,7 +933,6 @@ func initBlindedBlockBodyFromProtoDeneb(pb *eth.BlindedBeaconBlockBodyDeneb) (*B
}
b := &BeaconBlockBody{
version: version.Deneb,
isBlinded: true,
randaoReveal: bytesutil.ToBytes96(pb.RandaoReveal),
eth1Data: pb.Eth1Data,
graffiti: bytesutil.ToBytes32(pb.Graffiti),


@@ -1312,7 +1312,6 @@ func bodyBlindedBellatrix(t *testing.T) *BeaconBlockBody {
require.NoError(t, err)
return &BeaconBlockBody{
version: version.Bellatrix,
isBlinded: true,
randaoReveal: f.sig,
eth1Data: &eth.Eth1Data{
DepositRoot: f.root[:],
@@ -1360,7 +1359,6 @@ func bodyBlindedCapella(t *testing.T) *BeaconBlockBody {
require.NoError(t, err)
return &BeaconBlockBody{
version: version.Capella,
isBlinded: true,
randaoReveal: f.sig,
eth1Data: &eth.Eth1Data{
DepositRoot: f.root[:],
@@ -1410,7 +1408,6 @@ func bodyBlindedDeneb(t *testing.T) *BeaconBlockBody {
require.NoError(t, err)
return &BeaconBlockBody{
version: version.Deneb,
isBlinded: true,
randaoReveal: f.sig,
eth1Data: &eth.Eth1Data{
DepositRoot: f.root[:],


@@ -38,12 +38,6 @@ func (b *SignedBeaconBlock) SetStateRoot(root []byte) {
copy(b.block.stateRoot[:], root)
}
// SetBlinded sets the blinded flag of the beacon block.
// This function is not thread safe, it is only used during block creation.
func (b *SignedBeaconBlock) SetBlinded(blinded bool) {
b.block.body.isBlinded = blinded
}
// SetRandaoReveal sets the randao reveal in the block body.
// This function is not thread safe, it is only used during block creation.
func (b *SignedBeaconBlock) SetRandaoReveal(r []byte) {
@@ -108,7 +102,7 @@ func (b *SignedBeaconBlock) SetExecution(e interfaces.ExecutionData) error {
if b.version == version.Phase0 || b.version == version.Altair {
return consensus_types.ErrNotSupported("Execution", b.version)
}
if b.block.body.isBlinded {
if e.IsBlinded() {
b.block.body.executionPayloadHeader = e
return nil
}


@@ -36,7 +36,6 @@ var (
// BeaconBlockBody is the main beacon block body structure. It can represent any block type.
type BeaconBlockBody struct {
version int
isBlinded bool
randaoReveal [field_params.BLSSignatureLength]byte
eth1Data *eth.Eth1Data
graffiti [field_params.RootLength]byte


@@ -16,7 +16,7 @@ type ReadOnlySignedBeaconBlock interface {
Block() ReadOnlyBeaconBlock
Signature() [field_params.BLSSignatureLength]byte
IsNil() bool
Copy() (ReadOnlySignedBeaconBlock, error)
Copy() (SignedBeaconBlock, error)
Proto() (proto.Message, error)
PbGenericBlock() (*ethpb.GenericSignedBeaconBlock, error)
PbPhase0Block() (*ethpb.SignedBeaconBlock, error)
@@ -91,12 +91,12 @@ type SignedBeaconBlock interface {
SetGraffiti([]byte)
SetEth1Data(*ethpb.Eth1Data)
SetRandaoReveal([]byte)
SetBlinded(bool)
SetStateRoot([]byte)
SetParentRoot([]byte)
SetProposerIndex(idx primitives.ValidatorIndex)
SetSlot(slot primitives.Slot)
SetSignature(sig []byte)
Unblind(e ExecutionData) error
}
// ExecutionData represents execution layer information that is contained


@@ -34,7 +34,7 @@ func (m SignedBeaconBlock) IsNil() bool {
return m.BeaconBlock == nil || m.Block().IsNil()
}
func (SignedBeaconBlock) Copy() (interfaces.ReadOnlySignedBeaconBlock, error) {
func (SignedBeaconBlock) Copy() (interfaces.SignedBeaconBlock, error) {
panic("implement me")
}
@@ -197,10 +197,6 @@ func (BeaconBlock) SetParentRoot(_ []byte) {
panic("implement me")
}
func (BeaconBlock) SetBlinded(_ bool) {
panic("implement me")
}
func (BeaconBlock) Copy() (interfaces.ReadOnlyBeaconBlock, error) {
panic("implement me")
}


@@ -94,3 +94,23 @@ func (a *data) PbV3() (*enginev1.PayloadAttributesV3, error) {
ParentBeaconBlockRoot: a.parentBeaconBlockRoot,
}, nil
}
// IsEmpty returns whether the given payload attribute is empty
func (a *data) IsEmpty() bool {
if len(a.PrevRandao()) != 0 {
return false
}
if a.Timestamps() != 0 {
return false
}
if len(a.SuggestedFeeRecipient()) != 0 {
return false
}
if a.Version() >= version.Capella && len(a.withdrawals) != 0 {
return false
}
if a.Version() >= version.Deneb && len(a.parentBeaconBlockRoot) != 0 {
return false
}
return true
}
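A hedged sketch of the guard this enables: callers can normalize attribute sets that carry no data to nil instead of forwarding an empty struct to forkchoice updates; attrOrNil is a hypothetical helper written against the Attributer interface shown later in this diff.
// Illustrative only: treat payload attributes that carry no data as absent.
func attrOrNil(a Attributer) Attributer {
	if a == nil || a.IsEmpty() {
		return nil
	}
	return a
}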


@@ -204,3 +204,72 @@ func TestPayloadAttributeGetters(t *testing.T) {
t.Run(test.name, test.tc)
}
}
func TestIsEmpty(t *testing.T) {
tests := []struct {
name string
a Attributer
empty bool
}{
{
name: "Empty V1",
a: EmptyWithVersion(version.Bellatrix),
empty: true,
},
{
name: "Empty V2",
a: EmptyWithVersion(version.Capella),
empty: true,
},
{
name: "Empty V3",
a: EmptyWithVersion(version.Deneb),
empty: true,
},
{
name: "non empty prevrandao",
a: &data{
version: version.Bellatrix,
prevRandao: []byte{0x01},
},
empty: false,
},
{
name: "non zero Timestamps",
a: &data{
version: version.Bellatrix,
timeStamp: 1,
},
empty: false,
},
{
name: "non empty fee recipient",
a: &data{
version: version.Bellatrix,
suggestedFeeRecipient: []byte{0x01},
},
empty: false,
},
{
name: "non empty withdrawals",
a: &data{
version: version.Capella,
withdrawals: make([]*enginev1.Withdrawal, 1),
},
empty: false,
},
{
name: "non empty parent block root",
a: &data{
version: version.Deneb,
parentBeaconBlockRoot: []byte{0x01},
},
empty: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
require.Equal(t, tt.empty, tt.a.IsEmpty())
})
}
}


@@ -13,4 +13,5 @@ type Attributer interface {
PbV1() (*enginev1.PayloadAttributes, error)
PbV2() (*enginev1.PayloadAttributesV2, error)
PbV3() (*enginev1.PayloadAttributesV3, error)
IsEmpty() bool
}


@@ -6,6 +6,8 @@ go_library(
"committee_index.go",
"domain.go",
"epoch.go",
"execution_address.go",
"payload_id.go",
"randao.go",
"slot.go",
"sszbytes.go",


@@ -0,0 +1,3 @@
package primitives
type ExecutionAddress [20]byte


@@ -0,0 +1,4 @@
package primitives
// PayloadID represents an execution engine Payload ID
type PayloadID [8]byte


@@ -2140,12 +2140,6 @@ def prysm_deps():
version = "v0.6.0",
)
go_repository(
name = "com_github_ipfs_go_detect_race",
importpath = "github.com/ipfs/go-detect-race",
sum = "h1:qX/xay2W3E4Q1U7d9lNs1sU9nvguX0a7319XbyQ6cOk=",
version = "v0.0.1",
)
go_repository(
name = "com_github_ipfs_go_ds_badger",
importpath = "github.com/ipfs/go-ds-badger",
@@ -2159,13 +2153,6 @@ def prysm_deps():
version = "v0.5.0",
)
go_repository(
name = "com_github_ipfs_go_ipfs_util",
importpath = "github.com/ipfs/go-ipfs-util",
sum = "h1:59Sswnk1MFaiq+VcaknX7aYEyGyGDAA73ilhEK2POp8=",
version = "v0.0.2",
)
go_repository(
name = "com_github_ipfs_go_log_v2",
build_file_proto_mode = "disable_global",
@@ -2599,8 +2586,8 @@ def prysm_deps():
go_repository(
name = "com_github_libp2p_go_libp2p_asn_util",
importpath = "github.com/libp2p/go-libp2p-asn-util",
sum = "h1:gMDcMyYiZKkocGXDQ5nsUQyquC9+H+iLEQHwOCZ7s8s=",
version = "v0.3.0",
sum = "h1:xqL7++IKD9TBFMgnLPZR6/6iYhawHKHl950SO9L6n94=",
version = "v0.4.1",
)
go_repository(
name = "com_github_libp2p_go_libp2p_mplex",
@@ -5617,14 +5604,14 @@ def prysm_deps():
go_repository(
name = "org_golang_x_crypto",
importpath = "golang.org/x/crypto",
sum = "h1:frVn1TEaCEaZcn3Tmd7Y2b5KKPaZ+I32Q2OA3kYp5TA=",
version = "v0.15.0",
sum = "h1:r8bRNjWL3GshPW3gkd+RpvzWrZAwPS49OmTGZ/uhM4k=",
version = "v0.17.0",
)
go_repository(
name = "org_golang_x_exp",
importpath = "golang.org/x/exp",
sum = "h1:FRnLl4eNAQl8hwxVVC17teOw8kdjVDVAiFMtgUdTSRQ=",
version = "v0.0.0-20231110203233-9a3e6036ecaa",
sum = "h1:qCEDpW1G+vcj3Y7Fy52pEM1AWm3abj8WimGYejI3SC4=",
version = "v0.0.0-20231214170342-aacd6d4b4611",
)
go_repository(
name = "org_golang_x_exp_typeparams",
@@ -5663,8 +5650,8 @@ def prysm_deps():
go_repository(
name = "org_golang_x_net",
importpath = "golang.org/x/net",
sum = "h1:mIYleuAkSbHh0tCv7RvjL3F6ZVbLjq4+R7zbOn3Kokg=",
version = "v0.18.0",
sum = "h1:zTwKpTd2XuCqf8huc7Fo2iSy+4RHPd10s4KzeTnVr1c=",
version = "v0.19.0",
)
go_repository(
name = "org_golang_x_oauth2",
@@ -5688,14 +5675,14 @@ def prysm_deps():
go_repository(
name = "org_golang_x_sys",
importpath = "golang.org/x/sys",
sum = "h1:Vz7Qs629MkJkGyHxUlRHizWJRG2j8fbQKjELVSNhy7Q=",
version = "v0.14.0",
sum = "h1:h48lPFYpsTvQJZF4EKyI4aLHaev3CxivZmv7yZig9pc=",
version = "v0.15.0",
)
go_repository(
name = "org_golang_x_term",
importpath = "golang.org/x/term",
sum = "h1:LGK9IlZ8T9jvdy6cTdfKUCltatMFOehAQo9SRC46UQ8=",
version = "v0.14.0",
sum = "h1:y/Oo/a/q3IXu26lQgl04j/gjuBDOBlx7X6Om1j2CPW4=",
version = "v0.15.0",
)
go_repository(
@@ -5713,8 +5700,8 @@ def prysm_deps():
go_repository(
name = "org_golang_x_tools",
importpath = "golang.org/x/tools",
sum = "h1:zdAyfUGbYmuVokhzVmghFl2ZJh5QhcfebBgmVPFYA+8=",
version = "v0.15.0",
sum = "h1:GO788SKMRunPIBCXiQyo2AaexLstOrVhuAL5YwsckQM=",
version = "v0.16.0",
)
go_repository(

go.mod

@@ -85,11 +85,11 @@ require (
go.etcd.io/bbolt v1.3.6
go.opencensus.io v0.24.0
go.uber.org/automaxprocs v1.5.2
golang.org/x/crypto v0.15.0
golang.org/x/exp v0.0.0-20231110203233-9a3e6036ecaa
golang.org/x/crypto v0.17.0
golang.org/x/exp v0.0.0-20231214170342-aacd6d4b4611
golang.org/x/mod v0.14.0
golang.org/x/sync v0.5.0
golang.org/x/tools v0.15.0
golang.org/x/tools v0.16.0
google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1
google.golang.org/grpc v1.56.3
google.golang.org/protobuf v1.30.0
@@ -169,9 +169,8 @@ require (
github.com/kr/text v0.2.0 // indirect
github.com/leodido/go-urn v1.2.3 // indirect
github.com/libp2p/go-buffer-pool v0.1.0 // indirect
github.com/libp2p/go-cidranger v1.1.0 // indirect
github.com/libp2p/go-flow-metrics v0.1.0 // indirect
github.com/libp2p/go-libp2p-asn-util v0.3.0 // indirect
github.com/libp2p/go-libp2p-asn-util v0.4.1 // indirect
github.com/libp2p/go-mplex v0.7.0 // indirect
github.com/libp2p/go-msgio v0.3.0 // indirect
github.com/libp2p/go-nat v0.2.0 // indirect
@@ -236,9 +235,9 @@ require (
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.26.0 // indirect
golang.org/x/exp/typeparams v0.0.0-20231108232855-2478ac86f678 // indirect
golang.org/x/net v0.18.0 // indirect
golang.org/x/net v0.19.0 // indirect
golang.org/x/oauth2 v0.7.0 // indirect
golang.org/x/term v0.14.0 // indirect
golang.org/x/term v0.15.0 // indirect
golang.org/x/text v0.14.0 // indirect
golang.org/x/time v0.3.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
@@ -259,7 +258,7 @@ require (
github.com/go-playground/validator/v10 v10.13.0
github.com/peterh/liner v1.2.0 // indirect
github.com/prysmaticlabs/gohashtree v0.0.3-alpha
golang.org/x/sys v0.14.0 // indirect
golang.org/x/sys v0.15.0 // indirect
google.golang.org/api v0.44.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
k8s.io/klog/v2 v2.80.0 // indirect

go.sum

@@ -686,8 +686,6 @@ github.com/influxdata/tdigest v0.0.0-20181121200506-bf2b5ad3c0a9/go.mod h1:Js0mq
github.com/influxdata/usage-client v0.0.0-20160829180054-6d3895376368/go.mod h1:Wbbw6tYNvwa5dlB6304Sd+82Z3f7PmVZHVKU637d4po=
github.com/ipfs/go-cid v0.4.1 h1:A/T3qGvxi4kpKWWcPC/PgbvDA2bjVLO7n4UeVwnbs/s=
github.com/ipfs/go-cid v0.4.1/go.mod h1:uQHwDeX4c6CtyrFwdqyhpNcxVewur1M7l7fNU7LKwZk=
github.com/ipfs/go-detect-race v0.0.1 h1:qX/xay2W3E4Q1U7d9lNs1sU9nvguX0a7319XbyQ6cOk=
github.com/ipfs/go-detect-race v0.0.1/go.mod h1:8BNT7shDZPo99Q74BpGMK+4D8Mn4j46UU0LZ723meps=
github.com/ipfs/go-log/v2 v2.5.1 h1:1XdUzF7048prq4aBjDQQ4SL5RxftpRGdXhNRwKSAlcY=
github.com/ipfs/go-log/v2 v2.5.1/go.mod h1:prSpmC1Gpllc9UYWxDiZDreBYw7zp4Iqp1kOLU9U5UI=
github.com/iris-contrib/blackfriday v2.0.0+incompatible/go.mod h1:UzZ2bDEoaSGPbkg6SAB4att1aAwTmVIx/5gCVqeyUdI=
@@ -786,14 +784,12 @@ github.com/leodido/go-urn v1.2.3/go.mod h1:7ZrI8mTSeBSHl/UaRyKQW1qZeMgak41ANeCNa
github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/libp2p/go-buffer-pool v0.1.0 h1:oK4mSFcQz7cTQIfqbe4MIj9gLW+mnanjyFtc6cdF0Y8=
github.com/libp2p/go-buffer-pool v0.1.0/go.mod h1:N+vh8gMqimBzdKkSMVuydVDq+UV5QTWy5HSiZacSbPg=
github.com/libp2p/go-cidranger v1.1.0 h1:ewPN8EZ0dd1LSnrtuwd4709PXVcITVeuwbag38yPW7c=
github.com/libp2p/go-cidranger v1.1.0/go.mod h1:KWZTfSr+r9qEo9OkI9/SIEeAtw+NNoU0dXIXt15Okic=
github.com/libp2p/go-flow-metrics v0.1.0 h1:0iPhMI8PskQwzh57jB9WxIuIOQ0r+15PChFGkx3Q3WM=
github.com/libp2p/go-flow-metrics v0.1.0/go.mod h1:4Xi8MX8wj5aWNDAZttg6UPmc0ZrnFNsMtpsYUClFtro=
github.com/libp2p/go-libp2p v0.32.1 h1:wy1J4kZIZxOaej6NveTWCZmHiJ/kY7GoAqXgqNCnPps=
github.com/libp2p/go-libp2p v0.32.1/go.mod h1:hXXC3kXPlBZ1eu8Q2hptGrMB4mZ3048JUoS4EKaHW5c=
github.com/libp2p/go-libp2p-asn-util v0.3.0 h1:gMDcMyYiZKkocGXDQ5nsUQyquC9+H+iLEQHwOCZ7s8s=
github.com/libp2p/go-libp2p-asn-util v0.3.0/go.mod h1:B1mcOrKUE35Xq/ASTmQ4tN3LNzVVaMNmq2NACuqyB9w=
github.com/libp2p/go-libp2p-asn-util v0.4.1 h1:xqL7++IKD9TBFMgnLPZR6/6iYhawHKHl950SO9L6n94=
github.com/libp2p/go-libp2p-asn-util v0.4.1/go.mod h1:d/NI6XZ9qxw67b4e+NgpQexCIiFYJjErASrYW4PFDN8=
github.com/libp2p/go-libp2p-mplex v0.9.0 h1:R58pDRAmuBXkYugbSSXR9wrTX3+1pFM1xP2bLuodIq8=
github.com/libp2p/go-libp2p-mplex v0.9.0/go.mod h1:ro1i4kuwiFT+uMPbIDIFkcLs1KRbNp0QwnUXM+P64Og=
github.com/libp2p/go-libp2p-pubsub v0.10.0 h1:wS0S5FlISavMaAbxyQn3dxMOe2eegMfswM471RuHJwA=
@@ -1424,8 +1420,8 @@ golang.org/x/crypto v0.0.0-20210421170649-83a5a9bb288b/go.mod h1:T9bdIzuCu7OtxOm
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.3.0/go.mod h1:hebNnKkNXi2UzZN1eVRvBB7co0a+JxK6XbPiWVs/3J4=
golang.org/x/crypto v0.15.0 h1:frVn1TEaCEaZcn3Tmd7Y2b5KKPaZ+I32Q2OA3kYp5TA=
golang.org/x/crypto v0.15.0/go.mod h1:4ChreQoLWfG3xLDer1WdlH5NdlQ3+mwnQq1YTKY+72g=
golang.org/x/crypto v0.17.0 h1:r8bRNjWL3GshPW3gkd+RpvzWrZAwPS49OmTGZ/uhM4k=
golang.org/x/crypto v0.17.0/go.mod h1:gCAAfMLgwOJRpTjQ2zCCt2OcSfYMTeZVSRtQlPC7Nq4=
golang.org/x/exp v0.0.0-20180321215751-8460e604b9de/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20180807140117-3d87b88a115f/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
@@ -1441,8 +1437,8 @@ golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EH
golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU=
golang.org/x/exp v0.0.0-20200331195152-e8c3332aa8e5/go.mod h1:4M0jN8W1tt0AVLNr8HDosyJCDCDuyL9N9+3m7wDWgKw=
golang.org/x/exp v0.0.0-20220426173459-3bcf042a4bf5/go.mod h1:lgLbSvA5ygNOMpwM/9anMpWVlVJ7Z+cHWq/eFuinpGE=
golang.org/x/exp v0.0.0-20231110203233-9a3e6036ecaa h1:FRnLl4eNAQl8hwxVVC17teOw8kdjVDVAiFMtgUdTSRQ=
golang.org/x/exp v0.0.0-20231110203233-9a3e6036ecaa/go.mod h1:zk2irFbV9DP96SEBUUAy67IdHUaZuSnrz1n472HUCLE=
golang.org/x/exp v0.0.0-20231214170342-aacd6d4b4611 h1:qCEDpW1G+vcj3Y7Fy52pEM1AWm3abj8WimGYejI3SC4=
golang.org/x/exp v0.0.0-20231214170342-aacd6d4b4611/go.mod h1:iRJReGqOEeBhDZGkGbynYwcHlctCvnjTYIamk7uXpHI=
golang.org/x/exp/typeparams v0.0.0-20231108232855-2478ac86f678 h1:1P7xPZEwZMoBoz0Yze5Nx2/4pxj6nw9ZqHWXqP0iRgQ=
golang.org/x/exp/typeparams v0.0.0-20231108232855-2478ac86f678/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk=
golang.org/x/image v0.0.0-20180708004352-c73c2afc3b81/go.mod h1:ux5Hcp/YLpHSI86hEcLt0YII63i6oz57MZXIpbrjZUs=
@@ -1540,8 +1536,8 @@ golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su
golang.org/x/net v0.0.0-20220607020251-c690dde0001d/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY=
golang.org/x/net v0.18.0 h1:mIYleuAkSbHh0tCv7RvjL3F6ZVbLjq4+R7zbOn3Kokg=
golang.org/x/net v0.18.0/go.mod h1:/czyP5RqHAH4odGYxBJ1qz0+CE5WZ+2j1YgoEo8F2jQ=
golang.org/x/net v0.19.0 h1:zTwKpTd2XuCqf8huc7Fo2iSy+4RHPd10s4KzeTnVr1c=
golang.org/x/net v0.19.0/go.mod h1:CfAk/cbD4CthTvqiEl8NpboMuiuOYsAr/7NOjZJtv1U=
golang.org/x/oauth2 v0.0.0-20170912212905-13449ad91cb2/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20181017192945-9dcd33a902f4/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
@@ -1686,16 +1682,16 @@ golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.14.0 h1:Vz7Qs629MkJkGyHxUlRHizWJRG2j8fbQKjELVSNhy7Q=
golang.org/x/sys v0.14.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.15.0 h1:h48lPFYpsTvQJZF4EKyI4aLHaev3CxivZmv7yZig9pc=
golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20201210144234-2321bbc49cbf/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc=
golang.org/x/term v0.14.0 h1:LGK9IlZ8T9jvdy6cTdfKUCltatMFOehAQo9SRC46UQ8=
golang.org/x/term v0.14.0/go.mod h1:TySc+nGkYR6qt8km8wUhuFRTVSMIX3XPR58y2lC8vww=
golang.org/x/term v0.15.0 h1:y/Oo/a/q3IXu26lQgl04j/gjuBDOBlx7X6Om1j2CPW4=
golang.org/x/term v0.15.0/go.mod h1:BDl952bC7+uMoWR75FIrCDx79TPU9oHkTZ9yRbYOrX0=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -1795,8 +1791,8 @@ golang.org/x/tools v0.1.4/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.8-0.20211029000441-d6a9af8af023/go.mod h1:nABZi5QlRsZVlzPpHl034qft6wpY4eDcsTt5AaioBiU=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.15.0 h1:zdAyfUGbYmuVokhzVmghFl2ZJh5QhcfebBgmVPFYA+8=
golang.org/x/tools v0.15.0/go.mod h1:hpksKq4dtpQWS1uQ61JkdqWM3LscIS6Slf+VVkm+wQk=
golang.org/x/tools v0.16.0 h1:GO788SKMRunPIBCXiQyo2AaexLstOrVhuAL5YwsckQM=
golang.org/x/tools v0.16.0/go.mod h1:kYVVN6I1mBNoB1OX+noeBjbRk4IUEPa7JJ+TJMEooJ0=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=


@@ -51,45 +51,45 @@ var (
// PProfFlag to enable pprof HTTP server.
PProfFlag = &cli.BoolFlag{
Name: "pprof",
Usage: "Enable the pprof HTTP server",
Usage: "Enables the pprof HTTP server.",
}
// PProfPortFlag to specify HTTP server listening port.
PProfPortFlag = &cli.IntFlag{
Name: "pprofport",
Usage: "pprof HTTP server listening port",
Usage: "pprof HTTP server listening port.",
Value: 6060,
}
// PProfAddrFlag to specify HTTP server address.
PProfAddrFlag = &cli.StringFlag{
Name: "pprofaddr",
Usage: "pprof HTTP server listening interface",
Usage: "pprof HTTP server listening interface.",
Value: "127.0.0.1",
}
// MemProfileRateFlag to specify the mem profiling rate.
MemProfileRateFlag = &cli.IntFlag{
Name: "memprofilerate",
Usage: "Turn on memory profiling with the given rate",
Usage: "Turns on memory profiling with the given rate.",
Value: runtime.MemProfileRate,
}
// MutexProfileFractionFlag to specify the mutex profiling rate.
MutexProfileFractionFlag = &cli.IntFlag{
Name: "mutexprofilefraction",
Usage: "Turn on mutex profiling with the given rate",
Usage: "Turns on mutex profiling with the given rate.",
}
// BlockProfileRateFlag to specify the block profiling rate.
BlockProfileRateFlag = &cli.IntFlag{
Name: "blockprofilerate",
Usage: "Turn on block profiling with the given rate",
Usage: "Turns on block profiling with the given rate.",
}
// CPUProfileFlag to specify where to write the CPU profile.
CPUProfileFlag = &cli.StringFlag{
Name: "cpuprofile",
Usage: "Write CPU profile to the given file",
Usage: "Writes CPU profile to the given file.",
}
// TraceFlag to specify where to write the trace execution profile.
TraceFlag = &cli.StringFlag{
Name: "trace",
Usage: "Write execution trace to the given file",
Usage: "Writes execution trace to the given file.",
}
)
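
The hunk above only changes the Usage strings of the profiling flags, aligning them with the standardized help-text style (conjugated verb, trailing period). For context, here is a minimal, hypothetical sketch of how flags like these are typically declared and consumed with urfave/cli/v2 to start a pprof server; the wiring below is illustrative only and is not Prysm's actual startup code.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	_ "net/http/pprof" // registers pprof handlers on http.DefaultServeMux
	"os"

	"github.com/urfave/cli/v2"
)

func main() {
	app := &cli.App{
		Flags: []cli.Flag{
			&cli.BoolFlag{Name: "pprof", Usage: "Enables the pprof HTTP server."},
			&cli.StringFlag{Name: "pprofaddr", Usage: "pprof HTTP server listening interface.", Value: "127.0.0.1"},
			&cli.IntFlag{Name: "pprofport", Usage: "pprof HTTP server listening port.", Value: 6060},
		},
		Action: func(ctx *cli.Context) error {
			if ctx.Bool("pprof") {
				addr := fmt.Sprintf("%s:%d", ctx.String("pprofaddr"), ctx.Int("pprofport"))
				// Serve the pprof endpoints registered by the blank import above.
				go func() { log.Println(http.ListenAndServe(addr, nil)) }()
			}
			// ... the rest of the application would start here.
			return nil
		},
	}
	if err := app.Run(os.Args); err != nil {
		log.Fatal(err)
	}
}
```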


@@ -137,7 +137,8 @@ func StringContains(loggerFn assertionLoggerFn, expected, actual string, flag bo
// NoError asserts that error is nil.
func NoError(loggerFn assertionLoggerFn, err error, msg ...interface{}) {
if err != nil {
// reflect.ValueOf is needed for nil instances of custom types implementing Error
if err != nil && !reflect.ValueOf(err).IsNil() {
errMsg := parseMsg("Unexpected error", msg...)
_, file, line, _ := runtime.Caller(2)
loggerFn("%s:%d %s: %v", filepath.Base(file), line, errMsg, err)

Some files were not shown because too many files have changed in this diff.