Compare commits

...

18 Commits

Author SHA1 Message Date
satushh
c27a48aa24 exactly one goroutine computes at a time 2026-03-03 22:38:41 +05:30
satushh
ddd2883de5 lint 2026-03-03 12:45:32 +05:30
satushh
771b1fda14 metric fix 2026-03-03 12:34:12 +05:30
satushh
b801b12af1 cache for payload committee computation 2026-03-02 22:10:21 +05:30
Potuz
c9a2a0dd86 Gloas Changes to ReceiveBlock (#16396)
This PR removes any execution traces from ReceiveBlock after Gloas. It
sets daWaitTime to zero and changes the logs to report on the important
fields in the bid.

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-02-24 20:43:38 +00:00
james-prysm
fdd04a5466 gloas grpc proposer (#16336)

**What type of PR is this?**

Feature

**What does this PR do? Why is it needed?**

This PR defines the validator proposer paths for Gloas: it adds Gloas
fork support to the validator client's propose function, and adds
support for the following gRPC endpoints: Get Gloas Block, Publish
Gloas Block, Get Payload Envelope, and Publish Payload Envelope.

We propose in this order:

1. Get block (cache payload)
2. Submit block
3. Get payload (get post state of 2 via chain info getter) // needs the
block to be received and processed
4. Submit payload

Several things are still missing in this PR; they depend on
https://github.com/OffchainLabs/prysm/pull/15656 as well as future PRs.
For now, those areas are left as TODO remarks.


---------

Co-authored-by: terence <terence@prysmaticlabs.com>
2026-02-24 19:16:45 +00:00
Potuz
a199580672 Receive payload envelope (#16386)
This PR wires execution payload envelope processing into the blockchain
package for Gloas.

It **does not** deal with DA yet.

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-24 16:56:32 +00:00
Potuz
b5bdd65f64 Remove head state from new head event (#16388)
The headstate parameter is not used.
2026-02-23 19:06:55 +00:00
Potuz
ce4e653261 Handle Gloas pre-state fetching (#16382)
- A new forkchoice getter that returns the block hash for the given
block root.
- A new blockchain getter, getLookupParentRoot, that returns the root
serving as the key for stategen to fetch the parent state; pre-Gloas
behavior is unmodified and always returns the block root.
- Changes the signature of getBlockPreState to receive a ROBlock.
2026-02-23 18:11:47 +00:00
Potuz
6f87a0cb95 Remove num active validators (#16390)
Remove unused field in forkchoice

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-23 18:03:55 +00:00
terence
d262c9f927 Implement upgrade gloas (#16378)
This PR implements the Gloas fork upgrade. There are two main parts:
- The first part upgrades the Fulu state to Gloas, including the new
Gloas fields.
- The second part onboards the builder at the fork, which is a state
setter under one block. For this, we had to refactor a few things to be
lock-free.
- We also added spec tests, which pass.
2026-02-23 17:44:29 +00:00
Potuz
7865bd5f32 Gloas/forkchoice gloas weights (#16357)
This PR changes attestation weight counting so that it is compatible
with Gloas.

Same-slot attestations are added to the pending node; otherwise,
attestations are added to the full-payload-status node. This PR also
deals with updating the balances and weight transfer. Note that, due to
our current limitations on how we account for weights, we mimic the
spec status as if https://github.com/ethereum/consensus-specs/pull/4918
were merged; counting these attestations would otherwise be quite
difficult in Prysm.
2026-02-23 16:33:21 +00:00
terence
7f2593e496 e2e: wait for all nodes to reach mid-epoch before head compare (#16379)
`AllNodesHaveSameHead` only waits for mid-epoch on node 0. In
`testCheckpointSync`, the checkpoint-sync node can still be finishing
syncing while node 0 advances, so the evaluator compares heads across
different epochs and flakes. We should wait for all nodes to reach the
same point before comparing head epochs and roots.

I also added better handling of context timeouts, per feedback from
@Inspector-Butters.
2026-02-23 15:12:01 +00:00
satushh
55e2f0a7d2 Ptc duty endpoint /eth/v1/validator/duties/ptc/{epoch} (#16326)
**What type of PR is this?**

Feature

**What does this PR do? Why is it needed?**

Implements the endpoint `/eth/v1/validator/duties/ptc/{epoch}` for Gloas.
Relevant link:
https://github.com/ethereum/beacon-APIs/pull/552/files#diff-ea5c1a400bd02986042f033c8d62441c11077868b0a325c6e24297875fd35c38

2026-02-21 03:58:34 +00:00
Potuz
bf11ac0d64 Only call FCU pre-Gloas (#16373)
This PR also removes the unnecessary pointer parameter fcuArgs, since
it is no longer used by the caller.

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-02-19 15:47:21 +00:00
Potuz
dfb918432f Gloas/forkchoice payload envelope (#16351)
Adds a method to insert full nodes after Gloas. Requires #16338
2026-02-19 10:10:55 +00:00
terence
a28c6c8145 Add gloas block gossip changes (#16368)
This updates beacon-block gossip validation for Gloas to use the signed
execution payload bid instead of the execution payload. It removes
execution-payload-based checks and introduces bid-based checks, plus
stubs for parent-payload checks pending blockchain package support.

Reference:
https://github.com/ethereum/consensus-specs/blob/master/specs/gloas/p2p-interface.md#beacon_block

Fixes https://github.com/OffchainLabs/prysm/issues/16372
2026-02-18 20:14:21 +00:00
Bastin
045a3cfe4e GetVersionV2 endpoint (#16347)
**What does this PR do? Why is it needed?**
This PR adds the implementation for `GetVersionV2` based on
[ethereum/beacon-APIs#568](https://github.com/ethereum/beacon-APIs/pull/568).

Q: Is there anything to be done to deprecate `GetVersionV1`?
2026-02-18 10:02:06 +00:00
159 changed files with 7577 additions and 2026 deletions

View File

@@ -63,6 +63,19 @@ type PeerCount struct {
Connected string `json:"connected"`
Disconnecting string `json:"disconnecting"`
}
type GetVersionV2Response struct {
Data *VersionV2 `json:"data"`
}
type VersionV2 struct {
BeaconNode *ClientVersionV1 `json:"beacon_node"`
ExecutionClient *ClientVersionV1 `json:"execution_client,omitempty"`
}
type ClientVersionV1 struct {
Code string `json:"code"`
Name string `json:"name"`
Version string `json:"version"`
Commit string `json:"commit"`
}
type GetVersionResponse struct {
Data *Version `json:"data"`

View File

@@ -74,6 +74,18 @@ type SyncCommitteeDuty struct {
ValidatorSyncCommitteeIndices []string `json:"validator_sync_committee_indices"`
}
type GetPTCDutiesResponse struct {
DependentRoot string `json:"dependent_root"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Data []*PTCDuty `json:"data"`
}
type PTCDuty struct {
Pubkey string `json:"pubkey"`
ValidatorIndex string `json:"validator_index"`
Slot string `json:"slot"`
}
// ProduceBlockV3Response is a wrapper json object for the returned block from the ProduceBlockV3 endpoint
type ProduceBlockV3Response struct {
Version string `json:"version"`

View File

@@ -10,6 +10,7 @@ go_library(
"error.go",
"execution_engine.go",
"forkchoice_update_execution.go",
"gloas.go",
"head.go",
"head_sync_committee_info.go",
"init_sync_process_block.go",
@@ -52,6 +53,7 @@ go_library(
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/gloas:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
@@ -121,6 +123,7 @@ go_test(
"error_test.go",
"execution_engine_test.go",
"forkchoice_update_execution_test.go",
"gloas_test.go",
"head_sync_committee_info_test.go",
"head_test.go",
"init_sync_process_block_test.go",
@@ -181,6 +184,7 @@ go_test(
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/payload-attribute:go_default_library",
"//consensus-types/primitives:go_default_library",
"//container/trie:go_default_library",
"//crypto/bls:go_default_library",

View File

@@ -46,6 +46,7 @@ type ForkchoiceFetcher interface {
HighestReceivedBlockSlot() primitives.Slot
ReceivedBlocksLastEpoch() (uint64, error)
InsertNode(context.Context, state.BeaconState, consensus_blocks.ROBlock) error
InsertPayload(interfaces.ROExecutionPayloadEnvelope) error
ForkChoiceDump(context.Context) (*forkchoice.Dump, error)
NewSlot(context.Context, primitives.Slot) error
ProposerBoost() [32]byte

View File

@@ -7,6 +7,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
consensus_blocks "github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/forkchoice"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/runtime/version"
@@ -55,6 +56,13 @@ func (s *Service) InsertNode(ctx context.Context, st state.BeaconState, block co
return s.cfg.ForkChoiceStore.InsertNode(ctx, st, block)
}
// InsertPayload is a wrapper for payload insertion which is self locked
func (s *Service) InsertPayload(pe interfaces.ROExecutionPayloadEnvelope) error {
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
return s.cfg.ForkChoiceStore.InsertPayload(pe)
}
// ForkChoiceDump returns the corresponding value from forkchoice
func (s *Service) ForkChoiceDump(ctx context.Context) (*forkchoice.Dump, error) {
s.cfg.ForkChoiceStore.RLock()

View File

@@ -233,6 +233,9 @@ func (s *Service) notifyNewPayload(ctx context.Context, stVersion int, header in
if stVersion < version.Bellatrix {
return true, nil
}
if blk.Version() >= version.Gloas {
return false, nil
}
body := blk.Block().Body()
enabled, err := blocks.IsExecutionEnabledUsingHeader(header, body)
if err != nil {

View File

@@ -429,9 +429,9 @@ func Test_NotifyForkchoiceUpdateRecursive_DoublyLinkedTree(t *testing.T) {
// Insert Attestations to D, F and G so that they have higher weight than D
// Ensure G is head
fcs.ProcessAttestation(ctx, []uint64{0}, brd, 1)
fcs.ProcessAttestation(ctx, []uint64{1}, brf, 1)
fcs.ProcessAttestation(ctx, []uint64{2}, brg, 1)
fcs.ProcessAttestation(ctx, []uint64{0}, brd, params.BeaconConfig().SlotsPerEpoch, true)
fcs.ProcessAttestation(ctx, []uint64{1}, brf, params.BeaconConfig().SlotsPerEpoch, true)
fcs.ProcessAttestation(ctx, []uint64{2}, brg, params.BeaconConfig().SlotsPerEpoch, true)
fcs.SetBalancesByRooter(service.cfg.StateGen.ActiveNonSlashedBalancesByRoot)
jc := &forkchoicetypes.Checkpoint{Epoch: 0, Root: bra}
require.NoError(t, fcs.UpdateJustifiedCheckpoint(ctx, jc))

View File

@@ -56,7 +56,7 @@ type fcuConfig struct {
// sendFCU handles the logic to notify the engine of a forkchoice update
// when processing an incoming block during regular sync. It
// always updates the shuffling caches and handles epoch transitions.
func (s *Service) sendFCU(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) {
func (s *Service) sendFCU(cfg *postBlockProcessConfig) {
if cfg.postState.Version() < version.Fulu {
// update the caches to compute the right proposer index
// this function is called under a forkchoice lock which we need to release.
@@ -64,7 +64,8 @@ func (s *Service) sendFCU(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) {
s.updateCachesPostBlockProcessing(cfg)
s.ForkChoicer().Lock()
}
if err := s.getFCUArgs(cfg, fcuArgs); err != nil {
fcuArgs, err := s.getFCUArgs(cfg)
if err != nil {
log.WithError(err).Error("Could not get forkchoice update argument")
return
}

View File

@@ -0,0 +1,35 @@
package blockchain
import (
consensus_blocks "github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/pkg/errors"
)
// getLookupParentRoot returns the root that serves as the key to generate the parent state for the passed beacon block.
// If the block is pre-Gloas or is based on an empty payload, this is the parent root of the block; otherwise, if it is
// based on a full payload, it is the parent hash.
// The caller of this function should not hold a lock on forkchoice.
func (s *Service) getLookupParentRoot(b consensus_blocks.ROBlock) ([32]byte, error) {
bl := b.Block()
parentRoot := bl.ParentRoot()
if b.Version() < version.Gloas {
return parentRoot, nil
}
blockHash, err := s.cfg.ForkChoiceStore.BlockHash(parentRoot)
if err != nil {
return [32]byte{}, errors.Wrap(err, "failed to get block hash for parent root")
}
bid, err := bl.Body().SignedExecutionPayloadBid()
if err != nil {
return [32]byte{}, errors.Wrap(err, "failed to get signed execution payload bid from block body")
}
if bid == nil || bid.Message == nil || len(bid.Message.ParentBlockHash) != 32 {
return [32]byte{}, errors.New("invalid signed execution payload bid message")
}
parentHash := [32]byte(bid.Message.ParentBlockHash)
if blockHash == parentHash {
return parentHash, nil
}
return parentRoot, nil
}

View File

@@ -0,0 +1,544 @@
package blockchain
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v7/beacon-chain/execution"
mockExecution "github.com/OffchainLabs/prysm/v7/beacon-chain/execution/testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
payloadattribute "github.com/OffchainLabs/prysm/v7/consensus-types/payload-attribute"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
)
func prepareGloasForkchoiceState(
_ context.Context,
slot primitives.Slot,
blockRoot [32]byte,
parentRoot [32]byte,
blockHash [32]byte,
parentBlockHash [32]byte,
justifiedEpoch primitives.Epoch,
finalizedEpoch primitives.Epoch,
) (state.BeaconState, blocks.ROBlock, error) {
blockHeader := &ethpb.BeaconBlockHeader{
ParentRoot: parentRoot[:],
}
justifiedCheckpoint := &ethpb.Checkpoint{
Epoch: justifiedEpoch,
}
finalizedCheckpoint := &ethpb.Checkpoint{
Epoch: finalizedEpoch,
}
builderPendingPayments := make([]*ethpb.BuilderPendingPayment, 64)
for i := range builderPendingPayments {
builderPendingPayments[i] = &ethpb.BuilderPendingPayment{
Withdrawal: &ethpb.BuilderPendingWithdrawal{
FeeRecipient: make([]byte, 20),
},
}
}
base := &ethpb.BeaconStateGloas{
Slot: slot,
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
CurrentJustifiedCheckpoint: justifiedCheckpoint,
FinalizedCheckpoint: finalizedCheckpoint,
LatestBlockHeader: blockHeader,
LatestExecutionPayloadBid: &ethpb.ExecutionPayloadBid{
BlockHash: blockHash[:],
ParentBlockHash: parentBlockHash[:],
ParentBlockRoot: make([]byte, 32),
PrevRandao: make([]byte, 32),
FeeRecipient: make([]byte, 20),
BlobKzgCommitments: [][]byte{make([]byte, 48)},
},
Builders: make([]*ethpb.Builder, 0),
BuilderPendingPayments: builderPendingPayments,
ExecutionPayloadAvailability: make([]byte, 1024),
LatestBlockHash: make([]byte, 32),
PayloadExpectedWithdrawals: make([]*enginev1.Withdrawal, 0),
ProposerLookahead: make([]uint64, 64),
}
st, err := state_native.InitializeFromProtoUnsafeGloas(base)
if err != nil {
return nil, blocks.ROBlock{}, err
}
bid := util.HydrateSignedExecutionPayloadBid(&ethpb.SignedExecutionPayloadBid{
Message: &ethpb.ExecutionPayloadBid{
BlockHash: blockHash[:],
ParentBlockHash: parentBlockHash[:],
},
})
blk := util.HydrateSignedBeaconBlockGloas(&ethpb.SignedBeaconBlockGloas{
Block: &ethpb.BeaconBlockGloas{
Slot: slot,
ParentRoot: parentRoot[:],
Body: &ethpb.BeaconBlockBodyGloas{
SignedExecutionPayloadBid: bid,
},
},
})
signed, err := blocks.NewSignedBeaconBlock(blk)
if err != nil {
return nil, blocks.ROBlock{}, err
}
roblock, err := blocks.NewROBlockWithRoot(signed, blockRoot)
return st, roblock, err
}
func testGloasState(t *testing.T, slot primitives.Slot, parentRoot [32]byte, blockHash [32]byte) (*ethpb.BeaconStateGloas, *ethpb.SignedBeaconBlockGloas) {
t.Helper()
builderPendingPayments := make([]*ethpb.BuilderPendingPayment, 64)
for i := range builderPendingPayments {
builderPendingPayments[i] = &ethpb.BuilderPendingPayment{
Withdrawal: &ethpb.BuilderPendingWithdrawal{FeeRecipient: make([]byte, 20)},
}
}
base := &ethpb.BeaconStateGloas{
Slot: slot,
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
BlockRoots: make([][]byte, params.BeaconConfig().SlotsPerHistoricalRoot),
StateRoots: make([][]byte, params.BeaconConfig().SlotsPerHistoricalRoot),
Slashings: make([]uint64, params.BeaconConfig().EpochsPerSlashingsVector),
CurrentJustifiedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)},
FinalizedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)},
LatestBlockHeader: &ethpb.BeaconBlockHeader{
ParentRoot: parentRoot[:],
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
},
Eth1Data: &ethpb.Eth1Data{
DepositRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
},
LatestExecutionPayloadBid: &ethpb.ExecutionPayloadBid{
BlockHash: blockHash[:],
ParentBlockHash: make([]byte, 32),
ParentBlockRoot: make([]byte, 32),
PrevRandao: make([]byte, 32),
FeeRecipient: make([]byte, 20),
BlobKzgCommitments: [][]byte{make([]byte, 48)},
},
Builders: make([]*ethpb.Builder, 0),
BuilderPendingPayments: builderPendingPayments,
ExecutionPayloadAvailability: make([]byte, 1024),
LatestBlockHash: make([]byte, 32),
PayloadExpectedWithdrawals: make([]*enginev1.Withdrawal, 0),
ProposerLookahead: make([]uint64, 64),
}
bid := util.HydrateSignedExecutionPayloadBid(&ethpb.SignedExecutionPayloadBid{
Message: &ethpb.ExecutionPayloadBid{
BlockHash: blockHash[:],
ParentBlockHash: make([]byte, 32),
},
})
blk := util.HydrateSignedBeaconBlockGloas(&ethpb.SignedBeaconBlockGloas{
Block: &ethpb.BeaconBlockGloas{
Slot: slot,
ParentRoot: parentRoot[:],
Body: &ethpb.BeaconBlockBodyGloas{SignedExecutionPayloadBid: bid},
},
})
return base, blk
}
func testSignedEnvelope(t *testing.T, blockRoot [32]byte, slot primitives.Slot, blockHash []byte) *ethpb.SignedExecutionPayloadEnvelope {
t.Helper()
return &ethpb.SignedExecutionPayloadEnvelope{
Message: &ethpb.ExecutionPayloadEnvelope{
Payload: &enginev1.ExecutionPayloadDeneb{
ParentHash: make([]byte, 32),
FeeRecipient: make([]byte, 20),
StateRoot: make([]byte, 32),
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
BaseFeePerGas: make([]byte, 32),
BlockHash: blockHash,
Transactions: [][]byte{},
Withdrawals: []*enginev1.Withdrawal{},
},
ExecutionRequests: &enginev1.ExecutionRequests{},
BuilderIndex: 0,
BeaconBlockRoot: blockRoot[:],
Slot: slot,
StateRoot: make([]byte, 32),
},
Signature: make([]byte, 96),
}
}
func setupGloasService(t *testing.T, engineClient *mockExecution.EngineClient) (*Service, *testServiceRequirements) {
t.Helper()
return minimalTestService(t,
WithPayloadIDCache(cache.NewPayloadIDCache()),
WithExecutionEngineCaller(engineClient),
)
}
func insertGloasBlock(t *testing.T, s *Service, base *ethpb.BeaconStateGloas, blk *ethpb.SignedBeaconBlockGloas, blockRoot [32]byte) {
t.Helper()
ctx := t.Context()
st, err := state_native.InitializeFromProtoUnsafeGloas(base)
require.NoError(t, err)
signed, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
roblock, err := blocks.NewROBlockWithRoot(signed, blockRoot)
require.NoError(t, err)
require.NoError(t, s.cfg.BeaconDB.SaveBlock(ctx, signed))
require.NoError(t, s.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: blockRoot[:], Slot: blk.Block.Slot}))
require.NoError(t, s.cfg.StateGen.SaveState(ctx, blockRoot, st))
require.NoError(t, s.InsertNode(ctx, st, roblock))
}
func TestGetPayloadEnvelopePrestate_UnknownRoot(t *testing.T) {
s, _ := setupGloasService(t, &mockExecution.EngineClient{})
ctx := t.Context()
unknownRoot := bytesutil.ToBytes32([]byte("unknown"))
env := &ethpb.ExecutionPayloadEnvelope{
BeaconBlockRoot: unknownRoot[:],
Payload: &enginev1.ExecutionPayloadDeneb{},
}
envelope, err := blocks.WrappedROExecutionPayloadEnvelope(env)
require.NoError(t, err)
_, err = s.getPayloadEnvelopePrestate(ctx, envelope)
require.ErrorContains(t, "not found in forkchoice", err)
}
func TestGetPayloadEnvelopePrestate_OK(t *testing.T) {
s, _ := setupGloasService(t, &mockExecution.EngineClient{})
ctx := t.Context()
blockRoot := bytesutil.ToBytes32([]byte("root1"))
parentRoot := params.BeaconConfig().ZeroHash
blockHash := bytesutil.ToBytes32([]byte("hash1"))
base, blk := testGloasState(t, 1, parentRoot, blockHash)
insertGloasBlock(t, s, base, blk, blockRoot)
env := &ethpb.ExecutionPayloadEnvelope{
BeaconBlockRoot: blockRoot[:],
Payload: &enginev1.ExecutionPayloadDeneb{},
}
envelope, err := blocks.WrappedROExecutionPayloadEnvelope(env)
require.NoError(t, err)
st, err := s.getPayloadEnvelopePrestate(ctx, envelope)
require.NoError(t, err)
require.Equal(t, primitives.Slot(1), st.Slot())
}
func TestNotifyNewEnvelope_Valid(t *testing.T) {
s, _ := setupGloasService(t, &mockExecution.EngineClient{})
ctx := t.Context()
blockRoot := bytesutil.ToBytes32([]byte("root1"))
parentRoot := params.BeaconConfig().ZeroHash
blockHash := bytesutil.ToBytes32([]byte("hash1"))
base, _ := testGloasState(t, 1, parentRoot, blockHash)
st, err := state_native.InitializeFromProtoUnsafeGloas(base)
require.NoError(t, err)
env := &ethpb.ExecutionPayloadEnvelope{
BeaconBlockRoot: blockRoot[:],
Payload: &enginev1.ExecutionPayloadDeneb{BlockHash: blockHash[:]},
ExecutionRequests: &enginev1.ExecutionRequests{},
Slot: 1,
}
envelope, err := blocks.WrappedROExecutionPayloadEnvelope(env)
require.NoError(t, err)
isValid, err := s.notifyNewEnvelope(ctx, st, envelope)
require.NoError(t, err)
require.Equal(t, true, isValid)
}
func TestNotifyNewEnvelope_Syncing(t *testing.T) {
s, _ := setupGloasService(t, &mockExecution.EngineClient{
ErrNewPayload: execution.ErrAcceptedSyncingPayloadStatus,
})
ctx := t.Context()
blockRoot := bytesutil.ToBytes32([]byte("root1"))
parentRoot := params.BeaconConfig().ZeroHash
blockHash := bytesutil.ToBytes32([]byte("hash1"))
base, _ := testGloasState(t, 1, parentRoot, blockHash)
st, err := state_native.InitializeFromProtoUnsafeGloas(base)
require.NoError(t, err)
env := &ethpb.ExecutionPayloadEnvelope{
BeaconBlockRoot: blockRoot[:],
Payload: &enginev1.ExecutionPayloadDeneb{BlockHash: blockHash[:]},
ExecutionRequests: &enginev1.ExecutionRequests{},
Slot: 1,
}
envelope, err := blocks.WrappedROExecutionPayloadEnvelope(env)
require.NoError(t, err)
isValid, err := s.notifyNewEnvelope(ctx, st, envelope)
require.NoError(t, err)
require.Equal(t, false, isValid)
}
func TestNotifyNewEnvelope_Invalid(t *testing.T) {
s, _ := setupGloasService(t, &mockExecution.EngineClient{
ErrNewPayload: execution.ErrInvalidPayloadStatus,
})
ctx := t.Context()
blockRoot := bytesutil.ToBytes32([]byte("root1"))
parentRoot := params.BeaconConfig().ZeroHash
blockHash := bytesutil.ToBytes32([]byte("hash1"))
base, _ := testGloasState(t, 1, parentRoot, blockHash)
st, err := state_native.InitializeFromProtoUnsafeGloas(base)
require.NoError(t, err)
env := &ethpb.ExecutionPayloadEnvelope{
BeaconBlockRoot: blockRoot[:],
Payload: &enginev1.ExecutionPayloadDeneb{BlockHash: blockHash[:]},
ExecutionRequests: &enginev1.ExecutionRequests{},
Slot: 1,
}
envelope, err := blocks.WrappedROExecutionPayloadEnvelope(env)
require.NoError(t, err)
_, err = s.notifyNewEnvelope(ctx, st, envelope)
require.Equal(t, true, IsInvalidBlock(err))
}
func TestNotifyForkchoiceUpdateGloas_Valid(t *testing.T) {
pid := &enginev1.PayloadIDBytes{1, 2, 3, 4, 5, 6, 7, 8}
s, _ := setupGloasService(t, &mockExecution.EngineClient{PayloadIDBytes: pid})
ctx := t.Context()
blockHash := bytesutil.ToBytes32([]byte("hash1"))
attr := payloadattribute.EmptyWithVersion(version.Gloas)
retPid, err := s.notifyForkchoiceUpdateGloas(ctx, blockHash, attr)
require.NoError(t, err)
require.DeepEqual(t, pid, retPid)
}
func TestNotifyForkchoiceUpdateGloas_Syncing(t *testing.T) {
s, _ := setupGloasService(t, &mockExecution.EngineClient{
ErrForkchoiceUpdated: execution.ErrAcceptedSyncingPayloadStatus,
})
ctx := t.Context()
blockHash := bytesutil.ToBytes32([]byte("hash1"))
_, err := s.notifyForkchoiceUpdateGloas(ctx, blockHash, nil)
require.NoError(t, err)
}
func TestNotifyForkchoiceUpdateGloas_Invalid(t *testing.T) {
s, _ := setupGloasService(t, &mockExecution.EngineClient{
ErrForkchoiceUpdated: execution.ErrInvalidPayloadStatus,
})
ctx := t.Context()
blockHash := bytesutil.ToBytes32([]byte("hash1"))
_, err := s.notifyForkchoiceUpdateGloas(ctx, blockHash, nil)
require.Equal(t, true, IsInvalidBlock(err))
}
func TestNotifyForkchoiceUpdateGloas_NilAttributes(t *testing.T) {
s, _ := setupGloasService(t, &mockExecution.EngineClient{})
ctx := t.Context()
blockHash := bytesutil.ToBytes32([]byte("hash1"))
_, err := s.notifyForkchoiceUpdateGloas(ctx, blockHash, nil)
require.NoError(t, err)
}
func TestSavePostPayload(t *testing.T) {
s, _ := setupGloasService(t, &mockExecution.EngineClient{})
ctx := t.Context()
blockRoot := bytesutil.ToBytes32([]byte("root1"))
blockHash := bytesutil.ToBytes32([]byte("hash1"))
base, _ := testGloasState(t, 1, params.BeaconConfig().ZeroHash, blockHash)
st, err := state_native.InitializeFromProtoUnsafeGloas(base)
require.NoError(t, err)
protoEnv := testSignedEnvelope(t, blockRoot, 1, blockHash[:])
signed, err := blocks.WrappedROSignedExecutionPayloadEnvelope(protoEnv)
require.NoError(t, err)
require.NoError(t, s.savePostPayload(ctx, signed, st))
// Verify the envelope was saved in the DB.
require.Equal(t, true, s.cfg.BeaconDB.HasExecutionPayloadEnvelope(ctx, blockRoot))
}
func TestValidateExecutionOnEnvelope_Valid(t *testing.T) {
s, _ := setupGloasService(t, &mockExecution.EngineClient{})
ctx := t.Context()
blockRoot := bytesutil.ToBytes32([]byte("root1"))
parentRoot := params.BeaconConfig().ZeroHash
blockHash := bytesutil.ToBytes32([]byte("hash1"))
base, _ := testGloasState(t, 1, parentRoot, blockHash)
st, err := state_native.InitializeFromProtoUnsafeGloas(base)
require.NoError(t, err)
env := &ethpb.ExecutionPayloadEnvelope{
BeaconBlockRoot: blockRoot[:],
Payload: &enginev1.ExecutionPayloadDeneb{BlockHash: blockHash[:], ParentHash: make([]byte, 32)},
ExecutionRequests: &enginev1.ExecutionRequests{},
Slot: 1,
}
envelope, err := blocks.WrappedROExecutionPayloadEnvelope(env)
require.NoError(t, err)
isValid, err := s.validateExecutionOnEnvelope(ctx, st, envelope)
require.NoError(t, err)
require.Equal(t, true, isValid)
}
func TestPostPayloadHeadUpdate_NotHead(t *testing.T) {
s, _ := setupGloasService(t, &mockExecution.EngineClient{})
ctx := t.Context()
root := bytesutil.ToBytes32([]byte("root1"))
headRoot := bytesutil.ToBytes32([]byte("different"))
blockHash := bytesutil.ToBytes32([]byte("hash1"))
base, _ := testGloasState(t, 1, params.BeaconConfig().ZeroHash, blockHash)
st, err := state_native.InitializeFromProtoUnsafeGloas(base)
require.NoError(t, err)
env := &ethpb.ExecutionPayloadEnvelope{
BeaconBlockRoot: root[:],
Payload: &enginev1.ExecutionPayloadDeneb{BlockHash: blockHash[:]},
Slot: 1,
}
envelope, err := blocks.WrappedROExecutionPayloadEnvelope(env)
require.NoError(t, err)
require.NoError(t, s.postPayloadHeadUpdate(ctx, envelope, st, root, headRoot[:]))
}
func TestGetLookupParentRoot_PreGloas(t *testing.T) {
service, _ := minimalTestService(t)
parentRoot := [32]byte{1}
blk := util.HydrateSignedBeaconBlockDeneb(&ethpb.SignedBeaconBlockDeneb{
Block: &ethpb.BeaconBlockDeneb{
ParentRoot: parentRoot[:],
},
})
wsb, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
roblock, err := blocks.NewROBlock(wsb)
require.NoError(t, err)
got, err := service.getLookupParentRoot(roblock)
require.NoError(t, err)
require.Equal(t, parentRoot, got)
}
func TestGetLookupParentRoot_GloasBuildsOnEmpty(t *testing.T) {
service, req := minimalTestService(t)
ctx := t.Context()
parentRoot := [32]byte{1}
parentBlockHash := [32]byte{20}
parentNodeBlockHash := [32]byte{99} // Different from parentBlockHash => builds on empty
// Insert a Gloas node for the parent so BlockHash works.
st, parentROBlock, err := prepareGloasForkchoiceState(ctx, 1, parentRoot, params.BeaconConfig().ZeroHash, parentNodeBlockHash, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, req.fcs.InsertNode(ctx, st, parentROBlock))
blockHash := [32]byte{10}
bid := util.HydrateSignedExecutionPayloadBid(&ethpb.SignedExecutionPayloadBid{
Message: &ethpb.ExecutionPayloadBid{
BlockHash: blockHash[:],
ParentBlockHash: parentBlockHash[:],
},
})
blk := util.HydrateSignedBeaconBlockGloas(&ethpb.SignedBeaconBlockGloas{
Block: &ethpb.BeaconBlockGloas{
Slot: 2,
ParentRoot: parentRoot[:],
Body: &ethpb.BeaconBlockBodyGloas{
SignedExecutionPayloadBid: bid,
},
},
})
wsb, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
roblock, err := blocks.NewROBlock(wsb)
require.NoError(t, err)
got, err := service.getLookupParentRoot(roblock)
require.NoError(t, err)
// parentBlockHash != parentNodeBlockHash, so it builds on empty => returns parentRoot
require.Equal(t, parentRoot, got)
}
func TestGetLookupParentRoot_GloasBuildsOnFull(t *testing.T) {
service, req := minimalTestService(t)
ctx := t.Context()
parentRoot := [32]byte{1}
parentNodeBlockHash := [32]byte{10}
// Insert a Gloas node for the parent so BlockHash works.
st, parentROBlock, err := prepareGloasForkchoiceState(ctx, 1, parentRoot, params.BeaconConfig().ZeroHash, parentNodeBlockHash, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, req.fcs.InsertNode(ctx, st, parentROBlock))
// Set the bid's ParentBlockHash to match the parent node's block hash.
blockHash := [32]byte{20}
bid := util.HydrateSignedExecutionPayloadBid(&ethpb.SignedExecutionPayloadBid{
Message: &ethpb.ExecutionPayloadBid{
BlockHash: blockHash[:],
ParentBlockHash: parentNodeBlockHash[:],
},
})
blk := util.HydrateSignedBeaconBlockGloas(&ethpb.SignedBeaconBlockGloas{
Block: &ethpb.BeaconBlockGloas{
Slot: 2,
ParentRoot: parentRoot[:],
Body: &ethpb.BeaconBlockBodyGloas{
SignedExecutionPayloadBid: bid,
},
},
})
wsb, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
roblock, err := blocks.NewROBlock(wsb)
require.NoError(t, err)
got, err := service.getLookupParentRoot(roblock)
require.NoError(t, err)
// The bid's ParentBlockHash matches the parent node's block hash, so it builds on full => returns that block hash
require.Equal(t, parentNodeBlockHash, got)
}
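The two Gloas tests above both reduce to a single hash comparison. A minimal sketch of the rule they assume follows; the helper name and signature are hypothetical and are not Prysm's actual `getLookupParentRoot`:

```go
package main

import "fmt"

// lookupParentRoot is a hypothetical sketch of the builds-on-empty /
// builds-on-full rule exercised by the tests above. If the bid's
// ParentBlockHash equals the block hash recorded in forkchoice for the
// parent root, the block builds on the full payload and the lookup key
// is that block hash; otherwise it builds on empty and the lookup falls
// back to the beacon parent root.
func lookupParentRoot(parentRoot, parentNodeBlockHash, bidParentBlockHash [32]byte) [32]byte {
	if bidParentBlockHash == parentNodeBlockHash {
		// Builds on full: key by the parent's execution block hash.
		return parentNodeBlockHash
	}
	// Builds on empty: key by the beacon parent root.
	return parentRoot
}

func main() {
	// Builds on empty: bid parent hash (20) != node block hash (99) => parent root.
	fmt.Println(lookupParentRoot([32]byte{1}, [32]byte{99}, [32]byte{20}) == [32]byte{1}) // true
	// Builds on full: hashes match (10) => that block hash.
	fmt.Println(lookupParentRoot([32]byte{1}, [32]byte{10}, [32]byte{10}) == [32]byte{10}) // true
}
```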


@@ -170,7 +170,7 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
// Forward an event capturing a new chain head over a common event feed.
// This is done in a goroutine to avoid blocking the critical runtime main routine.
go func() {
if err := s.notifyNewHeadEvent(ctx, newHeadSlot, headState, newStateRoot[:], newHeadRoot[:]); err != nil {
if err := s.notifyNewHeadEvent(ctx, newHeadSlot, newStateRoot[:], newHeadRoot[:]); err != nil {
log.WithError(err).Error("Could not notify event feed of new chain head")
}
}()
@@ -322,7 +322,6 @@ func (s *Service) hasHeadState() bool {
func (s *Service) notifyNewHeadEvent(
ctx context.Context,
newHeadSlot primitives.Slot,
newHeadState state.BeaconState,
newHeadStateRoot,
newHeadRoot []byte,
) error {


@@ -152,7 +152,6 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
func Test_notifyNewHeadEvent(t *testing.T) {
t.Run("genesis_state_root", func(t *testing.T) {
bState, _ := util.DeterministicGenesisState(t, 10)
srv := testServiceWithDB(t)
srv.SetGenesisTime(time.Now())
notifier := srv.cfg.StateNotifier.(*mock.MockStateNotifier)
@@ -165,7 +164,7 @@ func Test_notifyNewHeadEvent(t *testing.T) {
st, blk, err = prepareForkchoiceState(t.Context(), 1, newHeadRoot, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
require.NoError(t, srv.notifyNewHeadEvent(t.Context(), 1, bState, newHeadStateRoot[:], newHeadRoot[:]))
require.NoError(t, srv.notifyNewHeadEvent(t.Context(), 1, newHeadStateRoot[:], newHeadRoot[:]))
events := notifier.ReceivedEvents()
require.Equal(t, 1, len(events))
@@ -202,7 +201,7 @@ func Test_notifyNewHeadEvent(t *testing.T) {
st, blk, err = prepareForkchoiceState(t.Context(), 0, newHeadRoot, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
err = srv.notifyNewHeadEvent(t.Context(), epoch2Start, bState, newHeadStateRoot[:], newHeadRoot[:])
err = srv.notifyNewHeadEvent(t.Context(), epoch2Start, newHeadStateRoot[:], newHeadRoot[:])
require.NoError(t, err)
events := notifier.ReceivedEvents()
require.Equal(t, 1, len(events))
@@ -220,7 +219,6 @@ func Test_notifyNewHeadEvent(t *testing.T) {
require.DeepSSZEqual(t, wanted, eventHead)
})
t.Run("epoch transition", func(t *testing.T) {
bState, _ := util.DeterministicGenesisState(t, 10)
srv := testServiceWithDB(t)
srv.SetGenesisTime(time.Now())
notifier := srv.cfg.StateNotifier.(*mock.MockStateNotifier)
@@ -234,7 +232,7 @@ func Test_notifyNewHeadEvent(t *testing.T) {
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
newHeadSlot := params.BeaconConfig().SlotsPerEpoch
require.NoError(t, srv.notifyNewHeadEvent(t.Context(), newHeadSlot, bState, newHeadStateRoot[:], newHeadRoot[:]))
require.NoError(t, srv.notifyNewHeadEvent(t.Context(), newHeadSlot, newHeadStateRoot[:], newHeadRoot[:]))
events := notifier.ReceivedEvents()
require.Equal(t, 1, len(events))


@@ -5,7 +5,6 @@ import (
"fmt"
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v7/config/params"
consensus_types "github.com/OffchainLabs/prysm/v7/consensus-types"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
@@ -41,7 +40,7 @@ func logStateTransitionData(b interfaces.ReadOnlyBeaconBlock) error {
}
log = log.WithField("syncBitsCount", agg.SyncCommitteeBits.Count())
}
if b.Version() >= version.Bellatrix {
if b.Version() >= version.Bellatrix && b.Version() < version.Gloas {
p, err := b.Body().Execution()
if err != nil {
return err
@@ -57,7 +56,7 @@ func logStateTransitionData(b interfaces.ReadOnlyBeaconBlock) error {
txsPerSlotCount.Set(float64(len(txs)))
}
}
if b.Version() >= version.Deneb {
if b.Version() >= version.Deneb && b.Version() < version.Gloas {
kzgs, err := b.Body().BlobKzgCommitments()
if err != nil {
log.WithError(err).Error("Failed to get blob KZG commitments")
@@ -65,7 +64,7 @@ func logStateTransitionData(b interfaces.ReadOnlyBeaconBlock) error {
log = log.WithField("kzgCommitmentCount", len(kzgs))
}
}
if b.Version() >= version.Electra {
if b.Version() >= version.Electra && b.Version() < version.Gloas {
eReqs, err := b.Body().ExecutionRequests()
if err != nil {
log.WithError(err).Error("Failed to get execution requests")
@@ -81,6 +80,20 @@ func logStateTransitionData(b interfaces.ReadOnlyBeaconBlock) error {
}
}
}
if b.Version() >= version.Gloas {
signedBid, err := b.Body().SignedExecutionPayloadBid()
if err != nil {
log.WithError(err).Error("Failed to get signed execution payload bid")
} else {
bid := signedBid.Message
log = log.WithFields(logrus.Fields{
"blobKzgCommitmentCount": len(bid.BlobKzgCommitments),
"payloadHash": fmt.Sprintf("%#x", bytesutil.Trunc(bid.BlockHash)),
"parentHash": fmt.Sprintf("%#x", bytesutil.Trunc(bid.ParentBlockHash)),
"builderIndex": bid.BuilderIndex,
})
}
}
log.Info("Finished applying state transition")
return nil
}
@@ -104,19 +117,21 @@ func logBlockSyncStatus(block interfaces.ReadOnlyBeaconBlock, blockRoot [32]byte
"sinceSlotStartTime": sinceSlotStartTime,
}
moreFields := logrus.Fields{
"slot": block.Slot(),
"slotInEpoch": block.Slot() % params.BeaconConfig().SlotsPerEpoch,
"block": blkRoot,
"epoch": slots.ToEpoch(block.Slot()),
"justifiedEpoch": justified.Epoch,
"justifiedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(justified.Root)[:8]),
"finalizedEpoch": finalized.Epoch,
"finalizedRoot": finalizedRoot,
"parentRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(parentRoot[:])[:8]),
"version": version.String(block.Version()),
"sinceSlotStartTime": sinceSlotStartTime,
"chainServiceProcessedTime": prysmTime.Now().Sub(receivedTime) - daWaitedTime,
"dataAvailabilityWaitedTime": daWaitedTime,
"slot": block.Slot(),
"slotInEpoch": block.Slot() % params.BeaconConfig().SlotsPerEpoch,
"block": blkRoot,
"epoch": slots.ToEpoch(block.Slot()),
"justifiedEpoch": justified.Epoch,
"justifiedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(justified.Root)[:8]),
"finalizedEpoch": finalized.Epoch,
"finalizedRoot": finalizedRoot,
"parentRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(parentRoot[:])[:8]),
"version": version.String(block.Version()),
"sinceSlotStartTime": sinceSlotStartTime,
"chainServiceProcessedTime": prysmTime.Now().Sub(receivedTime) - daWaitedTime,
}
if block.Version() < version.Gloas {
moreFields["dataAvailabilityWaitedTime"] = daWaitedTime
}
level := logs.PackageVerbosity("beacon-chain/blockchain")
@@ -132,11 +147,7 @@ func logBlockSyncStatus(block interfaces.ReadOnlyBeaconBlock, blockRoot [32]byte
// logs payload related data every slot.
func logPayload(block interfaces.ReadOnlyBeaconBlock) error {
isExecutionBlk, err := blocks.IsExecutionBlock(block.Body())
if err != nil {
return errors.Wrap(err, "could not determine if block is execution block")
}
if !isExecutionBlk {
if block.Version() < version.Bellatrix || block.Version() >= version.Gloas {
return nil
}
payload, err := block.Body().Execution()


@@ -96,7 +96,11 @@ func (s *Service) OnAttestation(ctx context.Context, a ethpb.Att, disparity time
// We assume trusted attestation in this function has verified signature.
// Update forkchoice store with the new attestation for updating weight.
s.cfg.ForkChoiceStore.ProcessAttestation(ctx, indexedAtt.GetAttestingIndices(), bytesutil.ToBytes32(a.GetData().BeaconBlockRoot), a.GetData().Target.Epoch)
attData := a.GetData()
payloadStatus := true
if attData.Target.Epoch >= params.BeaconConfig().GloasForkEpoch {
payloadStatus = attData.CommitteeIndex == 1
}
s.cfg.ForkChoiceStore.ProcessAttestation(ctx, indexedAtt.GetAttestingIndices(), bytesutil.ToBytes32(attData.BeaconBlockRoot), attData.Slot, payloadStatus)
return nil
}


@@ -64,7 +64,6 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
return invalidBlock{error: err}
}
startTime := time.Now()
fcuArgs := &fcuConfig{}
if features.Get().EnableLightClient && slots.ToEpoch(s.CurrentSlot()) >= params.BeaconConfig().AltairForkEpoch {
defer s.processLightClientUpdates(cfg)
@@ -102,7 +101,9 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
s.logNonCanonicalBlockReceived(cfg.roblock.Root(), cfg.headRoot)
return nil
}
s.sendFCU(cfg, fcuArgs)
if cfg.roblock.Version() < version.Gloas {
s.sendFCU(cfg)
}
// Pre-Fulu the caches are updated when computing the payload attributes
if cfg.postState.Version() >= version.Fulu {
@@ -402,7 +403,11 @@ func (s *Service) handleBlockAttestations(ctx context.Context, blk interfaces.Re
}
r := bytesutil.ToBytes32(a.GetData().BeaconBlockRoot)
if s.cfg.ForkChoiceStore.HasNode(r) {
s.cfg.ForkChoiceStore.ProcessAttestation(ctx, indices, r, a.GetData().Target.Epoch)
payloadStatus := true
if a.GetData().Target.Epoch >= params.BeaconConfig().GloasForkEpoch {
payloadStatus = a.GetData().CommitteeIndex == 1
}
s.cfg.ForkChoiceStore.ProcessAttestation(ctx, indices, r, a.GetData().Slot, payloadStatus)
} else if features.Get().EnableExperimentalAttestationPool {
if err = s.cfg.AttestationCache.Add(a); err != nil {
return err
@@ -544,6 +549,9 @@ func (s *Service) validateMergeTransitionBlock(ctx context.Context, stateVersion
if blocks.IsPreBellatrixVersion(blk.Block().Version()) {
return nil
}
if blk.Block().Version() >= version.Gloas {
return nil
}
// Skip validation if block has an empty payload.
payload, err := blk.Block().Body().Execution()
@@ -993,6 +1001,7 @@ func (s *Service) waitForSync() error {
}
}
// The caller of this function must hold a write lock on the forkchoice store.
func (s *Service) handleInvalidExecutionError(ctx context.Context, err error, blockRoot, parentRoot [32]byte, parentHash [32]byte) error {
if IsInvalidBlock(err) && InvalidBlockLVH(err) != [32]byte{} {
return s.pruneInvalidBlock(ctx, blockRoot, parentRoot, parentHash, InvalidBlockLVH(err))


@@ -38,23 +38,26 @@ func (s *Service) CurrentSlot() primitives.Slot {
}
// getFCUArgs returns the arguments for the forkchoice update call.
func (s *Service) getFCUArgs(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) error {
if err := s.getFCUArgsEarlyBlock(cfg, fcuArgs); err != nil {
return err
func (s *Service) getFCUArgs(cfg *postBlockProcessConfig) (*fcuConfig, error) {
fcuArgs, err := s.getFCUArgsEarlyBlock(cfg)
if err != nil {
return nil, err
}
fcuArgs.attributes = s.getPayloadAttribute(cfg.ctx, fcuArgs.headState, fcuArgs.proposingSlot, cfg.headRoot[:])
return nil
return fcuArgs, nil
}
func (s *Service) getFCUArgsEarlyBlock(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) error {
func (s *Service) getFCUArgsEarlyBlock(cfg *postBlockProcessConfig) (*fcuConfig, error) {
if cfg.roblock.Root() == cfg.headRoot {
fcuArgs.headState = cfg.postState
fcuArgs.headBlock = cfg.roblock
fcuArgs.headRoot = cfg.headRoot
fcuArgs.proposingSlot = s.CurrentSlot() + 1
return nil
return &fcuConfig{
headState: cfg.postState,
headBlock: cfg.roblock,
headRoot: cfg.headRoot,
proposingSlot: s.CurrentSlot() + 1,
}, nil
}
return s.fcuArgsNonCanonicalBlock(cfg, fcuArgs)
return s.fcuArgsNonCanonicalBlock(cfg)
}
// logNonCanonicalBlockReceived prints a message informing that the received
@@ -79,16 +82,17 @@ func (s *Service) logNonCanonicalBlockReceived(blockRoot [32]byte, headRoot [32]
// fcuArgsNonCanonicalBlock returns the arguments to the FCU call when the
// incoming block is non-canonical; in that case they are based on the head root.
func (s *Service) fcuArgsNonCanonicalBlock(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) error {
func (s *Service) fcuArgsNonCanonicalBlock(cfg *postBlockProcessConfig) (*fcuConfig, error) {
headState, headBlock, err := s.getStateAndBlock(cfg.ctx, cfg.headRoot)
if err != nil {
return err
return nil, err
}
fcuArgs.headState = headState
fcuArgs.headBlock = headBlock
fcuArgs.headRoot = cfg.headRoot
fcuArgs.proposingSlot = s.CurrentSlot() + 1
return nil
return &fcuConfig{
headState: headState,
headBlock: headBlock,
headRoot: cfg.headRoot,
proposingSlot: s.CurrentSlot() + 1,
}, nil
}
// sendStateFeedOnBlock sends an event that a new block has been synced
@@ -192,33 +196,37 @@ func reportProcessingTime(startTime time.Time) {
// getBlockPreState returns the pre state of an incoming block. It uses the parent root of the block
// to retrieve the state in DB. It verifies the pre state's validity and the incoming block
// is in the correct time window.
func (s *Service) getBlockPreState(ctx context.Context, b interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
func (s *Service) getBlockPreState(ctx context.Context, b consensus_blocks.ROBlock) (state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.getBlockPreState")
defer span.End()
accessRoot, err := s.getLookupParentRoot(b)
if err != nil {
return nil, errors.Wrap(err, "could not get lookup parent root")
}
// Verify incoming block has a valid pre state.
if err := s.verifyBlkPreState(ctx, b.ParentRoot()); err != nil {
if err := s.verifyBlkPreState(ctx, accessRoot); err != nil {
return nil, err
}
preState, err := s.cfg.StateGen.StateByRoot(ctx, b.ParentRoot())
bl := b.Block()
preState, err := s.cfg.StateGen.StateByRoot(ctx, accessRoot)
if err != nil {
return nil, errors.Wrapf(err, "could not get pre state for slot %d", b.Slot())
return nil, errors.Wrapf(err, "could not get pre state for slot %d", bl.Slot())
}
if preState == nil || preState.IsNil() {
return nil, errors.Wrapf(err, "nil pre state for slot %d", b.Slot())
return nil, errors.Wrapf(err, "nil pre state for slot %d", bl.Slot())
}
// Verify block slot time is not from the future.
if err := slots.VerifyTime(s.genesisTime, b.Slot(), params.BeaconConfig().MaximumGossipClockDisparityDuration()); err != nil {
if err := slots.VerifyTime(s.genesisTime, bl.Slot(), params.BeaconConfig().MaximumGossipClockDisparityDuration()); err != nil {
return nil, err
}
// Verify block is later than the finalized epoch slot.
if err := s.verifyBlkFinalizedSlot(b); err != nil {
if err := s.verifyBlkFinalizedSlot(bl); err != nil {
return nil, err
}
return preState, nil
}


@@ -731,13 +731,13 @@ func TestOnBlock_CanFinalize_WithOnTick(t *testing.T) {
currStoreJustifiedEpoch := service.CurrentJustifiedCheckpt().Epoch
currStoreFinalizedEpoch := service.FinalizedCheckpt().Epoch
preState, err := service.getBlockPreState(ctx, wsb.Block())
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, r)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, r, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, r)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true}))
service.cfg.ForkChoiceStore.Unlock()
@@ -783,13 +783,13 @@ func TestOnBlock_CanFinalize(t *testing.T) {
currStoreJustifiedEpoch := service.CurrentJustifiedCheckpt().Epoch
currStoreFinalizedEpoch := service.FinalizedCheckpt().Epoch
preState, err := service.getBlockPreState(ctx, wsb.Block())
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, r)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, r, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, r)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true}))
service.cfg.ForkChoiceStore.Unlock()
@@ -847,13 +847,13 @@ func TestOnBlock_CallNewPayloadAndForkchoiceUpdated(t *testing.T) {
wsb, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, r)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, r, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, r)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
@@ -1322,13 +1322,13 @@ func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
wg.Add(4)
var lock sync.Mutex
go func() {
preState, err := service.getBlockPreState(ctx, wsb1.Block())
roblock, err := consensusblocks.NewROBlockWithRoot(wsb1, r1)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb1)
require.NoError(t, err)
lock.Lock()
roblock, err := consensusblocks.NewROBlockWithRoot(wsb1, r1)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true}))
service.cfg.ForkChoiceStore.Unlock()
@@ -1336,13 +1336,13 @@ func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
wg.Done()
}()
go func() {
preState, err := service.getBlockPreState(ctx, wsb2.Block())
roblock, err := consensusblocks.NewROBlockWithRoot(wsb2, r2)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb2)
require.NoError(t, err)
lock.Lock()
roblock, err := consensusblocks.NewROBlockWithRoot(wsb2, r2)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true}))
service.cfg.ForkChoiceStore.Unlock()
@@ -1350,13 +1350,13 @@ func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
wg.Done()
}()
go func() {
preState, err := service.getBlockPreState(ctx, wsb3.Block())
roblock, err := consensusblocks.NewROBlockWithRoot(wsb3, r3)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb3)
require.NoError(t, err)
lock.Lock()
roblock, err := consensusblocks.NewROBlockWithRoot(wsb3, r3)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true}))
service.cfg.ForkChoiceStore.Unlock()
@@ -1364,13 +1364,13 @@ func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
wg.Done()
}()
go func() {
preState, err := service.getBlockPreState(ctx, wsb4.Block())
roblock, err := consensusblocks.NewROBlockWithRoot(wsb4, r4)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb4)
require.NoError(t, err)
lock.Lock()
roblock, err := consensusblocks.NewROBlockWithRoot(wsb4, r4)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true}))
service.cfg.ForkChoiceStore.Unlock()
@@ -1442,13 +1442,13 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
@@ -1464,13 +1464,13 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
@@ -1487,13 +1487,13 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
@@ -1512,13 +1512,13 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, err)
firstInvalidRoot, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, firstInvalidRoot)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, firstInvalidRoot, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, firstInvalidRoot)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
service.cfg.ForkChoiceStore.Unlock()
@@ -1549,12 +1549,12 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err = service.getBlockPreState(ctx, wsb.Block())
rowsb, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
preState, err = service.getBlockPreState(ctx, rowsb)
require.NoError(t, err)
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
require.NoError(t, err)
rowsb, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
_, err = service.validateExecutionOnBlock(ctx, preStateVersion, preStateHeader, rowsb)
require.ErrorContains(t, "received an INVALID payload from execution engine", err)
// Check that forkchoice's head and store's headroot are the previous head (since the invalid block did
@@ -1578,13 +1578,13 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, err)
root, err = b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err = service.getBlockPreState(ctx, wsb.Block())
roblock, err = consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
preState, err = service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err = consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true})
service.cfg.ForkChoiceStore.Unlock()
@@ -1647,13 +1647,13 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
@@ -1670,13 +1670,13 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
@@ -1692,13 +1692,13 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, err)
lastValidRoot, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, lastValidRoot)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, lastValidRoot, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, lastValidRoot)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
service.cfg.ForkChoiceStore.Unlock()
@@ -1723,13 +1723,13 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, err)
invalidRoots[i-13], err = b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, invalidRoots[i-13])
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, invalidRoots[i-13], wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, invalidRoots[i-13])
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
@@ -1753,12 +1753,12 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err = service.getBlockPreState(ctx, wsb.Block())
rowsb, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
preState, err = service.getBlockPreState(ctx, rowsb)
require.NoError(t, err)
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
require.NoError(t, err)
rowsb, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
_, err = service.validateExecutionOnBlock(ctx, preStateVersion, preStateHeader, rowsb)
require.ErrorContains(t, "received an INVALID payload from execution engine", err)
@@ -1793,13 +1793,13 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, err)
root, err = b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err = service.getBlockPreState(ctx, wsb.Block())
roblock, err = consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
preState, err = service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err = consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true}))
service.cfg.ForkChoiceStore.Unlock()
@@ -1820,13 +1820,13 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true})
service.cfg.ForkChoiceStore.Unlock()
@@ -1850,13 +1850,13 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
root, err = b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err = service.getBlockPreState(ctx, wsb.Block())
roblock, err = consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
preState, err = service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err = consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, true})
service.cfg.ForkChoiceStore.Unlock()
@@ -1912,13 +1912,13 @@ func TestNoViableHead_Reboot(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
@@ -1934,13 +1934,13 @@ func TestNoViableHead_Reboot(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
@@ -1956,13 +1956,13 @@ func TestNoViableHead_Reboot(t *testing.T) {
require.NoError(t, err)
lastValidRoot, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, lastValidRoot)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, lastValidRoot, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, lastValidRoot)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
err = service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false})
service.cfg.ForkChoiceStore.Unlock()
@@ -1989,13 +1989,13 @@ func TestNoViableHead_Reboot(t *testing.T) {
// Save current justified and finalized epochs for future use.
currStoreJustifiedEpoch := service.CurrentJustifiedCheckpt().Epoch
currStoreFinalizedEpoch := service.FinalizedCheckpt().Epoch
preState, err := service.getBlockPreState(ctx, wsb.Block())
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
@@ -2021,12 +2021,12 @@ func TestNoViableHead_Reboot(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err = service.getBlockPreState(ctx, wsb.Block())
rowsb, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
preState, err = service.getBlockPreState(ctx, rowsb)
require.NoError(t, err)
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
require.NoError(t, err)
rowsb, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
_, err = service.validateExecutionOnBlock(ctx, preStateVersion, preStateHeader, rowsb)
require.ErrorContains(t, "received an INVALID payload from execution engine", err)
@@ -2113,13 +2113,13 @@ func TestOnBlock_HandleBlockAttestations(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
@@ -2181,13 +2181,13 @@ func TestOnBlock_HandleBlockAttestations(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
@@ -2417,14 +2417,12 @@ func Test_getFCUArgs(t *testing.T) {
isValidPayload: true,
}
// error branch
fcuArgs := &fcuConfig{}
err = s.getFCUArgs(cfg, fcuArgs)
_, err = s.getFCUArgs(cfg)
require.ErrorContains(t, "block does not exist", err)
// canonical branch
cfg.headRoot = cfg.roblock.Root()
fcuArgs = &fcuConfig{}
err = s.getFCUArgs(cfg, fcuArgs)
fcuArgs, err := s.getFCUArgs(cfg)
require.NoError(t, err)
require.Equal(t, cfg.roblock.Root(), fcuArgs.headRoot)
}
@@ -2456,7 +2454,9 @@ func TestRollbackBlock(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
@@ -2469,7 +2469,7 @@ func TestRollbackBlock(t *testing.T) {
// Set invalid parent root to trigger forkchoice error.
wsb.SetParentRoot([]byte("bad"))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
roblock, err = consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
// Rollback block insertion into db and caches.
@@ -2514,7 +2514,9 @@ func TestRollbackBlock_SavePostStateInfo_ContextDeadline(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
@@ -2570,13 +2572,13 @@ func TestRollbackBlock_ContextDeadline(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
@@ -2587,7 +2589,9 @@ func TestRollbackBlock_ContextDeadline(t *testing.T) {
require.NoError(t, err)
root, err = b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err = service.getBlockPreState(ctx, wsb.Block())
roblock, err = consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
preState, err = service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
@@ -2601,8 +2605,6 @@ func TestRollbackBlock_ContextDeadline(t *testing.T) {
// Set deadlined context when processing the block
cancCtx, canc := context.WithCancel(t.Context())
canc()
roblock, err = consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
parentRoot = roblock.Block().ParentRoot()


@@ -110,13 +110,13 @@ func TestService_ProcessAttestationsAndUpdateHead(t *testing.T) {
wsb, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
roblock, err := blocks.NewROBlockWithRoot(wsb, tRoot)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, tRoot, wsb, postState))
roblock, err := blocks.NewROBlockWithRoot(wsb, tRoot)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()
@@ -172,13 +172,13 @@ func TestService_UpdateHead_NoAtts(t *testing.T) {
wsb, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
roblock, err := blocks.NewROBlockWithRoot(wsb, tRoot)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, tRoot, wsb, postState))
roblock, err := blocks.NewROBlockWithRoot(wsb, tRoot)
require.NoError(t, err)
service.cfg.ForkChoiceStore.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
service.cfg.ForkChoiceStore.Unlock()


@@ -95,18 +95,17 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
if err != nil {
return errors.Wrap(err, "block copy")
}
preState, err := s.getBlockPreState(ctx, blockCopy.Block())
if err != nil {
return errors.Wrap(err, "could not get block's prestate")
}
currentCheckpoints := s.saveCurrentCheckpoints(preState)
roblock, err := blocks.NewROBlockWithRoot(blockCopy, blockRoot)
if err != nil {
return errors.Wrap(err, "new ro block with root")
}
preState, err := s.getBlockPreState(ctx, roblock)
if err != nil {
return errors.Wrap(err, "could not get block's prestate")
}
currentCheckpoints := s.saveCurrentCheckpoints(preState)
postState, isValidPayload, err := s.validateExecutionAndConsensus(ctx, preState, roblock)
if err != nil {
return errors.Wrap(err, "validator execution and consensus")
@@ -211,6 +210,16 @@ func (s *Service) validateExecutionAndConsensus(
preState state.BeaconState,
block blocks.ROBlock,
) (state.BeaconState, bool, error) {
if block.Version() >= version.Gloas {
postState, err := s.validateStateTransition(ctx, preState, block)
if errors.Is(err, ErrNotDescendantOfFinalized) {
return nil, false, invalidBlock{error: err, root: block.Root()}
}
if err != nil {
return nil, false, errors.Wrap(err, "failed to validate consensus state transition function")
}
return postState, false, nil
}
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
if err != nil {
return nil, false, err
@@ -244,6 +253,10 @@ func (s *Service) validateExecutionAndConsensus(
}
func (s *Service) handleDA(ctx context.Context, avs das.AvailabilityChecker, block blocks.ROBlock) (time.Duration, error) {
// Gloas DA is handled on the payload envelope.
if block.Version() >= version.Gloas {
return 0, nil
}
var err error
start := time.Now()
if avs != nil {


@@ -1,19 +1,284 @@
package blockchain
import (
"bytes"
"context"
"fmt"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/gloas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v7/beacon-chain/execution"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
payloadattribute "github.com/OffchainLabs/prysm/v7/consensus-types/payload-attribute"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/sync/errgroup"
)
// ExecutionPayloadEnvelopeReceiver interface defines the methods of chain service for receiving
// validated execution payload envelopes.
// ExecutionPayloadEnvelopeReceiver defines the methods for receiving execution payload envelopes.
type ExecutionPayloadEnvelopeReceiver interface {
ReceiveExecutionPayloadEnvelope(context.Context, interfaces.ROSignedExecutionPayloadEnvelope) error
}
// ReceiveExecutionPayloadEnvelope accepts a signed execution payload envelope.
func (s *Service) ReceiveExecutionPayloadEnvelope(_ context.Context, _ interfaces.ROSignedExecutionPayloadEnvelope) error {
// TODO: wire into execution payload envelope processing pipeline.
// ReceiveExecutionPayloadEnvelope processes a signed execution payload envelope for the Gloas fork.
func (s *Service) ReceiveExecutionPayloadEnvelope(ctx context.Context, signed interfaces.ROSignedExecutionPayloadEnvelope) error {
ctx, span := trace.StartSpan(ctx, "blockChain.ReceiveExecutionPayloadEnvelope")
defer span.End()
envelope, err := signed.Envelope()
if err != nil {
return errors.Wrap(err, "could not get envelope")
}
root := envelope.BeaconBlockRoot()
err = s.payloadBeingSynced.set(root)
if errors.Is(err, errBlockBeingSynced) {
log.WithField("blockRoot", fmt.Sprintf("%#x", root)).Debug("Ignoring payload envelope currently being synced")
return nil
}
defer s.payloadBeingSynced.unset(root)
preState, err := s.getPayloadEnvelopePrestate(ctx, envelope)
if err != nil {
return err
}
var isValidPayload bool
g, gCtx := errgroup.WithContext(ctx)
g.Go(func() error {
return gloas.ProcessExecutionPayload(gCtx, preState, signed)
})
g.Go(func() error {
var elErr error
isValidPayload, elErr = s.validateExecutionOnEnvelope(gCtx, preState, envelope)
return elErr
})
if err := g.Wait(); err != nil {
return err
}
if err := s.savePostPayload(ctx, signed, preState); err != nil {
return err
}
if err := s.InsertPayload(envelope); err != nil {
return errors.Wrap(err, "could not insert payload into forkchoice")
}
if isValidPayload {
s.cfg.ForkChoiceStore.Lock()
if err := s.cfg.ForkChoiceStore.SetOptimisticToValid(ctx, root); err != nil {
log.WithError(err).Error("Could not set optimistic to valid")
}
s.cfg.ForkChoiceStore.Unlock()
}
headRoot, err := s.HeadRoot(ctx)
if err != nil {
log.WithError(err).Error("Could not get head root")
return nil
}
if err := s.postPayloadHeadUpdate(ctx, envelope, preState, root, headRoot); err != nil {
return err
}
log.WithFields(logrus.Fields{
"slot": envelope.Slot(),
"blockRoot": fmt.Sprintf("%#x", root),
}).Info("Processed execution payload envelope")
return nil
}
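The consensus transition (`gloas.ProcessExecutionPayload`) and the engine-API check (`validateExecutionOnEnvelope`) above run concurrently under an errgroup, and the first error aborts processing. A stdlib-only sketch of that fan-out/first-error pattern (the helper names below are illustrative stand-ins, not prysm APIs; unlike `errgroup`, this sketch does not cancel the sibling goroutine on first failure):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"sync"
)

// runChecks runs all checks concurrently and returns the first error seen.
func runChecks(ctx context.Context, checks ...func(context.Context) error) error {
	var wg sync.WaitGroup
	var mu sync.Mutex
	var firstErr error
	for _, check := range checks {
		wg.Add(1)
		go func(f func(context.Context) error) {
			defer wg.Done()
			if err := f(ctx); err != nil {
				mu.Lock()
				if firstErr == nil {
					firstErr = err
				}
				mu.Unlock()
			}
		}(check)
	}
	wg.Wait()
	return firstErr
}

func main() {
	consensus := func(context.Context) error { return nil } // state transition passes
	execution := func(context.Context) error { return errors.New("invalid payload") }
	err := runChecks(context.Background(), consensus, execution)
	if err == nil || err.Error() != "invalid payload" {
		panic("unexpected result")
	}
	fmt.Println("ok")
}
```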
func (s *Service) postPayloadHeadUpdate(ctx context.Context, envelope interfaces.ROExecutionPayloadEnvelope, st state.BeaconState, root [32]byte, headRoot []byte) error {
if !bytes.Equal(headRoot, root[:]) {
return nil
}
payload, err := envelope.Execution()
if err != nil {
return errors.Wrap(err, "could not get execution payload from envelope")
}
blockHash := bytesutil.ToBytes32(payload.BlockHash())
s.headLock.Lock()
s.head.state = st
s.headLock.Unlock()
if err := transition.UpdateNextSlotCache(ctx, blockHash[:], st); err != nil {
log.WithError(err).Error("Could not update next slot cache")
}
attr := s.getPayloadAttribute(ctx, st, envelope.Slot()+1, headRoot)
if s.inRegularSync() {
go func() {
pid, err := s.notifyForkchoiceUpdateGloas(s.ctx, blockHash, attr)
if err != nil {
log.WithError(err).Error("Could not notify forkchoice update")
return
}
if attr != nil && !attr.IsEmpty() && pid != nil {
var pId [8]byte
copy(pId[:], pid[:])
s.cfg.PayloadIDCache.Set(envelope.Slot()+1, root, pId)
}
}()
}
return nil
}
func (s *Service) getPayloadEnvelopePrestate(ctx context.Context, envelope interfaces.ROExecutionPayloadEnvelope) (state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.getPayloadEnvelopePrestate")
defer span.End()
root := envelope.BeaconBlockRoot()
if !s.InForkchoice(root) {
return nil, fmt.Errorf("beacon block root %#x not found in forkchoice", root)
}
if err := s.verifyBlkPreState(ctx, root); err != nil {
return nil, errors.Wrap(err, "could not verify pre-state")
}
preState, err := s.cfg.StateGen.StateByRoot(ctx, root)
if err != nil {
return nil, errors.Wrap(err, "could not get pre-state by root")
}
if preState == nil || preState.IsNil() {
return nil, fmt.Errorf("nil pre-state for beacon block root %#x", root)
}
return preState, nil
}
// The returned boolean indicates whether the payload was fully valid; false means it was accepted as syncing (optimistic).
func (s *Service) notifyNewEnvelope(ctx context.Context, st state.BeaconState, envelope interfaces.ROExecutionPayloadEnvelope) (bool, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.notifyNewEnvelope")
defer span.End()
payload, err := envelope.Execution()
if err != nil {
return false, errors.Wrap(err, "could not get execution payload from envelope")
}
latestBid, err := st.LatestExecutionPayloadBid()
if err != nil {
return false, errors.Wrap(err, "could not get latest execution payload bid")
}
commitments := latestBid.BlobKzgCommitments()
versionedHashes := make([]common.Hash, len(commitments))
for i, c := range commitments {
versionedHashes[i] = primitives.ConvertKzgCommitmentToVersionedHash(c)
}
parentRoot := common.Hash(bytesutil.ToBytes32(st.LatestBlockHeader().ParentRoot))
requests := envelope.ExecutionRequests()
_, err = s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, versionedHashes, &parentRoot, requests)
if err == nil {
return true, nil
}
if errors.Is(err, execution.ErrAcceptedSyncingPayloadStatus) {
log.WithFields(logrus.Fields{
"slot": envelope.Slot(),
"payloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash())),
}).Info("Called new payload with optimistic envelope")
return false, nil
}
if errors.Is(err, execution.ErrInvalidPayloadStatus) {
return false, invalidBlock{error: ErrInvalidPayload}
}
return false, errors.WithMessage(ErrUndefinedExecutionEngineError, err.Error())
}
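The versioned hashes passed to `NewPayload` above are derived from the bid's blob KZG commitments. Per EIP-4844, the conversion is sha256 of the 48-byte commitment with the first byte replaced by the 0x01 version prefix; a minimal sketch of what `ConvertKzgCommitmentToVersionedHash` computes (the helper name below is hypothetical, not the prysm primitives API):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// kzgToVersionedHash converts a 48-byte KZG commitment to its EIP-4844
// versioned hash: sha256(commitment) with the first byte set to 0x01.
func kzgToVersionedHash(commitment []byte) [32]byte {
	h := sha256.Sum256(commitment)
	h[0] = 0x01 // VERSIONED_HASH_VERSION_KZG
	return h
}

func main() {
	commitment := make([]byte, 48) // zeroed placeholder commitment
	vh := kzgToVersionedHash(commitment)
	if vh[0] != 0x01 {
		panic("version byte not applied")
	}
	fmt.Println("ok")
}
```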
func (s *Service) validateExecutionOnEnvelope(ctx context.Context, st state.BeaconState, envelope interfaces.ROExecutionPayloadEnvelope) (bool, error) {
isValid, err := s.notifyNewEnvelope(ctx, st, envelope)
if err == nil {
return isValid, nil
}
blockRoot := envelope.BeaconBlockRoot()
parentRoot := bytesutil.ToBytes32(st.LatestBlockHeader().ParentRoot)
payload, payloadErr := envelope.Execution()
if payloadErr != nil {
return false, errors.Wrap(payloadErr, "could not get execution payload from envelope")
}
parentHash := bytesutil.ToBytes32(payload.ParentHash())
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
return false, s.handleInvalidExecutionError(ctx, err, blockRoot, parentRoot, parentHash)
}
func (s *Service) savePostPayload(ctx context.Context, signed interfaces.ROSignedExecutionPayloadEnvelope, st state.BeaconState) error {
ctx, span := trace.StartSpan(ctx, "blockChain.savePostPayload")
defer span.End()
protoEnv, ok := signed.Proto().(*ethpb.SignedExecutionPayloadEnvelope)
if !ok {
return errors.New("could not type assert signed envelope to proto")
}
if err := s.cfg.BeaconDB.SaveExecutionPayloadEnvelope(ctx, protoEnv); err != nil {
return errors.Wrap(err, "could not save execution payload envelope")
}
envelope, err := signed.Envelope()
if err != nil {
return errors.Wrap(err, "could not get envelope")
}
payload, err := envelope.Execution()
if err != nil {
return errors.Wrap(err, "could not get execution payload from envelope")
}
return s.cfg.StateGen.SaveState(ctx, bytesutil.ToBytes32(payload.BlockHash()), st)
}
// notifyForkchoiceUpdateGloas takes the block hash directly because Gloas
// blocks don't carry an execution payload in the body.
func (s *Service) notifyForkchoiceUpdateGloas(ctx context.Context, blockHash [32]byte, attributes payloadattribute.Attributer) (*enginev1.PayloadIDBytes, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.notifyForkchoiceUpdateGloas")
defer span.End()
s.cfg.ForkChoiceStore.RLock()
finalizedHash := s.cfg.ForkChoiceStore.FinalizedPayloadBlockHash()
justifiedHash := s.cfg.ForkChoiceStore.UnrealizedJustifiedPayloadBlockHash()
s.cfg.ForkChoiceStore.RUnlock()
fcs := &enginev1.ForkchoiceState{
HeadBlockHash: blockHash[:],
SafeBlockHash: justifiedHash[:],
FinalizedBlockHash: finalizedHash[:],
}
if attributes == nil {
attributes = payloadattribute.EmptyWithVersion(version.Gloas)
}
payloadID, lastValidHash, err := s.cfg.ExecutionEngineCaller.ForkchoiceUpdated(ctx, fcs, attributes)
if err == nil {
return payloadID, nil
}
switch {
case errors.Is(err, execution.ErrAcceptedSyncingPayloadStatus):
log.WithFields(logrus.Fields{
"headBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(blockHash[:])),
"finalizedPayloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(finalizedHash[:])),
}).Info("Called forkchoice updated with optimistic block (Gloas)")
return payloadID, nil
case errors.Is(err, execution.ErrInvalidPayloadStatus):
if len(lastValidHash) == 0 {
lastValidHash = defaultLatestValidHash
}
return nil, invalidBlock{
error: ErrInvalidPayload,
lastValidHash: bytesutil.ToBytes32(lastValidHash),
}
default:
log.WithError(err).Error(ErrUndefinedExecutionEngineError)
return nil, nil
}
}


@@ -62,6 +62,7 @@ type Service struct {
syncComplete chan struct{}
blobNotifiers *blobNotifierMap
blockBeingSynced *currentlySyncingBlock
payloadBeingSynced *currentlySyncingBlock
blobStorage *filesystem.BlobStorage
dataColumnStorage *filesystem.DataColumnStorage
slasherEnabled bool
@@ -186,6 +187,7 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
blobNotifiers: bn,
cfg: &config{},
blockBeingSynced: &currentlySyncingBlock{roots: make(map[[32]byte]struct{})},
payloadBeingSynced: &currentlySyncingBlock{roots: make(map[[32]byte]struct{})},
syncCommitteeHeadState: cache.NewSyncCommitteeHeadState(),
}
for _, opt := range opts {


@@ -104,7 +104,9 @@ func Test_setupForkchoiceTree_Head(t *testing.T) {
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, roblock)
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)


@@ -700,6 +700,14 @@ func (s *ChainService) InsertNode(ctx context.Context, st state.BeaconState, blo
return nil
}
// InsertPayload mocks the same method in the chain service
func (s *ChainService) InsertPayload(pe interfaces.ROExecutionPayloadEnvelope) error {
if s.ForkChoiceStore != nil {
return s.ForkChoiceStore.InsertPayload(pe)
}
return nil
}
// ForkChoiceDump mocks the same method in the chain service
func (s *ChainService) ForkChoiceDump(ctx context.Context) (*forkchoice2.Dump, error) {
if s.ForkChoiceStore != nil {


@@ -18,6 +18,7 @@ go_library(
"interfaces.go",
"log.go",
"payload_attestation.go",
"payload_committee.go",
"payload_id.go",
"proposer_indices.go",
"proposer_indices_disabled.go", # keep
@@ -78,6 +79,7 @@ go_test(
"committee_fuzz_test.go",
"committee_test.go",
"payload_attestation_test.go",
"payload_committee_test.go",
"payload_id_test.go",
"private_access_test.go",
"proposer_indices_test.go",

beacon-chain/cache/payload_committee.go

@@ -0,0 +1,127 @@
//go:build !fuzz
package cache
import (
"context"
"sync"
"time"
lruwrpr "github.com/OffchainLabs/prysm/v7/cache/lru"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
lru "github.com/hashicorp/golang-lru"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
)
const (
// maxPayloadCommitteeCacheSize is the max number of payload committee entries to cache.
// 64 covers two full epochs of slots.
maxPayloadCommitteeCacheSize = 64
)
var (
// PayloadCommitteeCacheMiss tracks the number of payload committee requests that aren't present in the cache.
PayloadCommitteeCacheMiss = promauto.NewCounter(prometheus.CounterOpts{
Name: "payload_committee_cache_miss",
Help: "The number of payload committee requests that aren't present in the cache.",
})
// PayloadCommitteeCacheHit tracks the number of payload committee requests that are in the cache.
PayloadCommitteeCacheHit = promauto.NewCounter(prometheus.CounterOpts{
Name: "payload_committee_cache_hit",
Help: "The number of payload committee requests that are present in the cache.",
})
)
// PayloadCommitteeCache is an LRU cache for payload timeliness committee results keyed by ptcSeed.
type PayloadCommitteeCache struct {
cache *lru.Cache
lock sync.RWMutex
inProgress map[string]bool
}
// NewPayloadCommitteeCache creates a new cache for storing payload committee results.
func NewPayloadCommitteeCache() *PayloadCommitteeCache {
c := &PayloadCommitteeCache{}
c.Clear()
return c
}
// Get returns the cached payload committee for the given seed. Returns nil on cache miss.
// Blocks if another goroutine is computing the same seed.
func (c *PayloadCommitteeCache) Get(ctx context.Context, seed [32]byte) ([]primitives.ValidatorIndex, error) {
if err := c.checkInProgress(ctx, seed); err != nil {
return nil, err
}
obj, exists := c.cache.Get(key(seed))
if exists {
PayloadCommitteeCacheHit.Inc()
} else {
PayloadCommitteeCacheMiss.Inc()
return nil, nil
}
indices, ok := obj.([]primitives.ValidatorIndex)
if !ok {
return nil, ErrIncorrectType
}
return indices, nil
}
// Add stores a payload committee result in the cache.
func (c *PayloadCommitteeCache) Add(seed [32]byte, indices []primitives.ValidatorIndex) {
c.cache.Add(key(seed), indices)
}
// MarkInProgress marks a seed as being computed. Returns ErrAlreadyInProgress if another
// goroutine is already computing it.
func (c *PayloadCommitteeCache) MarkInProgress(seed [32]byte) error {
c.lock.Lock()
defer c.lock.Unlock()
s := key(seed)
if c.inProgress[s] {
return ErrAlreadyInProgress
}
c.inProgress[s] = true
return nil
}
// MarkNotInProgress releases the in-progress lock on a given seed.
func (c *PayloadCommitteeCache) MarkNotInProgress(seed [32]byte) error {
c.lock.Lock()
defer c.lock.Unlock()
s := key(seed)
delete(c.inProgress, s)
return nil
}
// Clear resets the cache to its initial state.
func (c *PayloadCommitteeCache) Clear() {
c.lock.Lock()
defer c.lock.Unlock()
c.cache = lruwrpr.New(maxPayloadCommitteeCacheSize)
c.inProgress = make(map[string]bool)
}
func (c *PayloadCommitteeCache) checkInProgress(ctx context.Context, seed [32]byte) error {
delay := minDelay
for {
if ctx.Err() != nil {
return ctx.Err()
}
c.lock.RLock()
if !c.inProgress[key(seed)] {
c.lock.RUnlock()
break
}
c.lock.RUnlock()
time.Sleep(time.Duration(delay) * time.Nanosecond)
delay *= delayFactor
delay = min(delay, maxDelay)
}
return nil
}
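The `Get`/`MarkInProgress`/`Add`/`MarkNotInProgress` methods are designed so that exactly one goroutine computes the committee for a given seed while concurrent callers wait. A self-contained sketch of that compute-once call pattern, using a plain map and a retry loop in place of the real LRU and polling wait (all names below are illustrative, not the prysm cache API):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errAlreadyInProgress = errors.New("already in progress")

// miniCache reimplements the cache's compute-once contract with a plain map.
type miniCache struct {
	mu         sync.Mutex
	entries    map[[32]byte][]uint64
	inProgress map[[32]byte]bool
}

func newMiniCache() *miniCache {
	return &miniCache{entries: map[[32]byte][]uint64{}, inProgress: map[[32]byte]bool{}}
}

func (c *miniCache) get(seed [32]byte) []uint64 { c.mu.Lock(); defer c.mu.Unlock(); return c.entries[seed] }

func (c *miniCache) add(seed [32]byte, v []uint64) { c.mu.Lock(); defer c.mu.Unlock(); c.entries[seed] = v }

func (c *miniCache) markInProgress(seed [32]byte) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.inProgress[seed] {
		return errAlreadyInProgress
	}
	c.inProgress[seed] = true
	return nil
}

func (c *miniCache) markNotInProgress(seed [32]byte) { c.mu.Lock(); defer c.mu.Unlock(); delete(c.inProgress, seed) }

// committee returns the committee for seed, computing it at most once:
// losers of the markInProgress race retry get until the winner publishes.
func committee(c *miniCache, seed [32]byte, compute func() []uint64, calls *int) []uint64 {
	for {
		if got := c.get(seed); got != nil {
			return got
		}
		if err := c.markInProgress(seed); err != nil {
			continue // another goroutine is computing; retry
		}
		(*calls)++
		v := compute()
		c.add(seed, v)
		c.markNotInProgress(seed)
		return v
	}
}

func main() {
	c := newMiniCache()
	calls := 0
	seed := [32]byte{'A'}
	for i := 0; i < 3; i++ {
		committee(c, seed, func() []uint64 { return []uint64{1, 2, 3} }, &calls)
	}
	fmt.Println(calls) // computed once, served from cache afterwards
}
```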


@@ -0,0 +1,39 @@
//go:build fuzz
// This file is used in fuzzer builds to bypass the payload committee cache.
package cache
import (
"context"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
)
// FakePayloadCommitteeCache is a no-op implementation of the payload committee cache for fuzz builds.
type FakePayloadCommitteeCache struct{}
// NewPayloadCommitteeCache creates a new fake cache.
func NewPayloadCommitteeCache() *FakePayloadCommitteeCache {
return &FakePayloadCommitteeCache{}
}
// Get is a stub.
func (c *FakePayloadCommitteeCache) Get(_ context.Context, _ [32]byte) ([]primitives.ValidatorIndex, error) {
return nil, nil
}
// Add is a stub.
func (c *FakePayloadCommitteeCache) Add(_ [32]byte, _ []primitives.ValidatorIndex) {}
// MarkInProgress is a stub.
func (c *FakePayloadCommitteeCache) MarkInProgress(_ [32]byte) error {
return nil
}
// MarkNotInProgress is a stub.
func (c *FakePayloadCommitteeCache) MarkNotInProgress(_ [32]byte) error {
return nil
}
// Clear is a stub.
func (c *FakePayloadCommitteeCache) Clear() {}


@@ -0,0 +1,94 @@
//go:build !fuzz
package cache
import (
"context"
"strconv"
"testing"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
)
func TestPayloadCommitteeCache_MissOnEmpty(t *testing.T) {
c := NewPayloadCommitteeCache()
seed := [32]byte{'A'}
indices, err := c.Get(t.Context(), seed)
require.NoError(t, err)
assert.Equal(t, true, indices == nil, "Expected nil on empty cache")
}
func TestPayloadCommitteeCache_AddThenHit(t *testing.T) {
c := NewPayloadCommitteeCache()
seed := [32]byte{'A'}
want := []primitives.ValidatorIndex{1, 2, 3, 4, 5}
c.Add(seed, want)
got, err := c.Get(t.Context(), seed)
require.NoError(t, err)
assert.DeepEqual(t, want, got)
}
func TestPayloadCommitteeCache_LRUEviction(t *testing.T) {
c := NewPayloadCommitteeCache()
// Fill beyond capacity.
for i := range maxPayloadCommitteeCacheSize + 10 {
s := bytesutil.ToBytes32([]byte(strconv.Itoa(i)))
c.Add(s, []primitives.ValidatorIndex{primitives.ValidatorIndex(i)})
}
// Oldest entries should be evicted.
s := bytesutil.ToBytes32([]byte(strconv.Itoa(0)))
got, err := c.Get(t.Context(), s)
require.NoError(t, err)
assert.Equal(t, true, got == nil, "Expected oldest entry to be evicted")
// Newest entry should still be present.
s = bytesutil.ToBytes32([]byte(strconv.Itoa(maxPayloadCommitteeCacheSize + 9)))
got, err = c.Get(t.Context(), s)
require.NoError(t, err)
assert.Equal(t, 1, len(got))
}
func TestPayloadCommitteeCache_CancelledContext(t *testing.T) {
c := NewPayloadCommitteeCache()
seed := [32]byte{'A'}
// Mark in progress so Get blocks.
require.NoError(t, c.MarkInProgress(seed))
ctx, cancel := context.WithCancel(t.Context())
cancel()
_, err := c.Get(ctx, seed)
require.ErrorIs(t, err, context.Canceled)
require.NoError(t, c.MarkNotInProgress(seed))
}
func TestPayloadCommitteeCache_MarkInProgressDuplicate(t *testing.T) {
c := NewPayloadCommitteeCache()
seed := [32]byte{'A'}
require.NoError(t, c.MarkInProgress(seed))
err := c.MarkInProgress(seed)
assert.Equal(t, ErrAlreadyInProgress, err)
require.NoError(t, c.MarkNotInProgress(seed))
}
func TestPayloadCommitteeCache_Clear(t *testing.T) {
c := NewPayloadCommitteeCache()
seed := [32]byte{'A'}
c.Add(seed, []primitives.ValidatorIndex{1, 2, 3})
c.Clear()
got, err := c.Get(t.Context(), seed)
require.NoError(t, err)
assert.Equal(t, true, got == nil, "Expected nil after Clear")
}


@@ -11,15 +11,18 @@ go_library(
"payload_attestation.go",
"pending_payment.go",
"proposer_slashing.go",
"upgrade.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/beacon-chain/core/gloas",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/requests:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types:go_default_library",
@@ -49,25 +52,30 @@ go_test(
"payload_test.go",
"pending_payment_test.go",
"proposer_slashing_test.go",
"upgrade_test.go",
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//beacon-chain/state/testing:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//crypto/bls/common:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/validator-client:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",

View File

@@ -6,8 +6,6 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
@@ -122,7 +120,7 @@ func applyBuilderDepositRequest(beaconState state.BeaconState, request *enginev1
pubkey := bytesutil.ToBytes48(request.Pubkey)
_, isValidator := beaconState.ValidatorIndexByPubkey(pubkey)
idx, isBuilder := beaconState.BuilderIndexByPubkey(pubkey)
isBuilderPrefix := helpers.IsBuilderWithdrawalCredential(request.WithdrawalCredentials)
if !isBuilder && (!isBuilderPrefix || isValidator) {
return false, nil
}
@@ -173,8 +171,3 @@ func applyDepositForNewBuilder(
withdrawalCredBytes := bytesutil.ToBytes32(withdrawalCredentials)
return beaconState.AddBuilderFromDeposit(pubkeyBytes, withdrawalCredBytes, amount)
}
func IsBuilderWithdrawalCredential(withdrawalCredentials []byte) bool {
return len(withdrawalCredentials) == fieldparams.RootLength &&
withdrawalCredentials[0] == params.BeaconConfig().BuilderWithdrawalPrefixByte
}

View File

@@ -108,7 +108,7 @@ func ProcessExecutionPayload(
st state.BeaconState,
signedEnvelope interfaces.ROSignedExecutionPayloadEnvelope,
) error {
if err := VerifyExecutionPayloadEnvelopeSignature(st, signedEnvelope); err != nil {
return errors.Wrap(err, "signature verification failed")
}
@@ -301,25 +301,28 @@ func processExecutionRequests(ctx context.Context, st state.BeaconState, rqs *en
return nil
}
// VerifyExecutionPayloadEnvelopeSignature verifies the BLS signature on a signed execution payload envelope.
// <spec fn="verify_execution_payload_envelope_signature" fork="gloas" style="full" hash="49483ae2">
// def verify_execution_payload_envelope_signature(
//     state: BeaconState, signed_envelope: SignedExecutionPayloadEnvelope
// ) -> bool:
//     builder_index = signed_envelope.message.builder_index
//     if builder_index == BUILDER_INDEX_SELF_BUILD:
//         validator_index = state.latest_block_header.proposer_index
//         pubkey = state.validators[validator_index].pubkey
//     else:
//         pubkey = state.builders[builder_index].pubkey
//
//     signing_root = compute_signing_root(
//         signed_envelope.message, get_domain(state, DOMAIN_BEACON_BUILDER)
//     )
//     return bls.Verify(pubkey, signing_root, signed_envelope.signature)
// </spec>
func VerifyExecutionPayloadEnvelopeSignature(st state.BeaconState, signedEnvelope interfaces.ROSignedExecutionPayloadEnvelope) error {
envelope, err := signedEnvelope.Envelope()
if err != nil {
return fmt.Errorf("failed to get envelope: %w", err)

View File
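The pubkey selection in verify_execution_payload_envelope_signature above — a self-build envelope is signed by the slot's proposer, any other envelope by the builder the bid names — can be sketched in isolation. The sentinel constant, string keys, and function name here are illustrative simplifications, not Prysm or spec APIs:

```go
package main

import "fmt"

// builderIndexSelfBuild stands in for the spec's BUILDER_INDEX_SELF_BUILD
// sentinel; the real value and key types differ.
const builderIndexSelfBuild = ^uint64(0)

// signerPubkey picks the key that must have signed the envelope:
// the proposer's key for a self-build, otherwise the named builder's key.
func signerPubkey(builderIndex, proposerIndex uint64, validatorKeys, builderKeys []string) string {
	if builderIndex == builderIndexSelfBuild {
		return validatorKeys[proposerIndex]
	}
	return builderKeys[builderIndex]
}

func main() {
	vals := []string{"v0", "v1"}
	builders := []string{"b0"}
	fmt.Println(signerPubkey(builderIndexSelfBuild, 1, vals, builders)) // proposer signs
	fmt.Println(signerPubkey(0, 1, vals, builders))                     // builder signs
}
```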

@@ -7,6 +7,7 @@ import (
"fmt"
"slices"
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
@@ -122,6 +123,37 @@ func PayloadCommittee(ctx context.Context, st state.ReadOnlyBeaconState, slot pr
return nil, err
}
// Try cache, then acquire the in-progress lock. If another goroutine
// is already computing, wait for it and retry. This loop ensures that
// exactly one goroutine computes at a time, avoiding thundering herd
// when a prior computation fails without populating the cache.
for {
if ctx.Err() != nil {
return nil, ctx.Err()
}
cached, err := helpers.PayloadCommitteeFromCache(ctx, seed)
if err != nil {
return nil, err
}
if cached != nil {
return cached, nil
}
if err := helpers.MarkPayloadCommitteeInProgress(seed); err != nil {
if !errors.Is(err, cache.ErrAlreadyInProgress) {
return nil, err
}
// Another goroutine is computing. PayloadCommitteeFromCache
// will block (via checkInProgress backoff) until it finishes,
// then we loop back to check the cache and retry.
continue
}
// We own the in-progress lock.
break
}
defer func() {
_ = helpers.MarkPayloadCommitteeNotInProgress(seed)
}()
activeCount, err := helpers.ActiveValidatorCount(ctx, st, epoch)
if err != nil {
return nil, err
@@ -153,6 +185,8 @@ func PayloadCommittee(ctx context.Context, st state.ReadOnlyBeaconState, slot pr
}
}
helpers.AddPayloadCommittee(seed, selected)
return selected, nil
}

View File
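The loop in PayloadCommittee above implements a single-flight pattern: one goroutine owns the in-progress marker and computes, concurrent callers block and retry the cache, and a failed computation simply releases the marker so the next caller computes. A minimal standalone sketch of that pattern (illustrative names, not the Prysm cache API; a channel stands in for the cache's checkInProgress backoff):

```go
package main

import (
	"fmt"
	"sync"
)

// singleFlight lets exactly one goroutine compute the value for a key
// while concurrent callers wait and then reuse the cached result.
type singleFlight struct {
	mu       sync.Mutex
	inFlight map[string]chan struct{} // closed when a computation finishes
	cache    map[string]int
	computes int // how many times compute actually ran
}

func newSingleFlight() *singleFlight {
	return &singleFlight{
		inFlight: make(map[string]chan struct{}),
		cache:    make(map[string]int),
	}
}

func (s *singleFlight) get(key string, compute func() int) int {
	for {
		s.mu.Lock()
		if v, ok := s.cache[key]; ok { // cache hit
			s.mu.Unlock()
			return v
		}
		if ch, busy := s.inFlight[key]; busy {
			// Another goroutine is computing: wait, then retry the cache.
			s.mu.Unlock()
			<-ch
			continue
		}
		// We own the in-progress marker.
		ch := make(chan struct{})
		s.inFlight[key] = ch
		s.mu.Unlock()

		v := compute()

		s.mu.Lock()
		s.cache[key] = v
		s.computes++
		delete(s.inFlight, key)
		s.mu.Unlock()
		close(ch) // wake all waiters
		return v
	}
}

func main() {
	sf := newSingleFlight()
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ { // eight concurrent callers, one computation
		wg.Add(1)
		go func() {
			defer wg.Done()
			sf.get("seed", func() int { return 42 })
		}()
	}
	wg.Wait()
	fmt.Println(sf.get("seed", func() int { return 0 }), sf.computes) // prints "42 1"
}
```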

@@ -2,7 +2,9 @@ package gloas_test
import (
"bytes"
"sync"
"testing"
"time"
"github.com/OffchainLabs/go-bitfield"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/gloas"
@@ -15,7 +17,10 @@ import (
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/crypto/bls"
"github.com/OffchainLabs/prysm/v7/crypto/bls/common"
"github.com/OffchainLabs/prysm/v7/crypto/hash"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
eth "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
testutil "github.com/OffchainLabs/prysm/v7/testing/util"
"github.com/OffchainLabs/prysm/v7/time/slots"
@@ -303,3 +308,115 @@ func (s *validatorLookupErrState) ValidatorAtIndexReadOnly(idx primitives.Valida
}
return s.BeaconState.ValidatorAtIndexReadOnly(idx)
}
// ptcSeed mirrors the seed derivation in PayloadCommittee so the test can
// pre-mark the seed as in-progress.
func ptcSeed(t *testing.T, st state.ReadOnlyBeaconState, slot primitives.Slot) [32]byte {
epoch := slots.ToEpoch(slot)
seed, err := helpers.Seed(st, epoch, params.BeaconConfig().DomainPTCAttester)
require.NoError(t, err)
return hash.Hash(append(seed[:], bytesutil.Bytes8(uint64(slot))...))
}
// TestPayloadCommittee_ConcurrentInProgress verifies that when another
// goroutine holds the in-progress lock and then releases WITHOUT populating
// the cache (simulating a failed computation), PayloadCommittee falls through
// and computes the result itself instead of returning an error.
func TestPayloadCommittee_ConcurrentInProgress(t *testing.T) {
helpers.ClearCache()
setupTestConfig(t)
_, pk1 := newKey(t)
_, pk2 := newKey(t)
vals := []*eth.Validator{activeValidator(pk1), activeValidator(pk2)}
st := newTestState(t, vals, 2)
slot := primitives.Slot(1)
seed := ptcSeed(t, st, slot)
// Simulate another goroutine holding the lock.
require.NoError(t, helpers.MarkPayloadCommitteeInProgress(seed))
// Release the lock after a short delay WITHOUT adding to cache,
// simulating the other goroutine failing.
go func() {
time.Sleep(20 * time.Millisecond)
_ = helpers.MarkPayloadCommitteeNotInProgress(seed)
}()
// PayloadCommittee should wait, see no cache entry, and compute itself.
ptc, err := gloas.PayloadCommittee(t.Context(), st, slot)
require.NoError(t, err)
assert.Equal(t, true, len(ptc) > 0, "expected non-empty PTC")
}
// TestPayloadCommittee_ConcurrentCacheHit verifies that when another goroutine
// holds the in-progress lock and then populates the cache, a concurrent caller
// gets the cached result.
func TestPayloadCommittee_ConcurrentCacheHit(t *testing.T) {
helpers.ClearCache()
setupTestConfig(t)
_, pk1 := newKey(t)
_, pk2 := newKey(t)
vals := []*eth.Validator{activeValidator(pk1), activeValidator(pk2)}
st := newTestState(t, vals, 2)
slot := primitives.Slot(1)
seed := ptcSeed(t, st, slot)
// First, compute the expected result.
expected, err := gloas.PayloadCommittee(t.Context(), st, slot)
require.NoError(t, err)
helpers.ClearCache()
// Simulate another goroutine that will populate the cache.
require.NoError(t, helpers.MarkPayloadCommitteeInProgress(seed))
go func() {
time.Sleep(20 * time.Millisecond)
helpers.AddPayloadCommittee(seed, expected)
_ = helpers.MarkPayloadCommitteeNotInProgress(seed)
}()
ptc, err := gloas.PayloadCommittee(t.Context(), st, slot)
require.NoError(t, err)
assert.DeepEqual(t, expected, ptc)
}
// TestPayloadCommittee_ParallelCallers verifies that multiple concurrent
// callers all get the correct result without errors.
func TestPayloadCommittee_ParallelCallers(t *testing.T) {
helpers.ClearCache()
setupTestConfig(t)
_, pk1 := newKey(t)
_, pk2 := newKey(t)
vals := []*eth.Validator{activeValidator(pk1), activeValidator(pk2)}
st := newTestState(t, vals, 2)
slot := primitives.Slot(1)
// Compute expected result first.
expected, err := gloas.PayloadCommittee(t.Context(), st, slot)
require.NoError(t, err)
helpers.ClearCache()
const numCallers = 8
var wg sync.WaitGroup
errs := make([]error, numCallers)
results := make([][]primitives.ValidatorIndex, numCallers)
for i := range numCallers {
wg.Add(1)
go func(idx int) {
defer wg.Done()
results[idx], errs[idx] = gloas.PayloadCommittee(t.Context(), st, slot)
}(i)
}
wg.Wait()
for i := range numCallers {
require.NoError(t, errs[i], "caller %d returned error", i)
assert.DeepEqual(t, expected, results[i], "caller %d got wrong result", i)
}
}

View File

@@ -69,16 +69,16 @@ func buildPayloadFixture(t *testing.T, mutate func(payload *enginev1.ExecutionPa
}
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: parentHash,
ParentBlockRoot: bytes.Repeat([]byte{0xDD}, 32),
BlockHash: blockHash,
PrevRandao: randao,
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 0,
ExecutionPayment: 0,
FeeRecipient: bytes.Repeat([]byte{0xEE}, 20),
}
header := &ethpb.BeaconBlockHeader{
@@ -91,11 +91,11 @@ func buildPayloadFixture(t *testing.T, mutate func(payload *enginev1.ExecutionPa
require.NoError(t, err)
envelope := &ethpb.ExecutionPayloadEnvelope{
Slot: slot,
BuilderIndex: builderIdx,
BeaconBlockRoot: headerRoot[:],
Payload: payload,
ExecutionRequests: &enginev1.ExecutionRequests{},
}
if mutate != nil {
@@ -297,14 +297,14 @@ func TestVerifyExecutionPayloadEnvelopeSignature(t *testing.T) {
signed, err := blocks.WrappedROSignedExecutionPayloadEnvelope(signedProto)
require.NoError(t, err)
require.NoError(t, VerifyExecutionPayloadEnvelopeSignature(st, signed))
})
t.Run("builder", func(t *testing.T) {
signed, err := blocks.WrappedROSignedExecutionPayloadEnvelope(fixture.signedProto)
require.NoError(t, err)
require.NoError(t, VerifyExecutionPayloadEnvelopeSignature(fixture.state, signed))
})
t.Run("invalid signature", func(t *testing.T) {
@@ -330,7 +330,7 @@ func TestVerifyExecutionPayloadEnvelopeSignature(t *testing.T) {
badSigned, err := blocks.WrappedROSignedExecutionPayloadEnvelope(signedProto)
require.NoError(t, err)
err = VerifyExecutionPayloadEnvelopeSignature(st, badSigned)
require.ErrorContains(t, "invalid signature format", err)
})
@@ -342,7 +342,7 @@ func TestVerifyExecutionPayloadEnvelopeSignature(t *testing.T) {
badSigned, err := blocks.WrappedROSignedExecutionPayloadEnvelope(signedProto)
require.NoError(t, err)
err = VerifyExecutionPayloadEnvelopeSignature(fixture.state, badSigned)
require.ErrorContains(t, "invalid signature format", err)
})
})

View File

@@ -0,0 +1,306 @@
package gloas
import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/pkg/errors"
)
// UpgradeToGloas upgrades a generic pre-Gloas beacon state and returns the resulting Gloas state.
//
// <spec fn="upgrade_to_gloas" fork="gloas" hash="6e66df25">
// def upgrade_to_gloas(pre: fulu.BeaconState) -> BeaconState:
// epoch = fulu.get_current_epoch(pre)
//
// post = BeaconState(
// genesis_time=pre.genesis_time,
// genesis_validators_root=pre.genesis_validators_root,
// slot=pre.slot,
// fork=Fork(
// previous_version=pre.fork.current_version,
// # [Modified in Gloas:EIP7732]
// current_version=GLOAS_FORK_VERSION,
// epoch=epoch,
// ),
// latest_block_header=pre.latest_block_header,
// block_roots=pre.block_roots,
// state_roots=pre.state_roots,
// historical_roots=pre.historical_roots,
// eth1_data=pre.eth1_data,
// eth1_data_votes=pre.eth1_data_votes,
// eth1_deposit_index=pre.eth1_deposit_index,
// validators=pre.validators,
// balances=pre.balances,
// randao_mixes=pre.randao_mixes,
// slashings=pre.slashings,
// previous_epoch_participation=pre.previous_epoch_participation,
// current_epoch_participation=pre.current_epoch_participation,
// justification_bits=pre.justification_bits,
// previous_justified_checkpoint=pre.previous_justified_checkpoint,
// current_justified_checkpoint=pre.current_justified_checkpoint,
// finalized_checkpoint=pre.finalized_checkpoint,
// inactivity_scores=pre.inactivity_scores,
// current_sync_committee=pre.current_sync_committee,
// next_sync_committee=pre.next_sync_committee,
// # [Modified in Gloas:EIP7732]
// # Removed `latest_execution_payload_header`
// # [New in Gloas:EIP7732]
// latest_execution_payload_bid=ExecutionPayloadBid(
// block_hash=pre.latest_execution_payload_header.block_hash,
// ),
// next_withdrawal_index=pre.next_withdrawal_index,
// next_withdrawal_validator_index=pre.next_withdrawal_validator_index,
// historical_summaries=pre.historical_summaries,
// deposit_requests_start_index=pre.deposit_requests_start_index,
// deposit_balance_to_consume=pre.deposit_balance_to_consume,
// exit_balance_to_consume=pre.exit_balance_to_consume,
// earliest_exit_epoch=pre.earliest_exit_epoch,
// consolidation_balance_to_consume=pre.consolidation_balance_to_consume,
// earliest_consolidation_epoch=pre.earliest_consolidation_epoch,
// pending_deposits=pre.pending_deposits,
// pending_partial_withdrawals=pre.pending_partial_withdrawals,
// pending_consolidations=pre.pending_consolidations,
// proposer_lookahead=pre.proposer_lookahead,
// # [New in Gloas:EIP7732]
// builders=[],
// # [New in Gloas:EIP7732]
// next_withdrawal_builder_index=BuilderIndex(0),
// # [New in Gloas:EIP7732]
// execution_payload_availability=[0b1 for _ in range(SLOTS_PER_HISTORICAL_ROOT)],
// # [New in Gloas:EIP7732]
// builder_pending_payments=[BuilderPendingPayment() for _ in range(2 * SLOTS_PER_EPOCH)],
// # [New in Gloas:EIP7732]
// builder_pending_withdrawals=[],
// # [New in Gloas:EIP7732]
// latest_block_hash=pre.latest_execution_payload_header.block_hash,
// # [New in Gloas:EIP7732]
// payload_expected_withdrawals=[],
// )
//
// # [New in Gloas:EIP7732]
// onboard_builders_from_pending_deposits(post)
//
// return post
// </spec>
//
// <spec fn="process_execution_payload_bid" fork="gloas" hash="823c9f3a">
// def process_execution_payload_bid(state: BeaconState, block: BeaconBlock) -> None:
// signed_bid = block.body.signed_execution_payload_bid
// bid = signed_bid.message
// builder_index = bid.builder_index
// amount = bid.value
//
// # For self-builds, amount must be zero regardless of withdrawal credential prefix
// if builder_index == BUILDER_INDEX_SELF_BUILD:
// assert amount == 0
// assert signed_bid.signature == bls.G2_POINT_AT_INFINITY
// else:
// # Verify that the builder is active
// assert is_active_builder(state, builder_index)
// # Verify that the builder has funds to cover the bid
// assert can_builder_cover_bid(state, builder_index, amount)
// # Verify that the bid signature is valid
// assert verify_execution_payload_bid_signature(state, signed_bid)
//
// # Verify commitments are under limit
// assert (
// len(bid.blob_kzg_commitments)
// <= get_blob_parameters(get_current_epoch(state)).max_blobs_per_block
// )
//
// # Verify that the bid is for the current slot
// assert bid.slot == block.slot
// # Verify that the bid is for the right parent block
// assert bid.parent_block_hash == state.latest_block_hash
// assert bid.parent_block_root == block.parent_root
// assert bid.prev_randao == get_randao_mix(state, get_current_epoch(state))
//
// # Record the pending payment if there is some payment
// if amount > 0:
// pending_payment = BuilderPendingPayment(
// weight=0,
// withdrawal=BuilderPendingWithdrawal(
// fee_recipient=bid.fee_recipient,
// amount=amount,
// builder_index=builder_index,
// ),
// )
// state.builder_pending_payments[SLOTS_PER_EPOCH + bid.slot % SLOTS_PER_EPOCH] = (
// pending_payment
// )
//
// # Cache the signed execution payload bid
// state.latest_execution_payload_bid = bid
// </spec>
func UpgradeToGloas(beaconState state.BeaconState) (state.BeaconState, error) {
s, err := upgradeToGloas(beaconState)
if err != nil {
return nil, errors.Wrap(err, "could not convert to gloas")
}
if err := s.OnboardBuildersFromPendingDeposits(); err != nil {
return nil, errors.Wrap(err, "failed to onboard builders from pending deposits")
}
return s, nil
}
func upgradeToGloas(beaconState state.BeaconState) (state.BeaconState, error) {
currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
if err != nil {
return nil, err
}
nextSyncCommittee, err := beaconState.NextSyncCommittee()
if err != nil {
return nil, err
}
prevEpochParticipation, err := beaconState.PreviousEpochParticipation()
if err != nil {
return nil, err
}
currentEpochParticipation, err := beaconState.CurrentEpochParticipation()
if err != nil {
return nil, err
}
inactivityScores, err := beaconState.InactivityScores()
if err != nil {
return nil, err
}
payloadHeader, err := beaconState.LatestExecutionPayloadHeader()
if err != nil {
return nil, err
}
wi, err := beaconState.NextWithdrawalIndex()
if err != nil {
return nil, err
}
vi, err := beaconState.NextWithdrawalValidatorIndex()
if err != nil {
return nil, err
}
summaries, err := beaconState.HistoricalSummaries()
if err != nil {
return nil, err
}
depositRequestsStartIndex, err := beaconState.DepositRequestsStartIndex()
if err != nil {
return nil, err
}
depositBalanceToConsume, err := beaconState.DepositBalanceToConsume()
if err != nil {
return nil, err
}
exitBalanceToConsume, err := beaconState.ExitBalanceToConsume()
if err != nil {
return nil, err
}
earliestExitEpoch, err := beaconState.EarliestExitEpoch()
if err != nil {
return nil, err
}
consolidationBalanceToConsume, err := beaconState.ConsolidationBalanceToConsume()
if err != nil {
return nil, err
}
earliestConsolidationEpoch, err := beaconState.EarliestConsolidationEpoch()
if err != nil {
return nil, err
}
pendingDeposits, err := beaconState.PendingDeposits()
if err != nil {
return nil, err
}
pendingPartialWithdrawals, err := beaconState.PendingPartialWithdrawals()
if err != nil {
return nil, err
}
pendingConsolidations, err := beaconState.PendingConsolidations()
if err != nil {
return nil, err
}
proposerLookahead, err := beaconState.ProposerLookahead()
if err != nil {
return nil, err
}
proposerLookaheadU64 := make([]uint64, len(proposerLookahead))
for i, v := range proposerLookahead {
proposerLookaheadU64[i] = uint64(v)
}
executionPayloadAvailability := make([]byte, int((params.BeaconConfig().SlotsPerHistoricalRoot+7)/8))
for i := range executionPayloadAvailability {
executionPayloadAvailability[i] = 0xff
}
builderPendingPayments := make([]*ethpb.BuilderPendingPayment, int(params.BeaconConfig().SlotsPerEpoch*2))
for i := range builderPendingPayments {
builderPendingPayments[i] = &ethpb.BuilderPendingPayment{
Withdrawal: &ethpb.BuilderPendingWithdrawal{
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
},
}
}
s := &ethpb.BeaconStateGloas{
GenesisTime: uint64(beaconState.GenesisTime().Unix()),
GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
Slot: beaconState.Slot(),
Fork: &ethpb.Fork{
PreviousVersion: beaconState.Fork().CurrentVersion,
CurrentVersion: params.BeaconConfig().GloasForkVersion,
Epoch: time.CurrentEpoch(beaconState),
},
LatestBlockHeader: beaconState.LatestBlockHeader(),
BlockRoots: beaconState.BlockRoots(),
StateRoots: beaconState.StateRoots(),
HistoricalRoots: beaconState.HistoricalRoots(),
Eth1Data: beaconState.Eth1Data(),
Eth1DataVotes: beaconState.Eth1DataVotes(),
Eth1DepositIndex: beaconState.Eth1DepositIndex(),
Validators: beaconState.Validators(),
Balances: beaconState.Balances(),
RandaoMixes: beaconState.RandaoMixes(),
Slashings: beaconState.Slashings(),
PreviousEpochParticipation: prevEpochParticipation,
CurrentEpochParticipation: currentEpochParticipation,
JustificationBits: beaconState.JustificationBits(),
PreviousJustifiedCheckpoint: beaconState.PreviousJustifiedCheckpoint(),
CurrentJustifiedCheckpoint: beaconState.CurrentJustifiedCheckpoint(),
FinalizedCheckpoint: beaconState.FinalizedCheckpoint(),
InactivityScores: inactivityScores,
CurrentSyncCommittee: currentSyncCommittee,
NextSyncCommittee: nextSyncCommittee,
LatestExecutionPayloadBid: &ethpb.ExecutionPayloadBid{
BlockHash: payloadHeader.BlockHash(),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
ParentBlockHash: make([]byte, fieldparams.RootLength),
ParentBlockRoot: make([]byte, fieldparams.RootLength),
PrevRandao: make([]byte, fieldparams.RootLength),
},
NextWithdrawalIndex: wi,
NextWithdrawalValidatorIndex: vi,
HistoricalSummaries: summaries,
DepositRequestsStartIndex: depositRequestsStartIndex,
DepositBalanceToConsume: depositBalanceToConsume,
ExitBalanceToConsume: exitBalanceToConsume,
EarliestExitEpoch: earliestExitEpoch,
ConsolidationBalanceToConsume: consolidationBalanceToConsume,
EarliestConsolidationEpoch: earliestConsolidationEpoch,
PendingDeposits: pendingDeposits,
PendingPartialWithdrawals: pendingPartialWithdrawals,
PendingConsolidations: pendingConsolidations,
ProposerLookahead: proposerLookaheadU64,
Builders: []*ethpb.Builder{},
NextWithdrawalBuilderIndex: primitives.BuilderIndex(0),
ExecutionPayloadAvailability: executionPayloadAvailability,
BuilderPendingPayments: builderPendingPayments,
BuilderPendingWithdrawals: []*ethpb.BuilderPendingWithdrawal{},
LatestBlockHash: payloadHeader.BlockHash(),
PayloadExpectedWithdrawals: []*enginev1.Withdrawal{},
}
return state_native.InitializeFromProtoUnsafeGloas(s)
}

View File
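The upgrade above packs one availability bit per slot into a byte slice: (SLOTS_PER_HISTORICAL_ROOT + 7) / 8 bytes, every bit initialized to 1 (0xff), matching the spec's `[0b1 for _ in range(SLOTS_PER_HISTORICAL_ROOT)]`. A small sketch of that packing, with illustrative helper names:

```go
package main

import "fmt"

// newAvailability allocates ceil(slots/8) bytes with every bit set,
// mirroring how execution_payload_availability is initialized above.
func newAvailability(slots uint64) []byte {
	b := make([]byte, (slots+7)/8)
	for i := range b {
		b[i] = 0xff
	}
	return b
}

// bitAt reads the availability bit for a given slot.
func bitAt(b []byte, slot uint64) bool {
	return b[slot/8]&(1<<(slot%8)) != 0
}

func main() {
	a := newAvailability(8192) // SLOTS_PER_HISTORICAL_ROOT on mainnet
	fmt.Println(len(a), bitAt(a, 0), bitAt(a, 8191)) // prints "1024 true true"
}
```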

@@ -0,0 +1,177 @@
package gloas_test
import (
"bytes"
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/gloas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/time"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
consensusblocks "github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/crypto/bls"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
"github.com/OffchainLabs/prysm/v7/time/slots"
)
func TestUpgradeToGloas_Basic(t *testing.T) {
st, _ := util.DeterministicGenesisStateFulu(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, st.SetHistoricalRoots([][]byte{{1}}))
lookaheadSize := int(params.BeaconConfig().MinSeedLookahead+1) * int(params.BeaconConfig().SlotsPerEpoch)
lookahead := make([]primitives.ValidatorIndex, lookaheadSize)
for i := range lookahead {
lookahead[i] = primitives.ValidatorIndex(i)
}
require.NoError(t, st.SetProposerLookahead(lookahead))
require.NoError(t, st.SetPendingPartialWithdrawals([]*ethpb.PendingPartialWithdrawal{{Index: 1, Amount: 2}}))
require.NoError(t, st.SetPendingConsolidations([]*ethpb.PendingConsolidation{{SourceIndex: 3, TargetIndex: 4}}))
blockHash := bytes.Repeat([]byte{0xAB}, 32)
header := &enginev1.ExecutionPayloadHeaderDeneb{BlockHash: blockHash}
wrappedHeader, err := consensusblocks.WrappedExecutionPayloadHeaderDeneb(header)
require.NoError(t, err)
require.NoError(t, st.SetLatestExecutionPayloadHeader(wrappedHeader))
preForkState := st.Copy()
mSt, err := gloas.UpgradeToGloas(st)
require.NoError(t, err)
require.Equal(t, preForkState.GenesisTime(), mSt.GenesisTime())
require.DeepSSZEqual(t, preForkState.GenesisValidatorsRoot(), mSt.GenesisValidatorsRoot())
require.Equal(t, preForkState.Slot(), mSt.Slot())
require.DeepSSZEqual(t, &ethpb.Fork{
PreviousVersion: st.Fork().CurrentVersion,
CurrentVersion: params.BeaconConfig().GloasForkVersion,
Epoch: time.CurrentEpoch(st),
}, mSt.Fork())
bid, err := mSt.LatestExecutionPayloadBid()
require.NoError(t, err)
wantBlockHash := [32]byte{}
copy(wantBlockHash[:], blockHash)
require.DeepSSZEqual(t, wantBlockHash, bid.BlockHash())
require.DeepSSZEqual(t, [20]byte{}, bid.FeeRecipient())
require.DeepSSZEqual(t, [32]byte{}, bid.ParentBlockHash())
require.DeepSSZEqual(t, [32]byte{}, bid.ParentBlockRoot())
require.DeepSSZEqual(t, [32]byte{}, bid.PrevRandao())
latestBlockHash, err := mSt.LatestBlockHash()
require.NoError(t, err)
require.DeepSSZEqual(t, blockHash, latestBlockHash[:])
pbState, ok := mSt.ToProtoUnsafe().(*ethpb.BeaconStateGloas)
require.Equal(t, true, ok)
expectedAvailLen := int((params.BeaconConfig().SlotsPerHistoricalRoot + 7) / 8)
require.Equal(t, expectedAvailLen, len(pbState.ExecutionPayloadAvailability))
for _, b := range pbState.ExecutionPayloadAvailability {
require.Equal(t, byte(0xff), b)
}
require.Equal(t, 0, len(pbState.Builders))
require.Equal(t, primitives.BuilderIndex(0), pbState.NextWithdrawalBuilderIndex)
require.Equal(t, 0, len(pbState.BuilderPendingWithdrawals))
require.Equal(t, 0, len(pbState.PayloadExpectedWithdrawals))
require.Equal(t, int(params.BeaconConfig().SlotsPerEpoch*2), len(pbState.BuilderPendingPayments))
for _, payment := range pbState.BuilderPendingPayments {
require.NotNil(t, payment)
require.NotNil(t, payment.Withdrawal)
require.Equal(t, fieldparams.FeeRecipientLength, len(payment.Withdrawal.FeeRecipient))
}
ppw, err := mSt.PendingPartialWithdrawals()
require.NoError(t, err)
prePPW, err := preForkState.PendingPartialWithdrawals()
require.NoError(t, err)
require.DeepSSZEqual(t, prePPW, ppw)
pc, err := mSt.PendingConsolidations()
require.NoError(t, err)
prePC, err := preForkState.PendingConsolidations()
require.NoError(t, err)
require.DeepSSZEqual(t, prePC, pc)
}
func TestUpgradeToGloas_OnboardsBuilderDeposit(t *testing.T) {
st, _ := util.DeterministicGenesisStateFulu(t, 4)
sk, err := bls.RandKey()
require.NoError(t, err)
builderCreds := builderWithdrawalCredentials(0xDD)
amount := uint64(1234)
depSlot := primitives.Slot(params.BeaconConfig().SlotsPerEpoch*2 + 3)
deposit := newPendingDeposit(t, sk, builderCreds, amount, depSlot, true)
require.NoError(t, st.SetPendingDeposits([]*ethpb.PendingDeposit{deposit}))
mSt, err := gloas.UpgradeToGloas(st)
require.NoError(t, err)
pbState, ok := mSt.ToProtoUnsafe().(*ethpb.BeaconStateGloas)
require.Equal(t, true, ok)
require.Equal(t, 0, len(pbState.PendingDeposits))
require.Equal(t, 1, len(pbState.Builders))
builder := pbState.Builders[0]
require.DeepSSZEqual(t, sk.PublicKey().Marshal(), builder.Pubkey)
require.DeepSSZEqual(t, builderCreds[12:], builder.ExecutionAddress)
require.Equal(t, primitives.Gwei(amount), builder.Balance)
require.Equal(t, slots.ToEpoch(depSlot), builder.DepositEpoch)
}
func builderWithdrawalCredentials(addrByte byte) []byte {
wc := make([]byte, fieldparams.RootLength)
wc[0] = params.BeaconConfig().BuilderWithdrawalPrefixByte
for i := 12; i < len(wc); i++ {
wc[i] = addrByte
}
return wc
}
func newPendingDeposit(
t *testing.T,
sk bls.SecretKey,
withdrawalCredentials []byte,
amount uint64,
slot primitives.Slot,
valid bool,
) *ethpb.PendingDeposit {
t.Helper()
signature := make([]byte, fieldparams.BLSSignatureLength)
if valid {
signature = signDeposit(t, sk, withdrawalCredentials, amount)
}
return &ethpb.PendingDeposit{
PublicKey: sk.PublicKey().Marshal(),
WithdrawalCredentials: withdrawalCredentials,
Amount: amount,
Signature: signature,
Slot: slot,
}
}
func signDeposit(t *testing.T, sk bls.SecretKey, withdrawalCredentials []byte, amount uint64) []byte {
t.Helper()
domain, err := signing.ComputeDomain(params.BeaconConfig().DomainDeposit, nil, nil)
require.NoError(t, err)
msg := &ethpb.DepositMessage{
PublicKey: sk.PublicKey().Marshal(),
WithdrawalCredentials: withdrawalCredentials,
Amount: amount,
}
signingRoot, err := signing.ComputeSigningRoot(msg, domain)
require.NoError(t, err)
sig := sk.Sign(signingRoot[:])
return sig.Marshal()
}

View File

@@ -6,6 +6,7 @@ go_library(
"attestation.go",
"beacon_committee.go",
"block.go",
"builder.go",
"deposit.go",
"genesis.go",
"legacy.go",

View File

@@ -27,8 +27,9 @@ import (
)
var (
committeeCache = cache.NewCommitteesCache()
proposerIndicesCache = cache.NewProposerIndicesCache()
payloadCommitteeCache = cache.NewPayloadCommitteeCache()
)
type beaconCommitteeFunc = func(
@@ -613,6 +614,28 @@ func ClearCache() {
proposerIndicesCache.Prune(0)
syncCommitteeCache.Clear()
balanceCache.Clear()
payloadCommitteeCache.Clear()
}
// PayloadCommitteeFromCache returns the cached payload committee for the given seed.
// Returns nil on cache miss.
func PayloadCommitteeFromCache(ctx context.Context, seed [32]byte) ([]primitives.ValidatorIndex, error) {
return payloadCommitteeCache.Get(ctx, seed)
}
// AddPayloadCommittee stores a payload committee result in the cache.
func AddPayloadCommittee(seed [32]byte, indices []primitives.ValidatorIndex) {
payloadCommitteeCache.Add(seed, indices)
}
// MarkPayloadCommitteeInProgress marks a payload committee seed as being computed.
func MarkPayloadCommitteeInProgress(seed [32]byte) error {
return payloadCommitteeCache.MarkInProgress(seed)
}
// MarkPayloadCommitteeNotInProgress releases the in-progress lock on a payload committee seed.
func MarkPayloadCommitteeNotInProgress(seed [32]byte) error {
return payloadCommitteeCache.MarkNotInProgress(seed)
}
// ComputeCommittee returns the requested shuffled committee out of the total committees using

View File

@@ -0,0 +1,12 @@
package helpers
import (
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
)
// IsBuilderWithdrawalCredential returns true if the withdrawal credentials indicate a builder.
func IsBuilderWithdrawalCredential(withdrawalCredentials []byte) bool {
return len(withdrawalCredentials) == fieldparams.RootLength &&
withdrawalCredentials[0] == params.BeaconConfig().BuilderWithdrawalPrefixByte
}

View File

@@ -108,6 +108,15 @@ func CanUpgradeToFulu(slot primitives.Slot) bool {
return epochStart && fuluEpoch
}
// CanUpgradeToGloas returns true if the input `slot` can upgrade to Gloas.
// Spec code:
// If state.slot % SLOTS_PER_EPOCH == 0 and compute_epoch_at_slot(state.slot) == GLOAS_FORK_EPOCH
func CanUpgradeToGloas(slot primitives.Slot) bool {
epochStart := slots.IsEpochStart(slot)
gloasEpoch := slots.ToEpoch(slot) == params.BeaconConfig().GloasForkEpoch
return epochStart && gloasEpoch
}
// CanProcessEpoch checks the eligibility to process epoch.
// The epoch can be processed at the end of the last slot of every epoch.
//

View File

@@ -233,6 +233,11 @@ func TestCanUpgradeTo(t *testing.T) {
forkEpoch: &cfg.FuluForkEpoch,
upgradeFunc: time.CanUpgradeToFulu,
},
{
name: "Gloas",
forkEpoch: &cfg.GloasForkEpoch,
upgradeFunc: time.CanUpgradeToGloas,
},
}
for _, otc := range outerTestCases {

View File

@@ -26,6 +26,7 @@ go_library(
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/execution:go_default_library",
"//beacon-chain/core/fulu:go_default_library",
"//beacon-chain/core/gloas:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/requests:go_default_library",
"//beacon-chain/core/time:go_default_library",

View File

@@ -17,6 +17,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/epoch/precompute"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/execution"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/fulu"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/gloas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/config/features"
@@ -402,6 +403,15 @@ func UpgradeState(ctx context.Context, state state.BeaconState) (state.BeaconSta
upgraded = true
}
if time.CanUpgradeToGloas(slot) {
state, err = gloas.UpgradeToGloas(state)
if err != nil {
tracing.AnnotateError(span, err)
return nil, err
}
upgraded = true
}
if upgraded {
log.WithField("version", version.String(state.Version())).Info("Upgraded state to")
}

View File

@@ -26,6 +26,7 @@ go_library(
"//testing/spectest:__subpackages__",
],
deps = [
"//api/server/structs:go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositsnapshot:go_default_library",
@@ -101,6 +102,7 @@ go_test(
data = glob(["testdata/**"]),
embed = [":go_default_library"],
deps = [
"//api/server/structs:go_default_library",
"//async/event:go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/blockchain/testing:go_default_library",

View File

@@ -7,6 +7,7 @@ import (
"strings"
"time"
"github.com/OffchainLabs/prysm/v7/api/server/structs"
"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/execution/types"
@@ -109,6 +110,8 @@ const (
GetBlobsV1 = "engine_getBlobsV1"
// GetBlobsV2 request string for JSON-RPC.
GetBlobsV2 = "engine_getBlobsV2"
// GetClientVersionV1 is the JSON-RPC method that identifies the execution client.
GetClientVersionV1 = "engine_getClientVersionV1"
// Defines the seconds before timing out engine endpoints with non-block execution semantics.
defaultEngineTimeout = time.Second
)
@@ -145,6 +148,7 @@ type EngineCaller interface {
GetPayload(ctx context.Context, payloadId [8]byte, slot primitives.Slot) (*blocks.GetPayloadResponse, error)
ExecutionBlockByHash(ctx context.Context, hash common.Hash, withTxs bool) (*pb.ExecutionBlock, error)
GetTerminalBlockHash(ctx context.Context, transitionTime uint64) ([]byte, bool, error)
GetClientVersionV1(ctx context.Context) ([]*structs.ClientVersionV1, error)
}
var ErrEmptyBlockHash = errors.New("Block hash is empty 0x0000...")
@@ -581,6 +585,39 @@ func (s *Service) GetBlobsV2(ctx context.Context, versionedHashes []common.Hash)
return result, handleRPCError(err)
}
func (s *Service) GetClientVersionV1(ctx context.Context) ([]*structs.ClientVersionV1, error) {
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.GetClientVersionV1")
defer span.End()
commit := version.GitCommit()
if len(commit) >= 8 {
commit = commit[:8]
}
var result []*structs.ClientVersionV1
err := s.rpcClient.CallContext(
ctx,
&result,
GetClientVersionV1,
structs.ClientVersionV1{
Code: "PM",
Name: "Prysm",
Version: version.SemanticVersion(),
Commit: commit,
},
)
if err != nil {
return nil, handleRPCError(err)
}
if len(result) == 0 {
return nil, errors.New("execution client returned no result")
}
return result, nil
}
// ReconstructFullBlock takes in a blinded beacon block and reconstructs
// a beacon block with a full execution payload via the engine API.
func (s *Service) ReconstructFullBlock(

View File

@@ -13,6 +13,7 @@ import (
"strings"
"testing"
"github.com/OffchainLabs/prysm/v7/api/server/structs"
"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/filesystem"
@@ -999,6 +1000,99 @@ func TestClient_HTTP(t *testing.T) {
require.NoError(t, err)
require.DeepEqual(t, want, resp)
})
t.Run(GetClientVersionV1, func(t *testing.T) {
tests := []struct {
name string
want any
resp []*structs.ClientVersionV1
hasError bool
errMsg string
}{
{
name: "happy path",
want: []*structs.ClientVersionV1{{
Code: "GE",
Name: "go-ethereum",
Version: "1.15.11-stable",
Commit: "36b2371c",
}},
resp: []*structs.ClientVersionV1{{
Code: "GE",
Name: "go-ethereum",
Version: "1.15.11-stable",
Commit: "36b2371c",
}},
},
{
name: "empty response",
want: []*structs.ClientVersionV1{},
hasError: true,
errMsg: "execution client returned no result",
},
{
name: "RPC error",
want: "brokenMsg",
hasError: true,
errMsg: "unexpected error in JSON-RPC",
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
enc, err := io.ReadAll(r.Body)
require.NoError(t, err)
jsonRequestString := string(enc)
// We expect the JSON-RPC request string to contain the right method name.
require.Equal(t, true, strings.Contains(
jsonRequestString, GetClientVersionV1,
))
require.Equal(t, true, strings.Contains(
jsonRequestString, "\"code\":\"PM\"",
))
require.Equal(t, true, strings.Contains(
jsonRequestString, "\"name\":\"Prysm\"",
))
require.Equal(t, true, strings.Contains(
jsonRequestString, fmt.Sprintf("\"version\":\"%s\"", version.SemanticVersion()),
))
require.Equal(t, true, strings.Contains(
jsonRequestString, fmt.Sprintf("\"commit\":\"%s\"", version.GitCommit()[:8]),
))
resp := map[string]any{
"jsonrpc": "2.0",
"id": 1,
"result": tc.want,
}
err = json.NewEncoder(w).Encode(resp)
require.NoError(t, err)
}))
defer srv.Close()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
defer rpcClient.Close()
service := &Service{}
service.rpcClient = rpcClient
// We call the RPC method via HTTP and expect a proper result.
resp, err := service.GetClientVersionV1(ctx)
if tc.hasError {
require.NotNil(t, err)
require.ErrorContains(t, tc.errMsg, err)
} else {
require.NoError(t, err)
}
require.DeepEqual(t, tc.resp, resp)
})
}
})
}
func TestReconstructFullBellatrixBlock(t *testing.T) {

View File

@@ -13,6 +13,7 @@ go_library(
"//visibility:public",
],
deps = [
"//api/server/structs:go_default_library",
"//async/event:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/execution/types:go_default_library",

View File

@@ -4,6 +4,7 @@ import (
"context"
"math/big"
"github.com/OffchainLabs/prysm/v7/api/server/structs"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
@@ -42,6 +43,8 @@ type EngineClient struct {
ErrorBlobSidecars error
DataColumnSidecars []blocks.VerifiedRODataColumn
ErrorDataColumnSidecars error
ClientVersion []*structs.ClientVersionV1
ErrorClientVersion error
}
// NewPayload --
@@ -173,3 +176,8 @@ func (e *EngineClient) GetTerminalBlockHash(ctx context.Context, transitionTime
blk = parentBlk
}
}
// GetClientVersionV1 --
func (e *EngineClient) GetClientVersionV1(context.Context) ([]*structs.ClientVersionV1, error) {
return e.ClientVersion, e.ErrorClientVersion
}

View File

@@ -20,6 +20,7 @@ go_library(
"//config/fieldparams:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/forkchoice:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],

View File

@@ -52,6 +52,7 @@ go_test(
srcs = [
"ffg_update_test.go",
"forkchoice_test.go",
"gloas_test.go",
"no_vote_test.go",
"node_test.go",
"on_tick_test.go",
@@ -71,6 +72,7 @@ go_test(
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/forkchoice:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",

View File

@@ -161,7 +161,7 @@ func TestFFGUpdates_TwoBranches(t *testing.T) {
// 7 8
// | |
// 9 10
f.ProcessAttestation(t.Context(), []uint64{0}, indexToHash(1), 0)
f.ProcessAttestation(t.Context(), []uint64{0}, indexToHash(1), 0, true)
// With the additional vote to the left branch, the head should be 9:
// 0 <-- start
@@ -191,7 +191,7 @@ func TestFFGUpdates_TwoBranches(t *testing.T) {
// 7 8
// | |
// 9 10
f.ProcessAttestation(t.Context(), []uint64{1}, indexToHash(2), 0)
f.ProcessAttestation(t.Context(), []uint64{1}, indexToHash(2), 0, true)
// With the additional vote to the right branch, the head should be 10:
// 0 <-- start

View File

@@ -80,24 +80,25 @@ func (f *ForkChoice) Head(
// ProcessAttestation processes an attestation for vote accounting: it iterates over the validator indices
// and updates their votes accordingly.
func (f *ForkChoice) ProcessAttestation(ctx context.Context, validatorIndices []uint64, blockRoot [32]byte, targetEpoch primitives.Epoch) {
func (f *ForkChoice) ProcessAttestation(ctx context.Context, validatorIndices []uint64, blockRoot [32]byte, slot primitives.Slot, payloadStatus bool) {
_, span := trace.StartSpan(ctx, "doublyLinkedForkchoice.ProcessAttestation")
defer span.End()
for _, index := range validatorIndices {
// Validator indices will grow the vote cache.
newVote := false
for index >= uint64(len(f.votes)) {
f.votes = append(f.votes, Vote{currentRoot: params.BeaconConfig().ZeroHash, nextRoot: params.BeaconConfig().ZeroHash})
newVote = true
}
// Newly allocated vote if the root fields are untouched.
newVote := f.votes[index].nextRoot == params.BeaconConfig().ZeroHash &&
f.votes[index].currentRoot == params.BeaconConfig().ZeroHash
// Vote gets updated if it's newly allocated or high target epoch.
if newVote || targetEpoch > f.votes[index].nextEpoch {
f.votes[index].nextEpoch = targetEpoch
targetEpoch := slots.ToEpoch(slot)
nextEpoch := slots.ToEpoch(f.votes[index].nextSlot)
if newVote || targetEpoch > nextEpoch {
f.votes[index].nextSlot = slot
f.votes[index].nextRoot = blockRoot
f.votes[index].nextPayloadStatus = payloadStatus
}
}
@@ -308,43 +309,57 @@ func (f *ForkChoice) updateBalances() error {
}
// Update only if the validator's balance or vote has changed.
if vote.currentRoot != vote.nextRoot || oldBalance != newBalance {
// Ignore the vote if the root is not in fork choice
// store, that means we have not seen the block before.
nextNode, ok := f.store.emptyNodeByRoot[vote.nextRoot]
if ok && vote.nextRoot != zHash {
// Protection against nil node
if nextNode == nil {
return errors.Wrap(ErrNilNode, "could not update balances")
if vote.currentRoot != vote.nextRoot || oldBalance != newBalance || vote.currentPayloadStatus != vote.nextPayloadStatus {
// Add new balance to the next vote target if the root is known.
pn, pending := f.store.resolveVoteNode(vote.nextRoot, vote.nextSlot, vote.nextPayloadStatus)
if pn != nil && vote.nextRoot != zHash {
if pending {
pn.node.balance += newBalance
} else {
pn.balance += newBalance
}
nextNode.balance += newBalance
}
currentNode, ok := f.store.emptyNodeByRoot[vote.currentRoot]
if ok && vote.currentRoot != zHash {
// Protection against nil node
if currentNode == nil {
return errors.Wrap(ErrNilNode, "could not update balances")
}
if currentNode.balance < oldBalance {
log.WithFields(logrus.Fields{
"nodeRoot": fmt.Sprintf("%#x", bytesutil.Trunc(vote.currentRoot[:])),
"oldBalance": oldBalance,
"nodeBalance": currentNode.balance,
"nodeWeight": currentNode.weight,
"proposerBoostRoot": fmt.Sprintf("%#x", bytesutil.Trunc(f.store.proposerBoostRoot[:])),
"previousProposerBoostRoot": fmt.Sprintf("%#x", bytesutil.Trunc(f.store.previousProposerBoostRoot[:])),
"previousProposerBoostScore": f.store.previousProposerBoostScore,
}).Warning("node with invalid balance, setting it to zero")
currentNode.balance = 0
// Subtract old balance from the current vote target if the root is known.
pn, pending = f.store.resolveVoteNode(vote.currentRoot, vote.currentSlot, vote.currentPayloadStatus)
if pn != nil && vote.currentRoot != zHash {
if pending {
if pn.node.balance < oldBalance {
log.WithFields(logrus.Fields{
"nodeRoot": fmt.Sprintf("%#x", bytesutil.Trunc(vote.currentRoot[:])),
"oldBalance": oldBalance,
"nodeBalance": pn.node.balance,
"nodeWeight": pn.node.weight,
"proposerBoostRoot": fmt.Sprintf("%#x", bytesutil.Trunc(f.store.proposerBoostRoot[:])),
"previousProposerBoostRoot": fmt.Sprintf("%#x", bytesutil.Trunc(f.store.previousProposerBoostRoot[:])),
"previousProposerBoostScore": f.store.previousProposerBoostScore,
}).Warning("node with invalid balance, setting it to zero")
pn.node.balance = 0
} else {
pn.node.balance -= oldBalance
}
} else {
currentNode.balance -= oldBalance
if pn.balance < oldBalance {
log.WithFields(logrus.Fields{
"nodeRoot": fmt.Sprintf("%#x", bytesutil.Trunc(vote.currentRoot[:])),
"oldBalance": oldBalance,
"nodeBalance": pn.balance,
"nodeWeight": pn.weight,
"proposerBoostRoot": fmt.Sprintf("%#x", bytesutil.Trunc(f.store.proposerBoostRoot[:])),
"previousProposerBoostRoot": fmt.Sprintf("%#x", bytesutil.Trunc(f.store.previousProposerBoostRoot[:])),
"previousProposerBoostScore": f.store.previousProposerBoostScore,
}).Warning("node with invalid balance, setting it to zero")
pn.balance = 0
} else {
pn.balance -= oldBalance
}
}
}
}
// Rotate the validator vote.
f.votes[index].currentRoot = vote.nextRoot
f.votes[index].currentSlot = vote.nextSlot
f.votes[index].currentPayloadStatus = vote.nextPayloadStatus
}
f.balances = newBalances
return nil
@@ -410,15 +425,23 @@ func (f *ForkChoice) InsertSlashedIndex(_ context.Context, index primitives.Vali
return
}
node, ok := f.store.emptyNodeByRoot[f.votes[index].currentRoot]
if !ok || node == nil {
v := f.votes[index]
pn, pending := f.store.resolveVoteNode(v.currentRoot, v.currentSlot, v.currentPayloadStatus)
if pn == nil {
return
}
if node.balance < f.balances[index] {
node.balance = 0
if pending {
if pn.node.balance < f.balances[index] {
pn.node.balance = 0
} else {
pn.node.balance -= f.balances[index]
}
return
}
if pn.balance < f.balances[index] {
pn.balance = 0
} else {
node.balance -= f.balances[index]
pn.balance -= f.balances[index]
}
}
@@ -617,11 +640,9 @@ func (f *ForkChoice) updateJustifiedBalances(ctx context.Context, root [32]byte)
}
f.justifiedBalances = balances
f.store.committeeWeight = 0
f.numActiveValidators = 0
for _, val := range balances {
if val > 0 {
f.store.committeeWeight += val
f.numActiveValidators++
}
}
f.store.committeeWeight /= uint64(params.BeaconConfig().SlotsPerEpoch)

View File

@@ -93,9 +93,9 @@ func TestForkChoice_UpdateBalancesPositiveChange(t *testing.T) {
require.NoError(t, f.InsertNode(ctx, st, roblock))
f.votes = []Vote{
{indexToHash(1), indexToHash(1), 0},
{indexToHash(2), indexToHash(2), 0},
{indexToHash(3), indexToHash(3), 0},
{indexToHash(1), indexToHash(1), 0, 0, true, true},
{indexToHash(2), indexToHash(2), 0, 0, true, true},
{indexToHash(3), indexToHash(3), 0, 0, true, true},
}
// Each node gets one unique vote. The weight should look like 103 <- 102 <- 101 because
@@ -103,9 +103,9 @@ func TestForkChoice_UpdateBalancesPositiveChange(t *testing.T) {
f.justifiedBalances = []uint64{10, 20, 30}
require.NoError(t, f.updateBalances())
s := f.store
assert.Equal(t, uint64(10), s.emptyNodeByRoot[indexToHash(1)].balance)
assert.Equal(t, uint64(20), s.emptyNodeByRoot[indexToHash(2)].balance)
assert.Equal(t, uint64(30), s.emptyNodeByRoot[indexToHash(3)].balance)
assert.Equal(t, uint64(10), s.fullNodeByRoot[indexToHash(1)].balance)
assert.Equal(t, uint64(20), s.fullNodeByRoot[indexToHash(2)].balance)
assert.Equal(t, uint64(30), s.fullNodeByRoot[indexToHash(3)].balance)
}
func TestForkChoice_UpdateBalancesNegativeChange(t *testing.T) {
@@ -121,22 +121,22 @@ func TestForkChoice_UpdateBalancesNegativeChange(t *testing.T) {
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
s := f.store
s.emptyNodeByRoot[indexToHash(1)].balance = 100
s.emptyNodeByRoot[indexToHash(2)].balance = 100
s.emptyNodeByRoot[indexToHash(3)].balance = 100
s.fullNodeByRoot[indexToHash(1)].balance = 100
s.fullNodeByRoot[indexToHash(2)].balance = 100
s.fullNodeByRoot[indexToHash(3)].balance = 100
f.balances = []uint64{100, 100, 100}
f.votes = []Vote{
{indexToHash(1), indexToHash(1), 0},
{indexToHash(2), indexToHash(2), 0},
{indexToHash(3), indexToHash(3), 0},
{indexToHash(1), indexToHash(1), 0, 0, true, true},
{indexToHash(2), indexToHash(2), 0, 0, true, true},
{indexToHash(3), indexToHash(3), 0, 0, true, true},
}
f.justifiedBalances = []uint64{10, 20, 30}
require.NoError(t, f.updateBalances())
assert.Equal(t, uint64(10), s.emptyNodeByRoot[indexToHash(1)].balance)
assert.Equal(t, uint64(20), s.emptyNodeByRoot[indexToHash(2)].balance)
assert.Equal(t, uint64(30), s.emptyNodeByRoot[indexToHash(3)].balance)
assert.Equal(t, uint64(10), s.fullNodeByRoot[indexToHash(1)].balance)
assert.Equal(t, uint64(20), s.fullNodeByRoot[indexToHash(2)].balance)
assert.Equal(t, uint64(30), s.fullNodeByRoot[indexToHash(3)].balance)
}
func TestForkChoice_UpdateBalancesUnderflow(t *testing.T) {
@@ -152,22 +152,22 @@ func TestForkChoice_UpdateBalancesUnderflow(t *testing.T) {
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
s := f.store
s.emptyNodeByRoot[indexToHash(1)].balance = 100
s.emptyNodeByRoot[indexToHash(2)].balance = 100
s.emptyNodeByRoot[indexToHash(3)].balance = 100
s.fullNodeByRoot[indexToHash(1)].balance = 100
s.fullNodeByRoot[indexToHash(2)].balance = 100
s.fullNodeByRoot[indexToHash(3)].balance = 100
f.balances = []uint64{125, 125, 125}
f.votes = []Vote{
{indexToHash(1), indexToHash(1), 0},
{indexToHash(2), indexToHash(2), 0},
{indexToHash(3), indexToHash(3), 0},
{indexToHash(1), indexToHash(1), 0, 0, true, true},
{indexToHash(2), indexToHash(2), 0, 0, true, true},
{indexToHash(3), indexToHash(3), 0, 0, true, true},
}
f.justifiedBalances = []uint64{10, 20, 30}
require.NoError(t, f.updateBalances())
assert.Equal(t, uint64(0), s.emptyNodeByRoot[indexToHash(1)].balance)
assert.Equal(t, uint64(0), s.emptyNodeByRoot[indexToHash(2)].balance)
assert.Equal(t, uint64(5), s.emptyNodeByRoot[indexToHash(3)].balance)
assert.Equal(t, uint64(0), s.fullNodeByRoot[indexToHash(1)].balance)
assert.Equal(t, uint64(0), s.fullNodeByRoot[indexToHash(2)].balance)
assert.Equal(t, uint64(5), s.fullNodeByRoot[indexToHash(3)].balance)
}
func TestForkChoice_IsCanonical(t *testing.T) {
@@ -332,8 +332,8 @@ func TestForkChoice_RemoveEquivocating(t *testing.T) {
require.Equal(t, [32]byte{'c'}, head)
// Insert two attestations for block b, one for c it becomes head
f.ProcessAttestation(ctx, []uint64{1, 2}, [32]byte{'b'}, 1)
f.ProcessAttestation(ctx, []uint64{3}, [32]byte{'c'}, 1)
f.ProcessAttestation(ctx, []uint64{1, 2}, [32]byte{'b'}, params.BeaconConfig().SlotsPerEpoch, true)
f.ProcessAttestation(ctx, []uint64{3}, [32]byte{'c'}, params.BeaconConfig().SlotsPerEpoch, true)
f.justifiedBalances = []uint64{100, 200, 200, 300}
head, err = f.Head(ctx)
require.NoError(t, err)
@@ -341,21 +341,21 @@ func TestForkChoice_RemoveEquivocating(t *testing.T) {
// Process b's slashing, c is now head
f.InsertSlashedIndex(ctx, 1)
require.Equal(t, uint64(200), f.store.emptyNodeByRoot[[32]byte{'b'}].balance)
require.Equal(t, uint64(200), f.store.fullNodeByRoot[[32]byte{'b'}].balance)
f.justifiedBalances = []uint64{100, 200, 200, 300}
head, err = f.Head(ctx)
require.Equal(t, uint64(200), f.store.emptyNodeByRoot[[32]byte{'b'}].weight)
require.Equal(t, uint64(300), f.store.emptyNodeByRoot[[32]byte{'c'}].weight)
require.Equal(t, uint64(200), f.store.fullNodeByRoot[[32]byte{'b'}].weight)
require.Equal(t, uint64(300), f.store.fullNodeByRoot[[32]byte{'c'}].weight)
require.NoError(t, err)
require.Equal(t, [32]byte{'c'}, head)
// Process b's slashing again, should be a noop
f.InsertSlashedIndex(ctx, 1)
require.Equal(t, uint64(200), f.store.emptyNodeByRoot[[32]byte{'b'}].balance)
require.Equal(t, uint64(200), f.store.fullNodeByRoot[[32]byte{'b'}].balance)
f.justifiedBalances = []uint64{100, 200, 200, 300}
head, err = f.Head(ctx)
require.Equal(t, uint64(200), f.store.emptyNodeByRoot[[32]byte{'b'}].weight)
require.Equal(t, uint64(300), f.store.emptyNodeByRoot[[32]byte{'c'}].weight)
require.Equal(t, uint64(200), f.store.fullNodeByRoot[[32]byte{'b'}].weight)
require.Equal(t, uint64(300), f.store.fullNodeByRoot[[32]byte{'c'}].weight)
require.NoError(t, err)
require.Equal(t, [32]byte{'c'}, head)
@@ -709,7 +709,6 @@ func TestForkchoice_UpdateJustifiedBalances(t *testing.T) {
return balances, nil
}
require.NoError(t, f.updateJustifiedBalances(t.Context(), [32]byte{}))
require.Equal(t, uint64(7), f.numActiveValidators)
require.Equal(t, uint64(430)/32, f.store.committeeWeight)
require.DeepEqual(t, balances, f.justifiedBalances)
}

View File

@@ -4,6 +4,7 @@ import (
"bytes"
"context"
"slices"
"time"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
@@ -211,7 +212,7 @@ func (s *Store) updateBestDescendantConsensusNode(ctx context.Context, n *Node,
n.bestDescendant = en.bestDescendant
return nil
}
// TODO GLOAS: pick between full or empty
// TODO Gloas: pick between full or empty
if err := s.updateBestDescendantPayloadNode(ctx, fn, justifiedEpoch, finalizedEpoch, currentEpoch); err != nil {
return err
}
@@ -287,3 +288,62 @@ func (s *Store) nodeTreeDump(ctx context.Context, n *Node, nodes []*forkchoice2.
}
return nodes, nil
}
// InsertPayload inserts a full node into forkchoice after the Gloas fork.
func (f *ForkChoice) InsertPayload(pe interfaces.ROExecutionPayloadEnvelope) error {
if pe.IsNil() {
return errors.New("cannot insert nil payload")
}
s := f.store
root := pe.BeaconBlockRoot()
en := s.emptyNodeByRoot[root]
if en == nil {
return errors.Wrap(ErrNilNode, "cannot insert full node without an empty one")
}
if _, ok := s.fullNodeByRoot[root]; ok {
// We don't import two payloads for the same root
return nil
}
fn := &PayloadNode{
node: en.node,
optimistic: true,
timestamp: time.Now(),
full: true,
children: make([]*Node, 0),
}
s.fullNodeByRoot[root] = fn
f.updateNewFullNodeWeight(fn)
return nil
}
func (f *ForkChoice) updateNewFullNodeWeight(fn *PayloadNode) {
for index, vote := range f.votes {
if vote.currentRoot == fn.node.root && vote.nextPayloadStatus && index < len(f.balances) {
fn.balance += f.balances[index]
}
}
fn.weight = fn.balance
}
// resolveVoteNode returns the node that should receive a vote's balance, or nil if the root is not known.
// The boolean indicates whether the vote should be applied to the pending node (true) or not.
func (s *Store) resolveVoteNode(r [32]byte, slot primitives.Slot, payloadStatus bool) (*PayloadNode, bool) {
en := s.emptyNodeByRoot[r]
if en == nil {
return nil, true
}
if payloadStatus {
return s.fullNodeByRoot[r], false
}
return en, slot == en.node.slot
}
// BlockHash returns the hash committed in the given block
func (f *ForkChoice) BlockHash(root [32]byte) ([32]byte, error) {
s := f.store
en := s.emptyNodeByRoot[root]
if en == nil || en.node == nil {
return [32]byte{}, errors.Wrap(ErrNilNode, "could not get block hash for root")
}
return en.node.blockHash, nil
}

View File

@@ -0,0 +1,330 @@
package doublylinkedtree
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
)
func prepareGloasForkchoiceState(
_ context.Context,
slot primitives.Slot,
blockRoot [32]byte,
parentRoot [32]byte,
blockHash [32]byte,
parentBlockHash [32]byte,
justifiedEpoch primitives.Epoch,
finalizedEpoch primitives.Epoch,
) (state.BeaconState, blocks.ROBlock, error) {
blockHeader := &ethpb.BeaconBlockHeader{
ParentRoot: parentRoot[:],
}
justifiedCheckpoint := &ethpb.Checkpoint{
Epoch: justifiedEpoch,
}
finalizedCheckpoint := &ethpb.Checkpoint{
Epoch: finalizedEpoch,
}
builderPendingPayments := make([]*ethpb.BuilderPendingPayment, 64)
for i := range builderPendingPayments {
builderPendingPayments[i] = &ethpb.BuilderPendingPayment{
Withdrawal: &ethpb.BuilderPendingWithdrawal{
FeeRecipient: make([]byte, 20),
},
}
}
base := &ethpb.BeaconStateGloas{
Slot: slot,
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
CurrentJustifiedCheckpoint: justifiedCheckpoint,
FinalizedCheckpoint: finalizedCheckpoint,
LatestBlockHeader: blockHeader,
LatestExecutionPayloadBid: &ethpb.ExecutionPayloadBid{
BlockHash: blockHash[:],
ParentBlockHash: parentBlockHash[:],
ParentBlockRoot: make([]byte, 32),
PrevRandao: make([]byte, 32),
FeeRecipient: make([]byte, 20),
BlobKzgCommitments: [][]byte{make([]byte, 48)},
},
Builders: make([]*ethpb.Builder, 0),
BuilderPendingPayments: builderPendingPayments,
ExecutionPayloadAvailability: make([]byte, 1024),
LatestBlockHash: make([]byte, 32),
PayloadExpectedWithdrawals: make([]*enginev1.Withdrawal, 0),
ProposerLookahead: make([]uint64, 64),
}
st, err := state_native.InitializeFromProtoUnsafeGloas(base)
if err != nil {
return nil, blocks.ROBlock{}, err
}
bid := util.HydrateSignedExecutionPayloadBid(&ethpb.SignedExecutionPayloadBid{
Message: &ethpb.ExecutionPayloadBid{
BlockHash: blockHash[:],
ParentBlockHash: parentBlockHash[:],
},
})
blk := util.HydrateSignedBeaconBlockGloas(&ethpb.SignedBeaconBlockGloas{
Block: &ethpb.BeaconBlockGloas{
Slot: slot,
ParentRoot: parentRoot[:],
Body: &ethpb.BeaconBlockBodyGloas{
SignedExecutionPayloadBid: bid,
},
},
})
signed, err := blocks.NewSignedBeaconBlock(blk)
if err != nil {
return nil, blocks.ROBlock{}, err
}
roblock, err := blocks.NewROBlockWithRoot(signed, blockRoot)
return st, roblock, err
}
func prepareGloasForkchoicePayload(
blockRoot [32]byte,
) (interfaces.ROExecutionPayloadEnvelope, error) {
env := &ethpb.ExecutionPayloadEnvelope{
BeaconBlockRoot: blockRoot[:],
Payload: &enginev1.ExecutionPayloadDeneb{},
}
return blocks.WrappedROExecutionPayloadEnvelope(env)
}
func TestInsertGloasBlock_EmptyNodeOnly(t *testing.T) {
f := setup(0, 0)
ctx := t.Context()
root := indexToHash(1)
blockHash := indexToHash(100)
st, roblock, err := prepareGloasForkchoiceState(ctx, 1, root, params.BeaconConfig().ZeroHash, blockHash, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
// Empty node should exist.
en := f.store.emptyNodeByRoot[root]
require.NotNil(t, en)
// Full node should NOT exist.
_, hasFull := f.store.fullNodeByRoot[root]
assert.Equal(t, false, hasFull)
// Parent should be the genesis full node.
genesisRoot := params.BeaconConfig().ZeroHash
genesisFull := f.store.fullNodeByRoot[genesisRoot]
require.NotNil(t, genesisFull)
assert.Equal(t, genesisFull, en.node.parent)
}
func TestInsertPayload_CreatesFullNode(t *testing.T) {
f := setup(0, 0)
ctx := t.Context()
root := indexToHash(1)
blockHash := indexToHash(100)
st, roblock, err := prepareGloasForkchoiceState(ctx, 1, root, params.BeaconConfig().ZeroHash, blockHash, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
require.Equal(t, 2, len(f.store.emptyNodeByRoot))
require.Equal(t, 1, len(f.store.fullNodeByRoot))
pe, err := prepareGloasForkchoicePayload(root)
require.NoError(t, err)
require.NoError(t, f.InsertPayload(pe))
require.Equal(t, 2, len(f.store.fullNodeByRoot))
fn := f.store.fullNodeByRoot[root]
require.NotNil(t, fn)
en := f.store.emptyNodeByRoot[root]
require.NotNil(t, en)
// Empty and full share the same *Node.
assert.Equal(t, en.node, fn.node)
assert.Equal(t, true, fn.optimistic)
assert.Equal(t, true, fn.full)
}
func TestInsertPayload_DuplicateIsNoop(t *testing.T) {
f := setup(0, 0)
ctx := t.Context()
root := indexToHash(1)
blockHash := indexToHash(100)
st, roblock, err := prepareGloasForkchoiceState(ctx, 1, root, params.BeaconConfig().ZeroHash, blockHash, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
pe, err := prepareGloasForkchoicePayload(root)
require.NoError(t, err)
require.NoError(t, f.InsertPayload(pe))
require.Equal(t, 2, len(f.store.fullNodeByRoot))
fn := f.store.fullNodeByRoot[root]
require.NotNil(t, fn)
// Insert again — should be a no-op.
require.NoError(t, f.InsertPayload(pe))
assert.Equal(t, fn, f.store.fullNodeByRoot[root])
require.Equal(t, 2, len(f.store.fullNodeByRoot))
}
func TestInsertPayload_WithoutEmptyNode_Errors(t *testing.T) {
f := setup(0, 0)
root := indexToHash(99)
pe, err := prepareGloasForkchoicePayload(root)
require.NoError(t, err)
err = f.InsertPayload(pe)
require.ErrorContains(t, ErrNilNode.Error(), err)
}
func TestGloasBlock_ChildBuildsOnEmpty(t *testing.T) {
f := setup(0, 0)
ctx := t.Context()
// Insert Gloas block A (empty only).
rootA := indexToHash(1)
blockHashA := indexToHash(100)
st, roblock, err := prepareGloasForkchoiceState(ctx, 1, rootA, params.BeaconConfig().ZeroHash, blockHashA, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
// Insert Gloas block B as child of (A, empty)
rootB := indexToHash(2)
blockHashB := indexToHash(200)
nonMatchingParentHash := indexToHash(999)
st, roblock, err = prepareGloasForkchoiceState(ctx, 2, rootB, rootA, blockHashB, nonMatchingParentHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
emptyA := f.store.emptyNodeByRoot[rootA]
require.NotNil(t, emptyA)
nodeB := f.store.emptyNodeByRoot[rootB]
require.NotNil(t, nodeB)
require.Equal(t, emptyA, nodeB.node.parent)
}
func TestGloasBlock_ChildrenOfEmptyAndFull(t *testing.T) {
f := setup(0, 0)
ctx := t.Context()
// Insert Gloas block A (empty only).
rootA := indexToHash(1)
blockHashA := indexToHash(100)
st, roblock, err := prepareGloasForkchoiceState(ctx, 1, rootA, params.BeaconConfig().ZeroHash, blockHashA, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
// Insert payload for A
pe, err := prepareGloasForkchoicePayload(rootA)
require.NoError(t, err)
require.NoError(t, f.InsertPayload(pe))
// Insert Gloas block B as child of (A, empty)
rootB := indexToHash(2)
blockHashB := indexToHash(200)
nonMatchingParentHash := indexToHash(999)
st, roblock, err = prepareGloasForkchoiceState(ctx, 2, rootB, rootA, blockHashB, nonMatchingParentHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
// Insert Gloas block C as child of (A, full)
rootC := indexToHash(3)
blockHashC := indexToHash(201)
st, roblock, err = prepareGloasForkchoiceState(ctx, 3, rootC, rootA, blockHashC, blockHashA, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
emptyA := f.store.emptyNodeByRoot[rootA]
require.NotNil(t, emptyA)
nodeB := f.store.emptyNodeByRoot[rootB]
require.NotNil(t, nodeB)
require.Equal(t, emptyA, nodeB.node.parent)
nodeC := f.store.emptyNodeByRoot[rootC]
require.NotNil(t, nodeC)
fullA := f.store.fullNodeByRoot[rootA]
require.NotNil(t, fullA)
require.Equal(t, fullA, nodeC.node.parent)
}
func TestBlockHash_ReturnsBlockHash(t *testing.T) {
f := setup(0, 0)
ctx := t.Context()
root := indexToHash(1)
blockHash := indexToHash(100)
st, roblock, err := prepareGloasForkchoiceState(ctx, 1, root, params.BeaconConfig().ZeroHash, blockHash, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
got, err := f.BlockHash(root)
require.NoError(t, err)
assert.Equal(t, blockHash, got)
}
func TestBlockHash_UnknownRoot(t *testing.T) {
f := setup(0, 0)
unknownRoot := indexToHash(999)
_, err := f.BlockHash(unknownRoot)
require.ErrorContains(t, ErrNilNode.Error(), err)
}
func TestBlockHash_GenesisRoot(t *testing.T) {
f := setup(0, 0)
got, err := f.BlockHash(params.BeaconConfig().ZeroHash)
require.NoError(t, err)
assert.Equal(t, [32]byte{}, got)
}
func TestGloasBlock_ChildBuildsOnFull(t *testing.T) {
f := setup(0, 0)
ctx := t.Context()
// Insert Gloas block A (empty only).
rootA := indexToHash(1)
blockHashA := indexToHash(100)
st, roblock, err := prepareGloasForkchoiceState(ctx, 1, rootA, params.BeaconConfig().ZeroHash, blockHashA, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
// Insert payload for A → creates the full node.
pe, err := prepareGloasForkchoicePayload(rootA)
require.NoError(t, err)
require.NoError(t, f.InsertPayload(pe))
fullA := f.store.fullNodeByRoot[rootA]
require.NotNil(t, fullA)
// Child for (A, full)
rootB := indexToHash(2)
blockHashB := indexToHash(200)
st, roblock, err = prepareGloasForkchoiceState(ctx, 2, rootB, rootA, blockHashB, blockHashA, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
nodeB := f.store.emptyNodeByRoot[rootB]
require.NotNil(t, nodeB)
assert.Equal(t, fullA, nodeB.node.parent)
}

View File

@@ -38,7 +38,6 @@ func TestForkChoice_BoostProposerRoot_PreventsExAnteAttack(t *testing.T) {
f := setup(jEpoch, fEpoch)
f.justifiedBalances = balances
f.store.committeeWeight = uint64(len(balances)*10) / uint64(params.BeaconConfig().SlotsPerEpoch)
f.numActiveValidators = uint64(len(balances))
// The head should always start at the finalized block.
headRoot, err := f.Head(ctx)
@@ -64,7 +63,7 @@ func TestForkChoice_BoostProposerRoot_PreventsExAnteAttack(t *testing.T) {
)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
f.ProcessAttestation(ctx, []uint64{0}, newRoot, fEpoch)
f.ProcessAttestation(ctx, []uint64{0}, newRoot, primitives.Slot(fEpoch), true)
headRoot, err = f.Head(ctx)
require.NoError(t, err)
assert.Equal(t, newRoot, headRoot, "Incorrect head for justified epoch at slot 1")
@@ -90,7 +89,7 @@ func TestForkChoice_BoostProposerRoot_PreventsExAnteAttack(t *testing.T) {
)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
f.ProcessAttestation(ctx, []uint64{1}, newRoot, fEpoch)
f.ProcessAttestation(ctx, []uint64{1}, newRoot, primitives.Slot(fEpoch), true)
headRoot, err = f.Head(ctx)
require.NoError(t, err)
assert.Equal(t, newRoot, headRoot, "Incorrect head for justified epoch at slot 2")
@@ -118,7 +117,7 @@ func TestForkChoice_BoostProposerRoot_PreventsExAnteAttack(t *testing.T) {
)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
f.ProcessAttestation(ctx, []uint64{2}, newRoot, fEpoch)
f.ProcessAttestation(ctx, []uint64{2}, newRoot, primitives.Slot(fEpoch), true)
headRoot, err = f.Head(ctx)
require.NoError(t, err)
assert.Equal(t, newRoot, headRoot, "Incorrect head for justified epoch at slot 3")
@@ -147,7 +146,7 @@ func TestForkChoice_BoostProposerRoot_PreventsExAnteAttack(t *testing.T) {
)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
f.ProcessAttestation(ctx, []uint64{3}, newRoot, fEpoch)
f.ProcessAttestation(ctx, []uint64{3}, newRoot, primitives.Slot(fEpoch), true)
headRoot, err = f.Head(ctx)
require.NoError(t, err)
assert.Equal(t, newRoot, headRoot, "Incorrect head for justified epoch at slot 3")
@@ -177,7 +176,7 @@ func TestForkChoice_BoostProposerRoot_PreventsExAnteAttack(t *testing.T) {
// Regression: process attestations for C and check that it
// becomes head; we need two attestations so that C.weight = 30 > 24 = D.weight
f.ProcessAttestation(ctx, []uint64{4, 5}, indexToHash(3), fEpoch)
f.ProcessAttestation(ctx, []uint64{4, 5}, indexToHash(3), primitives.Slot(fEpoch), true)
headRoot, err = f.Head(ctx)
require.NoError(t, err)
assert.Equal(t, indexToHash(3), headRoot, "Incorrect head for justified epoch at slot 4")
@@ -238,10 +237,10 @@ func TestForkChoice_BoostProposerRoot_PreventsExAnteAttack(t *testing.T) {
// The maliciously withheld block has one vote.
votes := []uint64{1}
f.ProcessAttestation(ctx, votes, maliciouslyWithheldBlock, fEpoch)
f.ProcessAttestation(ctx, votes, maliciouslyWithheldBlock, primitives.Slot(fEpoch), true)
// The honest block has one vote.
votes = []uint64{2}
f.ProcessAttestation(ctx, votes, honestBlock, fEpoch)
f.ProcessAttestation(ctx, votes, honestBlock, primitives.Slot(fEpoch), true)
// Ensure the head is STILL C, the honest block, as the honest block had proposer boost.
r, err = f.Head(ctx)
@@ -307,7 +306,7 @@ func TestForkChoice_BoostProposerRoot_PreventsExAnteAttack(t *testing.T) {
// An attestation is received for B that has more voting power than C with the proposer boost,
// allowing B to then become the head if their attestation has enough adversarial votes.
votes := []uint64{1, 2}
f.ProcessAttestation(ctx, votes, maliciouslyWithheldBlock, fEpoch)
f.ProcessAttestation(ctx, votes, maliciouslyWithheldBlock, primitives.Slot(fEpoch), true)
// Expect the head to have switched to B.
r, err = f.Head(ctx)
@@ -331,7 +330,7 @@ func TestForkChoice_BoostProposerRoot_PreventsExAnteAttack(t *testing.T) {
f := setup(jEpoch, fEpoch)
f.justifiedBalances = balances
f.store.committeeWeight = uint64(len(balances)*10) / uint64(params.BeaconConfig().SlotsPerEpoch)
f.numActiveValidators = uint64(len(balances))
a := zeroHash
// The head should always start at the finalized block.
@@ -382,7 +381,7 @@ func TestForkChoice_BoostProposerRoot_PreventsExAnteAttack(t *testing.T) {
// An attestation for C is received at slot N+3.
votes := []uint64{1}
f.ProcessAttestation(ctx, votes, c, fEpoch)
f.ProcessAttestation(ctx, votes, c, primitives.Slot(fEpoch), true)
// A block D, building on B, is received at slot N+3. It should not be able to win without boosting.
dSlot := primitives.Slot(3)
@@ -422,7 +421,7 @@ func TestForkChoice_BoostProposerRoot_PreventsExAnteAttack(t *testing.T) {
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
votes = []uint64{2}
f.ProcessAttestation(ctx, votes, d2, fEpoch)
f.ProcessAttestation(ctx, votes, d2, primitives.Slot(fEpoch), true)
// Ensure D becomes the head thanks to boosting.
r, err = f.Head(ctx)
require.NoError(t, err)

View File

@@ -10,8 +10,8 @@ import (
func TestForkChoice_ShouldOverrideFCU(t *testing.T) {
f := setup(0, 0)
f.numActiveValidators = 640
f.justifiedBalances = make([]uint64, f.numActiveValidators)
numValidators := uint64(640)
f.justifiedBalances = make([]uint64, numValidators)
for i := range f.justifiedBalances {
f.justifiedBalances[i] = uint64(10)
f.store.committeeWeight += uint64(10)
@@ -22,11 +22,11 @@ func TestForkChoice_ShouldOverrideFCU(t *testing.T) {
st, blk, err := prepareForkchoiceState(ctx, 1, [32]byte{'a'}, [32]byte{}, [32]byte{'A'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, blk))
attesters := make([]uint64, f.numActiveValidators-64)
attesters := make([]uint64, numValidators-64)
for i := range attesters {
attesters[i] = uint64(i + 64)
}
f.ProcessAttestation(ctx, attesters, blk.Root(), 0)
f.ProcessAttestation(ctx, attesters, blk.Root(), 0, true)
orphanLateBlockFirstThreshold := time.Duration(params.BeaconConfig().SecondsPerSlot/params.BeaconConfig().IntervalsPerSlot) * time.Second
driftGenesisTime(f, 2, orphanLateBlockFirstThreshold+time.Second)
@@ -107,8 +107,8 @@ func TestForkChoice_ShouldOverrideFCU(t *testing.T) {
func TestForkChoice_GetProposerHead(t *testing.T) {
f := setup(0, 0)
f.numActiveValidators = 640
f.justifiedBalances = make([]uint64, f.numActiveValidators)
numValidators := uint64(640)
f.justifiedBalances = make([]uint64, numValidators)
for i := range f.justifiedBalances {
f.justifiedBalances[i] = uint64(10)
f.store.committeeWeight += uint64(10)
@@ -120,11 +120,11 @@ func TestForkChoice_GetProposerHead(t *testing.T) {
st, blk, err := prepareForkchoiceState(ctx, 1, parentRoot, [32]byte{}, [32]byte{'A'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, blk))
attesters := make([]uint64, f.numActiveValidators-64)
attesters := make([]uint64, numValidators-64)
for i := range attesters {
attesters[i] = uint64(i + 64)
}
f.ProcessAttestation(ctx, attesters, blk.Root(), 0)
f.ProcessAttestation(ctx, attesters, blk.Root(), 0, true)
driftGenesisTime(f, 3, 1*time.Second)
childRoot := [32]byte{'b'}

View File

@@ -136,6 +136,7 @@ func (s *Store) insert(ctx context.Context,
node: n,
optimistic: optimistic,
timestamp: time.Now(),
children: make([]*Node, 0),
}
s.emptyNodeByRoot[root] = pn
ret = pn
@@ -236,7 +237,7 @@ func (s *Store) pruneFinalizedNodeByRootMap(ctx context.Context, node, finalized
// prune prunes the fork choice store. It removes all nodes that compete with the finalized root.
// This function does not prune for invalid optimistically synced nodes, it deals only with pruning upon finalization
// TODO: GLOAS, to ensure that chains up to a full node are found, we may want to consider pruning only up to the latest full block that was finalized
// TODO: Gloas, to ensure that chains up to a full node are found, we may want to consider pruning only up to the latest full block that was finalized
func (s *Store) prune(ctx context.Context) error {
ctx, span := trace.StartSpan(ctx, "doublyLinkedForkchoice.Prune")
defer span.End()

View File

@@ -13,12 +13,11 @@ import (
// ForkChoice defines the overall fork choice store which includes all block nodes, validator's latest votes and balances.
type ForkChoice struct {
sync.RWMutex
store *Store
votes []Vote // tracks individual validator's last vote.
balances []uint64 // tracks individual validator's balances last accounted in votes.
justifiedBalances []uint64 // tracks individual validator's last justified balances.
numActiveValidators uint64 // tracks the total number of active validators.
balancesByRoot forkchoice.BalancesByRooter // handler to obtain balances for the state with a given root
store *Store
votes []Vote // tracks individual validator's last vote.
balances []uint64 // tracks individual validator's balances last accounted in votes.
justifiedBalances []uint64 // tracks individual validator's last justified balances.
balancesByRoot forkchoice.BalancesByRooter // handler to obtain balances for the state with a given root
}
var _ forkchoice.ForkChoicer = (*ForkChoice)(nil)
@@ -78,7 +77,10 @@ type PayloadNode struct {
// Vote defines an individual validator's vote.
type Vote struct {
currentRoot [fieldparams.RootLength]byte // current voting root.
nextRoot [fieldparams.RootLength]byte // next voting root.
nextEpoch primitives.Epoch // epoch of next voting period.
currentRoot [fieldparams.RootLength]byte // current voting root.
nextRoot [fieldparams.RootLength]byte // next voting root.
nextSlot primitives.Slot // slot of the next voting period.
currentSlot primitives.Slot // slot of the current voting period.
nextPayloadStatus bool // whether the next vote is for a full or empty payload
currentPayloadStatus bool // whether the current vote is for a full or empty payload
}
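The Vote struct now records slots rather than epochs, which is why the updated test call sites convert with `params.BeaconConfig().SlotsPerEpoch` (e.g. `2*params.BeaconConfig().SlotsPerEpoch` for epoch 2). A hedged sketch of that conversion, hardcoding the mainnet value of 32 rather than reading Prysm's config:

```go
package main

import "fmt"

const slotsPerEpoch = 32 // mainnet value; configurable in Prysm via params.BeaconConfig()

// epochStartSlot converts an epoch number to the first slot of that epoch,
// mirroring the `epoch * SlotsPerEpoch` pattern used at the updated call sites.
func epochStartSlot(epoch uint64) uint64 {
	return epoch * slotsPerEpoch
}

func main() {
	fmt.Println(epochStartSlot(2)) // 64
	fmt.Println(epochStartSlot(5)) // 160
}
```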

View File

@@ -76,7 +76,7 @@ func TestStore_LongFork(t *testing.T) {
require.NoError(t, f.store.setUnrealizedJustifiedEpoch([32]byte{'c'}, 2))
// Add an attestation to c, it is head
f.ProcessAttestation(ctx, []uint64{0}, [32]byte{'c'}, 1)
f.ProcessAttestation(ctx, []uint64{0}, [32]byte{'c'}, params.BeaconConfig().SlotsPerEpoch, true)
f.justifiedBalances = []uint64{100}
c := f.store.emptyNodeByRoot[[32]byte{'c'}]
require.Equal(t, primitives.Epoch(2), slots.ToEpoch(c.node.slot))
@@ -98,8 +98,8 @@ func TestStore_LongFork(t *testing.T) {
headRoot, err = f.Head(ctx)
require.NoError(t, err)
require.Equal(t, [32]byte{'c'}, headRoot)
require.Equal(t, uint64(0), f.store.emptyNodeByRoot[[32]byte{'d'}].weight)
require.Equal(t, uint64(100), f.store.emptyNodeByRoot[[32]byte{'c'}].weight)
require.Equal(t, uint64(0), f.store.emptyNodeByRoot[[32]byte{'d'}].node.weight)
require.Equal(t, uint64(100), f.store.emptyNodeByRoot[[32]byte{'c'}].node.weight)
}
// Epoch 1 Epoch 2 Epoch 3
@@ -153,7 +153,7 @@ func TestStore_NoDeadLock(t *testing.T) {
require.NoError(t, f.store.setUnrealizedJustifiedEpoch([32]byte{'h'}, 2))
require.NoError(t, f.store.setUnrealizedFinalizedEpoch([32]byte{'h'}, 1))
// Add an attestation for h
f.ProcessAttestation(ctx, []uint64{0}, [32]byte{'h'}, 1)
f.ProcessAttestation(ctx, []uint64{0}, [32]byte{'h'}, params.BeaconConfig().SlotsPerEpoch, true)
// Epoch 3
// Current Head is H
@@ -225,7 +225,7 @@ func TestStore_ForkNextEpoch(t *testing.T) {
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
// Insert an attestation to H, H is head
f.ProcessAttestation(ctx, []uint64{0}, [32]byte{'h'}, 1)
f.ProcessAttestation(ctx, []uint64{0}, [32]byte{'h'}, params.BeaconConfig().SlotsPerEpoch, true)
f.justifiedBalances = []uint64{100}
headRoot, err := f.Head(ctx)
require.NoError(t, err)
@@ -243,8 +243,8 @@ func TestStore_ForkNextEpoch(t *testing.T) {
require.NoError(t, err)
require.Equal(t, [32]byte{'d'}, headRoot)
require.Equal(t, primitives.Epoch(2), f.JustifiedCheckpoint().Epoch)
require.Equal(t, uint64(0), f.store.emptyNodeByRoot[[32]byte{'d'}].weight)
require.Equal(t, uint64(100), f.store.emptyNodeByRoot[[32]byte{'h'}].weight)
require.Equal(t, uint64(0), f.store.emptyNodeByRoot[[32]byte{'d'}].node.weight)
require.Equal(t, uint64(100), f.store.emptyNodeByRoot[[32]byte{'h'}].node.weight)
// Set current epoch to 3, and H's unrealized checkpoint. Check it's head
driftGenesisTime(f, 99, 0)
require.NoError(t, f.store.setUnrealizedJustifiedEpoch([32]byte{'h'}, 2))
@@ -252,8 +252,8 @@ func TestStore_ForkNextEpoch(t *testing.T) {
require.NoError(t, err)
require.Equal(t, [32]byte{'h'}, headRoot)
require.Equal(t, primitives.Epoch(2), f.JustifiedCheckpoint().Epoch)
require.Equal(t, uint64(0), f.store.emptyNodeByRoot[[32]byte{'d'}].weight)
require.Equal(t, uint64(100), f.store.emptyNodeByRoot[[32]byte{'h'}].weight)
require.Equal(t, uint64(0), f.store.emptyNodeByRoot[[32]byte{'d'}].node.weight)
require.Equal(t, uint64(100), f.store.emptyNodeByRoot[[32]byte{'h'}].node.weight)
}
func TestStore_PullTips_Heuristics(t *testing.T) {

View File

@@ -46,7 +46,7 @@ func TestVotes_CanFindHead(t *testing.T) {
// 0
// / \
// 2 1 <- +vote, new head
f.ProcessAttestation(t.Context(), []uint64{0}, indexToHash(1), 2)
f.ProcessAttestation(t.Context(), []uint64{0}, indexToHash(1), 2*params.BeaconConfig().SlotsPerEpoch, true)
r, err = f.Head(t.Context())
require.NoError(t, err)
assert.Equal(t, indexToHash(1), r, "Incorrect head with justified epoch at 1")
@@ -55,7 +55,7 @@ func TestVotes_CanFindHead(t *testing.T) {
// 0
// / \
// vote, new head -> 2 1
f.ProcessAttestation(t.Context(), []uint64{1}, indexToHash(2), 2)
f.ProcessAttestation(t.Context(), []uint64{1}, indexToHash(2), 2*params.BeaconConfig().SlotsPerEpoch, true)
r, err = f.Head(t.Context())
require.NoError(t, err)
assert.Equal(t, indexToHash(2), r, "Incorrect head with justified epoch at 1")
@@ -80,7 +80,7 @@ func TestVotes_CanFindHead(t *testing.T) {
// head -> 2 1 <- old vote
// |
// 3 <- new vote
f.ProcessAttestation(t.Context(), []uint64{0}, indexToHash(3), 3)
f.ProcessAttestation(t.Context(), []uint64{0}, indexToHash(3), 3*params.BeaconConfig().SlotsPerEpoch, true)
r, err = f.Head(t.Context())
require.NoError(t, err)
assert.Equal(t, indexToHash(2), r, "Incorrect head with justified epoch at 1")
@@ -91,7 +91,7 @@ func TestVotes_CanFindHead(t *testing.T) {
// old vote -> 2 1 <- new vote
// |
// 3 <- head
f.ProcessAttestation(t.Context(), []uint64{1}, indexToHash(1), 3)
f.ProcessAttestation(t.Context(), []uint64{1}, indexToHash(1), 3*params.BeaconConfig().SlotsPerEpoch, true)
r, err = f.Head(t.Context())
require.NoError(t, err)
assert.Equal(t, indexToHash(3), r, "Incorrect head with justified epoch at 1")
@@ -150,7 +150,7 @@ func TestVotes_CanFindHead(t *testing.T) {
assert.Equal(t, indexToHash(6), r, "Incorrect head with justified epoch at 3")
// Moved 2 votes to block 5:
f.ProcessAttestation(t.Context(), []uint64{0, 1}, indexToHash(5), 4)
f.ProcessAttestation(t.Context(), []uint64{0, 1}, indexToHash(5), 4*params.BeaconConfig().SlotsPerEpoch, true)
// Insert blocks 7 and 8
// 6 should still be the head, even though 5 has all the votes.
@@ -227,7 +227,7 @@ func TestVotes_CanFindHead(t *testing.T) {
// Move two votes for 10, verify it's head
f.ProcessAttestation(t.Context(), []uint64{0, 1}, indexToHash(10), 5)
f.ProcessAttestation(t.Context(), []uint64{0, 1}, indexToHash(10), 5*params.BeaconConfig().SlotsPerEpoch, true)
r, err = f.Head(t.Context())
require.NoError(t, err)
assert.Equal(t, indexToHash(10), r, "Incorrect head with justified epoch at 3")
@@ -235,7 +235,7 @@ func TestVotes_CanFindHead(t *testing.T) {
// Add 3 more validators to the system.
f.justifiedBalances = []uint64{1, 1, 1, 1, 1}
// The new validators voted for 9
f.ProcessAttestation(t.Context(), []uint64{2, 3, 4}, indexToHash(9), 5)
f.ProcessAttestation(t.Context(), []uint64{2, 3, 4}, indexToHash(9), 5*params.BeaconConfig().SlotsPerEpoch, true)
// The new head should be 9.
r, err = f.Head(t.Context())
require.NoError(t, err)

View File

@@ -9,6 +9,7 @@ import (
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
consensus_blocks "github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
forkchoice2 "github.com/OffchainLabs/prysm/v7/consensus-types/forkchoice"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
)
@@ -23,6 +24,7 @@ type ForkChoicer interface {
Unlock()
HeadRetriever // to compute head.
BlockProcessor // to track new block for fork choice.
PayloadProcessor // to track new payloads for fork choice.
AttestationProcessor // to track new attestation for fork choice.
Getter // to retrieve fork choice information.
Setter // to set fork choice information.
@@ -47,9 +49,14 @@ type BlockProcessor interface {
InsertChain(context.Context, []*forkchoicetypes.BlockAndCheckpoints) error
}
// PayloadProcessor processes a payload envelope
type PayloadProcessor interface {
InsertPayload(interfaces.ROExecutionPayloadEnvelope) error
}
// AttestationProcessor processes the attestation that's used for accounting fork choice.
type AttestationProcessor interface {
ProcessAttestation(context.Context, []uint64, [32]byte, primitives.Epoch)
ProcessAttestation(context.Context, []uint64, [32]byte, primitives.Slot, bool)
}
// Getter returns fork choice related information.
@@ -84,6 +91,7 @@ type FastGetter interface {
UnrealizedJustifiedPayloadBlockHash() [32]byte
Weight(root [32]byte) (uint64, error)
ParentRoot(root [32]byte) ([32]byte, error)
BlockHash(root [32]byte) ([32]byte, error)
}
// Setter allows to set forkchoice information

View File

@@ -183,3 +183,10 @@ func (ro *ROForkChoice) ParentRoot(root [32]byte) ([32]byte, error) {
defer ro.l.RUnlock()
return ro.getter.ParentRoot(root)
}
// BlockHash delegates to the underlying forkchoice call, under a lock.
func (ro *ROForkChoice) BlockHash(root [32]byte) ([32]byte, error) {
ro.l.RLock()
defer ro.l.RUnlock()
return ro.getter.BlockHash(root)
}
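`ROForkChoice.BlockHash` follows the same pattern as the other read-only accessors: take the read lock, delegate, release on return. A generic standalone sketch of that wrapper pattern, using a hypothetical map-backed getter rather than the real forkchoice store:

```go
package main

import (
	"fmt"
	"sync"
)

// getter is a stand-in for the wrapped forkchoice store.
type getter interface {
	BlockHash(root [32]byte) ([32]byte, error)
}

type mapGetter map[[32]byte][32]byte

func (m mapGetter) BlockHash(root [32]byte) ([32]byte, error) {
	h, ok := m[root]
	if !ok {
		return [32]byte{}, fmt.Errorf("unknown root")
	}
	return h, nil
}

// roWrapper serializes access with an RWMutex: many readers may hold the
// read lock concurrently, while a writer elsewhere takes it exclusively.
type roWrapper struct {
	l sync.RWMutex
	g getter
}

func (ro *roWrapper) BlockHash(root [32]byte) ([32]byte, error) {
	ro.l.RLock()
	defer ro.l.RUnlock()
	return ro.g.BlockHash(root)
}

func main() {
	ro := &roWrapper{g: mapGetter{{1}: {100}}}
	h, err := ro.BlockHash([32]byte{1})
	fmt.Println(h[0], err) // 100 <nil>
}
```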

View File

@@ -38,6 +38,7 @@ const (
lastRootCalled
targetRootForEpochCalled
parentRootCalled
blockHashCalled
dependentRootCalled
dependentRootForEpochCalled
)
@@ -301,3 +302,8 @@ func (ro *mockROForkchoice) ParentRoot(_ [32]byte) ([32]byte, error) {
ro.calls = append(ro.calls, parentRootCalled)
return [32]byte{}, nil
}
func (ro *mockROForkchoice) BlockHash(_ [32]byte) ([32]byte, error) {
ro.calls = append(ro.calls, blockHashCalled)
return [32]byte{}, nil
}

View File

@@ -340,6 +340,17 @@ func (s *Service) validatorEndpoints(
handler: server.GetSyncCommitteeDuties,
methods: []string{http.MethodPost},
},
{
template: "/eth/v1/validator/duties/ptc/{epoch}",
name: namespace + ".GetPTCDuties",
middleware: []middleware.Middleware{
middleware.ContentTypeHandler([]string{api.JsonMediaType}),
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
middleware.AcceptEncodingHeaderHandler(),
},
handler: server.GetPTCDuties,
methods: []string{http.MethodPost},
},
{
template: "/eth/v1/validator/prepare_beacon_proposer",
name: namespace + ".PrepareBeaconProposer",
@@ -405,6 +416,7 @@ func (s *Service) nodeEndpoints() []endpoint {
MetadataProvider: s.cfg.MetadataProvider,
HeadFetcher: s.cfg.HeadFetcher,
ExecutionChainInfoFetcher: s.cfg.ExecutionChainInfoFetcher,
ExecutionEngineCaller: s.cfg.ExecutionEngineCaller,
}
const namespace = "node"
@@ -469,6 +481,16 @@ func (s *Service) nodeEndpoints() []endpoint {
handler: server.GetVersion,
methods: []string{http.MethodGet},
},
{
template: "/eth/v2/node/version",
name: namespace + ".GetVersionV2",
middleware: []middleware.Middleware{
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
middleware.AcceptEncodingHeaderHandler(),
},
handler: server.GetVersionV2,
methods: []string{http.MethodGet},
},
{
template: "/eth/v1/node/health",
name: namespace + ".GetHealth",

View File

@@ -86,6 +86,7 @@ func Test_endpoints(t *testing.T) {
"/eth/v1/node/peers/{peer_id}": {http.MethodGet},
"/eth/v1/node/peer_count": {http.MethodGet},
"/eth/v1/node/version": {http.MethodGet},
"/eth/v2/node/version": {http.MethodGet},
"/eth/v1/node/syncing": {http.MethodGet},
"/eth/v1/node/health": {http.MethodGet},
}
@@ -94,6 +95,7 @@ func Test_endpoints(t *testing.T) {
"/eth/v1/validator/duties/attester/{epoch}": {http.MethodPost},
"/eth/v1/validator/duties/proposer/{epoch}": {http.MethodGet},
"/eth/v1/validator/duties/sync/{epoch}": {http.MethodPost},
"/eth/v1/validator/duties/ptc/{epoch}": {http.MethodPost},
"/eth/v3/validator/blocks/{slot}": {http.MethodGet},
"/eth/v1/validator/attestation_data": {http.MethodGet},
"/eth/v2/validator/aggregate_attestation": {http.MethodGet},

View File

@@ -83,6 +83,7 @@ func TestGetSpec(t *testing.T) {
config.ElectraForkEpoch = 107
config.FuluForkVersion = []byte("FuluForkVersion")
config.FuluForkEpoch = 109
config.GloasForkVersion = []byte("GloasForkVersion")
config.GloasForkEpoch = 110
config.BLSWithdrawalPrefixByte = byte('b')
config.ETH1AddressWithdrawalPrefixByte = byte('c')
@@ -221,7 +222,7 @@ func TestGetSpec(t *testing.T) {
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp))
data, ok := resp.Data.(map[string]any)
require.Equal(t, true, ok)
assert.Equal(t, 192, len(data))
assert.Equal(t, 193, len(data))
for k, v := range data {
t.Run(k, func(t *testing.T) {
switch k {
@@ -301,6 +302,8 @@ func TestGetSpec(t *testing.T) {
assert.Equal(t, "0x"+hex.EncodeToString([]byte("FuluForkVersion")), v)
case "FULU_FORK_EPOCH":
assert.Equal(t, "109", v)
case "GLOAS_FORK_VERSION":
assert.Equal(t, "0x"+hex.EncodeToString([]byte("GloasForkVersion")), v)
case "GLOAS_FORK_EPOCH":
assert.Equal(t, "110", v)
case "MIN_ANCHOR_POW_BLOCK_DIFFICULTY":

View File

@@ -5,6 +5,7 @@ go_library(
srcs = [
"handlers.go",
"handlers_peers.go",
"log.go",
"server.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/eth/node",
@@ -30,6 +31,7 @@ go_library(
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@org_golang_google_grpc//:go_default_library",
],
)
@@ -44,6 +46,7 @@ go_test(
deps = [
"//api/server/structs:go_default_library",
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/execution/testing:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/p2p/peers:go_default_library",
"//beacon-chain/p2p/testing:go_default_library",

View File

@@ -103,6 +103,8 @@ func (s *Server) GetIdentity(w http.ResponseWriter, r *http.Request) {
// GetVersion requests that the beacon node identify information about its implementation in a
// format similar to a HTTP User-Agent field.
//
// Deprecated: use GetVersionV2 instead.
func (*Server) GetVersion(w http.ResponseWriter, r *http.Request) {
_, span := trace.StartSpan(r.Context(), "node.GetVersion")
defer span.End()
@@ -116,6 +118,38 @@ func (*Server) GetVersion(w http.ResponseWriter, r *http.Request) {
httputil.WriteJson(w, resp)
}
// GetVersionV2 retrieves structured information about the version of the beacon node and its attached
// execution client, in the same format as used by the Engine API.
func (s *Server) GetVersionV2(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "node.GetVersionV2")
defer span.End()
var elData *structs.ClientVersionV1
elDataList, err := s.ExecutionEngineCaller.GetClientVersionV1(ctx)
if err != nil {
log.WithError(err).WithField("endpoint", "GetVersionV2").Debug("Could not get execution client version")
} else if len(elDataList) > 0 {
elData = elDataList[0]
}
commit := version.GitCommit()
if len(commit) >= 8 {
commit = commit[:8]
}
resp := &structs.GetVersionV2Response{
Data: &structs.VersionV2{
BeaconNode: &structs.ClientVersionV1{
Code: "PM",
Name: "Prysm",
Version: version.SemanticVersion(),
Commit: commit,
},
ExecutionClient: elData,
},
}
httputil.WriteJson(w, resp)
}
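GetVersionV2 trims the git commit to its first eight characters before placing it in the response. A standalone sketch of that guard, including the short-string case the handler's length check covers:

```go
package main

import "fmt"

// shortCommit truncates a full git commit hash to eight characters,
// leaving shorter (or empty) strings untouched.
func shortCommit(commit string) string {
	if len(commit) >= 8 {
		return commit[:8]
	}
	return commit
}

func main() {
	fmt.Println(shortCommit("c27a48aa24deadbeef")) // c27a48aa
	fmt.Println(shortCommit("abc"))                // abc
	fmt.Println(len(shortCommit("")))              // 0
}
```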
// GetHealth returns node health status in http status codes. Useful for load balancers.
func (s *Server) GetHealth(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "node.GetHealth")

View File

@@ -12,6 +12,7 @@ import (
"github.com/OffchainLabs/go-bitfield"
"github.com/OffchainLabs/prysm/v7/api/server/structs"
mock "github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/testing"
mockengine "github.com/OffchainLabs/prysm/v7/beacon-chain/execution/testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p"
mockp2p "github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/testutil"
@@ -90,6 +91,75 @@ func TestGetVersion(t *testing.T) {
assert.StringContains(t, arch, resp.Data.Version)
}
func TestGetVersionV2(t *testing.T) {
t.Run("happy path", func(t *testing.T) {
request := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v2/node/version", nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s := &Server{
ExecutionEngineCaller: &mockengine.EngineClient{
ClientVersion: []*structs.ClientVersionV1{{
Code: "EL",
Name: "ExecutionClient",
Version: "v1.0.0",
Commit: "abcdef12",
}},
},
}
s.GetVersionV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetVersionV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.NotNil(t, resp)
require.NotNil(t, resp.Data)
require.NotNil(t, resp.Data.BeaconNode)
require.NotNil(t, resp.Data.ExecutionClient)
require.Equal(t, "EL", resp.Data.ExecutionClient.Code)
require.Equal(t, "ExecutionClient", resp.Data.ExecutionClient.Name)
require.Equal(t, "v1.0.0", resp.Data.ExecutionClient.Version)
require.Equal(t, "abcdef12", resp.Data.ExecutionClient.Commit)
require.Equal(t, "PM", resp.Data.BeaconNode.Code)
require.Equal(t, "Prysm", resp.Data.BeaconNode.Name)
require.Equal(t, version.SemanticVersion(), resp.Data.BeaconNode.Version)
require.Equal(t, true, len(resp.Data.BeaconNode.Commit) <= 8)
})
t.Run("unhappy path", func(t *testing.T) {
request := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v2/node/version", nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s := &Server{
ExecutionEngineCaller: &mockengine.EngineClient{
ClientVersion: nil,
ErrorClientVersion: fmt.Errorf("error"),
},
}
s.GetVersionV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetVersionV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.NotNil(t, resp)
require.NotNil(t, resp.Data)
require.NotNil(t, resp.Data.BeaconNode)
require.Equal(t, true, resp.Data.ExecutionClient == nil)
// make sure there is no 'execution_client' field
var payload map[string]any
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &payload))
data, ok := payload["data"].(map[string]any)
require.Equal(t, true, ok)
_, found := data["beacon_node"]
require.Equal(t, true, found)
_, found = data["execution_client"]
require.Equal(t, false, found)
})
}
func TestGetHealth(t *testing.T) {
checker := &syncmock.Sync{}
optimisticFetcher := &mock.ChainService{Optimistic: false}

View File

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package node
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/rpc/eth/node")

View File

@@ -26,4 +26,5 @@ type Server struct {
GenesisTimeFetcher blockchain.TimeFetcher
HeadFetcher blockchain.HeadFetcher
ExecutionChainInfoFetcher execution.ChainInfoFetcher
ExecutionEngineCaller execution.EngineCaller
}

View File

@@ -5,6 +5,7 @@ go_library(
srcs = [
"handlers.go",
"handlers_block.go",
"handlers_gloas.go",
"log.go",
"server.go",
],
@@ -18,6 +19,7 @@ go_library(
"//beacon-chain/builder:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/feed/operation:go_default_library",
"//beacon-chain/core/gloas:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/operations/attestations:go_default_library",

View File

@@ -18,6 +18,7 @@ import (
"github.com/OffchainLabs/prysm/v7/api/server/structs"
"github.com/OffchainLabs/prysm/v7/beacon-chain/builder"
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/gloas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/core"
rpchelpers "github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/eth/helpers"
@@ -1212,6 +1213,202 @@ func (s *Server) GetSyncCommitteeDuties(w http.ResponseWriter, r *http.Request)
httputil.WriteJson(w, resp)
}
// ptcDuty represents a validator's PTC assignment for a slot.
type ptcDuty struct {
validatorIndex primitives.ValidatorIndex
slot primitives.Slot
}
// ptcDuties returns PTC slot assignments for the requested validators in the epoch derived from the state's slot.
// Validators not in any PTC for the epoch will not appear in the result.
func ptcDuties(
ctx context.Context,
st state.ReadOnlyBeaconState,
validators map[primitives.ValidatorIndex]struct{},
) ([]ptcDuty, error) {
if len(validators) == 0 {
return nil, nil
}
epoch := slots.ToEpoch(st.Slot())
startSlot, err := slots.EpochStart(epoch)
if err != nil {
return nil, err
}
var duties []ptcDuty
endSlot := startSlot + params.BeaconConfig().SlotsPerEpoch
for slot := startSlot; slot < endSlot; slot++ {
if ctx.Err() != nil {
return nil, ctx.Err()
}
ptc, err := gloas.PayloadCommittee(ctx, st, slot)
if err != nil {
return nil, errors.Wrapf(err, "failed to get PTC for slot %d", slot)
}
// Check which requested validators are in this PTC, deduplicating within the slot.
seen := make(map[primitives.ValidatorIndex]struct{})
for _, valIdx := range ptc {
if _, ok := validators[valIdx]; !ok {
continue
}
if _, already := seen[valIdx]; already {
continue
}
seen[valIdx] = struct{}{}
duties = append(duties, ptcDuty{
validatorIndex: valIdx,
slot: slot,
})
}
}
return duties, nil
}
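The filtering and per-slot deduplication pattern used by ptcDuties can be shown in isolation. This is a standalone sketch with plain uint64 stand-ins for Prysm's primitives types; collectDuties and duty are hypothetical names, not part of the PR:

```go
package main

import "fmt"

// duty mirrors ptcDuty: one (validator, slot) assignment.
type duty struct {
	validatorIndex uint64
	slot           uint64
}

// collectDuties walks each slot's committee, keeps only requested validators,
// and deduplicates within a slot (a validator can appear at most once per slot
// in the result, even if the committee lists it twice).
func collectDuties(committees map[uint64][]uint64, requested map[uint64]struct{}) []duty {
	var duties []duty
	for slot := uint64(0); slot < uint64(len(committees)); slot++ {
		seen := make(map[uint64]struct{}) // dedup within a single slot
		for _, v := range committees[slot] {
			if _, ok := requested[v]; !ok {
				continue
			}
			if _, dup := seen[v]; dup {
				continue
			}
			seen[v] = struct{}{}
			duties = append(duties, duty{validatorIndex: v, slot: slot})
		}
	}
	return duties
}

func main() {
	committees := map[uint64][]uint64{
		0: {1, 2, 2, 5}, // validator 2 appears twice in slot 0
		1: {3, 7},
	}
	requested := map[uint64]struct{}{2: {}, 3: {}}
	fmt.Println(collectDuties(committees, requested)) // [{2 0} {3 1}]
}
```

Validators not requested (or not in any committee) simply never appear, matching the handler's "will not appear in the result" contract.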
// GetPTCDuties retrieves the payload timeliness committee (PTC) duties for the requested epoch.
// The PTC is responsible for attesting to payload timeliness in ePBS (Gloas fork and later).
func (s *Server) GetPTCDuties(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "validator.GetPTCDuties")
defer span.End()
if shared.IsSyncing(ctx, w, s.SyncChecker, s.HeadFetcher, s.TimeFetcher, s.OptimisticModeFetcher) {
return
}
_, requestedEpochUint, ok := shared.UintFromRoute(w, r, "epoch")
if !ok {
return
}
requestedEpoch := primitives.Epoch(requestedEpochUint)
// PTC duties are only available from Gloas fork onwards.
if requestedEpoch < params.BeaconConfig().GloasForkEpoch {
httputil.HandleError(w, "PTC duties are not available before Gloas fork", http.StatusBadRequest)
return
}
var indices []string
err := json.NewDecoder(r.Body).Decode(&indices)
switch {
case errors.Is(err, io.EOF):
httputil.HandleError(w, "No data submitted", http.StatusBadRequest)
return
case err != nil:
httputil.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
return
}
if len(indices) == 0 {
httputil.HandleError(w, "No data submitted", http.StatusBadRequest)
return
}
requestedValIndices := make([]primitives.ValidatorIndex, len(indices))
for i, ix := range indices {
valIx, valid := shared.ValidateUint(w, fmt.Sprintf("ValidatorIndices[%d]", i), ix)
if !valid {
return
}
requestedValIndices[i] = primitives.ValidatorIndex(valIx)
}
// Limit how far in the future we can query (current + 1 epoch).
cs := s.TimeFetcher.CurrentSlot()
currentEpoch := slots.ToEpoch(cs)
nextEpoch := currentEpoch + 1
if requestedEpoch > nextEpoch {
httputil.HandleError(w,
fmt.Sprintf("Request epoch %d can not be greater than next epoch %d", requestedEpoch, nextEpoch),
http.StatusBadRequest)
return
}
// For next epoch requests, we use the current epoch's state since PTC
// assignments for next epoch can be computed from current epoch's state.
// This mirrors the spec's get_ptc_assignment which asserts epoch <= next_epoch
// and uses the current state to compute assignments.
epochForState := requestedEpoch
if requestedEpoch == nextEpoch {
epochForState = currentEpoch
}
st, err := s.Stater.StateByEpoch(ctx, epochForState)
if err != nil {
shared.WriteStateFetchError(w, err)
return
}
// Build a set of requested validators (also deduplicates per spec's uniqueItems requirement).
// Validate that each index exists in the validator registry.
requestedSet := make(map[primitives.ValidatorIndex]struct{}, len(requestedValIndices))
var zeroPubkey [fieldparams.BLSPubkeyLength]byte
for _, idx := range requestedValIndices {
// Skip duplicates.
if _, exists := requestedSet[idx]; exists {
continue
}
// Validate index exists.
pubkey := st.PubkeyAtIndex(idx)
if bytes.Equal(pubkey[:], zeroPubkey[:]) {
httputil.HandleError(w, fmt.Sprintf("Invalid validator index %d", idx), http.StatusBadRequest)
return
}
requestedSet[idx] = struct{}{}
}
// Compute PTC duties.
computedDuties, err := ptcDuties(ctx, st, requestedSet)
if err != nil {
httputil.HandleError(w, "Could not compute PTC duties: "+err.Error(), http.StatusInternalServerError)
return
}
// Convert to response format.
duties := make([]*structs.PTCDuty, 0, len(computedDuties))
for _, duty := range computedDuties {
pubkey := st.PubkeyAtIndex(duty.validatorIndex)
duties = append(duties, &structs.PTCDuty{
Pubkey: hexutil.Encode(pubkey[:]),
ValidatorIndex: strconv.FormatUint(uint64(duty.validatorIndex), 10),
Slot: strconv.FormatUint(uint64(duty.slot), 10),
})
}
// Get dependent root. Per the spec, dependent_root is:
// get_block_root_at_slot(state, compute_start_slot_at_epoch(epoch - 1) - 1)
// or the genesis block root in the case of underflow.
var dependentRoot []byte
if requestedEpoch <= 1 {
r, err := s.BeaconDB.GenesisBlockRoot(ctx)
if err != nil {
httputil.HandleError(w, "Could not get genesis block root: "+err.Error(), http.StatusInternalServerError)
return
}
dependentRoot = r[:]
} else {
dependentRoot, err = attestationDependentRoot(st, requestedEpoch)
if err != nil {
httputil.HandleError(w, "Could not get dependent root: "+err.Error(), http.StatusInternalServerError)
return
}
}
isOptimistic, err := s.OptimisticModeFetcher.IsOptimistic(ctx)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
return
}
resp := &structs.GetPTCDutiesResponse{
DependentRoot: hexutil.Encode(dependentRoot),
ExecutionOptimistic: isOptimistic,
Data: duties,
}
httputil.WriteJson(w, resp)
}
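The dependent-root slot arithmetic the handler performs (genesis fallback for epochs 0 and 1, otherwise the slot just before the start of the previous epoch) can be sketched on its own. This is a minimal illustration assuming the mainnet SLOTS_PER_EPOCH of 32; dependentRootSlot is a hypothetical helper, not a Prysm function:

```go
package main

import "fmt"

const slotsPerEpoch = 32 // mainnet value; an assumption for this sketch

// dependentRootSlot follows the spec formula quoted in the handler:
//   get_block_root_at_slot(state, compute_start_slot_at_epoch(epoch - 1) - 1)
// falling back to the genesis block root when the subtraction would
// underflow, i.e. for epochs 0 and 1.
func dependentRootSlot(epoch uint64) (slot uint64, useGenesis bool) {
	if epoch <= 1 {
		return 0, true
	}
	return (epoch-1)*slotsPerEpoch - 1, false
}

func main() {
	for _, e := range []uint64{0, 1, 2, 5} {
		s, g := dependentRootSlot(e)
		fmt.Printf("epoch %d -> slot %d (genesis fallback: %v)\n", e, s, g)
	}
}
```

For epoch 2 this yields slot 31 (the last slot of epoch 0), which is the block whose root pins the shuffling the duties were computed from.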
// GetLiveness requests the beacon node to indicate if a validator has been observed to be live in a given epoch.
// The beacon node might detect liveness by observing messages from the validator on the network,
// in the beacon chain, from its API or from any other source.

View File

@@ -0,0 +1,31 @@
package validator
import (
"net/http"
"github.com/OffchainLabs/prysm/v7/network/httputil"
)
// ProduceBlockV4 requests a beacon node to produce a valid Gloas block.
//
// TODO: Implement Gloas-specific block production.
// Endpoint: GET /eth/v4/validator/blocks/{slot}
func (s *Server) ProduceBlockV4(w http.ResponseWriter, r *http.Request) {
httputil.HandleError(w, "ProduceBlockV4 not yet implemented", http.StatusNotImplemented)
}
// ExecutionPayloadEnvelope retrieves a cached execution payload envelope.
//
// TODO: Implement envelope retrieval from cache.
// Endpoint: GET /eth/v1/validator/execution_payload_envelope/{slot}/{builder_index}
func (s *Server) ExecutionPayloadEnvelope(w http.ResponseWriter, r *http.Request) {
httputil.HandleError(w, "ExecutionPayloadEnvelope not yet implemented", http.StatusNotImplemented)
}
// PublishExecutionPayloadEnvelope broadcasts a signed execution payload envelope.
//
// TODO: Implement envelope validation and broadcast.
// Endpoint: POST /eth/v1/beacon/execution_payload_envelope
func (s *Server) PublishExecutionPayloadEnvelope(w http.ResponseWriter, r *http.Request) {
httputil.HandleError(w, "PublishExecutionPayloadEnvelope not yet implemented", http.StatusNotImplemented)
}

View File

@@ -2959,6 +2959,250 @@ func TestGetSyncCommitteeDuties(t *testing.T) {
})
}
func TestGetPTCDuties(t *testing.T) {
helpers.ClearCache()
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.GloasForkEpoch = 0
params.OverrideBeaconConfig(cfg)
// Use fixed slot 0 for deterministic tests.
slot := primitives.Slot(0)
genesisTime := time.Now()
// Need enough validators for PTC selection (PTC_SIZE is 512 on mainnet, 2 on minimal)
numVals := uint64(fieldparams.PTCSize * 2)
st, _ := util.DeterministicGenesisStateFulu(t, numVals)
require.NoError(t, st.SetGenesisTime(genesisTime))
// Initialize the committee cache for epoch 0.
require.NoError(t, helpers.UpdateCommitteeCache(t.Context(), st, 0))
// Set up a genesis block root for dependent_root calculation.
genesisRoot := [32]byte{1, 2, 3}
db := dbutil.SetupDB(t)
require.NoError(t, db.SaveGenesisBlockRoot(t.Context(), genesisRoot))
mockChainService := &mockChain.ChainService{Genesis: genesisTime, State: st, Slot: &slot}
s := &Server{
Stater: &testutil.MockStater{BeaconState: st},
SyncChecker: &mockSync.Sync{IsSyncing: false},
TimeFetcher: mockChainService,
HeadFetcher: mockChainService,
OptimisticModeFetcher: mockChainService,
BeaconDB: db,
}
t.Run("single validator in PTC", func(t *testing.T) {
// Request duties for validator index 0
var body bytes.Buffer
_, err := body.WriteString("[\"0\"]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://www.example.com/eth/v1/validator/duties/ptc/{epoch}", &body)
request.SetPathValue("epoch", "0")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetPTCDuties(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetPTCDutiesResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.NotEmpty(t, resp.DependentRoot)
})
t.Run("verifies PTC duties correctness", func(t *testing.T) {
// Request duties for a range of validators.
// Some will be in the PTC, some won't.
var indices []string
requestedSet := make(map[primitives.ValidatorIndex]struct{})
for i := range 100 {
indices = append(indices, strconv.Itoa(i))
requestedSet[primitives.ValidatorIndex(i)] = struct{}{}
}
// Test ptcDuties directly.
directDuties, err := ptcDuties(t.Context(), st, requestedSet)
require.NoError(t, err)
// Should return some duties (not necessarily all 100, depends on PTC selection).
require.NotEmpty(t, directDuties, "Should return at least some duties")
// All returned duties should be for slots within epoch 0.
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
for _, duty := range directDuties {
if uint64(duty.slot) >= uint64(slotsPerEpoch) {
t.Errorf("Duty slot %d should be within epoch 0 (< %d)", duty.slot, slotsPerEpoch)
}
// Verify returned validator was in the request.
_, ok := requestedSet[duty.validatorIndex]
if !ok {
t.Errorf("Returned validator %d should be in requested set", duty.validatorIndex)
}
}
// Test via HTTP - should return same count.
indicesJSON, err := json.Marshal(indices)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://www.example.com/eth/v1/validator/duties/ptc/{epoch}", bytes.NewReader(indicesJSON))
request.SetPathValue("epoch", "0")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetPTCDuties(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetPTCDutiesResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
// HTTP response should match direct call.
assert.Equal(t, len(directDuties), len(resp.Data), "HTTP response should match direct PTCDuties call")
})
t.Run("multiple validators", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString("[\"0\",\"1\",\"2\",\"3\",\"4\"]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://www.example.com/eth/v1/validator/duties/ptc/{epoch}", &body)
request.SetPathValue("epoch", "0")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetPTCDuties(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetPTCDutiesResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.NotEmpty(t, resp.DependentRoot)
// Verify any returned duties have correct structure
for _, duty := range resp.Data {
assert.NotEmpty(t, duty.Pubkey)
assert.NotEmpty(t, duty.ValidatorIndex)
assert.NotEmpty(t, duty.Slot)
}
})
t.Run("duplicate validator indices are deduplicated", func(t *testing.T) {
// Request the same validator multiple times - should be deduplicated.
var body bytes.Buffer
_, err := body.WriteString("[\"0\",\"0\",\"0\",\"1\",\"1\"]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://www.example.com/eth/v1/validator/duties/ptc/{epoch}", &body)
request.SetPathValue("epoch", "0")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetPTCDuties(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetPTCDutiesResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
// Each validator should appear at most once in the response.
seen := make(map[string]bool)
for _, duty := range resp.Data {
if seen[duty.ValidatorIndex] {
t.Errorf("Validator %s appears multiple times in response", duty.ValidatorIndex)
}
seen[duty.ValidatorIndex] = true
}
})
t.Run("pre-Gloas epoch returns error", func(t *testing.T) {
// Temporarily set GloasForkEpoch to 10
cfg := params.BeaconConfig()
cfg.GloasForkEpoch = 10
params.OverrideBeaconConfig(cfg)
defer func() {
cfg.GloasForkEpoch = 0
params.OverrideBeaconConfig(cfg)
}()
var body bytes.Buffer
_, err := body.WriteString("[\"0\"]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://www.example.com/eth/v1/validator/duties/ptc/{epoch}", &body)
request.SetPathValue("epoch", "0")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetPTCDuties(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.StringContains(t, "PTC duties are not available before Gloas fork", e.Message)
})
t.Run("no body", func(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, "http://www.example.com/eth/v1/validator/duties/ptc/{epoch}", nil)
request.SetPathValue("epoch", "0")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetPTCDuties(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.StringContains(t, "No data submitted", e.Message)
})
t.Run("empty body", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString("[]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://www.example.com/eth/v1/validator/duties/ptc/{epoch}", &body)
request.SetPathValue("epoch", "0")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetPTCDuties(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.StringContains(t, "No data submitted", e.Message)
})
t.Run("invalid validator index string", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString("[\"foo\"]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://www.example.com/eth/v1/validator/duties/ptc/{epoch}", &body)
request.SetPathValue("epoch", "0")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetPTCDuties(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
})
t.Run("out of bounds validator index", func(t *testing.T) {
// Request a validator index that's way beyond the number of validators.
var body bytes.Buffer
_, err := body.WriteString("[\"999999999\"]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://www.example.com/eth/v1/validator/duties/ptc/{epoch}", &body)
request.SetPathValue("epoch", "0")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetPTCDuties(writer, request)
// Invalid validator index should return 400, matching attester duties behavior.
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.StringContains(t, "Invalid validator index", e.Message)
})
t.Run("epoch too far in future", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString("[\"0\"]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://www.example.com/eth/v1/validator/duties/ptc/{epoch}", &body)
request.SetPathValue("epoch", "100") // Far future epoch
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetPTCDuties(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.StringContains(t, "can not be greater than next epoch", e.Message)
})
}
func TestPrepareBeaconProposer(t *testing.T) {
tests := []struct {
name string

View File

@@ -26,6 +26,9 @@ go_library(
"proposer_eth1data.go",
"proposer_execution_payload.go",
"proposer_exits.go",
"proposer_bid.go",
"proposer_payload_attestation.go",
"proposer_payload_envelope.go",
"proposer_slashings.go",
"proposer_sync_aggregate.go",
"server.go",
@@ -45,6 +48,7 @@ go_library(
"//beacon-chain/cache/depositsnapshot:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/electra:go_default_library",
"//beacon-chain/core/gloas:go_default_library",
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/block:go_default_library",
"//beacon-chain/core/feed/operation:go_default_library",
@@ -157,6 +161,7 @@ common_deps = [
"//container/trie:go_default_library",
"//crypto/bls:go_default_library",
"//crypto/bls/blst:go_default_library",
"//crypto/bls/common:go_default_library",
"//encoding/bytesutil:go_default_library",
"//encoding/ssz:go_default_library",
"//genesis:go_default_library",
@@ -199,6 +204,7 @@ go_test(
"proposer_altair_test.go",
"proposer_attestations_electra_test.go",
"proposer_attestations_test.go",
"proposer_bid_test.go",
"proposer_bellatrix_test.go",
"proposer_builder_test.go",
"proposer_deneb_test.go",
@@ -206,6 +212,8 @@ go_test(
"proposer_empty_block_test.go",
"proposer_execution_payload_test.go",
"proposer_exits_test.go",
"proposer_payload_attestation_test.go",
"proposer_payload_envelope_test.go",
"proposer_slashings_test.go",
"proposer_sync_aggregate_test.go",
"proposer_test.go",

View File

@@ -57,6 +57,11 @@ func (vs *Server) constructGenericBeaconBlock(
return nil, fmt.Errorf("expected *BlobsBundleV2, got %T", blobsBundler)
}
return vs.constructFuluBlock(blockProto, isBlinded, bidStr, bundle), nil
case version.Gloas:
// Gloas blocks do not carry a separate payload value — the bid is part of the block body.
return &ethpb.GenericBeaconBlock{
Block: &ethpb.GenericBeaconBlock_Gloas{Gloas: blockProto.(*ethpb.BeaconBlockGloas)},
}, nil
default:
return nil, fmt.Errorf("unknown block version: %d", sBlk.Version())
}

View File

@@ -237,34 +237,49 @@ func (vs *Server) BuildBlockParallel(ctx context.Context, sBlk interfaces.Signed
// Set bls to execution change. New in Capella.
vs.setBlsToExecData(sBlk, head)
// Set payload attestations. New in Gloas.
if sBlk.Version() >= version.Gloas {
if err := sBlk.SetPayloadAttestations(vs.getPayloadAttestations(ctx, head)); err != nil {
log.WithError(err).Error("Could not set payload attestations")
}
}
})
winningBid := primitives.ZeroWei()
var bundle enginev1.BlobsBundler
var local *blocks.GetPayloadResponse
if sBlk.Version() >= version.Bellatrix {
-local, err := vs.getLocalPayload(ctx, sBlk.Block(), head)
+var err error
+local, err = vs.getLocalPayload(ctx, sBlk.Block(), head)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get local payload: %v", err)
}
-// There's no reason to try to get a builder bid if local override is true.
-var builderBid builderapi.Bid
-if !(local.OverrideBuilder || skipMevBoost) {
-latestHeader, err := head.LatestExecutionPayloadHeader()
-if err != nil {
-return nil, status.Errorf(codes.Internal, "Could not get latest execution payload header: %v", err)
-}
-parentGasLimit := latestHeader.GasLimit()
-builderBid, err = vs.getBuilderPayloadAndBlobs(ctx, sBlk.Block().Slot(), sBlk.Block().ProposerIndex(), parentGasLimit)
-if err != nil {
-builderGetPayloadMissCount.Inc()
-log.WithError(err).Error("Could not get builder payload")
-}
-}
-winningBid, bundle, err = setExecutionData(ctx, sBlk, local, builderBid, builderBoostFactor)
-if err != nil {
-return nil, status.Errorf(codes.Internal, "Could not set execution data: %v", err)
-}
+if sBlk.Version() < version.Gloas {
+// There's no reason to try to get a builder bid if local override is true.
+var builderBid builderapi.Bid
+if !(local.OverrideBuilder || skipMevBoost) {
+latestHeader, err := head.LatestExecutionPayloadHeader()
+if err != nil {
+return nil, status.Errorf(codes.Internal, "Could not get latest execution payload header: %v", err)
+}
+parentGasLimit := latestHeader.GasLimit()
+builderBid, err = vs.getBuilderPayloadAndBlobs(ctx, sBlk.Block().Slot(), sBlk.Block().ProposerIndex(), parentGasLimit)
+if err != nil {
+builderGetPayloadMissCount.Inc()
+log.WithError(err).Error("Could not get builder payload")
+}
+}
+winningBid, bundle, err = setExecutionData(ctx, sBlk, local, builderBid, builderBoostFactor)
+if err != nil {
+return nil, status.Errorf(codes.Internal, "Could not set execution data: %v", err)
+}
+} else {
+if err := vs.setSelfBuildExecutionPayloadBid(ctx, sBlk, local); err != nil {
+return nil, status.Errorf(codes.Internal, "Could not set execution data for Gloas: %v", err)
+}
+}
}
@@ -276,6 +291,15 @@ func (vs *Server) BuildBlockParallel(ctx context.Context, sBlk interfaces.Signed
}
sBlk.SetStateRoot(sr)
// For Gloas, build and cache the execution payload envelope now that the block
// is fully built (state root set). The envelope needs the final block HTR as
// BeaconBlockRoot and the post-payload state root as StateRoot.
if sBlk.Version() >= version.Gloas {
if err := vs.storeExecutionPayloadEnvelope(sBlk, local); err != nil {
return nil, status.Errorf(codes.Internal, "Could not build execution payload envelope: %v", err)
}
}
return vs.constructGenericBeaconBlock(sBlk, bundle, winningBid)
}
@@ -320,7 +344,7 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
log.WithError(err).Info("Optimistically proposed block - builder relay temporarily unavailable, block may arrive over P2P")
return &ethpb.ProposeResponse{BlockRoot: root[:]}, nil
}
-} else if block.Version() >= version.Deneb {
+} else if block.Version() >= version.Deneb && block.Version() < version.Gloas {
blobSidecars, dataColumnSidecars, err = vs.handleUnblindedBlock(rob, req)
}
if err != nil {
@@ -341,8 +365,10 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
wg.Wait()
-if err := vs.broadcastAndReceiveSidecars(ctx, block, root, blobSidecars, dataColumnSidecars); err != nil {
-return nil, status.Errorf(codes.Internal, "Could not broadcast/receive sidecars: %v", err)
-}
+if block.Version() < version.Gloas {
+if err := vs.broadcastAndReceiveSidecars(ctx, block, root, blobSidecars, dataColumnSidecars); err != nil {
+return nil, status.Errorf(codes.Internal, "Could not broadcast/receive sidecars: %v", err)
+}
+}
if err := <-errChan; err != nil {
return nil, status.Errorf(codes.Internal, "Could not broadcast/receive block: %v", err)

View File

@@ -0,0 +1,76 @@
package validator
import (
"context"
"github.com/OffchainLabs/prysm/v7/config/params"
consensusblocks "github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/crypto/bls/common"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/pkg/errors"
)
// setSelfBuildExecutionPayloadBid creates an execution payload bid from the local payload
// and sets it on the block body. The envelope is created and cached later by
// storeExecutionPayloadEnvelope once the block is fully built.
func (vs *Server) setSelfBuildExecutionPayloadBid(
ctx context.Context,
sBlk interfaces.SignedBeaconBlock,
local *consensusblocks.GetPayloadResponse,
) error {
_, span := trace.StartSpan(ctx, "ProposerServer.setSelfBuildExecutionPayloadBid")
defer span.End()
if local == nil || local.ExecutionData == nil {
return errors.New("local execution payload is nil")
}
// Create execution payload bid from the local payload.
bid, err := vs.createSelfBuildExecutionPayloadBid(local, sBlk.Block())
if err != nil {
return errors.Wrap(err, "could not create execution payload bid")
}
// Per spec, self-build bids must use G2 point-at-infinity as the signature.
// Only the execution payload envelope requires a real signature from the proposer.
signedBid := &ethpb.SignedExecutionPayloadBid{
Message: bid,
Signature: common.InfiniteSignature[:],
}
if err := sBlk.SetSignedExecutionPayloadBid(signedBid); err != nil {
return errors.Wrap(err, "could not set signed execution payload bid")
}
return nil
}
// createSelfBuildExecutionPayloadBid creates an ExecutionPayloadBid for self-building,
// where the proposer acts as its own builder. Per spec, the bid value must be zero
// and the builder index must be BUILDER_INDEX_SELF_BUILD.
func (vs *Server) createSelfBuildExecutionPayloadBid(
local *consensusblocks.GetPayloadResponse,
block interfaces.ReadOnlyBeaconBlock,
) (*ethpb.ExecutionPayloadBid, error) {
ed := local.ExecutionData
if ed == nil || ed.IsNil() {
return nil, errors.New("execution data is nil")
}
parentBlockRoot := block.ParentRoot()
return &ethpb.ExecutionPayloadBid{
ParentBlockHash: ed.ParentHash(),
ParentBlockRoot: bytesutil.SafeCopyBytes(parentBlockRoot[:]),
BlockHash: ed.BlockHash(),
PrevRandao: ed.PrevRandao(),
FeeRecipient: ed.FeeRecipient(),
GasLimit: ed.GasLimit(),
BuilderIndex: params.BeaconConfig().BuilderIndexSelfBuild,
Slot: block.Slot(),
Value: 0,
ExecutionPayment: 0,
BlobKzgCommitments: [][]byte{}, // TODO: handle DA in gloas
}, nil
}

View File

@@ -0,0 +1,95 @@
package validator
import (
"math/big"
"testing"
"github.com/OffchainLabs/prysm/v7/config/params"
consensusblocks "github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/crypto/bls/common"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/require"
)
func TestSetSelfBuildExecutionPayloadBid(t *testing.T) {
parentRoot := [32]byte{1, 2, 3}
slot := primitives.Slot(100)
proposerIndex := primitives.ValidatorIndex(42)
sBlk, err := consensusblocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockGloas{
Block: &ethpb.BeaconBlockGloas{
Slot: slot,
ProposerIndex: proposerIndex,
ParentRoot: parentRoot[:],
Body: &ethpb.BeaconBlockBodyGloas{},
},
})
require.NoError(t, err)
payload := &enginev1.ExecutionPayloadDeneb{
ParentHash: make([]byte, 32),
FeeRecipient: make([]byte, 20),
StateRoot: make([]byte, 32),
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
ExtraData: make([]byte, 0),
}
ed, err := consensusblocks.WrappedExecutionPayloadDeneb(payload)
require.NoError(t, err)
// 5 Gwei = 5,000,000,000 Wei
bidValue := big.NewInt(5_000_000_000)
local := &consensusblocks.GetPayloadResponse{
ExecutionData: ed,
Bid: bidValue,
BlobsBundler: nil,
ExecutionRequests: &enginev1.ExecutionRequests{},
}
vs := &Server{}
err = vs.setSelfBuildExecutionPayloadBid(t.Context(), sBlk, local)
require.NoError(t, err)
// Verify the signed bid was set on the block.
signedBid, err := sBlk.Block().Body().SignedExecutionPayloadBid()
require.NoError(t, err)
require.NotNil(t, signedBid)
require.NotNil(t, signedBid.Message)
// Per spec (process_execution_payload_bid): for self-builds,
// signature must be G2 point-at-infinity.
require.DeepEqual(t, common.InfiniteSignature[:], signedBid.Signature)
// Verify bid fields.
bid := signedBid.Message
require.Equal(t, slot, bid.Slot)
require.Equal(t, params.BeaconConfig().BuilderIndexSelfBuild, bid.BuilderIndex)
require.DeepEqual(t, parentRoot[:], bid.ParentBlockRoot)
require.Equal(t, primitives.Gwei(0), bid.Value)
require.Equal(t, primitives.Gwei(0), bid.ExecutionPayment)
}
func TestSetSelfBuildExecutionPayloadBid_NilPayload(t *testing.T) {
sBlk, err := consensusblocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockGloas{
Block: &ethpb.BeaconBlockGloas{
Slot: 1,
ParentRoot: make([]byte, 32),
Body: &ethpb.BeaconBlockBodyGloas{},
},
})
require.NoError(t, err)
vs := &Server{}
err = vs.setSelfBuildExecutionPayloadBid(t.Context(), sBlk, nil)
require.ErrorContains(t, "local execution payload is nil", err)
err = vs.setSelfBuildExecutionPayloadBid(t.Context(), sBlk, &consensusblocks.GetPayloadResponse{})
require.ErrorContains(t, "local execution payload is nil", err)
}

View File

@@ -16,6 +16,11 @@ func getEmptyBlock(slot primitives.Slot) (interfaces.SignedBeaconBlock, error) {
var err error
epoch := slots.ToEpoch(slot)
switch {
case epoch >= params.BeaconConfig().GloasForkEpoch:
sBlk, err = blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockGloas{Block: &ethpb.BeaconBlockGloas{Body: &ethpb.BeaconBlockBodyGloas{}}})
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not initialize block for proposal: %v", err)
}
case epoch >= params.BeaconConfig().FuluForkEpoch:
sBlk, err = blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockFulu{Block: &ethpb.BeaconBlockElectra{Body: &ethpb.BeaconBlockBodyElectra{}}})
if err != nil {

View File

@@ -135,8 +135,8 @@ func (vs *Server) getLocalPayloadFromEngine(
return nil, err
}
var attr payloadattribute.Attributer
-switch st.Version() {
-case version.Deneb, version.Electra, version.Fulu:
+switch {
+case st.Version() >= version.Deneb:
withdrawals, _, err := st.ExpectedWithdrawals()
if err != nil {
return nil, err
@@ -151,7 +151,7 @@ func (vs *Server) getLocalPayloadFromEngine(
if err != nil {
return nil, err
}
-case version.Capella:
+case st.Version() == version.Capella:
withdrawals, _, err := st.ExpectedWithdrawals()
if err != nil {
return nil, err
@@ -165,7 +165,7 @@ func (vs *Server) getLocalPayloadFromEngine(
if err != nil {
return nil, err
}
-case version.Bellatrix:
+case st.Version() == version.Bellatrix:
attr, err = payloadattribute.New(&enginev1.PayloadAttributes{
Timestamp: uint64(t.Unix()),
PrevRandao: random,

View File

@@ -0,0 +1,23 @@
package validator
import (
"context"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/config/params"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/time/slots"
)
// getPayloadAttestations returns payload attestations for inclusion in a Gloas block.
// PTC members broadcast PayloadAttestationMessages via P2P gossip during slot N.
// All nodes collect these in a pool. The slot N+1 proposer retrieves and aggregates
// them into PayloadAttestations for block inclusion.
func (vs *Server) getPayloadAttestations(ctx context.Context, head state.BeaconState) []*ethpb.PayloadAttestation {
if slots.ToEpoch(head.Slot()) < params.BeaconConfig().GloasForkEpoch {
return nil
}
// TODO: Retrieve and aggregate PayloadAttestationMessages from the pool
// for the previous slot. Blocks are valid without payload attestations.
return []*ethpb.PayloadAttestation{}
}
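The aggregation step the TODO describes (folding individual PayloadAttestationMessages from PTC members into per-vote aggregates with a committee bitfield) could look roughly like the following. This is a hedged sketch under simplified stand-in types, not Prysm's protobufs or the final pool API:

```go
package main

import "fmt"

// message is a stand-in for a PayloadAttestationMessage gossiped by a PTC member.
type message struct {
	ptcIndex       int  // member's position within the PTC
	payloadPresent bool // the vote
}

// aggregate is a stand-in for a PayloadAttestation: one entry per distinct vote,
// with a bitfield marking which committee members cast it.
type aggregate struct {
	payloadPresent bool
	bits           []bool // one bit per PTC member
}

// aggregateMessages groups messages by vote and sets each voter's bit.
func aggregateMessages(msgs []message, ptcSize int) []aggregate {
	byVote := map[bool][]bool{}
	for _, m := range msgs {
		if byVote[m.payloadPresent] == nil {
			byVote[m.payloadPresent] = make([]bool, ptcSize)
		}
		byVote[m.payloadPresent][m.ptcIndex] = true
	}
	var out []aggregate
	for _, vote := range []bool{true, false} {
		if bits, ok := byVote[vote]; ok {
			out = append(out, aggregate{payloadPresent: vote, bits: bits})
		}
	}
	return out
}

func main() {
	msgs := []message{{0, true}, {2, true}, {1, false}}
	for _, a := range aggregateMessages(msgs, 4) {
		fmt.Printf("present=%v bits=%v\n", a.payloadPresent, a.bits)
	}
}
```

Returning an empty (non-nil) slice when no messages arrived mirrors the function above: blocks remain valid without payload attestations.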

View File

@@ -0,0 +1,41 @@
package validator
import (
"testing"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
)
func TestGetPayloadAttestations_BeforeGloasFork(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.GloasForkEpoch = 10
params.OverrideBeaconConfig(cfg)
// State at slot 0 (epoch 0), which is before GloasForkEpoch 10.
headState, err := util.NewBeaconStateGloas()
require.NoError(t, err)
require.NoError(t, headState.SetSlot(0))
vs := &Server{}
result := vs.getPayloadAttestations(t.Context(), headState)
require.Equal(t, true, result == nil)
}
func TestGetPayloadAttestations_AtGloasFork(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.GloasForkEpoch = 0
params.OverrideBeaconConfig(cfg)
headState, err := util.NewBeaconStateGloas()
require.NoError(t, err)
require.NoError(t, headState.SetSlot(0))
vs := &Server{}
result := vs.getPayloadAttestations(t.Context(), headState)
require.NotNil(t, result)
require.Equal(t, 0, len(result))
}

View File

@@ -0,0 +1,203 @@
package validator
import (
"bytes"
"context"
"fmt"
coregloas "github.com/OffchainLabs/prysm/v7/beacon-chain/core/gloas"
"github.com/OffchainLabs/prysm/v7/config/params"
consensusblocks "github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
"google.golang.org/protobuf/types/known/emptypb"
)
// storeExecutionPayloadEnvelope creates and caches the execution payload envelope
// after the block is fully built (state root set). The envelope is cached with a
// zeroed state root; the actual post-payload state root is computed lazily in
// GetExecutionPayloadEnvelope once the block has been submitted and the post-block
// state is available via StateGen.
func (vs *Server) storeExecutionPayloadEnvelope(
sBlk interfaces.SignedBeaconBlock,
local *consensusblocks.GetPayloadResponse,
) error {
blockRoot, err := sBlk.Block().HashTreeRoot()
if err != nil {
return errors.Wrap(err, "could not compute block hash tree root")
}
payload := extractExecutionPayloadDeneb(local)
envelope := &ethpb.ExecutionPayloadEnvelope{
Payload: payload,
ExecutionRequests: local.ExecutionRequests,
BuilderIndex: params.BeaconConfig().BuilderIndexSelfBuild,
BeaconBlockRoot: blockRoot[:],
Slot: sBlk.Block().Slot(),
StateRoot: make([]byte, 32), // zeroed; computed lazily in GetExecutionPayloadEnvelope
}
vs.setExecutionPayloadEnvelope(envelope)
return nil
}
func extractExecutionPayloadDeneb(local *consensusblocks.GetPayloadResponse) *enginev1.ExecutionPayloadDeneb {
if local == nil || local.ExecutionData == nil || local.ExecutionData.IsNil() {
return nil
}
if p, ok := local.ExecutionData.Proto().(*enginev1.ExecutionPayloadDeneb); ok {
return p
}
return nil
}
func (vs *Server) setExecutionPayloadEnvelope(envelope *ethpb.ExecutionPayloadEnvelope) {
if envelope == nil {
return
}
vs.executionPayloadEnvelopeMu.Lock()
defer vs.executionPayloadEnvelopeMu.Unlock()
vs.executionPayloadEnvelope = envelope
}
func (vs *Server) getExecutionPayloadEnvelope(slot primitives.Slot) (*ethpb.ExecutionPayloadEnvelope, bool) {
vs.executionPayloadEnvelopeMu.RLock()
envelope := vs.executionPayloadEnvelope
vs.executionPayloadEnvelopeMu.RUnlock()
if envelope == nil {
return nil, false
}
if envelope.Slot != slot {
return nil, false
}
return envelope, true
}
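The setter/getter pair above implements a one-deep, slot-keyed cache: a store overwrites the previous envelope, and a read only hits when the requested slot matches. A self-contained sketch of that concurrency pattern, with a stand-in `Envelope` type instead of the protobuf message:

```go
package main

import (
	"fmt"
	"sync"
)

// Envelope is a stand-in for ethpb.ExecutionPayloadEnvelope.
type Envelope struct {
	Slot uint64
	Data string
}

// Cache holds at most one envelope: the most recently stored one.
// An RWMutex lets concurrent readers proceed while writes are exclusive,
// the same shape as the Server's executionPayloadEnvelope fields above.
type Cache struct {
	mu  sync.RWMutex
	env *Envelope
}

// Set stores a new envelope, silently overwriting any previous one.
func (c *Cache) Set(e *Envelope) {
	if e == nil {
		return
	}
	c.mu.Lock()
	defer c.mu.Unlock()
	c.env = e
}

// Get returns the cached envelope only if its slot matches the request.
func (c *Cache) Get(slot uint64) (*Envelope, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	if c.env == nil || c.env.Slot != slot {
		return nil, false
	}
	return c.env, true
}

func main() {
	c := &Cache{}
	c.Set(&Envelope{Slot: 42, Data: "payload"})
	if e, ok := c.Get(42); ok {
		fmt.Println(e.Data)
	}
	_, ok := c.Get(999) // slot mismatch: cache miss
	fmt.Println(ok)
}
```

A single slot suffices here because a proposer only ever serves the envelope for the block it just built; an older envelope is useless once a new slot's block exists.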
// GetExecutionPayloadEnvelope implements the gRPC endpoint:
// /eth/v1alpha1/validator/execution_payload_envelope/{slot}/{builder_index}
// It returns the stored execution payload envelope for a slot/builder and, for
// self-build envelopes, computes the post-payload state root on demand.
func (vs *Server) GetExecutionPayloadEnvelope(
ctx context.Context,
req *ethpb.ExecutionPayloadEnvelopeRequest,
) (*ethpb.ExecutionPayloadEnvelopeResponse, error) {
ctx, span := trace.StartSpan(ctx, "ProposerServer.GetExecutionPayloadEnvelope")
defer span.End()
if req == nil {
return nil, status.Error(codes.InvalidArgument, "request cannot be nil")
}
span.SetAttributes(trace.Int64Attribute("slot", int64(req.Slot)))
if slots.ToEpoch(req.Slot) < params.BeaconConfig().GloasForkEpoch {
return nil, status.Errorf(codes.InvalidArgument,
"execution payload envelopes are not supported before Gloas fork (slot %d)", req.Slot)
}
envelope, found := vs.getExecutionPayloadEnvelope(req.Slot)
if !found {
return nil, status.Errorf(codes.NotFound,
"execution payload envelope not found for slot %d", req.Slot)
}
if bytes.Equal(envelope.StateRoot, make([]byte, 32)) {
// Lazily set the state root in the envelope by applying the payload envelope to the post-block state.
roEnvelope, err := consensusblocks.WrappedROExecutionPayloadEnvelope(envelope)
if err != nil {
return nil, status.Errorf(codes.Internal, "could not wrap envelope: %v", err)
}
stateRoot, err := vs.computePostPayloadStateRoot(ctx, roEnvelope)
if err != nil {
return nil, status.Errorf(codes.Internal, "could not compute post-payload state root: %v", err)
}
vs.executionPayloadEnvelopeMu.Lock()
envelope.StateRoot = stateRoot
vs.executionPayloadEnvelopeMu.Unlock()
}
return &ethpb.ExecutionPayloadEnvelopeResponse{
Envelope: envelope,
}, nil
}
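The zero-state-root branch above is a lazy-initialization pattern: an all-zero 32-byte root acts as a "not yet computed" sentinel, and the first read that sees it computes and stores the real value. A minimal sketch with a fake compute step standing in for the post-payload state transition:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// record holds a root that may not be computed yet; an all-zero root is the
// "pending" sentinel, as with the envelope's StateRoot above.
type record struct {
	mu   sync.Mutex
	root []byte
}

// Root returns the stored root, computing and caching it on first use.
// compute stands in for the real state-root derivation.
func (r *record) Root(compute func() []byte) []byte {
	r.mu.Lock()
	defer r.mu.Unlock()
	if bytes.Equal(r.root, make([]byte, 32)) {
		r.root = compute()
	}
	return r.root
}

func main() {
	calls := 0
	compute := func() []byte {
		calls++
		out := make([]byte, 32)
		out[0] = 0xAB
		return out
	}
	r := &record{root: make([]byte, 32)}
	r.Root(compute)
	r.Root(compute) // second read hits the cached root
	fmt.Println(calls)
}
```

The sketch holds one lock across both the sentinel check and the computation, so two concurrent callers cannot both run compute; a check under a read lock followed by a write under a separate write lock would admit duplicate computation.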
// computePostPayloadStateRoot retrieves the post-block state (after the block has
// been submitted and processed) and applies the execution payload state mutations
// to compute the post-payload state root for the envelope.
func (vs *Server) computePostPayloadStateRoot(ctx context.Context, envelope interfaces.ROExecutionPayloadEnvelope) ([]byte, error) {
ctx, span := trace.StartSpan(ctx, "ProposerServer.computePostPayloadStateRoot")
defer span.End()
beaconState, err := vs.StateGen.StateByRoot(ctx, envelope.BeaconBlockRoot())
if err != nil {
return nil, errors.Wrap(err, "could not retrieve post-block state")
}
beaconState = beaconState.Copy()
if err := coregloas.ApplyExecutionPayload(ctx, beaconState, envelope); err != nil {
return nil, errors.Wrapf(err, "could not apply execution payload at slot %d", beaconState.Slot())
}
root, err := beaconState.HashTreeRoot(ctx)
if err != nil {
return nil, errors.Wrapf(err, "could not compute post-payload state root at slot %d", beaconState.Slot())
}
return root[:], nil
}
// PublishExecutionPayloadEnvelope validates and broadcasts a signed execution payload envelope.
// This is called by validators after signing the envelope retrieved from GetExecutionPayloadEnvelope.
//
// gRPC endpoint: POST /eth/v1alpha1/validator/execution_payload_envelope
func (vs *Server) PublishExecutionPayloadEnvelope(
ctx context.Context,
req *ethpb.SignedExecutionPayloadEnvelope,
) (*emptypb.Empty, error) {
ctx, span := trace.StartSpan(ctx, "ProposerServer.PublishExecutionPayloadEnvelope")
defer span.End()
if req == nil || req.Message == nil {
return nil, status.Error(codes.InvalidArgument, "signed envelope cannot be nil")
}
if slots.ToEpoch(req.Message.Slot) < params.BeaconConfig().GloasForkEpoch {
return nil, status.Errorf(codes.InvalidArgument,
"execution payload envelopes are not supported before Gloas fork (slot %d)", req.Message.Slot)
}
beaconBlockRoot := bytesutil.ToBytes32(req.Message.BeaconBlockRoot)
span.SetAttributes(
trace.Int64Attribute("slot", int64(req.Message.Slot)),
trace.Int64Attribute("builderIndex", int64(req.Message.BuilderIndex)),
trace.StringAttribute("beaconBlockRoot", fmt.Sprintf("%#x", beaconBlockRoot[:8])),
)
log := log.WithFields(logrus.Fields{
"slot": req.Message.Slot,
"builderIndex": req.Message.BuilderIndex,
"beaconBlockRoot": fmt.Sprintf("%#x", beaconBlockRoot[:8]),
})
log.Info("Publishing signed execution payload envelope")
if err := vs.P2P.Broadcast(ctx, req); err != nil {
return nil, status.Errorf(codes.Internal, "failed to broadcast execution payload envelope: %v", err)
}
// TODO: Receive the envelope locally following the broadcastReceiveBlock pattern.
// TODO: Build and broadcast data column sidecars from the cached blobs bundle.
// In Gloas, blob data is delivered alongside the execution payload envelope
// rather than with the beacon block (which only carries the bid). Not needed
// for devnet-0.
log.Info("Successfully published execution payload envelope")
return &emptypb.Empty{}, nil
}

View File

@@ -0,0 +1,257 @@
package validator
import (
"math/big"
"testing"
mockp2p "github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/testing"
mockstategen "github.com/OffchainLabs/prysm/v7/beacon-chain/state/stategen/mock"
"github.com/OffchainLabs/prysm/v7/config/params"
consensusblocks "github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
)
func TestExtractExecutionPayloadDeneb(t *testing.T) {
payload := &enginev1.ExecutionPayloadDeneb{
ParentHash: make([]byte, 32),
FeeRecipient: make([]byte, 20),
StateRoot: make([]byte, 32),
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
ExtraData: make([]byte, 0),
}
ed, err := consensusblocks.WrappedExecutionPayloadDeneb(payload)
require.NoError(t, err)
local := &consensusblocks.GetPayloadResponse{
ExecutionData: ed,
Bid: big.NewInt(0),
}
result := extractExecutionPayloadDeneb(local)
require.NotNil(t, result)
require.DeepEqual(t, payload, result)
}
func TestExtractExecutionPayloadDeneb_Nil(t *testing.T) {
require.Equal(t, true, extractExecutionPayloadDeneb(nil) == nil)
require.Equal(t, true, extractExecutionPayloadDeneb(&consensusblocks.GetPayloadResponse{}) == nil)
}
func TestSetGetExecutionPayloadEnvelope(t *testing.T) {
slot := primitives.Slot(42)
envelope := &ethpb.ExecutionPayloadEnvelope{
Payload: &enginev1.ExecutionPayloadDeneb{
ParentHash: make([]byte, 32),
FeeRecipient: make([]byte, 20),
StateRoot: make([]byte, 32),
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
},
BuilderIndex: primitives.BuilderIndex(7),
BeaconBlockRoot: make([]byte, 32),
Slot: slot,
StateRoot: make([]byte, 32),
}
vs := &Server{}
vs.setExecutionPayloadEnvelope(envelope)
got, found := vs.getExecutionPayloadEnvelope(slot)
require.Equal(t, true, found)
require.DeepEqual(t, envelope, got)
}
func TestGetExecutionPayloadEnvelope_SlotMismatch(t *testing.T) {
envelope := &ethpb.ExecutionPayloadEnvelope{
Payload: &enginev1.ExecutionPayloadDeneb{
ParentHash: make([]byte, 32),
FeeRecipient: make([]byte, 20),
StateRoot: make([]byte, 32),
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
},
BuilderIndex: primitives.BuilderIndex(7),
BeaconBlockRoot: make([]byte, 32),
Slot: 42,
StateRoot: make([]byte, 32),
}
vs := &Server{}
vs.setExecutionPayloadEnvelope(envelope)
_, found := vs.getExecutionPayloadEnvelope(999)
require.Equal(t, false, found)
}
func TestGetExecutionPayloadEnvelope_Nil(t *testing.T) {
vs := &Server{}
_, found := vs.getExecutionPayloadEnvelope(1)
require.Equal(t, false, found)
}
func TestGetExecutionPayloadEnvelopeRPC_NilRequest(t *testing.T) {
vs := &Server{}
_, err := vs.GetExecutionPayloadEnvelope(t.Context(), nil)
require.ErrorContains(t, "request cannot be nil", err)
}
func TestGetExecutionPayloadEnvelopeRPC_PreFork(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.GloasForkEpoch = 10
params.OverrideBeaconConfig(cfg)
vs := &Server{}
_, err := vs.GetExecutionPayloadEnvelope(t.Context(), &ethpb.ExecutionPayloadEnvelopeRequest{
Slot: 0, // epoch 0, before GloasForkEpoch 10
})
require.ErrorContains(t, "not supported before Gloas fork", err)
}
func TestPublishExecutionPayloadEnvelope_NilRequest(t *testing.T) {
vs := &Server{}
_, err := vs.PublishExecutionPayloadEnvelope(t.Context(), nil)
require.ErrorContains(t, "signed envelope cannot be nil", err)
_, err = vs.PublishExecutionPayloadEnvelope(t.Context(), &ethpb.SignedExecutionPayloadEnvelope{})
require.ErrorContains(t, "signed envelope cannot be nil", err)
}
func TestPublishExecutionPayloadEnvelope_PreFork(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.GloasForkEpoch = 10
params.OverrideBeaconConfig(cfg)
vs := &Server{}
_, err := vs.PublishExecutionPayloadEnvelope(t.Context(), &ethpb.SignedExecutionPayloadEnvelope{
Message: &ethpb.ExecutionPayloadEnvelope{
Slot: 0, // epoch 0, before GloasForkEpoch 10
},
})
require.ErrorContains(t, "not supported before Gloas fork", err)
}
func TestGetExecutionPayloadEnvelopeRPC_StateRootAlreadySet(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.GloasForkEpoch = 0
params.OverrideBeaconConfig(cfg)
stateRoot := make([]byte, 32)
stateRoot[0] = 0xAB // Non-zero state root
envelope := &ethpb.ExecutionPayloadEnvelope{
Payload: &enginev1.ExecutionPayloadDeneb{
ParentHash: make([]byte, 32),
FeeRecipient: make([]byte, 20),
StateRoot: make([]byte, 32),
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
},
BuilderIndex: primitives.BuilderIndex(0),
BeaconBlockRoot: make([]byte, 32),
Slot: 1,
StateRoot: stateRoot,
}
vs := &Server{}
vs.setExecutionPayloadEnvelope(envelope)
resp, err := vs.GetExecutionPayloadEnvelope(t.Context(), &ethpb.ExecutionPayloadEnvelopeRequest{
Slot: 1,
})
require.NoError(t, err)
require.NotNil(t, resp)
require.DeepEqual(t, envelope, resp.Envelope)
require.DeepEqual(t, stateRoot, resp.Envelope.StateRoot)
}
func TestGetExecutionPayloadEnvelopeRPC_ZeroStateRootComputesRoot(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.GloasForkEpoch = 0
params.OverrideBeaconConfig(cfg)
envelope := &ethpb.ExecutionPayloadEnvelope{
Payload: &enginev1.ExecutionPayloadDeneb{
ParentHash: make([]byte, 32),
FeeRecipient: make([]byte, 20),
StateRoot: make([]byte, 32),
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
},
BuilderIndex: primitives.BuilderIndex(0),
BeaconBlockRoot: make([]byte, 32),
Slot: 1,
StateRoot: make([]byte, 32), // Zero state root triggers computation
}
// Set up a mock state gen with a Gloas state for the beacon block root.
sg := mockstategen.NewService()
st, err := util.NewBeaconStateGloas()
require.NoError(t, err)
sg.AddStateForRoot(st, [32]byte{}) // envelope.BeaconBlockRoot is all zeros
vs := &Server{
StateGen: sg,
}
vs.setExecutionPayloadEnvelope(envelope)
// The call should enter the lazy computation path. It will fail during
// ApplyExecutionPayload because the mock state doesn't satisfy all consistency
// checks, but that proves we entered the zero-state-root branch.
_, err = vs.GetExecutionPayloadEnvelope(t.Context(), &ethpb.ExecutionPayloadEnvelopeRequest{
Slot: 1,
})
require.ErrorContains(t, "could not compute post-payload state root", err)
}
func TestPublishExecutionPayloadEnvelope_Success(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.GloasForkEpoch = 0
params.OverrideBeaconConfig(cfg)
broadcaster := &mockp2p.MockBroadcaster{}
vs := &Server{
P2P: broadcaster,
}
req := &ethpb.SignedExecutionPayloadEnvelope{
Message: &ethpb.ExecutionPayloadEnvelope{
Slot: 1,
BuilderIndex: 0,
BeaconBlockRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
},
Signature: make([]byte, 96),
}
resp, err := vs.PublishExecutionPayloadEnvelope(t.Context(), req)
require.NoError(t, err)
require.NotNil(t, resp)
require.Equal(t, true, broadcaster.BroadcastCalled.Load())
require.Equal(t, 1, len(broadcaster.BroadcastMessages))
}

View File

@@ -6,6 +6,7 @@ package validator
import (
"bytes"
"context"
"sync"
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain"
@@ -27,7 +28,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/core"
"github.com/OffchainLabs/prysm/v7/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/stategen"
"github.com/OffchainLabs/prysm/v7/beacon-chain/sync"
prysmSync "github.com/OffchainLabs/prysm/v7/beacon-chain/sync"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
@@ -44,46 +45,48 @@ import (
// and committees in which particular validators need to perform their responsibilities,
// and more.
type Server struct {
Ctx context.Context
PayloadIDCache *cache.PayloadIDCache
TrackedValidatorsCache *cache.TrackedValidatorsCache
HeadFetcher blockchain.HeadFetcher
ForkFetcher blockchain.ForkFetcher
ForkchoiceFetcher blockchain.ForkchoiceFetcher
GenesisFetcher blockchain.GenesisFetcher
FinalizationFetcher blockchain.FinalizationFetcher
TimeFetcher blockchain.TimeFetcher
BlockFetcher execution.POWBlockFetcher
DepositFetcher cache.DepositFetcher
ChainStartFetcher execution.ChainStartFetcher
Eth1InfoFetcher execution.ChainInfoFetcher
OptimisticModeFetcher blockchain.OptimisticModeFetcher
SyncChecker sync.Checker
StateNotifier statefeed.Notifier
BlockNotifier blockfeed.Notifier
P2P p2p.Broadcaster
AttestationCache *cache.AttestationCache
AttPool attestations.Pool
SlashingsPool slashings.PoolManager
ExitPool voluntaryexits.PoolManager
SyncCommitteePool synccommittee.Pool
BlockReceiver blockchain.BlockReceiver
BlobReceiver blockchain.BlobReceiver
DataColumnReceiver blockchain.DataColumnReceiver
MockEth1Votes bool
Eth1BlockFetcher execution.POWBlockFetcher
PendingDepositsFetcher depositsnapshot.PendingDepositsFetcher
OperationNotifier opfeed.Notifier
StateGen stategen.StateManager
ReplayerBuilder stategen.ReplayerBuilder
BeaconDB db.HeadAccessDatabase
ExecutionEngineCaller execution.EngineCaller
BlockBuilder builder.BlockBuilder
BLSChangesPool blstoexec.PoolManager
ClockWaiter startup.ClockWaiter
CoreService *core.Service
AttestationStateFetcher blockchain.AttestationStateFetcher
GraffitiInfo *execution.GraffitiInfo
Ctx context.Context
PayloadIDCache *cache.PayloadIDCache
TrackedValidatorsCache *cache.TrackedValidatorsCache
executionPayloadEnvelopeMu sync.RWMutex
executionPayloadEnvelope *ethpb.ExecutionPayloadEnvelope
HeadFetcher blockchain.HeadFetcher
ForkFetcher blockchain.ForkFetcher
ForkchoiceFetcher blockchain.ForkchoiceFetcher
GenesisFetcher blockchain.GenesisFetcher
FinalizationFetcher blockchain.FinalizationFetcher
TimeFetcher blockchain.TimeFetcher
BlockFetcher execution.POWBlockFetcher
DepositFetcher cache.DepositFetcher
ChainStartFetcher execution.ChainStartFetcher
Eth1InfoFetcher execution.ChainInfoFetcher
OptimisticModeFetcher blockchain.OptimisticModeFetcher
SyncChecker prysmSync.Checker
StateNotifier statefeed.Notifier
BlockNotifier blockfeed.Notifier
P2P p2p.Broadcaster
AttestationCache *cache.AttestationCache
AttPool attestations.Pool
SlashingsPool slashings.PoolManager
ExitPool voluntaryexits.PoolManager
SyncCommitteePool synccommittee.Pool
BlockReceiver blockchain.BlockReceiver
BlobReceiver blockchain.BlobReceiver
DataColumnReceiver blockchain.DataColumnReceiver
MockEth1Votes bool
Eth1BlockFetcher execution.POWBlockFetcher
PendingDepositsFetcher depositsnapshot.PendingDepositsFetcher
OperationNotifier opfeed.Notifier
StateGen stategen.StateManager
ReplayerBuilder stategen.ReplayerBuilder
BeaconDB db.HeadAccessDatabase
ExecutionEngineCaller execution.EngineCaller
BlockBuilder builder.BlockBuilder
BLSChangesPool blstoexec.PoolManager
ClockWaiter startup.ClockWaiter
CoreService *core.Service
AttestationStateFetcher blockchain.AttestationStateFetcher
GraffitiInfo *execution.GraffitiInfo
}
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.

View File

@@ -30,6 +30,7 @@ type writeOnlyGloasFields interface {
IncreaseBuilderBalance(index primitives.BuilderIndex, amount uint64) error
AddBuilderFromDeposit(pubkey [fieldparams.BLSPubkeyLength]byte, withdrawalCredentials [fieldparams.RootLength]byte, amount uint64) error
UpdatePendingPaymentWeight(att ethpb.Att, indices []uint64, participatedFlags map[uint8]bool) error
OnboardBuildersFromPendingDeposits() error
}
type readOnlyGloasFields interface {

View File

@@ -26,6 +26,7 @@ go_library(
"getters_withdrawal.go",
"gloas.go",
"hasher.go",
"log.go",
"multi_value_slices.go",
"proofs.go",
"readonly_validator.go",
@@ -85,6 +86,7 @@ go_library(
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)
@@ -134,6 +136,7 @@ go_test(
embed = [":go_default_library"],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native/types:go_default_library",

View File

@@ -304,6 +304,10 @@ func (b *BeaconState) BuilderIndexByPubkey(pubkey [fieldparams.BLSPubkeyLength]b
b.lock.RLock()
defer b.lock.RUnlock()
return b.builderIndexByPubkey(pubkey)
}
func (b *BeaconState) builderIndexByPubkey(pubkey [fieldparams.BLSPubkeyLength]byte) (primitives.BuilderIndex, bool) {
for i, builder := range b.builders {
if builder == nil {
continue

View File

@@ -139,6 +139,11 @@ func (b *BeaconState) ValidatorIndexByPubkey(key [fieldparams.BLSPubkeyLength]by
b.lock.RLock()
defer b.lock.RUnlock()
return b.validatorIndexByPubkey(key)
}
// Lock free version of ValidatorIndexByPubkey. This assumes that a lock is already held on BeaconState.
func (b *BeaconState) validatorIndexByPubkey(key [fieldparams.BLSPubkeyLength]byte) (primitives.ValidatorIndex, bool) {
numOfVals := b.validatorsMultiValue.Len(b)
idx, ok := b.valMapHandler.Get(key)

View File

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package state_native
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "beacon-chain/state/state-native")

View File

@@ -3,6 +3,7 @@ package state_native
import (
"fmt"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native/types"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/stateutil"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
@@ -251,6 +252,10 @@ func (b *BeaconState) IncreaseBuilderBalance(index primitives.BuilderIndex, amou
b.lock.Lock()
defer b.lock.Unlock()
return b.increaseBuilderBalance(index, amount)
}
func (b *BeaconState) increaseBuilderBalance(index primitives.BuilderIndex, amount uint64) error {
if b.builders == nil || uint64(index) >= uint64(len(b.builders)) {
return fmt.Errorf("builder index %d out of bounds", index)
}
@@ -284,6 +289,14 @@ func (b *BeaconState) AddBuilderFromDeposit(pubkey [fieldparams.BLSPubkeyLength]
b.lock.Lock()
defer b.lock.Unlock()
return b.addBuilderFromDepositAtEpoch(pubkey, withdrawalCredentials, amount, slots.ToEpoch(b.slot))
}
func (b *BeaconState) addBuilderFromDepositAtEpoch(pubkey [fieldparams.BLSPubkeyLength]byte, withdrawalCredentials [fieldparams.RootLength]byte, amount uint64, depositEpoch primitives.Epoch) error {
if b.version < version.Gloas {
return errNotSupported("AddBuilderFromDeposit", b.version)
}
currentEpoch := slots.ToEpoch(b.slot)
index := b.builderInsertionIndex(currentEpoch)
@@ -292,7 +305,7 @@ func (b *BeaconState) AddBuilderFromDeposit(pubkey [fieldparams.BLSPubkeyLength]
Version: []byte{withdrawalCredentials[0]},
ExecutionAddress: bytesutil.SafeCopyBytes(withdrawalCredentials[12:]),
Balance: primitives.Gwei(amount),
DepositEpoch: currentEpoch,
DepositEpoch: depositEpoch,
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
}
@@ -445,3 +458,132 @@ func (b *BeaconState) UpdatePendingPaymentWeight(att ethpb.Att, indices []uint64
return nil
}
// OnboardBuildersFromPendingDeposits applies any pending builder deposits at the fork.
// It mutates the state and prunes pending deposits accordingly.
//
// <spec fn="onboard_builders_from_pending_deposits" fork="gloas">
// def onboard_builders_from_pending_deposits(state: BeaconState) -> None:
// """
// Applies any pending deposit for builders, effectively
// onboarding builders at the fork.
// """
// validator_pubkeys = [v.pubkey for v in state.validators]
//
// pending_deposits = []
// for deposit in state.pending_deposits:
// # Deposits for existing validators stay in pending queue
// if deposit.pubkey in validator_pubkeys:
// pending_deposits.append(deposit)
// continue
//
// # If the pubkey is associated with a builder that was created in a
// # previous iteration or it is a builder deposit, try to apply the
// # deposit to the new/existing builder. Note that the function
// # apply_deposit_for_builder can mutate the state and may add a builder
// # to the registry. For this reason, the list of builder pubkeys must
// # be recomputed each iteration.
// builder_pubkeys = [b.pubkey for b in state.builders]
// is_existing_builder = deposit.pubkey in builder_pubkeys
// has_builder_credentials = is_builder_withdrawal_credential(deposit.withdrawal_credentials)
// if is_existing_builder or has_builder_credentials:
// apply_deposit_for_builder(
// state,
// deposit.pubkey,
// deposit.withdrawal_credentials,
// deposit.amount,
// deposit.signature,
// deposit.slot,
// )
// continue
//
// # If there is a pending deposit for a new validator that has a valid
// # signature, track the pubkey so that subsequent builder deposits for
// # the same pubkey stay in pending (applied to the validator later)
// # rather than creating a builder. Deposits with invalid signatures are
// # dropped here since they would fail in apply_pending_deposit anyway.
// if is_valid_deposit_signature(
// deposit.pubkey, deposit.withdrawal_credentials, deposit.amount, deposit.signature
// ):
// validator_pubkeys.append(deposit.pubkey)
// pending_deposits.append(deposit)
//
// state.pending_deposits = pending_deposits
// </spec>
func (b *BeaconState) OnboardBuildersFromPendingDeposits() error {
if b.version < version.Gloas {
return errNotSupported("OnboardBuildersFromPendingDeposits", b.version)
}
b.lock.Lock()
defer b.lock.Unlock()
pendingDeposits := b.pendingDeposits
newPendingDeposits := make([]*ethpb.PendingDeposit, 0, len(pendingDeposits))
newValidatorPubkeys := make(map[[fieldparams.BLSPubkeyLength]byte]bool)
for _, deposit := range pendingDeposits {
pubkey := bytesutil.ToBytes48(deposit.PublicKey)
if _, ok := newValidatorPubkeys[pubkey]; ok {
newPendingDeposits = append(newPendingDeposits, deposit)
continue
}
if _, ok := b.validatorIndexByPubkey(pubkey); ok {
newPendingDeposits = append(newPendingDeposits, deposit)
continue
}
if idx, ok := b.builderIndexByPubkey(pubkey); ok {
if err := b.increaseBuilderBalance(idx, deposit.Amount); err != nil {
return err
}
continue
}
if helpers.IsBuilderWithdrawalCredential(deposit.WithdrawalCredentials) {
valid, err := helpers.IsValidDepositSignature(&ethpb.Deposit_Data{
PublicKey: deposit.PublicKey,
WithdrawalCredentials: deposit.WithdrawalCredentials,
Amount: deposit.Amount,
Signature: deposit.Signature,
})
if err != nil {
log.WithField("pubkey", fmt.Sprintf("%x", deposit.PublicKey)).WithError(err).Debug("Could not verify builder deposit signature")
continue
}
if valid {
depositEpoch := slots.ToEpoch(deposit.Slot)
if err := b.addBuilderFromDepositAtEpoch(pubkey, bytesutil.ToBytes32(deposit.WithdrawalCredentials), deposit.Amount, depositEpoch); err != nil {
log.WithField("pubkey", fmt.Sprintf("%x", deposit.PublicKey)).WithError(err).Debug("Failed to apply builder deposit")
continue
}
} else {
log.WithField("pubkey", fmt.Sprintf("%x", deposit.PublicKey)).Debug("Invalid signature for builder deposit")
}
continue
}
valid, err := helpers.IsValidDepositSignature(&ethpb.Deposit_Data{
PublicKey: deposit.PublicKey,
WithdrawalCredentials: deposit.WithdrawalCredentials,
Amount: deposit.Amount,
Signature: deposit.Signature,
})
if err != nil {
log.WithField("pubkey", fmt.Sprintf("%x", deposit.PublicKey)).WithError(err).Debug("Could not verify validator deposit signature")
}
if valid {
newValidatorPubkeys[pubkey] = true
newPendingDeposits = append(newPendingDeposits, deposit)
} else {
log.WithField("pubkey", fmt.Sprintf("%x", deposit.PublicKey)).Debug("Invalid signature for validator deposit")
}
}
b.sharedFieldReferences[types.PendingDeposits].MinusRef()
b.sharedFieldReferences[types.PendingDeposits] = stateutil.NewRef(1)
b.pendingDeposits = newPendingDeposits
b.markFieldAsDirty(types.PendingDeposits)
return nil
}
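The single pass above partitions pending deposits three ways: kept pending (existing or newly seen validator pubkeys), applied to a builder (existing builder or builder-credentialed deposit), or dropped. A simplified stand-alone sketch of that classification, with string pubkeys, a `"builder:"` prefix standing in for the withdrawal-credential byte, and signature checks omitted:

```go
package main

import (
	"fmt"
	"strings"
)

// deposit is a stripped-down stand-in for ethpb.PendingDeposit.
type deposit struct {
	pubkey string
	creds  string // "builder:" prefix stands in for the builder credential byte
	amount uint64
}

// onboard walks deposits once. Deposits for known validators (or pubkeys that
// earlier in the walk created a validator) stay pending; builder-credentialed
// deposits create or top up builders; everything else is treated as a new
// validator deposit and kept pending.
func onboard(validators map[string]bool, builders map[string]uint64, deposits []deposit) []deposit {
	var pending []deposit
	for _, d := range deposits {
		if validators[d.pubkey] {
			pending = append(pending, d)
			continue
		}
		if _, ok := builders[d.pubkey]; ok {
			builders[d.pubkey] += d.amount // top up existing builder
			continue
		}
		if strings.HasPrefix(d.creds, "builder:") {
			builders[d.pubkey] = d.amount // onboard new builder
			continue
		}
		// New validator deposit: later builder deposits for this
		// pubkey must stay pending rather than create a builder.
		validators[d.pubkey] = true
		pending = append(pending, d)
	}
	return pending
}

func main() {
	validators := map[string]bool{"v1": true}
	builders := map[string]uint64{"b1": 10}
	pending := onboard(validators, builders, []deposit{
		{pubkey: "v1", creds: "builder:x", amount: 7}, // existing validator: stays pending
		{pubkey: "b1", creds: "plain", amount: 5},     // existing builder: balance topped up
		{pubkey: "b2", creds: "builder:y", amount: 3}, // new builder onboarded
		{pubkey: "v2", creds: "plain", amount: 1},     // new validator: stays pending
		{pubkey: "v2", creds: "builder:z", amount: 2}, // same pubkey: stays pending
	})
	fmt.Println(len(pending), builders["b1"], builders["b2"])
}
```

The ordering dependence is the subtle part, matching the spec comment above: because a validator deposit earlier in the queue records its pubkey before later deposits are classified, a subsequent builder-credentialed deposit for the same key is held back instead of minting a builder.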

View File

@@ -4,10 +4,13 @@ import (
"bytes"
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native/types"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/stateutil"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/crypto/bls"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/testing/require"
@@ -810,3 +813,182 @@ func TestAddBuilderFromDeposit_CopyOnWrite(t *testing.T) {
require.Equal(t, uint(1), st.sharedFieldReferences[types.Builders].Refs())
require.Equal(t, uint(1), copied.sharedFieldReferences[types.Builders].Refs())
}
func TestOnboardBuildersFromPendingDeposits(t *testing.T) {
	t.Run("returns error before gloas", func(t *testing.T) {
		st := &BeaconState{version: version.Fulu}
		err := st.OnboardBuildersFromPendingDeposits()
		require.ErrorContains(t, "OnboardBuildersFromPendingDeposits", err)
	})
	t.Run("keeps pending deposits for existing validator", func(t *testing.T) {
		sk, err := bls.RandKey()
		require.NoError(t, err)
		pubkey := sk.PublicKey().Marshal()
		validator := &ethpb.Validator{PublicKey: pubkey}
		builderCreds := builderWithdrawalCredentials(0xAB)
		deposit := &ethpb.PendingDeposit{
			PublicKey:             pubkey,
			WithdrawalCredentials: builderCreds,
			Amount:                10,
			Signature:             make([]byte, fieldparams.BLSSignatureLength),
			Slot:                  0,
		}
		st := newGloasState(t, []*ethpb.Validator{validator}, nil, []*ethpb.PendingDeposit{deposit}, 0)
		require.NoError(t, st.OnboardBuildersFromPendingDeposits())
		require.Equal(t, 1, len(st.pendingDeposits))
		require.Equal(t, 0, len(st.builders))
	})
	t.Run("creates builder for valid builder deposit", func(t *testing.T) {
		sk, err := bls.RandKey()
		require.NoError(t, err)
		builderCreds := builderWithdrawalCredentials(0xCC)
		amount := uint64(42)
		depSlot := primitives.Slot(params.BeaconConfig().SlotsPerEpoch*2 + 1)
		deposit := newPendingDeposit(t, sk, builderCreds, amount, depSlot, true)
		st := newGloasState(t, nil, nil, []*ethpb.PendingDeposit{deposit}, 0)
		require.NoError(t, st.OnboardBuildersFromPendingDeposits())
		require.Equal(t, 0, len(st.pendingDeposits))
		require.Equal(t, 1, len(st.builders))
		builder := st.builders[0]
		require.DeepEqual(t, sk.PublicKey().Marshal(), builder.Pubkey)
		require.DeepEqual(t, builderCreds[12:], builder.ExecutionAddress)
		require.Equal(t, primitives.Gwei(amount), builder.Balance)
		require.Equal(t, slots.ToEpoch(depSlot), builder.DepositEpoch)
	})
	t.Run("increases balance for existing builder", func(t *testing.T) {
		sk, err := bls.RandKey()
		require.NoError(t, err)
		pubkey := sk.PublicKey().Marshal()
		builder := &ethpb.Builder{
			Pubkey:            pubkey,
			Balance:           10,
			WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
		}
		nonBuilderCreds := nonBuilderWithdrawalCredentials()
		deposit := newPendingDeposit(t, sk, nonBuilderCreds, 5, 0, false)
		st := newGloasState(t, nil, []*ethpb.Builder{builder}, []*ethpb.PendingDeposit{deposit}, 0)
		require.NoError(t, st.OnboardBuildersFromPendingDeposits())
		require.Equal(t, 0, len(st.pendingDeposits))
		require.Equal(t, 1, len(st.builders))
		require.Equal(t, primitives.Gwei(15), st.builders[0].Balance)
	})
	t.Run("drops invalid builder deposit", func(t *testing.T) {
		sk, err := bls.RandKey()
		require.NoError(t, err)
		builderCreds := builderWithdrawalCredentials(0xDD)
		deposit := newPendingDeposit(t, sk, builderCreds, 10, 0, false)
		st := newGloasState(t, nil, nil, []*ethpb.PendingDeposit{deposit}, 0)
		require.NoError(t, st.OnboardBuildersFromPendingDeposits())
		require.Equal(t, 0, len(st.pendingDeposits))
		require.Equal(t, 0, len(st.builders))
	})
	t.Run("validator deposit blocks later builder deposit for same pubkey", func(t *testing.T) {
		sk, err := bls.RandKey()
		require.NoError(t, err)
		validatorCreds := nonBuilderWithdrawalCredentials()
		builderCreds := builderWithdrawalCredentials(0xEE)
		depositValidator := newPendingDeposit(t, sk, validatorCreds, 5, 0, true)
		depositBuilder := newPendingDeposit(t, sk, builderCreds, 7, 0, true)
		st := newGloasState(t, nil, nil, []*ethpb.PendingDeposit{depositValidator, depositBuilder}, 0)
		require.NoError(t, st.OnboardBuildersFromPendingDeposits())
		require.Equal(t, 2, len(st.pendingDeposits))
		require.Equal(t, 0, len(st.builders))
	})
	t.Run("drops invalid non-builder deposit", func(t *testing.T) {
		sk, err := bls.RandKey()
		require.NoError(t, err)
		validatorCreds := nonBuilderWithdrawalCredentials()
		deposit := newPendingDeposit(t, sk, validatorCreds, 5, 0, false)
		st := newGloasState(t, nil, nil, []*ethpb.PendingDeposit{deposit}, 0)
		require.NoError(t, st.OnboardBuildersFromPendingDeposits())
		require.Equal(t, 0, len(st.pendingDeposits))
		require.Equal(t, 0, len(st.builders))
	})
}

func newGloasState(
	t *testing.T,
	validators []*ethpb.Validator,
	builders []*ethpb.Builder,
	pendingDeposits []*ethpb.PendingDeposit,
	slot primitives.Slot,
) *BeaconState {
	t.Helper()
	statePb, err := InitializeFromProtoUnsafeGloas(&ethpb.BeaconStateGloas{
		Slot:            slot,
		Validators:      validators,
		Builders:        builders,
		PendingDeposits: pendingDeposits,
	})
	require.NoError(t, err)
	st, ok := statePb.(*BeaconState)
	require.Equal(t, true, ok)
	return st
}

func builderWithdrawalCredentials(addrByte byte) []byte {
	wc := make([]byte, fieldparams.RootLength)
	wc[0] = params.BeaconConfig().BuilderWithdrawalPrefixByte
	for i := 12; i < len(wc); i++ {
		wc[i] = addrByte
	}
	return wc
}

func nonBuilderWithdrawalCredentials() []byte {
	wc := make([]byte, fieldparams.RootLength)
	wc[0] = params.BeaconConfig().BLSWithdrawalPrefixByte
	return wc
}

func newPendingDeposit(
	t *testing.T,
	sk bls.SecretKey,
	withdrawalCredentials []byte,
	amount uint64,
	slot primitives.Slot,
	valid bool,
) *ethpb.PendingDeposit {
	t.Helper()
	signature := make([]byte, fieldparams.BLSSignatureLength)
	if valid {
		signature = signDeposit(t, sk, withdrawalCredentials, amount)
	}
	return &ethpb.PendingDeposit{
		PublicKey:             sk.PublicKey().Marshal(),
		WithdrawalCredentials: withdrawalCredentials,
		Amount:                amount,
		Signature:             signature,
		Slot:                  slot,
	}
}

func signDeposit(t *testing.T, sk bls.SecretKey, withdrawalCredentials []byte, amount uint64) []byte {
	t.Helper()
	domain, err := signing.ComputeDomain(params.BeaconConfig().DomainDeposit, nil, nil)
	require.NoError(t, err)
	msg := &ethpb.DepositMessage{
		PublicKey:             sk.PublicKey().Marshal(),
		WithdrawalCredentials: withdrawalCredentials,
		Amount:                amount,
	}
	signingRoot, err := signing.ComputeSigningRoot(msg, domain)
	require.NoError(t, err)
	sig := sk.Sign(signingRoot[:])
	return sig.Marshal()
}


@@ -54,6 +54,7 @@ go_library(
         "validate_aggregate_proof.go",
         "validate_attester_slashing.go",
         "validate_beacon_attestation.go",
+        "validate_beacon_block_gloas.go",
         "validate_beacon_blocks.go",
         "validate_blob.go",
         "validate_bls_to_execution_change.go",
@@ -211,6 +212,7 @@ go_test(
         "validate_aggregate_proof_test.go",
         "validate_attester_slashing_test.go",
         "validate_beacon_attestation_test.go",
+        "validate_beacon_block_gloas_test.go",
         "validate_beacon_blocks_test.go",
         "validate_blob_test.go",
         "validate_bls_to_execution_change_test.go",


@@ -0,0 +1,61 @@
package sync

import (
	"context"

	"github.com/OffchainLabs/prysm/v7/config/params"
	consensusblocks "github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
	"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
	"github.com/OffchainLabs/prysm/v7/runtime/version"
	"github.com/OffchainLabs/prysm/v7/time/slots"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
	"github.com/pkg/errors"
)

// validateExecutionPayloadBid validates execution payload bid gossip rules.
// [REJECT] The bid's parent (defined by bid.parent_block_root) equals the block's parent (defined by block.parent_root).
// [REJECT] The length of KZG commitments is less than or equal to the limitation defined in the consensus layer --
// i.e. validate that len(bid.blob_kzg_commitments) <= get_blob_parameters(get_current_epoch(state)).max_blobs_per_block
func (s *Service) validateExecutionPayloadBid(ctx context.Context, blk interfaces.ReadOnlyBeaconBlock) (pubsub.ValidationResult, error) {
	if blk.Version() < version.Gloas {
		return pubsub.ValidationAccept, nil
	}
	signedBid, err := blk.Body().SignedExecutionPayloadBid()
	if err != nil {
		return pubsub.ValidationIgnore, errors.Wrap(err, "unable to read bid from block")
	}
	wrappedBid, err := consensusblocks.WrappedROSignedExecutionPayloadBid(signedBid)
	if err != nil {
		return pubsub.ValidationIgnore, errors.Wrap(err, "unable to wrap signed execution payload bid")
	}
	bid, err := wrappedBid.Bid()
	if err != nil {
		return pubsub.ValidationIgnore, errors.Wrap(err, "unable to read bid from signed execution payload bid")
	}
	if bid.ParentBlockRoot() != blk.ParentRoot() {
		return pubsub.ValidationReject, errors.New("bid parent block root does not match block parent root")
	}
	maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlockAtEpoch(slots.ToEpoch(blk.Slot()))
	if bid.BlobKzgCommitmentCount() > uint64(maxBlobsPerBlock) {
		return pubsub.ValidationReject, errors.Wrapf(errRejectCommitmentLen, "%d > %d", bid.BlobKzgCommitmentCount(), maxBlobsPerBlock)
	}
	return pubsub.ValidationAccept, nil
}

// validateExecutionPayloadBidParentSeen validates parent payload gossip rules.
// [IGNORE] The block's parent execution payload (defined by bid.parent_block_hash) has been seen
// (via gossip or non-gossip sources) (a client MAY queue blocks for processing once the parent payload is retrieved).
func (s *Service) validateExecutionPayloadBidParentSeen(ctx context.Context, blk interfaces.ReadOnlyBeaconBlock) (pubsub.ValidationResult, error) {
	// TODO: Requires blockchain service changes to expose parent payload seen status.
	return pubsub.ValidationAccept, nil
}

// validateExecutionPayloadBidParentValid validates parent payload verification status.
// If execution_payload verification of block's execution payload parent by an execution node is complete:
// [REJECT] The block's execution payload parent (defined by bid.parent_block_hash) passes all validation.
func (s *Service) validateExecutionPayloadBidParentValid(ctx context.Context, blk interfaces.ReadOnlyBeaconBlock) (pubsub.ValidationResult, error) {
	// TODO: Requires blockchain service changes to expose execution payload parent validation status.
	return pubsub.ValidationAccept, nil
}


@@ -0,0 +1,75 @@
package sync

import (
	"context"
	"testing"

	fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
	"github.com/OffchainLabs/prysm/v7/config/params"
	"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
	"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
	"github.com/OffchainLabs/prysm/v7/testing/util"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
	"github.com/stretchr/testify/require"
)

func TestValidateExecutionPayloadBid_Accept(t *testing.T) {
	params.SetupTestConfigCleanup(t)
	ctx := context.Background()
	parentRoot := bytesutil.PadTo([]byte{0x01}, fieldparams.RootLength)
	block := util.NewBeaconBlockGloas()
	block.Block.ParentRoot = parentRoot
	block.Block.Body.SignedExecutionPayloadBid.Message.ParentBlockRoot = parentRoot
	block.Block.Body.SignedExecutionPayloadBid.Message.BlobKzgCommitments = nil
	wsb, err := blocks.NewSignedBeaconBlock(block)
	require.NoError(t, err)
	s := &Service{}
	res, err := s.validateExecutionPayloadBid(ctx, wsb.Block())
	require.NoError(t, err)
	require.Equal(t, pubsub.ValidationAccept, res)
}

func TestValidateExecutionPayloadBid_RejectParentRootMismatch(t *testing.T) {
	params.SetupTestConfigCleanup(t)
	ctx := context.Background()
	block := util.NewBeaconBlockGloas()
	block.Block.ParentRoot = bytesutil.PadTo([]byte{0x01}, fieldparams.RootLength)
	block.Block.Body.SignedExecutionPayloadBid.Message.ParentBlockRoot = bytesutil.PadTo([]byte{0x02}, fieldparams.RootLength)
	wsb, err := blocks.NewSignedBeaconBlock(block)
	require.NoError(t, err)
	s := &Service{}
	res, err := s.validateExecutionPayloadBid(ctx, wsb.Block())
	require.Error(t, err)
	require.Equal(t, pubsub.ValidationReject, res)
}

func TestValidateExecutionPayloadBid_RejectTooManyCommitments(t *testing.T) {
	params.SetupTestConfigCleanup(t)
	ctx := context.Background()
	parentRoot := bytesutil.PadTo([]byte{0x01}, fieldparams.RootLength)
	block := util.NewBeaconBlockGloas()
	block.Block.ParentRoot = parentRoot
	block.Block.Body.SignedExecutionPayloadBid.Message.ParentBlockRoot = parentRoot
	maxBlobs := params.BeaconConfig().MaxBlobsPerBlockAtEpoch(0)
	commitments := make([][]byte, maxBlobs+1)
	for i := range commitments {
		commitments[i] = bytesutil.PadTo([]byte{0x02}, fieldparams.BLSPubkeyLength)
	}
	block.Block.Body.SignedExecutionPayloadBid.Message.BlobKzgCommitments = commitments
	wsb, err := blocks.NewSignedBeaconBlock(block)
	require.NoError(t, err)
	s := &Service{}
	res, err := s.validateExecutionPayloadBid(ctx, wsb.Block())
	require.Error(t, err)
	require.Equal(t, pubsub.ValidationReject, res)
}


@@ -127,6 +127,9 @@ func (s *Service) validateBeaconBlockPubSub(ctx context.Context, pid peer.ID, ms
 		log.WithError(err).WithFields(getBlockFields(blk)).Debug("Received block with an invalid parent")
 		return pubsub.ValidationReject, err
 	}
+	if res, err := s.validateExecutionPayloadBidParentValid(ctx, blk.Block()); err != nil {
+		return res, err
+	}
 	s.pendingQueueLock.RLock()
 	if s.seenPendingBlocks[blockRoot] {
@@ -198,6 +201,16 @@ func (s *Service) validateBeaconBlockPubSub(ctx context.Context, pid peer.ID, ms
 		log.WithError(err).WithFields(getBlockFields(blk)).Debug("Could not identify parent for block")
 		return pubsub.ValidationIgnore, err
 	}
+	if res, err := s.validateExecutionPayloadBidParentSeen(ctx, blk.Block()); err != nil {
+		return res, err
+	}
+	if res, err := s.validateExecutionPayloadBid(ctx, blk.Block()); err != nil {
+		if res == pubsub.ValidationReject {
+			s.setBadBlock(ctx, blockRoot)
+		}
+		return res, err
+	}
 	err = s.validateBeaconBlock(ctx, blk, blockRoot)
 	if err != nil {
@@ -365,7 +378,7 @@ func (s *Service) blockVerifyingState(ctx context.Context, blk interfaces.ReadOn
 }
 
 func validateDenebBeaconBlock(blk interfaces.ReadOnlyBeaconBlock) error {
-	if blk.Version() < version.Deneb {
+	if blk.Version() < version.Deneb || blk.Version() >= version.Gloas {
 		return nil
 	}
 	commits, err := blk.Body().BlobKzgCommitments()
@@ -398,6 +411,10 @@ 
 // [IGNORE] The block's parent (defined by block.parent_root) passes all validation (including execution
 // node verification of the block.body.execution_payload).
 func (s *Service) validateBellatrixBeaconBlock(ctx context.Context, verifyingState state.ReadOnlyBeaconState, blk interfaces.ReadOnlyBeaconBlock) error {
+	if blk.Version() >= version.Gloas {
+		return nil
+	}
 	// Error if block and state are not the same version
 	if verifyingState.Version() != blk.Version() {
 		return errors.New("block and state are not the same version")

@@ -0,0 +1,3 @@
### Added
- New beacon API endpoint `eth/v2/node/version`.


@@ -0,0 +1,2 @@
### Fixed
- Wait for all nodes to reach mid-epoch before comparing heads to avoid epoch-boundary flakiness in E2E sync checks.

Some files were not shown because too many files have changed in this diff.