Compare commits

...

37 Commits

Author SHA1 Message Date
Radosław Kapka
8623a144d9 Write Content-Encoding header in the response properly when gzip encoding is requested (#15499)
* Write gzip header properly

* changelog

* regression test
2025-07-17 19:14:48 +00:00
Bastin
f3314d2d24 Abstract validation logic for saving LC updates into the store function (#15504)
* Unify LC API (updates)

* Remove unused fields in LC beacon API server

* refactor lc logic into core

* fix tests
2025-07-17 14:58:18 +00:00
Bastin
bcd65e7a4d Remove extra lc server fields (#15493)
* Unify LC API (updates)

* Remove unused fields in LC beacon API server
2025-07-17 11:50:16 +00:00
Bastin
dce89a1627 Unify LC API 2/2 (updates) (#15488)
* Unify LC API (updates)

* Update beacon-chain/core/light-client/store.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-07-17 10:59:51 +00:00
Manu NALEPA
09485c2062 Add bundle v2 support for submit blind block (#15503)
* Add bundle v2 support for submit blind block

* add tests

---------

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2025-07-17 07:51:28 +00:00
Manu NALEPA
9e014da0b9 Log: Add milliseconds to log timestamps (#15496) 2025-07-16 10:07:12 +00:00
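For illustration, millisecond-precision timestamps in logrus are typically enabled through the formatter's TimestampFormat; the following is a minimal sketch, not necessarily the exact configuration used in #15496.

package main

import (
	log "github.com/sirupsen/logrus"
)

func main() {
	// The reference layout with ".000" appended yields millisecond precision.
	// This formatter setup is an assumption for illustration only.
	log.SetFormatter(&log.TextFormatter{
		FullTimestamp:   true,
		TimestampFormat: "2006-01-02 15:04:05.000",
	})
	log.Info("Timestamps now carry milliseconds")
}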
Manu NALEPA
d8fedacc26 beaconBlockSubscriber: Implement data column sidecars reconstruction with data retrieved from the execution client when receiving a block via gossip. (#15483) 2025-07-16 06:00:23 +00:00
Potuz
3def16caaa Fix InitializeProposerLookahead (#15450)
* Fix InitializeProposerLookahead

Get the right Active validator indices for each epoch after the fork
transition.

Co-Authored-By: Claude <noreply@anthropic.com>

* Add test

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-07-16 03:16:11 +00:00
Rose Jethani
4e5bfa9760 Fix server ignoring requests to gzip data (#14982)
* files changed

* Update api/server/middleware/middleware.go

* fixed AcceptEncodingHeaderHandler

* Update changelog/rose2221-develop.md

* Added tests

* Update api/server/middleware/middleware_test.go

* updated bazel

* formatting

* fixed e2e_test

* zip only json

* zip only json

---------

Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-07-15 16:39:20 +00:00
terence
70ac53f991 fix(execution): skip genesis block retrieval when EIP-6110 is active (#15494) 2025-07-15 16:00:53 +00:00
terence
6f5ff03b42 optimize data column seen cache memory usage with slot-aware pruning (#15477)
* optimize data column seen cache memory usage with slot-aware pruning

* Manu's feedback

* Fix lint
2025-07-15 16:00:36 +00:00
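The slot-aware pruning idea: group "seen" entries by slot so that expiry frees whole slots at once instead of scanning every key. A rough sketch under assumed types — not Prysm's actual cache implementation:

package cache

import "sync"

// seenCache groups seen IDs by slot so pruning can drop whole slots at once.
// All names here are illustrative.
type seenCache struct {
	mu    sync.Mutex
	slots map[uint64]map[string]struct{} // slot -> set of seen IDs
}

func newSeenCache() *seenCache {
	return &seenCache{slots: make(map[uint64]map[string]struct{})}
}

func (c *seenCache) markSeen(slot uint64, id string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	s, ok := c.slots[slot]
	if !ok {
		s = make(map[string]struct{})
		c.slots[slot] = s
	}
	s[id] = struct{}{}
}

// pruneBefore frees the memory of every slot older than the given slot,
// an O(slots) sweep instead of an O(entries) one.
func (c *seenCache) pruneBefore(slot uint64) {
	c.mu.Lock()
	defer c.mu.Unlock()
	for s := range c.slots {
		if s < slot {
			delete(c.slots, s)
		}
	}
}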
Preston Van Loon
499d27b6ae Use time.Time instead of uint64 for genesis time (#15419)
* Convert genesis times from seconds to time.Time

* Fixing failed forkchoice tests in a new commit so it doesn't get worse

Fixing failed spectest tests in a new commit so it doesn't get worse

Fixing forkchoice tests, then spectests

* Fixing forkchoice tests, then spectests. Now asking for help...

* Fix TestForkChoice_GetProposerHead

* Fix broken build

* Resolve TODO(preston) items

* Changelog fragment

* Resolve TODO(preston) items again

* Resolve lint issues

* Use consistent field names for sinceSlotStart (no spaces)

* Manu's feedback

* Renamed StartTime -> UnsafeStartTime, marked as deprecated because it doesn't handle overflow scenarios.
Renamed SlotTime -> StartTime
Renamed SlotAt -> At
Handled the error in cases where StartTime was used.

@james-prysm feedback

* Revert beacon-chain/blockchain/receive_block_test.go from 1b7844de

* Fixing issues after rebase

* Accepted suggestions from @potuz

* Remove CanonicalHeadSlot from merge conflicts

---------

Co-authored-by: potuz <potuz@prysmaticlabs.com>
2025-07-14 21:04:50 +00:00
Kaloyan Tanev
56e8881bc1 Rework DV aggregation selection proofs (#15156)
* Rework DV selection proofs

* Add changelog

* Expect 1 call to DomainData in DV update duties test

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-07-14 15:33:26 +00:00
james-prysm
78f8411ad2 move validator run slot ticker (#15479)
* moving the ticker from chain start to right before the main loop, and also after the wait-for-activation edge case

* fixing unit test

* adding in a unit test

* adding in comment based on potuz's feedback
2025-07-11 19:39:52 +00:00
james-prysm
83943b5dd8 validator client: removing need to call canonical head api (#15480)
* removing need to call canonical head api

* typo

* removing unneeded tests

* fixing unit tests
2025-07-10 20:25:17 +00:00
james-prysm
bc7e4f7751 switching enable to disable for duties (#15445) 2025-07-10 18:57:36 +00:00
Manu NALEPA
02f8726e46 VerifiedROBlobFromDisk and VerifiedRODataColumnFromDisk: Add tests. (#15484) 2025-07-10 16:11:12 +00:00
terence
16b567f6af Add log capitalization analyzer and apply changes (#15452)
* Add log capitalization analyzer and apply fixes across codebase

Implements a new nogo analyzer to enforce proper log message capitalization and applies the fixes to all affected log statements throughout the beacon chain, validator, and supporting components.

Co-Authored-By: Claude <noreply@anthropic.com>

* Radek's feedback

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-07-10 13:43:38 +00:00
Bastin
5c1d827335 Unify LC API 1/2 (bootstrap) (#15476)
* add versionToForkEpoch map

* Unify LC API (bootstrap)
2025-07-10 13:07:46 +00:00
Bastin
68d7df0e4f add versionToForkEpoch map (#15482) 2025-07-10 12:57:28 +00:00
kasey
288b33750d Isolate committee cache (#15478)
* always init service through NewService

* move head state cache to service struct

* changelog

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-07-09 21:43:00 +00:00
terence
f4f48d6372 Optimize BuildBlobSidecars Merkle proof computation by pre-computing subtrees (#15473)
* Optimize BuildBlobSidecars Merkle proof computation by pre-computing subtrees

Co-Authored-By: Claude <noreply@anthropic.com>

* Add change log

* Fix change log

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-07-09 16:12:14 +00:00
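The subtree pre-computation pays off because every blob's inclusion proof shares the upper levels of the same Merkle tree, so the internal nodes can be hashed once and each proof becomes a series of lookups. A generic sketch of the idea using a plain SHA-256 binary tree — not Prysm's SSZ code:

package merkle

import "crypto/sha256"

func hashPair(a, b [32]byte) [32]byte {
	return sha256.Sum256(append(a[:], b[:]...))
}

// buildLevels hashes all internal nodes once. levels[0] is the leaves and
// the last level is the root. Assumes len(leaves) is a power of two.
func buildLevels(leaves [][32]byte) [][][32]byte {
	levels := [][][32]byte{leaves}
	for cur := leaves; len(cur) > 1; {
		next := make([][32]byte, len(cur)/2)
		for i := range next {
			next[i] = hashPair(cur[2*i], cur[2*i+1])
		}
		levels = append(levels, next)
		cur = next
	}
	return levels
}

// proof collects the sibling at each level: O(log n) lookups per leaf,
// with the O(n) hashing shared across all leaves instead of redone per proof.
func proof(levels [][][32]byte, index int) [][32]byte {
	var p [][32]byte
	for _, level := range levels[:len(levels)-1] {
		p = append(p, level[index^1])
		index /= 2
	}
	return p
}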
james-prysm
f2d57f0b5f changes for safe validator shutdown and restarts on healthcheck (#15401)
* poc changes for safe validator shutdown

* simplifying health routine and adding safe shutdown after max restarts reached

* fixing health tests

* fixing tests

* changelog

* gofmt

* fixing runner

* improve how runner times out

* improvements to ux on logs

* linting

* adding in max healthcheck flag

* changelog

* Update james-prysm_safe-validator-shutdown.md

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/runner.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/service.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/runner.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/runner.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* addressing some feedback from radek

* addressing some more feedback

* fixing name based on feedback

* fixing mistake on max health checks

* conflict accidentally checked in

* Go 1.23 no longer requires stopping the ticker

* Update flags.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* wip no unit test for recursive healthy host find

* rework healthcheck

* gaz

* fixing bugs and improving logs with new monitor

* removing health tracker, fixing runner tests, and adding placeholder for monitor tests

* fixing event stream check

* linting

* adding in health monitor tests

* gaz

* improving test

* removing some log.fatals

* forgot to remove comment

* missed fatal removal

* doppelganger should exit the node safely now

* Update validator/client/health_monitor.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* radek review

* Update validator/client/validator.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/validator.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/health_monitor.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/health_monitor.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/health_monitor.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/validator.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* radek feedback

* read up on more suggestions and making fixes to channel

* suggested updates after more reading

* reverting some of this because it froze the validator after healthcheck failed

* fully reverting

* some improvements I found during testing

* Update cmd/validator/flags/flags.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* preston's feedback

* clarifications on changelog

* converted to using an event feed instead of my own channel publishing implementation, adding relevant logs

* preston log suggestion

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2025-07-09 15:39:06 +00:00
Radosław Kapka
7025e50a6c Make the multi-value slice permanent (#15414)
* Remove Old State Paths

* Changelog

* Gazelle

* Lint

* Fix State Tests

* fix tests, update native state code

* move ErrOutOfBounds error from consensus types to mvslice

* fix TestStreamEvents_OperationsEvents

* add missing gc to fuzz tests

* more test fixes

* build fix

---------

Co-authored-by: nisdas <nishdas93@gmail.com>
2025-07-08 17:45:20 +00:00
Radosław Kapka
961ea05454 Allow SSZ requests for pending deposits, partial withdrawals and consolidations (#15474) 2025-07-08 17:45:05 +00:00
terence
da5525096c Move data col reconstruction log to after reconstruction / save complete (#15475)
* Move data col reconstruction / save log to after they are done

* Rename to reconstructionDuration

* Rename to reconstructionAndSaveDuration
2025-07-08 14:49:31 +00:00
terence
2d2507b907 Attest timely by default (#15410)
* Attest timely by default

* Fix deprecated flag naming
2025-07-04 18:28:48 +00:00
terence
a701f07f3a Increase mainnet default builder gas limit to 45M (#15455) 2025-07-04 15:58:32 +00:00
terence
f4bbe5ca40 Add batch verifier limit (#15467) 2025-07-04 15:57:33 +00:00
Manu NALEPA
4be8de2476 PeerDAS: Implement reconstruction of data column sidecars retrieved from the execution client. (#15469)
* `BestFinalized`: No functional change. Improve comments and reduce scope.

* PeerDAS execution: Implement engine method `GetBlobsV2` and `ReconstructDataColumnSidecars`.

* Fix James' comment.

* Fix James' comment.
2025-07-03 22:35:28 +00:00
james-prysm
fac509a3e6 duties v2 fix no assignment panic (#15466)
* adding fix

* preston's suggestion
2025-07-03 15:44:51 +00:00
Manu NALEPA
b1ac8209b2 PeerDAS: Implement reconstruct. (#15454)
* PeerDAS: Implement reconstruct.

* Fix Preston's comment.

* Fix Preston's comment.
2025-07-02 19:02:55 +00:00
Bastin
74c9586c66 refactor lc kv tests (#15465)
* refactor lc kv tests

* clean up

* Update lightclient_test.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-07-02 13:24:32 +00:00
Bastin
f0ad3dfaeb Refactor lc bootstrap tests (#15462)
* fix versioning

* changelog

* fix blockchain tests

* fix linter issue

* fix spec tests

* fix default lc update version

* fix lc header version

* gzl

* clean up the code

* Update testing/spectest/shared/common/light_client/update_ranking.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* add fulu set up in update ranking

* pass att block to createDefaultLCUpdate

* address comments

* linter

* Update lightclient.go

* refactor lc bootstrap tests

* changelog

* sort imports

* refactor lc bootstrap tests

* changelog

* Implement the new Fulu Metadata. (#15440)

* clean up

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-07-02 13:23:39 +00:00
Bastin
2540196747 Put the initialization of LC Store behind the enable-light-client flag (#15464)
* put lc store behind flag

* lint

* remove extra line
2025-07-02 11:09:37 +00:00
Manu NALEPA
f133751cce --chain-config-file: Do not use any more mainnet boot nodes. (#15460) 2025-07-02 09:36:31 +00:00
Bastin
bddcc158e4 Fix LC versioning bug (#15400)
* fix versioning

* changelog

* fix blockchain tests

* fix linter issue

* fix spec tests

* fix default lc update version

* fix lc header version

* gzl

* clean up the code

* Update testing/spectest/shared/common/light_client/update_ranking.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* add fulu set up in update ranking

* pass att block to createDefaultLCUpdate

* address comments

* linter

* Update lightclient.go

* sort imports

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2025-07-01 21:40:18 +00:00
366 changed files with 7620 additions and 6266 deletions

View File

@@ -194,6 +194,7 @@ nogo(
"//tools/analyzers/gocognit:go_default_library",
"//tools/analyzers/ineffassign:go_default_library",
"//tools/analyzers/interfacechecker:go_default_library",
"//tools/analyzers/logcapitalization:go_default_library",
"//tools/analyzers/logruswitherror:go_default_library",
"//tools/analyzers/maligned:go_default_library",
"//tools/analyzers/nop:go_default_library",

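As context for the new nogo entry above: a vet-style analyzer for log capitalization can be written with golang.org/x/tools/go/analysis. The sketch below is a narrow, illustrative version; the real analyzer under tools/analyzers/logcapitalization covers more call sites and log levels.

package logcapitalization

import (
	"go/ast"
	"strconv"
	"unicode"

	"golang.org/x/tools/go/analysis"
)

// Analyzer flags log messages that start with a lowercase letter.
// Illustrative sketch: it only checks string literals passed to .Info.
var Analyzer = &analysis.Analyzer{
	Name: "logcapitalization",
	Doc:  "enforces capitalized log messages",
	Run:  run,
}

func run(pass *analysis.Pass) (interface{}, error) {
	for _, f := range pass.Files {
		ast.Inspect(f, func(n ast.Node) bool {
			call, ok := n.(*ast.CallExpr)
			if !ok || len(call.Args) == 0 {
				return true
			}
			sel, ok := call.Fun.(*ast.SelectorExpr)
			if !ok || sel.Sel.Name != "Info" {
				return true
			}
			lit, ok := call.Args[0].(*ast.BasicLit)
			if !ok {
				return true
			}
			msg, err := strconv.Unquote(lit.Value)
			if err != nil || msg == "" {
				return true
			}
			if unicode.IsLower([]rune(msg)[0]) {
				pass.Reportf(lit.Pos(), "log message should start with a capital letter: %q", msg)
			}
			return true
		})
	}
	return nil, nil
}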
View File

@@ -1,20 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"health.go",
"interfaces.go",
"mock.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/api/client/beacon/health",
visibility = ["//visibility:public"],
deps = ["@org_uber_go_mock//gomock:go_default_library"],
)
go_test(
name = "go_default_test",
srcs = ["health_test.go"],
embed = [":go_default_library"],
deps = ["@org_uber_go_mock//gomock:go_default_library"],
)

View File

@@ -1,58 +0,0 @@
package health
import (
"context"
"sync"
)
type NodeHealthTracker struct {
isHealthy *bool
healthChan chan bool
node Node
sync.RWMutex
}
func NewTracker(node Node) Tracker {
return &NodeHealthTracker{
node: node,
healthChan: make(chan bool, 1),
}
}
// HealthUpdates provides a read-only channel for health updates.
func (n *NodeHealthTracker) HealthUpdates() <-chan bool {
return n.healthChan
}
func (n *NodeHealthTracker) IsHealthy(_ context.Context) bool {
n.RLock()
defer n.RUnlock()
if n.isHealthy == nil {
return false
}
return *n.isHealthy
}
func (n *NodeHealthTracker) CheckHealth(ctx context.Context) bool {
n.Lock()
defer n.Unlock()
newStatus := n.node.IsHealthy(ctx)
if n.isHealthy == nil {
n.isHealthy = &newStatus
}
isStatusChanged := newStatus != *n.isHealthy
if isStatusChanged {
// Update the health status
n.isHealthy = &newStatus
// Send the new status to the health channel, potentially overwriting the existing value
select {
case <-n.healthChan:
n.healthChan <- newStatus
default:
n.healthChan <- newStatus
}
}
return newStatus
}

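The drain-then-send select in CheckHealth is the classic keep-latest mailbox: the 1-buffered channel always holds the most recent transition, and the checker never blocks on a slow reader. A consumer sketch (illustrative; this package is removed in this diff in favor of the new health monitor):

package health

import log "github.com/sirupsen/logrus"

// watch logs every health transition. Run it in its own goroutine: the
// updates channel is never closed, so the range does not return.
func watch(tr Tracker) {
	for healthy := range tr.HealthUpdates() {
		if healthy {
			log.Info("Beacon node became healthy")
		} else {
			log.Warn("Beacon node became unhealthy")
		}
	}
}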
View File

@@ -1,110 +0,0 @@
package health
import (
"sync"
"testing"
"go.uber.org/mock/gomock"
)
func TestNodeHealth_IsHealthy(t *testing.T) {
tests := []struct {
name string
isHealthy bool
want bool
}{
{"initially healthy", true, true},
{"initially unhealthy", false, false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
n := &NodeHealthTracker{
isHealthy: &tt.isHealthy,
healthChan: make(chan bool, 1),
}
if got := n.IsHealthy(t.Context()); got != tt.want {
t.Errorf("IsHealthy() = %v, want %v", got, tt.want)
}
})
}
}
func TestNodeHealth_UpdateNodeHealth(t *testing.T) {
tests := []struct {
name string
initial bool // Initial health status
newStatus bool // Status to update to
shouldSend bool // Should a message be sent through the channel
}{
{"healthy to unhealthy", true, false, true},
{"unhealthy to healthy", false, true, true},
{"remain healthy", true, true, false},
{"remain unhealthy", false, false, false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ctrl := gomock.NewController(t)
defer ctrl.Finish()
client := NewMockHealthClient(ctrl)
client.EXPECT().IsHealthy(gomock.Any()).Return(tt.newStatus)
n := &NodeHealthTracker{
isHealthy: &tt.initial,
node: client,
healthChan: make(chan bool, 1),
}
s := n.CheckHealth(t.Context())
// Check if health status was updated
if s != tt.newStatus {
t.Errorf("UpdateNodeHealth() failed to update isHealthy from %v to %v", tt.initial, tt.newStatus)
}
select {
case status := <-n.HealthUpdates():
if !tt.shouldSend {
t.Errorf("UpdateNodeHealth() unexpectedly sent status %v to HealthCh", status)
} else if status != tt.newStatus {
t.Errorf("UpdateNodeHealth() sent wrong status %v, want %v", status, tt.newStatus)
}
default:
if tt.shouldSend {
t.Error("UpdateNodeHealth() did not send any status to HealthCh when expected")
}
}
})
}
}
func TestNodeHealth_Concurrency(t *testing.T) {
ctrl := gomock.NewController(t)
defer ctrl.Finish()
client := NewMockHealthClient(ctrl)
n := NewTracker(client)
var wg sync.WaitGroup
// Number of goroutines to spawn for both reading and writing
numGoroutines := 6
wg.Add(numGoroutines * 2) // for readers and writers
// Concurrently update health status
for i := 0; i < numGoroutines; i++ {
go func() {
defer wg.Done()
client.EXPECT().IsHealthy(gomock.Any()).Return(false).Times(1)
n.CheckHealth(t.Context())
client.EXPECT().IsHealthy(gomock.Any()).Return(true).Times(1)
n.CheckHealth(t.Context())
}()
}
// Concurrently read health status
for i := 0; i < numGoroutines; i++ {
go func() {
defer wg.Done()
_ = n.IsHealthy(t.Context()) // Just read the value
}()
}
wg.Wait() // Wait for all goroutines to finish
}

View File

@@ -1,13 +0,0 @@
package health
import "context"
type Tracker interface {
HealthUpdates() <-chan bool
CheckHealth(ctx context.Context) bool
Node
}
type Node interface {
IsHealthy(ctx context.Context) bool
}

View File

@@ -1,58 +0,0 @@
package health
import (
"context"
"reflect"
"sync"
"go.uber.org/mock/gomock"
)
var (
_ = Node(&MockHealthClient{})
)
// MockHealthClient is a mock of HealthClient interface.
type MockHealthClient struct {
ctrl *gomock.Controller
recorder *MockHealthClientMockRecorder
sync.Mutex
}
// MockHealthClientMockRecorder is the mock recorder for MockHealthClient.
type MockHealthClientMockRecorder struct {
mock *MockHealthClient
}
// IsHealthy mocks base method.
func (m *MockHealthClient) IsHealthy(arg0 context.Context) bool {
m.Lock()
defer m.Unlock()
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "IsHealthy", arg0)
ret0, ok := ret[0].(bool)
if !ok {
return false
}
return ret0
}
// EXPECT returns an object that allows the caller to indicate expected use.
func (m *MockHealthClient) EXPECT() *MockHealthClientMockRecorder {
return m.recorder
}
// IsHealthy indicates an expected call of IsHealthy.
func (mr *MockHealthClientMockRecorder) IsHealthy(arg0 any) *gomock.Call {
mr.mock.Lock()
defer mr.mock.Unlock()
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "IsHealthy", reflect.TypeOf((*MockHealthClient)(nil).IsHealthy), arg0)
}
// NewMockHealthClient creates a new mock instance.
func NewMockHealthClient(ctrl *gomock.Controller) *MockHealthClient {
mock := &MockHealthClient{ctrl: ctrl}
mock.recorder = &MockHealthClientMockRecorder{mock}
return mock
}

View File

@@ -50,6 +50,7 @@ go_test(
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",

View File

@@ -72,7 +72,7 @@ func (*requestLogger) observe(r *http.Request) (e error) {
log.WithFields(log.Fields{
"bodyBase64": "(nil value)",
"url": r.URL.String(),
}).Info("builder http request")
}).Info("Builder http request")
return nil
}
t := io.TeeReader(r.Body, b)
@@ -89,7 +89,7 @@ func (*requestLogger) observe(r *http.Request) (e error) {
log.WithFields(log.Fields{
"bodyBase64": string(body),
"url": r.URL.String(),
}).Info("builder http request")
}).Info("Builder http request")
return nil
}
@@ -101,7 +101,7 @@ type BuilderClient interface {
NodeURL() string
GetHeader(ctx context.Context, slot primitives.Slot, parentHash [32]byte, pubkey [48]byte) (SignedBid, error)
RegisterValidator(ctx context.Context, svr []*ethpb.SignedValidatorRegistrationV1) error
SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, *v1.BlobsBundle, error)
SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, v1.BlobsBundler, error)
Status(ctx context.Context) error
}
@@ -446,6 +446,9 @@ func sszValidatorRegisterRequest(svr []*ethpb.SignedValidatorRegistrationV1) ([]
var errResponseVersionMismatch = errors.New("builder API response uses a different version than requested in " + api.VersionHeader + " header")
func getVersionsBlockToPayload(blockVersion int) (int, error) {
if blockVersion >= version.Fulu {
return version.Fulu, nil
}
if blockVersion >= version.Deneb {
return version.Deneb, nil
}
@@ -460,7 +463,7 @@ func getVersionsBlockToPayload(blockVersion int) (int, error) {
// SubmitBlindedBlock calls the builder API endpoint that binds the validator to the builder and submits the block.
// The response is the full execution payload used to create the blinded block.
func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, *v1.BlobsBundle, error) {
func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, v1.BlobsBundler, error) {
body, postOpts, err := c.buildBlindedBlockRequest(sb)
if err != nil {
return nil, nil, err
@@ -558,7 +561,7 @@ func (c *Client) buildBlindedBlockRequest(sb interfaces.ReadOnlySignedBeaconBloc
func (c *Client) parseBlindedBlockResponse(
respBytes []byte,
forkVersion int,
) (interfaces.ExecutionData, *v1.BlobsBundle, error) {
) (interfaces.ExecutionData, v1.BlobsBundler, error) {
if c.sszEnabled {
return c.parseBlindedBlockResponseSSZ(respBytes, forkVersion)
}
@@ -568,8 +571,18 @@ func (c *Client) parseBlindedBlockResponse(
func (c *Client) parseBlindedBlockResponseSSZ(
respBytes []byte,
forkVersion int,
) (interfaces.ExecutionData, *v1.BlobsBundle, error) {
if forkVersion >= version.Deneb {
) (interfaces.ExecutionData, v1.BlobsBundler, error) {
if forkVersion >= version.Fulu {
payloadAndBlobs := &v1.ExecutionPayloadDenebAndBlobsBundleV2{}
if err := payloadAndBlobs.UnmarshalSSZ(respBytes); err != nil {
return nil, nil, errors.Wrap(err, "unable to unmarshal ExecutionPayloadDenebAndBlobsBundleV2 SSZ")
}
ed, err := blocks.NewWrappedExecutionData(payloadAndBlobs.Payload)
if err != nil {
return nil, nil, errors.Wrapf(err, "unable to wrap execution data for %s", version.String(forkVersion))
}
return ed, payloadAndBlobs.BlobsBundle, nil
} else if forkVersion >= version.Deneb {
payloadAndBlobs := &v1.ExecutionPayloadDenebAndBlobsBundle{}
if err := payloadAndBlobs.UnmarshalSSZ(respBytes); err != nil {
return nil, nil, errors.Wrap(err, "unable to unmarshal ExecutionPayloadDenebAndBlobsBundle SSZ")

View File

@@ -2,6 +2,7 @@ package builder
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
@@ -13,6 +14,7 @@ import (
"github.com/OffchainLabs/prysm/v6/api/server/structs"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
v1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
@@ -1573,3 +1575,166 @@ func TestRequestLogger(t *testing.T) {
err = c.Status(ctx)
require.NoError(t, err)
}
func TestGetVersionsBlockToPayload(t *testing.T) {
tests := []struct {
name string
blockVersion int
expectedVersion int
expectedError bool
}{
{
name: "Fulu version",
blockVersion: 6, // version.Fulu
expectedVersion: 6,
expectedError: false,
},
{
name: "Deneb version",
blockVersion: 4, // version.Deneb
expectedVersion: 4,
expectedError: false,
},
{
name: "Capella version",
blockVersion: 3, // version.Capella
expectedVersion: 3,
expectedError: false,
},
{
name: "Bellatrix version",
blockVersion: 2, // version.Bellatrix
expectedVersion: 2,
expectedError: false,
},
{
name: "Unsupported version",
blockVersion: 0,
expectedError: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
version, err := getVersionsBlockToPayload(tt.blockVersion)
if tt.expectedError {
assert.NotNil(t, err)
} else {
assert.NoError(t, err)
assert.Equal(t, tt.expectedVersion, version)
}
})
}
}
func TestParseBlindedBlockResponseSSZ_WithBlobsBundleV2(t *testing.T) {
c := &Client{sszEnabled: true}
// Create test payload
payload := &v1.ExecutionPayloadDeneb{
ParentHash: make([]byte, 32),
FeeRecipient: make([]byte, 20),
StateRoot: make([]byte, 32),
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
BlockNumber: 123456,
GasLimit: 30000000,
GasUsed: 21000,
Timestamp: 1234567890,
ExtraData: []byte("test-extra-data"),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
Transactions: [][]byte{},
Withdrawals: []*v1.Withdrawal{},
BlobGasUsed: 1024,
ExcessBlobGas: 2048,
}
// Create test BlobsBundleV2
bundleV2 := &v1.BlobsBundleV2{
KzgCommitments: [][]byte{make([]byte, 48), make([]byte, 48)},
Proofs: [][]byte{make([]byte, 48), make([]byte, 48)},
Blobs: [][]byte{make([]byte, 131072), make([]byte, 131072)},
}
// Test Fulu version (should use ExecutionPayloadDenebAndBlobsBundleV2)
t.Run("Fulu version with BlobsBundleV2", func(t *testing.T) {
payloadAndBlobsV2 := &v1.ExecutionPayloadDenebAndBlobsBundleV2{
Payload: payload,
BlobsBundle: bundleV2,
}
respBytes, err := payloadAndBlobsV2.MarshalSSZ()
require.NoError(t, err)
ed, bundle, err := c.parseBlindedBlockResponseSSZ(respBytes, 6) // version.Fulu
require.NoError(t, err)
require.NotNil(t, ed)
require.NotNil(t, bundle)
// Verify the bundle is BlobsBundleV2
bundleV2Result, ok := bundle.(*v1.BlobsBundleV2)
assert.Equal(t, true, ok, "Expected BlobsBundleV2 type")
require.Equal(t, len(bundleV2.KzgCommitments), len(bundleV2Result.KzgCommitments))
require.Equal(t, len(bundleV2.Proofs), len(bundleV2Result.Proofs))
require.Equal(t, len(bundleV2.Blobs), len(bundleV2Result.Blobs))
})
// Test Deneb version (should use regular BlobsBundle)
t.Run("Deneb version with regular BlobsBundle", func(t *testing.T) {
regularBundle := &v1.BlobsBundle{
KzgCommitments: bundleV2.KzgCommitments,
Proofs: bundleV2.Proofs,
Blobs: bundleV2.Blobs,
}
payloadAndBlobs := &v1.ExecutionPayloadDenebAndBlobsBundle{
Payload: payload,
BlobsBundle: regularBundle,
}
respBytes, err := payloadAndBlobs.MarshalSSZ()
require.NoError(t, err)
ed, bundle, err := c.parseBlindedBlockResponseSSZ(respBytes, 4) // version.Deneb
require.NoError(t, err)
require.NotNil(t, ed)
require.NotNil(t, bundle)
// Verify the bundle is regular BlobsBundle
regularBundleResult, ok := bundle.(*v1.BlobsBundle)
assert.Equal(t, true, ok, "Expected BlobsBundle type")
require.Equal(t, len(regularBundle.KzgCommitments), len(regularBundleResult.KzgCommitments))
})
// Test invalid SSZ data
t.Run("Invalid SSZ data", func(t *testing.T) {
invalidBytes := []byte("invalid-ssz-data")
ed, bundle, err := c.parseBlindedBlockResponseSSZ(invalidBytes, 6)
assert.NotNil(t, err)
assert.Equal(t, true, ed == nil)
assert.Equal(t, true, bundle == nil)
})
}
func TestSubmitBlindedBlock_BlobsBundlerInterface(t *testing.T) {
// Note: The full integration test is complex due to version detection logic
// The key functionality is tested in the parseBlindedBlockResponseSSZ tests above
// and in the mock service tests which verify the interface changes work correctly
t.Run("Interface signature verification", func(t *testing.T) {
// This test verifies that the SubmitBlindedBlock method signature
// has been updated to return BlobsBundler interface
client := &Client{}
// Verify the method exists with the correct signature
// by using reflection or by checking it compiles with the interface
var _ func(ctx context.Context, sb interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, v1.BlobsBundler, error) = client.SubmitBlindedBlock
// This test passes if the signature is correct
assert.Equal(t, true, true)
})
}

View File

@@ -41,7 +41,7 @@ func (m MockClient) RegisterValidator(_ context.Context, svr []*ethpb.SignedVali
}
// SubmitBlindedBlock --
func (MockClient) SubmitBlindedBlock(_ context.Context, _ interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, *v1.BlobsBundle, error) {
func (MockClient) SubmitBlindedBlock(_ context.Context, _ interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, v1.BlobsBundler, error) {
return nil, nil, nil
}

View File

@@ -8,7 +8,11 @@ go_library(
],
importpath = "github.com/OffchainLabs/prysm/v6/api/server/middleware",
visibility = ["//visibility:public"],
deps = ["@com_github_rs_cors//:go_default_library"],
deps = [
"//api:go_default_library",
"@com_github_rs_cors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)
go_test(
@@ -22,5 +26,6 @@ go_test(
"//api:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)

View File

@@ -1,11 +1,14 @@
package middleware
import (
"compress/gzip"
"fmt"
"net/http"
"strings"
"github.com/OffchainLabs/prysm/v6/api"
"github.com/rs/cors"
log "github.com/sirupsen/logrus"
)
type Middleware func(http.Handler) http.Handler
@@ -112,6 +115,55 @@ func AcceptHeaderHandler(serverAcceptedTypes []string) Middleware {
}
}
// AcceptEncodingHeaderHandler compresses the response before sending it back to the client, if gzip is supported.
func AcceptEncodingHeaderHandler() Middleware {
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
next.ServeHTTP(w, r)
return
}
gz := gzip.NewWriter(w)
gzipRW := &gzipResponseWriter{gz: gz, ResponseWriter: w}
defer func() {
if !gzipRW.zip {
return
}
if err := gz.Close(); err != nil {
log.WithError(err).Error("Failed to close gzip writer")
}
}()
next.ServeHTTP(gzipRW, r)
})
}
}
type gzipResponseWriter struct {
gz *gzip.Writer
http.ResponseWriter
zip bool
}
func (g *gzipResponseWriter) WriteHeader(statusCode int) {
if strings.Contains(g.Header().Get("Content-Type"), api.JsonMediaType) {
// Removing the current Content-Length because zipping will change it.
g.Header().Del("Content-Length")
g.Header().Set("Content-Encoding", "gzip")
g.zip = true
}
g.ResponseWriter.WriteHeader(statusCode)
}
func (g *gzipResponseWriter) Write(b []byte) (int, error) {
if g.zip {
return g.gz.Write(b)
}
return g.ResponseWriter.Write(b)
}
func MiddlewareChain(h http.Handler, mw []Middleware) http.Handler {
if len(mw) < 1 {
return h

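A minimal sketch of how this middleware composes through MiddlewareChain; the route and payload are illustrative, not Prysm's actual wiring:

package main

import (
	"net/http"

	"github.com/OffchainLabs/prysm/v6/api"
	"github.com/OffchainLabs/prysm/v6/api/server/middleware"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/eth/v1/node/version", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", api.JsonMediaType)
		w.WriteHeader(http.StatusOK)
		_, _ = w.Write([]byte(`{"data":{"version":"example"}}`))
	})
	// Responses are gzipped only when the client sends Accept-Encoding: gzip
	// and the response Content-Type is JSON (see gzipResponseWriter above).
	handler := middleware.MiddlewareChain(mux, []middleware.Middleware{
		middleware.AcceptEncodingHeaderHandler(),
	})
	_ = http.ListenAndServe(":8080", handler)
}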
View File

@@ -1,14 +1,35 @@
package middleware
import (
"bytes"
"compress/gzip"
"io"
"net/http"
"net/http/httptest"
"testing"
"github.com/OffchainLabs/prysm/v6/api"
"github.com/OffchainLabs/prysm/v6/testing/require"
log "github.com/sirupsen/logrus"
)
// frozenHeaderRecorder allows asserting that response headers were not modified
// after the call to WriteHeader.
//
// Its purpose is to have a regression test for https://github.com/OffchainLabs/prysm/pull/15499.
type frozenHeaderRecorder struct {
*httptest.ResponseRecorder
frozenHeader http.Header
}
func (r *frozenHeaderRecorder) WriteHeader(code int) {
if r.frozenHeader != nil {
return
}
r.ResponseRecorder.WriteHeader(code)
r.frozenHeader = r.ResponseRecorder.Header().Clone()
}
func TestNormalizeQueryValuesHandler(t *testing.T) {
nextHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, err := w.Write([]byte("next handler"))
@@ -124,6 +145,90 @@ func TestContentTypeHandler(t *testing.T) {
}
}
func TestAcceptEncodingHeaderHandler(t *testing.T) {
dummyContent := "Test gzip middleware content"
nextHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", r.Header.Get("Accept"))
w.WriteHeader(http.StatusOK)
_, err := w.Write([]byte(dummyContent))
require.NoError(t, err)
})
handler := AcceptEncodingHeaderHandler()(nextHandler)
tests := []struct {
name string
accept string
acceptEncoding string
expectCompressed bool
}{
{
name: "Accept gzip",
accept: api.JsonMediaType,
acceptEncoding: "gzip",
expectCompressed: true,
},
{
name: "Accept multiple encodings",
accept: api.JsonMediaType,
acceptEncoding: "deflate, gzip",
expectCompressed: true,
},
{
name: "Accept unsupported encoding",
accept: api.JsonMediaType,
acceptEncoding: "deflate",
expectCompressed: false,
},
{
name: "No accept encoding header",
accept: api.JsonMediaType,
acceptEncoding: "",
expectCompressed: false,
},
{
name: "SSZ",
accept: api.OctetStreamMediaType,
acceptEncoding: "gzip",
expectCompressed: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
req := httptest.NewRequest("GET", "/", nil)
req.Header.Set("Accept", tt.accept)
if tt.acceptEncoding != "" {
req.Header.Set("Accept-Encoding", tt.acceptEncoding)
}
rr := &frozenHeaderRecorder{ResponseRecorder: httptest.NewRecorder()}
handler.ServeHTTP(rr, req)
if tt.expectCompressed {
require.Equal(t, "gzip", rr.frozenHeader.Get("Content-Encoding"), "Expected Content-Encoding header to be 'gzip'")
compressedBody := rr.Body.Bytes()
require.NotEqual(t, dummyContent, string(compressedBody), "Response body should be compressed and differ from the original")
gzReader, err := gzip.NewReader(bytes.NewReader(compressedBody))
require.NoError(t, err, "Failed to create gzipReader")
defer func() {
if err := gzReader.Close(); err != nil {
log.WithError(err).Error("Failed to close gzip reader")
}
}()
decompressedBody, err := io.ReadAll(gzReader)
require.NoError(t, err, "Failed to decompress response body")
require.Equal(t, dummyContent, string(decompressedBody), "Decompressed content should match the original")
} else {
require.Equal(t, dummyContent, rr.Body.String(), "Response body should be uncompressed and match the original")
}
})
}
}
func TestAcceptHeaderHandler(t *testing.T) {
acceptedTypes := []string{"application/json", "application/octet-stream"}

View File

@@ -74,7 +74,7 @@ func BeaconStateFromConsensus(st beaconState.BeaconState) (*BeaconState, error)
}
return &BeaconState{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisTime: fmt.Sprintf("%d", st.GenesisTime().Unix()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),
@@ -177,7 +177,7 @@ func BeaconStateAltairFromConsensus(st beaconState.BeaconState) (*BeaconStateAlt
}
return &BeaconStateAltair{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisTime: fmt.Sprintf("%d", st.GenesisTime().Unix()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),
@@ -295,7 +295,7 @@ func BeaconStateBellatrixFromConsensus(st beaconState.BeaconState) (*BeaconState
}
return &BeaconStateBellatrix{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisTime: fmt.Sprintf("%d", st.GenesisTime().Unix()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),
@@ -430,7 +430,7 @@ func BeaconStateCapellaFromConsensus(st beaconState.BeaconState) (*BeaconStateCa
}
return &BeaconStateCapella{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisTime: fmt.Sprintf("%d", st.GenesisTime().Unix()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),
@@ -568,7 +568,7 @@ func BeaconStateDenebFromConsensus(st beaconState.BeaconState) (*BeaconStateDene
}
return &BeaconStateDeneb{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisTime: fmt.Sprintf("%d", st.GenesisTime().Unix()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),
@@ -742,7 +742,7 @@ func BeaconStateElectraFromConsensus(st beaconState.BeaconState) (*BeaconStateEl
}
return &BeaconStateElectra{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisTime: fmt.Sprintf("%d", st.GenesisTime().Unix()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),
@@ -932,7 +932,7 @@ func BeaconStateFuluFromConsensus(st beaconState.BeaconState) (*BeaconStateFulu,
lookahead[i] = fmt.Sprintf("%d", uint64(v))
}
return &BeaconStateFulu{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisTime: fmt.Sprintf("%d", st.GenesisTime().Unix()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),

View File

@@ -19,10 +19,10 @@ func RunEvery(ctx context.Context, period time.Duration, f func()) {
for {
select {
case <-ticker.C:
log.WithField("function", funcName).Trace("running")
log.WithField("function", funcName).Trace("Running")
f()
case <-ctx.Done():
log.WithField("function", funcName).Debug("context is closed, exiting")
log.WithField("function", funcName).Debug("Context is closed, exiting")
ticker.Stop()
return
}

View File

@@ -41,7 +41,7 @@ type ForkchoiceFetcher interface {
Ancestor(context.Context, []byte, primitives.Slot) ([]byte, error)
CachedHeadRoot() [32]byte
GetProposerHead() [32]byte
SetForkChoiceGenesisTime(uint64)
SetForkChoiceGenesisTime(time.Time)
UpdateHead(context.Context, primitives.Slot)
HighestReceivedBlockSlot() primitives.Slot
ReceivedBlocksLastEpoch() (uint64, error)
@@ -514,7 +514,7 @@ func (s *Service) Ancestor(ctx context.Context, root []byte, slot primitives.Slo
// SetGenesisTime sets the genesis time of beacon chain.
func (s *Service) SetGenesisTime(t time.Time) {
s.genesisTime = t
s.genesisTime = t.Truncate(time.Second) // Genesis time has a precision of 1 second.
}
func (s *Service) recoverStateSummary(ctx context.Context, blockRoot [32]byte) (*ethpb.StateSummary, error) {

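With a time.Time genesis (see #15419), slot timing reduces to Duration arithmetic. Below are simplified stand-ins for the renamed slots.StartTime and slots.SinceSlotStart, assuming 12-second slots and ignoring overflow checks:

package slots

import "time"

const secondsPerSlot = 12 // assumed mainnet value, for illustration

// startTime is a simplified stand-in for slots.StartTime.
func startTime(genesis time.Time, slot uint64) time.Time {
	return genesis.Add(time.Duration(slot*secondsPerSlot) * time.Second)
}

// sinceSlotStart is a simplified stand-in for slots.SinceSlotStart.
func sinceSlotStart(slot uint64, genesis, now time.Time) time.Duration {
	return now.Sub(startTime(genesis, slot))
}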
View File

@@ -2,6 +2,7 @@ package blockchain
import (
"context"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
consensus_blocks "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
@@ -27,7 +28,7 @@ func (s *Service) GetProposerHead() [32]byte {
}
// SetForkChoiceGenesisTime sets the genesis time in Forkchoice
func (s *Service) SetForkChoiceGenesisTime(timestamp uint64) {
func (s *Service) SetForkChoiceGenesisTime(timestamp time.Time) {
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
s.cfg.ForkChoiceStore.SetGenesisTime(timestamp)

View File

@@ -3,9 +3,6 @@ package blockchain
import (
"testing"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
doublylinkedtree "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
@@ -14,10 +11,7 @@ import (
)
func TestHeadSlot_DataRace(t *testing.T) {
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{BeaconDB: beaconDB},
}
s := testServiceWithDB(t)
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
st, _ := util.DeterministicGenesisState(t, 1)
@@ -31,11 +25,8 @@ func TestHeadSlot_DataRace(t *testing.T) {
}
func TestHeadRoot_DataRace(t *testing.T) {
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB, doublylinkedtree.New())},
head: &head{root: [32]byte{'A'}},
}
s := testServiceWithDB(t)
s.head = &head{root: [32]byte{'A'}}
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
wait := make(chan struct{})
@@ -51,13 +42,10 @@ func TestHeadRoot_DataRace(t *testing.T) {
}
func TestHeadBlock_DataRace(t *testing.T) {
beaconDB := testDB.SetupDB(t)
wsb, err := blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Body: &ethpb.BeaconBlockBody{}}})
require.NoError(t, err)
s := &Service{
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB, doublylinkedtree.New())},
head: &head{block: wsb},
}
s := testServiceWithDB(t)
s.head = &head{block: wsb}
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
wait := make(chan struct{})
@@ -73,10 +61,8 @@ func TestHeadBlock_DataRace(t *testing.T) {
}
func TestHeadState_DataRace(t *testing.T) {
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB, doublylinkedtree.New())},
}
s := testServiceWithDB(t)
beaconDB := s.cfg.BeaconDB
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
wait := make(chan struct{})

View File

@@ -6,7 +6,6 @@ import (
"time"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
doublylinkedtree "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
@@ -138,7 +137,7 @@ func TestFinalizedBlockHash(t *testing.T) {
func TestUnrealizedJustifiedBlockHash(t *testing.T) {
ctx := t.Context()
service := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}}
service := testServiceWithDB(t)
ojc := &ethpb.Checkpoint{Root: []byte{'j'}}
ofc := &ethpb.Checkpoint{Root: []byte{'f'}}
st, roblock, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
@@ -153,7 +152,7 @@ func TestUnrealizedJustifiedBlockHash(t *testing.T) {
}
func TestHeadSlot_CanRetrieve(t *testing.T) {
c := &Service{}
c := testServiceNoDB(t)
s, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{})
require.NoError(t, err)
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
@@ -200,7 +199,7 @@ func TestHeadBlock_CanRetrieve(t *testing.T) {
require.NoError(t, err)
wsb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
c := &Service{}
c := testServiceNoDB(t)
c.head = &head{block: wsb, state: s}
received, err := c.HeadBlock(t.Context())
@@ -213,7 +212,7 @@ func TestHeadBlock_CanRetrieve(t *testing.T) {
func TestHeadState_CanRetrieve(t *testing.T) {
s, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{Slot: 2, GenesisValidatorsRoot: params.BeaconConfig().ZeroHash[:]})
require.NoError(t, err)
c := &Service{}
c := testServiceNoDB(t)
c.head = &head{state: s}
headState, err := c.HeadState(t.Context())
require.NoError(t, err)
@@ -221,7 +220,8 @@ func TestHeadState_CanRetrieve(t *testing.T) {
}
func TestGenesisTime_CanRetrieve(t *testing.T) {
c := &Service{genesisTime: time.Unix(999, 0)}
c := testServiceNoDB(t)
c.genesisTime = time.Unix(999, 0)
wanted := time.Unix(999, 0)
assert.Equal(t, wanted, c.GenesisTime(), "Did not get wanted genesis time")
}
@@ -230,7 +230,7 @@ func TestCurrentFork_CanRetrieve(t *testing.T) {
f := &ethpb.Fork{Epoch: 999}
s, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{Fork: f})
require.NoError(t, err)
c := &Service{}
c := testServiceNoDB(t)
c.head = &head{state: s}
if !proto.Equal(c.CurrentFork(), f) {
t.Error("Received incorrect fork version")
@@ -242,7 +242,7 @@ func TestCurrentFork_NilHeadSTate(t *testing.T) {
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
}
c := &Service{}
c := testServiceNoDB(t)
if !proto.Equal(c.CurrentFork(), f) {
t.Error("Received incorrect fork version")
}
@@ -250,7 +250,7 @@ func TestCurrentFork_NilHeadSTate(t *testing.T) {
func TestGenesisValidatorsRoot_CanRetrieve(t *testing.T) {
// Should not panic if head state is nil.
c := &Service{}
c := testServiceNoDB(t)
assert.Equal(t, [32]byte{}, c.GenesisValidatorsRoot(), "Did not get correct genesis validators root")
s, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{GenesisValidatorsRoot: []byte{'a'}})
@@ -269,7 +269,7 @@ func TestHeadETH1Data_CanRetrieve(t *testing.T) {
d := &ethpb.Eth1Data{DepositCount: 999}
s, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{Eth1Data: d})
require.NoError(t, err)
c := &Service{}
c := testServiceNoDB(t)
c.head = &head{state: s}
if !proto.Equal(c.HeadETH1Data(), d) {
t.Error("Received incorrect eth1 data")
@@ -298,7 +298,7 @@ func TestIsCanonical_Ok(t *testing.T) {
func TestService_HeadValidatorsIndices(t *testing.T) {
s, _ := util.DeterministicGenesisState(t, 10)
c := &Service{}
c := testServiceNoDB(t)
c.head = &head{}
indices, err := c.HeadValidatorsIndices(t.Context(), 0)
@@ -313,7 +313,7 @@ func TestService_HeadValidatorsIndices(t *testing.T) {
func TestService_HeadGenesisValidatorsRoot(t *testing.T) {
s, _ := util.DeterministicGenesisState(t, 1)
c := &Service{}
c := testServiceNoDB(t)
c.head = &head{}
root := c.HeadGenesisValidatorsRoot()
@@ -332,7 +332,7 @@ func TestService_HeadGenesisValidatorsRoot(t *testing.T) {
func TestService_ChainHeads(t *testing.T) {
ctx := t.Context()
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}}
c := testServiceWithDB(t)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, roblock, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
@@ -366,7 +366,7 @@ func TestService_ChainHeads(t *testing.T) {
func TestService_HeadPublicKeyToValidatorIndex(t *testing.T) {
s, _ := util.DeterministicGenesisState(t, 10)
c := &Service{}
c := testServiceNoDB(t)
c.head = &head{state: s}
_, e := c.HeadPublicKeyToValidatorIndex([fieldparams.BLSPubkeyLength]byte{})
@@ -381,7 +381,7 @@ func TestService_HeadPublicKeyToValidatorIndex(t *testing.T) {
}
func TestService_HeadPublicKeyToValidatorIndexNil(t *testing.T) {
c := &Service{}
c := testServiceNoDB(t)
c.head = nil
idx, e := c.HeadPublicKeyToValidatorIndex([fieldparams.BLSPubkeyLength]byte{})
@@ -396,7 +396,7 @@ func TestService_HeadPublicKeyToValidatorIndexNil(t *testing.T) {
func TestService_HeadValidatorIndexToPublicKey(t *testing.T) {
s, _ := util.DeterministicGenesisState(t, 10)
c := &Service{}
c := testServiceNoDB(t)
c.head = &head{state: s}
p, err := c.HeadValidatorIndexToPublicKey(t.Context(), 0)
@@ -409,7 +409,7 @@ func TestService_HeadValidatorIndexToPublicKey(t *testing.T) {
}
func TestService_HeadValidatorIndexToPublicKeyNil(t *testing.T) {
c := &Service{}
c := testServiceNoDB(t)
c.head = nil
p, err := c.HeadValidatorIndexToPublicKey(t.Context(), 0)
@@ -431,7 +431,9 @@ func TestService_IsOptimistic(t *testing.T) {
ctx := t.Context()
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
c := testServiceWithDB(t)
c.SetGenesisTime(time.Now())
c.head = &head{root: [32]byte{'b'}}
st, roblock, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
@@ -450,14 +452,15 @@ func TestService_IsOptimistic(t *testing.T) {
require.Equal(t, true, opt)
// If head is nil, for some reason, an error should be returned rather than panic.
c = &Service{}
c = testServiceNoDB(t)
_, err = c.IsOptimistic(ctx)
require.ErrorIs(t, err, ErrNilHead)
}
func TestService_IsOptimisticBeforeBellatrix(t *testing.T) {
ctx := t.Context()
c := &Service{genesisTime: time.Now()}
c := testServiceNoDB(t)
c.genesisTime = time.Now()
opt, err := c.IsOptimistic(ctx)
require.NoError(t, err)
require.Equal(t, false, opt)
@@ -465,7 +468,8 @@ func TestService_IsOptimisticBeforeBellatrix(t *testing.T) {
func TestService_IsOptimisticForRoot(t *testing.T) {
ctx := t.Context()
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
c := testServiceWithDB(t)
c.head = &head{root: [32]byte{'b'}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, roblock, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
@@ -481,9 +485,10 @@ func TestService_IsOptimisticForRoot(t *testing.T) {
}
func TestService_IsOptimisticForRoot_DB(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := t.Context()
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
c := testServiceWithDB(t)
c.head = &head{root: [32]byte{'b'}}
beaconDB := c.cfg.BeaconDB
c.head = &head{root: params.BeaconConfig().ZeroHash}
b := util.NewBeaconBlock()
b.Block.Slot = 10
@@ -534,9 +539,9 @@ func TestService_IsOptimisticForRoot_DB(t *testing.T) {
}
func TestService_IsOptimisticForRoot_DB_non_canonical(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := t.Context()
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
c := testServiceWithDB(t)
beaconDB := c.cfg.BeaconDB
c.head = &head{root: params.BeaconConfig().ZeroHash}
b := util.NewBeaconBlock()
b.Block.Slot = 10
@@ -573,9 +578,9 @@ func TestService_IsOptimisticForRoot_DB_non_canonical(t *testing.T) {
}
func TestService_IsOptimisticForRoot_StateSummaryRecovered(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := t.Context()
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
c := testServiceWithDB(t)
beaconDB := c.cfg.BeaconDB
c.head = &head{root: params.BeaconConfig().ZeroHash}
b := util.NewBeaconBlock()
b.Block.Slot = 10
@@ -593,9 +598,9 @@ func TestService_IsOptimisticForRoot_StateSummaryRecovered(t *testing.T) {
}
func TestService_IsFinalized(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := t.Context()
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New()}}
c := testServiceWithDB(t)
beaconDB := c.cfg.BeaconDB
r1 := [32]byte{'a'}
require.NoError(t, c.cfg.ForkChoiceStore.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{
Root: r1,

View File

@@ -2,7 +2,6 @@ package blockchain
import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/features"
"github.com/OffchainLabs/prysm/v6/time"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
@@ -17,9 +16,6 @@ var stateDefragmentationTime = promauto.NewSummary(prometheus.SummaryOpts{
// a higher number of fragmented indexes are reallocated to a new separate slice for
// that field.
func (s *Service) defragmentState(st state.BeaconState) {
if !features.Get().EnableExperimentalState {
return
}
startTime := time.Now()
st.Defragment()
elapsedTime := time.Since(startTime)

View File

@@ -141,7 +141,7 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
}
if err := s.saveHead(ctx, r, b, st); err != nil {
log.WithError(err).Error("could not save head after pruning invalid blocks")
log.WithError(err).Error("Could not save head after pruning invalid blocks")
}
log.WithFields(logrus.Fields{
@@ -354,7 +354,7 @@ func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState,
}
// Get timestamp.
t, err := slots.ToTime(uint64(s.genesisTime.Unix()), slot)
t, err := slots.StartTime(s.genesisTime, slot)
if err != nil {
log.WithError(err).Error("Could not get timestamp to get payload attribute")
return emptyAttri

View File

@@ -76,14 +76,14 @@ func (s *Service) sendFCUWithAttributes(cfg *postBlockProcessConfig, fcuArgs *fc
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
if err := s.computePayloadAttributes(cfg, fcuArgs); err != nil {
log.WithError(err).Error("could not compute payload attributes")
log.WithError(err).Error("Could not compute payload attributes")
return
}
if fcuArgs.attributes.IsEmpty() {
return
}
if _, err := s.notifyForkchoiceUpdate(cfg.ctx, fcuArgs); err != nil {
log.WithError(err).Error("could not update forkchoice with payload attributes for proposal")
log.WithError(err).Error("Could not update forkchoice with payload attributes for proposal")
}
}
@@ -99,7 +99,7 @@ func (s *Service) forkchoiceUpdateWithExecution(ctx context.Context, args *fcuCo
}
if err := s.saveHead(ctx, args.headRoot, args.headBlock, args.headState); err != nil {
log.WithError(err).Error("could not save head")
log.WithError(err).Error("Could not save head")
}
go s.firePayloadAttributesEvent(s.cfg.StateNotifier.StateFeed(), args.headBlock, args.headRoot, s.CurrentSlot()+1)
@@ -114,7 +114,7 @@ func (s *Service) forkchoiceUpdateWithExecution(ctx context.Context, args *fcuCo
func (s *Service) shouldOverrideFCU(newHeadRoot [32]byte, proposingSlot primitives.Slot) bool {
headWeight, err := s.cfg.ForkChoiceStore.Weight(newHeadRoot)
if err != nil {
log.WithError(err).WithField("root", fmt.Sprintf("%#x", newHeadRoot)).Warn("could not determine node weight")
log.WithError(err).WithField("root", fmt.Sprintf("%#x", newHeadRoot)).Warn("Could not determine node weight")
}
currentSlot := s.CurrentSlot()
if proposingSlot == currentSlot {
@@ -132,17 +132,17 @@ func (s *Service) shouldOverrideFCU(newHeadRoot [32]byte, proposingSlot primitiv
if s.cfg.ForkChoiceStore.ShouldOverrideFCU() {
return true
}
secs, err := slots.SecondsSinceSlotStart(currentSlot,
uint64(s.genesisTime.Unix()), uint64(time.Now().Unix()))
sss, err := slots.SinceSlotStart(currentSlot, s.genesisTime, time.Now())
if err != nil {
log.WithError(err).Error("could not compute seconds since slot start")
log.WithError(err).Error("Could not compute seconds since slot start")
}
if secs >= doublylinkedtree.ProcessAttestationsThreshold {
if sss >= doublylinkedtree.ProcessAttestationsThreshold {
log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", newHeadRoot),
"weight": headWeight,
}).Infof("Attempted late block reorg aborted due to attestations at %d seconds",
doublylinkedtree.ProcessAttestationsThreshold)
"root": fmt.Sprintf("%#x", newHeadRoot),
"weight": headWeight,
"sinceSlotStart": sss,
"threshold": doublylinkedtree.ProcessAttestationsThreshold,
}).Info("Attempted late block reorg aborted due to attestations after threshold")
lateBlockFailedAttemptFirstThreshold.Inc()
}
}

View File

@@ -160,6 +160,7 @@ func TestShouldOverrideFCU(t *testing.T) {
ctx, fcs := tr.ctx, tr.fcs
service.SetGenesisTime(time.Now().Add(-time.Duration(2*params.BeaconConfig().SecondsPerSlot) * time.Second))
fcs.SetGenesisTime(time.Now().Add(-time.Duration(2*params.BeaconConfig().SecondsPerSlot) * time.Second))
headRoot := [32]byte{'b'}
parentRoot := [32]byte{'a'}
ojc := &ethpb.Checkpoint{}
@@ -180,11 +181,12 @@ func TestShouldOverrideFCU(t *testing.T) {
require.NoError(t, err)
require.Equal(t, headRoot, head)
fcs.SetGenesisTime(uint64(time.Now().Unix()) - 29)
wantLog := "aborted due to attestations after threshold"
fcs.SetGenesisTime(time.Now().Add(-29 * time.Second))
require.Equal(t, true, service.shouldOverrideFCU(parentRoot, 3))
require.LogsDoNotContain(t, hook, "10 seconds")
fcs.SetGenesisTime(uint64(time.Now().Unix()) - 24)
require.LogsDoNotContain(t, hook, wantLog)
fcs.SetGenesisTime(time.Now().Add(-24 * time.Second))
service.SetGenesisTime(time.Now().Add(-time.Duration(2*params.BeaconConfig().SecondsPerSlot+10) * time.Second))
require.Equal(t, false, service.shouldOverrideFCU(parentRoot, 3))
require.LogsContain(t, hook, "10 seconds")
require.LogsContain(t, hook, wantLog)
}

View File

@@ -98,7 +98,7 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
oldHeadRoot := bytesutil.ToBytes32(r)
isOptimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(newHeadRoot)
if err != nil {
log.WithError(err).Error("could not check if node is optimistically synced")
log.WithError(err).Error("Could not check if node is optimistically synced")
}
if headBlock.Block().ParentRoot() != oldHeadRoot {
// A chain re-org occurred, so we fire an event notifying the rest of the services.
@@ -111,11 +111,11 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
dep := math.Max(uint64(headSlot-forkSlot), uint64(newHeadSlot-forkSlot))
oldWeight, err := s.cfg.ForkChoiceStore.Weight(oldHeadRoot)
if err != nil {
log.WithField("root", fmt.Sprintf("%#x", oldHeadRoot)).Warn("could not determine node weight")
log.WithField("root", fmt.Sprintf("%#x", oldHeadRoot)).Warn("Could not determine node weight")
}
newWeight, err := s.cfg.ForkChoiceStore.Weight(newHeadRoot)
if err != nil {
log.WithField("root", fmt.Sprintf("%#x", newHeadRoot)).Warn("could not determine node weight")
log.WithField("root", fmt.Sprintf("%#x", newHeadRoot)).Warn("Could not determine node weight")
}
log.WithFields(logrus.Fields{
"newSlot": fmt.Sprintf("%d", newHeadSlot),


@@ -18,9 +18,6 @@ import (
"github.com/pkg/errors"
)
-// Initialize the state cache for sync committees.
-var syncCommitteeHeadStateCache = cache.NewSyncCommitteeHeadState()
// HeadSyncCommitteeFetcher is the interface that wraps the head sync committee related functions.
// The head sync committee functions return callers sync committee indices and public keys with respect to current head state.
type HeadSyncCommitteeFetcher interface {
@@ -143,7 +140,7 @@ func (s *Service) getSyncCommitteeHeadState(ctx context.Context, slot primitives
defer mLock.Unlock()
// If a head state already exists for the requested slot, we don't need to process slots.
-cachedState, err := syncCommitteeHeadStateCache.Get(slot)
+cachedState, err := s.syncCommitteeHeadState.Get(slot)
switch {
case err == nil:
syncHeadStateHit.Inc()
@@ -166,7 +163,7 @@ func (s *Service) getSyncCommitteeHeadState(ctx context.Context, slot primitives
return nil, err
}
syncHeadStateMiss.Inc()
-err = syncCommitteeHeadStateCache.Put(slot, headState)
+err = s.syncCommitteeHeadState.Put(slot, headState)
return headState, err
default:
// In the event we encounter another error


@@ -5,7 +5,6 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
dbTest "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/testing/require"
@@ -15,7 +14,7 @@ import (
func TestService_HeadSyncCommitteeIndices(t *testing.T) {
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
c := &Service{cfg: &config{BeaconDB: dbTest.SetupDB(t)}}
c := testServiceWithDB(t)
c.head = &head{state: s}
// Current period
@@ -38,7 +37,7 @@ func TestService_HeadSyncCommitteeIndices(t *testing.T) {
func TestService_headCurrentSyncCommitteeIndices(t *testing.T) {
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
c := &Service{cfg: &config{BeaconDB: dbTest.SetupDB(t)}}
c := testServiceWithDB(t)
c.head = &head{state: s}
// Process slot up to `EpochsPerSyncCommitteePeriod` so it can `ProcessSyncCommitteeUpdates`.
@@ -52,7 +51,7 @@ func TestService_headCurrentSyncCommitteeIndices(t *testing.T) {
func TestService_headNextSyncCommitteeIndices(t *testing.T) {
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
c := &Service{}
c := testServiceWithDB(t)
c.head = &head{state: s}
// Process slot up to `EpochsPerSyncCommitteePeriod` so it can `ProcessSyncCommitteeUpdates`.
@@ -66,7 +65,7 @@ func TestService_headNextSyncCommitteeIndices(t *testing.T) {
func TestService_HeadSyncCommitteePubKeys(t *testing.T) {
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
c := &Service{cfg: &config{BeaconDB: dbTest.SetupDB(t)}}
c := testServiceWithDB(t)
c.head = &head{state: s}
// Process slot up to 2 * `EpochsPerSyncCommitteePeriod` so it can run `ProcessSyncCommitteeUpdates` twice.
@@ -81,7 +80,7 @@ func TestService_HeadSyncCommitteePubKeys(t *testing.T) {
func TestService_HeadSyncCommitteeDomain(t *testing.T) {
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
c := &Service{cfg: &config{BeaconDB: dbTest.SetupDB(t)}}
c := testServiceWithDB(t)
c.head = &head{state: s}
wanted, err := signing.Domain(s.Fork(), slots.ToEpoch(s.Slot()), params.BeaconConfig().DomainSyncCommittee, s.GenesisValidatorsRoot())
@@ -95,7 +94,7 @@ func TestService_HeadSyncCommitteeDomain(t *testing.T) {
func TestService_HeadSyncContributionProofDomain(t *testing.T) {
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
c := &Service{}
c := testServiceWithDB(t)
c.head = &head{state: s}
wanted, err := signing.Domain(s.Fork(), slots.ToEpoch(s.Slot()), params.BeaconConfig().DomainContributionAndProof, s.GenesisValidatorsRoot())
@@ -109,7 +108,7 @@ func TestService_HeadSyncContributionProofDomain(t *testing.T) {
func TestService_HeadSyncSelectionProofDomain(t *testing.T) {
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
c := &Service{}
c := testServiceWithDB(t)
c.head = &head{state: s}
wanted, err := signing.Domain(s.Fork(), slots.ToEpoch(s.Slot()), params.BeaconConfig().DomainSyncCommitteeSelectionProof, s.GenesisValidatorsRoot())
@@ -122,17 +121,14 @@ func TestService_HeadSyncSelectionProofDomain(t *testing.T) {
}
func TestSyncCommitteeHeadStateCache_RoundTrip(t *testing.T) {
-c := syncCommitteeHeadStateCache
-t.Cleanup(func() {
-syncCommitteeHeadStateCache = cache.NewSyncCommitteeHeadState()
-})
+s := testServiceNoDB(t)
beaconState, _ := util.DeterministicGenesisStateAltair(t, 100)
require.NoError(t, beaconState.SetSlot(100))
-cachedState, err := c.Get(101)
+cachedState, err := s.syncCommitteeHeadState.Get(101)
require.ErrorContains(t, cache.ErrNotFound.Error(), err)
require.Equal(t, nil, cachedState)
-require.NoError(t, c.Put(101, beaconState))
-cachedState, err = c.Get(101)
+require.NoError(t, s.syncCommitteeHeadState.Put(101, beaconState))
+cachedState, err = s.syncCommitteeHeadState.Get(101)
require.NoError(t, err)
require.DeepEqual(t, beaconState, cachedState)
}


@@ -9,7 +9,6 @@ import (
mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
doublylinkedtree "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/blstoexec"
"github.com/OffchainLabs/prysm/v6/config/params"
@@ -154,14 +153,10 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
func Test_notifyNewHeadEvent(t *testing.T) {
t.Run("genesis_state_root", func(t *testing.T) {
bState, _ := util.DeterministicGenesisState(t, 10)
notifier := &mock.MockStateNotifier{RecordEvents: true}
srv := &Service{
cfg: &config{
StateNotifier: notifier,
ForkChoiceStore: doublylinkedtree.New(),
},
originBlockRoot: [32]byte{1},
}
srv := testServiceWithDB(t)
srv.SetGenesisTime(time.Now())
notifier := srv.cfg.StateNotifier.(*mock.MockStateNotifier)
srv.originBlockRoot = [32]byte{1}
st, blk, err := prepareForkchoiceState(t.Context(), 0, [32]byte{}, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
@@ -185,15 +180,11 @@ func Test_notifyNewHeadEvent(t *testing.T) {
})
t.Run("non_genesis_values", func(t *testing.T) {
bState, _ := util.DeterministicGenesisState(t, 10)
notifier := &mock.MockStateNotifier{RecordEvents: true}
genesisRoot := [32]byte{1}
srv := &Service{
cfg: &config{
StateNotifier: notifier,
ForkChoiceStore: doublylinkedtree.New(),
},
originBlockRoot: genesisRoot,
}
srv := testServiceWithDB(t)
srv.SetGenesisTime(time.Now())
srv.originBlockRoot = genesisRoot
notifier := srv.cfg.StateNotifier.(*mock.MockStateNotifier)
st, blk, err := prepareForkchoiceState(t.Context(), 0, [32]byte{}, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
@@ -407,7 +398,7 @@ func TestSaveOrphanedOps(t *testing.T) {
ctx := t.Context()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
-service.genesisTime = time.Now().Add(time.Duration(-10*int64(1)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
+service.SetGenesisTime(time.Now().Add(time.Duration(-10*int64(1)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second))
// Chain setup
// 0 -- 1 -- 2 -- 3


@@ -86,8 +86,8 @@ func logStateTransitionData(b interfaces.ReadOnlyBeaconBlock) error {
return nil
}
-func logBlockSyncStatus(block interfaces.ReadOnlyBeaconBlock, blockRoot [32]byte, justified, finalized *ethpb.Checkpoint, receivedTime time.Time, genesisTime uint64, daWaitedTime time.Duration) error {
-startTime, err := slots.ToTime(genesisTime, block.Slot())
+func logBlockSyncStatus(block interfaces.ReadOnlyBeaconBlock, blockRoot [32]byte, justified, finalized *ethpb.Checkpoint, receivedTime time.Time, genesis time.Time, daWaitedTime time.Duration) error {
+startTime, err := slots.StartTime(genesis, block.Slot())
if err != nil {
return err
}
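logBlockSyncStatus now receives genesis as a time.Time and converts a slot to its wall-clock start via slots.StartTime, which can return an error instead of silently wrapping. A rough sketch under the same assumption as above (an explicit secondsPerSlot parameter for self-containment; the real helper's signature may differ):

package slots

import (
	"errors"
	"time"
)

// startTimeSketch computes the wall-clock start of a slot from a time.Time genesis.
func startTimeSketch(genesis time.Time, slot, secondsPerSlot uint64) (time.Time, error) {
	d := time.Duration(slot) * time.Duration(secondsPerSlot) * time.Second
	if d < 0 { // the Duration multiplication overflowed; the old uint64 math could wrap here silently
		return time.Time{}, errors.New("slot is too far in the future")
	}
	return genesis.Add(d), nil
}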


@@ -8,16 +8,6 @@ import (
"github.com/OffchainLabs/prysm/v6/testing/util"
)
-func TestReportEpochMetrics_BadHeadState(t *testing.T) {
-s, err := util.NewBeaconState()
-require.NoError(t, err)
-h, err := util.NewBeaconState()
-require.NoError(t, err)
-require.NoError(t, h.SetValidators(nil))
-err = reportEpochMetrics(t.Context(), s, h)
-require.ErrorContains(t, "failed to initialize precompute: state has nil validator slice", err)
-}
func TestReportEpochMetrics_BadAttestation(t *testing.T) {
s, err := util.NewBeaconState()
require.NoError(t, err)


@@ -8,9 +8,10 @@ import (
doublylinkedtree "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
-func testServiceOptsWithDB(t *testing.T) []Option {
+func testServiceOptsWithDB(t testing.TB) []Option {
beaconDB := testDB.SetupDB(t)
fcs := doublylinkedtree.New()
cs := startup.NewClockSynchronizer()
@@ -31,3 +32,15 @@ func testServiceOptsNoDB() []Option {
cs := startup.NewClockSynchronizer()
return []Option{WithClockSynchronizer(cs)}
}
+func testServiceNoDB(t testing.TB) *Service {
+s, err := NewService(t.Context(), testServiceOptsNoDB()...)
+require.NoError(t, err)
+return s
+}
+func testServiceWithDB(t testing.TB) *Service {
+s, err := NewService(t.Context(), testServiceOptsWithDB(t)...)
+require.NoError(t, err)
+return s
+}


@@ -33,6 +33,14 @@ func WithMaxGoroutines(x int) Option {
}
}
+// WithLCStore for light client store access.
+func WithLCStore() Option {
+return func(s *Service) error {
+s.lcStore = lightclient.NewLightClientStore(s.cfg.BeaconDB)
+return nil
+}
+}
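WithLCStore wires a light client store into the service at construction, which is what lets block processing call s.lcStore directly and lets the tests further down opt in via minimalTestService(t, WithLCStore()). An illustrative construction; any option name other than WithLCStore below is a placeholder, not Prysm's actual API:

// Hypothetical assembly of the blockchain service with the light client store enabled.
svc, err := NewService(ctx, withBeaconDB(db) /* placeholder option */, WithLCStore())
if err != nil {
	log.WithError(err).Fatal("Could not create blockchain service")
}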
// WithWeakSubjectivityCheckpoint for checkpoint sync.
func WithWeakSubjectivityCheckpoint(c *ethpb.Checkpoint) Option {
return func(s *Service) error {
@@ -246,7 +254,7 @@ func WithSlasherEnabled(enabled bool) Option {
// WithGenesisTime sets the genesis time for the blockchain service.
func WithGenesisTime(genesisTime time.Time) Option {
return func(s *Service) error {
-s.genesisTime = genesisTime
+s.genesisTime = genesisTime.Truncate(time.Second) // Genesis time has a precision of 1 second.
return nil
}
}
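The truncation matters because consensus genesis time has whole-second precision; carrying a sub-second component into the service would skew every slot boundary derived from it. A toy illustration:

package main

import (
	"fmt"
	"time"
)

func main() {
	g := time.Unix(1700000000, int64(500*time.Millisecond)) // a genesis with a stray 500ms component
	fmt.Println(g.Truncate(time.Second))                    // the whole second the protocol expects
}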


@@ -60,10 +60,8 @@ func (s *Service) OnAttestation(ctx context.Context, a ethpb.Att, disparity time
return err
}
-genesisTime := uint64(s.genesisTime.Unix())
// Verify attestation target is from current epoch or previous epoch.
-if err := verifyAttTargetEpoch(ctx, genesisTime, uint64(time.Now().Add(disparity).Unix()), tgt); err != nil {
+if err := verifyAttTargetEpoch(ctx, s.genesisTime, time.Now().Add(disparity), tgt); err != nil {
return err
}
@@ -76,7 +74,7 @@ func (s *Service) OnAttestation(ctx context.Context, a ethpb.Att, disparity time
// validate_aggregate_proof.go and validate_beacon_attestation.go
// Verify attestations can only affect the fork choice of subsequent slots.
-if err := slots.VerifyTime(genesisTime, a.GetData().Slot+1, disparity); err != nil {
+if err := slots.VerifyTime(s.genesisTime, a.GetData().Slot+1, disparity); err != nil {
return err
}


@@ -4,12 +4,12 @@ import (
"context"
"fmt"
"strconv"
"time"
"github.com/OffchainLabs/prysm/v6/async"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
@@ -139,8 +139,8 @@ func (s *Service) getAttPreState(ctx context.Context, c *ethpb.Checkpoint) (stat
}
// verifyAttTargetEpoch validates attestation is from the current or previous epoch.
-func verifyAttTargetEpoch(_ context.Context, genesisTime, nowTime uint64, c *ethpb.Checkpoint) error {
-currentSlot := primitives.Slot((nowTime - genesisTime) / params.BeaconConfig().SecondsPerSlot)
+func verifyAttTargetEpoch(_ context.Context, genesis, now time.Time, c *ethpb.Checkpoint) error {
+currentSlot := slots.At(genesis, now)
currentEpoch := slots.ToEpoch(currentSlot)
var prevEpoch primitives.Epoch
// Prevents previous epoch underflow
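verifyAttTargetEpoch now derives the current slot with slots.At instead of dividing raw Unix seconds. A minimal sketch of that computation, clamping to slot 0 before genesis, which is consistent with TestCurrentSlot_HandlesOverflow further down expecting slot 0 for a future genesis (the real slots.At may differ; secondsPerSlot is passed explicitly only to keep the sketch self-contained):

package slots

import "time"

// atSketch returns the slot containing `now`, clamping to 0 for times before genesis.
func atSketch(genesis, now time.Time, secondsPerSlot uint64) uint64 {
	if now.Before(genesis) {
		return 0
	}
	return uint64(now.Sub(genesis) / (time.Duration(secondsPerSlot) * time.Second))
}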


@@ -355,22 +355,22 @@ func TestStore_UpdateCheckpointState(t *testing.T) {
func TestAttEpoch_MatchPrevEpoch(t *testing.T) {
ctx := t.Context()
-nowTime := uint64(params.BeaconConfig().SlotsPerEpoch) * params.BeaconConfig().SecondsPerSlot
-require.NoError(t, verifyAttTargetEpoch(ctx, 0, nowTime, &ethpb.Checkpoint{Root: make([]byte, fieldparams.RootLength)}))
+nowTime := time.Unix(int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot), 0)
+require.NoError(t, verifyAttTargetEpoch(ctx, time.Unix(0, 0), nowTime, &ethpb.Checkpoint{Root: make([]byte, fieldparams.RootLength)}))
}
func TestAttEpoch_MatchCurrentEpoch(t *testing.T) {
ctx := t.Context()
-nowTime := uint64(params.BeaconConfig().SlotsPerEpoch) * params.BeaconConfig().SecondsPerSlot
-require.NoError(t, verifyAttTargetEpoch(ctx, 0, nowTime, &ethpb.Checkpoint{Epoch: 1}))
+nowTime := time.Unix(int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot), 0)
+require.NoError(t, verifyAttTargetEpoch(ctx, time.Unix(0, 0), nowTime, &ethpb.Checkpoint{Epoch: 1}))
}
func TestAttEpoch_NotMatch(t *testing.T) {
ctx := t.Context()
-nowTime := 2 * uint64(params.BeaconConfig().SlotsPerEpoch) * params.BeaconConfig().SecondsPerSlot
-err := verifyAttTargetEpoch(ctx, 0, nowTime, &ethpb.Checkpoint{Root: make([]byte, fieldparams.RootLength)})
+nowTime := time.Unix(2*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot), 0)
+err := verifyAttTargetEpoch(ctx, time.Unix(0, 0), nowTime, &ethpb.Checkpoint{Root: make([]byte, fieldparams.RootLength)})
assert.ErrorContains(t, "target epoch 0 does not match current epoch 2 or prev epoch 1", err)
}


@@ -329,7 +329,7 @@ func (s *Service) updateEpochBoundaryCaches(ctx context.Context, st state.Beacon
// The latest block header is from the previous epoch
r, err := st.LatestBlockHeader().HashTreeRoot()
if err != nil {
log.WithError(err).Error("could not update proposer index state-root map")
log.WithError(err).Error("Could not update proposer index state-root map")
return nil
}
// The proposer indices cache takes the target root for the previous
@@ -339,12 +339,12 @@ func (s *Service) updateEpochBoundaryCaches(ctx context.Context, st state.Beacon
}
target, err := s.cfg.ForkChoiceStore.TargetRootForEpoch(r, e)
if err != nil {
log.WithError(err).Error("could not update proposer index state-root map")
log.WithError(err).Error("Could not update proposer index state-root map")
return nil
}
err = helpers.UpdateCachedCheckpointToStateRoot(st, &forkchoicetypes.Checkpoint{Epoch: e, Root: target})
if err != nil {
log.WithError(err).Error("could not update proposer index state-root map")
log.WithError(err).Error("Could not update proposer index state-root map")
}
return nil
}
@@ -562,7 +562,7 @@ func (s *Service) validateMergeTransitionBlock(ctx context.Context, stateVersion
// If there is not, it will call forkchoice updated with the correct payload attribute then cache the payload ID.
func (s *Service) runLateBlockTasks() {
if err := s.waitForSync(); err != nil {
log.WithError(err).Error("failed to wait for initial sync")
log.WithError(err).Error("Failed to wait for initial sync")
return
}
@@ -737,7 +737,10 @@ func (s *Service) areDataColumnsAvailable(
}
// Log for DA checks that cross over into the next slot; helpful for debugging.
-nextSlot := slots.BeginsAt(block.Slot()+1, s.genesisTime)
+nextSlot, err := slots.StartTime(s.genesisTime, block.Slot()+1)
+if err != nil {
+return fmt.Errorf("unable to determine slot start time: %w", err)
+}
// Avoid logging if DA check is called after next slot start.
if nextSlot.After(time.Now()) {
@@ -855,7 +858,10 @@ func (s *Service) areBlobsAvailable(ctx context.Context, root [fieldparams.RootL
nc := s.blobNotifiers.forRoot(root, block.Slot())
// Log for DA checks that cross over into the next slot; helpful for debugging.
-nextSlot := slots.BeginsAt(block.Slot()+1, s.genesisTime)
+nextSlot, err := slots.StartTime(s.genesisTime, block.Slot()+1)
+if err != nil {
+return fmt.Errorf("unable to determine slot start time: %w", err)
+}
// Avoid logging if DA check is called after next slot start.
if nextSlot.After(time.Now()) {
nst := time.AfterFunc(time.Until(nextSlot), func() {
@@ -906,7 +912,7 @@ func uint64MapToSortedSlice(input map[uint64]bool) []uint64 {
// it also updates the next slot cache and the proposer index cache to deal with skipped slots.
func (s *Service) lateBlockTasks(ctx context.Context) {
currentSlot := s.CurrentSlot()
-if s.CurrentSlot() == s.HeadSlot() {
+if currentSlot == s.HeadSlot() {
return
}
s.cfg.ForkChoiceStore.RLock()
@@ -927,10 +933,10 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
// blocks.
lastState.CopyAllTries()
if err := transition.UpdateNextSlotCache(ctx, lastRoot, lastState); err != nil {
log.WithError(err).Debug("could not update next slot state cache")
log.WithError(err).Debug("Could not update next slot state cache")
}
if err := s.handleEpochBoundary(ctx, currentSlot, headState, headRoot[:]); err != nil {
log.WithError(err).Error("lateBlockTasks: could not update epoch boundary caches")
log.WithError(err).Error("Could not update epoch boundary caches")
}
// return early if we already started building a block for the current
// head root
@@ -944,7 +950,7 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
if attribute.IsEmpty() {
headBlock, err := s.headBlock()
if err != nil {
log.WithError(err).WithField("head_root", headRoot).Error("unable to retrieve head block to fire payload attributes event")
log.WithError(err).WithField("head_root", headRoot).Error("Unable to retrieve head block to fire payload attributes event")
}
// notifyForkchoiceUpdate fires the payload attribute event. But in this case, we won't
// call notifyForkchoiceUpdate, so the event is fired here.


@@ -33,7 +33,7 @@ import (
// CurrentSlot returns the current slot based on time.
func (s *Service) CurrentSlot() primitives.Slot {
-return slots.CurrentSlot(uint64(s.genesisTime.Unix()))
+return slots.CurrentSlot(s.genesisTime)
}
// getFCUArgs returns the arguments to call forkchoice update
@@ -45,7 +45,7 @@ func (s *Service) getFCUArgs(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) er
return nil
}
slot := cfg.roblock.Block().Slot()
-if slots.WithinVotingWindow(uint64(s.genesisTime.Unix()), slot) {
+if slots.WithinVotingWindow(s.genesisTime, slot) {
return nil
}
return s.computePayloadAttributes(cfg, fcuArgs)
@@ -68,11 +68,11 @@ func (s *Service) getFCUArgsEarlyBlock(cfg *postBlockProcessConfig, fcuArgs *fcu
func (s *Service) logNonCanonicalBlockReceived(blockRoot [32]byte, headRoot [32]byte) {
receivedWeight, err := s.cfg.ForkChoiceStore.Weight(blockRoot)
if err != nil {
log.WithField("root", fmt.Sprintf("%#x", blockRoot)).Warn("could not determine node weight")
log.WithField("root", fmt.Sprintf("%#x", blockRoot)).Warn("Could not determine node weight")
}
headWeight, err := s.cfg.ForkChoiceStore.Weight(headRoot)
if err != nil {
log.WithField("root", fmt.Sprintf("%#x", headRoot)).Warn("could not determine node weight")
log.WithField("root", fmt.Sprintf("%#x", headRoot)).Warn("Could not determine node weight")
}
log.WithFields(logrus.Fields{
"receivedRoot": fmt.Sprintf("%#x", blockRoot),
@@ -189,33 +189,7 @@ func (s *Service) processLightClientUpdate(cfg *postBlockProcessConfig) error {
period := slots.SyncCommitteePeriod(slots.ToEpoch(attestedState.Slot()))
-oldUpdate, err := s.cfg.BeaconDB.LightClientUpdate(cfg.ctx, period)
-if err != nil {
-return errors.Wrapf(err, "could not get current light client update")
-}
-if oldUpdate == nil {
-if err := s.cfg.BeaconDB.SaveLightClientUpdate(cfg.ctx, period, update); err != nil {
-return errors.Wrapf(err, "could not save light client update")
-}
-log.WithField("period", period).Debug("Saved new light client update")
-return nil
-}
-isNewUpdateBetter, err := lightclient.IsBetterUpdate(update, oldUpdate)
-if err != nil {
-return errors.Wrapf(err, "could not compare light client updates")
-}
-if isNewUpdateBetter {
-if err := s.cfg.BeaconDB.SaveLightClientUpdate(cfg.ctx, period, update); err != nil {
-return errors.Wrapf(err, "could not save light client update")
-}
-log.WithField("period", period).Debug("Saved new light client update")
-return nil
-}
-log.WithField("period", period).Debug("New light client update is not better than the current one, skipping save")
-return nil
+return s.lcStore.SaveLightClientUpdate(cfg.ctx, period, update)
}
// processLightClientBootstrap saves a light client bootstrap for this block
@@ -226,7 +200,7 @@ func (s *Service) processLightClientBootstrap(cfg *postBlockProcessConfig) error
if err != nil {
return errors.Wrapf(err, "could not create light client bootstrap")
}
-if err := s.cfg.BeaconDB.SaveLightClientBootstrap(cfg.ctx, blockRoot[:], bootstrap); err != nil {
+if err := s.lcStore.SaveLightClientBootstrap(cfg.ctx, blockRoot, bootstrap); err != nil {
return errors.Wrapf(err, "could not save light client bootstrap")
}
return nil
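Both hunks above route light client persistence through s.lcStore instead of reaching into BeaconDB, so the save-or-compare logic lives in one place. From the calls visible in this diff, the store exposes roughly the following surface; this is inferred from usage, not the actual declaration, and the parameter and return types are assumptions:

// Inferred sketch of the light client store surface used in this diff.
type lightClientStore interface {
	SaveLightClientUpdate(ctx context.Context, period uint64, update interfaces.LightClientUpdate) error
	LightClientUpdate(ctx context.Context, period uint64) (interfaces.LightClientUpdate, error)
	SaveLightClientBootstrap(ctx context.Context, blockRoot [32]byte, bootstrap interfaces.LightClientBootstrap) error
	LightClientBootstrap(ctx context.Context, blockRoot [32]byte) (interfaces.LightClientBootstrap, error)
}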
@@ -432,7 +406,7 @@ func (s *Service) getBlockPreState(ctx context.Context, b interfaces.ReadOnlyBea
}
// Verify block slot time is not from the future.
-if err := slots.VerifyTime(uint64(s.genesisTime.Unix()), b.Slot(), params.BeaconConfig().MaximumGossipClockDisparityDuration()); err != nil {
+if err := slots.VerifyTime(s.genesisTime, b.Slot(), params.BeaconConfig().MaximumGossipClockDisparityDuration()); err != nil {
return nil, err
}
@@ -527,7 +501,7 @@ func (s *Service) updateFinalized(ctx context.Context, cp *ethpb.Checkpoint) err
// is meant to be asynchronous and run in the background rather than being
// tied to the execution of a block.
if err := s.cfg.StateGen.MigrateToCold(s.ctx, fRoot); err != nil {
log.WithError(err).Error("could not migrate to cold")
log.WithError(err).Error("Could not migrate to cold")
}
}()
return nil
@@ -616,7 +590,7 @@ func (s *Service) insertFinalizedDepositsAndPrune(ctx context.Context, fRoot [32
// Update deposit cache.
finalizedState, err := s.cfg.StateGen.StateByRoot(ctx, fRoot)
if err != nil {
log.WithError(err).Error("could not fetch finalized state")
log.WithError(err).Error("Could not fetch finalized state")
return
}
@@ -634,7 +608,7 @@ func (s *Service) insertFinalizedDepositsAndPrune(ctx context.Context, fRoot [32
// because the Eth1 follow distance makes such long-range reorgs extremely unlikely.
eth1DepositIndex, err := mathutil.Int(finalizedState.Eth1DepositIndex())
if err != nil {
log.WithError(err).Error("could not cast eth1 deposit index")
log.WithError(err).Error("Could not cast eth1 deposit index")
return
}
// The deposit index in the state is always the index of the next deposit
@@ -643,12 +617,12 @@ func (s *Service) insertFinalizedDepositsAndPrune(ctx context.Context, fRoot [32
finalizedEth1DepIdx := eth1DepositIndex - 1
if err = s.cfg.DepositCache.InsertFinalizedDeposits(ctx, int64(finalizedEth1DepIdx), common.Hash(finalizedState.Eth1Data().BlockHash),
0 /* Setting a zero value as we have no access to block height */); err != nil {
log.WithError(err).Error("could not insert finalized deposits")
log.WithError(err).Error("Could not insert finalized deposits")
return
}
// Deposit proofs are only used during state transition and can be safely removed to save space.
if err = s.cfg.DepositCache.PruneProofs(ctx, int64(finalizedEth1DepIdx)); err != nil {
log.WithError(err).Error("could not prune deposit proofs")
log.WithError(err).Error("Could not prune deposit proofs")
}
// Prune deposits which have already been finalized, the below method prunes all pending deposits (non-inclusive) up
// to the provided eth1 deposit index.


@@ -59,11 +59,8 @@ func Test_pruneAttsFromPool_Electra(t *testing.T) {
cfg.TargetCommitteeSize = 8
params.OverrideBeaconConfig(cfg)
-s := Service{
-cfg: &config{
-AttPool: kv.NewAttCaches(),
-},
-}
+s := testServiceNoDB(t)
+s.cfg.AttPool = kv.NewAttCaches()
data := &ethpb.AttestationData{
BeaconBlockRoot: make([]byte, 32),
@@ -471,7 +468,8 @@ func blockTree1(t *testing.T, beaconDB db.Database, genesisRoot []byte) ([][]byt
}
func TestCurrentSlot_HandlesOverflow(t *testing.T) {
-svc := Service{genesisTime: prysmTime.Now().Add(1 * time.Hour)}
+svc := testServiceNoDB(t)
+svc.genesisTime = prysmTime.Now().Add(1 * time.Hour)
slot := svc.CurrentSlot()
require.Equal(t, primitives.Slot(0), slot, "Unexpected slot")
@@ -1359,7 +1357,7 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveHeadBlockRoot(ctx, parentRoot), "Could not save genesis state")
for i := 1; i < 6; i++ {
-driftGenesisTime(service, int64(i), 0)
+driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
@@ -1380,7 +1378,7 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
}
for i := 6; i < 12; i++ {
-driftGenesisTime(service, int64(i), 0)
+driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlockAltair(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
@@ -1401,7 +1399,7 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
}
for i := 12; i < 18; i++ {
-driftGenesisTime(service, int64(i), 0)
+driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlockBellatrix(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
@@ -1550,7 +1548,7 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveHeadBlockRoot(ctx, parentRoot), "Could not save genesis state")
for i := 1; i < 6; i++ {
-driftGenesisTime(service, int64(i), 0)
+driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
@@ -1570,7 +1568,7 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
}
for i := 6; i < 12; i++ {
-driftGenesisTime(service, int64(i), 0)
+driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlockAltair(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
@@ -1591,7 +1589,7 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
}
for i := 12; i < 18; i++ {
-driftGenesisTime(service, int64(i), 0)
+driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlockBellatrix(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
@@ -1742,7 +1740,7 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveHeadBlockRoot(ctx, parentRoot), "Could not save genesis state")
for i := 1; i < 6; i++ {
-driftGenesisTime(service, int64(i), 0)
+driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
@@ -1763,7 +1761,7 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
}
for i := 6; i < 12; i++ {
-driftGenesisTime(service, int64(i), 0)
+driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlockAltair(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
@@ -1814,7 +1812,7 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
// import blocks 13 through 18 to justify 12
invalidRoots := make([][32]byte, 19-13)
for i := 13; i < 19; i++ {
-driftGenesisTime(service, int64(i), 0)
+driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlockBellatrix(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
@@ -1909,7 +1907,7 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, err)
// Import blocks 21--30 (Epoch 3 was not enough to justify 2)
for i := 21; i < 30; i++ {
-driftGenesisTime(service, int64(i), 0)
+driftGenesisTime(service, primitives.Slot(i), 0)
require.NoError(t, err)
b, err := util.GenerateFullBlockBellatrix(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
require.NoError(t, err)
@@ -1995,7 +1993,8 @@ func TestNoViableHead_Reboot(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveGenesisBlockRoot(ctx, genesisRoot), "Could not save genesis state")
for i := 1; i < 6; i++ {
-driftGenesisTime(service, int64(i), 0)
+t.Log(i)
+driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
@@ -2015,7 +2014,7 @@ func TestNoViableHead_Reboot(t *testing.T) {
}
for i := 6; i < 12; i++ {
-driftGenesisTime(service, int64(i), 0)
+driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlockAltair(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
@@ -2064,7 +2063,7 @@ func TestNoViableHead_Reboot(t *testing.T) {
// import blocks 13 through 18 to justify 12
for i := 13; i < 19; i++ {
-driftGenesisTime(service, int64(i), 0)
+driftGenesisTime(service, primitives.Slot(i), 0)
st, err := service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlockBellatrix(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
@@ -2307,6 +2306,7 @@ func TestOnBlock_HandleBlockAttestations(t *testing.T) {
func TestFillMissingBlockPayloadId_DiffSlotExitEarly(t *testing.T) {
logHook := logTest.NewGlobal()
service, tr := minimalTestService(t)
+service.SetGenesisTime(time.Now())
service.lateBlockTasks(tr.ctx)
require.LogsDoNotContain(t, logHook, "could not perform late block tasks")
}
@@ -2319,17 +2319,20 @@ func TestFillMissingBlockPayloadId_PrepareAllPayloads(t *testing.T) {
defer resetCfg()
service, tr := minimalTestService(t)
+service.SetGenesisTime(time.Now())
+service.SetForkChoiceGenesisTime(time.Now())
service.lateBlockTasks(tr.ctx)
require.LogsDoNotContain(t, logHook, "could not perform late block tasks")
}
// Helper function to simulate the block being on time or delayed for proposer
// boost. It alters the genesisTime tracked by the store.
-func driftGenesisTime(s *Service, slot, delay int64) {
-offset := slot*int64(params.BeaconConfig().SecondsPerSlot) + delay
-newTime := time.Unix(time.Now().Unix()-offset, 0)
-s.SetGenesisTime(newTime)
-s.cfg.ForkChoiceStore.SetGenesisTime(uint64(newTime.Unix()))
+func driftGenesisTime(s *Service, slot primitives.Slot, delay time.Duration) {
+now := time.Now()
+slotDuration := time.Duration(slot) * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second
+genesis := now.Add(-slotDuration - delay)
+s.SetGenesisTime(genesis)
+s.cfg.ForkChoiceStore.SetGenesisTime(genesis)
}
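With the typed signature, call sites pass a real slot and a real duration. For example, to pretend the node is two seconds into slot 3 (illustrative values):

driftGenesisTime(service, primitives.Slot(3), 2*time.Second)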
func TestMissingBlobIndices(t *testing.T) {
@@ -2710,61 +2713,16 @@ func TestProcessLightClientUpdate(t *testing.T) {
featCfg := &features.Flags{}
featCfg.EnableLightClient = true
reset := features.InitWithReset(featCfg)
defer reset()
-s, tr := minimalTestService(t)
+s, tr := minimalTestService(t, WithLCStore())
ctx := tr.ctx
t.Run("Altair", func(t *testing.T) {
t.Run("No old update", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Altair)
for testVersion := version.Altair; testVersion <= version.Electra; testVersion++ {
t.Run(version.String(testVersion), func(t *testing.T) {
l := util.NewTestLightClient(t, testVersion)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().AltairForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
require.NoError(t, s.processLightClientUpdate(cfg))
-// Check that the light client update is saved
-period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
-u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
-require.NoError(t, err)
-require.NotNil(t, u)
-attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
-require.NoError(t, err)
-require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
-require.Equal(t, u.Version(), version.Altair)
-})
-t.Run("New update is better", func(t *testing.T) {
-l := util.NewTestLightClient(t, version.Altair)
-s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().AltairForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
+s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().VersionToForkEpochMap()[testVersion])*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
@@ -2795,517 +2753,113 @@ func TestProcessLightClientUpdate(t *testing.T) {
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
require.NoError(t, err)
t.Run("no old update", func(t *testing.T) {
require.NoError(t, s.processLightClientUpdate(cfg))
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
require.NoError(t, s.processLightClientUpdate(cfg))
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
require.Equal(t, u.Version(), version.Altair)
})
t.Run("Old update is better", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Altair)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().AltairForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
require.NoError(t, err)
scb := make([]byte, 64)
for i := 0; i < 5; i++ {
scb[i] = 0x01
}
oldUpdate.SetSyncAggregate(&ethpb.SyncAggregate{
SyncCommitteeBits: scb,
SyncCommitteeSignature: make([]byte, 96),
// Check that the light client update is saved
u, err := s.lcStore.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
require.Equal(t, u.Version(), testVersion)
})
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
t.Run("new update is better", func(t *testing.T) {
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
require.NoError(t, err)
require.NoError(t, s.processLightClientUpdate(cfg))
err = s.lcStore.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
require.DeepEqual(t, oldUpdate, u)
require.Equal(t, u.Version(), version.Altair)
})
})
require.NoError(t, s.processLightClientUpdate(cfg))
t.Run("Capella", func(t *testing.T) {
t.Run("No old update", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Capella)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().CapellaForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
require.NoError(t, s.processLightClientUpdate(cfg))
// Check that the light client update is saved
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
require.Equal(t, u.Version(), version.Capella)
})
t.Run("New update is better", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Capella)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().CapellaForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
require.NoError(t, s.processLightClientUpdate(cfg))
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
require.Equal(t, u.Version(), version.Capella)
})
t.Run("Old update is better", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Capella)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().CapellaForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
require.NoError(t, err)
scb := make([]byte, 64)
for i := 0; i < 5; i++ {
scb[i] = 0x01
}
oldUpdate.SetSyncAggregate(&ethpb.SyncAggregate{
SyncCommitteeBits: scb,
SyncCommitteeSignature: make([]byte, 96),
u, err := s.lcStore.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
require.Equal(t, u.Version(), testVersion)
})
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
t.Run("old update is better", func(t *testing.T) {
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
require.NoError(t, err)
require.NoError(t, s.processLightClientUpdate(cfg))
// set a better sync aggregate
scb := make([]byte, 64)
for i := 0; i < 5; i++ {
scb[i] = 0x01
}
oldUpdate.SetSyncAggregate(&ethpb.SyncAggregate{
SyncCommitteeBits: scb,
SyncCommitteeSignature: make([]byte, 96),
})
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
require.DeepEqual(t, oldUpdate, u)
require.Equal(t, u.Version(), version.Capella)
})
})
err = s.lcStore.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
t.Run("Deneb", func(t *testing.T) {
t.Run("No old update", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Deneb)
require.NoError(t, s.processLightClientUpdate(cfg))
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().DenebForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
require.NoError(t, s.processLightClientUpdate(cfg))
// Check that the light client update is saved
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
require.Equal(t, u.Version(), version.Deneb)
})
t.Run("New update is better", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Deneb)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().DenebForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
require.NoError(t, s.processLightClientUpdate(cfg))
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
require.Equal(t, u.Version(), version.Deneb)
})
t.Run("Old update is better", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Deneb)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().DenebForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
require.NoError(t, err)
scb := make([]byte, 64)
for i := 0; i < 5; i++ {
scb[i] = 0x01
}
oldUpdate.SetSyncAggregate(&ethpb.SyncAggregate{
SyncCommitteeBits: scb,
SyncCommitteeSignature: make([]byte, 96),
u, err := s.lcStore.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
require.DeepEqual(t, oldUpdate, u)
require.Equal(t, u.Version(), testVersion)
})
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
require.NoError(t, s.processLightClientUpdate(cfg))
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
require.DeepEqual(t, oldUpdate, u)
require.Equal(t, u.Version(), version.Deneb)
})
})
reset()
}
}
func TestProcessLightClientBootstrap(t *testing.T) {
featCfg := &features.Flags{}
featCfg.EnableLightClient = true
reset := features.InitWithReset(featCfg)
defer reset()
-s, tr := minimalTestService(t)
+s, tr := minimalTestService(t, WithLCStore())
ctx := tr.ctx
t.Run("Altair", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Altair)
for testVersion := version.Altair; testVersion <= version.Electra; testVersion++ {
t.Run(version.String(testVersion), func(t *testing.T) {
l := util.NewTestLightClient(t, testVersion)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().AltairForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().VersionToForkEpochMap()[testVersion])*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
require.NoError(t, s.processLightClientBootstrap(cfg))
// Check that the light client bootstrap is saved
-b, err := s.cfg.BeaconDB.LightClientBootstrap(ctx, currentBlockRoot[:])
+b, err := s.lcStore.LightClientBootstrap(ctx, currentBlockRoot)
require.NoError(t, err)
require.NotNil(t, b)
-stateRoot, err := l.State.HashTreeRoot(ctx)
-require.NoError(t, err)
-require.Equal(t, stateRoot, [32]byte(b.Header().Beacon().StateRoot))
-require.Equal(t, b.Version(), version.Altair)
-})
-t.Run("Capella", func(t *testing.T) {
-l := util.NewTestLightClient(t, version.Capella)
-s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().CapellaForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
-currentBlockRoot, err := l.Block.Block().HashTreeRoot()
-require.NoError(t, err)
-roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
-require.NoError(t, err)
-err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
-require.NoError(t, err)
-err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
-require.NoError(t, err)
-cfg := &postBlockProcessConfig{
-ctx: ctx,
-roblock: roblock,
-postState: l.State,
-isValidPayload: true,
-}
-require.NoError(t, s.processLightClientBootstrap(cfg))
-// Check that the light client bootstrap is saved
-b, err := s.cfg.BeaconDB.LightClientBootstrap(ctx, currentBlockRoot[:])
-require.NoError(t, err)
-require.NotNil(t, b)
-stateRoot, err := l.State.HashTreeRoot(ctx)
-require.NoError(t, err)
-require.Equal(t, stateRoot, [32]byte(b.Header().Beacon().StateRoot))
-require.Equal(t, b.Version(), version.Capella)
-})
-t.Run("Deneb", func(t *testing.T) {
-l := util.NewTestLightClient(t, version.Deneb)
-s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().DenebForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
-currentBlockRoot, err := l.Block.Block().HashTreeRoot()
-require.NoError(t, err)
-roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
-require.NoError(t, err)
-err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
-require.NoError(t, err)
-err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
-require.NoError(t, err)
-cfg := &postBlockProcessConfig{
-ctx: ctx,
-roblock: roblock,
-postState: l.State,
-isValidPayload: true,
-}
-require.NoError(t, s.processLightClientBootstrap(cfg))
-// Check that the light client bootstrap is saved
-b, err := s.cfg.BeaconDB.LightClientBootstrap(ctx, currentBlockRoot[:])
-require.NoError(t, err)
-require.NotNil(t, b)
-stateRoot, err := l.State.HashTreeRoot(ctx)
-require.NoError(t, err)
-require.Equal(t, stateRoot, [32]byte(b.Header().Beacon().StateRoot))
-require.Equal(t, b.Version(), version.Deneb)
-})
-reset()
+stateRoot, err := l.State.HashTreeRoot(ctx)
+require.NoError(t, err)
+require.Equal(t, stateRoot, [32]byte(b.Header().Beacon().StateRoot))
+require.Equal(t, b.Version(), testVersion)
+})
}
}
type testIsAvailableParams struct {
@@ -3647,7 +3201,7 @@ func TestProcessLightClientOptimisticUpdate(t *testing.T) {
expectedVersion = version.Altair
case 2:
forkEpoch = uint64(params.BeaconConfig().BellatrixForkEpoch)
-expectedVersion = version.Altair
+expectedVersion = version.Bellatrix
case 3:
forkEpoch = uint64(params.BeaconConfig().CapellaForkEpoch)
expectedVersion = version.Capella
@@ -3656,7 +3210,7 @@ func TestProcessLightClientOptimisticUpdate(t *testing.T) {
expectedVersion = version.Deneb
case 5:
forkEpoch = uint64(params.BeaconConfig().ElectraForkEpoch)
-expectedVersion = version.Deneb
+expectedVersion = version.Electra
default:
t.Errorf("Unsupported fork version %s", version.String(testVersion))
}
@@ -3801,7 +3355,7 @@ func TestProcessLightClientFinalityUpdate(t *testing.T) {
expectedVersion = version.Altair
case 2:
forkEpoch = uint64(params.BeaconConfig().BellatrixForkEpoch)
-expectedVersion = version.Altair
+expectedVersion = version.Bellatrix
case 3:
forkEpoch = uint64(params.BeaconConfig().CapellaForkEpoch)
expectedVersion = version.Capella


@@ -43,7 +43,7 @@ func (s *Service) AttestationTargetState(ctx context.Context, target *ethpb.Chec
if err != nil {
return nil, err
}
-if err := slots.ValidateClock(ss, uint64(s.genesisTime.Unix())); err != nil {
+if err := slots.ValidateClock(ss, s.genesisTime); err != nil {
return nil, err
}
// We acquire the lock here instead than on gettAttPreState because that function gets called from UpdateHead that holds a write lock
@@ -69,7 +69,7 @@ func (s *Service) spawnProcessAttestationsRoutine() {
go func() {
_, err := s.clockWaiter.WaitForClock(s.ctx)
if err != nil {
log.WithError(err).Error("spawnProcessAttestationsRoutine failed to receive genesis data")
log.WithError(err).Error("Failed to receive genesis data")
return
}
if s.genesisTime.IsZero() {
@@ -103,7 +103,7 @@ func (s *Service) spawnProcessAttestationsRoutine() {
} else {
s.cfg.ForkChoiceStore.Lock()
if err := s.cfg.ForkChoiceStore.NewSlot(s.ctx, slotInterval.Slot); err != nil {
log.WithError(err).Error("could not process new slot")
log.WithError(err).Error("Could not process new slot")
}
s.cfg.ForkChoiceStore.Unlock()
@@ -144,7 +144,7 @@ func (s *Service) UpdateHead(ctx context.Context, proposingSlot primitives.Slot)
log.WithField("newHeadRoot", fmt.Sprintf("%#x", newHeadRoot)).Debug("Head changed due to attestations")
headState, headBlock, err := s.getStateAndBlock(ctx, newHeadRoot)
if err != nil {
log.WithError(err).Error("could not get head block")
log.WithError(err).Error("Could not get head block")
return
}
newAttHeadElapsedTime.Observe(float64(time.Since(start).Milliseconds()))
@@ -161,7 +161,7 @@ func (s *Service) UpdateHead(ctx context.Context, proposingSlot primitives.Slot)
return
}
if err := s.forkchoiceUpdateWithExecution(s.ctx, fcuArgs); err != nil {
log.WithError(err).Error("could not update forkchoice")
log.WithError(err).Error("Could not update forkchoice")
}
}
@@ -179,7 +179,7 @@ func (s *Service) processAttestations(ctx context.Context, disparity time.Durati
// This delays consideration in the fork choice until their slot is in the past.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/fork-choice.md#validate_on_attestation
nextSlot := a.GetData().Slot + 1
if err := slots.VerifyTime(uint64(s.genesisTime.Unix()), nextSlot, disparity); err != nil {
if err := slots.VerifyTime(s.genesisTime, nextSlot, disparity); err != nil {
continue
}
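
The hunk above encodes the fork-choice rule that an attestation for slot N is only considered once slot N+1 has started, now checked against a time.Time genesis via slots.VerifyTime. A minimal standalone sketch of that eligibility test (the names and shapes below are assumptions for illustration, not Prysm's API):

package main

import (
	"fmt"
	"time"
)

// attEligible reports whether an attestation for attSlot may enter fork choice,
// i.e. whether slot attSlot+1 has started. Sketch only; clock disparity omitted.
func attEligible(genesis time.Time, attSlot, secondsPerSlot uint64) bool {
	nextSlotStart := genesis.Add(time.Duration((attSlot+1)*secondsPerSlot) * time.Second)
	return !time.Now().Before(nextSlotStart)
}

func main() {
	genesis := time.Now().Add(-30 * time.Second) // chain started 30s ago
	fmt.Println(attEligible(genesis, 1, 12))     // slot 2 began at t=24s: true
}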

View File

@@ -70,7 +70,7 @@ func TestProcessAttestations_Ok(t *testing.T) {
service.genesisTime = prysmTime.Now().Add(-1 * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second)
genesisState, pks := util.DeterministicGenesisState(t, 64)
require.NoError(t, genesisState.SetGenesisTime(uint64(prysmTime.Now().Unix())-params.BeaconConfig().SecondsPerSlot))
require.NoError(t, genesisState.SetGenesisTime(time.Now().Add(-1*time.Duration(params.BeaconConfig().SecondsPerSlot)*time.Second)))
require.NoError(t, service.saveGenesisData(ctx, genesisState))
atts, err := util.GenerateAttestations(genesisState, pks, 1, 0, false)
require.NoError(t, err)

View File

@@ -183,7 +183,7 @@ func (s *Service) updateCheckpoints(
return errors.Wrap(err, "could not get head state")
}
if err := reportEpochMetrics(ctx, postState, headSt); err != nil {
log.WithError(err).Error("could not report epoch metrics")
log.WithError(err).Error("Could not report epoch metrics")
}
}
if err := s.updateJustificationOnBlock(ctx, preState, postState, cp.j); err != nil {
@@ -283,7 +283,7 @@ func (s *Service) reportPostBlockProcessing(
// Log block sync status.
cp = s.cfg.ForkChoiceStore.JustifiedCheckpoint()
justified := &ethpb.Checkpoint{Epoch: cp.Epoch, Root: bytesutil.SafeCopyBytes(cp.Root[:])}
if err := logBlockSyncStatus(block.Block(), blockRoot, justified, finalized, receivedTime, uint64(s.genesisTime.Unix()), daWaitedTime); err != nil {
if err := logBlockSyncStatus(block.Block(), blockRoot, justified, finalized, receivedTime, s.genesisTime, daWaitedTime); err != nil {
log.WithError(err).Error("Unable to log block sync status")
}
// Log payload data

View File

@@ -8,6 +8,7 @@ import (
blockchainTesting "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/das"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/voluntaryexits"
"github.com/OffchainLabs/prysm/v6/config/features"
@@ -338,7 +339,7 @@ func TestCheckSaveHotStateDB_Disabling(t *testing.T) {
func TestCheckSaveHotStateDB_Overflow(t *testing.T) {
hook := logTest.NewGlobal()
s, _ := minimalTestService(t)
s.genesisTime = time.Now()
s.SetGenesisTime(time.Now())
require.NoError(t, s.checkSaveHotStateDB(t.Context()))
assert.LogsDoNotContain(t, hook, "Entering mode to save hot states in DB")
@@ -348,8 +349,9 @@ func TestHandleCaches_EnablingLargeSize(t *testing.T) {
hook := logTest.NewGlobal()
s, _ := minimalTestService(t)
st := params.BeaconConfig().SlotsPerEpoch.Mul(uint64(epochsSinceFinalitySaveHotStateDB))
s.genesisTime = time.Now().Add(time.Duration(-1*int64(st)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
s.SetGenesisTime(time.Now().Add(time.Duration(-1*int64(st)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second))
helpers.ClearCache()
require.NoError(t, s.handleCaches())
assert.LogsContain(t, hook, "Expanding committee cache size")
}

View File

@@ -69,6 +69,7 @@ type Service struct {
slasherEnabled bool
lcStore *lightClient.Store
startWaitingDataColumnSidecars chan bool // for testing purposes only
syncCommitteeHeadState *cache.SyncCommitteeHeadStateCache
}
// config options for the service.
@@ -180,14 +181,15 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
seenIndex: make(map[[32]byte][]bool),
}
srv := &Service{
ctx: ctx,
cancel: cancel,
boundaryRoots: [][32]byte{},
checkpointStateCache: cache.NewCheckpointStateCache(),
initSyncBlocks: make(map[[32]byte]interfaces.ReadOnlySignedBeaconBlock),
blobNotifiers: bn,
cfg: &config{},
blockBeingSynced: &currentlySyncingBlock{roots: make(map[[32]byte]struct{})},
ctx: ctx,
cancel: cancel,
boundaryRoots: [][32]byte{},
checkpointStateCache: cache.NewCheckpointStateCache(),
initSyncBlocks: make(map[[32]byte]interfaces.ReadOnlySignedBeaconBlock),
blobNotifiers: bn,
cfg: &config{},
blockBeingSynced: &currentlySyncingBlock{roots: make(map[[32]byte]struct{})},
syncCommitteeHeadState: cache.NewSyncCommitteeHeadState(),
}
for _, opt := range opts {
if err := opt(srv); err != nil {
@@ -266,7 +268,7 @@ func (s *Service) Status() error {
// StartFromSavedState initializes the blockchain using a previously saved finalized checkpoint.
func (s *Service) StartFromSavedState(saved state.BeaconState) error {
log.Info("Blockchain data already exists in DB, initializing...")
s.genesisTime = time.Unix(int64(saved.GenesisTime()), 0) // lint:ignore uintcast -- Genesis time will not exceed int64 in your lifetime.
s.genesisTime = saved.GenesisTime()
s.cfg.AttService.SetGenesisTime(saved.GenesisTime())
originRoot, err := s.originRootFromSavedState(s.ctx)
@@ -371,7 +373,7 @@ func (s *Service) startFromExecutionChain() error {
if e.Type == statefeed.ChainStarted {
data, ok := e.Data.(*statefeed.ChainStartedData)
if !ok {
log.Error("event data is not type *statefeed.ChainStartedData")
log.Error("Event data is not type *statefeed.ChainStartedData")
return
}
log.WithField("startTime", data.StartTime).Debug("Received chain start event")
@@ -408,7 +410,7 @@ func (s *Service) onExecutionChainStart(ctx context.Context, genesisTime time.Ti
vr := bytesutil.ToBytes32(initializedState.GenesisValidatorsRoot())
if err := s.clockSetter.SetClock(startup.NewClock(genesisTime, vr)); err != nil {
log.WithError(err).Fatal("failed to initialize blockchain service from execution start event")
log.WithError(err).Fatal("Failed to initialize blockchain service from execution start event")
}
}
@@ -422,7 +424,7 @@ func (s *Service) initializeBeaconChain(
eth1data *ethpb.Eth1Data) (state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "beacon-chain.Service.initializeBeaconChain")
defer span.End()
s.genesisTime = genesisTime
s.genesisTime = genesisTime.Truncate(time.Second) // Genesis time has a precision of 1 second.
unixTime := uint64(genesisTime.Unix())
genesisState, err := transition.OptimizedGenesisBeaconState(unixTime, preGenesisState, eth1data)
@@ -483,7 +485,7 @@ func (s *Service) saveGenesisData(ctx context.Context, genesisState state.Beacon
if err := s.cfg.ForkChoiceStore.SetOptimisticToValid(ctx, genesisBlkRoot); err != nil {
return errors.Wrap(err, "Could not set optimistic status of genesis block to false")
}
s.cfg.ForkChoiceStore.SetGenesisTime(uint64(s.genesisTime.Unix()))
s.cfg.ForkChoiceStore.SetGenesisTime(s.genesisTime)
if err := s.setHead(&head{
genesisBlkRoot,
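
Much of this diff swaps uint64 Unix seconds for time.Time when carrying the genesis time, and the truncation above is what keeps the two representations interchangeable. A minimal round-trip sketch in plain Go (not Prysm API):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Genesis time has one-second precision, so truncate before storing.
	genesis := time.Now().Truncate(time.Second)
	seconds := uint64(genesis.Unix())        // legacy representation
	restored := time.Unix(int64(seconds), 0) // back to time.Time
	fmt.Println(genesis.Equal(restored))     // true: nothing is lost either way
}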

View File

@@ -4,7 +4,6 @@ import (
"io"
"testing"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
@@ -17,10 +16,7 @@ func init() {
}
func TestChainService_SaveHead_DataRace(t *testing.T) {
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{BeaconDB: beaconDB},
}
s := testServiceWithDB(t)
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
st, _ := util.DeterministicGenesisState(t, 1)
require.NoError(t, err)

View File

@@ -127,7 +127,7 @@ func TestChainStartStop_Initialized(t *testing.T) {
util.SaveBlock(t, ctx, beaconDB, genesisBlk)
s, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, s.SetGenesisTime(uint64(gt.Unix())))
require.NoError(t, s.SetGenesisTime(gt))
require.NoError(t, s.SetSlot(1))
require.NoError(t, beaconDB.SaveState(ctx, s, blkRoot))
require.NoError(t, beaconDB.SaveHeadBlockRoot(ctx, blkRoot))
@@ -164,7 +164,7 @@ func TestChainStartStop_GenesisZeroHashes(t *testing.T) {
wsb := util.SaveBlock(t, ctx, beaconDB, genesisBlk)
s, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, s.SetGenesisTime(uint64(gt.Unix())))
require.NoError(t, s.SetGenesisTime(gt))
require.NoError(t, beaconDB.SaveState(ctx, s, blkRoot))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, blkRoot))
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
@@ -238,7 +238,7 @@ func TestChainService_CorrectGenesisRoots(t *testing.T) {
util.SaveBlock(t, ctx, beaconDB, genesisBlk)
s, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, s.SetGenesisTime(uint64(gt.Unix())))
require.NoError(t, s.SetGenesisTime(gt))
require.NoError(t, s.SetSlot(0))
require.NoError(t, beaconDB.SaveState(ctx, s, blkRoot))
require.NoError(t, beaconDB.SaveHeadBlockRoot(ctx, blkRoot))
@@ -345,12 +345,8 @@ func TestChainService_InitializeChainInfo_SetHeadAtGenesis(t *testing.T) {
}
func TestChainService_SaveHeadNoDB(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := t.Context()
fc := doublylinkedtree.New()
s := &Service{
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB, fc), ForkChoiceStore: fc},
}
s := testServiceWithDB(t)
blk := util.NewBeaconBlock()
blk.Block.Slot = 1
r, err := blk.HashTreeRoot()
@@ -371,10 +367,7 @@ func TestChainService_SaveHeadNoDB(t *testing.T) {
func TestHasBlock_ForkChoiceAndDB_DoublyLinkedTree(t *testing.T) {
ctx := t.Context()
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{ForkChoiceStore: doublylinkedtree.New(), BeaconDB: beaconDB},
}
s := testServiceWithDB(t)
b := util.NewBeaconBlock()
wsb, err := consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
@@ -391,22 +384,16 @@ func TestHasBlock_ForkChoiceAndDB_DoublyLinkedTree(t *testing.T) {
}
func TestServiceStop_SaveCachedBlocks(t *testing.T) {
ctx, cancel := context.WithCancel(t.Context())
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB, doublylinkedtree.New())},
ctx: ctx,
cancel: cancel,
initSyncBlocks: make(map[[32]byte]interfaces.ReadOnlySignedBeaconBlock),
}
s := testServiceWithDB(t)
s.initSyncBlocks = make(map[[32]byte]interfaces.ReadOnlySignedBeaconBlock)
bb := util.NewBeaconBlock()
r, err := bb.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(bb)
require.NoError(t, err)
require.NoError(t, s.saveInitSyncBlock(ctx, r, wsb))
require.NoError(t, s.saveInitSyncBlock(s.ctx, r, wsb))
require.NoError(t, s.Stop())
require.Equal(t, true, s.cfg.BeaconDB.HasBlock(ctx, r))
require.Equal(t, true, s.cfg.BeaconDB.HasBlock(s.ctx, r))
}
func TestProcessChainStartTime_ReceivedFeed(t *testing.T) {
@@ -428,11 +415,8 @@ func TestProcessChainStartTime_ReceivedFeed(t *testing.T) {
}
func BenchmarkHasBlockDB(b *testing.B) {
beaconDB := testDB.SetupDB(b)
ctx := b.Context()
s := &Service{
cfg: &config{BeaconDB: beaconDB},
}
s := testServiceWithDB(b)
blk := util.NewBeaconBlock()
wsb, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(b, err)
@@ -448,10 +432,7 @@ func BenchmarkHasBlockDB(b *testing.B) {
func BenchmarkHasBlockForkChoiceStore_DoublyLinkedTree(b *testing.B) {
ctx := b.Context()
beaconDB := testDB.SetupDB(b)
s := &Service{
cfg: &config{ForkChoiceStore: doublylinkedtree.New(), BeaconDB: beaconDB},
}
s := testServiceWithDB(b)
blk := util.NewBeaconBlock()
r, err := blk.Block.HashTreeRoot()
require.NoError(b, err)
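
These tests now build their Service through a shared testServiceWithDB helper instead of hand-rolled struct literals. The helper's body is not shown in this diff; a plausible reconstruction, inferred from the literals it replaces and labeled as such:

// Hypothetical sketch of the helper, assembled from the removed literals above;
// the real implementation may wire additional fields.
func testServiceWithDB(t testing.TB) *Service {
	beaconDB := testDB.SetupDB(t)
	fc := doublylinkedtree.New()
	ctx, cancel := context.WithCancel(context.Background())
	return &Service{
		cfg:    &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB, fc), ForkChoiceStore: fc},
		ctx:    ctx,
		cancel: cancel,
	}
}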

View File

@@ -38,7 +38,7 @@ func (s *Service) startupHeadRoot() [32]byte {
if headStr == "head" {
root, err := s.cfg.BeaconDB.HeadBlockRoot()
if err != nil {
log.WithError(err).Error("could not get head block root, starting with finalized block as head")
log.WithError(err).Error("Could not get head block root, starting with finalized block as head")
return fRoot
}
log.Infof("Using Head root of %#x", root)
@@ -46,7 +46,7 @@ func (s *Service) startupHeadRoot() [32]byte {
}
root, err := bytesutil.DecodeHexWithLength(headStr, 32)
if err != nil {
log.WithError(err).Error("could not parse head root, starting with finalized block as head")
log.WithError(err).Error("Could not parse head root, starting with finalized block as head")
return fRoot
}
return [32]byte(root)
@@ -64,16 +64,16 @@ func (s *Service) setupForkchoiceTree(st state.BeaconState) error {
}
blk, err := s.cfg.BeaconDB.Block(s.ctx, headRoot)
if err != nil {
log.WithError(err).Error("could not get head block, starting with finalized block as head")
log.WithError(err).Error("Could not get head block, starting with finalized block as head")
return nil
}
if slots.ToEpoch(blk.Block().Slot()) < cp.Epoch {
log.WithField("headRoot", fmt.Sprintf("%#x", headRoot)).Error("head block is older than finalized block, starting with finalized block as head")
log.WithField("headRoot", fmt.Sprintf("%#x", headRoot)).Error("Head block is older than finalized block, starting with finalized block as head")
return nil
}
chain, err := s.buildForkchoiceChain(s.ctx, blk)
if err != nil {
log.WithError(err).Error("could not build forkchoice chain, starting with finalized block as head")
log.WithError(err).Error("Could not build forkchoice chain, starting with finalized block as head")
return nil
}
s.cfg.ForkChoiceStore.Lock()
@@ -170,6 +170,6 @@ func (s *Service) setupForkchoiceCheckpoints() error {
Root: fRoot}); err != nil {
return errors.Wrap(err, "could not update forkchoice's finalized checkpoint")
}
s.cfg.ForkChoiceStore.SetGenesisTime(uint64(s.genesisTime.Unix()))
s.cfg.ForkChoiceStore.SetGenesisTime(s.genesisTime)
return nil
}

View File

@@ -32,7 +32,7 @@ func Test_startupHeadRoot(t *testing.T) {
})
defer resetCfg()
require.Equal(t, service.startupHeadRoot(), gr)
require.LogsContain(t, hook, "could not get head block root, starting with finalized block as head")
require.LogsContain(t, hook, "Could not get head block root, starting with finalized block as head")
})
st, _ := util.DeterministicGenesisState(t, 64)

View File

@@ -4,6 +4,7 @@ import (
"context"
"sync"
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/async/event"
mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
@@ -24,7 +25,9 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/require"
"google.golang.org/protobuf/proto"
@@ -109,8 +112,10 @@ type testServiceRequirements struct {
func minimalTestService(t *testing.T, opts ...Option) (*Service, *testServiceRequirements) {
ctx := t.Context()
genesis := time.Now().Add(-1 * 4 * time.Duration(params.BeaconConfig().SlotsPerEpoch*primitives.Slot(params.BeaconConfig().SecondsPerSlot)) * time.Second) // Genesis was 4 epochs ago.
beaconDB := testDB.SetupDB(t)
fcs := doublylinkedtree.New()
fcs.SetGenesisTime(genesis)
sg := stategen.New(beaconDB, fcs)
notif := &mockBeaconNode{}
fcs.SetBalancesByRooter(sg.ActiveNonSlashedBalancesByRoot)
@@ -149,6 +154,7 @@ func minimalTestService(t *testing.T, opts ...Option) (*Service, *testServiceReq
WithExecutionEngineCaller(&mockExecution.EngineClient{}),
WithP2PBroadcaster(&mockAccessor{}),
WithLightClientStore(&lightclient.Store{}),
WithGenesisTime(genesis),
}
// append the variadic opts so they override the defaults by being processed afterwards
opts = append(defOpts, opts...)

View File

@@ -555,11 +555,11 @@ func (s *ChainService) UpdateHead(ctx context.Context, slot primitives.Slot) {
ojc := &ethpb.Checkpoint{}
st, root, err := prepareForkchoiceState(ctx, slot, bytesutil.ToBytes32(s.Root), [32]byte{}, [32]byte{}, ojc, ojc)
if err != nil {
logrus.WithError(err).Error("could not update head")
logrus.WithError(err).Error("Could not update head")
}
err = s.ForkChoiceStore.InsertNode(ctx, st, root)
if err != nil {
logrus.WithError(err).Error("could not insert node to forkchoice")
logrus.WithError(err).Error("Could not insert node to forkchoice")
}
}
@@ -641,7 +641,7 @@ func (s *ChainService) GetProposerHead() [32]byte {
}
// SetForkChoiceGenesisTime mocks the same method in the chain service
func (s *ChainService) SetForkChoiceGenesisTime(timestamp uint64) {
func (s *ChainService) SetForkChoiceGenesisTime(timestamp time.Time) {
if s.ForkChoiceStore != nil {
s.ForkChoiceStore.SetGenesisTime(timestamp)
}

View File

@@ -3,8 +3,6 @@ package blockchain
import (
"testing"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
doublylinkedtree "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
@@ -17,11 +15,8 @@ import (
)
func TestService_VerifyWeakSubjectivityRoot(t *testing.T) {
beaconDB := testDB.SetupDB(t)
b := util.NewBeaconBlock()
b.Block.Slot = 1792480
util.SaveBlock(t, t.Context(), beaconDB, b)
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
@@ -68,15 +63,15 @@ func TestService_VerifyWeakSubjectivityRoot(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
s := testServiceWithDB(t)
beaconDB := s.cfg.BeaconDB
util.SaveBlock(t, t.Context(), beaconDB, b)
wv, err := NewWeakSubjectivityVerifier(tt.checkpt, beaconDB)
require.Equal(t, !tt.disabled, wv.enabled)
require.NoError(t, err)
fcs := doublylinkedtree.New()
s := &Service{
cfg: &config{BeaconDB: beaconDB, WeakSubjectivityCheckpt: tt.checkpt, ForkChoiceStore: fcs},
wsVerifier: wv,
}
require.NoError(t, fcs.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: tt.finalizedEpoch}))
s.cfg.WeakSubjectivityCheckpt = tt.checkpt
s.wsVerifier = wv
require.Equal(t, !tt.disabled, wv.enabled)
require.NoError(t, s.cfg.ForkChoiceStore.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: tt.finalizedEpoch}))
cp := s.cfg.ForkChoiceStore.FinalizedCheckpoint()
err = s.wsVerifier.VerifyWeakSubjectivity(t.Context(), cp.Epoch)
if tt.wantErr == nil {

View File

@@ -24,7 +24,7 @@ var ErrNoBuilder = errors.New("builder endpoint not configured")
// BlockBuilder defines the interface for interacting with the block builder
type BlockBuilder interface {
SubmitBlindedBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, *v1.BlobsBundle, error)
SubmitBlindedBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, v1.BlobsBundler, error)
GetHeader(ctx context.Context, slot primitives.Slot, parentHash [32]byte, pubKey [48]byte) (builder.SignedBid, error)
RegisterValidator(ctx context.Context, reg []*ethpb.SignedValidatorRegistrationV1) error
RegistrationByValidatorID(ctx context.Context, id primitives.ValidatorIndex) (*ethpb.ValidatorRegistrationV1, error)
@@ -87,7 +87,7 @@ func (s *Service) Stop() error {
}
// SubmitBlindedBlock submits a blinded block to the builder relay network.
func (s *Service) SubmitBlindedBlock(ctx context.Context, b interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, *v1.BlobsBundle, error) {
func (s *Service) SubmitBlindedBlock(ctx context.Context, b interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, v1.BlobsBundler, error) {
ctx, span := trace.StartSpan(ctx, "builder.SubmitBlindedBlock")
defer span.End()
start := time.Now()

View File

@@ -29,6 +29,7 @@ type MockBuilderService struct {
PayloadCapella *v1.ExecutionPayloadCapella
PayloadDeneb *v1.ExecutionPayloadDeneb
BlobBundle *v1.BlobsBundle
BlobBundleV2 *v1.BlobsBundleV2
ErrSubmitBlindedBlock error
Bid *ethpb.SignedBuilderBid
BidCapella *ethpb.SignedBuilderBidCapella
@@ -46,7 +47,7 @@ func (s *MockBuilderService) Configured() bool {
}
// SubmitBlindedBlock for mocking.
func (s *MockBuilderService) SubmitBlindedBlock(_ context.Context, b interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, *v1.BlobsBundle, error) {
func (s *MockBuilderService) SubmitBlindedBlock(_ context.Context, b interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, v1.BlobsBundler, error) {
switch b.Version() {
case version.Bellatrix:
w, err := blocks.WrappedExecutionPayload(s.Payload)
@@ -66,6 +67,16 @@ func (s *MockBuilderService) SubmitBlindedBlock(_ context.Context, b interfaces.
return nil, nil, errors.Wrap(err, "could not wrap deneb payload")
}
return w, s.BlobBundle, s.ErrSubmitBlindedBlock
case version.Fulu:
w, err := blocks.WrappedExecutionPayloadDeneb(s.PayloadDeneb)
if err != nil {
return nil, nil, errors.Wrap(err, "could not wrap deneb payload for fulu")
}
// For Fulu, return BlobsBundleV2 if available, otherwise regular BlobsBundle
if s.BlobBundleV2 != nil {
return w, s.BlobBundleV2, s.ErrSubmitBlindedBlock
}
return w, s.BlobBundle, s.ErrSubmitBlindedBlock
default:
return nil, nil, errors.New("unknown block version for mocking")
}
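
SubmitBlindedBlock now returns the v1.BlobsBundler interface rather than a concrete *v1.BlobsBundle, which is what lets the mock hand back a BlobsBundleV2 for Fulu. A hedged sketch of how a caller might branch on the concrete type (the handling itself is assumed, not taken from this diff):

// handleBundle distinguishes the two bundle versions behind the interface.
func handleBundle(bundle v1.BlobsBundler) {
	switch b := bundle.(type) {
	case *v1.BlobsBundle:
		_ = b // pre-Fulu bundle
	case *v1.BlobsBundleV2:
		_ = b // Fulu bundle
	}
}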

View File

@@ -178,7 +178,7 @@ func (s *SyncCommitteeCache) UpdatePositionsInCommittee(syncCommitteeBoundaryRoo
s.lock.Lock()
defer s.lock.Unlock()
if clearCount != s.cleared.Load() {
log.Warn("cache rotated during async committee update operation - abandoning cache update")
log.Warn("Cache rotated during async committee update operation - abandoning cache update")
return nil
}

View File

@@ -31,7 +31,7 @@ func Test_BaseReward(t *testing.T) {
valIdx: 2,
st: genState(1),
want: 0,
errString: "validator index 2 does not exist",
errString: "index 2 out of bounds",
},
{
name: "active balance is 32eth",
@@ -89,7 +89,7 @@ func Test_BaseRewardWithTotalBalance(t *testing.T) {
valIdx: 2,
activeBalance: 1,
want: 0,
errString: "validator index 2 does not exist",
errString: "index 2 out of bounds",
},
{
name: "active balance is 1",

View File

@@ -217,15 +217,15 @@ func IsSyncCommitteeAggregator(sig []byte) (bool, error) {
// ValidateSyncMessageTime validates sync message to ensure that the provided slot is valid.
// Spec: [IGNORE] The message's slot is for the current slot (with a MAXIMUM_GOSSIP_CLOCK_DISPARITY allowance), i.e. sync_committee_message.slot == current_slot
func ValidateSyncMessageTime(slot primitives.Slot, genesisTime time.Time, clockDisparity time.Duration) error {
if err := slots.ValidateClock(slot, uint64(genesisTime.Unix())); err != nil {
if err := slots.ValidateClock(slot, genesisTime); err != nil {
return err
}
messageTime, err := slots.ToTime(uint64(genesisTime.Unix()), slot)
messageTime, err := slots.StartTime(genesisTime, slot)
if err != nil {
return err
}
currentSlot := slots.Since(genesisTime)
slotStartTime, err := slots.ToTime(uint64(genesisTime.Unix()), currentSlot)
currentSlot := slots.CurrentSlot(genesisTime)
slotStartTime, err := slots.StartTime(genesisTime, currentSlot)
if err != nil {
return err
}
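
slots.Since is replaced here by slots.CurrentSlot, which takes the genesis time.Time directly instead of Unix seconds. Its arithmetic is presumably equivalent to the following sketch (assumed behavior; overflow and pre-genesis clamping details may differ):

import "time"

// currentSlot computes the slot in progress at time.Now() for a given genesis.
func currentSlot(genesis time.Time, secondsPerSlot uint64) uint64 {
	if time.Now().Before(genesis) {
		return 0 // chain has not started yet
	}
	return uint64(time.Since(genesis).Seconds()) / secondsPerSlot
}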

View File

@@ -68,7 +68,7 @@ func UpgradeToAltair(ctx context.Context, state state.BeaconState) (state.Beacon
numValidators := state.NumValidators()
s := &ethpb.BeaconStateAltair{
GenesisTime: state.GenesisTime(),
GenesisTime: uint64(state.GenesisTime().Unix()),
GenesisValidatorsRoot: state.GenesisValidatorsRoot(),
Slot: state.Slot(),
Fork: &ethpb.Fork{

View File

@@ -119,6 +119,7 @@ func TestFuzzEth1DataHasEnoughSupport_10000(t *testing.T) {
require.NoError(t, err)
_, err = Eth1DataHasEnoughSupport(s, eth1data)
_ = err
fuzz.FreeMemory(i)
}
}
@@ -319,6 +320,7 @@ func TestFuzzverifyDeposit_10000(t *testing.T) {
require.NoError(t, err)
err = VerifyDeposit(s, deposit)
_ = err
fuzz.FreeMemory(i)
}
}
@@ -382,5 +384,6 @@ func TestFuzzVerifyExit_10000(t *testing.T) {
_ = err
err = VerifyExitAndSignature(val, s, ve)
_ = err
fuzz.FreeMemory(i)
}
}

View File

@@ -162,7 +162,7 @@ func ValidatePayload(st state.BeaconState, payload interfaces.ExecutionData) err
if !bytes.Equal(payload.PrevRandao(), random) {
return ErrInvalidPayloadPrevRandao
}
t, err := slots.ToTime(st.GenesisTime(), st.Slot())
t, err := slots.StartTime(st.GenesisTime(), st.Slot())
if err != nil {
return err
}

View File

@@ -501,7 +501,7 @@ func Test_ValidatePayload(t *testing.T) {
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
random, err := helpers.RandaoMix(st, time.CurrentEpoch(st))
require.NoError(t, err)
ts, err := slots.ToTime(st.GenesisTime(), st.Slot())
ts, err := slots.StartTime(st.GenesisTime(), st.Slot())
require.NoError(t, err)
tests := []struct {
name string
@@ -551,7 +551,7 @@ func Test_ProcessPayload(t *testing.T) {
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
random, err := helpers.RandaoMix(st, time.CurrentEpoch(st))
require.NoError(t, err)
ts, err := slots.ToTime(st.GenesisTime(), st.Slot())
ts, err := slots.StartTime(st.GenesisTime(), st.Slot())
require.NoError(t, err)
tests := []struct {
name string
@@ -627,7 +627,7 @@ func Test_ProcessPayload_Blinded(t *testing.T) {
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
random, err := helpers.RandaoMix(st, time.CurrentEpoch(st))
require.NoError(t, err)
ts, err := slots.ToTime(st.GenesisTime(), st.Slot())
ts, err := slots.StartTime(st.GenesisTime(), st.Slot())
require.NoError(t, err)
tests := []struct {
name string
@@ -697,7 +697,7 @@ func Test_ValidatePayloadHeader(t *testing.T) {
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
random, err := helpers.RandaoMix(st, time.CurrentEpoch(st))
require.NoError(t, err)
ts, err := slots.ToTime(st.GenesisTime(), st.Slot())
ts, err := slots.StartTime(st.GenesisTime(), st.Slot())
require.NoError(t, err)
tests := []struct {
name string

View File

@@ -43,7 +43,7 @@ func UpgradeToCapella(state state.BeaconState) (state.BeaconState, error) {
}
s := &ethpb.BeaconStateCapella{
GenesisTime: state.GenesisTime(),
GenesisTime: uint64(state.GenesisTime().Unix()),
GenesisValidatorsRoot: state.GenesisValidatorsRoot(),
Slot: state.Slot(),
Fork: &ethpb.Fork{

View File

@@ -59,7 +59,7 @@ func UpgradeToDeneb(state state.BeaconState) (state.BeaconState, error) {
}
s := &ethpb.BeaconStateDeneb{
GenesisTime: state.GenesisTime(),
GenesisTime: uint64(state.GenesisTime().Unix()),
GenesisValidatorsRoot: state.GenesisValidatorsRoot(),
Slot: state.Slot(),
Fork: &ethpb.Fork{

View File

@@ -194,11 +194,11 @@ func ProcessConsolidationRequests(ctx context.Context, st state.BeaconState, req
if IsValidSwitchToCompoundingRequest(st, cr) {
srcIdx, ok := st.ValidatorIndexByPubkey(bytesutil.ToBytes48(cr.SourcePubkey))
if !ok {
log.Error("failed to find source validator index")
log.Error("Failed to find source validator index")
continue
}
if err := SwitchToCompoundingValidator(st, srcIdx); err != nil {
log.WithError(err).Error("failed to switch to compounding validator")
log.WithError(err).Error("Failed to switch to compounding validator")
}
continue
}
@@ -280,7 +280,7 @@ func ProcessConsolidationRequests(ctx context.Context, st state.BeaconState, req
}
bal, err := st.PendingBalanceToWithdraw(srcIdx)
if err != nil {
log.WithError(err).Error("failed to fetch pending balance to withdraw")
log.WithError(err).Error("Failed to fetch pending balance to withdraw")
continue
}
if bal > 0 {
@@ -290,7 +290,7 @@ func ProcessConsolidationRequests(ctx context.Context, st state.BeaconState, req
// Initiate the exit of the source validator.
exitEpoch, err := ComputeConsolidationEpochAndUpdateChurn(ctx, st, primitives.Gwei(srcV.EffectiveBalance))
if err != nil {
log.WithError(err).Error("failed to compute consolidation epoch")
log.WithError(err).Error("Failed to compute consolidation epoch")
continue
}
srcV.ExitEpoch = exitEpoch

View File

@@ -208,7 +208,7 @@ func UpgradeToElectra(beaconState state.BeaconState) (state.BeaconState, error)
}
s := &ethpb.BeaconStateElectra{
GenesisTime: beaconState.GenesisTime(),
GenesisTime: uint64(beaconState.GenesisTime().Unix()),
GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
Slot: beaconState.Slot(),
Fork: &ethpb.Fork{

View File

@@ -36,7 +36,7 @@ func UpgradeToBellatrix(state state.BeaconState) (state.BeaconState, error) {
}
s := &ethpb.BeaconStateBellatrix{
GenesisTime: state.GenesisTime(),
GenesisTime: uint64(state.GenesisTime().Unix()),
GenesisValidatorsRoot: state.GenesisValidatorsRoot(),
Slot: state.Slot(),
Fork: &ethpb.Fork{

View File

@@ -38,11 +38,14 @@ const (
// SingleAttReceived is sent after a single attestation object is received from gossip or rpc
SingleAttReceived = 9
// DataColumnSidecarReceived is sent after a data column sidecar is received from gossip or rpc.
DataColumnSidecarReceived = 10
// BlockGossipReceived is sent after a block has been received from gossip or API that passes validation rules.
BlockGossipReceived = 10
BlockGossipReceived = 11
// DataColumnReceived is sent after a data column has been seen after gossip validation rules.
DataColumnReceived = 11
DataColumnReceived = 12
)
// UnAggregatedAttReceivedData is the data sent with UnaggregatedAttReceived events.
@@ -94,6 +97,11 @@ type SingleAttReceivedData struct {
Attestation ethpb.Att
}
// DataColumnSidecarReceivedData is the data sent with DataColumnSidecarReceived events.
type DataColumnSidecarReceivedData struct {
DataColumn *blocks.VerifiedRODataColumn
}
// BlockGossipReceivedData is the data sent with BlockGossipReceived events.
type BlockGossipReceivedData struct {
// SignedBlock is the block that was received.

View File

@@ -111,7 +111,7 @@ func UpgradeToFulu(ctx context.Context, beaconState state.BeaconState) (state.Be
}
s := &ethpb.BeaconStateFulu{
GenesisTime: beaconState.GenesisTime(),
GenesisTime: uint64(beaconState.GenesisTime().Unix()),
GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
Slot: beaconState.Slot(),
Fork: &ethpb.Fork{

View File

@@ -36,7 +36,6 @@ go_library(
"//monitoring/tracing/trace:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",

View File

@@ -10,7 +10,6 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/crypto/hash"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
prysmTime "github.com/OffchainLabs/prysm/v6/time"
"github.com/OffchainLabs/prysm/v6/time/slots"
)
@@ -125,18 +124,18 @@ func ComputeSubnetFromCommitteeAndSlot(activeValCount uint64, comIdx primitives.
// valid_attestation_slot = 101
//
// The attestation must be within the range of 95 to 102 in the example above.
func ValidateAttestationTime(attSlot primitives.Slot, genesisTime time.Time, clockDisparity time.Duration) error {
attTime, err := slots.ToTime(uint64(genesisTime.Unix()), attSlot)
func ValidateAttestationTime(attSlot primitives.Slot, genesis time.Time, clockDisparity time.Duration) error {
attTime, err := slots.StartTime(genesis, attSlot)
if err != nil {
return err
}
currentSlot := slots.Since(genesisTime)
currentSlot := slots.CurrentSlot(genesis)
// When receiving an attestation, it can be from the future,
// so the upper bound is set to now + clockDisparity (SECONDS_PER_SLOT * 2).
// But when sending an attestation, it should not be in a future slot,
// so the upper bound is set to now + clockDisparity (MAXIMUM_GOSSIP_CLOCK_DISPARITY).
upperBounds := prysmTime.Now().Add(clockDisparity)
upperBounds := time.Now().Add(clockDisparity)
// An attestation cannot be older than the current slot - attestation propagation slot range
// with a minor tolerance for peer clock disparity.
@@ -144,7 +143,7 @@ func ValidateAttestationTime(attSlot primitives.Slot, genesisTime time.Time, clo
if currentSlot > params.BeaconConfig().AttestationPropagationSlotRange {
lowerBoundsSlot = currentSlot - params.BeaconConfig().AttestationPropagationSlotRange
}
lowerTime, err := slots.ToTime(uint64(genesisTime.Unix()), lowerBoundsSlot)
lowerTime, err := slots.StartTime(genesis, lowerBoundsSlot)
if err != nil {
return err
}
@@ -187,7 +186,7 @@ func ValidateAttestationTime(attSlot primitives.Slot, genesisTime time.Time, clo
// VerifyCheckpointEpoch is within current epoch and previous epoch
// with respect to current time. Returns true if it's within, false if it's not.
func VerifyCheckpointEpoch(c *ethpb.Checkpoint, genesis time.Time) bool {
currentSlot := slots.CurrentSlot(uint64(genesis.Unix()))
currentSlot := slots.CurrentSlot(genesis)
currentEpoch := slots.ToEpoch(currentSlot)
var prevEpoch primitives.Epoch
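
To make the bounds in ValidateAttestationTime concrete: with the spec's ATTESTATION_PROPAGATION_SLOT_RANGE of 32, an attestation received at current slot 100 must have a slot of at least 68, while at current slot 15 the lower bound clamps to 0. A sketch of that clamping, mirroring the code above:

// lowerBoundSlot clamps the oldest acceptable attestation slot at zero.
func lowerBoundSlot(current, propagationRange uint64) uint64 {
	if current > propagationRange {
		return current - propagationRange
	}
	return 0
}

// lowerBoundSlot(100, 32) == 68; lowerBoundSlot(15, 32) == 0, matching the
// "range of 0 to 15" error message asserted in the test further down.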

View File

@@ -217,10 +217,10 @@ func Test_ValidateAttestationTime(t *testing.T) {
{
name: "attestation.slot is well beyond current slot",
args: args{
attSlot: 1 << 32,
genesisTime: prysmTime.Now().Add(-15 * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second),
attSlot: 1024,
genesisTime: time.Now().Add(-15 * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second),
},
wantedErr: "attestation slot 4294967296 not within attestation propagation range of 0 to 15 (current slot)",
wantedErr: "attestation slot 1024 not within attestation propagation range of 0 to 15 (current slot)",
},
}
for _, tt := range tests {

View File

@@ -403,7 +403,7 @@ func AssignmentForValidator(
}
}
}
return nil // validator is not scheduled this epoch
return &LiteAssignment{} // validator is not scheduled this epoch
}
// CommitteeAssignments calculates committee assignments for each validator during the specified epoch.
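
In the AssignmentForValidator hunk above, the not-scheduled case now returns an empty *LiteAssignment instead of nil, so callers can inspect the result without a nil check. Caller-side sketch (the Committee field is hypothetical, used only for illustration):

// isScheduled treats the zero-valued assignment as "no duty this epoch".
func isScheduled(a *LiteAssignment) bool {
	return len(a.Committee) > 0 // hypothetical field
}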
@@ -669,11 +669,11 @@ func ComputeCommittee(
// InitializeProposerLookahead computes the list of the proposer indices for the next MIN_SEED_LOOKAHEAD + 1 epochs.
func InitializeProposerLookahead(ctx context.Context, state state.ReadOnlyBeaconState, epoch primitives.Epoch) ([]uint64, error) {
lookAhead := make([]uint64, 0, uint64(params.BeaconConfig().MinSeedLookahead+1)*uint64(params.BeaconConfig().SlotsPerEpoch))
indices, err := ActiveValidatorIndices(ctx, state, epoch)
if err != nil {
return nil, errors.Wrap(err, "could not get active indices")
}
for i := range params.BeaconConfig().MinSeedLookahead + 1 {
indices, err := ActiveValidatorIndices(ctx, state, epoch+i)
if err != nil {
return nil, errors.Wrap(err, "could not get active indices")
}
proposerIndices, err := PrecomputeProposerIndices(state, indices, epoch+i)
if err != nil {
return nil, errors.Wrap(err, "could not compute proposer indices")
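
The fix moves ActiveValidatorIndices inside the loop so each lookahead epoch uses its own active set; previously the base epoch's indices were reused for epoch+1 and epoch+2, which breaks whenever the active set changes across an epoch boundary (e.g. activations around a fork transition). With the regression test's setup below (64 validators active at epoch 2, 128 from epoch 3), the difference is visible directly; sketch:

idx2, _ := ActiveValidatorIndices(ctx, state, 2) // 64 indices
idx3, _ := ActiveValidatorIndices(ctx, state, 3) // 128 indices
// Buggy code derived epoch-3 proposers from idx2; the fixed loop uses idx3.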

View File

@@ -912,6 +912,50 @@ func TestAssignmentForValidator(t *testing.T) {
{{4, 5, 6}},
}
got = helpers.AssignmentForValidator(bySlot, start, primitives.ValidatorIndex(99))
require.IsNil(t, got)
// Should be an empty assignment rather than nil, to be safe
require.DeepEqual(t, &helpers.LiteAssignment{}, got)
})
}
// Regression for #15450
func TestInitializeProposerLookahead_RegressionTest(t *testing.T) {
ctx := t.Context()
state, _ := util.DeterministicGenesisState(t, 128)
// Set some validators to activate in epoch 3 instead of 0
validators := state.Validators()
for i := 64; i < 128; i++ {
validators[i].ActivationEpoch = 3
}
require.NoError(t, state.SetValidators(validators))
require.NoError(t, state.SetSlot(64)) // epoch 2
epoch := slots.ToEpoch(state.Slot())
proposerLookahead, err := helpers.InitializeProposerLookahead(ctx, state, epoch)
require.NoError(t, err)
slotsPerEpoch := int(params.BeaconConfig().SlotsPerEpoch)
for epochOffset := primitives.Epoch(0); epochOffset < 2; epochOffset++ {
targetEpoch := epoch + epochOffset
activeIndices, err := helpers.ActiveValidatorIndices(ctx, state, targetEpoch)
require.NoError(t, err)
expectedProposers, err := helpers.PrecomputeProposerIndices(state, activeIndices, targetEpoch)
require.NoError(t, err)
startIdx := int(epochOffset) * slotsPerEpoch
endIdx := startIdx + slotsPerEpoch
actualProposers := proposerLookahead[startIdx:endIdx]
expectedUint64 := make([]uint64, len(expectedProposers))
for i, proposer := range expectedProposers {
expectedUint64[i] = uint64(proposer)
}
// This assertion would fail with the original bug:
for i, expected := range expectedUint64 {
require.Equal(t, expected, actualProposers[i],
"Proposer index mismatch at slot %d in epoch %d", i, targetEpoch)
}
}
}

View File

@@ -100,7 +100,7 @@ func TestIsCurrentEpochSyncCommittee_DoesNotExist(t *testing.T) {
require.NoError(t, state.SetNextSyncCommittee(syncCommittee))
ok, err := helpers.IsCurrentPeriodSyncCommittee(state, 12390192)
require.ErrorContains(t, "validator index 12390192 does not exist", err)
require.ErrorContains(t, "index 12390192 out of bounds", err)
require.Equal(t, false, ok)
}
@@ -187,7 +187,7 @@ func TestIsNextEpochSyncCommittee_DoesNotExist(t *testing.T) {
require.NoError(t, state.SetNextSyncCommittee(syncCommittee))
ok, err := helpers.IsNextPeriodSyncCommittee(state, 120391029)
require.ErrorContains(t, "validator index 120391029 does not exist", err)
require.ErrorContains(t, "index 120391029 out of bounds", err)
require.Equal(t, false, ok)
}
@@ -287,7 +287,7 @@ func TestCurrentEpochSyncSubcommitteeIndices_DoesNotExist(t *testing.T) {
require.NoError(t, state.SetNextSyncCommittee(syncCommittee))
index, err := helpers.CurrentPeriodSyncSubcommitteeIndices(state, 129301923)
require.ErrorContains(t, "validator index 129301923 does not exist", err)
require.ErrorContains(t, "index 129301923 out of bounds", err)
require.DeepEqual(t, []primitives.CommitteeIndex(nil), index)
}
@@ -374,7 +374,7 @@ func TestNextEpochSyncSubcommitteeIndices_DoesNotExist(t *testing.T) {
require.NoError(t, state.SetNextSyncCommittee(syncCommittee))
index, err := helpers.NextPeriodSyncSubcommitteeIndices(state, 21093019)
require.ErrorContains(t, "validator index 21093019 does not exist", err)
require.ErrorContains(t, "index 21093019 out of bounds", err)
require.DeepEqual(t, []primitives.CommitteeIndex(nil), index)
}

View File

@@ -561,17 +561,6 @@ func TestActiveValidatorIndices(t *testing.T) {
},
want: []primitives.ValidatorIndex{0, 2, 3},
},*/
{
name: "impossible_zero_validators", // Regression test for issue #13051
args: args{
state: &ethpb.BeaconState{
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
Validators: make([]*ethpb.Validator, 0),
},
epoch: 10,
},
wantedErr: "state has nil validator slice",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {

View File

@@ -3,12 +3,14 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"helpers.go",
"lightclient.go",
"store.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/db/iface:go_default_library",
"//beacon-chain/execution:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",
@@ -24,6 +26,7 @@ go_library(
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)
@@ -41,7 +44,6 @@ go_test(
"//consensus-types:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/light-client:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/ssz:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",

View File

@@ -0,0 +1,243 @@
package light_client
import (
"context"
"fmt"
"github.com/OffchainLabs/prysm/v6/beacon-chain/execution"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
light_client "github.com/OffchainLabs/prysm/v6/consensus-types/light-client"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
pb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
"google.golang.org/protobuf/proto"
)
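// createDefaultLightClientBootstrap returns an empty, correctly sized bootstrap
// for the fork active at currentSlot: an Altair-shaped message before Capella,
// then Capella, Deneb, and Electra shapes, using the deeper Electra
// sync-committee branch depth where applicable.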
func createDefaultLightClientBootstrap(currentSlot primitives.Slot) (interfaces.LightClientBootstrap, error) {
currentEpoch := slots.ToEpoch(currentSlot)
syncCommitteeSize := params.BeaconConfig().SyncCommitteeSize
pubKeys := make([][]byte, syncCommitteeSize)
for i := uint64(0); i < syncCommitteeSize; i++ {
pubKeys[i] = make([]byte, fieldparams.BLSPubkeyLength)
}
currentSyncCommittee := &pb.SyncCommittee{
Pubkeys: pubKeys,
AggregatePubkey: make([]byte, fieldparams.BLSPubkeyLength),
}
var currentSyncCommitteeBranch [][]byte
if currentEpoch >= params.BeaconConfig().ElectraForkEpoch {
currentSyncCommitteeBranch = make([][]byte, fieldparams.SyncCommitteeBranchDepthElectra)
} else {
currentSyncCommitteeBranch = make([][]byte, fieldparams.SyncCommitteeBranchDepth)
}
for i := 0; i < len(currentSyncCommitteeBranch); i++ {
currentSyncCommitteeBranch[i] = make([]byte, fieldparams.RootLength)
}
executionBranch := make([][]byte, fieldparams.ExecutionBranchDepth)
for i := 0; i < fieldparams.ExecutionBranchDepth; i++ {
executionBranch[i] = make([]byte, 32)
}
var m proto.Message
if currentEpoch < params.BeaconConfig().CapellaForkEpoch {
m = &pb.LightClientBootstrapAltair{
Header: &pb.LightClientHeaderAltair{
Beacon: &pb.BeaconBlockHeader{},
},
CurrentSyncCommittee: currentSyncCommittee,
CurrentSyncCommitteeBranch: currentSyncCommitteeBranch,
}
} else if currentEpoch < params.BeaconConfig().DenebForkEpoch {
m = &pb.LightClientBootstrapCapella{
Header: &pb.LightClientHeaderCapella{
Beacon: &pb.BeaconBlockHeader{},
Execution: &enginev1.ExecutionPayloadHeaderCapella{},
ExecutionBranch: executionBranch,
},
CurrentSyncCommittee: currentSyncCommittee,
CurrentSyncCommitteeBranch: currentSyncCommitteeBranch,
}
} else if currentEpoch < params.BeaconConfig().ElectraForkEpoch {
m = &pb.LightClientBootstrapDeneb{
Header: &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{},
Execution: &enginev1.ExecutionPayloadHeaderDeneb{},
ExecutionBranch: executionBranch,
},
CurrentSyncCommittee: currentSyncCommittee,
CurrentSyncCommitteeBranch: currentSyncCommitteeBranch,
}
} else {
m = &pb.LightClientBootstrapElectra{
Header: &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{},
Execution: &enginev1.ExecutionPayloadHeaderDeneb{},
ExecutionBranch: executionBranch,
},
CurrentSyncCommittee: currentSyncCommittee,
CurrentSyncCommitteeBranch: currentSyncCommitteeBranch,
}
}
return light_client.NewWrappedBootstrap(m)
}
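// makeExecutionAndProofDeneb builds the Deneb-format execution payload header
// and its inclusion proof for blk. For blocks older than Capella the default
// (zeroed) header and proof are returned; the blob gas fields are populated
// only for Deneb and later blocks.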
func makeExecutionAndProofDeneb(ctx context.Context, blk interfaces.ReadOnlySignedBeaconBlock) (*enginev1.ExecutionPayloadHeaderDeneb, [][]byte, error) {
if blk.Version() < version.Capella {
p, err := execution.EmptyExecutionPayloadHeader(version.Deneb)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get payload header")
}
payloadHeader, ok := p.(*enginev1.ExecutionPayloadHeaderDeneb)
if !ok {
return nil, nil, fmt.Errorf("payload header type %T is not %T", p, &enginev1.ExecutionPayloadHeaderDeneb{})
}
payloadProof := emptyPayloadProof()
return payloadHeader, payloadProof, nil
}
payload, err := blk.Block().Body().Execution()
if err != nil {
return nil, nil, errors.Wrap(err, "could not get execution payload")
}
transactionsRoot, err := ComputeTransactionsRoot(payload)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get transactions root")
}
withdrawalsRoot, err := ComputeWithdrawalsRoot(payload)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get withdrawals root")
}
payloadHeader := &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: payload.ParentHash(),
FeeRecipient: payload.FeeRecipient(),
StateRoot: payload.StateRoot(),
ReceiptsRoot: payload.ReceiptsRoot(),
LogsBloom: payload.LogsBloom(),
PrevRandao: payload.PrevRandao(),
BlockNumber: payload.BlockNumber(),
GasLimit: payload.GasLimit(),
GasUsed: payload.GasUsed(),
Timestamp: payload.Timestamp(),
ExtraData: payload.ExtraData(),
BaseFeePerGas: payload.BaseFeePerGas(),
BlockHash: payload.BlockHash(),
TransactionsRoot: transactionsRoot,
WithdrawalsRoot: withdrawalsRoot,
BlobGasUsed: 0,
ExcessBlobGas: 0,
}
if blk.Version() >= version.Deneb {
blobGasUsed, err := payload.BlobGasUsed()
if err != nil {
return nil, nil, errors.Wrap(err, "could not get blob gas used")
}
excessBlobGas, err := payload.ExcessBlobGas()
if err != nil {
return nil, nil, errors.Wrap(err, "could not get excess blob gas")
}
payloadHeader.BlobGasUsed = blobGasUsed
payloadHeader.ExcessBlobGas = excessBlobGas
}
payloadProof, err := blocks.PayloadProof(ctx, blk.Block())
if err != nil {
return nil, nil, errors.Wrap(err, "could not get execution payload proof")
}
return payloadHeader, payloadProof, nil
}
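// makeExecutionAndProofCapella is the Capella analogue: it rejects blocks newer
// than Capella, returns the default (zeroed) header and proof for blocks older
// than Capella, and otherwise derives both from the block body.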
func makeExecutionAndProofCapella(ctx context.Context, blk interfaces.ReadOnlySignedBeaconBlock) (*enginev1.ExecutionPayloadHeaderCapella, [][]byte, error) {
if blk.Version() > version.Capella {
return nil, nil, fmt.Errorf("unsupported block version %s for capella execution payload", version.String(blk.Version()))
}
if blk.Version() < version.Capella {
p, err := execution.EmptyExecutionPayloadHeader(version.Capella)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get payload header")
}
payloadHeader, ok := p.(*enginev1.ExecutionPayloadHeaderCapella)
if !ok {
return nil, nil, fmt.Errorf("payload header type %T is not %T", p, &enginev1.ExecutionPayloadHeaderCapella{})
}
payloadProof := emptyPayloadProof()
return payloadHeader, payloadProof, nil
}
payload, err := blk.Block().Body().Execution()
if err != nil {
return nil, nil, errors.Wrap(err, "could not get execution payload")
}
transactionsRoot, err := ComputeTransactionsRoot(payload)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get transactions root")
}
withdrawalsRoot, err := ComputeWithdrawalsRoot(payload)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get withdrawals root")
}
payloadHeader := &enginev1.ExecutionPayloadHeaderCapella{
ParentHash: payload.ParentHash(),
FeeRecipient: payload.FeeRecipient(),
StateRoot: payload.StateRoot(),
ReceiptsRoot: payload.ReceiptsRoot(),
LogsBloom: payload.LogsBloom(),
PrevRandao: payload.PrevRandao(),
BlockNumber: payload.BlockNumber(),
GasLimit: payload.GasLimit(),
GasUsed: payload.GasUsed(),
Timestamp: payload.Timestamp(),
ExtraData: payload.ExtraData(),
BaseFeePerGas: payload.BaseFeePerGas(),
BlockHash: payload.BlockHash(),
TransactionsRoot: transactionsRoot,
WithdrawalsRoot: withdrawalsRoot,
}
payloadProof, err := blocks.PayloadProof(ctx, blk.Block())
if err != nil {
return nil, nil, errors.Wrap(err, "could not get execution payload proof")
}
return payloadHeader, payloadProof, nil
}
func makeBeaconBlockHeader(blk interfaces.ReadOnlySignedBeaconBlock) (*pb.BeaconBlockHeader, error) {
parentRoot := blk.Block().ParentRoot()
stateRoot := blk.Block().StateRoot()
bodyRoot, err := blk.Block().Body().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "could not get body root")
}
return &pb.BeaconBlockHeader{
Slot: blk.Block().Slot(),
ProposerIndex: blk.Block().ProposerIndex(),
ParentRoot: parentRoot[:],
StateRoot: stateRoot[:],
BodyRoot: bodyRoot[:],
}, nil
}
func emptyPayloadProof() [][]byte {
branch := interfaces.LightClientExecutionBranch{}
proof := make([][]byte, len(branch))
for i, b := range branch {
proof[i] = b[:]
}
return proof
}

View File

@@ -6,12 +6,10 @@ import (
"fmt"
"reflect"
"github.com/OffchainLabs/prysm/v6/beacon-chain/execution"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
consensus_types "github.com/OffchainLabs/prysm/v6/consensus-types"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
light_client "github.com/OffchainLabs/prysm/v6/consensus-types/light-client"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
@@ -163,13 +161,13 @@ func NewLightClientUpdateFromBeaconState(
updateAttestedPeriod := slots.SyncCommitteePeriod(slots.ToEpoch(attestedBlock.Block().Slot()))
// update = LightClientUpdate()
result, err := CreateDefaultLightClientUpdate(currentSlot, attestedState)
result, err := CreateDefaultLightClientUpdate(attestedBlock)
if err != nil {
return nil, errors.Wrap(err, "could not create default light client update")
}
// update.attested_header = block_to_light_client_header(attested_block)
attestedLightClientHeader, err := BlockToLightClientHeader(ctx, currentSlot, attestedBlock)
attestedLightClientHeader, err := BlockToLightClientHeader(ctx, attestedBlock.Version(), attestedBlock)
if err != nil {
return nil, errors.Wrap(err, "could not get attested light client header")
}
@@ -210,7 +208,7 @@ func NewLightClientUpdateFromBeaconState(
// if finalized_block.message.slot != GENESIS_SLOT
if finalizedBlock.Block().Slot() != 0 {
// update.finalized_header = block_to_light_client_header(finalized_block)
finalizedLightClientHeader, err := BlockToLightClientHeader(ctx, currentSlot, finalizedBlock)
finalizedLightClientHeader, err := BlockToLightClientHeader(ctx, attestedBlock.Version(), finalizedBlock)
if err != nil {
return nil, errors.Wrap(err, "could not get finalized light client header")
}
@@ -247,9 +245,7 @@ func NewLightClientUpdateFromBeaconState(
return result, nil
}
func CreateDefaultLightClientUpdate(currentSlot primitives.Slot, attestedState state.BeaconState) (interfaces.LightClientUpdate, error) {
currentEpoch := slots.ToEpoch(currentSlot)
func CreateDefaultLightClientUpdate(attestedBlock interfaces.ReadOnlySignedBeaconBlock) (interfaces.LightClientUpdate, error) {
syncCommitteeSize := params.BeaconConfig().SyncCommitteeSize
pubKeys := make([][]byte, syncCommitteeSize)
for i := uint64(0); i < syncCommitteeSize; i++ {
@@ -261,7 +257,7 @@ func CreateDefaultLightClientUpdate(currentSlot primitives.Slot, attestedState s
}
var nextSyncCommitteeBranch [][]byte
if attestedState.Version() >= version.Electra {
if attestedBlock.Version() >= version.Electra {
nextSyncCommitteeBranch = make([][]byte, fieldparams.SyncCommitteeBranchDepthElectra)
} else {
nextSyncCommitteeBranch = make([][]byte, fieldparams.SyncCommitteeBranchDepth)
@@ -276,7 +272,7 @@ func CreateDefaultLightClientUpdate(currentSlot primitives.Slot, attestedState s
}
var finalityBranch [][]byte
if attestedState.Version() >= version.Electra {
if attestedBlock.Version() >= version.Electra {
finalityBranch = make([][]byte, fieldparams.FinalityBranchDepthElectra)
} else {
finalityBranch = make([][]byte, fieldparams.FinalityBranchDepth)
@@ -286,10 +282,12 @@ func CreateDefaultLightClientUpdate(currentSlot primitives.Slot, attestedState s
}
var m proto.Message
if currentEpoch < params.BeaconConfig().CapellaForkEpoch {
switch attestedBlock.Version() {
case version.Altair, version.Bellatrix:
m = &pb.LightClientUpdateAltair{
AttestedHeader: &pb.LightClientHeaderAltair{
Beacon: &pb.BeaconBlockHeader{
Slot: attestedBlock.Block().Slot(),
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
@@ -310,10 +308,11 @@ func CreateDefaultLightClientUpdate(currentSlot primitives.Slot, attestedState s
SyncCommitteeSignature: make([]byte, 96),
},
}
} else if currentEpoch < params.BeaconConfig().DenebForkEpoch {
case version.Capella:
m = &pb.LightClientUpdateCapella{
AttestedHeader: &pb.LightClientHeaderCapella{
Beacon: &pb.BeaconBlockHeader{
Slot: attestedBlock.Block().Slot(),
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
@@ -362,10 +361,11 @@ func CreateDefaultLightClientUpdate(currentSlot primitives.Slot, attestedState s
SyncCommitteeSignature: make([]byte, 96),
},
}
} else if currentEpoch < params.BeaconConfig().ElectraForkEpoch {
case version.Deneb:
m = &pb.LightClientUpdateDeneb{
AttestedHeader: &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{
Slot: attestedBlock.Block().Slot(),
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
@@ -418,120 +418,65 @@ func CreateDefaultLightClientUpdate(currentSlot primitives.Slot, attestedState s
SyncCommitteeSignature: make([]byte, 96),
},
}
} else {
if attestedState.Version() >= version.Electra {
m = &pb.LightClientUpdateElectra{
AttestedHeader: &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
},
Execution: &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
WithdrawalsRoot: make([]byte, fieldparams.RootLength),
GasLimit: 0,
GasUsed: 0,
},
ExecutionBranch: executionBranch,
case version.Electra, version.Fulu:
m = &pb.LightClientUpdateElectra{
AttestedHeader: &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{
Slot: attestedBlock.Block().Slot(),
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
},
Execution: &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
WithdrawalsRoot: make([]byte, fieldparams.RootLength),
GasLimit: 0,
GasUsed: 0,
},
ExecutionBranch: executionBranch,
},
NextSyncCommittee: nextSyncCommittee,
NextSyncCommitteeBranch: nextSyncCommitteeBranch,
FinalityBranch: finalityBranch,
FinalizedHeader: &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
},
Execution: &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
WithdrawalsRoot: make([]byte, fieldparams.RootLength),
GasLimit: 0,
GasUsed: 0,
},
ExecutionBranch: executionBranch,
},
SyncAggregate: &pb.SyncAggregate{
SyncCommitteeBits: make([]byte, 64),
SyncCommitteeSignature: make([]byte, 96),
},
}
default:
return nil, errors.Errorf("unsupported beacon chain version %s", version.String(attestedBlock.Version()))
}
return light_client.NewWrappedUpdate(m)
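A minimal usage sketch (assuming the test helpers util.NewBeaconBlockAltair and blocks.NewSignedBeaconBlock that appear in the tests later in this diff): the default update takes its shape from the block's fork version, and callers then fill in the fields they care about, such as the sync aggregate.

blk, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlockAltair())
if err != nil {
	return err
}
update, err := CreateDefaultLightClientUpdate(blk)
if err != nil {
	return err
}
// Zero-valued bits and signature here; real callers copy these from the state.
update.SetSyncAggregate(&pb.SyncAggregate{
	SyncCommitteeBits:      make([]byte, 64),
	SyncCommitteeSignature: make([]byte, 96),
})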
@@ -575,189 +520,52 @@ func ComputeWithdrawalsRoot(payload interfaces.ExecutionData) ([]byte, error) {
func BlockToLightClientHeader(
ctx context.Context,
attestedBlockVersion int, // this is the version that the light client header should be in, based on the attested block.
block interfaces.ReadOnlySignedBeaconBlock, // this block is either the attested block, or the finalized block.
// in case of the latter, we might need to upgrade it to the attested block's version.
) (interfaces.LightClientHeader, error) {
if block.Version() > attestedBlockVersion {
return nil, errors.Errorf("block version %s is greater than attested block version %s", version.String(block.Version()), version.String(attestedBlockVersion))
}
beacon, err := makeBeaconBlockHeader(block)
if err != nil {
return nil, errors.Wrap(err, "could not make beacon block header")
}
var m proto.Message
switch attestedBlockVersion {
case version.Altair, version.Bellatrix:
m = &pb.LightClientHeaderAltair{
Beacon: beacon,
}
case version.Capella:
payloadHeader, payloadProof, err := makeExecutionAndProofCapella(ctx, block)
if err != nil {
return nil, errors.Wrap(err, "could not make execution payload header and proof")
}
m = &pb.LightClientHeaderCapella{
Beacon: beacon,
Execution: payloadHeader,
ExecutionBranch: payloadProof,
}
case version.Deneb, version.Electra, version.Fulu:
payloadHeader, payloadProof, err := makeExecutionAndProofDeneb(ctx, block)
if err != nil {
return nil, errors.Wrap(err, "could not make execution payload header and proof")
}
m = &pb.LightClientHeaderDeneb{
Beacon: beacon,
Execution: payloadHeader,
ExecutionBranch: payloadProof,
}
default:
return nil, fmt.Errorf("unsupported attested block version %s", version.String(attestedBlockVersion))
}
return light_client.NewWrappedHeader(m)
}
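A brief sketch of the upgrade path this enables (finalizedBlock is a hypothetical Capella-era block): when the attested block is Deneb, the finalized header must still be emitted in Deneb format, so the caller passes the attested version rather than the block's own version.

// finalizedBlock predates Deneb; the header comes back as a LightClientHeaderDeneb.
header, err := BlockToLightClientHeader(ctx, version.Deneb, finalizedBlock)
if err != nil {
	return err
}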
func emptyPayloadProof() [][]byte {
branch := interfaces.LightClientExecutionBranch{}
proof := make([][]byte, len(branch))
for i, b := range branch {
proof[i] = b[:]
}
return proof
}
func HasRelevantSyncCommittee(update interfaces.LightClientUpdate) (bool, error) {
if update.Version() >= version.Electra {
branch, err := update.NextSyncCommitteeBranchElectra()
@@ -909,7 +717,7 @@ func NewLightClientBootstrapFromBeaconState(
return nil, errors.Wrap(err, "could not create default light client bootstrap")
}
lightClientHeader, err := BlockToLightClientHeader(ctx, state.Version(), block)
if err != nil {
return nil, errors.Wrap(err, "could not convert block to light client header")
}
@@ -942,78 +750,6 @@ func NewLightClientBootstrapFromBeaconState(
return bootstrap, nil
}
func createDefaultLightClientBootstrap(currentSlot primitives.Slot) (interfaces.LightClientBootstrap, error) {
currentEpoch := slots.ToEpoch(currentSlot)
syncCommitteeSize := params.BeaconConfig().SyncCommitteeSize
pubKeys := make([][]byte, syncCommitteeSize)
for i := uint64(0); i < syncCommitteeSize; i++ {
pubKeys[i] = make([]byte, fieldparams.BLSPubkeyLength)
}
currentSyncCommittee := &pb.SyncCommittee{
Pubkeys: pubKeys,
AggregatePubkey: make([]byte, fieldparams.BLSPubkeyLength),
}
var currentSyncCommitteeBranch [][]byte
if currentEpoch >= params.BeaconConfig().ElectraForkEpoch {
currentSyncCommitteeBranch = make([][]byte, fieldparams.SyncCommitteeBranchDepthElectra)
} else {
currentSyncCommitteeBranch = make([][]byte, fieldparams.SyncCommitteeBranchDepth)
}
for i := 0; i < len(currentSyncCommitteeBranch); i++ {
currentSyncCommitteeBranch[i] = make([]byte, fieldparams.RootLength)
}
executionBranch := make([][]byte, fieldparams.ExecutionBranchDepth)
for i := 0; i < fieldparams.ExecutionBranchDepth; i++ {
executionBranch[i] = make([]byte, 32)
}
// TODO: can this be based on the current epoch?
var m proto.Message
if currentEpoch < params.BeaconConfig().CapellaForkEpoch {
m = &pb.LightClientBootstrapAltair{
Header: &pb.LightClientHeaderAltair{
Beacon: &pb.BeaconBlockHeader{},
},
CurrentSyncCommittee: currentSyncCommittee,
CurrentSyncCommitteeBranch: currentSyncCommitteeBranch,
}
} else if currentEpoch < params.BeaconConfig().DenebForkEpoch {
m = &pb.LightClientBootstrapCapella{
Header: &pb.LightClientHeaderCapella{
Beacon: &pb.BeaconBlockHeader{},
Execution: &enginev1.ExecutionPayloadHeaderCapella{},
ExecutionBranch: executionBranch,
},
CurrentSyncCommittee: currentSyncCommittee,
CurrentSyncCommitteeBranch: currentSyncCommitteeBranch,
}
} else if currentEpoch < params.BeaconConfig().ElectraForkEpoch {
m = &pb.LightClientBootstrapDeneb{
Header: &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{},
Execution: &enginev1.ExecutionPayloadHeaderDeneb{},
ExecutionBranch: executionBranch,
},
CurrentSyncCommittee: currentSyncCommittee,
CurrentSyncCommitteeBranch: currentSyncCommitteeBranch,
}
} else {
m = &pb.LightClientBootstrapElectra{
Header: &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{},
Execution: &enginev1.ExecutionPayloadHeaderDeneb{},
ExecutionBranch: executionBranch,
},
CurrentSyncCommittee: currentSyncCommittee,
CurrentSyncCommitteeBranch: currentSyncCommitteeBranch,
}
}
return light_client.NewWrappedBootstrap(m)
}
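A small sketch of how this default is exercised (mirroring the DB tests later in this diff): the supplied slot decides the bootstrap's shape, so a slot in an Electra-era epoch yields the deeper Electra sync committee branch.

slot := primitives.Slot(uint64(params.BeaconConfig().ElectraForkEpoch) * uint64(params.BeaconConfig().SlotsPerEpoch))
bootstrap, err := createDefaultLightClientBootstrap(slot)
if err != nil {
	return err
}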
func UpdateHasSupermajority(syncAggregate *pb.SyncAggregate) bool {
maxActiveParticipants := syncAggregate.SyncCommitteeBits.Len()
numActiveParticipants := syncAggregate.SyncCommitteeBits.Count()


@@ -7,7 +7,6 @@ import (
"github.com/OffchainLabs/prysm/v6/config/params"
light_client "github.com/OffchainLabs/prysm/v6/consensus-types/light-client"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/runtime/version"
lightClient "github.com/OffchainLabs/prysm/v6/beacon-chain/core/light-client"
@@ -547,7 +546,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
header, err := lightClient.BlockToLightClientHeader(
l.Ctx,
version.Altair,
l.Block,
)
require.NoError(t, err)
@@ -570,7 +569,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
header, err := lightClient.BlockToLightClientHeader(
l.Ctx,
version.Bellatrix,
l.Block,
)
require.NoError(t, err)
@@ -594,7 +593,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
header, err := lightClient.BlockToLightClientHeader(
l.Ctx,
version.Capella,
l.Block,
)
require.NoError(t, err)
@@ -655,7 +654,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
header, err := lightClient.BlockToLightClientHeader(
l.Ctx,
version.Capella,
l.Block,
)
require.NoError(t, err)
@@ -718,7 +717,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
header, err := lightClient.BlockToLightClientHeader(
l.Ctx,
version.Deneb,
l.Block,
)
require.NoError(t, err)
@@ -787,7 +786,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
header, err := lightClient.BlockToLightClientHeader(
l.Ctx,
version.Deneb,
l.Block,
)
require.NoError(t, err)
@@ -856,7 +855,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
t.Run("Non-Blinded Beacon Block", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Electra)
header, err := lightClient.BlockToLightClientHeader(l.Ctx, version.Electra, l.Block)
require.NoError(t, err)
require.NotNil(t, header, "header is nil")
@@ -921,7 +920,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
t.Run("Blinded Beacon Block", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Electra, util.WithBlinded())
header, err := lightClient.BlockToLightClientHeader(l.Ctx, version.Electra, l.Block)
require.NoError(t, err)
require.NotNil(t, header, "header is nil")
@@ -989,7 +988,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
header, err := lightClient.BlockToLightClientHeader(
l.Ctx,
version.Capella,
l.Block)
require.NoError(t, err)
require.NotNil(t, header, "header is nil")
@@ -1011,7 +1010,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
header, err := lightClient.BlockToLightClientHeader(
l.Ctx,
version.Deneb,
l.Block)
require.NoError(t, err)
require.NotNil(t, header, "header is nil")
@@ -1034,7 +1033,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
header, err := lightClient.BlockToLightClientHeader(
l.Ctx,
version.Deneb,
l.Block)
require.NoError(t, err)
require.NotNil(t, header, "header is nil")
@@ -1094,7 +1093,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
header, err := lightClient.BlockToLightClientHeader(
l.Ctx,
version.Deneb,
l.Block)
require.NoError(t, err)
require.NotNil(t, header, "header is nil")
@@ -1180,14 +1179,13 @@ func createNonEmptyFinalityBranch() [][]byte {
}
func TestIsBetterUpdate(t *testing.T) {
blk, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlockAltair())
require.NoError(t, err)
t.Run("new has supermajority but old doesn't", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1203,9 +1201,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("old has supermajority but new doesn't", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1221,9 +1219,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("new doesn't have supermajority and newNumActiveParticipants is greater than oldNumActiveParticipants", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1239,9 +1237,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("new doesn't have supermajority and newNumActiveParticipants is lesser than oldNumActiveParticipants", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1257,9 +1255,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("new has relevant sync committee but old doesn't", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1296,9 +1294,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("old has relevant sync committee but new doesn't", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1335,9 +1333,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("new has finality but old doesn't", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1378,9 +1376,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("old has finality but new doesn't", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1421,9 +1419,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("new has finality and sync committee finality both but old doesn't have sync committee finality", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1482,9 +1480,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("new has finality but doesn't have sync committee finality and old has sync committee finality", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1543,9 +1541,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("new has more active participants than old", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1561,9 +1559,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("new has less active participants than old", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1579,9 +1577,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("new's attested header's slot is lesser than old's attested header's slot", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1640,9 +1638,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("new's attested header's slot is greater than old's attested header's slot", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1701,9 +1699,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("none of the above conditions are met and new signature's slot is less than old signature's slot", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{
@@ -1762,9 +1760,9 @@ func TestIsBetterUpdate(t *testing.T) {
})
t.Run("none of the above conditions are met and new signature's slot is greater than old signature's slot", func(t *testing.T) {
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
newUpdate, err := lightClient.CreateDefaultLightClientUpdate(blk)
require.NoError(t, err)
oldUpdate.SetSyncAggregate(&pb.SyncAggregate{


@@ -1,18 +1,127 @@
package light_client
import (
"context"
"sync"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/iface"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
)
var ErrLightClientBootstrapNotFound = errors.New("light client bootstrap not found")
type Store struct {
mu sync.RWMutex
beaconDB iface.HeadAccessDatabase
lastFinalityUpdate interfaces.LightClientFinalityUpdate
lastOptimisticUpdate interfaces.LightClientOptimisticUpdate
}
func NewLightClientStore(db iface.HeadAccessDatabase) *Store {
return &Store{
beaconDB: db,
}
}
func (s *Store) LightClientBootstrap(ctx context.Context, blockRoot [32]byte) (interfaces.LightClientBootstrap, error) {
s.mu.RLock()
defer s.mu.RUnlock()
// Fetch the light client bootstrap from the database
bootstrap, err := s.beaconDB.LightClientBootstrap(ctx, blockRoot[:])
if err != nil {
return nil, err
}
if bootstrap == nil { // not found
return nil, ErrLightClientBootstrapNotFound
}
return bootstrap, nil
}
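A usage sketch (hypothetical API handler): the exported sentinel lets callers distinguish "not found" from a storage failure.

bootstrap, err := store.LightClientBootstrap(ctx, blockRoot)
if errors.Is(err, ErrLightClientBootstrapNotFound) {
	// Respond with 404 rather than 500.
	return nil
}
if err != nil {
	return err
}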
func (s *Store) SaveLightClientBootstrap(ctx context.Context, blockRoot [32]byte, bootstrap interfaces.LightClientBootstrap) error {
s.mu.Lock()
defer s.mu.Unlock()
// Save the light client bootstrap to the database
if err := s.beaconDB.SaveLightClientBootstrap(ctx, blockRoot[:], bootstrap); err != nil {
return err
}
return nil
}
func (s *Store) LightClientUpdates(ctx context.Context, startPeriod, endPeriod uint64) ([]interfaces.LightClientUpdate, error) {
s.mu.RLock()
defer s.mu.RUnlock()
// Fetch the light client updates map from the database
updatesMap, err := s.beaconDB.LightClientUpdates(ctx, startPeriod, endPeriod)
if err != nil {
return nil, err
}
var updates []interfaces.LightClientUpdate
for i := startPeriod; i <= endPeriod; i++ {
update, ok := updatesMap[i]
if !ok {
// Only return the first contiguous range of updates
break
}
updates = append(updates, update)
}
return updates, nil
}
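To make the contiguous-range behavior concrete (hypothetical periods): if the database holds updates for periods 5, 6, and 8, requesting the range [5, 9] returns only the updates for 5 and 6, because the gap at 7 ends the run.

updates, err := store.LightClientUpdates(ctx, 5, 9)
if err != nil {
	return err
}
// len(updates) == 2 here: periods 5 and 6.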
func (s *Store) LightClientUpdate(ctx context.Context, period uint64) (interfaces.LightClientUpdate, error) {
s.mu.RLock()
defer s.mu.RUnlock()
// Fetch the light client update for the given period from the database
update, err := s.beaconDB.LightClientUpdate(ctx, period)
if err != nil {
return nil, err
}
return update, nil
}
func (s *Store) SaveLightClientUpdate(ctx context.Context, period uint64, update interfaces.LightClientUpdate) error {
s.mu.Lock()
defer s.mu.Unlock()
oldUpdate, err := s.beaconDB.LightClientUpdate(ctx, period)
if err != nil {
return errors.Wrapf(err, "could not get current light client update")
}
if oldUpdate == nil {
if err := s.beaconDB.SaveLightClientUpdate(ctx, period, update); err != nil {
return errors.Wrapf(err, "could not save light client update")
}
log.WithField("period", period).Debug("Saved new light client update")
return nil
}
isNewUpdateBetter, err := IsBetterUpdate(update, oldUpdate)
if err != nil {
return errors.Wrapf(err, "could not compare light client updates")
}
if isNewUpdateBetter {
if err := s.beaconDB.SaveLightClientUpdate(ctx, period, update); err != nil {
return errors.Wrapf(err, "could not save light client update")
}
log.WithField("period", period).Debug("Saved new light client update")
return nil
}
log.WithField("period", period).Debug("New light client update is not better than the current one, skipping save")
return nil
}
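A minimal wiring sketch (assuming a beaconDB satisfying iface.HeadAccessDatabase): because saves are compared against the stored update via IsBetterUpdate, callers can submit every candidate for a period unconditionally and the store keeps the best one.

store := NewLightClientStore(beaconDB)
if err := store.SaveLightClientUpdate(ctx, period, candidate); err != nil {
	log.WithError(err).Error("Could not save light client update")
}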
func (s *Store) SetLastFinalityUpdate(update interfaces.LightClientFinalityUpdate) {
s.mu.Lock()
defer s.mu.Unlock()


@@ -12,7 +12,7 @@ import (
"github.com/OffchainLabs/prysm/v6/runtime/logging"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
errors "github.com/pkg/errors"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
)
@@ -122,7 +122,7 @@ func (s *LazilyPersistentStoreBlob) IsDataAvailable(ctx context.Context, current
lf[fmt.Sprintf("fail_%d", i)] = fails[i].Error()
}
log.WithFields(lf).WithFields(logging.BlockFieldsFromBlob(sidecars[0])).
Debug("invalid BlobSidecars received")
Debug("Invalid BlobSidecars received")
}
return errors.Wrapf(err, "invalid BlobSidecars received for block %#x", root)
}


@@ -255,7 +255,7 @@ func pruneBefore(before primitives.Epoch, l fsLayout) (map[primitives.Epoch]*pru
}
continue
}
log.WithError(err).Error("encountered unhandled error during pruning")
log.WithError(err).Error("Encountered unhandled error during pruning")
return nil, errors.Wrap(errPruneFailed, err.Error())
}
if ident.epoch >= before {


@@ -153,6 +153,13 @@ func decodeLightClientBootstrap(enc []byte) (interfaces.LightClientBootstrap, []
}
m = bootstrap
syncCommitteeHash = enc[len(altairKey) : len(altairKey)+32]
case hasBellatrixKey(enc):
bootstrap := &ethpb.LightClientBootstrapAltair{}
if err := bootstrap.UnmarshalSSZ(enc[len(bellatrixKey)+32:]); err != nil {
return nil, nil, errors.Wrap(err, "could not unmarshal Bellatrix light client bootstrap")
}
m = bootstrap
syncCommitteeHash = enc[len(bellatrixKey) : len(bellatrixKey)+32]
case hasCapellaKey(enc):
bootstrap := &ethpb.LightClientBootstrapCapella{}
if err := bootstrap.UnmarshalSSZ(enc[len(capellaKey)+32:]); err != nil {
@@ -265,6 +272,12 @@ func decodeLightClientUpdate(enc []byte) (interfaces.LightClientUpdate, error) {
return nil, errors.Wrap(err, "could not unmarshal Altair light client update")
}
m = update
case hasBellatrixKey(enc):
update := &ethpb.LightClientUpdateAltair{}
if err := update.UnmarshalSSZ(enc[len(bellatrixKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal Bellatrix light client update")
}
m = update
case hasCapellaKey(enc):
update := &ethpb.LightClientUpdateCapella{}
if err := update.UnmarshalSSZ(enc[len(capellaKey):]); err != nil {
@@ -297,6 +310,8 @@ func keyForLightClientUpdate(v int) ([]byte, error) {
return denebKey, nil
case version.Capella:
return capellaKey, nil
case version.Bellatrix:
return bellatrixKey, nil
case version.Altair:
return altairKey, nil
default:


@@ -46,7 +46,21 @@ func createUpdate(t *testing.T, v int) (interfaces.LightClientUpdate, error) {
slot = primitives.Slot(config.AltairForkEpoch * primitives.Epoch(config.SlotsPerEpoch)).Add(1)
header, err = light_client.NewWrappedHeader(&pb.LightClientHeaderAltair{
Beacon: &pb.BeaconBlockHeader{
Slot: slot,
ProposerIndex: primitives.ValidatorIndex(rand.Int()),
ParentRoot: sampleRoot,
StateRoot: sampleRoot,
BodyRoot: sampleRoot,
},
})
require.NoError(t, err)
st, err = util.NewBeaconState()
require.NoError(t, err)
case version.Bellatrix:
slot = primitives.Slot(config.BellatrixForkEpoch * primitives.Epoch(config.SlotsPerEpoch)).Add(1)
header, err = light_client.NewWrappedHeader(&pb.LightClientHeaderAltair{
Beacon: &pb.BeaconBlockHeader{
Slot: slot,
ProposerIndex: primitives.ValidatorIndex(rand.Int()),
ParentRoot: sampleRoot,
StateRoot: sampleRoot,
@@ -60,7 +74,7 @@ func createUpdate(t *testing.T, v int) (interfaces.LightClientUpdate, error) {
slot = primitives.Slot(config.CapellaForkEpoch * primitives.Epoch(config.SlotsPerEpoch)).Add(1)
header, err = light_client.NewWrappedHeader(&pb.LightClientHeaderCapella{
Beacon: &pb.BeaconBlockHeader{
Slot: slot,
ProposerIndex: primitives.ValidatorIndex(rand.Int()),
ParentRoot: sampleRoot,
StateRoot: sampleRoot,
@@ -88,7 +102,7 @@ func createUpdate(t *testing.T, v int) (interfaces.LightClientUpdate, error) {
slot = primitives.Slot(config.DenebForkEpoch * primitives.Epoch(config.SlotsPerEpoch)).Add(1)
header, err = light_client.NewWrappedHeader(&pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{
Slot: slot,
ProposerIndex: primitives.ValidatorIndex(rand.Int()),
ParentRoot: sampleRoot,
StateRoot: sampleRoot,
@@ -116,7 +130,7 @@ func createUpdate(t *testing.T, v int) (interfaces.LightClientUpdate, error) {
slot = primitives.Slot(config.ElectraForkEpoch * primitives.Epoch(config.SlotsPerEpoch)).Add(1)
header, err = light_client.NewWrappedHeader(&pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{
Slot: slot,
ProposerIndex: primitives.ValidatorIndex(rand.Int()),
ParentRoot: sampleRoot,
StateRoot: sampleRoot,
@@ -144,7 +158,7 @@ func createUpdate(t *testing.T, v int) (interfaces.LightClientUpdate, error) {
slot = primitives.Slot(config.FuluForkEpoch * primitives.Epoch(config.SlotsPerEpoch)).Add(1)
header, err = light_client.NewWrappedHeader(&pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{
Slot: slot,
ProposerIndex: primitives.ValidatorIndex(rand.Int()),
ParentRoot: sampleRoot,
StateRoot: sampleRoot,
@@ -192,71 +206,30 @@ func TestStore_LightClientUpdate_CanSaveRetrieve(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.AltairForkEpoch = 0
cfg.BellatrixForkEpoch = 1
cfg.CapellaForkEpoch = 2
cfg.DenebForkEpoch = 3
cfg.ElectraForkEpoch = 4
cfg.FuluForkEpoch = 5
params.OverrideBeaconConfig(cfg)
db := setupDB(t)
ctx := t.Context()
t.Run("Altair", func(t *testing.T) {
update, err := createUpdate(t, version.Altair)
require.NoError(t, err)
period := uint64(1)
for testVersion := version.Altair; testVersion <= version.Electra; testVersion++ {
t.Run(version.String(testVersion), func(t *testing.T) {
update, err := createUpdate(t, testVersion)
require.NoError(t, err)
period := uint64(1)
err = db.SaveLightClientUpdate(ctx, period, update)
require.NoError(t, err)
err = db.SaveLightClientUpdate(ctx, period, update)
require.NoError(t, err)
retrievedUpdate, err := db.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.DeepEqual(t, update, retrievedUpdate, "retrieved update does not match saved update")
})
t.Run("Capella", func(t *testing.T) {
update, err := createUpdate(t, version.Capella)
require.NoError(t, err)
period := uint64(1)
err = db.SaveLightClientUpdate(ctx, period, update)
require.NoError(t, err)
retrievedUpdate, err := db.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.DeepEqual(t, update, retrievedUpdate, "retrieved update does not match saved update")
})
t.Run("Deneb", func(t *testing.T) {
update, err := createUpdate(t, version.Deneb)
require.NoError(t, err)
period := uint64(1)
err = db.SaveLightClientUpdate(ctx, period, update)
require.NoError(t, err)
retrievedUpdate, err := db.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.DeepEqual(t, update, retrievedUpdate, "retrieved update does not match saved update")
})
t.Run("Electra", func(t *testing.T) {
update, err := createUpdate(t, version.Electra)
require.NoError(t, err)
period := uint64(1)
err = db.SaveLightClientUpdate(ctx, period, update)
require.NoError(t, err)
retrievedUpdate, err := db.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.DeepEqual(t, update, retrievedUpdate, "retrieved update does not match saved update")
})
t.Run("Fulu", func(t *testing.T) {
update, err := createUpdate(t, version.Fulu)
require.NoError(t, err)
period := uint64(1)
err = db.SaveLightClientUpdate(ctx, period, update)
require.NoError(t, err)
retrievedUpdate, err := db.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.DeepEqual(t, update, retrievedUpdate, "retrieved update does not match saved update")
})
retrievedUpdate, err := db.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.DeepEqual(t, update, retrievedUpdate, "retrieved update does not match saved update")
})
}
}
func TestStore_LightClientUpdates_canRetrieveRange(t *testing.T) {
@@ -584,9 +557,10 @@ func TestStore_LightClientBootstrap_CanSaveRetrieve(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.AltairForkEpoch = 0
cfg.BellatrixForkEpoch = 1
cfg.CapellaForkEpoch = 2
cfg.DenebForkEpoch = 3
cfg.ElectraForkEpoch = 4
cfg.EpochsPerSyncCommitteePeriod = 1
params.OverrideBeaconConfig(cfg)
@@ -599,89 +573,38 @@ func TestStore_LightClientBootstrap_CanSaveRetrieve(t *testing.T) {
require.IsNil(t, retrievedBootstrap)
})
t.Run("Altair", func(t *testing.T) {
bootstrap, err := createDefaultLightClientBootstrap(primitives.Slot(uint64(params.BeaconConfig().AltairForkEpoch) * uint64(params.BeaconConfig().SlotsPerEpoch)))
require.NoError(t, err)
for testVersion := version.Altair; testVersion <= version.Electra; testVersion++ {
t.Run(version.String(testVersion), func(t *testing.T) {
bootstrap, err := createDefaultLightClientBootstrap(primitives.Slot(uint64(params.BeaconConfig().VersionToForkEpochMap()[testVersion]) * uint64(params.BeaconConfig().SlotsPerEpoch)))
require.NoError(t, err)
err = bootstrap.SetCurrentSyncCommittee(createRandomSyncCommittee())
require.NoError(t, err)
err = bootstrap.SetCurrentSyncCommittee(createRandomSyncCommittee())
require.NoError(t, err)
err = db.SaveLightClientBootstrap(ctx, []byte("blockRootAltair"), bootstrap)
require.NoError(t, err)
blockRoot := []byte("blockRootAltair" + version.String(testVersion))
retrievedBootstrap, err := db.LightClientBootstrap(ctx, []byte("blockRootAltair"))
require.NoError(t, err)
require.DeepEqual(t, bootstrap.Header(), retrievedBootstrap.Header(), "retrieved bootstrap header does not match saved bootstrap header")
require.DeepEqual(t, bootstrap.CurrentSyncCommittee(), retrievedBootstrap.CurrentSyncCommittee(), "retrieved bootstrap sync committee does not match saved bootstrap sync committee")
savedBranch, err := bootstrap.CurrentSyncCommitteeBranch()
require.NoError(t, err)
retrievedBranch, err := retrievedBootstrap.CurrentSyncCommitteeBranch()
require.NoError(t, err)
require.DeepEqual(t, savedBranch, retrievedBranch, "retrieved bootstrap sync committee branch does not match saved bootstrap sync committee branch")
})
err = db.SaveLightClientBootstrap(ctx, blockRoot, bootstrap)
require.NoError(t, err)
t.Run("Capella", func(t *testing.T) {
bootstrap, err := createDefaultLightClientBootstrap(primitives.Slot(uint64(params.BeaconConfig().CapellaForkEpoch) * uint64(params.BeaconConfig().SlotsPerEpoch)))
require.NoError(t, err)
err = bootstrap.SetCurrentSyncCommittee(createRandomSyncCommittee())
require.NoError(t, err)
err = db.SaveLightClientBootstrap(ctx, []byte("blockRootCapella"), bootstrap)
require.NoError(t, err)
retrievedBootstrap, err := db.LightClientBootstrap(ctx, []byte("blockRootCapella"))
require.NoError(t, err)
require.DeepEqual(t, bootstrap.Header(), retrievedBootstrap.Header(), "retrieved bootstrap header does not match saved bootstrap header")
require.DeepEqual(t, bootstrap.CurrentSyncCommittee(), retrievedBootstrap.CurrentSyncCommittee(), "retrieved bootstrap sync committee does not match saved bootstrap sync committee")
savedBranch, err := bootstrap.CurrentSyncCommitteeBranch()
require.NoError(t, err)
retrievedBranch, err := retrievedBootstrap.CurrentSyncCommitteeBranch()
require.NoError(t, err)
require.DeepEqual(t, savedBranch, retrievedBranch, "retrieved bootstrap sync committee branch does not match saved bootstrap sync committee branch")
})
t.Run("Deneb", func(t *testing.T) {
bootstrap, err := createDefaultLightClientBootstrap(primitives.Slot(uint64(params.BeaconConfig().DenebForkEpoch) * uint64(params.BeaconConfig().SlotsPerEpoch)))
require.NoError(t, err)
err = bootstrap.SetCurrentSyncCommittee(createRandomSyncCommittee())
require.NoError(t, err)
err = db.SaveLightClientBootstrap(ctx, []byte("blockRootDeneb"), bootstrap)
require.NoError(t, err)
retrievedBootstrap, err := db.LightClientBootstrap(ctx, []byte("blockRootDeneb"))
require.NoError(t, err)
require.DeepEqual(t, bootstrap.Header(), retrievedBootstrap.Header(), "retrieved bootstrap header does not match saved bootstrap header")
require.DeepEqual(t, bootstrap.CurrentSyncCommittee(), retrievedBootstrap.CurrentSyncCommittee(), "retrieved bootstrap sync committee does not match saved bootstrap sync committee")
savedBranch, err := bootstrap.CurrentSyncCommitteeBranch()
require.NoError(t, err)
retrievedBranch, err := retrievedBootstrap.CurrentSyncCommitteeBranch()
require.NoError(t, err)
require.DeepEqual(t, savedBranch, retrievedBranch, "retrieved bootstrap sync committee branch does not match saved bootstrap sync committee branch")
})
t.Run("Electra", func(t *testing.T) {
bootstrap, err := createDefaultLightClientBootstrap(primitives.Slot(uint64(params.BeaconConfig().ElectraForkEpoch) * uint64(params.BeaconConfig().SlotsPerEpoch)))
require.NoError(t, err)
err = bootstrap.SetCurrentSyncCommittee(createRandomSyncCommittee())
require.NoError(t, err)
err = db.SaveLightClientBootstrap(ctx, []byte("blockRootElectra"), bootstrap)
require.NoError(t, err)
retrievedBootstrap, err := db.LightClientBootstrap(ctx, []byte("blockRootElectra"))
require.NoError(t, err)
require.DeepEqual(t, bootstrap.Header(), retrievedBootstrap.Header(), "retrieved bootstrap header does not match saved bootstrap header")
require.DeepEqual(t, bootstrap.CurrentSyncCommittee(), retrievedBootstrap.CurrentSyncCommittee(), "retrieved bootstrap sync committee does not match saved bootstrap sync committee")
savedBranch, err := bootstrap.CurrentSyncCommitteeBranchElectra()
require.NoError(t, err)
retrievedBranch, err := retrievedBootstrap.CurrentSyncCommitteeBranchElectra()
require.NoError(t, err)
require.DeepEqual(t, savedBranch, retrievedBranch, "retrieved bootstrap sync committee branch does not match saved bootstrap sync committee branch")
})
retrievedBootstrap, err := db.LightClientBootstrap(ctx, blockRoot)
require.NoError(t, err)
require.DeepEqual(t, bootstrap.Header(), retrievedBootstrap.Header(), "retrieved bootstrap header does not match saved bootstrap header")
require.DeepEqual(t, bootstrap.CurrentSyncCommittee(), retrievedBootstrap.CurrentSyncCommittee(), "retrieved bootstrap sync committee does not match saved bootstrap sync committee")
if testVersion >= version.Electra {
savedBranch, err := bootstrap.CurrentSyncCommitteeBranchElectra()
require.NoError(t, err)
retrievedBranch, err := retrievedBootstrap.CurrentSyncCommitteeBranchElectra()
require.NoError(t, err)
require.DeepEqual(t, savedBranch, retrievedBranch, "retrieved bootstrap sync committee branch does not match saved bootstrap sync committee branch")
} else {
savedBranch, err := bootstrap.CurrentSyncCommitteeBranch()
require.NoError(t, err)
retrievedBranch, err := retrievedBootstrap.CurrentSyncCommitteeBranch()
require.NoError(t, err)
require.DeepEqual(t, savedBranch, retrievedBranch, "retrieved bootstrap sync committee branch does not match saved bootstrap sync committee branch")
}
})
}
}
func TestStore_LightClientBootstrap_MultipleBootstrapsWithSameSyncCommittee(t *testing.T) {
@@ -839,6 +762,7 @@ func createDefaultLightClientBootstrap(currentSlot primitives.Slot) (interfaces.
m = &pb.LightClientBootstrapAltair{
Header: &pb.LightClientHeaderAltair{
Beacon: &pb.BeaconBlockHeader{
Slot: currentSlot,
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
@@ -851,6 +775,7 @@ func createDefaultLightClientBootstrap(currentSlot primitives.Slot) (interfaces.
m = &pb.LightClientBootstrapCapella{
Header: &pb.LightClientHeaderCapella{
Beacon: &pb.BeaconBlockHeader{
Slot: currentSlot,
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
@@ -877,6 +802,7 @@ func createDefaultLightClientBootstrap(currentSlot primitives.Slot) (interfaces.
m = &pb.LightClientBootstrapDeneb{
Header: &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{
Slot: currentSlot,
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
@@ -905,6 +831,7 @@ func createDefaultLightClientBootstrap(currentSlot primitives.Slot) (interfaces.
m = &pb.LightClientBootstrapElectra{
Header: &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{
Slot: currentSlot,
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),


@@ -60,13 +60,13 @@ type Service struct {
initSyncWaiter func() error
}
func New(ctx context.Context, db iface.Database, genesisTime time.Time, initSyncWaiter, backfillWaiter func() error, opts ...ServiceOption) (*Service, error) {
p := &Service{
ctx: ctx,
db: db,
ps: pruneStartSlotFunc(helpers.MinEpochsForBlockRequests() + 1), // Default retention epochs is MIN_EPOCHS_FOR_BLOCK_REQUESTS + 1 from the current slot.
done: make(chan struct{}),
slotTicker: slots.NewSlotTicker(slots.UnsafeStartTime(genesisTime, 0), params.BeaconConfig().SecondsPerSlot),
initSyncWaiter: initSyncWaiter,
backfillWaiter: backfillWaiter,
}


@@ -62,7 +62,7 @@ func TestPruner_PruningConditions(t *testing.T) {
if !tt.backfillCompleted {
backfillWaiter = waiter
}
p, err := New(ctx, beaconDB, time.Now(), initSyncWaiter, backfillWaiter, WithSlotTicker(slotTicker))
require.NoError(t, err)
go p.Start()
@@ -100,7 +100,7 @@ func TestPruner_PruneSuccess(t *testing.T) {
p, err := New(
ctx,
beaconDB,
time.Now(),
nil,
nil,
WithSlotTicker(slotTicker),


@@ -31,6 +31,7 @@ go_library(
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/execution/types:go_default_library",
@@ -97,6 +98,7 @@ go_test(
embed = [":go_default_library"],
deps = [
"//async/event:go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/cache/depositsnapshot:go_default_library",
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",


@@ -7,6 +7,7 @@ import (
"strings"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/execution/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
@@ -44,11 +45,18 @@ var (
GetPayloadMethodV3,
GetPayloadBodiesByHashV1,
GetPayloadBodiesByRangeV1,
GetBlobsV1,
}
electraEngineEndpoints = []string{
NewPayloadMethodV4,
GetPayloadMethodV4,
}
fuluEngineEndpoints = []string{
GetPayloadMethodV5,
GetBlobsV2,
}
)
const (
@@ -73,6 +81,8 @@ const (
GetPayloadMethodV3 = "engine_getPayloadV3"
// GetPayloadMethodV4 is the get payload method added for electra
GetPayloadMethodV4 = "engine_getPayloadV4"
// GetPayloadMethodV5 is the get payload method added for fulu
GetPayloadMethodV5 = "engine_getPayloadV5"
// BlockByHashMethod request string for JSON-RPC.
BlockByHashMethod = "eth_getBlockByHash"
// BlockByNumberMethod request string for JSON-RPC.
@@ -85,11 +95,16 @@ const (
ExchangeCapabilities = "engine_exchangeCapabilities"
// GetBlobsV1 request string for JSON-RPC.
GetBlobsV1 = "engine_getBlobsV1"
// GetBlobsV2 request string for JSON-RPC.
GetBlobsV2 = "engine_getBlobsV2"
// Defines the seconds before timing out engine endpoints with non-block execution semantics.
defaultEngineTimeout = time.Second
)
var (
errInvalidPayloadBodyResponse = errors.New("engine api payload body response is invalid")
errMissingBlobsAndProofsFromEL = errors.New("engine api payload body response is missing blobs and proofs")
)
// ForkchoiceUpdatedResponse is the response kind received by the
// engine_forkchoiceUpdatedV1 endpoint.
@@ -107,7 +122,8 @@ type Reconstructor interface {
ReconstructFullBellatrixBlockBatch(
ctx context.Context, blindedBlocks []interfaces.ReadOnlySignedBeaconBlock,
) ([]interfaces.SignedBeaconBlock, error)
ReconstructBlobSidecars(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [fieldparams.RootLength]byte, hi func(uint64) bool) ([]blocks.VerifiedROBlob, error)
ReconstructDataColumnSidecars(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [fieldparams.RootLength]byte) ([]blocks.VerifiedRODataColumn, error)
}
// EngineCaller defines a client that can interact with an Ethereum
@@ -256,14 +272,17 @@ func (s *Service) ForkchoiceUpdated(
}
func getPayloadMethodAndMessage(slot primitives.Slot) (string, proto.Message) {
epoch := slots.ToEpoch(slot)
if epoch >= params.BeaconConfig().FuluForkEpoch {
return GetPayloadMethodV5, &pb.ExecutionBundleFulu{}
}
if epoch >= params.BeaconConfig().ElectraForkEpoch {
return GetPayloadMethodV4, &pb.ExecutionBundleElectra{}
}
if epoch >= params.BeaconConfig().DenebForkEpoch {
return GetPayloadMethodV3, &pb.ExecutionPayloadDenebWithValueAndBlobsBundle{}
}
if epoch >= params.BeaconConfig().CapellaForkEpoch {
return GetPayloadMethodV2, &pb.ExecutionPayloadCapellaWithValue{}
}
return GetPayloadMethod, &pb.ExecutionPayload{}
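Since the checks run newest-fork-first, a slot in a Fulu epoch resolves to engine_getPayloadV5 even though it also satisfies every earlier comparison. A short sketch (fuluSlot is a hypothetical slot at or after the Fulu fork epoch):

method, message := getPayloadMethodAndMessage(fuluSlot)
// method == GetPayloadMethodV5 and message is a *pb.ExecutionBundleFulu,
// ready to be filled from the JSON-RPC response.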
@@ -289,7 +308,7 @@ func (s *Service) GetPayload(ctx context.Context, payloadId [8]byte, slot primit
}
res, err := blocks.NewGetPayloadResponse(result)
if err != nil {
return nil, errors.Wrap(err, "new get payload response")
}
return res, nil
}
@@ -298,33 +317,36 @@ func (s *Service) ExchangeCapabilities(ctx context.Context) ([]string, error) {
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.ExchangeCapabilities")
defer span.End()
// Only advertise fork-specific engine methods once the corresponding fork has been activated.
if params.ElectraEnabled() {
supportedEngineEndpoints = append(supportedEngineEndpoints, electraEngineEndpoints...)
}
if params.FuluEnabled() {
supportedEngineEndpoints = append(supportedEngineEndpoints, fuluEngineEndpoints...)
}
elSupportedEndpointsSlice := make([]string, len(supportedEngineEndpoints))
if err := s.rpcClient.CallContext(ctx, &elSupportedEndpointsSlice, ExchangeCapabilities, supportedEngineEndpoints); err != nil {
return nil, handleRPCError(err)
}
elSupportedEndpoints := make(map[string]bool, len(elSupportedEndpointsSlice))
for _, method := range elSupportedEndpointsSlice {
elSupportedEndpoints[method] = true
}
unsupported := make([]string, 0)
for _, method := range supportedEngineEndpoints {
if !elSupportedEndpoints[method] {
unsupported = append(unsupported, method)
}
}
if len(unsupported) != 0 {
log.WithField("methods", unsupported).Warning("Connected execution client does not support some requested engine methods")
}
return elSupportedEndpointsSlice, nil
}
// GetTerminalBlockHash returns the valid terminal block hash based on total difficulty.
@@ -495,9 +517,10 @@ func (s *Service) HeaderByNumber(ctx context.Context, number *big.Int) (*types.H
func (s *Service) GetBlobs(ctx context.Context, versionedHashes []common.Hash) ([]*pb.BlobAndProof, error) {
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.GetBlobs")
defer span.End()
// If the execution engine does not support `GetBlobsV1`, fail fast instead of issuing a call that is known to fail.
if !s.capabilityCache.has(GetBlobsV1) {
return nil, errors.Errorf("%s is not supported", GetBlobsV1)
}
result := make([]*pb.BlobAndProof, len(versionedHashes))
@@ -505,6 +528,19 @@ func (s *Service) GetBlobs(ctx context.Context, versionedHashes []common.Hash) (
return result, handleRPCError(err)
}
func (s *Service) GetBlobsV2(ctx context.Context, versionedHashes []common.Hash) ([]*pb.BlobAndProofV2, error) {
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.GetBlobsV2")
defer span.End()
if !s.capabilityCache.has(GetBlobsV2) {
return nil, errors.Errorf("%s is not supported", GetBlobsV2)
}
result := make([]*pb.BlobAndProofV2, len(versionedHashes))
err := s.rpcClient.CallContext(ctx, &result, GetBlobsV2, versionedHashes)
return result, handleRPCError(err)
}
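A call-site sketch (kzgCommitments as retrieved from a block body, exactly as ReconstructDataColumnSidecars does below): versioned hashes are derived from the KZG commitments before querying the EL.

hashes := make([]common.Hash, 0, len(kzgCommitments))
for _, commitment := range kzgCommitments {
	hashes = append(hashes, primitives.ConvertKzgCommitmentToVersionedHash(commitment))
}
blobsAndProofs, err := s.GetBlobsV2(ctx, hashes)
if err != nil {
	return err
}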
// ReconstructFullBlock takes in a blinded beacon block and reconstructs
// a beacon block with a full execution payload via the engine API.
func (s *Service) ReconstructFullBlock(
@@ -584,7 +620,7 @@ func (s *Service) ReconstructBlobSidecars(ctx context.Context, block interfaces.
blobIndex := kzgIndexes[i]
proof, err := blocks.MerkleProofKZGCommitment(blockBody, blobIndex)
if err != nil {
log.WithError(err).WithField("index", blobIndex).Error("failed to get Merkle proof for KZG commitment")
log.WithError(err).WithField("index", blobIndex).Error("Failed to get Merkle proof for KZG commitment")
continue
}
sidecar := &ethpb.BlobSidecar{
@@ -598,14 +634,14 @@ func (s *Service) ReconstructBlobSidecars(ctx context.Context, block interfaces.
roBlob, err := blocks.NewROBlobWithRoot(sidecar, blockRoot)
if err != nil {
log.WithError(err).WithField("index", blobIndex).Error("failed to create RO blob with root")
log.WithError(err).WithField("index", blobIndex).Error("Failed to create RO blob with root")
continue
}
v := s.blobVerifier(roBlob, verification.ELMemPoolRequirements)
verifiedBlob, err := v.VerifiedROBlob()
if err != nil {
log.WithError(err).WithField("index", blobIndex).Error("failed to verify RO blob")
log.WithError(err).WithField("index", blobIndex).Error("Failed to verify RO blob")
continue
}
@@ -615,6 +651,75 @@ func (s *Service) ReconstructBlobSidecars(ctx context.Context, block interfaces.
return verifiedBlobs, nil
}
// ReconstructDataColumnSidecars reconstructs the verified data column sidecars for a given beacon block.
// It retrieves the KZG commitments from the block body, fetches the associated blobs and cell proofs from the EL,
// and constructs the corresponding verified read-only data column sidecars.
func (s *Service) ReconstructDataColumnSidecars(ctx context.Context, signedROBlock interfaces.ReadOnlySignedBeaconBlock, blockRoot [fieldparams.RootLength]byte) ([]blocks.VerifiedRODataColumn, error) {
block := signedROBlock.Block()
log := log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", blockRoot),
"slot": block.Slot(),
})
kzgCommitments, err := block.Body().BlobKzgCommitments()
if err != nil {
return nil, wrapWithBlockRoot(err, blockRoot, "blob KZG commitments")
}
// Collect KZG hashes for all blobs.
versionedHashes := make([]common.Hash, 0, len(kzgCommitments))
for _, commitment := range kzgCommitments {
versionedHash := primitives.ConvertKzgCommitmentToVersionedHash(commitment)
versionedHashes = append(versionedHashes, versionedHash)
}
// Fetch all blobs and cell proofs from the execution client.
blobAndProofV2s, err := s.GetBlobsV2(ctx, versionedHashes)
if err != nil {
return nil, wrapWithBlockRoot(err, blockRoot, "get blobs V2")
}
// Return early if nothing is returned from the EL.
if len(blobAndProofV2s) == 0 {
log.Debug("No blobs returned from execution client")
return nil, nil
}
// Extract the blobs and proofs from the blobAndProofV2s.
blobs, cellProofs := make([][]byte, 0, len(blobAndProofV2s)), make([][]byte, 0, len(blobAndProofV2s))
for _, blobsAndProofs := range blobAndProofV2s {
if blobsAndProofs == nil {
return nil, wrapWithBlockRoot(errMissingBlobsAndProofsFromEL, blockRoot, "")
}
blobs, cellProofs = append(blobs, blobsAndProofs.Blob), append(cellProofs, blobsAndProofs.KzgProofs...)
}
// Construct the data column sidecars from the blobs and cell proofs provided by the execution client.
dataColumnSidecars, err := peerdas.ConstructDataColumnSidecars(signedROBlock, blobs, cellProofs)
if err != nil {
return nil, wrapWithBlockRoot(err, blockRoot, "construct data column sidecars")
}
// Finally, construct verified RO data column sidecars.
// We trust the execution layer we are connected to, so we can upgrade the read-only data column sidecars into verified ones.
verifiedRODataColumns := make([]blocks.VerifiedRODataColumn, 0, len(dataColumnSidecars))
for _, dataColumnSidecar := range dataColumnSidecars {
roDataColumn, err := blocks.NewRODataColumnWithRoot(dataColumnSidecar, blockRoot)
if err != nil {
return nil, wrapWithBlockRoot(err, blockRoot, "new read-only data column with root")
}
verifiedRODataColumn := blocks.NewVerifiedRODataColumn(roDataColumn)
verifiedRODataColumns = append(verifiedRODataColumns, verifiedRODataColumn)
}
log.Debug("Data columns successfully reconstructed from the execution client")
return verifiedRODataColumns, nil
}
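ConvertKzgCommitmentToVersionedHash is defined elsewhere; per EIP-4844, a versioned hash is the SHA-256 of the commitment with its first byte replaced by the version 0x01. A stand-alone sketch (imports assumed: "crypto/sha256" and go-ethereum's common package):

// Sketch of EIP-4844 versioned-hash derivation.
func kzgToVersionedHash(commitment []byte) common.Hash {
	h := sha256.Sum256(commitment)
	h[0] = 0x01 // VERSIONED_HASH_VERSION_KZG
	return common.BytesToHash(h[:])
}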
func fullPayloadFromPayloadBody(
header interfaces.ExecutionData, body *pb.ExecutionPayloadBody, bVersion int,
) (interfaces.ExecutionData, error) {
@@ -902,3 +1007,8 @@ func toBlockNumArg(number *big.Int) string {
}
return hexutil.EncodeBig(number)
}
// wrapWithBlockRoot wraps the given error in a message that includes the block root.
func wrapWithBlockRoot(err error, blockRoot [fieldparams.RootLength]byte, message string) error {
return errors.Wrapf(err, "%s for block %#x", message, blockRoot)
}
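For illustration, the wrapper above produces error text of the form shown in this hypothetical call:

// err := wrapWithBlockRoot(errors.New("timeout"), blockRoot, "get blobs V2")
// err.Error() => "get blobs V2 for block 0x1234...: timeout"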

View File

@@ -13,6 +13,7 @@ import (
"strings"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
mocks "github.com/OffchainLabs/prysm/v6/beacon-chain/execution/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
@@ -167,6 +168,7 @@ func TestClient_HTTP(t *testing.T) {
cfg.CapellaForkEpoch = 1
cfg.DenebForkEpoch = 2
cfg.ElectraForkEpoch = 3
cfg.FuluForkEpoch = 4
params.OverrideBeaconConfig(cfg)
t.Run(GetPayloadMethod, func(t *testing.T) {
@@ -317,11 +319,11 @@ func TestClient_HTTP(t *testing.T) {
require.DeepEqual(t, uint64(2), g)
commitments := [][]byte{bytesutil.PadTo([]byte("commitment1"), fieldparams.BLSPubkeyLength), bytesutil.PadTo([]byte("commitment2"), fieldparams.BLSPubkeyLength)}
require.DeepEqual(t, commitments, resp.BlobsBundle.KzgCommitments)
require.DeepEqual(t, commitments, resp.BlobsBundler.GetKzgCommitments())
proofs := [][]byte{bytesutil.PadTo([]byte("proof1"), fieldparams.BLSPubkeyLength), bytesutil.PadTo([]byte("proof2"), fieldparams.BLSPubkeyLength)}
require.DeepEqual(t, proofs, resp.BlobsBundle.Proofs)
require.DeepEqual(t, proofs, resp.BlobsBundler.GetProofs())
blobs := [][]byte{bytesutil.PadTo([]byte("a"), fieldparams.BlobLength), bytesutil.PadTo([]byte("b"), fieldparams.BlobLength)}
require.DeepEqual(t, blobs, resp.BlobsBundle.Blobs)
require.DeepEqual(t, blobs, resp.BlobsBundler.GetBlobs())
})
t.Run(GetPayloadMethodV4, func(t *testing.T) {
payloadId := [8]byte{1}
@@ -372,11 +374,11 @@ func TestClient_HTTP(t *testing.T) {
require.DeepEqual(t, uint64(2), g)
commitments := [][]byte{bytesutil.PadTo([]byte("commitment1"), fieldparams.BLSPubkeyLength), bytesutil.PadTo([]byte("commitment2"), fieldparams.BLSPubkeyLength)}
require.DeepEqual(t, commitments, resp.BlobsBundle.KzgCommitments)
require.DeepEqual(t, commitments, resp.BlobsBundler.GetKzgCommitments())
proofs := [][]byte{bytesutil.PadTo([]byte("proof1"), fieldparams.BLSPubkeyLength), bytesutil.PadTo([]byte("proof2"), fieldparams.BLSPubkeyLength)}
require.DeepEqual(t, proofs, resp.BlobsBundle.Proofs)
require.DeepEqual(t, proofs, resp.BlobsBundler.GetProofs())
blobs := [][]byte{bytesutil.PadTo([]byte("a"), fieldparams.BlobLength), bytesutil.PadTo([]byte("b"), fieldparams.BlobLength)}
require.DeepEqual(t, blobs, resp.BlobsBundle.Blobs)
require.DeepEqual(t, blobs, resp.BlobsBundler.GetBlobs())
requests := &pb.ExecutionRequests{
Deposits: []*pb.DepositRequest{
{
@@ -405,7 +407,52 @@ func TestClient_HTTP(t *testing.T) {
require.DeepEqual(t, requests, resp.ExecutionRequests)
})
t.Run(GetPayloadMethodV5, func(t *testing.T) {
payloadId := [8]byte{1}
want, ok := fix["ExecutionBundleFulu"].(*pb.GetPayloadV5ResponseJson)
require.Equal(t, true, ok)
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
enc, err := io.ReadAll(r.Body)
require.NoError(t, err)
jsonRequestString := string(enc)
reqArg, err := json.Marshal(pb.PayloadIDBytes(payloadId))
require.NoError(t, err)
// We expect the JSON-RPC request string to contain the right arguments.
require.Equal(t, true, strings.Contains(
jsonRequestString, string(reqArg),
))
resp := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": want,
}
err = json.NewEncoder(w).Encode(resp)
require.NoError(t, err)
}))
defer srv.Close()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
defer rpcClient.Close()
client := &Service{}
client.rpcClient = rpcClient
// We call the RPC method via HTTP and expect a proper result.
resp, err := client.GetPayload(ctx, payloadId, 4*params.BeaconConfig().SlotsPerEpoch)
require.NoError(t, err)
_, ok = resp.BlobsBundler.(*pb.BlobsBundleV2)
if !ok {
t.Logf("resp.BlobsBundler has unexpected type: %T", resp.BlobsBundler)
}
require.Equal(t, true, ok)
})
t.Run(ForkchoiceUpdatedMethod+" VALID status", func(t *testing.T) {
forkChoiceState := &pb.ForkchoiceState{
HeadBlockHash: []byte("head"),
@@ -1539,6 +1586,7 @@ func fixtures() map[string]interface{} {
"ExecutionPayloadCapellaWithValue": s.ExecutionPayloadWithValueCapella,
"ExecutionPayloadDenebWithValue": s.ExecutionPayloadWithValueDeneb,
"ExecutionBundleElectra": s.ExecutionBundleElectra,
"ExecutionBundleFulu": s.ExecutionBundleFulu,
"ValidPayloadStatus": s.ValidPayloadStatus,
"InvalidBlockHashStatus": s.InvalidBlockHashStatus,
"AcceptedStatus": s.AcceptedStatus,
@@ -1774,6 +1822,36 @@ func fixturesStruct() *payloadFixtures {
append([]byte{pb.WithdrawalRequestType}, withdrawalRequestBytes...),
append([]byte{pb.ConsolidationRequestType}, consolidationRequestBytes...)},
}
executionBundleFixtureFulu := &pb.GetPayloadV5ResponseJson{
ShouldOverrideBuilder: true,
ExecutionPayload: &pb.ExecutionPayloadDenebJSON{
ParentHash: &common.Hash{'a'},
FeeRecipient: &common.Address{'b'},
StateRoot: &common.Hash{'c'},
ReceiptsRoot: &common.Hash{'d'},
LogsBloom: &hexutil.Bytes{'e'},
PrevRandao: &common.Hash{'f'},
BaseFeePerGas: "0x123",
BlockHash: &common.Hash{'g'},
Transactions: []hexutil.Bytes{{'h'}},
Withdrawals: []*pb.Withdrawal{},
BlockNumber: &hexUint,
GasLimit: &hexUint,
GasUsed: &hexUint,
Timestamp: &hexUint,
BlobGasUsed: &bgu,
ExcessBlobGas: &ebg,
},
BlockValue: "0x11fffffffff",
BlobsBundle: &pb.BlobBundleV2JSON{
Commitments: []hexutil.Bytes{[]byte("commitment1"), []byte("commitment2")},
Proofs: []hexutil.Bytes{[]byte("proof1"), []byte("proof2")},
Blobs: []hexutil.Bytes{{'a'}, {'b'}},
},
ExecutionRequests: []hexutil.Bytes{append([]byte{pb.DepositRequestType}, depositRequestBytes...),
append([]byte{pb.WithdrawalRequestType}, withdrawalRequestBytes...),
append([]byte{pb.ConsolidationRequestType}, consolidationRequestBytes...)},
}
parent := bytesutil.PadTo([]byte("parentHash"), fieldparams.RootLength)
sha3Uncles := bytesutil.PadTo([]byte("sha3Uncles"), fieldparams.RootLength)
miner := bytesutil.PadTo([]byte("miner"), fieldparams.FeeRecipientLength)
@@ -1868,6 +1946,7 @@ func fixturesStruct() *payloadFixtures {
ExecutionPayloadWithValueCapella: executionPayloadWithValueFixtureCapella,
ExecutionPayloadWithValueDeneb: executionPayloadWithValueFixtureDeneb,
ExecutionBundleElectra: executionBundleFixtureElectra,
ExecutionBundleFulu: executionBundleFixtureFulu,
ValidPayloadStatus: validStatus,
InvalidBlockHashStatus: inValidBlockHashStatus,
AcceptedStatus: acceptedStatus,
@@ -1892,6 +1971,7 @@ type payloadFixtures struct {
ExecutionPayloadWithValueCapella *pb.GetPayloadV2ResponseJson
ExecutionPayloadWithValueDeneb *pb.GetPayloadV3ResponseJson
ExecutionBundleElectra *pb.GetPayloadV4ResponseJson
ExecutionBundleFulu *pb.GetPayloadV5ResponseJson
ValidPayloadStatus *pb.PayloadStatus
InvalidBlockHashStatus *pb.PayloadStatus
AcceptedStatus *pb.PayloadStatus
@@ -2361,7 +2441,7 @@ func Test_ExchangeCapabilities(t *testing.T) {
for _, item := range results {
require.NotNil(t, item)
}
assert.LogsContain(t, logHook, "Please update client, detected the following unsupported engine methods:")
assert.LogsContain(t, logHook, "Connected execution client does not support some requested engine methods")
})
t.Run("list of items", func(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
@@ -2424,7 +2504,7 @@ func TestReconstructBlobSidecars(t *testing.T) {
t.Run("get-blobs end point is not supported", func(t *testing.T) {
hi := mockSummary(t, []bool{true, true, true, true, true, false})
verifiedBlobs, err := client.ReconstructBlobSidecars(ctx, sb, r, hi)
require.NoError(t, err)
require.ErrorContains(t, "engine_getBlobsV1 is not supported", err)
require.Equal(t, 0, len(verifiedBlobs))
})
@@ -2476,6 +2556,76 @@ func TestReconstructBlobSidecars(t *testing.T) {
})
}
func TestReconstructDataColumnSidecars(t *testing.T) {
// Start the trusted setup.
err := kzg.Start()
require.NoError(t, err)
// Set up the correct fork epochs.
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.CapellaForkEpoch = 1
cfg.DenebForkEpoch = 2
cfg.ElectraForkEpoch = 3
cfg.FuluForkEpoch = 4
params.OverrideBeaconConfig(cfg)
client := &Service{capabilityCache: &capabilityCache{}}
b := util.NewBeaconBlockFulu()
b.Block.Slot = 4 * params.BeaconConfig().SlotsPerEpoch
kzgCommitments := createRandomKzgCommitments(t, 6)
b.Block.Body.BlobKzgCommitments = kzgCommitments
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
sb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
ctx := context.Background()
t.Run("GetBlobsV2 is not supported", func(t *testing.T) {
_, err := client.ReconstructDataColumnSidecars(ctx, sb, r)
require.ErrorContains(t, "get blobs V2 for block", err)
})
t.Run("nothing received", func(t *testing.T) {
srv := createBlobServerV2(t, 0, []bool{})
defer srv.Close()
rpcClient, client := setupRpcClientV2(t, srv.URL, client)
defer rpcClient.Close()
dataColumns, err := client.ReconstructDataColumnSidecars(ctx, sb, r)
require.NoError(t, err)
require.Equal(t, 0, len(dataColumns))
})
t.Run("receiving all blobs", func(t *testing.T) {
blobMasks := []bool{true, true, true, true, true, true}
srv := createBlobServerV2(t, 6, blobMasks)
defer srv.Close()
rpcClient, client := setupRpcClientV2(t, srv.URL, client)
defer rpcClient.Close()
dataColumns, err := client.ReconstructDataColumnSidecars(ctx, sb, r)
require.NoError(t, err)
require.Equal(t, 128, len(dataColumns))
})
t.Run("missing some blobs", func(t *testing.T) {
blobMasks := []bool{false, true, true, true, true, true}
srv := createBlobServerV2(t, 6, blobMasks)
defer srv.Close()
rpcClient, client := setupRpcClientV2(t, srv.URL, client)
defer rpcClient.Close()
dataColumns, err := client.ReconstructDataColumnSidecars(ctx, sb, r)
require.ErrorContains(t, errMissingBlobsAndProofsFromEL.Error(), err)
require.Equal(t, 0, len(dataColumns))
})
}
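The happy-path expectation of 128 sidecars follows from the PeerDAS layout rather than from the blob count: reconstruction emits one sidecar per column, and the config used here sets NumberOfColumns to 128. A hypothetical restatement of that assertion:

// Hypothetical restatement: one sidecar per column, independent of the
// number of blobs in the block.
expected := int(params.BeaconConfig().NumberOfColumns) // 128 with this config
require.Equal(t, expected, len(dataColumns))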
func createRandomKzgCommitments(t *testing.T, num int) [][]byte {
kzgCommitments := make([][]byte, num)
for i := range kzgCommitments {
@@ -2511,6 +2661,42 @@ func createBlobServer(t *testing.T, numBlobs int, callbackFuncs ...func()) *http
}))
}
func createBlobServerV2(t *testing.T, numBlobs int, blobMasks []bool) *httptest.Server {
return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
require.Equal(t, len(blobMasks), numBlobs)
blobAndCellProofs := make([]*pb.BlobAndProofV2Json, numBlobs)
for i := range blobAndCellProofs {
if !blobMasks[i] {
continue
}
blobAndCellProofs[i] = &pb.BlobAndProofV2Json{
Blob: []byte("0xblob"),
KzgProofs: []hexutil.Bytes{},
}
for j := 0; j < int(params.BeaconConfig().NumberOfColumns); j++ {
cellProof := make([]byte, 48)
blobAndCellProofs[i].KzgProofs = append(blobAndCellProofs[i].KzgProofs, cellProof)
}
}
respJSON := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": blobAndCellProofs,
}
err := json.NewEncoder(w).Encode(respJSON)
require.NoError(t, err)
}))
}
func setupRpcClient(t *testing.T, url string, client *Service) (*rpc.Client, *Service) {
rpcClient, err := rpc.DialHTTP(url)
require.NoError(t, err)
@@ -2522,6 +2708,12 @@ func setupRpcClient(t *testing.T, url string, client *Service) (*rpc.Client, *Se
return rpcClient, client
}
func setupRpcClientV2(t *testing.T, url string, client *Service) (*rpc.Client, *Service) {
rpcClient, client := setupRpcClient(t, url, client)
client.capabilityCache = &capabilityCache{capabilities: map[string]interface{}{GetBlobsV2: nil}}
return rpcClient, client
}
func testNewBlobVerifier() verification.NewBlobVerifier {
return func(b blocks.ROBlob, reqs []verification.Requirement) verification.BlobVerifier {
return &verification.MockBlobVerifier{

View File

@@ -248,14 +248,14 @@ func (s *Service) ProcessChainStart(genesisTime uint64, eth1BlockHash [32]byte,
for i := range s.chainStartData.ChainstartDeposits {
proof, err := s.depositTrie.MerkleProof(i)
if err != nil {
log.WithError(err).Error("unable to generate deposit proof")
log.WithError(err).Error("Unable to generate deposit proof")
}
s.chainStartData.ChainstartDeposits[i].Proof = proof
}
root, err := s.depositTrie.HashTreeRoot()
if err != nil { // This should never happen.
log.WithError(err).Error("unable to determine root of deposit trie, aborting chain start")
log.WithError(err).Error("Unable to determine root of deposit trie, aborting chain start")
return
}
s.chainStartData.Eth1Data = &ethpb.Eth1Data{

View File

@@ -557,8 +557,8 @@ func (s *Service) initPOWService() {
}
}
// Handle edge case with embedded genesis state by fetching genesis header to determine
// its height.
if s.chainStartData.Chainstarted && s.chainStartData.GenesisBlock == 0 {
// its height only if the deposit requests have not started yet (pre-Pectra EIP-6110 behavior).
if s.chainStartData.Chainstarted && s.chainStartData.GenesisBlock == 0 && !s.depositRequestsStarted {
genHash := common.BytesToHash(s.chainStartData.Eth1Data.BlockHash)
genBlock := s.chainStartData.GenesisBlock
// In the event our provided chainstart data references a non-existent block hash,
@@ -711,7 +711,7 @@ func (s *Service) cacheBlockHeaders(start, end uint64) error {
// Determines the earliest voting block from which to start caching all our previous headers from.
func (s *Service) determineEarliestVotingBlock(ctx context.Context, followBlock uint64) (uint64, error) {
genesisTime := s.chainStartData.GenesisTime
currSlot := slots.CurrentSlot(genesisTime)
currSlot := slots.CurrentSlot(time.Unix(int64(genesisTime), 0)) // lint:ignore uintcast -- Genesis time will never exceed int64 in seconds.
// In the event genesis has not occurred yet, we just request to go back follow_distance blocks.
if genesisTime == 0 || currSlot == 0 {
@@ -837,7 +837,7 @@ func (s *Service) validPowchainData(ctx context.Context) (*ethpb.ETH1ChainData,
}
s.chainStartData = &ethpb.ChainStartData{
Chainstarted: true,
GenesisTime: genState.GenesisTime(),
GenesisTime: uint64(genState.GenesisTime().Unix()),
GenesisBlock: 0,
Eth1Data: genState.Eth1Data(),
ChainstartDeposits: make([]*ethpb.Deposit, 0),
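This hunk converts between uint64 seconds and time.Time in both directions. A minimal sketch of a guarded version of the cast that the lint exception above waives, using a helper name of our own choosing (imports assumed: "math", "time"):

// Guarded uint64-seconds -> time.Time conversion; the production code skips
// the check because a genesis timestamp in seconds cannot realistically
// exceed math.MaxInt64.
func secondsToTime(secs uint64) (time.Time, error) {
	if secs > math.MaxInt64 {
		return time.Time{}, errors.New("timestamp overflows int64")
	}
	return time.Unix(int64(secs), 0), nil
}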

View File

@@ -17,6 +17,7 @@ go_library(
"//beacon-chain/execution/types:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",

View File

@@ -4,6 +4,7 @@ import (
"context"
"math/big"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
@@ -38,6 +39,8 @@ type EngineClient struct {
ErrGetPayload error
BlobSidecars []blocks.VerifiedROBlob
ErrorBlobSidecars error
DataColumnSidecars []blocks.VerifiedRODataColumn
ErrorDataColumnSidecars error
}
// NewPayload --
@@ -109,10 +112,14 @@ func (e *EngineClient) ReconstructFullBellatrixBlockBatch(
}
// ReconstructBlobSidecars is a mock implementation of the ReconstructBlobSidecars method.
func (e *EngineClient) ReconstructBlobSidecars(context.Context, interfaces.ReadOnlySignedBeaconBlock, [32]byte, func(uint64) bool) ([]blocks.VerifiedROBlob, error) {
func (e *EngineClient) ReconstructBlobSidecars(context.Context, interfaces.ReadOnlySignedBeaconBlock, [fieldparams.RootLength]byte, func(uint64) bool) ([]blocks.VerifiedROBlob, error) {
return e.BlobSidecars, e.ErrorBlobSidecars
}
func (e *EngineClient) ReconstructDataColumnSidecars(context.Context, interfaces.ReadOnlySignedBeaconBlock, [fieldparams.RootLength]byte) ([]blocks.VerifiedRODataColumn, error) {
return e.DataColumnSidecars, e.ErrorDataColumnSidecars
}
// GetTerminalBlockHash --
func (e *EngineClient) GetTerminalBlockHash(ctx context.Context, transitionTime uint64) ([]byte, bool, error) {
ttd := new(big.Int)

View File

@@ -70,7 +70,7 @@ func (f *ForkChoice) Head(
jc := f.JustifiedCheckpoint()
fc := f.FinalizedCheckpoint()
currentEpoch := slots.EpochsSinceGenesis(time.Unix(int64(f.store.genesisTime), 0))
currentEpoch := slots.EpochsSinceGenesis(f.store.genesisTime)
if err := f.store.treeRootNode.updateBestDescendant(ctx, jc.Epoch, fc.Epoch, currentEpoch); err != nil {
return [32]byte{}, errors.Wrap(err, "could not update best descendant")
}
@@ -127,7 +127,7 @@ func (f *ForkChoice) InsertNode(ctx context.Context, state state.BeaconState, ro
if err := f.updateCheckpoints(ctx, jc, fc); err != nil {
_, remErr := f.store.removeNode(ctx, node)
if remErr != nil {
log.WithError(remErr).Error("could not remove node")
log.WithError(remErr).Error("Could not remove node")
}
return errors.Wrap(err, "could not update checkpoints")
}
@@ -486,8 +486,8 @@ func (f *ForkChoice) InsertChain(ctx context.Context, chain []*forkchoicetypes.B
}
// SetGenesisTime sets the genesisTime tracked by forkchoice
func (f *ForkChoice) SetGenesisTime(genesisTime uint64) {
f.store.genesisTime = genesisTime
func (f *ForkChoice) SetGenesisTime(genesis time.Time) {
f.store.genesisTime = genesis.Truncate(time.Second) // Genesis time has a precision of 1 second.
}
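A small stand-alone check of the truncation above; consensus genesis time carries 1-second precision, so any sub-second component would skew every slot-time computation (the timestamp is mainnet genesis and is illustrative only):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Mainnet genesis with half a second of sub-second noise added.
	g := time.Unix(1606824023, 500_000_000)
	fmt.Println(g.Truncate(time.Second).Equal(time.Unix(1606824023, 0))) // true
}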
// SetOriginRoot sets the genesis block root

View File

@@ -5,7 +5,6 @@ import (
"encoding/binary"
"errors"
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice"
forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
@@ -691,7 +690,7 @@ func TestForkChoice_UpdateCheckpoints(t *testing.T) {
fcs := setup(tt.justified.Epoch, tt.finalized.Epoch)
fcs.store.justifiedCheckpoint = tt.justified
fcs.store.finalizedCheckpoint = tt.finalized
fcs.store.genesisTime = uint64(time.Now().Unix()) - uint64(tt.currentSlot)*params.BeaconConfig().SecondsPerSlot
driftGenesisTime(fcs, tt.currentSlot, 0)
st, roblock, err := prepareForkchoiceState(ctx, 32, [32]byte{'f'},
[32]byte{}, [32]byte{}, tt.finalized.Epoch, tt.finalized.Epoch)

View File

@@ -3,6 +3,7 @@ package doublylinkedtree
import (
"bytes"
"context"
"time"
"github.com/OffchainLabs/prysm/v6/config/params"
forkchoice2 "github.com/OffchainLabs/prysm/v6/consensus-types/forkchoice"
@@ -11,9 +12,9 @@ import (
"github.com/pkg/errors"
)
// ProcessAttestationsThreshold is the number of seconds after which we
// ProcessAttestationsThreshold is the amount of time after which we
// process attestations for the current slot
const ProcessAttestationsThreshold = 10
const ProcessAttestationsThreshold = 10 * time.Second
// applyWeightChanges recomputes the weight of the node passed as an argument and all of its descendants,
// using the current balance stored in each node.
@@ -131,10 +132,10 @@ func (n *Node) setNodeAndParentValidated(ctx context.Context) error {
// Note that genesisTime has seconds granularity, therefore we use a strict
// inequality < here. For example a block that arrives 3.9999 seconds into the
// slot will have secs = 3 below.
func (n *Node) arrivedEarly(genesisTime uint64) (bool, error) {
secs, err := slots.SecondsSinceSlotStart(n.slot, genesisTime, n.timestamp)
votingWindow := params.BeaconConfig().SecondsPerSlot / params.BeaconConfig().IntervalsPerSlot
return secs < votingWindow, err
func (n *Node) arrivedEarly(genesis time.Time) (bool, error) {
sss, err := slots.SinceSlotStart(n.slot, genesis, n.timestamp.Truncate(time.Second)) // Truncate such that 3.9999 seconds will have a value of 3.
votingWindow := time.Duration(params.BeaconConfig().SecondsPerSlot/params.BeaconConfig().IntervalsPerSlot) * time.Second
return sss < votingWindow, err
}
// arrivedAfterOrphanCheck returns whether this block was inserted after the
@@ -142,8 +143,8 @@ func (n *Node) arrivedEarly(genesisTime uint64) (bool, error) {
// Note that genesisTime has seconds granularity, therefore we use an
// inequality >= here. For example a block that arrives 10.00001 seconds into the
// slot will have secs = 10 below.
func (n *Node) arrivedAfterOrphanCheck(genesisTime uint64) (bool, error) {
secs, err := slots.SecondsSinceSlotStart(n.slot, genesisTime, n.timestamp)
func (n *Node) arrivedAfterOrphanCheck(genesis time.Time) (bool, error) {
secs, err := slots.SinceSlotStart(n.slot, genesis, n.timestamp.Truncate(time.Second)) // Truncate such that 10.00001 seconds will have a value of 10.
return secs >= ProcessAttestationsThreshold, err
}
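slots.SinceSlotStart replaces slots.SecondsSinceSlotStart throughout this diff, but its body is not shown. A plausible sketch of its contract, under the assumption that slot N starts N*SecondsPerSlot seconds after genesis:

// Plausible sketch: the duration between a slot's start and t.
func sinceSlotStart(slot primitives.Slot, genesis, t time.Time) (time.Duration, error) {
	start := genesis.Add(time.Duration(uint64(slot)*params.BeaconConfig().SecondsPerSlot) * time.Second)
	if t.Before(start) {
		return 0, errors.New("timestamp is before the slot start")
	}
	return t.Sub(start), nil
}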

View File

@@ -2,6 +2,7 @@ package doublylinkedtree
import (
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/forkchoice"
@@ -262,7 +263,7 @@ func TestNode_TimeStampsChecks(t *testing.T) {
ctx := t.Context()
// early block
driftGenesisTime(f, 1, 1)
driftGenesisTime(f, 1, time.Second)
root := [32]byte{'a'}
f.justifiedBalances = []uint64{10}
state, blk, err := prepareForkchoiceState(ctx, 1, root, params.BeaconConfig().ZeroHash, [32]byte{'A'}, 0, 0)
@@ -278,9 +279,9 @@ func TestNode_TimeStampsChecks(t *testing.T) {
require.NoError(t, err)
require.Equal(t, false, late)
orphanLateBlockFirstThreshold := params.BeaconConfig().SecondsPerSlot / params.BeaconConfig().IntervalsPerSlot
orphanLateBlockFirstThreshold := time.Duration(params.BeaconConfig().SecondsPerSlot/params.BeaconConfig().IntervalsPerSlot) * time.Second
// late block
driftGenesisTime(f, 2, orphanLateBlockFirstThreshold+1)
driftGenesisTime(f, 2, orphanLateBlockFirstThreshold+time.Second)
root = [32]byte{'b'}
state, blk, err = prepareForkchoiceState(ctx, 2, root, [32]byte{'a'}, [32]byte{'B'}, 0, 0)
require.NoError(t, err)
@@ -296,7 +297,7 @@ func TestNode_TimeStampsChecks(t *testing.T) {
require.Equal(t, false, late)
// very late block
driftGenesisTime(f, 3, ProcessAttestationsThreshold+1)
driftGenesisTime(f, 3, ProcessAttestationsThreshold+time.Second)
root = [32]byte{'c'}
state, blk, err = prepareForkchoiceState(ctx, 3, root, [32]byte{'b'}, [32]byte{'C'}, 0, 0)
require.NoError(t, err)

View File

@@ -12,8 +12,12 @@ import (
// Helper function to simulate the block being on time or delayed for proposer
// boost. It alters the genesisTime tracked by the store.
func driftGenesisTime(f *ForkChoice, slot primitives.Slot, delay uint64) {
f.SetGenesisTime(uint64(time.Now().Unix()) - uint64(slot)*params.BeaconConfig().SecondsPerSlot - delay)
func driftGenesisTime(f *ForkChoice, slot primitives.Slot, delay time.Duration) {
genesis := time.Now()
s := time.Duration(slot*primitives.Slot(params.BeaconConfig().SecondsPerSlot)) * time.Second
genesis = genesis.Add(-1 * s)
genesis = genesis.Add(-1 * delay.Abs())
f.SetGenesisTime(genesis)
}
// Simple, ex-ante attack mitigation using proposer boost.
@@ -445,7 +449,7 @@ func TestForkChoice_BoostProposerRoot(t *testing.T) {
f := setup(0, 0)
slot := primitives.Slot(1)
currentSlot := primitives.Slot(1)
driftGenesisTime(f, currentSlot, params.BeaconConfig().SecondsPerSlot-1)
driftGenesisTime(f, currentSlot, time.Duration(params.BeaconConfig().SecondsPerSlot-1)*time.Second)
state, blkRoot, err := prepareForkchoiceState(ctx, slot, root, zeroHash, zeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
@@ -465,7 +469,7 @@ func TestForkChoice_BoostProposerRoot(t *testing.T) {
f := setup(0, 0)
slot := primitives.Slot(1)
currentSlot := primitives.Slot(1)
driftGenesisTime(f, currentSlot, 1)
driftGenesisTime(f, currentSlot, time.Second)
state, blkRoot, err := prepareForkchoiceState(ctx, slot, root, zeroHash, zeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))

View File

@@ -83,12 +83,12 @@ func (f *ForkChoice) ShouldOverrideFCU() (override bool) {
}
// Return early if we are checking before 10 seconds into the slot
secs, err := slots.SecondsSinceSlotStart(head.slot, f.store.genesisTime, uint64(time.Now().Unix()))
sss, err := slots.SinceSlotStart(head.slot, f.store.genesisTime, time.Now())
if err != nil {
log.WithError(err).Error("could not check current slot")
return true
}
if secs < ProcessAttestationsThreshold {
if sss < ProcessAttestationsThreshold {
return true
}
// Only orphan a block if the parent LMD vote is strong
@@ -110,9 +110,9 @@ func (f *ForkChoice) GetProposerHead() [32]byte {
if head == nil {
return [32]byte{}
}
// Only reorg blocks from the previous slot.
if head.slot+1 != slots.CurrentSlot(f.store.genesisTime) {
currentSlot := slots.CurrentSlot(f.store.genesisTime)
if head.slot+1 != currentSlot {
return head.root
}
// Do not reorg on epoch boundaries
@@ -153,12 +153,12 @@ func (f *ForkChoice) GetProposerHead() [32]byte {
}
// Only reorg if we are proposing early
secs, err := slots.SecondsSinceSlotStart(head.slot+1, f.store.genesisTime, uint64(time.Now().Unix()))
sss, err := slots.SinceSlotStart(currentSlot, f.store.genesisTime, time.Now())
if err != nil {
log.WithError(err).Error("could not check if proposing early")
return head.root
}
if secs >= orphanLateBlockProposingEarly {
if sss >= orphanLateBlockProposingEarly*time.Second {
return head.root
}
return parent.root

View File

@@ -2,6 +2,7 @@ package doublylinkedtree
import (
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/testing/require"
@@ -27,8 +28,8 @@ func TestForkChoice_ShouldOverrideFCU(t *testing.T) {
}
f.ProcessAttestation(ctx, attesters, blk.Root(), 0)
orphanLateBlockFirstThreshold := params.BeaconConfig().SecondsPerSlot / params.BeaconConfig().IntervalsPerSlot
driftGenesisTime(f, 2, orphanLateBlockFirstThreshold+1)
orphanLateBlockFirstThreshold := time.Duration(params.BeaconConfig().SecondsPerSlot/params.BeaconConfig().IntervalsPerSlot) * time.Second
driftGenesisTime(f, 2, orphanLateBlockFirstThreshold+time.Second)
st, blk, err = prepareForkchoiceState(ctx, 2, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, blk))
@@ -48,29 +49,29 @@ func TestForkChoice_ShouldOverrideFCU(t *testing.T) {
t.Run("head is not from current slot", func(t *testing.T) {
driftGenesisTime(f, 3, 0)
require.Equal(t, false, f.ShouldOverrideFCU())
driftGenesisTime(f, 2, orphanLateBlockFirstThreshold+1)
driftGenesisTime(f, 2, orphanLateBlockFirstThreshold+time.Second)
})
t.Run("head is from epoch boundary", func(t *testing.T) {
saved := f.store.headNode.slot
driftGenesisTime(f, params.BeaconConfig().SlotsPerEpoch-1, 0)
f.store.headNode.slot = params.BeaconConfig().SlotsPerEpoch - 1
require.Equal(t, false, f.ShouldOverrideFCU())
driftGenesisTime(f, 2, orphanLateBlockFirstThreshold+1)
driftGenesisTime(f, 2, orphanLateBlockFirstThreshold+time.Second)
f.store.headNode.slot = saved
})
t.Run("head is early", func(t *testing.T) {
saved := f.store.headNode.timestamp
f.store.headNode.timestamp = saved - 2
f.store.headNode.timestamp = saved.Add(-2 * time.Second)
require.Equal(t, false, f.ShouldOverrideFCU())
f.store.headNode.timestamp = saved
})
t.Run("chain not finalizing", func(t *testing.T) {
saved := f.store.headNode.slot
f.store.headNode.slot = 97
driftGenesisTime(f, 97, orphanLateBlockFirstThreshold+1)
driftGenesisTime(f, 97, orphanLateBlockFirstThreshold+time.Second)
require.Equal(t, false, f.ShouldOverrideFCU())
f.store.headNode.slot = saved
driftGenesisTime(f, 2, orphanLateBlockFirstThreshold+1)
driftGenesisTime(f, 2, orphanLateBlockFirstThreshold+time.Second)
})
t.Run("Not single block reorg", func(t *testing.T) {
saved := f.store.headNode.parent.slot
@@ -92,11 +93,11 @@ func TestForkChoice_ShouldOverrideFCU(t *testing.T) {
})
t.Run("parent is weak late call", func(t *testing.T) {
saved := f.store.headNode.parent.weight
driftGenesisTime(f, 2, 11)
driftGenesisTime(f, 2, 11*time.Second)
f.store.headNode.parent.weight = 0
require.Equal(t, false, f.ShouldOverrideFCU())
f.store.headNode.parent.weight = saved
driftGenesisTime(f, 2, orphanLateBlockFirstThreshold+1)
driftGenesisTime(f, 2, orphanLateBlockFirstThreshold+time.Second)
})
t.Run("Head is strong", func(t *testing.T) {
f.store.headNode.weight = f.store.committeeWeight
@@ -125,7 +126,7 @@ func TestForkChoice_GetProposerHead(t *testing.T) {
}
f.ProcessAttestation(ctx, attesters, blk.Root(), 0)
driftGenesisTime(f, 3, 1)
driftGenesisTime(f, 3, 1*time.Second)
childRoot := [32]byte{'b'}
st, blk, err = prepareForkchoiceState(ctx, 2, childRoot, [32]byte{'a'}, [32]byte{'B'}, 0, 0)
require.NoError(t, err)
@@ -134,10 +135,9 @@ func TestForkChoice_GetProposerHead(t *testing.T) {
require.NoError(t, err)
require.Equal(t, blk.Root(), headRoot)
orphanLateBlockFirstThreshold := params.BeaconConfig().SecondsPerSlot / params.BeaconConfig().IntervalsPerSlot
f.store.headNode.timestamp -= params.BeaconConfig().SecondsPerSlot - orphanLateBlockFirstThreshold
f.store.headNode.timestamp = f.store.headNode.timestamp.Add(-1 * time.Duration(params.BeaconConfig().SecondsPerSlot-orphanLateBlockFirstThreshold) * time.Second)
t.Run("head is weak", func(t *testing.T) {
require.Equal(t, parentRoot, f.GetProposerHead())
})
t.Run("head is nil", func(t *testing.T) {
saved := f.store.headNode
@@ -148,19 +148,20 @@ func TestForkChoice_GetProposerHead(t *testing.T) {
t.Run("head is not from previous slot", func(t *testing.T) {
driftGenesisTime(f, 4, 0)
require.Equal(t, childRoot, f.GetProposerHead())
driftGenesisTime(f, 3, 1)
driftGenesisTime(f, 3, 1*time.Second)
})
t.Run("head is from epoch boundary", func(t *testing.T) {
saved := f.store.headNode.slot
driftGenesisTime(f, params.BeaconConfig().SlotsPerEpoch, 0)
f.store.headNode.slot = params.BeaconConfig().SlotsPerEpoch - 1
require.Equal(t, childRoot, f.GetProposerHead())
driftGenesisTime(f, 3, 1)
driftGenesisTime(f, 3, 1*time.Second)
f.store.headNode.slot = saved
})
t.Run("head is early", func(t *testing.T) {
saved := f.store.headNode.timestamp
f.store.headNode.timestamp = saved - 2
headTimeStamp := f.store.genesisTime.Add(time.Duration(uint64(f.store.headNode.slot)*params.BeaconConfig().SecondsPerSlot+1) * time.Second)
f.store.headNode.timestamp = headTimeStamp
require.Equal(t, childRoot, f.GetProposerHead())
f.store.headNode.timestamp = saved
})
@@ -170,7 +171,7 @@ func TestForkChoice_GetProposerHead(t *testing.T) {
driftGenesisTime(f, 98, 0)
require.Equal(t, childRoot, f.GetProposerHead())
f.store.headNode.slot = saved
driftGenesisTime(f, 3, 1)
driftGenesisTime(f, 3, 1*time.Second)
})
t.Run("Not single block reorg", func(t *testing.T) {
saved := f.store.headNode.parent.slot

View File

@@ -44,7 +44,7 @@ func (s *Store) head(ctx context.Context) ([32]byte, error) {
if bestDescendant == nil {
bestDescendant = justifiedNode
}
currentEpoch := slots.EpochsSinceGenesis(time.Unix(int64(s.genesisTime), 0))
currentEpoch := slots.EpochsSinceGenesis(s.genesisTime)
if !bestDescendant.viableForHead(s.justifiedCheckpoint.Epoch, currentEpoch) {
s.allTipsAreInvalid = true
return [32]byte{}, fmt.Errorf("head at slot %d with weight %d is not eligible, finalizedEpoch, justified Epoch %d, %d != %d, %d",
@@ -99,7 +99,7 @@ func (s *Store) insert(ctx context.Context,
unrealizedFinalizedEpoch: finalizedEpoch,
optimistic: true,
payloadHash: payloadHash,
timestamp: uint64(time.Now().Unix()),
timestamp: time.Now(),
}
// Set the node's target checkpoint
@@ -128,15 +128,18 @@ func (s *Store) insert(ctx context.Context,
} else {
parent.children = append(parent.children, n)
// Apply proposer boost
timeNow := uint64(time.Now().Unix())
if timeNow < s.genesisTime {
now := time.Now()
if now.Before(s.genesisTime) {
return n, nil
}
secondsIntoSlot := (timeNow - s.genesisTime) % params.BeaconConfig().SecondsPerSlot
currentSlot := slots.CurrentSlot(s.genesisTime)
boostThreshold := params.BeaconConfig().SecondsPerSlot / params.BeaconConfig().IntervalsPerSlot
sss, err := slots.SinceSlotStart(currentSlot, s.genesisTime, now)
if err != nil {
return nil, fmt.Errorf("could not determine time since current slot started: %w", err)
}
boostThreshold := time.Duration(params.BeaconConfig().SecondsPerSlot/params.BeaconConfig().IntervalsPerSlot) * time.Second
isFirstBlock := s.proposerBoostRoot == [32]byte{}
if currentSlot == slot && secondsIntoSlot < boostThreshold && isFirstBlock {
if currentSlot == slot && sss < boostThreshold && isFirstBlock {
s.proposerBoostRoot = root
}
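With mainnet parameters the boost window computed above is 4 seconds:

// 12 (SecondsPerSlot) / 3 (IntervalsPerSlot) = 4, so on mainnet
// boostThreshold == 4 * time.Second: only the first block observed within
// the first 4 seconds of its own slot earns the proposer boost.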
@@ -268,17 +271,18 @@ func (f *ForkChoice) HighestReceivedBlockSlot() primitives.Slot {
}
// HighestReceivedBlockDelay returns the number of slots that the highest
// received block was late when receiving it
// received block was late when it was received. For example, if a block was 12 slots late,
// this method returns 12.
func (f *ForkChoice) HighestReceivedBlockDelay() primitives.Slot {
n := f.store.highestReceivedNode
if n == nil {
return 0
}
secs, err := slots.SecondsSinceSlotStart(n.slot, f.store.genesisTime, n.timestamp)
sss, err := slots.SinceSlotStart(n.slot, f.store.genesisTime, n.timestamp)
if err != nil {
return 0
}
return primitives.Slot(secs / params.BeaconConfig().SecondsPerSlot)
return primitives.Slot(uint64(sss/time.Second) / params.BeaconConfig().SecondsPerSlot)
}
// ReceivedBlocksLastEpoch returns the number of blocks received in the last epoch

View File

@@ -329,7 +329,7 @@ func TestForkChoice_ReceivedBlocksLastEpoch(t *testing.T) {
var b [32]byte
// Make sure it doesn't underflow
s.genesisTime = uint64(time.Now().Add(time.Duration(-1*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second).Unix())
f.SetGenesisTime(time.Now().Add(time.Duration(-1*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second))
ctx := t.Context()
_, blk, err := prepareForkchoiceState(ctx, 1, [32]byte{'a'}, b, b, 1, 1)
require.NoError(t, err)
@@ -347,7 +347,7 @@ func TestForkChoice_ReceivedBlocksLastEpoch(t *testing.T) {
require.NoError(t, err)
_, err = s.insert(ctx, blk, 1, 1)
require.NoError(t, err)
s.genesisTime = uint64(time.Now().Add(time.Duration((-64*int64(params.BeaconConfig().SecondsPerSlot))-1) * time.Second).Unix())
f.SetGenesisTime(time.Now().Add(time.Duration((-64*int64(params.BeaconConfig().SecondsPerSlot))-1) * time.Second))
count, err = f.ReceivedBlocksLastEpoch()
require.NoError(t, err)
require.Equal(t, uint64(1), count)
@@ -360,7 +360,7 @@ func TestForkChoice_ReceivedBlocksLastEpoch(t *testing.T) {
require.NoError(t, err)
_, err = s.insert(ctx, blk, 1, 1)
require.NoError(t, err)
s.genesisTime = uint64(time.Now().Add(time.Duration(-66*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second).Unix())
f.SetGenesisTime(time.Now().Add(time.Duration(-66*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second))
count, err = f.ReceivedBlocksLastEpoch()
require.NoError(t, err)
require.Equal(t, uint64(2), count)
@@ -373,7 +373,7 @@ func TestForkChoice_ReceivedBlocksLastEpoch(t *testing.T) {
require.NoError(t, err)
_, err = s.insert(ctx, blk, 1, 1)
require.NoError(t, err)
s.genesisTime = uint64(time.Now().Add(time.Duration(-66*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second).Unix())
f.SetGenesisTime(time.Now().Add(time.Duration(-66*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second))
count, err = f.ReceivedBlocksLastEpoch()
require.NoError(t, err)
require.Equal(t, uint64(3), count)
@@ -386,7 +386,7 @@ func TestForkChoice_ReceivedBlocksLastEpoch(t *testing.T) {
require.NoError(t, err)
_, err = s.insert(ctx, blk, 1, 1)
require.NoError(t, err)
s.genesisTime = uint64(time.Now().Add(time.Duration(-98*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second).Unix())
f.SetGenesisTime(time.Now().Add(time.Duration(-98*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second))
count, err = f.ReceivedBlocksLastEpoch()
require.NoError(t, err)
require.Equal(t, uint64(1), count)
@@ -400,7 +400,7 @@ func TestForkChoice_ReceivedBlocksLastEpoch(t *testing.T) {
require.NoError(t, err)
_, err = s.insert(ctx, blk, 1, 1)
require.NoError(t, err)
s.genesisTime = uint64(time.Now().Add(time.Duration(-132*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second).Unix())
f.SetGenesisTime(time.Now().Add(time.Duration(-132*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second))
count, err = f.ReceivedBlocksLastEpoch()
require.NoError(t, err)
require.Equal(t, uint64(1), count)
@@ -415,7 +415,7 @@ func TestForkChoice_ReceivedBlocksLastEpoch(t *testing.T) {
require.NoError(t, err)
_, err = s.insert(ctx, blk, 1, 1)
require.NoError(t, err)
s.genesisTime = uint64(time.Now().Add(time.Duration(-132*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second).Unix())
f.SetGenesisTime(time.Now().Add(time.Duration(-132*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second))
count, err = f.ReceivedBlocksLastEpoch()
require.NoError(t, err)
require.Equal(t, uint64(1), count)
@@ -430,7 +430,7 @@ func TestForkChoice_ReceivedBlocksLastEpoch(t *testing.T) {
require.NoError(t, err)
_, err = s.insert(ctx, blk, 1, 1)
require.NoError(t, err)
s.genesisTime = uint64(time.Now().Add(time.Duration(-132*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second).Unix())
f.SetGenesisTime(time.Now().Add(time.Duration(-132*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second))
count, err = f.ReceivedBlocksLastEpoch()
require.NoError(t, err)
require.Equal(t, uint64(1), count)
@@ -445,17 +445,17 @@ func TestForkChoice_ReceivedBlocksLastEpoch(t *testing.T) {
require.NoError(t, err)
_, err = s.insert(ctx, blk, 1, 1)
require.NoError(t, err)
s.genesisTime = uint64(time.Now().Add(time.Duration(-132*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second).Unix())
f.SetGenesisTime(time.Now().Add(time.Duration(-132*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second))
count, err = f.ReceivedBlocksLastEpoch()
require.NoError(t, err)
require.Equal(t, uint64(2), count)
require.Equal(t, primitives.Slot(132), f.HighestReceivedBlockSlot())
s.genesisTime = uint64(time.Now().Add(time.Duration(-134*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second).Unix())
f.SetGenesisTime(time.Now().Add(time.Duration(-134*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second))
count, err = f.ReceivedBlocksLastEpoch()
require.NoError(t, err)
require.Equal(t, uint64(1), count)
s.genesisTime = uint64(time.Now().Add(time.Duration(-165*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second).Unix())
f.SetGenesisTime(time.Now().Add(time.Duration(-165*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second))
count, err = f.ReceivedBlocksLastEpoch()
require.NoError(t, err)
require.Equal(t, uint64(0), count)
@@ -616,3 +616,17 @@ func TestStore_CleanupInserting(t *testing.T) {
require.NotNil(t, f.InsertNode(ctx, st, blk))
require.Equal(t, false, f.HasNode(blk.Root()))
}
func TestStore_HighestReceivedBlockDelay(t *testing.T) {
f := ForkChoice{
store: &Store{
genesisTime: time.Unix(0, 0),
highestReceivedNode: &Node{
slot: 10,
timestamp: time.Unix(int64((10+12)*params.BeaconConfig().SecondsPerSlot), 0), // 12 slots late
},
},
}
require.Equal(t, primitives.Slot(12), f.HighestReceivedBlockDelay())
}

View File

@@ -2,6 +2,7 @@ package doublylinkedtree
import (
"sync"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice"
forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
@@ -37,7 +38,7 @@ type Store struct {
nodeByPayload map[[fieldparams.RootLength]byte]*Node // nodes indexed by payload Hash
slashedIndices map[primitives.ValidatorIndex]bool // the list of equivocating validator indices
originRoot [fieldparams.RootLength]byte // The genesis block root
genesisTime uint64
genesisTime time.Time
highestReceivedNode *Node // The highest slot node.
receivedBlocksLastEpoch [fieldparams.SlotsPerEpoch]primitives.Slot // Using `highestReceivedSlot`. The slot of blocks received in the last epoch.
allTipsAreInvalid bool // tracks if all tips are not viable for head
@@ -60,7 +61,7 @@ type Node struct {
weight uint64 // weight of this node: the total balance including children
bestDescendant *Node // bestDescendant node of this node.
optimistic bool // whether the block has been fully validated or not
timestamp uint64 // The timestamp when the node was inserted.
timestamp time.Time // The timestamp when the node was inserted.
}
// Vote defines an individual validator's vote.

Some files were not shown because too many files have changed in this diff.