Compare commits


1 Commit

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| terence tsao | 03ad817d7e | Get blobs from el based in cli flag blob-to-address | 2025-10-16 12:30:50 -07:00 |
302 changed files with 2353 additions and 13160 deletions

View File

@@ -4,67 +4,6 @@ All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [v6.1.4](https://github.com/prysmaticlabs/prysm/compare/v6.1.3...v6.1.4) - 2025-10-24
This release includes a bug fix affecting block proposals in rare cases, along with an important update for Windows users running post-Fusaka.
### Added
- SSZ-QL: Add endpoints for `BeaconState`/`BeaconBlock`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15888)
- Add native state diff type and marshalling functions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15250)
- Update the earliest available slot after pruning operations in beacon chain database pruner. This ensures the P2P layer accurately knows which historical data is available after pruning, preventing nodes from advertising or attempting to serve data that has been pruned. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15694)
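
As a rough illustration of the pruning behavior described in the last entry above, the sketch below tracks an earliest-available-slot value after pruning so the P2P layer only advertises data it still holds. All names are hypothetical; this is not Prysm's actual pruner code.

```go
// Illustrative sketch only: hypothetical names, not Prysm's actual pruner.
package main

import (
	"fmt"
	"sync/atomic"
)

// pruner tracks the oldest slot still stored after pruning, so the P2P layer
// can advertise only data it can actually serve.
type pruner struct {
	earliestAvailableSlot atomic.Uint64
}

// prune deletes everything older than cutoff and records the new earliest
// available slot.
func (p *pruner) prune(cutoff uint64) {
	// ... delete blocks/sidecars below cutoff from the database ...
	p.earliestAvailableSlot.Store(cutoff)
}

// advertisedRange is what the node would report to peers; it never promises
// data below the pruning cutoff.
func (p *pruner) advertisedRange(headSlot uint64) (uint64, uint64) {
	return p.earliestAvailableSlot.Load(), headSlot
}

func main() {
	var p pruner
	p.prune(12_345)
	lo, hi := p.advertisedRange(20_000)
	fmt.Printf("serving slots [%d, %d]\n", lo, hi)
}
```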
### Fixed
- Correctly advertise (in ENR and beacon API) attestation subnets when using `--subscribe-all-subnets`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15880)
- `randomPeer`: Return if the context is cancelled when waiting for peers. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15876)
- Improve the error message when the byte count read from disk while reading a data column sidecar is lower than expected (usually because the file is truncated). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15881)
- Delete the genesis state file when `--clear-db` / `--force-clear-db` is specified. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15883)
- Fix sync committee subscription to use subnet indices instead of committee indices. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15885)
- Fixed metadata extraction on Windows by correctly splitting file paths; see the sketch after this list. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15899)
- `VerifyDataColumnsSidecarKZGProofs`: Check if sizes match. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15892)
- Fix recoverStateSummary to persist state summaries in stateSummaryBucket instead of stateBucket (#15896). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15896)
- `updateCustodyInfoInDB`: Use `NumberOfCustodyGroups` instead of `NumberOfColumns`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15908)
- Sync committee uses correct state to calculate position. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15905)
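
The Windows metadata-extraction entry above comes down to a common portability pitfall: splitting paths on "/" by hand instead of using `path/filepath`. A minimal, hypothetical sketch of the difference (not the actual Prysm fix):

```go
// Illustrative sketch only: shows the general pitfall, not Prysm's code.
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// fileNameNaive breaks on Windows: `C:\data\blobs\0x1234.ssz` contains no "/",
// so the whole path is returned instead of the file name.
func fileNameNaive(path string) string {
	parts := strings.Split(path, "/")
	return parts[len(parts)-1]
}

// fileNamePortable uses filepath, which understands the OS path separator.
func fileNamePortable(path string) string {
	return filepath.Base(path)
}

func main() {
	p := filepath.Join("data", "blobs", "0x1234.ssz")
	fmt.Println(fileNameNaive(p), fileNamePortable(p))
}
```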
## [v6.1.3](https://github.com/prysmaticlabs/prysm/compare/v6.1.2...v6.1.3) - 2025-10-20
This release has several important beacon API and p2p fixes.
### Added
- Add Grandine to P2P known agents (useful for metrics). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15829)
- Delegate sszInfo HashTreeRoot to FastSSZ-generated implementations via SSZObject, enabling root calculation for generated types while avoiding duplicate logic; see the sketch after this list. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15805)
- SSZ-QL: Use `fastssz`'s `SizeSSZ` method for calculating the size of `Container` type. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15864)
- SSZ-QL: Access n-th element in `List`/`Vector`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15767)
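
To illustrate the delegation idea in the sszInfo entry above, here is a minimal sketch with hypothetical type names; it only shows the pattern of forwarding `HashTreeRoot` to a generated implementation rather than re-implementing merkleization, and is not the actual SSZ-QL code.

```go
// Illustrative sketch only: hypothetical names, not the SSZ-QL implementation.
package main

import "fmt"

// SSZObject is the minimal surface a fastssz-generated type already provides.
type SSZObject interface {
	HashTreeRoot() ([32]byte, error)
}

// sszInfo wraps a generated object; instead of re-implementing merkleization,
// it delegates root computation to the generated code.
type sszInfo struct {
	obj SSZObject
}

func (s sszInfo) HashTreeRoot() ([32]byte, error) {
	return s.obj.HashTreeRoot() // no duplicate hashing logic here
}

// checkpoint stands in for any generated container type.
type checkpoint struct{ Epoch uint64 }

func (c checkpoint) HashTreeRoot() ([32]byte, error) {
	// A real generated type would merkleize its fields; this stub just
	// returns a fixed root for demonstration.
	return [32]byte{byte(c.Epoch)}, nil
}

func main() {
	info := sszInfo{obj: checkpoint{Epoch: 7}}
	root, err := info.HashTreeRoot()
	fmt.Println(root[:4], err)
}
```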
### Changed
- Do not verify block data when calculating rewards. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15819)
- Process pending attestations after pending blocks are cleared. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15824)
- Updated web3signer to 25.9.1. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15832)
- Gracefully handle the submit blinded block request returning 502 errors. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15848)
- Improve how individual message errors are returned from the Beacon API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15835)
- SSZ-QL: Clarify `Size` method with more sophisticated `SSZType`s. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15864)
### Fixed
- Use service context and continue on slasher attestation errors (#15803). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15803)
- The block event is no longer sent on certain block processing failures; it is now sent only in three cases: the block is non-canonical, the block is canonical but getFCUArgs fails, or processing fully succeeds. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15814)
- Fixed web3signer e2e issues caused by a regression in old fork support. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15832)
- Do not mark blocks as invalid from ErrNotDescendantOfFinalized. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15846)
- Fixed [#15812](https://github.com/OffchainLabs/prysm/issues/15812): Gossip attestation validation incorrectly rejecting attestations that arrive before their referenced blocks. Previously, attestations were saved to the pending queue but immediately rejected by forkchoice validation, causing "not descendant of finalized checkpoint" errors. Now attestations for missing blocks return `ValidationIgnore` without error, allowing them to be properly processed when their blocks arrive. This eliminates false positive rejections and prevents potential incorrect peer downscoring during network congestion; see the sketch after this list. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15840)
- Mark the block as invalid if it has an invalid signature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15847)
- Display error messages from the server verbatim when they are not encoded as `application/json`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15860)
- `HasAtLeastOneIndex`: Check the index is not too high. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15865)
- Fix the `/eth/v1/beacon/blob_sidecars/` beacon API when the Fulu fork epoch is set to the far future epoch. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15867)
- `dataColumnSidecarsByRangeRPCHandler`: Gracefully close the stream if there is no data to return. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15866)
- `VerifyDataColumnSidecar`: Check that there are not too many commitments. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15859)
- `WithDataColumnRetentionEpochs`: Use `dataColumnRetentionEpoch` instead of `blobColumnRetentionEpoch`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15872)
- Mark epoch transition correctly on new head events. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15871)
- Reject committee index >= `committees_per_slot` in unaggregated attestation validation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15855)
- Decreased attestation gossip validation batch deadline to 5ms. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15882)
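
The gossip attestation fix above (#15812) boils down to returning an "ignore" outcome instead of a "reject" when the referenced block has not arrived yet. A minimal, hypothetical sketch of that decision (local enum instead of the real pubsub types; not Prysm's actual validation pipeline):

```go
// Illustrative sketch only: hypothetical types showing ignore-vs-reject for
// attestations whose block has not arrived yet.
package main

import "fmt"

type validationResult int

const (
	validationAccept validationResult = iota
	validationIgnore                  // queue/drop without penalizing the peer
	validationReject                  // counts against the sending peer's score
)

type attestation struct{ beaconBlockRoot [32]byte }

type chain struct{ known map[[32]byte]bool }

func (c chain) hasBlock(root [32]byte) bool { return c.known[root] }

// validateAttestation ignores (rather than rejects) attestations that
// reference a block we have not processed yet, so they can be re-evaluated
// once the block arrives and the peer is not downscored for being early.
func validateAttestation(c chain, att attestation, pending func(attestation)) validationResult {
	if !c.hasBlock(att.beaconBlockRoot) {
		pending(att) // park it in the pending queue
		return validationIgnore
	}
	// ... forkchoice and signature checks would run here ...
	return validationAccept
}

func main() {
	c := chain{known: map[[32]byte]bool{}}
	res := validateAttestation(c, attestation{beaconBlockRoot: [32]byte{1}}, func(attestation) {})
	fmt.Println(res == validationIgnore)
}
```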
## [v6.1.2](https://github.com/prysmaticlabs/prysm/compare/v6.1.1...v6.1.2) - 2025-10-10
This release has several important fixes to improve Prysm's peering, stability, and attestation inclusion on mainnet and all testnets. All node operators are encouraged to update to this release as soon as practical for the best mainnet performance.
@@ -3820,4 +3759,4 @@ There are no security updates in this release.
# Older than v2.0.0
For changelog history for releases older than v2.0.0, please refer to https://github.com/prysmaticlabs/prysm/releases
For changelog history for releases older than v2.0.0, please refer to https://github.com/prysmaticlabs/prysm/releases

View File

@@ -253,16 +253,16 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.6.0-beta.1"
consensus_spec_version = "v1.6.0-beta.0"
load("@prysm//tools:download_spectests.bzl", "consensus_spec_tests")
consensus_spec_tests(
name = "consensus_spec_tests",
flavors = {
"general": "sha256-oEj0MTViJHjZo32nABK36gfvSXpbwkBk/jt6Mj7pWFI=",
"minimal": "sha256-cS4NPv6IRBoCSmWomQ8OEo8IsVNW9YawUFqoRZQBUj4=",
"mainnet": "sha256-BYuLndMPAh4p13IRJgNfVakrCVL69KRrNw2tdc3ETbE=",
"general": "sha256-rT3jQp2+ZaDiO66gIQggetzqr+kGeexaLqEhbx4HDMY=",
"minimal": "sha256-wowwwyvd0KJLsE+oDOtPkrhZyJndJpJ0lbXYsLH6XBw=",
"mainnet": "sha256-4ZLrLNeO7NihZ4TuWH5V5fUhvW9Y3mAPBQDCqrfShps=",
},
version = consensus_spec_version,
)
@@ -278,7 +278,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-yrq3tdwPS8Ri+ueeLAHssIT3ssMrX7zvHiJ8Xf9GVYs=",
integrity = "sha256-sBe3Rx8zGq9IrvfgIhZQpYidGjy3mE1SiCb6/+pjLdY=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)

View File

@@ -6,25 +6,15 @@ go_library(
"client.go",
"errors.go",
"options.go",
"transport.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/api/client",
visibility = ["//visibility:public"],
deps = [
"//api:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
deps = ["@com_github_pkg_errors//:go_default_library"],
)
go_test(
name = "go_default_test",
srcs = [
"client_test.go",
"transport_test.go",
],
srcs = ["client_test.go"],
embed = [":go_default_library"],
deps = [
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
],
deps = ["//testing/require:go_default_library"],
)

View File

@@ -11,7 +11,6 @@ go_library(
importpath = "github.com/OffchainLabs/prysm/v6/api/client/beacon",
visibility = ["//visibility:public"],
deps = [
"//api:go_default_library",
"//api/client:go_default_library",
"//api/server:go_default_library",
"//api/server/structs:go_default_library",

View File

@@ -11,7 +11,6 @@ import (
"regexp"
"strconv"
"github.com/OffchainLabs/prysm/v6/api"
"github.com/OffchainLabs/prysm/v6/api/client"
"github.com/OffchainLabs/prysm/v6/api/server"
"github.com/OffchainLabs/prysm/v6/api/server/structs"
@@ -274,7 +273,7 @@ func (c *Client) SubmitChangeBLStoExecution(ctx context.Context, request []*stru
if err != nil {
return errors.Wrap(err, "invalid format, failed to create new POST request object")
}
req.Header.Set(api.ContentTypeHeader, api.JsonMediaType)
req.Header.Set("Content-Type", "application/json")
resp, err := c.Do(req)
if err != nil {
return err

View File

@@ -171,7 +171,7 @@ func (c *Client) do(ctx context.Context, method string, path string, body io.Rea
if err != nil {
return
}
req.Header.Add(api.UserAgentHeader, version.BuildData())
req.Header.Add("User-Agent", version.BuildData())
for _, o := range opts {
o(req)
}
@@ -232,11 +232,11 @@ func (c *Client) GetHeader(ctx context.Context, slot primitives.Slot, parentHash
var getOpts reqOption
if c.sszEnabled {
getOpts = func(r *http.Request) {
r.Header.Set(api.AcceptHeader, api.OctetStreamMediaType)
r.Header.Set("Accept", api.OctetStreamMediaType)
}
} else {
getOpts = func(r *http.Request) {
r.Header.Set(api.AcceptHeader, api.JsonMediaType)
r.Header.Set("Accept", api.JsonMediaType)
}
}
data, header, err := c.do(ctx, http.MethodGet, path, nil, http.StatusOK, getOpts)
@@ -259,8 +259,8 @@ func (c *Client) GetHeader(ctx context.Context, slot primitives.Slot, parentHash
func (c *Client) parseHeaderResponse(data []byte, header http.Header, slot primitives.Slot) (SignedBid, error) {
var versionHeader string
if c.sszEnabled || header.Get(api.EthConsensusVersionHeader) != "" {
versionHeader = header.Get(api.EthConsensusVersionHeader)
if c.sszEnabled || header.Get(api.VersionHeader) != "" {
versionHeader = header.Get(api.VersionHeader)
} else {
// If we don't have a version header, attempt to parse JSON for version
v := &VersionResponse{}
@@ -390,8 +390,8 @@ func (c *Client) RegisterValidator(ctx context.Context, svr []*ethpb.SignedValid
)
if c.sszEnabled {
postOpts = func(r *http.Request) {
r.Header.Set(api.ContentTypeHeader, api.OctetStreamMediaType)
r.Header.Set(api.AcceptHeader, api.OctetStreamMediaType)
r.Header.Set("Content-Type", api.OctetStreamMediaType)
r.Header.Set("Accept", api.OctetStreamMediaType)
}
body, err = sszValidatorRegisterRequest(svr)
if err != nil {
@@ -401,8 +401,8 @@ func (c *Client) RegisterValidator(ctx context.Context, svr []*ethpb.SignedValid
}
} else {
postOpts = func(r *http.Request) {
r.Header.Set(api.ContentTypeHeader, api.JsonMediaType)
r.Header.Set(api.AcceptHeader, api.JsonMediaType)
r.Header.Set("Content-Type", api.JsonMediaType)
r.Header.Set("Accept", api.JsonMediaType)
}
body, err = jsonValidatorRegisterRequest(svr)
if err != nil {
@@ -446,7 +446,7 @@ func sszValidatorRegisterRequest(svr []*ethpb.SignedValidatorRegistrationV1) ([]
return ssz, nil
}
var errResponseVersionMismatch = errors.New("builder API response uses a different version than requested in " + api.EthConsensusVersionHeader + " header")
var errResponseVersionMismatch = errors.New("builder API response uses a different version than requested in " + api.VersionHeader + " header")
func getVersionsBlockToPayload(blockVersion int) (int, error) {
if blockVersion >= version.Fulu {
@@ -525,7 +525,7 @@ func (c *Client) SubmitBlindedBlockPostFulu(ctx context.Context, sb interfaces.R
func (c *Client) checkBlockVersion(respBytes []byte, header http.Header) (int, error) {
var versionHeader string
if c.sszEnabled {
versionHeader = strings.ToLower(header.Get(api.EthConsensusVersionHeader))
versionHeader = strings.ToLower(header.Get(api.VersionHeader))
} else {
// fallback to JSON-based version extraction
v := &VersionResponse{}
@@ -555,9 +555,9 @@ func (c *Client) buildBlindedBlockRequest(sb interfaces.ReadOnlySignedBeaconBloc
return nil, nil, errors.Wrap(err, "could not marshal SSZ for blinded block")
}
opt := func(r *http.Request) {
r.Header.Set(api.EthConsensusVersionHeader, version.String(sb.Version()))
r.Header.Set(api.ContentTypeHeader, api.OctetStreamMediaType)
r.Header.Set(api.AcceptHeader, api.OctetStreamMediaType)
r.Header.Set(api.VersionHeader, version.String(sb.Version()))
r.Header.Set("Content-Type", api.OctetStreamMediaType)
r.Header.Set("Accept", api.OctetStreamMediaType)
}
return body, opt, nil
}
@@ -571,9 +571,9 @@ func (c *Client) buildBlindedBlockRequest(sb interfaces.ReadOnlySignedBeaconBloc
return nil, nil, errors.Wrap(err, "error marshaling blinded block to JSON")
}
opt := func(r *http.Request) {
r.Header.Set(api.EthConsensusVersionHeader, version.String(sb.Version()))
r.Header.Set(api.ContentTypeHeader, api.JsonMediaType)
r.Header.Set(api.AcceptHeader, api.JsonMediaType)
r.Header.Set(api.VersionHeader, version.String(sb.Version()))
r.Header.Set("Content-Type", api.JsonMediaType)
r.Header.Set("Accept", api.JsonMediaType)
}
return body, opt, nil
}
@@ -676,7 +676,7 @@ func (c *Client) parseBlindedBlockResponseJSON(
// happy path, and an error with information about the server response body for a non-200 response.
func (c *Client) Status(ctx context.Context) error {
getOpts := func(r *http.Request) {
r.Header.Set(api.AcceptHeader, api.JsonMediaType)
r.Header.Set("Accept", api.JsonMediaType)
}
_, _, err := c.do(ctx, http.MethodGet, getStatus, nil, http.StatusOK, getOpts)
return err

View File

@@ -92,8 +92,8 @@ func TestClient_RegisterValidator(t *testing.T) {
t.Run("JSON success", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, api.JsonMediaType, r.Header.Get(api.ContentTypeHeader))
require.Equal(t, api.JsonMediaType, r.Header.Get(api.AcceptHeader))
require.Equal(t, api.JsonMediaType, r.Header.Get("Content-Type"))
require.Equal(t, api.JsonMediaType, r.Header.Get("Accept"))
body, err := io.ReadAll(r.Body)
defer func() {
require.NoError(t, r.Body.Close())
@@ -127,8 +127,8 @@ func TestClient_RegisterValidator(t *testing.T) {
t.Run("SSZ success", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, api.OctetStreamMediaType, r.Header.Get(api.ContentTypeHeader))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get(api.AcceptHeader))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get("Content-Type"))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get("Accept"))
body, err := io.ReadAll(r.Body)
defer func() {
require.NoError(t, r.Body.Close())
@@ -225,7 +225,7 @@ func TestClient_GetHeader(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, expectedPath, r.URL.Path)
require.Equal(t, api.JsonMediaType, r.Header.Get(api.AcceptHeader))
require.Equal(t, api.JsonMediaType, r.Header.Get("Accept"))
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBufferString(testExampleHeaderResponse)),
@@ -260,7 +260,7 @@ func TestClient_GetHeader(t *testing.T) {
t.Run("bellatrix ssz", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, api.OctetStreamMediaType, r.Header.Get(api.AcceptHeader))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get("Accept"))
require.Equal(t, expectedPath, r.URL.Path)
epr := &ExecHeaderResponse{}
require.NoError(t, json.Unmarshal([]byte(testExampleHeaderResponse), epr))
@@ -269,7 +269,7 @@ func TestClient_GetHeader(t *testing.T) {
ssz, err := pro.MarshalSSZ()
require.NoError(t, err)
header := http.Header{}
header.Set(api.EthConsensusVersionHeader, "bellatrix")
header.Set(api.VersionHeader, "bellatrix")
return &http.Response{
StatusCode: http.StatusOK,
Header: header,
@@ -306,7 +306,7 @@ func TestClient_GetHeader(t *testing.T) {
t.Run("capella", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, api.JsonMediaType, r.Header.Get(api.AcceptHeader))
require.Equal(t, api.JsonMediaType, r.Header.Get("Accept"))
require.Equal(t, expectedPath, r.URL.Path)
return &http.Response{
StatusCode: http.StatusOK,
@@ -338,7 +338,7 @@ func TestClient_GetHeader(t *testing.T) {
t.Run("capella ssz", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, api.OctetStreamMediaType, r.Header.Get(api.AcceptHeader))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get("Accept"))
require.Equal(t, expectedPath, r.URL.Path)
epr := &ExecHeaderResponseCapella{}
require.NoError(t, json.Unmarshal([]byte(testExampleHeaderResponseCapella), epr))
@@ -347,7 +347,7 @@ func TestClient_GetHeader(t *testing.T) {
ssz, err := pro.MarshalSSZ()
require.NoError(t, err)
header := http.Header{}
header.Set(api.EthConsensusVersionHeader, "capella")
header.Set(api.VersionHeader, "capella")
return &http.Response{
StatusCode: http.StatusOK,
Header: header,
@@ -380,7 +380,7 @@ func TestClient_GetHeader(t *testing.T) {
t.Run("deneb", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, api.JsonMediaType, r.Header.Get(api.AcceptHeader))
require.Equal(t, api.JsonMediaType, r.Header.Get("Accept"))
require.Equal(t, expectedPath, r.URL.Path)
return &http.Response{
StatusCode: http.StatusOK,
@@ -420,7 +420,7 @@ func TestClient_GetHeader(t *testing.T) {
t.Run("deneb ssz", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, api.OctetStreamMediaType, r.Header.Get(api.AcceptHeader))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get("Accept"))
require.Equal(t, expectedPath, r.URL.Path)
epr := &ExecHeaderResponseDeneb{}
require.NoError(t, json.Unmarshal([]byte(testExampleHeaderResponseDeneb), epr))
@@ -429,7 +429,7 @@ func TestClient_GetHeader(t *testing.T) {
ssz, err := pro.MarshalSSZ()
require.NoError(t, err)
header := http.Header{}
header.Set(api.EthConsensusVersionHeader, "deneb")
header.Set(api.VersionHeader, "deneb")
return &http.Response{
StatusCode: http.StatusOK,
Header: header,
@@ -488,7 +488,7 @@ func TestClient_GetHeader(t *testing.T) {
t.Run("electra", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, api.JsonMediaType, r.Header.Get(api.AcceptHeader))
require.Equal(t, api.JsonMediaType, r.Header.Get("Accept"))
require.Equal(t, expectedPath, r.URL.Path)
return &http.Response{
StatusCode: http.StatusOK,
@@ -533,7 +533,7 @@ func TestClient_GetHeader(t *testing.T) {
t.Run("electra ssz", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, api.OctetStreamMediaType, r.Header.Get(api.AcceptHeader))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get("Accept"))
require.Equal(t, expectedPath, r.URL.Path)
epr := &ExecHeaderResponseElectra{}
require.NoError(t, json.Unmarshal([]byte(testExampleHeaderResponseElectra), epr))
@@ -542,7 +542,7 @@ func TestClient_GetHeader(t *testing.T) {
ssz, err := pro.MarshalSSZ()
require.NoError(t, err)
header := http.Header{}
header.Set(api.EthConsensusVersionHeader, "electra")
header.Set(api.VersionHeader, "electra")
return &http.Response{
StatusCode: http.StatusOK,
Header: header,
@@ -612,9 +612,9 @@ func TestSubmitBlindedBlock(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "bellatrix", r.Header.Get(api.EthConsensusVersionHeader))
require.Equal(t, api.JsonMediaType, r.Header.Get(api.ContentTypeHeader))
require.Equal(t, api.JsonMediaType, r.Header.Get(api.AcceptHeader))
require.Equal(t, "bellatrix", r.Header.Get("Eth-Consensus-Version"))
require.Equal(t, api.JsonMediaType, r.Header.Get("Content-Type"))
require.Equal(t, api.JsonMediaType, r.Header.Get("Accept"))
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBufferString(testExampleExecutionPayload)),
@@ -640,9 +640,9 @@ func TestSubmitBlindedBlock(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "bellatrix", r.Header.Get(api.EthConsensusVersionHeader))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get(api.ContentTypeHeader))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get(api.AcceptHeader))
require.Equal(t, "bellatrix", r.Header.Get(api.VersionHeader))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get("Content-Type"))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get("Accept"))
epr := &ExecutionPayloadResponse{}
require.NoError(t, json.Unmarshal([]byte(testExampleExecutionPayload), epr))
ep := &structs.ExecutionPayload{}
@@ -652,7 +652,7 @@ func TestSubmitBlindedBlock(t *testing.T) {
ssz, err := pro.MarshalSSZ()
require.NoError(t, err)
header := http.Header{}
header.Set(api.EthConsensusVersionHeader, "bellatrix")
header.Set(api.VersionHeader, "bellatrix")
return &http.Response{
StatusCode: http.StatusOK,
Header: header,
@@ -680,9 +680,9 @@ func TestSubmitBlindedBlock(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "capella", r.Header.Get(api.EthConsensusVersionHeader))
require.Equal(t, api.JsonMediaType, r.Header.Get(api.ContentTypeHeader))
require.Equal(t, api.JsonMediaType, r.Header.Get(api.AcceptHeader))
require.Equal(t, "capella", r.Header.Get(api.VersionHeader))
require.Equal(t, api.JsonMediaType, r.Header.Get("Content-Type"))
require.Equal(t, api.JsonMediaType, r.Header.Get("Accept"))
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBufferString(testExampleExecutionPayloadCapella)),
@@ -710,9 +710,9 @@ func TestSubmitBlindedBlock(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "capella", r.Header.Get(api.EthConsensusVersionHeader))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get(api.ContentTypeHeader))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get(api.AcceptHeader))
require.Equal(t, "capella", r.Header.Get(api.VersionHeader))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get("Content-Type"))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get("Accept"))
epr := &ExecutionPayloadResponse{}
require.NoError(t, json.Unmarshal([]byte(testExampleExecutionPayloadCapella), epr))
ep := &structs.ExecutionPayloadCapella{}
@@ -722,7 +722,7 @@ func TestSubmitBlindedBlock(t *testing.T) {
ssz, err := pro.MarshalSSZ()
require.NoError(t, err)
header := http.Header{}
header.Set(api.EthConsensusVersionHeader, "capella")
header.Set(api.VersionHeader, "capella")
return &http.Response{
StatusCode: http.StatusOK,
Header: header,
@@ -753,9 +753,9 @@ func TestSubmitBlindedBlock(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "deneb", r.Header.Get(api.EthConsensusVersionHeader))
require.Equal(t, api.JsonMediaType, r.Header.Get(api.ContentTypeHeader))
require.Equal(t, api.JsonMediaType, r.Header.Get(api.AcceptHeader))
require.Equal(t, "deneb", r.Header.Get(api.VersionHeader))
require.Equal(t, api.JsonMediaType, r.Header.Get("Content-Type"))
require.Equal(t, api.JsonMediaType, r.Header.Get("Accept"))
var req structs.SignedBlindedBeaconBlockDeneb
err := json.NewDecoder(r.Body).Decode(&req)
require.NoError(t, err)
@@ -793,9 +793,9 @@ func TestSubmitBlindedBlock(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "deneb", r.Header.Get(api.EthConsensusVersionHeader))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get(api.ContentTypeHeader))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get(api.AcceptHeader))
require.Equal(t, "deneb", r.Header.Get(api.VersionHeader))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get("Content-Type"))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get("Accept"))
epr := &ExecPayloadResponseDeneb{}
require.NoError(t, json.Unmarshal([]byte(testExampleExecutionPayloadDeneb), epr))
pro, blob, err := epr.ToProto()
@@ -807,7 +807,7 @@ func TestSubmitBlindedBlock(t *testing.T) {
ssz, err := combined.MarshalSSZ()
require.NoError(t, err)
header := http.Header{}
header.Set(api.EthConsensusVersionHeader, "deneb")
header.Set(api.VersionHeader, "deneb")
return &http.Response{
StatusCode: http.StatusOK,
Header: header,
@@ -840,9 +840,9 @@ func TestSubmitBlindedBlock(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "electra", r.Header.Get(api.EthConsensusVersionHeader))
require.Equal(t, api.JsonMediaType, r.Header.Get(api.ContentTypeHeader))
require.Equal(t, api.JsonMediaType, r.Header.Get(api.AcceptHeader))
require.Equal(t, "electra", r.Header.Get(api.VersionHeader))
require.Equal(t, api.JsonMediaType, r.Header.Get("Content-Type"))
require.Equal(t, api.JsonMediaType, r.Header.Get("Accept"))
var req structs.SignedBlindedBeaconBlockElectra
err := json.NewDecoder(r.Body).Decode(&req)
require.NoError(t, err)
@@ -880,9 +880,9 @@ func TestSubmitBlindedBlock(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "electra", r.Header.Get(api.EthConsensusVersionHeader))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get(api.ContentTypeHeader))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get(api.AcceptHeader))
require.Equal(t, "electra", r.Header.Get(api.VersionHeader))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get("Content-Type"))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get("Accept"))
epr := &ExecPayloadResponseDeneb{}
require.NoError(t, json.Unmarshal([]byte(testExampleExecutionPayloadDeneb), epr))
pro, blob, err := epr.ToProto()
@@ -894,7 +894,7 @@ func TestSubmitBlindedBlock(t *testing.T) {
ssz, err := combined.MarshalSSZ()
require.NoError(t, err)
header := http.Header{}
header.Set(api.EthConsensusVersionHeader, "electra")
header.Set(api.VersionHeader, "electra")
return &http.Response{
StatusCode: http.StatusOK,
Header: header,
@@ -1566,9 +1566,9 @@ func TestSubmitBlindedBlockPostFulu(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockV2Path, r.URL.Path)
require.Equal(t, "bellatrix", r.Header.Get(api.EthConsensusVersionHeader))
require.Equal(t, api.JsonMediaType, r.Header.Get(api.ContentTypeHeader))
require.Equal(t, api.JsonMediaType, r.Header.Get(api.AcceptHeader))
require.Equal(t, "bellatrix", r.Header.Get("Eth-Consensus-Version"))
require.Equal(t, api.JsonMediaType, r.Header.Get("Content-Type"))
require.Equal(t, api.JsonMediaType, r.Header.Get("Accept"))
// Post-Fulu: only return status code, no payload
return &http.Response{
StatusCode: http.StatusAccepted,
@@ -1591,9 +1591,9 @@ func TestSubmitBlindedBlockPostFulu(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockV2Path, r.URL.Path)
require.Equal(t, "bellatrix", r.Header.Get(api.EthConsensusVersionHeader))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get(api.ContentTypeHeader))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get(api.AcceptHeader))
require.Equal(t, "bellatrix", r.Header.Get(api.VersionHeader))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get("Content-Type"))
require.Equal(t, api.OctetStreamMediaType, r.Header.Get("Accept"))
// Post-Fulu: only return status code, no payload
return &http.Response{
StatusCode: http.StatusAccepted,
@@ -1617,7 +1617,7 @@ func TestSubmitBlindedBlockPostFulu(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockV2Path, r.URL.Path)
require.Equal(t, "bellatrix", r.Header.Get(api.EthConsensusVersionHeader))
require.Equal(t, "bellatrix", r.Header.Get("Eth-Consensus-Version"))
message := ErrorMessage{
Code: 400,
Message: "Bad Request",

View File

@@ -73,8 +73,8 @@ func (h *EventStream) Subscribe(eventsChannel chan<- *Event) {
Data: []byte(errors.Wrap(err, "failed to create HTTP request").Error()),
}
}
req.Header.Set(api.AcceptHeader, api.EventStreamMediaType)
req.Header.Set(api.ConnectionHeader, api.KeepAlive)
req.Header.Set("Accept", api.EventStreamMediaType)
req.Header.Set("Connection", api.KeepAlive)
resp, err := h.httpClient.Do(req)
if err != nil {
eventsChannel <- &Event{

View File

@@ -4,8 +4,6 @@ import (
"fmt"
"net/http"
"time"
"github.com/OffchainLabs/prysm/v6/api"
)
// ReqOption is a request functional option.
@@ -14,14 +12,14 @@ type ReqOption func(*http.Request)
// WithSSZEncoding is a request functional option that adds SSZ encoding header.
func WithSSZEncoding() ReqOption {
return func(req *http.Request) {
req.Header.Set(api.AcceptEncodingHeader, api.OctetStreamMediaType)
req.Header.Set("Accept", "application/octet-stream")
}
}
// WithAuthorizationToken is a request functional option that adds header for authorization token.
func WithAuthorizationToken(token string) ReqOption {
return func(req *http.Request) {
req.Header.Set(api.AuthorizationHeader, fmt.Sprintf("%s %s", api.BearerAuthorization, token))
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", token))
}
}

View File

@@ -1,25 +0,0 @@
package client
import "net/http"
// CustomHeadersTransport adds custom headers to each request
type CustomHeadersTransport struct {
base http.RoundTripper
headers map[string][]string
}
func NewCustomHeadersTransport(base http.RoundTripper, headers map[string][]string) *CustomHeadersTransport {
return &CustomHeadersTransport{
base: base,
headers: headers,
}
}
func (t *CustomHeadersTransport) RoundTrip(req *http.Request) (*http.Response, error) {
for header, values := range t.headers {
for _, value := range values {
req.Header.Add(header, value)
}
}
return t.base.RoundTrip(req)
}

View File

@@ -1,25 +0,0 @@
package client
import (
"net/http"
"net/http/httptest"
"testing"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
type noopTransport struct{}
func (*noopTransport) RoundTrip(*http.Request) (*http.Response, error) {
return nil, nil
}
func TestRoundTrip(t *testing.T) {
tr := &CustomHeadersTransport{base: &noopTransport{}, headers: map[string][]string{"key1": []string{"value1", "value2"}, "key2": []string{"value3"}}}
req := httptest.NewRequest("GET", "http://foo", nil)
_, err := tr.RoundTrip(req)
require.NoError(t, err)
assert.DeepEqual(t, []string{"value1", "value2"}, req.Header.Values("key1"))
assert.DeepEqual(t, []string{"value3"}, req.Header.Values("key2"))
}

View File

@@ -3,38 +3,18 @@ package api
import "net/http"
const (
// Headers
EthConsensusVersionHeader = "Eth-Consensus-Version"
EthExecutionPayloadBlindedHeader = "Eth-Execution-Payload-Blinded"
EthExecutionPayloadValueHeader = "Eth-Execution-Payload-Value"
EthConsensusBlockValueHeader = "Eth-Consensus-Block-Value"
AcceptEncodingHeader = "Accept-Encoding"
AcceptHeader = "Accept"
AccessControlAllowOriginHeader = "Access-Control-Allow-Origin"
AuthorizationHeader = "Authorization"
CacheControlHeader = "Cache-Control"
ConnectionHeader = "Connection"
ContentEncodingHeader = "Content-Encoding"
ContentLengthHeader = "Content-Length"
ContentTypeHeader = "Content-Type"
HostHeader = "Host"
OriginHeader = "Origin"
UserAgentHeader = "User-Agent"
// Header values
JsonMediaType = "application/json"
OctetStreamMediaType = "application/octet-stream"
PlainMediaType = "text/plain"
EventStreamMediaType = "text/event-stream"
KeepAlive = "keep-alive"
GzipEncoding = "gzip"
BasicAuthorization = "Basic"
BearerAuthorization = "Bearer"
NoCache = "no-cache"
VersionHeader = "Eth-Consensus-Version"
ExecutionPayloadBlindedHeader = "Eth-Execution-Payload-Blinded"
ExecutionPayloadValueHeader = "Eth-Execution-Payload-Value"
ConsensusBlockValueHeader = "Eth-Consensus-Block-Value"
JsonMediaType = "application/json"
OctetStreamMediaType = "application/octet-stream"
EventStreamMediaType = "text/event-stream"
KeepAlive = "keep-alive"
)
// SetSSEHeaders sets the headers needed for a server-sent event response.
func SetSSEHeaders(w http.ResponseWriter) {
w.Header().Set(ContentTypeHeader, EventStreamMediaType)
w.Header().Set(ConnectionHeader, KeepAlive)
w.Header().Set("Content-Type", EventStreamMediaType)
w.Header().Set("Connection", KeepAlive)
}

View File

@@ -47,7 +47,7 @@ func ContentTypeHandler(acceptedMediaTypes []string) Middleware {
next.ServeHTTP(w, r)
return
}
contentType := r.Header.Get(api.ContentTypeHeader)
contentType := r.Header.Get("Content-Type")
if contentType == "" {
http.Error(w, "Content-Type header is missing", http.StatusUnsupportedMediaType)
return
@@ -75,7 +75,7 @@ func ContentTypeHandler(acceptedMediaTypes []string) Middleware {
func AcceptHeaderHandler(serverAcceptedTypes []string) Middleware {
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if _, ok := apiutil.Negotiate(r.Header.Get(api.AcceptHeader), serverAcceptedTypes); !ok {
if _, ok := apiutil.Negotiate(r.Header.Get("Accept"), serverAcceptedTypes); !ok {
http.Error(w, "Not Acceptable", http.StatusNotAcceptable)
return
}
@@ -88,7 +88,7 @@ func AcceptHeaderHandler(serverAcceptedTypes []string) Middleware {
func AcceptEncodingHeaderHandler() Middleware {
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if !strings.Contains(r.Header.Get(api.AcceptEncodingHeader), api.GzipEncoding) {
if !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
next.ServeHTTP(w, r)
return
}
@@ -116,10 +116,10 @@ type gzipResponseWriter struct {
}
func (g *gzipResponseWriter) WriteHeader(statusCode int) {
if strings.Contains(g.Header().Get(api.ContentTypeHeader), api.JsonMediaType) {
if strings.Contains(g.Header().Get("Content-Type"), api.JsonMediaType) {
// Removing the current Content-Length because zipping will change it.
g.Header().Del(api.ContentLengthHeader)
g.Header().Set(api.ContentEncodingHeader, api.GzipEncoding)
g.Header().Del("Content-Length")
g.Header().Set("Content-Encoding", "gzip")
g.zip = true
}

View File

@@ -132,7 +132,7 @@ func TestContentTypeHandler(t *testing.T) {
}
req := httptest.NewRequest(httpMethod, "/", nil)
if tt.contentType != "" {
req.Header.Set(api.ContentTypeHeader, tt.contentType)
req.Header.Set("Content-Type", tt.contentType)
}
rr := httptest.NewRecorder()
@@ -148,7 +148,7 @@ func TestContentTypeHandler(t *testing.T) {
func TestAcceptEncodingHeaderHandler(t *testing.T) {
dummyContent := "Test gzip middleware content"
nextHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set(api.ContentTypeHeader, r.Header.Get(api.AcceptHeader))
w.Header().Set("Content-Type", r.Header.Get("Accept"))
w.WriteHeader(http.StatusOK)
_, err := w.Write([]byte(dummyContent))
require.NoError(t, err)
@@ -165,7 +165,7 @@ func TestAcceptEncodingHeaderHandler(t *testing.T) {
{
name: "Accept gzip",
accept: api.JsonMediaType,
acceptEncoding: api.GzipEncoding,
acceptEncoding: "gzip",
expectCompressed: true,
},
{
@@ -189,7 +189,7 @@ func TestAcceptEncodingHeaderHandler(t *testing.T) {
{
name: "SSZ",
accept: api.OctetStreamMediaType,
acceptEncoding: api.GzipEncoding,
acceptEncoding: "gzip",
expectCompressed: false,
},
}
@@ -197,16 +197,16 @@ func TestAcceptEncodingHeaderHandler(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
req := httptest.NewRequest("GET", "/", nil)
req.Header.Set(api.AcceptHeader, tt.accept)
req.Header.Set("Accept", tt.accept)
if tt.acceptEncoding != "" {
req.Header.Set(api.AcceptEncodingHeader, tt.acceptEncoding)
req.Header.Set("Accept-Encoding", tt.acceptEncoding)
}
rr := &frozenHeaderRecorder{ResponseRecorder: httptest.NewRecorder()}
handler.ServeHTTP(rr, req)
if tt.expectCompressed {
require.Equal(t, api.GzipEncoding, rr.frozenHeader.Get(api.ContentEncodingHeader), "Expected Content-Encoding header to be 'gzip'")
require.Equal(t, "gzip", rr.frozenHeader.Get("Content-Encoding"), "Expected Content-Encoding header to be 'gzip'")
compressedBody := rr.Body.Bytes()
require.NotEqual(t, dummyContent, string(compressedBody), "Response body should be compressed and differ from the original")
@@ -230,7 +230,7 @@ func TestAcceptEncodingHeaderHandler(t *testing.T) {
}
func TestAcceptHeaderHandler(t *testing.T) {
acceptedTypes := []string{api.JsonMediaType, api.OctetStreamMediaType}
acceptedTypes := []string{"application/json", "application/octet-stream"}
nextHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, err := w.Write([]byte("next handler"))
@@ -295,7 +295,7 @@ func TestAcceptHeaderHandler(t *testing.T) {
t.Run(tt.name, func(t *testing.T) {
req := httptest.NewRequest("GET", "/", nil)
if tt.acceptHeader != "" {
req.Header.Set(api.AcceptHeader, tt.acceptHeader)
req.Header.Set("Accept", tt.acceptHeader)
}
rr := httptest.NewRecorder()

View File

@@ -296,8 +296,3 @@ type GetBlobsResponse struct {
Finalized bool `json:"finalized"`
Data []string `json:"data"` //blobs
}
type SSZQueryRequest struct {
Query string `json:"query"`
IncludeProof bool `json:"include_proof,omitempty"`
}

View File

@@ -173,7 +173,6 @@ go_test(
"//beacon-chain/state/state-native:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/verification:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",

View File

@@ -346,24 +346,13 @@ func (s *Service) notifyNewHeadEvent(
if err != nil {
return errors.Wrap(err, "could not check if node is optimistically synced")
}
parentRoot, err := s.ParentRoot([32]byte(newHeadRoot))
if err != nil {
return errors.Wrap(err, "could not obtain parent root in forkchoice")
}
parentSlot, err := s.RecentBlockSlot(parentRoot)
if err != nil {
return errors.Wrap(err, "could not obtain parent slot in forkchoice")
}
epochTransition := slots.ToEpoch(newHeadSlot) > slots.ToEpoch(parentSlot)
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.NewHead,
Data: &ethpbv1.EventHead{
Slot: newHeadSlot,
Block: newHeadRoot,
State: newHeadStateRoot,
EpochTransition: epochTransition,
EpochTransition: slots.IsEpochStart(newHeadSlot),
PreviousDutyDependentRoot: previousDutyDependentRoot[:],
CurrentDutyDependentRoot: currentDutyDependentRoot[:],
ExecutionOptimistic: isOptimistic,

View File

@@ -162,9 +162,6 @@ func Test_notifyNewHeadEvent(t *testing.T) {
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
newHeadStateRoot := [32]byte{2}
newHeadRoot := [32]byte{3}
st, blk, err = prepareForkchoiceState(t.Context(), 1, newHeadRoot, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
require.NoError(t, srv.notifyNewHeadEvent(t.Context(), 1, bState, newHeadStateRoot[:], newHeadRoot[:]))
events := notifier.ReceivedEvents()
require.Equal(t, 1, len(events))
@@ -199,9 +196,6 @@ func Test_notifyNewHeadEvent(t *testing.T) {
newHeadStateRoot := [32]byte{2}
newHeadRoot := [32]byte{3}
st, blk, err = prepareForkchoiceState(t.Context(), 0, newHeadRoot, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
err = srv.notifyNewHeadEvent(t.Context(), epoch2Start, bState, newHeadStateRoot[:], newHeadRoot[:])
require.NoError(t, err)
events := notifier.ReceivedEvents()
@@ -219,37 +213,6 @@ func Test_notifyNewHeadEvent(t *testing.T) {
}
require.DeepSSZEqual(t, wanted, eventHead)
})
t.Run("epoch transition", func(t *testing.T) {
bState, _ := util.DeterministicGenesisState(t, 10)
srv := testServiceWithDB(t)
srv.SetGenesisTime(time.Now())
notifier := srv.cfg.StateNotifier.(*mock.MockStateNotifier)
srv.originBlockRoot = [32]byte{1}
st, blk, err := prepareForkchoiceState(t.Context(), 0, [32]byte{}, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
newHeadStateRoot := [32]byte{2}
newHeadRoot := [32]byte{3}
st, blk, err = prepareForkchoiceState(t.Context(), 32, newHeadRoot, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
newHeadSlot := params.BeaconConfig().SlotsPerEpoch
require.NoError(t, srv.notifyNewHeadEvent(t.Context(), newHeadSlot, bState, newHeadStateRoot[:], newHeadRoot[:]))
events := notifier.ReceivedEvents()
require.Equal(t, 1, len(events))
eventHead, ok := events[0].Data.(*ethpbv1.EventHead)
require.Equal(t, true, ok)
wanted := &ethpbv1.EventHead{
Slot: newHeadSlot,
Block: newHeadRoot[:],
State: newHeadStateRoot[:],
EpochTransition: true,
PreviousDutyDependentRoot: params.BeaconConfig().ZeroHash[:],
CurrentDutyDependentRoot: srv.originBlockRoot[:],
}
require.DeepSSZEqual(t, wanted, eventHead)
})
}
func TestRetrieveHead_ReadOnly(t *testing.T) {

View File

@@ -472,8 +472,8 @@ func (s *Service) removeStartupState() {
func (s *Service) updateCustodyInfoInDB(slot primitives.Slot) (primitives.Slot, uint64, error) {
isSubscribedToAllDataSubnets := flags.Get().SubscribeAllDataSubnets
cfg := params.BeaconConfig()
custodyRequirement := cfg.CustodyRequirement
beaconConfig := params.BeaconConfig()
custodyRequirement := beaconConfig.CustodyRequirement
// Check if the node was previously subscribed to all data subnets, and if so,
// store the new status accordingly.
@@ -493,7 +493,7 @@ func (s *Service) updateCustodyInfoInDB(slot primitives.Slot) (primitives.Slot,
// Compute the custody group count.
custodyGroupCount := custodyRequirement
if isSubscribedToAllDataSubnets {
custodyGroupCount = cfg.NumberOfCustodyGroups
custodyGroupCount = beaconConfig.NumberOfColumns
}
// Safely compute the fulu fork slot.
@@ -536,11 +536,11 @@ func spawnCountdownIfPreGenesis(ctx context.Context, genesisTime time.Time, db d
}
func fuluForkSlot() (primitives.Slot, error) {
cfg := params.BeaconConfig()
beaconConfig := params.BeaconConfig()
fuluForkEpoch := cfg.FuluForkEpoch
if fuluForkEpoch == cfg.FarFutureEpoch {
return cfg.FarFutureSlot, nil
fuluForkEpoch := beaconConfig.FuluForkEpoch
if fuluForkEpoch == beaconConfig.FarFutureEpoch {
return beaconConfig.FarFutureSlot, nil
}
forkFuluSlot, err := slots.EpochStart(fuluForkEpoch)

View File

@@ -23,11 +23,9 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v6/config/features"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
consensusblocks "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
@@ -598,103 +596,3 @@ func TestNotifyIndex(t *testing.T) {
t.Errorf("Notifier channel did not receive the index")
}
}
func TestUpdateCustodyInfoInDB(t *testing.T) {
const (
fuluForkEpoch = 10
custodyRequirement = uint64(4)
earliestStoredSlot = primitives.Slot(12)
numberOfCustodyGroups = uint64(64)
numberOfColumns = uint64(128)
)
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = fuluForkEpoch
cfg.CustodyRequirement = custodyRequirement
cfg.NumberOfCustodyGroups = numberOfCustodyGroups
cfg.NumberOfColumns = numberOfColumns
params.OverrideBeaconConfig(cfg)
ctx := t.Context()
pbBlock := util.NewBeaconBlock()
pbBlock.Block.Slot = 12
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(pbBlock)
require.NoError(t, err)
roBlock, err := blocks.NewROBlock(signedBeaconBlock)
require.NoError(t, err)
t.Run("CGC increases before fulu", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Before Fulu
// -----------
actualEas, actualCgc, err := service.updateCustodyInfoInDB(15)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, custodyRequirement, actualCgc)
actualEas, actualCgc, err = service.updateCustodyInfoInDB(17)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, custodyRequirement, actualCgc)
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SubscribeAllDataSubnets = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
actualEas, actualCgc, err = service.updateCustodyInfoInDB(19)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
// After Fulu
// ----------
actualEas, actualCgc, err = service.updateCustodyInfoInDB(fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
})
t.Run("CGC increases after fulu", func(t *testing.T) {
service, requirements := minimalTestService(t)
err = requirements.db.SaveBlock(ctx, roBlock)
require.NoError(t, err)
// Before Fulu
// -----------
actualEas, actualCgc, err := service.updateCustodyInfoInDB(15)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, custodyRequirement, actualCgc)
actualEas, actualCgc, err = service.updateCustodyInfoInDB(17)
require.NoError(t, err)
require.Equal(t, earliestStoredSlot, actualEas)
require.Equal(t, custodyRequirement, actualCgc)
// After Fulu
// ----------
resetFlags := flags.Get()
gFlags := new(flags.GlobalFlags)
gFlags.SubscribeAllDataSubnets = true
flags.Init(gFlags)
defer flags.Init(resetFlags)
slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot + 2)
require.NoError(t, err)
require.Equal(t, slot, actualEas)
require.Equal(t, numberOfCustodyGroups, actualCgc)
})
}

View File

@@ -130,14 +130,6 @@ func (dch *mockCustodyManager) UpdateCustodyInfo(earliestAvailableSlot primitive
return earliestAvailableSlot, custodyGroupCount, nil
}
func (dch *mockCustodyManager) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
dch.mut.Lock()
defer dch.mut.Unlock()
dch.earliestAvailableSlot = earliestAvailableSlot
return nil
}
func (dch *mockCustodyManager) CustodyGroupCountFromPeer(peer.ID) uint64 {
return 0
}

View File

@@ -472,36 +472,6 @@ func (s *ChainService) HasBlock(ctx context.Context, rt [32]byte) bool {
return s.InitSyncBlockRoots[rt]
}
func (s *ChainService) AvailableBlocks(ctx context.Context, blockRoots [][32]byte) map[[32]byte]bool {
if s.DB == nil {
return nil
}
count := len(blockRoots)
availableRoots := make(map[[32]byte]bool, count)
notInDBRoots := make([][32]byte, 0, count)
for _, root := range blockRoots {
if s.DB.HasBlock(ctx, root) {
availableRoots[root] = true
continue
}
notInDBRoots = append(notInDBRoots, root)
}
if s.InitSyncBlockRoots == nil {
return availableRoots
}
for _, root := range notInDBRoots {
if s.InitSyncBlockRoots[root] {
availableRoots[root] = true
}
}
return availableRoots
}
// RecentBlockSlot mocks the same method in the chain service.
func (s *ChainService) RecentBlockSlot([32]byte) (primitives.Slot, error) {
return s.BlockSlot, nil

View File

@@ -3,7 +3,6 @@ package blockchain
import (
"context"
"fmt"
"slices"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filters"
"github.com/OffchainLabs/prysm/v6/config/params"
@@ -82,10 +81,12 @@ func (v *WeakSubjectivityVerifier) VerifyWeakSubjectivity(ctx context.Context, f
if err != nil {
return errors.Wrap(err, "error while retrieving block roots to verify weak subjectivity")
}
if slices.Contains(roots, v.root) {
log.Info("Weak subjectivity check has passed!!")
v.verified = true
return nil
for _, root := range roots {
if v.root == root {
log.Info("Weak subjectivity check has passed!!")
v.verified = true
return nil
}
}
return errors.Wrap(errWSBlockNotFoundInEpoch, fmt.Sprintf("root=%#x, epoch=%d", v.root, v.epoch))
}

View File

@@ -12,46 +12,6 @@ import (
"github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1/attestation"
)
// ConvertToAltair converts a Phase 0 beacon state to an Altair beacon state.
func ConvertToAltair(state state.BeaconState) (state.BeaconState, error) {
epoch := time.CurrentEpoch(state)
numValidators := state.NumValidators()
s := &ethpb.BeaconStateAltair{
GenesisTime: uint64(state.GenesisTime().Unix()),
GenesisValidatorsRoot: state.GenesisValidatorsRoot(),
Slot: state.Slot(),
Fork: &ethpb.Fork{
PreviousVersion: state.Fork().CurrentVersion,
CurrentVersion: params.BeaconConfig().AltairForkVersion,
Epoch: epoch,
},
LatestBlockHeader: state.LatestBlockHeader(),
BlockRoots: state.BlockRoots(),
StateRoots: state.StateRoots(),
HistoricalRoots: state.HistoricalRoots(),
Eth1Data: state.Eth1Data(),
Eth1DataVotes: state.Eth1DataVotes(),
Eth1DepositIndex: state.Eth1DepositIndex(),
Validators: state.Validators(),
Balances: state.Balances(),
RandaoMixes: state.RandaoMixes(),
Slashings: state.Slashings(),
PreviousEpochParticipation: make([]byte, numValidators),
CurrentEpochParticipation: make([]byte, numValidators),
JustificationBits: state.JustificationBits(),
PreviousJustifiedCheckpoint: state.PreviousJustifiedCheckpoint(),
CurrentJustifiedCheckpoint: state.CurrentJustifiedCheckpoint(),
FinalizedCheckpoint: state.FinalizedCheckpoint(),
InactivityScores: make([]uint64, numValidators),
}
newState, err := state_native.InitializeFromProtoUnsafeAltair(s)
if err != nil {
return nil, err
}
return newState, nil
}
// UpgradeToAltair updates input state to return the version Altair state.
//
// Spec code:
@@ -104,7 +64,39 @@ func ConvertToAltair(state state.BeaconState) (state.BeaconState, error) {
// post.next_sync_committee = get_next_sync_committee(post)
// return post
func UpgradeToAltair(ctx context.Context, state state.BeaconState) (state.BeaconState, error) {
newState, err := ConvertToAltair(state)
epoch := time.CurrentEpoch(state)
numValidators := state.NumValidators()
s := &ethpb.BeaconStateAltair{
GenesisTime: uint64(state.GenesisTime().Unix()),
GenesisValidatorsRoot: state.GenesisValidatorsRoot(),
Slot: state.Slot(),
Fork: &ethpb.Fork{
PreviousVersion: state.Fork().CurrentVersion,
CurrentVersion: params.BeaconConfig().AltairForkVersion,
Epoch: epoch,
},
LatestBlockHeader: state.LatestBlockHeader(),
BlockRoots: state.BlockRoots(),
StateRoots: state.StateRoots(),
HistoricalRoots: state.HistoricalRoots(),
Eth1Data: state.Eth1Data(),
Eth1DataVotes: state.Eth1DataVotes(),
Eth1DepositIndex: state.Eth1DepositIndex(),
Validators: state.Validators(),
Balances: state.Balances(),
RandaoMixes: state.RandaoMixes(),
Slashings: state.Slashings(),
PreviousEpochParticipation: make([]byte, numValidators),
CurrentEpochParticipation: make([]byte, numValidators),
JustificationBits: state.JustificationBits(),
PreviousJustifiedCheckpoint: state.PreviousJustifiedCheckpoint(),
CurrentJustifiedCheckpoint: state.CurrentJustifiedCheckpoint(),
FinalizedCheckpoint: state.FinalizedCheckpoint(),
InactivityScores: make([]uint64, numValidators),
}
newState, err := state_native.InitializeFromProtoUnsafeAltair(s)
if err != nil {
return nil, err
}

View File

@@ -15,129 +15,6 @@ import (
"github.com/pkg/errors"
)
// ConvertToElectra converts a Deneb beacon state to an Electra beacon state. It does not perform any fork logic.
func ConvertToElectra(beaconState state.BeaconState) (state.BeaconState, error) {
currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
if err != nil {
return nil, err
}
nextSyncCommittee, err := beaconState.NextSyncCommittee()
if err != nil {
return nil, err
}
prevEpochParticipation, err := beaconState.PreviousEpochParticipation()
if err != nil {
return nil, err
}
currentEpochParticipation, err := beaconState.CurrentEpochParticipation()
if err != nil {
return nil, err
}
inactivityScores, err := beaconState.InactivityScores()
if err != nil {
return nil, err
}
payloadHeader, err := beaconState.LatestExecutionPayloadHeader()
if err != nil {
return nil, err
}
txRoot, err := payloadHeader.TransactionsRoot()
if err != nil {
return nil, err
}
wdRoot, err := payloadHeader.WithdrawalsRoot()
if err != nil {
return nil, err
}
wi, err := beaconState.NextWithdrawalIndex()
if err != nil {
return nil, err
}
vi, err := beaconState.NextWithdrawalValidatorIndex()
if err != nil {
return nil, err
}
summaries, err := beaconState.HistoricalSummaries()
if err != nil {
return nil, err
}
excessBlobGas, err := payloadHeader.ExcessBlobGas()
if err != nil {
return nil, err
}
blobGasUsed, err := payloadHeader.BlobGasUsed()
if err != nil {
return nil, err
}
s := &ethpb.BeaconStateElectra{
GenesisTime: uint64(beaconState.GenesisTime().Unix()),
GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
Slot: beaconState.Slot(),
Fork: &ethpb.Fork{
PreviousVersion: beaconState.Fork().CurrentVersion,
CurrentVersion: params.BeaconConfig().ElectraForkVersion,
Epoch: time.CurrentEpoch(beaconState),
},
LatestBlockHeader: beaconState.LatestBlockHeader(),
BlockRoots: beaconState.BlockRoots(),
StateRoots: beaconState.StateRoots(),
HistoricalRoots: beaconState.HistoricalRoots(),
Eth1Data: beaconState.Eth1Data(),
Eth1DataVotes: beaconState.Eth1DataVotes(),
Eth1DepositIndex: beaconState.Eth1DepositIndex(),
Validators: beaconState.Validators(),
Balances: beaconState.Balances(),
RandaoMixes: beaconState.RandaoMixes(),
Slashings: beaconState.Slashings(),
PreviousEpochParticipation: prevEpochParticipation,
CurrentEpochParticipation: currentEpochParticipation,
JustificationBits: beaconState.JustificationBits(),
PreviousJustifiedCheckpoint: beaconState.PreviousJustifiedCheckpoint(),
CurrentJustifiedCheckpoint: beaconState.CurrentJustifiedCheckpoint(),
FinalizedCheckpoint: beaconState.FinalizedCheckpoint(),
InactivityScores: inactivityScores,
CurrentSyncCommittee: currentSyncCommittee,
NextSyncCommittee: nextSyncCommittee,
LatestExecutionPayloadHeader: &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: payloadHeader.ParentHash(),
FeeRecipient: payloadHeader.FeeRecipient(),
StateRoot: payloadHeader.StateRoot(),
ReceiptsRoot: payloadHeader.ReceiptsRoot(),
LogsBloom: payloadHeader.LogsBloom(),
PrevRandao: payloadHeader.PrevRandao(),
BlockNumber: payloadHeader.BlockNumber(),
GasLimit: payloadHeader.GasLimit(),
GasUsed: payloadHeader.GasUsed(),
Timestamp: payloadHeader.Timestamp(),
ExtraData: payloadHeader.ExtraData(),
BaseFeePerGas: payloadHeader.BaseFeePerGas(),
BlockHash: payloadHeader.BlockHash(),
TransactionsRoot: txRoot,
WithdrawalsRoot: wdRoot,
ExcessBlobGas: excessBlobGas,
BlobGasUsed: blobGasUsed,
},
NextWithdrawalIndex: wi,
NextWithdrawalValidatorIndex: vi,
HistoricalSummaries: summaries,
DepositRequestsStartIndex: params.BeaconConfig().UnsetDepositRequestsStartIndex,
DepositBalanceToConsume: 0,
EarliestConsolidationEpoch: helpers.ActivationExitEpoch(slots.ToEpoch(beaconState.Slot())),
PendingDeposits: make([]*ethpb.PendingDeposit, 0),
PendingPartialWithdrawals: make([]*ethpb.PendingPartialWithdrawal, 0),
PendingConsolidations: make([]*ethpb.PendingConsolidation, 0),
}
// need to cast the beaconState to use in helper functions
post, err := state_native.InitializeFromProtoUnsafeElectra(s)
if err != nil {
return nil, errors.Wrap(err, "failed to initialize post electra beaconState")
}
return post, nil
}
// UpgradeToElectra takes a generic input state and returns the upgraded Electra version of the state.
//
// nolint:dupword
@@ -249,7 +126,55 @@ func ConvertToElectra(beaconState state.BeaconState) (state.BeaconState, error)
//
// return post
func UpgradeToElectra(beaconState state.BeaconState) (state.BeaconState, error) {
s, err := ConvertToElectra(beaconState)
currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
if err != nil {
return nil, err
}
nextSyncCommittee, err := beaconState.NextSyncCommittee()
if err != nil {
return nil, err
}
prevEpochParticipation, err := beaconState.PreviousEpochParticipation()
if err != nil {
return nil, err
}
currentEpochParticipation, err := beaconState.CurrentEpochParticipation()
if err != nil {
return nil, err
}
inactivityScores, err := beaconState.InactivityScores()
if err != nil {
return nil, err
}
payloadHeader, err := beaconState.LatestExecutionPayloadHeader()
if err != nil {
return nil, err
}
txRoot, err := payloadHeader.TransactionsRoot()
if err != nil {
return nil, err
}
wdRoot, err := payloadHeader.WithdrawalsRoot()
if err != nil {
return nil, err
}
wi, err := beaconState.NextWithdrawalIndex()
if err != nil {
return nil, err
}
vi, err := beaconState.NextWithdrawalValidatorIndex()
if err != nil {
return nil, err
}
summaries, err := beaconState.HistoricalSummaries()
if err != nil {
return nil, err
}
excessBlobGas, err := payloadHeader.ExcessBlobGas()
if err != nil {
return nil, err
}
blobGasUsed, err := payloadHeader.BlobGasUsed()
if err != nil {
return nil, err
}
@@ -281,38 +206,97 @@ func UpgradeToElectra(beaconState state.BeaconState) (state.BeaconState, error)
if err != nil {
return nil, errors.Wrap(err, "failed to get total active balance")
}
if err := s.SetExitBalanceToConsume(helpers.ActivationExitChurnLimit(primitives.Gwei(tab))); err != nil {
return nil, errors.Wrap(err, "failed to set exit balance to consume")
}
if err := s.SetEarliestExitEpoch(earliestExitEpoch); err != nil {
return nil, errors.Wrap(err, "failed to set earliest exit epoch")
}
if err := s.SetConsolidationBalanceToConsume(helpers.ConsolidationChurnLimit(primitives.Gwei(tab))); err != nil {
return nil, errors.Wrap(err, "failed to set consolidation balance to consume")
s := &ethpb.BeaconStateElectra{
GenesisTime: uint64(beaconState.GenesisTime().Unix()),
GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
Slot: beaconState.Slot(),
Fork: &ethpb.Fork{
PreviousVersion: beaconState.Fork().CurrentVersion,
CurrentVersion: params.BeaconConfig().ElectraForkVersion,
Epoch: time.CurrentEpoch(beaconState),
},
LatestBlockHeader: beaconState.LatestBlockHeader(),
BlockRoots: beaconState.BlockRoots(),
StateRoots: beaconState.StateRoots(),
HistoricalRoots: beaconState.HistoricalRoots(),
Eth1Data: beaconState.Eth1Data(),
Eth1DataVotes: beaconState.Eth1DataVotes(),
Eth1DepositIndex: beaconState.Eth1DepositIndex(),
Validators: beaconState.Validators(),
Balances: beaconState.Balances(),
RandaoMixes: beaconState.RandaoMixes(),
Slashings: beaconState.Slashings(),
PreviousEpochParticipation: prevEpochParticipation,
CurrentEpochParticipation: currentEpochParticipation,
JustificationBits: beaconState.JustificationBits(),
PreviousJustifiedCheckpoint: beaconState.PreviousJustifiedCheckpoint(),
CurrentJustifiedCheckpoint: beaconState.CurrentJustifiedCheckpoint(),
FinalizedCheckpoint: beaconState.FinalizedCheckpoint(),
InactivityScores: inactivityScores,
CurrentSyncCommittee: currentSyncCommittee,
NextSyncCommittee: nextSyncCommittee,
LatestExecutionPayloadHeader: &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: payloadHeader.ParentHash(),
FeeRecipient: payloadHeader.FeeRecipient(),
StateRoot: payloadHeader.StateRoot(),
ReceiptsRoot: payloadHeader.ReceiptsRoot(),
LogsBloom: payloadHeader.LogsBloom(),
PrevRandao: payloadHeader.PrevRandao(),
BlockNumber: payloadHeader.BlockNumber(),
GasLimit: payloadHeader.GasLimit(),
GasUsed: payloadHeader.GasUsed(),
Timestamp: payloadHeader.Timestamp(),
ExtraData: payloadHeader.ExtraData(),
BaseFeePerGas: payloadHeader.BaseFeePerGas(),
BlockHash: payloadHeader.BlockHash(),
TransactionsRoot: txRoot,
WithdrawalsRoot: wdRoot,
ExcessBlobGas: excessBlobGas,
BlobGasUsed: blobGasUsed,
},
NextWithdrawalIndex: wi,
NextWithdrawalValidatorIndex: vi,
HistoricalSummaries: summaries,
DepositRequestsStartIndex: params.BeaconConfig().UnsetDepositRequestsStartIndex,
DepositBalanceToConsume: 0,
ExitBalanceToConsume: helpers.ActivationExitChurnLimit(primitives.Gwei(tab)),
EarliestExitEpoch: earliestExitEpoch,
ConsolidationBalanceToConsume: helpers.ConsolidationChurnLimit(primitives.Gwei(tab)),
EarliestConsolidationEpoch: helpers.ActivationExitEpoch(slots.ToEpoch(beaconState.Slot())),
PendingDeposits: make([]*ethpb.PendingDeposit, 0),
PendingPartialWithdrawals: make([]*ethpb.PendingPartialWithdrawal, 0),
PendingConsolidations: make([]*ethpb.PendingConsolidation, 0),
}
// Sorting preActivationIndices based on a custom criterion
vals := s.Validators()
sort.Slice(preActivationIndices, func(i, j int) bool {
// Comparing based on ActivationEligibilityEpoch and then by index if the epochs are the same
if vals[preActivationIndices[i]].ActivationEligibilityEpoch == vals[preActivationIndices[j]].ActivationEligibilityEpoch {
if s.Validators[preActivationIndices[i]].ActivationEligibilityEpoch == s.Validators[preActivationIndices[j]].ActivationEligibilityEpoch {
return preActivationIndices[i] < preActivationIndices[j]
}
return vals[preActivationIndices[i]].ActivationEligibilityEpoch < vals[preActivationIndices[j]].ActivationEligibilityEpoch
return s.Validators[preActivationIndices[i]].ActivationEligibilityEpoch < s.Validators[preActivationIndices[j]].ActivationEligibilityEpoch
})
// need to cast the beaconState to use in helper functions
post, err := state_native.InitializeFromProtoUnsafeElectra(s)
if err != nil {
return nil, errors.Wrap(err, "failed to initialize post electra beaconState")
}
for _, index := range preActivationIndices {
if err := QueueEntireBalanceAndResetValidator(s, index); err != nil {
if err := QueueEntireBalanceAndResetValidator(post, index); err != nil {
return nil, errors.Wrap(err, "failed to queue entire balance and reset validator")
}
}
// Ensure early adopters of compounding credentials go through the activation churn
for _, index := range compoundWithdrawalIndices {
if err := QueueExcessActiveBalance(s, index); err != nil {
if err := QueueExcessActiveBalance(post, index); err != nil {
return nil, errors.Wrap(err, "failed to queue excess active balance")
}
}
return s, nil
return post, nil
}
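
A worked sketch of the churn values seeded in UpgradeToElectra above (ExitBalanceToConsume and ConsolidationBalanceToConsume), using assumed Electra/mainnet parameters rather than this package's helpers; the arithmetic mirrors ActivationExitChurnLimit and ConsolidationChurnLimit, but every constant here is illustrative only.

package main

import "fmt"

func main() {
	const (
		gweiPerEth          = uint64(1_000_000_000)
		churnLimitQuotient  = uint64(65536)           // assumed mainnet CHURN_LIMIT_QUOTIENT
		minChurnElectra     = 128 * gweiPerEth        // assumed MIN_PER_EPOCH_CHURN_LIMIT_ELECTRA
		maxActivationExit   = 256 * gweiPerEth        // assumed MAX_PER_EPOCH_ACTIVATION_EXIT_CHURN_LIMIT
		effectiveBalanceInc = 1 * gweiPerEth          // 1 ETH effective balance increment
		tab                 = 30_000_000 * gweiPerEth // example total active balance: 30M ETH
	)
	balanceChurn := max(minChurnElectra, tab/churnLimitQuotient)
	balanceChurn -= balanceChurn % effectiveBalanceInc
	activationExitChurn := min(maxActivationExit, balanceChurn)
	consolidationChurn := balanceChurn - activationExitChurn
	fmt.Println(activationExitChurn/gweiPerEth, consolidationChurn/gweiPerEth) // 256 201
}
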

View File

@@ -7,7 +7,6 @@ go_library(
visibility = [
"//beacon-chain:__subpackages__",
"//cmd/prysmctl/testnet:__pkg__",
"//consensus-types/hdiff:__subpackages__",
"//testing/spectest:__subpackages__",
"//validator/client:__pkg__",
],

View File

@@ -15,7 +15,6 @@ go_library(
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",

View File

@@ -8,7 +8,6 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/time/slots"
@@ -18,25 +17,6 @@ import (
// UpgradeToFulu takes a generic input state and returns the upgraded Fulu version of the state.
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/fork.md#upgrading-the-state
func UpgradeToFulu(ctx context.Context, beaconState state.BeaconState) (state.BeaconState, error) {
s, err := ConvertToFulu(beaconState)
if err != nil {
return nil, errors.Wrap(err, "could not convert to fulu")
}
proposerLookahead, err := helpers.InitializeProposerLookahead(ctx, beaconState, slots.ToEpoch(beaconState.Slot()))
if err != nil {
return nil, err
}
pl := make([]primitives.ValidatorIndex, len(proposerLookahead))
for i, v := range proposerLookahead {
pl[i] = primitives.ValidatorIndex(v)
}
if err := s.SetProposerLookahead(pl); err != nil {
return nil, errors.Wrap(err, "failed to set proposer lookahead")
}
return s, nil
}
func ConvertToFulu(beaconState state.BeaconState) (state.BeaconState, error) {
currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
if err != nil {
return nil, err
@@ -125,6 +105,11 @@ func ConvertToFulu(beaconState state.BeaconState) (state.BeaconState, error) {
if err != nil {
return nil, err
}
proposerLookahead, err := helpers.InitializeProposerLookahead(ctx, beaconState, slots.ToEpoch(beaconState.Slot()))
if err != nil {
return nil, err
}
s := &ethpb.BeaconStateFulu{
GenesisTime: uint64(beaconState.GenesisTime().Unix()),
GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
@@ -186,6 +171,14 @@ func ConvertToFulu(beaconState state.BeaconState) (state.BeaconState, error) {
PendingDeposits: pendingDeposits,
PendingPartialWithdrawals: pendingPartialWithdrawals,
PendingConsolidations: pendingConsolidations,
ProposerLookahead: proposerLookahead,
}
return state_native.InitializeFromProtoUnsafeFulu(s)
// Need to cast the beaconState to use in helper functions
post, err := state_native.InitializeFromProtoUnsafeFulu(s)
if err != nil {
return nil, errors.Wrap(err, "failed to initialize post fulu beaconState")
}
return post, nil
}

View File

@@ -401,7 +401,7 @@ func ComputeProposerIndex(bState state.ReadOnlyBeaconState, activeIndices []prim
return 0, errors.New("empty active indices list")
}
hashFunc := hash.CustomSHA256Hasher()
cfg := params.BeaconConfig()
beaconConfig := params.BeaconConfig()
seedBuffer := make([]byte, len(seed)+8)
copy(seedBuffer, seed[:])
@@ -426,14 +426,14 @@ func ComputeProposerIndex(bState state.ReadOnlyBeaconState, activeIndices []prim
offset := (i % 16) * 2
randomValue := uint64(randomBytes[offset]) | uint64(randomBytes[offset+1])<<8
if effectiveBal*fieldparams.MaxRandomValueElectra >= cfg.MaxEffectiveBalanceElectra*randomValue {
if effectiveBal*fieldparams.MaxRandomValueElectra >= beaconConfig.MaxEffectiveBalanceElectra*randomValue {
return candidateIndex, nil
}
} else {
binary.LittleEndian.PutUint64(seedBuffer[len(seed):], i/32)
randomByte := hashFunc(seedBuffer)[i%32]
if effectiveBal*fieldparams.MaxRandomByte >= cfg.MaxEffectiveBalance*uint64(randomByte) {
if effectiveBal*fieldparams.MaxRandomByte >= beaconConfig.MaxEffectiveBalance*uint64(randomByte) {
return candidateIndex, nil
}
}
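
The Electra branch above is a rejection-sampling step: a candidate is accepted when effectiveBal*MaxRandomValueElectra >= MaxEffectiveBalanceElectra*randomValue, i.e. with probability proportional to its effective balance. A minimal self-contained sketch with assumed mainnet-style constants (not this package's fieldparams):

package main

import "fmt"

func main() {
	const (
		maxRandomValue = uint64(1<<16 - 1)         // Electra samples a 2-byte random value
		maxEB          = uint64(2_048_000_000_000) // assumed MaxEffectiveBalanceElectra in Gwei (2048 ETH)
		eb             = uint64(32_000_000_000)    // a 32 ETH effective-balance candidate
	)
	accepted := 0
	for rv := uint64(0); rv <= maxRandomValue; rv++ {
		if eb*maxRandomValue >= maxEB*rv {
			accepted++
		}
	}
	fmt.Printf("per-trial acceptance: %.4f\n", float64(accepted)/float64(maxRandomValue+1)) // ~0.0156 = 32/2048
}
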

View File

@@ -201,3 +201,14 @@ func ParseWeakSubjectivityInputString(wsCheckpointString string) (*v1alpha1.Chec
Root: bRoot,
}, nil
}
// MinEpochsForBlockRequests computes the number of epochs of block history that we need to maintain,
// relative to the current epoch, per the p2p specs. This is used to compute the slot where backfill is complete.
// value defined:
// https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#configuration
// MIN_VALIDATOR_WITHDRAWABILITY_DELAY + CHURN_LIMIT_QUOTIENT // 2 (= 33024, ~5 months)
// detailed rationale: https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#why-are-blocksbyrange-requests-only-required-to-be-served-for-the-latest-min_epochs_for_block_requests-epochs
func MinEpochsForBlockRequests() primitives.Epoch {
return params.BeaconConfig().MinValidatorWithdrawabilityDelay +
primitives.Epoch(params.BeaconConfig().ChurnLimitQuotient/2)
}
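
A worked check of the formula above, assuming mainnet values (MIN_VALIDATOR_WITHDRAWABILITY_DELAY = 256, CHURN_LIMIT_QUOTIENT = 65536); the helper should then return 256 + 65536/2 = 33024 epochs (~5 months):

package main

import "fmt"

func main() {
	const (
		minValidatorWithdrawabilityDelay = uint64(256)   // assumed mainnet value, in epochs
		churnLimitQuotient               = uint64(65536) // assumed mainnet value
	)
	fmt.Println(minValidatorWithdrawabilityDelay + churnLimitQuotient/2) // 33024
}
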

View File

@@ -286,3 +286,20 @@ func genState(t *testing.T, valCount, avgBalance uint64) state.BeaconState {
return beaconState
}
func TestMinEpochsForBlockRequests(t *testing.T) {
helpers.ClearCache()
params.SetActiveTestCleanup(t, params.MainnetConfig())
var expected primitives.Epoch = 33024
// expected value of 33024 via spec commentary:
// https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#why-are-blocksbyrange-requests-only-required-to-be-served-for-the-latest-min_epochs_for_block_requests-epochs
// MIN_EPOCHS_FOR_BLOCK_REQUESTS is calculated using the arithmetic from compute_weak_subjectivity_period found in the weak subjectivity guide. Specifically to find this max epoch range, we use the worst case event of a very large validator size (>= MIN_PER_EPOCH_CHURN_LIMIT * CHURN_LIMIT_QUOTIENT).
//
// MIN_EPOCHS_FOR_BLOCK_REQUESTS = (
// MIN_VALIDATOR_WITHDRAWABILITY_DELAY
// + MAX_SAFETY_DECAY * CHURN_LIMIT_QUOTIENT // (2 * 100)
// )
//
// Where MAX_SAFETY_DECAY = 100 and thus MIN_EPOCHS_FOR_BLOCK_REQUESTS = 33024 (~5 months).
require.Equal(t, expected, helpers.MinEpochsForBlockRequests())
}

View File

@@ -89,14 +89,14 @@ func CustodyGroups(nodeId enode.ID, custodyGroupCount uint64) ([]uint64, error)
// ComputeColumnsForCustodyGroup computes the columns for a given custody group.
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/das-core.md#compute_columns_for_custody_group
func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
cfg := params.BeaconConfig()
numberOfCustodyGroups := cfg.NumberOfCustodyGroups
beaconConfig := params.BeaconConfig()
numberOfCustodyGroups := beaconConfig.NumberOfCustodyGroups
if custodyGroup >= numberOfCustodyGroups {
return nil, ErrCustodyGroupTooLarge
}
numberOfColumns := cfg.NumberOfColumns
numberOfColumns := beaconConfig.NumberOfColumns
columnsPerGroup := numberOfColumns / numberOfCustodyGroups
@@ -112,9 +112,9 @@ func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
// ComputeCustodyGroupForColumn computes the custody group for a given column.
// It is the reciprocal function of ComputeColumnsForCustodyGroup.
func ComputeCustodyGroupForColumn(columnIndex uint64) (uint64, error) {
cfg := params.BeaconConfig()
numberOfColumns := cfg.NumberOfColumns
numberOfCustodyGroups := cfg.NumberOfCustodyGroups
beaconConfig := params.BeaconConfig()
numberOfColumns := beaconConfig.NumberOfColumns
numberOfCustodyGroups := beaconConfig.NumberOfCustodyGroups
if columnIndex >= numberOfColumns {
return 0, ErrIndexTooLarge
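
A self-contained sketch of the mapping both functions above implement, under assumed mainnet parameters (128 columns, 128 custody groups, so columnsPerGroup is 1); the formulas follow the linked das-core spec and do not call this package:

package main

import "fmt"

func main() {
	const (
		numberOfColumns       = uint64(128) // assumed mainnet value
		numberOfCustodyGroups = uint64(128) // assumed mainnet value
	)
	columnsPerGroup := numberOfColumns / numberOfCustodyGroups
	group := uint64(37)
	columns := make([]uint64, 0, columnsPerGroup)
	for i := uint64(0); i < columnsPerGroup; i++ {
		columns = append(columns, numberOfCustodyGroups*i+group)
	}
	fmt.Println(columns)                            // [37]
	fmt.Println(columns[0] % numberOfCustodyGroups) // 37: the reciprocal mapping recovers the group
}
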

View File

@@ -43,13 +43,6 @@ func VerifyDataColumnSidecar(sidecar blocks.RODataColumn) error {
return ErrNoKzgCommitments
}
// A sidecar with more commitments than the max blob count for this block is invalid.
slot := sidecar.Slot()
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
if len(sidecar.KzgCommitments) > maxBlobsPerBlock {
return ErrTooManyCommitments
}
// The column length must be equal to the number of commitments/proofs.
if len(sidecar.Column) != len(sidecar.KzgCommitments) || len(sidecar.Column) != len(sidecar.KzgProofs) {
return ErrMismatchLength
@@ -79,30 +72,10 @@ func VerifyDataColumnsSidecarKZGProofs(sidecars []blocks.RODataColumn) error {
for _, sidecar := range sidecars {
for i := range sidecar.Column {
var (
commitment kzg.Bytes48
cell kzg.Cell
proof kzg.Bytes48
)
commitmentBytes := sidecar.KzgCommitments[i]
cellBytes := sidecar.Column[i]
proofBytes := sidecar.KzgProofs[i]
if len(commitmentBytes) != len(commitment) ||
len(cellBytes) != len(cell) ||
len(proofBytes) != len(proof) {
return ErrMismatchLength
}
copy(commitment[:], commitmentBytes)
copy(cell[:], cellBytes)
copy(proof[:], proofBytes)
commitments = append(commitments, commitment)
commitments = append(commitments, kzg.Bytes48(sidecar.KzgCommitments[i]))
indices = append(indices, sidecar.Index)
cells = append(cells, cell)
proofs = append(proofs, proof)
cells = append(cells, kzg.Cell(sidecar.Column[i]))
proofs = append(proofs, kzg.Bytes48(sidecar.KzgProofs[i]))
}
}
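
The direct casts above rely on Go's slice-to-array conversion, which panics when the slice is shorter than the array; that is what the removed explicit length checks guarded against (assuming kzg.Bytes48 and kzg.Cell are fixed-size array types, as in c-kzg-4844). A small illustrative sketch:

package main

import "fmt"

type bytes48 [48]byte

func main() {
	ok := make([]byte, 48)
	_ = bytes48(ok) // lengths match: fine

	defer func() { fmt.Println("panicked:", recover() != nil) }() // panicked: true
	short := make([]byte, 47)
	_ = bytes48(short) // panics: cannot convert slice with length 47 to array of length 48
}
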

View File

@@ -18,46 +18,38 @@ import (
)
func TestVerifyDataColumnSidecar(t *testing.T) {
testCases := []struct {
name string
index uint64
blobCount int
commitmentCount int
proofCount int
maxBlobsPerBlock uint64
expectedError error
}{
{name: "index too large", index: 1_000_000, expectedError: peerdas.ErrIndexTooLarge},
{name: "no commitments", expectedError: peerdas.ErrNoKzgCommitments},
{name: "too many commitments", blobCount: 10, commitmentCount: 10, proofCount: 10, maxBlobsPerBlock: 2, expectedError: peerdas.ErrTooManyCommitments},
{name: "commitments size mismatch", commitmentCount: 1, maxBlobsPerBlock: 1, expectedError: peerdas.ErrMismatchLength},
{name: "proofs size mismatch", blobCount: 1, commitmentCount: 1, maxBlobsPerBlock: 1, expectedError: peerdas.ErrMismatchLength},
{name: "nominal", blobCount: 1, commitmentCount: 1, proofCount: 1, maxBlobsPerBlock: 1, expectedError: nil},
}
t.Run("index too large", func(t *testing.T) {
roSidecar := createTestSidecar(t, 1_000_000, nil, nil, nil)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.ErrorIs(t, err, peerdas.ErrIndexTooLarge)
})
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = 0
cfg.BlobSchedule = []params.BlobScheduleEntry{{Epoch: 0, MaxBlobsPerBlock: tc.maxBlobsPerBlock}}
params.OverrideBeaconConfig(cfg)
t.Run("no commitments", func(t *testing.T) {
roSidecar := createTestSidecar(t, 0, nil, nil, nil)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.ErrorIs(t, err, peerdas.ErrNoKzgCommitments)
})
column := make([][]byte, tc.blobCount)
kzgCommitments := make([][]byte, tc.commitmentCount)
kzgProof := make([][]byte, tc.proofCount)
t.Run("KZG commitments size mismatch", func(t *testing.T) {
kzgCommitments := make([][]byte, 1)
roSidecar := createTestSidecar(t, 0, nil, kzgCommitments, nil)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.ErrorIs(t, err, peerdas.ErrMismatchLength)
})
roSidecar := createTestSidecar(t, tc.index, column, kzgCommitments, kzgProof)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
t.Run("KZG proofs size mismatch", func(t *testing.T) {
column, kzgCommitments := make([][]byte, 1), make([][]byte, 1)
roSidecar := createTestSidecar(t, 0, column, kzgCommitments, nil)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.ErrorIs(t, err, peerdas.ErrMismatchLength)
})
if tc.expectedError != nil {
require.ErrorIs(t, err, tc.expectedError)
return
}
require.NoError(t, err)
})
}
t.Run("nominal", func(t *testing.T) {
column, kzgCommitments, kzgProofs := make([][]byte, 1), make([][]byte, 1), make([][]byte, 1)
roSidecar := createTestSidecar(t, 0, column, kzgCommitments, kzgProofs)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.NoError(t, err)
})
}
func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
@@ -68,14 +60,6 @@ func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
err := kzg.Start()
require.NoError(t, err)
t.Run("size mismatch", func(t *testing.T) {
sidecars := generateRandomSidecars(t, seed, blobCount)
sidecars[0].Column[0] = sidecars[0].Column[0][:len(sidecars[0].Column[0])-1] // Remove one byte to create size mismatch
err := peerdas.VerifyDataColumnsSidecarKZGProofs(sidecars)
require.ErrorIs(t, err, peerdas.ErrMismatchLength)
})
t.Run("invalid proof", func(t *testing.T) {
sidecars := generateRandomSidecars(t, seed, blobCount)
sidecars[0].Column[0][0]++ // It is OK to overflow

View File

@@ -84,10 +84,10 @@ func ValidatorsCustodyRequirement(state beaconState.ReadOnlyBeaconState, validat
totalNodeBalance += validator.EffectiveBalance()
}
cfg := params.BeaconConfig()
numberOfCustodyGroups := cfg.NumberOfCustodyGroups
validatorCustodyRequirement := cfg.ValidatorCustodyRequirement
balancePerAdditionalCustodyGroup := cfg.BalancePerAdditionalCustodyGroup
beaconConfig := params.BeaconConfig()
numberOfCustodyGroups := beaconConfig.NumberOfCustodyGroups
validatorCustodyRequirement := beaconConfig.ValidatorCustodyRequirement
balancePerAdditionalCustodyGroup := beaconConfig.BalancePerAdditionalCustodyGroup
count := totalNodeBalance / balancePerAdditionalCustodyGroup
return min(max(count, validatorCustodyRequirement), numberOfCustodyGroups), nil
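
A worked sketch of the clamp above under assumed Fulu parameters (VALIDATOR_CUSTODY_REQUIREMENT = 8, BALANCE_PER_ADDITIONAL_CUSTODY_GROUP = 32 ETH, 128 custody groups): the count scales with the node's total effective balance and is clamped to [requirement, all groups].

package main

import "fmt"

func requirement(totalNodeBalanceGwei uint64) uint64 {
	const (
		balancePerGroup       = uint64(32_000_000_000) // assumed BalancePerAdditionalCustodyGroup (32 ETH)
		validatorRequirement  = uint64(8)              // assumed ValidatorCustodyRequirement
		numberOfCustodyGroups = uint64(128)            // assumed NumberOfCustodyGroups
	)
	count := totalNodeBalanceGwei / balancePerGroup
	return min(max(count, validatorRequirement), numberOfCustodyGroups)
}

func main() {
	fmt.Println(requirement(3 * 32_000_000_000))    // 8: small nodes keep the floor
	fmt.Println(requirement(40 * 32_000_000_000))   // 40: grows with attached stake
	fmt.Println(requirement(5000 * 32_000_000_000)) // 128: capped at every custody group
}
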

View File

@@ -196,7 +196,7 @@ func TestAltairCompatible(t *testing.T) {
}
func TestCanUpgradeTo(t *testing.T) {
cfg := params.BeaconConfig()
beaconConfig := params.BeaconConfig()
outerTestCases := []struct {
name string
@@ -205,32 +205,32 @@ func TestCanUpgradeTo(t *testing.T) {
}{
{
name: "Altair",
forkEpoch: &cfg.AltairForkEpoch,
forkEpoch: &beaconConfig.AltairForkEpoch,
upgradeFunc: time.CanUpgradeToAltair,
},
{
name: "Bellatrix",
forkEpoch: &cfg.BellatrixForkEpoch,
forkEpoch: &beaconConfig.BellatrixForkEpoch,
upgradeFunc: time.CanUpgradeToBellatrix,
},
{
name: "Capella",
forkEpoch: &cfg.CapellaForkEpoch,
forkEpoch: &beaconConfig.CapellaForkEpoch,
upgradeFunc: time.CanUpgradeToCapella,
},
{
name: "Deneb",
forkEpoch: &cfg.DenebForkEpoch,
forkEpoch: &beaconConfig.DenebForkEpoch,
upgradeFunc: time.CanUpgradeToDeneb,
},
{
name: "Electra",
forkEpoch: &cfg.ElectraForkEpoch,
forkEpoch: &beaconConfig.ElectraForkEpoch,
upgradeFunc: time.CanUpgradeToElectra,
},
{
name: "Fulu",
forkEpoch: &cfg.FuluForkEpoch,
forkEpoch: &beaconConfig.FuluForkEpoch,
upgradeFunc: time.CanUpgradeToFulu,
},
}
@@ -238,7 +238,7 @@ func TestCanUpgradeTo(t *testing.T) {
for _, otc := range outerTestCases {
params.SetupTestConfigCleanup(t)
*otc.forkEpoch = 5
params.OverrideBeaconConfig(cfg)
params.OverrideBeaconConfig(beaconConfig)
innerTestCases := []struct {
name string

View File

@@ -200,7 +200,6 @@ func (dcs *DataColumnStorage) WarmCache() {
fileMetadata, err := extractFileMetadata(path)
if err != nil {
log.WithError(err).Error("Error encountered while extracting file metadata")
return nil
}
// Open the data column filesystem file.
@@ -989,8 +988,8 @@ func filePath(root [fieldparams.RootLength]byte, epoch primitives.Epoch) string
// extractFileMetadata extracts the metadata from a file path.
// If the path is not a leaf, it returns nil.
func extractFileMetadata(path string) (*fileMetadata, error) {
// Use filepath.Separator to handle both Windows (\) and Unix (/) path separators
parts := strings.Split(path, string(filepath.Separator))
// Is this Windows friendly?
parts := strings.Split(path, "/")
if len(parts) != 3 {
return nil, errors.Errorf("unexpected file %s", path)
}
@@ -1033,5 +1032,5 @@ func extractFileMetadata(path string) (*fileMetadata, error) {
// period computes the period of a given epoch.
func period(epoch primitives.Epoch) uint64 {
return uint64(epoch / params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest)
return uint64(epoch / params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
}
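
A tiny sketch of how period() buckets epochs for the on-disk layout, assuming a retention window of 4096 epochs (illustrative, not this package's configured value): every window of that many epochs shares one period directory.

package main

import "fmt"

func main() {
	const retentionEpochs = uint64(4096) // assumed window size, for illustration only
	for _, epoch := range []uint64{0, 4095, 4096, 12345} {
		fmt.Println(epoch, "->", epoch/retentionEpochs) // 0->0, 4095->0, 4096->1, 12345->3
	}
}
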

View File

@@ -3,7 +3,6 @@ package filesystem
import (
"encoding/binary"
"os"
"path/filepath"
"testing"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
@@ -726,37 +725,3 @@ func TestPrune(t *testing.T) {
require.Equal(t, true, compareSlices([]string{"0x0de28a18cae63cbc6f0b20dc1afb0b1df38da40824a5f09f92d485ade04de97f.sszs"}, dirs))
})
}
func TestExtractFileMetadata(t *testing.T) {
t.Run("Unix", func(t *testing.T) {
// Test with Unix-style path separators (/)
path := "12/1234/0x8bb2f09de48c102635622dc27e6de03ae2b22639df7c33edbc8222b2ec423746.sszs"
metadata, err := extractFileMetadata(path)
if filepath.Separator == '/' {
// On Unix systems, this should succeed
require.NoError(t, err)
require.Equal(t, uint64(12), metadata.period)
require.Equal(t, primitives.Epoch(1234), metadata.epoch)
return
}
// On Windows systems, this should fail because it uses the wrong separator
require.NotNil(t, err)
})
t.Run("Windows", func(t *testing.T) {
// Test with Windows-style path separators (\)
path := "12\\1234\\0x8bb2f09de48c102635622dc27e6de03ae2b22639df7c33edbc8222b2ec423746.sszs"
metadata, err := extractFileMetadata(path)
if filepath.Separator == '\\' {
// On Windows systems, this should succeed
require.NoError(t, err)
require.Equal(t, uint64(12), metadata.period)
require.Equal(t, primitives.Epoch(1234), metadata.epoch)
return
}
// On Unix systems, this should fail because it uses the wrong separator
require.NotNil(t, err)
})
}

View File

@@ -212,8 +212,7 @@ func filterNoop(_ string) bool {
return true
}
// IsBlockRootDir returns true if the path segment looks like a block root directory.
func IsBlockRootDir(p string) bool {
func isRootDir(p string) bool {
dir := filepath.Base(p)
return len(dir) == rootStringLen && strings.HasPrefix(dir, "0x")
}

View File

@@ -188,7 +188,7 @@ func TestListDir(t *testing.T) {
name: "root filter",
dirPath: ".",
expected: []string{childlessBlob.name, blobWithSsz.name, blobWithSszAndTmp.name},
filter: IsBlockRootDir,
filter: isRootDir,
},
{
name: "ssz filter",

View File

@@ -19,14 +19,12 @@ import (
const (
// Full root in directory will be 66 chars, eg:
// >>> len('0x0002fb4db510b8618b04dc82d023793739c26346a8b02eb73482e24b0fec0555') == 66
rootStringLen = 66
sszExt = "ssz"
partExt = "part"
rootStringLen = 66
sszExt = "ssz"
partExt = "part"
periodicEpochBaseDir = "by-epoch"
)
// PeriodicEpochBaseDir is the name of the base directory for the by-epoch layout.
const PeriodicEpochBaseDir = "by-epoch"
const (
LayoutNameFlat = "flat"
LayoutNameByEpoch = "by-epoch"
@@ -132,11 +130,11 @@ func migrateLayout(fs afero.Fs, from, to fsLayout, cache *blobStorageSummaryCach
if iter.atEOF() {
return errLayoutNotDetected
}
log.WithField("fromLayout", from.name()).WithField("toLayout", to.name()).Info("Migrating blob filesystem layout. This one-time operation can take extra time (up to a few minutes for systems with extended blob storage and a cold disk cache).")
lastMoved := ""
parentDirs := make(map[string]bool) // this map should have < 65k keys by design
moved := 0
dc := newDirCleaner()
migrationLogged := false
for ident, err := iter.next(); !errors.Is(err, io.EOF); ident, err = iter.next() {
if err != nil {
if errors.Is(err, errIdentFailure) {
@@ -148,11 +146,6 @@ func migrateLayout(fs afero.Fs, from, to fsLayout, cache *blobStorageSummaryCach
}
return errors.Wrapf(errMigrationFailure, "failed to iterate previous layout structure while migrating blobs, err=%s", err.Error())
}
if !migrationLogged {
log.WithField("fromLayout", from.name()).WithField("toLayout", to.name()).
Info("Migrating blob filesystem layout. This one-time operation can take extra time (up to a few minutes for systems with extended blob storage and a cold disk cache).")
migrationLogged = true
}
src := from.dir(ident)
target := to.dir(ident)
if src != lastMoved {

View File

@@ -34,7 +34,7 @@ func (l *periodicEpochLayout) name() string {
func (l *periodicEpochLayout) blockParentDirs(ident blobIdent) []string {
return []string{
PeriodicEpochBaseDir,
periodicEpochBaseDir,
l.periodDir(ident.epoch),
l.epochDir(ident.epoch),
}
@@ -50,28 +50,28 @@ func (l *periodicEpochLayout) notify(ident blobIdent) error {
// If before == 0, it won't be used as a filter and all idents will be returned.
func (l *periodicEpochLayout) iterateIdents(before primitives.Epoch) (*identIterator, error) {
_, err := l.fs.Stat(PeriodicEpochBaseDir)
_, err := l.fs.Stat(periodicEpochBaseDir)
if err != nil {
if os.IsNotExist(err) {
return &identIterator{eof: true}, nil // The directory is non-existent, which is fine; stop iteration.
}
return nil, errors.Wrapf(err, "error reading path %s", PeriodicEpochBaseDir)
return nil, errors.Wrapf(err, "error reading path %s", periodicEpochBaseDir)
}
// iterate root, which should have directories named by "period"
entries, err := listDir(l.fs, PeriodicEpochBaseDir)
entries, err := listDir(l.fs, periodicEpochBaseDir)
if err != nil {
return nil, errors.Wrapf(err, "failed to list %s", PeriodicEpochBaseDir)
return nil, errors.Wrapf(err, "failed to list %s", periodicEpochBaseDir)
}
return &identIterator{
fs: l.fs,
path: PeriodicEpochBaseDir,
path: periodicEpochBaseDir,
// Please see comments on the `layers` field in `identIterator` if the role of the layers is unclear.
layers: []layoutLayer{
{populateIdent: populateNoop, filter: isBeforePeriod(before)},
{populateIdent: populateEpoch, filter: isBeforeEpoch(before)},
{populateIdent: populateRoot, filter: IsBlockRootDir}, // extract root from path
{populateIdent: populateIndex, filter: isSszFile}, // extract index from filename
{populateIdent: populateRoot, filter: isRootDir}, // extract root from path
{populateIdent: populateIndex, filter: isSszFile}, // extract index from filename
},
entries: entries,
}, nil
@@ -98,7 +98,7 @@ func (l *periodicEpochLayout) epochDir(epoch primitives.Epoch) string {
}
func (l *periodicEpochLayout) periodDir(epoch primitives.Epoch) string {
return filepath.Join(PeriodicEpochBaseDir, fmt.Sprintf("%d", periodForEpoch(epoch)))
return filepath.Join(periodicEpochBaseDir, fmt.Sprintf("%d", periodForEpoch(epoch)))
}
func (l *periodicEpochLayout) sszPath(n blobIdent) string {

View File

@@ -30,7 +30,7 @@ func (l *flatLayout) iterateIdents(before primitives.Epoch) (*identIterator, err
if os.IsNotExist(err) {
return &identIterator{eof: true}, nil // The directory is non-existent, which is fine; stop iteration.
}
return nil, errors.Wrap(err, "error reading blob base dir")
return nil, errors.Wrapf(err, "error reading path %s", periodicEpochBaseDir)
}
entries, err := listDir(l.fs, ".")
if err != nil {
@@ -199,10 +199,10 @@ func (l *flatSlotReader) isSSZAndBefore(fname string) bool {
// the epoch can be determined.
func isFlatCachedAndBefore(cache *blobStorageSummaryCache, before primitives.Epoch) func(string) bool {
if before == 0 {
return IsBlockRootDir
return isRootDir
}
return func(p string) bool {
if !IsBlockRootDir(p) {
if !isRootDir(p) {
return false
}
root, err := rootFromPath(p)

View File

@@ -126,7 +126,7 @@ func NewWarmedEphemeralDataColumnStorageUsingFs(t testing.TB, fs afero.Fs, opts
func NewEphemeralDataColumnStorageUsingFs(t testing.TB, fs afero.Fs, opts ...DataColumnStorageOption) *DataColumnStorage {
opts = append(opts,
WithDataColumnRetentionEpochs(params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest),
WithDataColumnRetentionEpochs(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest),
WithDataColumnFs(fs),
)

View File

@@ -28,7 +28,6 @@ type ReadOnlyDatabase interface {
BlocksBySlot(ctx context.Context, slot primitives.Slot) ([]interfaces.ReadOnlySignedBeaconBlock, error)
BlockRootsBySlot(ctx context.Context, slot primitives.Slot) (bool, [][32]byte, error)
HasBlock(ctx context.Context, blockRoot [32]byte) bool
AvailableBlocks(ctx context.Context, blockRoots [][32]byte) map[[32]byte]bool
GenesisBlock(ctx context.Context) (interfaces.ReadOnlySignedBeaconBlock, error)
GenesisBlockRoot(ctx context.Context) ([32]byte, error)
IsFinalizedBlock(ctx context.Context, blockRoot [32]byte) bool
@@ -130,7 +129,6 @@ type NoHeadAccessDatabase interface {
// Custody operations.
UpdateSubscribedToAllDataSubnets(ctx context.Context, subscribed bool) (bool, error)
UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error)
UpdateEarliestAvailableSlot(ctx context.Context, earliestAvailableSlot primitives.Slot) error
// P2P Metadata operations.
SaveMetadataSeqNum(ctx context.Context, seqNum uint64) error

View File

@@ -336,42 +336,6 @@ func (s *Store) HasBlock(ctx context.Context, blockRoot [32]byte) bool {
return exists
}
// AvailableBlocks returns a set of roots indicating which of the blocks corresponding to `blockRoots` are available in storage.
func (s *Store) AvailableBlocks(ctx context.Context, blockRoots [][32]byte) map[[32]byte]bool {
_, span := trace.StartSpan(ctx, "BeaconDB.AvailableBlocks")
defer span.End()
count := len(blockRoots)
availableRoots := make(map[[32]byte]bool, count)
// First, check the cache for each block root.
notInCacheRoots := make([][32]byte, 0, count)
for _, root := range blockRoots {
if v, ok := s.blockCache.Get(string(root[:])); v != nil && ok {
availableRoots[root] = true
continue
}
notInCacheRoots = append(notInCacheRoots, root)
}
// Next, check the database for the remaining block roots.
if err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(blocksBucket)
for _, root := range notInCacheRoots {
if bkt.Get(root[:]) != nil {
availableRoots[root] = true
}
}
return nil
}); err != nil {
panic(err) // lint:nopanic -- View never returns an error.
}
return availableRoots
}
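
A minimal self-contained sketch (hypothetical map-backed cache and store, not this package's types) of the two-tier lookup AvailableBlocks performs: consult the in-memory block cache first, then resolve only the misses in a single database view.

package main

import "fmt"

func availableBlocks(cache, db map[[32]byte]bool, roots [][32]byte) map[[32]byte]bool {
	out := make(map[[32]byte]bool, len(roots))
	misses := make([][32]byte, 0, len(roots))
	for _, r := range roots {
		if cache[r] {
			out[r] = true
			continue
		}
		misses = append(misses, r) // only these need the database lookup
	}
	for _, r := range misses {
		if db[r] {
			out[r] = true
		}
	}
	return out
}

func main() {
	var a, b, c [32]byte
	a[0], b[0], c[0] = 1, 2, 3
	cache := map[[32]byte]bool{a: true}
	db := map[[32]byte]bool{a: true, c: true}
	fmt.Println(len(availableBlocks(cache, db, [][32]byte{a, b, c}))) // 2: a from the cache, c from the store
}
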
// BlocksBySlot retrieves a list of beacon blocks and its respective roots by slot.
func (s *Store) BlocksBySlot(ctx context.Context, slot primitives.Slot) ([]interfaces.ReadOnlySignedBeaconBlock, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.BlocksBySlot")

View File

@@ -656,44 +656,6 @@ func TestStore_BlocksCRUD_NoCache(t *testing.T) {
}
}
func TestAvailableBlocks(t *testing.T) {
ctx := t.Context()
db := setupDB(t)
b0, b1, b2 := util.NewBeaconBlock(), util.NewBeaconBlock(), util.NewBeaconBlock()
b0.Block.Slot, b1.Block.Slot, b2.Block.Slot = 10, 20, 30
sb0, err := blocks.NewSignedBeaconBlock(b0)
require.NoError(t, err)
r0, err := b0.Block.HashTreeRoot()
require.NoError(t, err)
// Save b0 but remove it from cache.
err = db.SaveBlock(ctx, sb0)
require.NoError(t, err)
db.blockCache.Del(string(r0[:]))
// b1 is not saved at all.
r1, err := b1.Block.HashTreeRoot()
require.NoError(t, err)
// Save b2 in cache and DB.
sb2, err := blocks.NewSignedBeaconBlock(b2)
require.NoError(t, err)
r2, err := b2.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, db.SaveBlock(ctx, sb2))
require.NoError(t, err)
expected := map[[32]byte]bool{r0: true, r2: true}
actual := db.AvailableBlocks(ctx, [][32]byte{r0, r1, r2})
require.Equal(t, len(expected), len(actual))
for i := range expected {
require.Equal(t, true, actual[i])
}
}
func TestStore_Blocks_FiltersCorrectly(t *testing.T) {
for _, tt := range blockTests {
t.Run(tt.name, func(t *testing.T) {

View File

@@ -132,6 +132,6 @@ func recoverStateSummary(ctx context.Context, tx *bolt.Tx, root []byte) error {
if err != nil {
return err
}
summaryBucket := tx.Bucket(stateSummaryBucket)
summaryBucket := tx.Bucket(stateBucket)
return summaryBucket.Put(root, summaryEnc)
}

View File

@@ -137,32 +137,3 @@ func TestStore_FinalizedCheckpoint_StateMustExist(t *testing.T) {
require.ErrorContains(t, errMissingStateForCheckpoint.Error(), db.SaveFinalizedCheckpoint(ctx, cp))
}
// Regression test: verify that saving a checkpoint triggers recovery which writes
// the state summary into the correct stateSummaryBucket so that HasStateSummary/StateSummary see it.
func TestRecoverStateSummary_WritesToStateSummaryBucket(t *testing.T) {
db := setupDB(t)
ctx := t.Context()
// Create a block without saving a state or summary, so recovery is needed.
blk := util.HydrateSignedBeaconBlock(&ethpb.SignedBeaconBlock{})
root, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, db.SaveBlock(ctx, wsb))
// Precondition: summary not present yet.
require.Equal(t, false, db.HasStateSummary(ctx, root))
// Saving justified checkpoint should trigger recovery path calling recoverStateSummary.
cp := &ethpb.Checkpoint{Epoch: 2, Root: root[:]}
require.NoError(t, db.SaveJustifiedCheckpoint(ctx, cp))
// Postcondition: summary is visible via the public summary APIs (which read stateSummaryBucket).
require.Equal(t, true, db.HasStateSummary(ctx, root))
summary, err := db.StateSummary(ctx, root)
require.NoError(t, err)
require.NotNil(t, summary)
assert.DeepEqual(t, &ethpb.StateSummary{Slot: blk.Block.Slot, Root: root[:]}, summary)
}

View File

@@ -2,19 +2,16 @@ package kv
import (
"context"
"time"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
bolt "go.etcd.io/bbolt"
)
// UpdateCustodyInfo atomically updates the custody group count only if it is greater than the stored one.
// UpdateCustodyInfo atomically updates the custody group count only it is greater than the stored one.
// In this case, it also updates the earliest available slot with the provided value.
// It returns the (potentially updated) custody group count and earliest available slot.
func (s *Store) UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error) {
@@ -73,79 +70,6 @@ func (s *Store) UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot pri
return storedEarliestAvailableSlot, storedGroupCount, nil
}
// UpdateEarliestAvailableSlot updates the earliest available slot.
func (s *Store) UpdateEarliestAvailableSlot(ctx context.Context, earliestAvailableSlot primitives.Slot) error {
_, span := trace.StartSpan(ctx, "BeaconDB.UpdateEarliestAvailableSlot")
defer span.End()
storedEarliestAvailableSlot := primitives.Slot(0)
if err := s.db.Update(func(tx *bolt.Tx) error {
// Retrieve the custody bucket.
bucket, err := tx.CreateBucketIfNotExists(custodyBucket)
if err != nil {
return errors.Wrap(err, "create custody bucket")
}
// Retrieve the stored earliest available slot.
storedEarliestAvailableSlotBytes := bucket.Get(earliestAvailableSlotKey)
if len(storedEarliestAvailableSlotBytes) != 0 {
storedEarliestAvailableSlot = primitives.Slot(bytesutil.BytesToUint64BigEndian(storedEarliestAvailableSlotBytes))
}
// Allow decrease (for backfill scenarios)
if earliestAvailableSlot <= storedEarliestAvailableSlot {
storedEarliestAvailableSlot = earliestAvailableSlot
bytes := bytesutil.Uint64ToBytesBigEndian(uint64(earliestAvailableSlot))
if err := bucket.Put(earliestAvailableSlotKey, bytes); err != nil {
return errors.Wrap(err, "put earliest available slot")
}
return nil
}
// Prevent increase within the MIN_EPOCHS_FOR_BLOCK_REQUESTS period
// This ensures we don't voluntarily refuse to serve mandatory block data
genesisTime := time.Unix(int64(params.BeaconConfig().MinGenesisTime+params.BeaconConfig().GenesisDelay), 0)
currentSlot := slots.CurrentSlot(genesisTime)
currentEpoch := slots.ToEpoch(currentSlot)
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
// Calculate the minimum required epoch (or 0 if we're early in the chain)
minRequiredEpoch := primitives.Epoch(0)
if currentEpoch > minEpochsForBlocks {
minRequiredEpoch = currentEpoch - minEpochsForBlocks
}
// Convert to slot to ensure we compare at slot-level granularity
minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
if err != nil {
return errors.Wrap(err, "calculate minimum required slot")
}
// Prevent any increase that would put earliest available slot beyond the minimum required slot
if earliestAvailableSlot > minRequiredSlot {
return errors.Errorf(
"cannot increase earliest available slot to %d (epoch %d) as it exceeds minimum required slot %d (epoch %d)",
earliestAvailableSlot, slots.ToEpoch(earliestAvailableSlot),
minRequiredSlot, minRequiredEpoch,
)
}
storedEarliestAvailableSlot = earliestAvailableSlot
bytes := bytesutil.Uint64ToBytesBigEndian(uint64(earliestAvailableSlot))
if err := bucket.Put(earliestAvailableSlotKey, bytes); err != nil {
return errors.Wrap(err, "put earliest available slot")
}
return nil
}); err != nil {
return err
}
log.WithField("earliestAvailableSlot", storedEarliestAvailableSlot).Debug("Updated earliest available slot")
return nil
}
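
A worked sketch of the increase guard above, with assumed mainnet values (MIN_EPOCHS_FOR_BLOCK_REQUESTS = 33024, SLOTS_PER_EPOCH = 32): a node whose wall clock sits at epoch 40000 must keep serving blocks from slot (40000 - 33024) * 32 = 223232 onward, so any attempt to raise the earliest available slot past that point is rejected.

package main

import "fmt"

func main() {
	const (
		minEpochsForBlockRequests = uint64(33024) // assumed mainnet value
		slotsPerEpoch             = uint64(32)
		currentEpoch              = uint64(40000) // example wall-clock epoch
	)
	minRequiredEpoch := uint64(0)
	if currentEpoch > minEpochsForBlockRequests {
		minRequiredEpoch = currentEpoch - minEpochsForBlockRequests
	}
	fmt.Println(minRequiredEpoch * slotsPerEpoch) // 223232: increases beyond this slot are rejected
}
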
// UpdateSubscribedToAllDataSubnets updates the "subscribed to all data subnets" status in the database
// only if `subscribed` is `true`.
// It returns the previous subscription status.

View File

@@ -3,13 +3,10 @@ package kv
import (
"context"
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/time/slots"
bolt "go.etcd.io/bbolt"
)
@@ -135,131 +132,6 @@ func TestUpdateCustodyInfo(t *testing.T) {
})
}
func TestUpdateEarliestAvailableSlot(t *testing.T) {
ctx := t.Context()
t.Run("allow decreasing earliest slot (backfill scenario)", func(t *testing.T) {
const (
initialSlot = primitives.Slot(300)
initialCount = uint64(10)
earliestSlot = primitives.Slot(200) // Lower than initial (backfill discovered earlier blocks)
)
db := setupDB(t)
// Initialize custody info
_, _, err := db.UpdateCustodyInfo(ctx, initialSlot, initialCount)
require.NoError(t, err)
// Update with a lower slot (should update for backfill)
err = db.UpdateEarliestAvailableSlot(ctx, earliestSlot)
require.NoError(t, err)
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
require.Equal(t, earliestSlot, storedSlot)
require.Equal(t, initialCount, storedCount)
})
t.Run("allow increasing slot within MIN_EPOCHS_FOR_BLOCK_REQUESTS (pruning scenario)", func(t *testing.T) {
db := setupDB(t)
// Calculate the current slot and minimum required slot based on actual current time
genesisTime := time.Unix(int64(params.BeaconConfig().MinGenesisTime+params.BeaconConfig().GenesisDelay), 0)
currentSlot := slots.CurrentSlot(genesisTime)
currentEpoch := slots.ToEpoch(currentSlot)
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
var minRequiredEpoch primitives.Epoch
if currentEpoch > minEpochsForBlocks {
minRequiredEpoch = currentEpoch - minEpochsForBlocks
} else {
minRequiredEpoch = 0
}
minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
require.NoError(t, err)
// Initial setup: set earliest slot well before minRequiredSlot
const groupCount = uint64(5)
initialSlot := primitives.Slot(1000)
_, _, err = db.UpdateCustodyInfo(ctx, initialSlot, groupCount)
require.NoError(t, err)
// Try to increase to a slot that's still BEFORE minRequiredSlot (should succeed)
validSlot := minRequiredSlot - 100
err = db.UpdateEarliestAvailableSlot(ctx, validSlot)
require.NoError(t, err)
// Verify the database was updated
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
require.Equal(t, validSlot, storedSlot)
require.Equal(t, groupCount, storedCount)
})
t.Run("prevent increasing slot beyond MIN_EPOCHS_FOR_BLOCK_REQUESTS", func(t *testing.T) {
db := setupDB(t)
// Calculate the current slot and minimum required slot based on actual current time
genesisTime := time.Unix(int64(params.BeaconConfig().MinGenesisTime+params.BeaconConfig().GenesisDelay), 0)
currentSlot := slots.CurrentSlot(genesisTime)
currentEpoch := slots.ToEpoch(currentSlot)
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
var minRequiredEpoch primitives.Epoch
if currentEpoch > minEpochsForBlocks {
minRequiredEpoch = currentEpoch - minEpochsForBlocks
} else {
minRequiredEpoch = 0
}
minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
require.NoError(t, err)
// Initial setup: set a valid earliest slot (well before minRequiredSlot)
const initialCount = uint64(5)
initialSlot := primitives.Slot(1000)
_, _, err = db.UpdateCustodyInfo(ctx, initialSlot, initialCount)
require.NoError(t, err)
// Try to set earliest slot beyond the minimum required slot
invalidSlot := minRequiredSlot + 100
// This should fail
err = db.UpdateEarliestAvailableSlot(ctx, invalidSlot)
require.ErrorContains(t, "cannot increase earliest available slot", err)
require.ErrorContains(t, "exceeds minimum required slot", err)
// Verify the database wasn't updated
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
require.Equal(t, initialSlot, storedSlot)
require.Equal(t, initialCount, storedCount)
})
t.Run("no change when slot equals current slot", func(t *testing.T) {
const (
initialSlot = primitives.Slot(100)
initialCount = uint64(5)
)
db := setupDB(t)
// Initialize custody info
_, _, err := db.UpdateCustodyInfo(ctx, initialSlot, initialCount)
require.NoError(t, err)
// Update with the same slot
err = db.UpdateEarliestAvailableSlot(ctx, initialSlot)
require.NoError(t, err)
storedSlot, storedCount := getCustodyInfoFromDB(t, db)
require.Equal(t, initialSlot, storedSlot)
require.Equal(t, initialCount, storedCount)
})
}
func TestUpdateSubscribedToAllDataSubnets(t *testing.T) {
ctx := context.Background()

View File

@@ -8,6 +8,7 @@ go_library(
"//beacon-chain:__subpackages__",
],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/iface:go_default_library",
"//config/params:go_default_library",
@@ -28,7 +29,6 @@ go_test(
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots/testing:go_default_library",

View File

@@ -4,6 +4,7 @@ import (
"context"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/iface"
"github.com/OffchainLabs/prysm/v6/config/params"
@@ -24,24 +25,17 @@ const (
defaultNumBatchesToPrune = 15
)
// custodyUpdater is a tiny interface that the p2p service implements; kept here to avoid
// importing the p2p package and creating a cycle.
type custodyUpdater interface {
UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error
}
type ServiceOption func(*Service)
// WithRetentionPeriod allows the user to specify a different data retention period than the spec default.
// The retention period is specified in epochs, and must be >= MIN_EPOCHS_FOR_BLOCK_REQUESTS.
func WithRetentionPeriod(retentionEpochs primitives.Epoch) ServiceOption {
return func(s *Service) {
defaultRetentionEpochs := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests) + 1
defaultRetentionEpochs := helpers.MinEpochsForBlockRequests() + 1
if retentionEpochs < defaultRetentionEpochs {
log.WithField("userEpochs", retentionEpochs).
WithField("minRequired", defaultRetentionEpochs).
Warn("Retention period too low, ignoring and using minimum required value")
retentionEpochs = defaultRetentionEpochs
Warn("Retention period too low, using minimum required value")
}
s.ps = pruneStartSlotFunc(retentionEpochs)
@@ -64,23 +58,17 @@ type Service struct {
slotTicker slots.Ticker
backfillWaiter func() error
initSyncWaiter func() error
custody custodyUpdater
}
func New(ctx context.Context, db iface.Database, genesisTime time.Time, initSyncWaiter, backfillWaiter func() error, custody custodyUpdater, opts ...ServiceOption) (*Service, error) {
if custody == nil {
return nil, errors.New("custody updater is required for pruner but was not provided")
}
func New(ctx context.Context, db iface.Database, genesisTime time.Time, initSyncWaiter, backfillWaiter func() error, opts ...ServiceOption) (*Service, error) {
p := &Service{
ctx: ctx,
db: db,
ps: pruneStartSlotFunc(primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests) + 1), // Default retention epochs is MIN_EPOCHS_FOR_BLOCK_REQUESTS + 1 from the current slot.
ps: pruneStartSlotFunc(helpers.MinEpochsForBlockRequests() + 1), // Default retention epochs is MIN_EPOCHS_FOR_BLOCK_REQUESTS + 1 from the current slot.
done: make(chan struct{}),
slotTicker: slots.NewSlotTicker(slots.UnsafeStartTime(genesisTime, 0), params.BeaconConfig().SecondsPerSlot),
initSyncWaiter: initSyncWaiter,
backfillWaiter: backfillWaiter,
custody: custody,
}
for _, o := range opts {
@@ -169,45 +157,17 @@ func (p *Service) prune(slot primitives.Slot) error {
return errors.Wrap(err, "failed to prune batches")
}
earliestAvailableSlot := pruneUpto + 1
log.WithFields(logrus.Fields{
"prunedUpto": pruneUpto,
"duration": time.Since(tt),
"currentSlot": slot,
"batchSize": defaultPrunableBatchSize,
"numBatches": numBatches,
}).Debug("Successfully pruned chain data")
// Update pruning checkpoint.
p.prunedUpto = pruneUpto
// Update the earliest available slot after pruning
if err := p.updateEarliestAvailableSlot(earliestAvailableSlot); err != nil {
return errors.Wrap(err, "update earliest available slot")
}
log.WithFields(logrus.Fields{
"prunedUpto": pruneUpto,
"earliestAvailableSlot": earliestAvailableSlot,
"duration": time.Since(tt),
"currentSlot": slot,
"batchSize": defaultPrunableBatchSize,
"numBatches": numBatches,
}).Debug("Successfully pruned chain data")
return nil
}
// updateEarliestAvailableSlot updates the earliest available slot via the injected custody updater
// and also persists it to the database.
func (p *Service) updateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
if !params.FuluEnabled() {
return nil
}
// Update the p2p in-memory state
if err := p.custody.UpdateEarliestAvailableSlot(earliestAvailableSlot); err != nil {
return errors.Wrapf(err, "update earliest available slot after pruning to %d", earliestAvailableSlot)
}
// Persist to database to ensure it survives restarts
if err := p.db.UpdateEarliestAvailableSlot(p.ctx, earliestAvailableSlot); err != nil {
return errors.Wrapf(err, "update earliest available slot in database for slot %d", earliestAvailableSlot)
}
return nil
}

View File

@@ -2,7 +2,6 @@ package pruner
import (
"context"
"errors"
"testing"
"time"
@@ -16,7 +15,6 @@ import (
dbtest "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
logTest "github.com/sirupsen/logrus/hooks/test"
)
@@ -64,9 +62,7 @@ func TestPruner_PruningConditions(t *testing.T) {
if !tt.backfillCompleted {
backfillWaiter = waiter
}
mockCustody := &mockCustodyUpdater{}
p, err := New(ctx, beaconDB, time.Now(), initSyncWaiter, backfillWaiter, mockCustody, WithSlotTicker(slotTicker))
p, err := New(ctx, beaconDB, time.Now(), initSyncWaiter, backfillWaiter, WithSlotTicker(slotTicker))
require.NoError(t, err)
go p.Start()
@@ -101,14 +97,12 @@ func TestPruner_PruneSuccess(t *testing.T) {
retentionEpochs := primitives.Epoch(2)
slotTicker := &slottest.MockTicker{Channel: make(chan primitives.Slot)}
mockCustody := &mockCustodyUpdater{}
p, err := New(
ctx,
beaconDB,
time.Now(),
nil,
nil,
mockCustody,
WithSlotTicker(slotTicker),
)
require.NoError(t, err)
@@ -139,242 +133,3 @@ func TestPruner_PruneSuccess(t *testing.T) {
require.NoError(t, p.Stop())
}
// Mock custody updater for testing
type mockCustodyUpdater struct {
custodyGroupCount uint64
earliestAvailableSlot primitives.Slot
updateCallCount int
}
func (m *mockCustodyUpdater) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
m.updateCallCount++
m.earliestAvailableSlot = earliestAvailableSlot
return nil
}
func TestPruner_UpdatesEarliestAvailableSlot(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.FuluForkEpoch = 0 // Enable Fulu from epoch 0
params.OverrideBeaconConfig(config)
logrus.SetLevel(logrus.DebugLevel)
hook := logTest.NewGlobal()
ctx, cancel := context.WithCancel(t.Context())
defer cancel()
beaconDB := dbtest.SetupDB(t)
retentionEpochs := primitives.Epoch(2)
slotTicker := &slottest.MockTicker{Channel: make(chan primitives.Slot)}
// Create mock custody updater
mockCustody := &mockCustodyUpdater{
custodyGroupCount: 4,
earliestAvailableSlot: 0,
}
// Create pruner with mock custody updater
p, err := New(
ctx,
beaconDB,
time.Now(),
nil,
nil,
mockCustody,
WithSlotTicker(slotTicker),
)
require.NoError(t, err)
p.ps = func(current primitives.Slot) primitives.Slot {
return current - primitives.Slot(retentionEpochs)*params.BeaconConfig().SlotsPerEpoch
}
// Save some blocks to be pruned
for i := primitives.Slot(1); i <= 32; i++ {
blk := util.NewBeaconBlock()
blk.Block.Slot = i
wsb, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
}
// Start pruner and trigger at slot 80 (middle of 3rd epoch)
go p.Start()
currentSlot := primitives.Slot(80)
slotTicker.Channel <- currentSlot
// Wait for pruning to complete
time.Sleep(100 * time.Millisecond)
// Check that UpdateEarliestAvailableSlot was called
assert.Equal(t, true, mockCustody.updateCallCount > 0, "UpdateEarliestAvailableSlot should have been called")
// The earliest available slot should be pruneUpto + 1
// pruneUpto = currentSlot - retentionEpochs*slotsPerEpoch = 80 - 2*32 = 16
// So earliest available slot should be 16 + 1 = 17
expectedEarliestSlot := primitives.Slot(17)
require.Equal(t, expectedEarliestSlot, mockCustody.earliestAvailableSlot, "Earliest available slot should be updated correctly")
require.Equal(t, uint64(4), mockCustody.custodyGroupCount, "Custody group count should be preserved")
// Verify that no error was logged
for _, entry := range hook.AllEntries() {
if entry.Level == logrus.ErrorLevel {
t.Errorf("Unexpected error log: %s", entry.Message)
}
}
require.NoError(t, p.Stop())
}
// Mock custody updater that returns an error for UpdateEarliestAvailableSlot
type mockCustodyUpdaterWithUpdateError struct {
updateCallCount int
}
func (m *mockCustodyUpdaterWithUpdateError) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
m.updateCallCount++
return errors.New("failed to update earliest available slot")
}
func TestWithRetentionPeriod_EnforcesMinimum(t *testing.T) {
// Use minimal config for testing
params.SetupTestConfigCleanup(t)
config := params.MinimalSpecConfig()
params.OverrideBeaconConfig(config)
ctx := t.Context()
beaconDB := dbtest.SetupDB(t)
// Get the minimum required epochs (272 + 1 = 273 for minimal)
minRequiredEpochs := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests + 1)
// Use a slot that's guaranteed to be after the minimum retention period
currentSlot := primitives.Slot(minRequiredEpochs+100) * (params.BeaconConfig().SlotsPerEpoch)
tests := []struct {
name string
userRetentionEpochs primitives.Epoch
expectedPruneSlot primitives.Slot
description string
}{
{
name: "User value below minimum - should use minimum",
userRetentionEpochs: 2, // Way below minimum
expectedPruneSlot: currentSlot - primitives.Slot(minRequiredEpochs)*params.BeaconConfig().SlotsPerEpoch,
description: "Should use minimum when user value is too low",
},
{
name: "User value at minimum",
userRetentionEpochs: minRequiredEpochs,
expectedPruneSlot: currentSlot - primitives.Slot(minRequiredEpochs)*params.BeaconConfig().SlotsPerEpoch,
description: "Should use user value when at minimum",
},
{
name: "User value above minimum",
userRetentionEpochs: minRequiredEpochs + 10,
expectedPruneSlot: currentSlot - primitives.Slot(minRequiredEpochs+10)*params.BeaconConfig().SlotsPerEpoch,
description: "Should use user value when above minimum",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
hook := logTest.NewGlobal()
logrus.SetLevel(logrus.WarnLevel)
mockCustody := &mockCustodyUpdater{}
// Create pruner with retention period
p, err := New(
ctx,
beaconDB,
time.Now(),
nil,
nil,
mockCustody,
WithRetentionPeriod(tt.userRetentionEpochs),
)
require.NoError(t, err)
// Test the pruning calculation
pruneUptoSlot := p.ps(currentSlot)
// Verify the pruning slot
assert.Equal(t, tt.expectedPruneSlot, pruneUptoSlot, tt.description)
// Check if warning was logged when value was too low
if tt.userRetentionEpochs < minRequiredEpochs {
assert.LogsContain(t, hook, "Retention period too low, ignoring and using minimum required value")
}
})
}
}
func TestPruner_UpdateEarliestSlotError(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.FuluForkEpoch = 0 // Enable Fulu from epoch 0
params.OverrideBeaconConfig(config)
logrus.SetLevel(logrus.DebugLevel)
hook := logTest.NewGlobal()
ctx, cancel := context.WithCancel(t.Context())
defer cancel()
beaconDB := dbtest.SetupDB(t)
retentionEpochs := primitives.Epoch(2)
slotTicker := &slottest.MockTicker{Channel: make(chan primitives.Slot)}
// Create mock custody updater that returns an error for UpdateEarliestAvailableSlot
mockCustody := &mockCustodyUpdaterWithUpdateError{}
// Create pruner with mock custody updater
p, err := New(
ctx,
beaconDB,
time.Now(),
nil,
nil,
mockCustody,
WithSlotTicker(slotTicker),
)
require.NoError(t, err)
p.ps = func(current primitives.Slot) primitives.Slot {
return current - primitives.Slot(retentionEpochs)*params.BeaconConfig().SlotsPerEpoch
}
// Save some blocks to be pruned
for i := primitives.Slot(1); i <= 32; i++ {
blk := util.NewBeaconBlock()
blk.Block.Slot = i
wsb, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
}
// Start pruner and trigger at slot 80
go p.Start()
currentSlot := primitives.Slot(80)
slotTicker.Channel <- currentSlot
// Wait for pruning to complete
time.Sleep(100 * time.Millisecond)
// Should have called UpdateEarliestAvailableSlot
assert.Equal(t, 1, mockCustody.updateCallCount, "UpdateEarliestAvailableSlot should be called")
// Check that error was logged by the prune function
found := false
for _, entry := range hook.AllEntries() {
if entry.Level == logrus.ErrorLevel && entry.Message == "Failed to prune database" {
found = true
break
}
}
assert.Equal(t, true, found, "Should log error when UpdateEarliestAvailableSlot fails")
require.NoError(t, p.Stop())
}

View File

@@ -25,7 +25,6 @@ go_library(
"//testing/spectest:__subpackages__",
],
deps = [
"//api:go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositsnapshot:go_default_library",
@@ -99,7 +98,6 @@ go_test(
data = glob(["testdata/**"]),
embed = [":go_default_library"],
deps = [
"//api:go_default_library",
"//async/event:go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/cache/depositsnapshot:go_default_library",

View File

@@ -122,6 +122,7 @@ type Reconstructor interface {
) ([]interfaces.SignedBeaconBlock, error)
ReconstructBlobSidecars(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [fieldparams.RootLength]byte, hi func(uint64) bool) ([]blocks.VerifiedROBlob, error)
ConstructDataColumnSidecars(ctx context.Context, populator peerdas.ConstructionPopulator) ([]blocks.VerifiedRODataColumn, error)
GetBlobsV2(ctx context.Context, versionedHashes []common.Hash) ([]*pb.BlobAndProofV2, error)
}
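The GetBlobsV2 method in the interface above is the engine-API hook used when fetching blobs directly from the execution layer. The sketch below shows how a caller might use it; fetchBlobsAndProofs is a hypothetical helper rather than Prysm's actual code, it assumes this file's existing imports (context, common, pb, errors), and the handling of partially available blobs is a stand-in policy, not the real one.
// Illustrative sketch only: ask the execution layer for blobs and proofs by
// versioned hash through the Reconstructor interface above.
func fetchBlobsAndProofs(ctx context.Context, r Reconstructor, hashes []common.Hash) ([]*pb.BlobAndProofV2, error) {
	blobsAndProofs, err := r.GetBlobsV2(ctx, hashes)
	if err != nil {
		return nil, errors.Wrap(err, "engine_getBlobsV2")
	}
	if len(blobsAndProofs) != len(hashes) {
		// Hypothetical policy: if the EL cannot serve every requested blob,
		// fall back to reconstructing the data from peers instead.
		return nil, errors.New("execution layer did not return all requested blobs")
	}
	return blobsAndProofs, nil
}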
// EngineCaller defines a client that can interact with an Ethereum

View File

@@ -13,7 +13,6 @@ import (
"strings"
"testing"
"github.com/OffchainLabs/prysm/v6/api"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
@@ -178,7 +177,7 @@ func TestClient_HTTP(t *testing.T) {
want, ok := fix["ExecutionPayload"].(*pb.ExecutionPayload)
require.Equal(t, true, ok)
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set(api.ContentTypeHeader, api.JsonMediaType)
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
@@ -224,7 +223,7 @@ func TestClient_HTTP(t *testing.T) {
want, ok := fix["ExecutionPayloadCapellaWithValue"].(*pb.GetPayloadV2ResponseJson)
require.Equal(t, true, ok)
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set(api.ContentTypeHeader, api.JsonMediaType)
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
@@ -277,7 +276,7 @@ func TestClient_HTTP(t *testing.T) {
want, ok := fix["ExecutionPayloadDenebWithValue"].(*pb.GetPayloadV3ResponseJson)
require.Equal(t, true, ok)
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set(api.ContentTypeHeader, api.JsonMediaType)
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
@@ -332,7 +331,7 @@ func TestClient_HTTP(t *testing.T) {
want, ok := fix["ExecutionBundleElectra"].(*pb.GetPayloadV4ResponseJson)
require.Equal(t, true, ok)
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set(api.ContentTypeHeader, api.JsonMediaType)
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
@@ -414,7 +413,7 @@ func TestClient_HTTP(t *testing.T) {
want, ok := fix["ExecutionBundleFulu"].(*pb.GetPayloadV5ResponseJson)
require.Equal(t, true, ok)
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set(api.ContentTypeHeader, api.JsonMediaType)
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
@@ -936,7 +935,7 @@ func TestClient_HTTP(t *testing.T) {
want, ok := fix["ExecutionBlock"].(*pb.ExecutionBlock)
require.Equal(t, true, ok)
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set(api.ContentTypeHeader, api.JsonMediaType)
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
@@ -967,7 +966,7 @@ func TestClient_HTTP(t *testing.T) {
want, ok := fix["ExecutionBlock"].(*pb.ExecutionBlock)
require.Equal(t, true, ok)
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set(api.ContentTypeHeader, api.JsonMediaType)
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
@@ -1061,7 +1060,7 @@ func TestReconstructFullBellatrixBlock(t *testing.T) {
require.NoError(t, err)
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set(api.ContentTypeHeader, api.JsonMediaType)
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
@@ -1164,7 +1163,7 @@ func TestReconstructFullBellatrixBlockBatch(t *testing.T) {
require.NoError(t, err)
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set(api.ContentTypeHeader, api.JsonMediaType)
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
@@ -1237,7 +1236,7 @@ func TestReconstructFullBellatrixBlockBatch(t *testing.T) {
require.NoError(t, err)
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set(api.ContentTypeHeader, api.JsonMediaType)
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
@@ -2154,7 +2153,7 @@ func (*testEngineService) NewPayloadV2(
func forkchoiceUpdateSetup(t *testing.T, fcs *pb.ForkchoiceState, att *pb.PayloadAttributes, res *ForkchoiceUpdatedResponse) *Service {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set(api.ContentTypeHeader, api.JsonMediaType)
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
@@ -2193,7 +2192,7 @@ func forkchoiceUpdateSetup(t *testing.T, fcs *pb.ForkchoiceState, att *pb.Payloa
func forkchoiceUpdateSetupV2(t *testing.T, fcs *pb.ForkchoiceState, att *pb.PayloadAttributesV2, res *ForkchoiceUpdatedResponse) *Service {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set(api.ContentTypeHeader, api.JsonMediaType)
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
@@ -2232,7 +2231,7 @@ func forkchoiceUpdateSetupV2(t *testing.T, fcs *pb.ForkchoiceState, att *pb.Payl
func newPayloadSetup(t *testing.T, status *pb.PayloadStatus, payload *pb.ExecutionPayload) *Service {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set(api.ContentTypeHeader, api.JsonMediaType)
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
@@ -2266,7 +2265,7 @@ func newPayloadSetup(t *testing.T, status *pb.PayloadStatus, payload *pb.Executi
func newPayloadV2Setup(t *testing.T, status *pb.PayloadStatus, payload *pb.ExecutionPayloadCapella) *Service {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set(api.ContentTypeHeader, api.JsonMediaType)
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
@@ -2300,7 +2299,7 @@ func newPayloadV2Setup(t *testing.T, status *pb.PayloadStatus, payload *pb.Execu
func newPayloadV3Setup(t *testing.T, status *pb.PayloadStatus, payload *pb.ExecutionPayloadDeneb) *Service {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set(api.ContentTypeHeader, api.JsonMediaType)
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
@@ -2334,7 +2333,7 @@ func newPayloadV3Setup(t *testing.T, status *pb.PayloadStatus, payload *pb.Execu
func newPayloadV4Setup(t *testing.T, status *pb.PayloadStatus, payload *pb.ExecutionPayloadDeneb, requests *pb.ExecutionRequests) *Service {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set(api.ContentTypeHeader, api.JsonMediaType)
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
@@ -2415,7 +2414,7 @@ func TestReconstructBlindedBlockBatch(t *testing.T) {
func Test_ExchangeCapabilities(t *testing.T) {
t.Run("empty response works", func(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set(api.ContentTypeHeader, api.JsonMediaType)
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
@@ -2447,7 +2446,7 @@ func Test_ExchangeCapabilities(t *testing.T) {
})
t.Run("list of items", func(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set(api.ContentTypeHeader, api.JsonMediaType)
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
@@ -2642,7 +2641,7 @@ func createRandomKzgCommitments(t *testing.T, num int) [][]byte {
func createBlobServer(t *testing.T, numBlobs int, callbackFuncs ...func()) *httptest.Server {
return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set(api.ContentTypeHeader, api.JsonMediaType)
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
@@ -2667,7 +2666,7 @@ func createBlobServer(t *testing.T, numBlobs int, callbackFuncs ...func()) *http
func createBlobServerV2(t *testing.T, numBlobs int, blobMasks []bool) *httptest.Server {
return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set(api.ContentTypeHeader, api.JsonMediaType)
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()

View File

@@ -6,7 +6,6 @@ import (
"net/http/httptest"
"testing"
"github.com/OffchainLabs/prysm/v6/api"
pb "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/ethereum/go-ethereum/common/hexutil"
@@ -55,7 +54,7 @@ func (s *mockEngine) ServeHTTP(w http.ResponseWriter, r *http.Request) {
http.Error(w, "failed to decode request: "+err.Error(), http.StatusBadRequest)
return
}
w.Header().Set(api.ContentTypeHeader, api.JsonMediaType)
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(s.t, r.Body.Close())
}()

View File

@@ -7,7 +7,6 @@ import (
"strings"
"time"
"github.com/OffchainLabs/prysm/v6/api"
"github.com/OffchainLabs/prysm/v6/config/params"
contracts "github.com/OffchainLabs/prysm/v6/contracts/deposit"
"github.com/OffchainLabs/prysm/v6/io/logs"
@@ -121,7 +120,7 @@ func (s *Service) newRPCClientWithAuth(ctx context.Context, endpoint network.End
if err != nil {
return nil, err
}
headers.Set(api.AuthorizationHeader, header)
headers.Set("Authorization", header)
}
for _, h := range s.cfg.headers {
if h == "" {

View File

@@ -6,7 +6,6 @@ go_library(
"cache.go",
"helpers.go",
"lightclient.go",
"log.go",
"store.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/light-client",

View File

@@ -1,5 +0,0 @@
package light_client
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "light-client")

View File

@@ -14,6 +14,7 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
)
var ErrLightClientBootstrapNotFound = errors.New("light client bootstrap not found")

View File

@@ -81,7 +81,6 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//api:go_default_library",
"//api/server/middleware:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/builder:go_default_library",

View File

@@ -2,13 +2,11 @@ package node
import (
"context"
"os"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/kv"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/slasherkv"
"github.com/OffchainLabs/prysm/v6/cmd"
"github.com/OffchainLabs/prysm/v6/genesis"
"github.com/pkg/errors"
"github.com/urfave/cli/v2"
)
@@ -38,22 +36,6 @@ func (c *dbClearer) clearKV(ctx context.Context, db *kv.Store) (*kv.Store, error
return kv.NewKVStore(ctx, db.DatabasePath())
}
func (c *dbClearer) clearGenesis(dir string) error {
if !c.shouldProceed() {
return nil
}
gfile, err := genesis.FindStateFile(dir)
if err != nil {
return nil
}
if err := os.Remove(gfile.FilePath()); err != nil {
return errors.Wrapf(err, "genesis state file not removed: %s", gfile.FilePath())
}
return nil
}
func (c *dbClearer) clearBlobs(bs *filesystem.BlobStorage) error {
if !c.shouldProceed() {
return nil

View File

@@ -12,7 +12,6 @@ import (
"os"
"os/signal"
"path/filepath"
"slices"
"strconv"
"strings"
"sync"
@@ -178,9 +177,6 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
}
beacon.db = kvdb
if err := dbClearer.clearGenesis(dataDir); err != nil {
return nil, errors.Wrap(err, "could not clear genesis state")
}
providers := append(beacon.GenesisProviders, kv.NewLegacyGenesisProvider(kvdb))
if err := genesis.Initialize(ctx, dataDir, providers...); err != nil {
return nil, errors.Wrap(err, "could not initialize genesis state")
@@ -832,6 +828,7 @@ func (b *BeaconNode) registerSyncService(initialSyncComplete chan struct{}, bFil
regularsync.WithSlasherEnabled(b.slasherEnabled),
regularsync.WithLightClientStore(b.lcStore),
regularsync.WithBatchVerifierLimit(b.cliCtx.Int(flags.BatchVerifierLimit.Name)),
regularsync.WithBlobToAddress(b.cliCtx.String(flags.BlobToAddressFlag.Name)),
)
return b.services.RegisterService(rs)
}
@@ -1109,7 +1106,6 @@ func (b *BeaconNode) registerPrunerService(cliCtx *cli.Context) error {
genesis,
initSyncWaiter(cliCtx.Context, b.initialSyncComplete),
backfillService.WaitForCompletion,
b.fetchP2P(),
opts...,
)
if err != nil {
@@ -1136,8 +1132,10 @@ func (b *BeaconNode) registerLightClientStore() {
func hasNetworkFlag(cliCtx *cli.Context) bool {
for _, flag := range features.NetworkFlags {
if slices.ContainsFunc(flag.Names(), cliCtx.IsSet) {
return true
for _, name := range flag.Names() {
if cliCtx.IsSet(name) {
return true
}
}
}
return false

View File

@@ -10,7 +10,6 @@ import (
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/api"
"github.com/OffchainLabs/prysm/v6/api/server/middleware"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain"
"github.com/OffchainLabs/prysm/v6/beacon-chain/builder"
@@ -250,18 +249,18 @@ func TestCORS(t *testing.T) {
// Create a request and response recorder
req := httptest.NewRequest("GET", "http://example.com/some-path", nil)
req.Header.Set(api.OriginHeader, tc.origin)
req.Header.Set("Origin", tc.origin)
rr := httptest.NewRecorder()
// Serve HTTP
handler.ServeHTTP(rr, req)
// Check the CORS headers based on the expected outcome
if tc.expectAllow && rr.Header().Get(api.AccessControlAllowOriginHeader) != tc.origin {
t.Errorf("Expected Access-Control-Allow-Origin header to be %v, got %v", tc.origin, rr.Header().Get(api.AccessControlAllowOriginHeader))
if tc.expectAllow && rr.Header().Get("Access-Control-Allow-Origin") != tc.origin {
t.Errorf("Expected Access-Control-Allow-Origin header to be %v, got %v", tc.origin, rr.Header().Get("Access-Control-Allow-Origin"))
}
if !tc.expectAllow && rr.Header().Get(api.AccessControlAllowOriginHeader) != "" {
t.Errorf("Expected Access-Control-Allow-Origin header to be empty for disallowed origin, got %v", rr.Header().Get(api.AccessControlAllowOriginHeader))
if !tc.expectAllow && rr.Header().Get("Access-Control-Allow-Origin") != "" {
t.Errorf("Expected Access-Control-Allow-Origin header to be empty for disallowed origin, got %v", rr.Header().Get("Access-Control-Allow-Origin"))
}
})
}

View File

@@ -10,13 +10,11 @@ import (
// pruneExpired prunes attestations pool on every slot interval.
func (s *Service) pruneExpired() {
secondsPerSlot := params.BeaconConfig().SecondsPerSlot
offset := time.Duration(secondsPerSlot-1) * time.Second
slotTicker := slots.NewSlotTickerWithOffset(s.genesisTime, offset, secondsPerSlot)
defer slotTicker.Done()
ticker := time.NewTicker(s.cfg.pruneInterval)
defer ticker.Stop()
for {
select {
case <-slotTicker.C():
case <-ticker.C:
s.pruneExpiredAtts()
s.updateMetrics()
case <-s.ctx.Done():

View File

@@ -17,9 +17,7 @@ import (
)
func TestPruneExpired_Ticker(t *testing.T) {
// Need timeout longer than the offset (secondsPerSlot - 1) + some buffer
timeout := time.Duration(params.BeaconConfig().SecondsPerSlot+5) * time.Second
ctx, cancel := context.WithTimeout(t.Context(), timeout)
ctx, cancel := context.WithTimeout(t.Context(), 3*time.Second)
defer cancel()
s, err := NewService(ctx, &Config{

View File

@@ -7,7 +7,6 @@ import (
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/sirupsen/logrus"
)
// This is the default queue size used if we have specified an invalid one.
@@ -64,17 +63,12 @@ func (cfg *Config) connManagerLowHigh() (int, int) {
return low, high
}
// validateConfig validates whether the provided config has valid values and sets
// the invalid ones to default.
func validateConfig(cfg *Config) {
if cfg.QueueSize > 0 {
return
// validateConfig validates whether the values provided are accurate and will set
// the appropriate values for those that are invalid.
func validateConfig(cfg *Config) *Config {
if cfg.QueueSize == 0 {
log.Warnf("Invalid pubsub queue size of %d initialized, setting the quese size as %d instead", cfg.QueueSize, defaultPubsubQueueSize)
cfg.QueueSize = defaultPubsubQueueSize
}
log.WithFields(logrus.Fields{
"queueSize": cfg.QueueSize,
"default": defaultPubsubQueueSize,
}).Warning("Invalid pubsub queue size, setting the queue size to the default value")
cfg.QueueSize = defaultPubsubQueueSize
return cfg
}

View File

@@ -115,57 +115,6 @@ func (s *Service) UpdateCustodyInfo(earliestAvailableSlot primitives.Slot, custo
return earliestAvailableSlot, custodyGroupCount, nil
}
// UpdateEarliestAvailableSlot updates the earliest available slot.
//
// IMPORTANT: This function should only be called when Fulu is enabled. The caller is responsible
// for checking params.FuluEnabled() before calling this function.
func (s *Service) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
s.custodyInfoLock.Lock()
defer s.custodyInfoLock.Unlock()
if s.custodyInfo == nil {
return errors.New("no custody info available")
}
currentSlot := slots.CurrentSlot(s.genesisTime)
currentEpoch := slots.ToEpoch(currentSlot)
// Allow decrease (for backfill scenarios)
if earliestAvailableSlot < s.custodyInfo.earliestAvailableSlot {
s.custodyInfo.earliestAvailableSlot = earliestAvailableSlot
return nil
}
// Prevent increase within the MIN_EPOCHS_FOR_BLOCK_REQUESTS period
// This ensures we don't voluntarily refuse to serve mandatory block data
// This check applies regardless of whether we're early or late in the chain
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
// Calculate the minimum required epoch (or 0 if we're early in the chain)
minRequiredEpoch := primitives.Epoch(0)
if currentEpoch > minEpochsForBlocks {
minRequiredEpoch = currentEpoch - minEpochsForBlocks
}
// Convert to slot to ensure we compare at slot-level granularity, not epoch-level
// This prevents allowing increases to slots within minRequiredEpoch that are after its first slot
minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
if err != nil {
return errors.Wrap(err, "epoch start")
}
// Prevent any increase that would put earliest slot beyond the minimum required slot
if earliestAvailableSlot > s.custodyInfo.earliestAvailableSlot && earliestAvailableSlot > minRequiredSlot {
return errors.Errorf(
"cannot increase earliest available slot to %d (epoch %d) as it exceeds minimum required slot %d (epoch %d)",
earliestAvailableSlot, slots.ToEpoch(earliestAvailableSlot), minRequiredSlot, minRequiredEpoch,
)
}
s.custodyInfo.earliestAvailableSlot = earliestAvailableSlot
return nil
}
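As a worked example of the guard above (illustrative numbers using mainnet's MIN_EPOCHS_FOR_BLOCK_REQUESTS of 33024 and 32 slots per epoch): a node at epoch 40000 has minRequiredEpoch = 40000 - 33024 = 6976, so minRequiredSlot = 6976 * 32 = 223232. Raising the earliest available slot to anything beyond slot 223232 is rejected, because peers may still legitimately request that data, while lowering it (the backfill case) is always allowed.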
// CustodyGroupCountFromPeer retrieves custody group count from a peer.
// It first tries to get the custody group count from the peer's metadata,
// then falls back to the ENR value if the metadata is not available, then
@@ -259,11 +208,11 @@ func (s *Service) custodyGroupCountFromPeerENR(pid peer.ID) uint64 {
}
func fuluForkSlot() (primitives.Slot, error) {
cfg := params.BeaconConfig()
beaconConfig := params.BeaconConfig()
fuluForkEpoch := cfg.FuluForkEpoch
if fuluForkEpoch == cfg.FarFutureEpoch {
return cfg.FarFutureSlot, nil
fuluForkEpoch := beaconConfig.FuluForkEpoch
if fuluForkEpoch == beaconConfig.FarFutureEpoch {
return beaconConfig.FarFutureSlot, nil
}
forkFuluSlot, err := slots.EpochStart(fuluForkEpoch)

View File

@@ -4,7 +4,6 @@ import (
"context"
"strings"
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
@@ -168,148 +167,6 @@ func TestUpdateCustodyInfo(t *testing.T) {
}
}
func TestUpdateEarliestAvailableSlot(t *testing.T) {
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.FuluForkEpoch = 0 // Enable Fulu from epoch 0
params.OverrideBeaconConfig(config)
t.Run("Valid update", func(t *testing.T) {
const (
initialSlot primitives.Slot = 50
newSlot primitives.Slot = 100
groupCount uint64 = 5
)
// Set up a scenario where we're far enough in the chain that increasing to newSlot is valid
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
currentEpoch := minEpochsForBlocks + 100 // Well beyond MIN_EPOCHS_FOR_BLOCK_REQUESTS
currentSlot := primitives.Slot(currentEpoch) * primitives.Slot(params.BeaconConfig().SlotsPerEpoch)
service := &Service{
// Set genesis time in the past so currentSlot is the "current" slot
genesisTime: time.Now().Add(-time.Duration(currentSlot) * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second),
custodyInfo: &custodyInfo{
earliestAvailableSlot: initialSlot,
groupCount: groupCount,
},
}
err := service.UpdateEarliestAvailableSlot(newSlot)
require.NoError(t, err)
require.Equal(t, newSlot, service.custodyInfo.earliestAvailableSlot)
require.Equal(t, groupCount, service.custodyInfo.groupCount) // Should preserve group count
})
t.Run("Earlier slot - allowed for backfill", func(t *testing.T) {
const initialSlot primitives.Slot = 100
const earlierSlot primitives.Slot = 50
service := &Service{
genesisTime: time.Now(),
custodyInfo: &custodyInfo{
earliestAvailableSlot: initialSlot,
groupCount: 5,
},
}
err := service.UpdateEarliestAvailableSlot(earlierSlot)
require.NoError(t, err)
require.Equal(t, earlierSlot, service.custodyInfo.earliestAvailableSlot) // Should decrease for backfill
})
t.Run("Prevent increase within MIN_EPOCHS_FOR_BLOCK_REQUESTS - late in chain", func(t *testing.T) {
// Set current time far enough in the future to have a meaningful MIN_EPOCHS_FOR_BLOCK_REQUESTS period
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
currentEpoch := minEpochsForBlocks + 100 // Well beyond the minimum
currentSlot := primitives.Slot(currentEpoch) * primitives.Slot(params.BeaconConfig().SlotsPerEpoch)
// Calculate the minimum allowed epoch
minRequiredEpoch := currentEpoch - minEpochsForBlocks
minRequiredSlot := primitives.Slot(minRequiredEpoch) * primitives.Slot(params.BeaconConfig().SlotsPerEpoch)
// Try to set earliest slot to a value within the MIN_EPOCHS_FOR_BLOCK_REQUESTS period (should fail)
attemptedSlot := minRequiredSlot + 1000 // Within the mandatory retention period
service := &Service{
genesisTime: time.Now().Add(-time.Duration(currentSlot) * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second),
custodyInfo: &custodyInfo{
earliestAvailableSlot: minRequiredSlot - 100, // Current value is before the min required
groupCount: 5,
},
}
err := service.UpdateEarliestAvailableSlot(attemptedSlot)
require.NotNil(t, err)
require.Equal(t, true, strings.Contains(err.Error(), "cannot increase earliest available slot"))
})
t.Run("Prevent increase at epoch boundary - slot precision matters", func(t *testing.T) {
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
currentEpoch := minEpochsForBlocks + 976 // Current epoch
currentSlot := primitives.Slot(currentEpoch) * primitives.Slot(params.BeaconConfig().SlotsPerEpoch)
minRequiredEpoch := currentEpoch - minEpochsForBlocks // = 976
storedEarliestSlot := primitives.Slot(minRequiredEpoch)*primitives.Slot(params.BeaconConfig().SlotsPerEpoch) - 232 // Before minRequired
// Try to set earliest to slot 8 of the minRequiredEpoch (should fail with slot comparison)
attemptedSlot := primitives.Slot(minRequiredEpoch)*primitives.Slot(params.BeaconConfig().SlotsPerEpoch) + 8
service := &Service{
genesisTime: time.Now().Add(-time.Duration(currentSlot) * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second),
custodyInfo: &custodyInfo{
earliestAvailableSlot: storedEarliestSlot,
groupCount: 5,
},
}
err := service.UpdateEarliestAvailableSlot(attemptedSlot)
require.NotNil(t, err, "Should prevent increasing earliest slot beyond the minimum required SLOT (not just epoch)")
require.Equal(t, true, strings.Contains(err.Error(), "cannot increase earliest available slot"))
})
t.Run("Prevent increase within MIN_EPOCHS_FOR_BLOCK_REQUESTS - early in chain", func(t *testing.T) {
minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)
currentEpoch := minEpochsForBlocks - 10 // Early in chain, BEFORE we have MIN_EPOCHS_FOR_BLOCK_REQUESTS of history
currentSlot := primitives.Slot(currentEpoch) * primitives.Slot(params.BeaconConfig().SlotsPerEpoch)
// Current earliest slot is at slot 100
currentEarliestSlot := primitives.Slot(100)
// Try to increase earliest slot to slot 1000 (which would be within the mandatory window from currentSlot)
attemptedSlot := primitives.Slot(1000)
service := &Service{
genesisTime: time.Now().Add(-time.Duration(currentSlot) * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second),
custodyInfo: &custodyInfo{
earliestAvailableSlot: currentEarliestSlot,
groupCount: 5,
},
}
err := service.UpdateEarliestAvailableSlot(attemptedSlot)
require.NotNil(t, err, "Should prevent increasing earliest slot within the mandatory retention window, even early in chain")
require.Equal(t, true, strings.Contains(err.Error(), "cannot increase earliest available slot"))
})
t.Run("Nil custody info - should return error", func(t *testing.T) {
service := &Service{
genesisTime: time.Now(),
custodyInfo: nil, // No custody info set
}
err := service.UpdateEarliestAvailableSlot(100)
require.NotNil(t, err)
require.Equal(t, true, strings.Contains(err.Error(), "no custody info available"))
})
}
func TestCustodyGroupCountFromPeer(t *testing.T) {
const (
expectedENR uint64 = 7

View File

@@ -126,7 +126,6 @@ type (
EarliestAvailableSlot(ctx context.Context) (primitives.Slot, error)
CustodyGroupCount(ctx context.Context) (uint64, error)
UpdateCustodyInfo(earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error)
UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error
CustodyGroupCountFromPeer(peer.ID) uint64
}
)

View File

@@ -3,7 +3,6 @@ package p2p
import (
"strings"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/core/peerstore"
"github.com/prometheus/client_golang/prometheus"
@@ -27,25 +26,12 @@ var (
Help: "The number of peers in a given state.",
},
[]string{"state"})
p2pMaxPeers = promauto.NewGauge(prometheus.GaugeOpts{
Name: "p2p_max_peers",
Help: "The target maximum number of peers.",
})
p2pPeerCountDirectionType = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "p2p_peer_count_direction_type",
Help: "The number of peers in a given direction and type.",
},
[]string{"direction", "type"})
connectedPeersCount = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "connected_libp2p_peers",
Help: "Tracks the total number of connected libp2p peers by agent string",
},
[]string{"agent"},
)
minimumPeersPerSubnet = promauto.NewGauge(prometheus.GaugeOpts{
Name: "p2p_minimum_peers_per_subnet",
Help: "The minimum number of peers to connect to per subnet",
})
avgScoreConnectedClients = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "connected_libp2p_peers_average_scores",
Help: "Tracks the overall p2p scores of connected libp2p peers by agent string",
@@ -188,26 +174,18 @@ var (
)
func (s *Service) updateMetrics() {
store := s.Host().Peerstore()
connectedPeers := s.peers.Connected()
p2pPeerCount.WithLabelValues("Connected").Set(float64(len(connectedPeers)))
p2pPeerCount.WithLabelValues("Disconnected").Set(float64(len(s.peers.Disconnected())))
p2pPeerCount.WithLabelValues("Connecting").Set(float64(len(s.peers.Connecting())))
p2pPeerCount.WithLabelValues("Disconnecting").Set(float64(len(s.peers.Disconnecting())))
p2pPeerCount.WithLabelValues("Bad").Set(float64(len(s.peers.Bad())))
upperTCP := strings.ToUpper(string(peers.TCP))
upperQUIC := strings.ToUpper(string(peers.QUIC))
p2pPeerCountDirectionType.WithLabelValues("inbound", upperTCP).Set(float64(len(s.peers.InboundConnectedWithProtocol(peers.TCP))))
p2pPeerCountDirectionType.WithLabelValues("inbound", upperQUIC).Set(float64(len(s.peers.InboundConnectedWithProtocol(peers.QUIC))))
p2pPeerCountDirectionType.WithLabelValues("outbound", upperTCP).Set(float64(len(s.peers.OutboundConnectedWithProtocol(peers.TCP))))
p2pPeerCountDirectionType.WithLabelValues("outbound", upperQUIC).Set(float64(len(s.peers.OutboundConnectedWithProtocol(peers.QUIC))))
connectedPeersCountByClient := make(map[string]float64)
store := s.Host().Peerstore()
numConnectedPeersByClient := make(map[string]float64)
peerScoresByClient := make(map[string][]float64)
for _, p := range connectedPeers {
for i := 0; i < len(connectedPeers); i++ {
p := connectedPeers[i]
pid, err := peer.Decode(p.String())
if err != nil {
log.WithError(err).Debug("Could not decode peer string")
@@ -215,18 +193,16 @@ func (s *Service) updateMetrics() {
}
foundName := agentFromPid(pid, store)
connectedPeersCountByClient[foundName] += 1
numConnectedPeersByClient[foundName] += 1
// Get peer scoring data.
overallScore := s.peers.Scorers().Score(pid)
peerScoresByClient[foundName] = append(peerScoresByClient[foundName], overallScore)
}
connectedPeersCount.Reset() // Clear out previous results.
for agent, total := range connectedPeersCountByClient {
for agent, total := range numConnectedPeersByClient {
connectedPeersCount.WithLabelValues(agent).Set(total)
}
avgScoreConnectedClients.Reset() // Clear out previous results.
for agent, scoringData := range peerScoresByClient {
avgScore := average(scoringData)

View File

@@ -25,7 +25,6 @@ package peers
import (
"context"
"net"
"slices"
"sort"
"strings"
"time"
@@ -82,31 +81,29 @@ const (
type InternetProtocol string
const (
TCP = InternetProtocol("tcp")
QUIC = InternetProtocol("quic")
TCP = "tcp"
QUIC = "quic"
)
type (
// Status is the structure holding the peer status information.
Status struct {
ctx context.Context
scorers *scorers.Service
store *peerdata.Store
ipTracker map[string]uint64
rand *rand.Rand
ipColocationWhitelist []*net.IPNet
}
// Status is the structure holding the peer status information.
type Status struct {
ctx context.Context
scorers *scorers.Service
store *peerdata.Store
ipTracker map[string]uint64
rand *rand.Rand
ipColocationWhitelist []*net.IPNet
}
// StatusConfig represents peer status service params.
StatusConfig struct {
// PeerLimit specifies the maximum number of concurrent peers that are expected to connect to the node.
PeerLimit int
// ScorerParams holds peer scorer configuration params.
ScorerParams *scorers.Config
// IPColocationWhitelist contains CIDR ranges that are exempt from IP colocation limits.
IPColocationWhitelist []*net.IPNet
}
)
// StatusConfig represents peer status service params.
type StatusConfig struct {
// PeerLimit specifies the maximum number of concurrent peers that are expected to connect to the node.
PeerLimit int
// ScorerParams holds peer scorer configuration params.
ScorerParams *scorers.Config
// IPColocationWhitelist contains CIDR ranges that are exempt from IP colocation limits.
IPColocationWhitelist []*net.IPNet
}
// NewStatus creates a new status entity.
func NewStatus(ctx context.Context, config *StatusConfig) *Status {
@@ -307,8 +304,11 @@ func (p *Status) SubscribedToSubnet(index uint64) []peer.ID {
connectedStatus := peerData.ConnState == Connecting || peerData.ConnState == Connected
if connectedStatus && peerData.MetaData != nil && !peerData.MetaData.IsNil() && peerData.MetaData.AttnetsBitfield() != nil {
indices := indicesFromBitfield(peerData.MetaData.AttnetsBitfield())
if slices.Contains(indices, index) {
peers = append(peers, pid)
for _, idx := range indices {
if idx == index {
peers = append(peers, pid)
break
}
}
}
}

View File

@@ -345,17 +345,17 @@ func TopicFromMessage(msg string, epoch primitives.Epoch) (string, error) {
return "", errors.Errorf("%s: %s", invalidRPCMessageType, msg)
}
cfg := params.BeaconConfig()
beaconConfig := params.BeaconConfig()
// Check if the message is to be updated in fulu.
if epoch >= cfg.FuluForkEpoch {
if epoch >= beaconConfig.FuluForkEpoch {
if version, ok := fuluMapping[msg]; ok {
return protocolPrefix + msg + version, nil
}
}
// Check if the message is to be updated in altair.
if epoch >= cfg.AltairForkEpoch {
if epoch >= beaconConfig.AltairForkEpoch {
if version, ok := altairMapping[msg]; ok {
return protocolPrefix + msg + version, nil
}

View File

@@ -14,7 +14,6 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/scorers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v6/config/features"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
@@ -107,16 +106,12 @@ func NewService(ctx context.Context, cfg *Config) (*Service, error) {
ctx, cancel := context.WithCancel(ctx)
_ = cancel // govet fix for lost cancel. Cancel is handled in service.Stop().
validateConfig(cfg)
cfg = validateConfig(cfg)
privKey, err := privKey(cfg)
if err != nil {
return nil, errors.Wrapf(err, "failed to generate p2p private key")
}
p2pMaxPeers.Set(float64(cfg.MaxPeers))
minimumPeersPerSubnet.Set(float64(flags.Get().MinimumPeersPerSubnet))
metaData, err := metaDataFromDB(ctx, cfg.DB)
if err != nil {
log.WithError(err).Error("Failed to create peer metadata")

View File

@@ -514,26 +514,17 @@ func initializePersistentSubnets(id enode.ID, epoch primitives.Epoch) error {
//
// return [compute_subscribed_subnet(node_id, epoch, index) for index in range(SUBNETS_PER_NODE)]
func computeSubscribedSubnets(nodeID enode.ID, epoch primitives.Epoch) ([]uint64, error) {
cfg := params.BeaconConfig()
subnetsPerNode := params.BeaconConfig().SubnetsPerNode
subs := make([]uint64, 0, subnetsPerNode)
if flags.Get().SubscribeToAllSubnets {
subnets := make([]uint64, 0, cfg.AttestationSubnetCount)
for i := range cfg.AttestationSubnetCount {
subnets = append(subnets, i)
}
return subnets, nil
}
subnets := make([]uint64, 0, cfg.SubnetsPerNode)
for i := range cfg.SubnetsPerNode {
for i := uint64(0); i < subnetsPerNode; i++ {
sub, err := computeSubscribedSubnet(nodeID, epoch, i)
if err != nil {
return nil, errors.Wrap(err, "compute subscribed subnet")
return nil, err
}
subnets = append(subnets, sub)
subs = append(subs, sub)
}
return subnets, nil
return subs, nil
}
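The two variants shown above differ in whether --subscribe-all-subnets is handled inside this function. A minimal usage sketch, assuming the variant that handles it, mainnet's 64 attestation subnets and 2 subnets per node, and illustrative error handling:
// Derive the attestation subnets this node should advertise for the epoch.
subnets, err := computeSubscribedSubnets(localNode.ID(), currentEpoch)
if err != nil {
	return errors.Wrap(err, "compute subscribed subnets")
}
// With --subscribe-all-subnets: len(subnets) == 64, i.e. [0, 1, ..., 63].
// Default: len(subnets) == 2, derived from the node ID and epoch, and typically
// consecutive — the adjacent test asserts retrievedSubnets[0]+1 == retrievedSubnets[1].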
// Spec pseudocode definition:

View File

@@ -514,39 +514,17 @@ func TestDataColumnSubnets(t *testing.T) {
func TestSubnetComputation(t *testing.T) {
db, err := enode.OpenDB("")
require.NoError(t, err)
assert.NoError(t, err)
defer db.Close()
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
require.NoError(t, err)
assert.NoError(t, err)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
require.NoError(t, err)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
cfg := params.BeaconConfig()
t.Run("standard", func(t *testing.T) {
retrievedSubnets, err := computeSubscribedSubnets(localNode.ID(), 1000)
require.NoError(t, err)
require.Equal(t, cfg.SubnetsPerNode, uint64(len(retrievedSubnets)))
require.Equal(t, retrievedSubnets[0]+1, retrievedSubnets[1])
})
t.Run("subscribed to all", func(t *testing.T) {
gFlags := new(flags.GlobalFlags)
gFlags.SubscribeToAllSubnets = true
flags.Init(gFlags)
defer flags.Init(new(flags.GlobalFlags))
retrievedSubnets, err := computeSubscribedSubnets(localNode.ID(), 1000)
require.NoError(t, err)
require.Equal(t, cfg.AttestationSubnetCount, uint64(len(retrievedSubnets)))
for i := range cfg.AttestationSubnetCount {
require.Equal(t, i, retrievedSubnets[i])
}
})
retrievedSubnets, err := computeSubscribedSubnets(localNode.ID(), 1000)
assert.NoError(t, err)
assert.Equal(t, retrievedSubnets[0]+1, retrievedSubnets[1])
}
func TestInitializePersistentSubnets(t *testing.T) {

View File

@@ -213,11 +213,6 @@ func (s *FakeP2P) UpdateCustodyInfo(earliestAvailableSlot primitives.Slot, custo
return earliestAvailableSlot, custodyGroupCount, nil
}
// UpdateEarliestAvailableSlot -- fake.
func (*FakeP2P) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
return nil
}
// CustodyGroupCountFromPeer -- fake.
func (*FakeP2P) CustodyGroupCountFromPeer(peer.ID) uint64 {
return 0

View File

@@ -499,15 +499,6 @@ func (s *TestP2P) UpdateCustodyInfo(earliestAvailableSlot primitives.Slot, custo
return s.earliestAvailableSlot, s.custodyGroupCount, nil
}
// UpdateEarliestAvailableSlot .
func (s *TestP2P) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
s.custodyInfoMut.Lock()
defer s.custodyInfoMut.Unlock()
s.earliestAvailableSlot = earliestAvailableSlot
return nil
}
// CustodyGroupCountFromPeer retrieves custody group count from a peer.
// It first tries to get the custody group count from the peer's metadata,
// then falls back to the ENR value if the metadata is not available, then

View File

@@ -68,10 +68,7 @@ func privKey(cfg *Config) (*ecdsa.PrivateKey, error) {
}
if defaultKeysExist {
if !params.FuluEnabled() {
log.WithField("filePath", defaultKeyPath).Info("Reading static P2P private key from a file. To generate a new random private key at every start, please remove this file.")
}
log.WithField("filePath", defaultKeyPath).Info("Reading static P2P private key from a file. To generate a new random private key at every start, please remove this file.")
return privKeyFromFile(defaultKeyPath)
}

View File

@@ -97,7 +97,7 @@ func (s *Service) endpoints(
endpoints = append(endpoints, s.beaconEndpoints(ch, stater, blocker, validatorServer, coreService)...)
endpoints = append(endpoints, s.configEndpoints()...)
endpoints = append(endpoints, s.eventsEndpoints()...)
endpoints = append(endpoints, s.prysmBeaconEndpoints(ch, stater, blocker, coreService)...)
endpoints = append(endpoints, s.prysmBeaconEndpoints(ch, stater, coreService)...)
endpoints = append(endpoints, s.prysmNodeEndpoints()...)
endpoints = append(endpoints, s.prysmValidatorEndpoints(stater, coreService)...)
@@ -1184,7 +1184,6 @@ func (s *Service) eventsEndpoints() []endpoint {
func (s *Service) prysmBeaconEndpoints(
ch *stategen.CanonicalHistory,
stater lookup.Stater,
blocker lookup.Blocker,
coreService *core.Service,
) []endpoint {
server := &beaconprysm.Server{
@@ -1195,7 +1194,6 @@ func (s *Service) prysmBeaconEndpoints(
CanonicalHistory: ch,
BeaconDB: s.cfg.BeaconDB,
Stater: stater,
Blocker: blocker,
ChainInfoFetcher: s.cfg.ChainInfoFetcher,
FinalizationFetcher: s.cfg.FinalizationFetcher,
CoreService: coreService,
@@ -1268,28 +1266,6 @@ func (s *Service) prysmBeaconEndpoints(
handler: server.PublishBlobs,
methods: []string{http.MethodPost},
},
{
template: "/prysm/v1/beacon/states/{state_id}/query",
name: namespace + ".QueryBeaconState",
middleware: []middleware.Middleware{
middleware.ContentTypeHandler([]string{api.JsonMediaType}),
middleware.AcceptHeaderHandler([]string{api.OctetStreamMediaType}),
middleware.AcceptEncodingHeaderHandler(),
},
handler: server.QueryBeaconState,
methods: []string{http.MethodPost},
},
{
template: "/prysm/v1/beacon/blocks/{block_id}/query",
name: namespace + ".QueryBeaconBlock",
middleware: []middleware.Middleware{
middleware.ContentTypeHandler([]string{api.JsonMediaType}),
middleware.AcceptHeaderHandler([]string{api.OctetStreamMediaType}),
middleware.AcceptEncodingHeaderHandler(),
},
handler: server.QueryBeaconBlock,
methods: []string{http.MethodPost},
},
}
}

View File

@@ -127,8 +127,6 @@ func Test_endpoints(t *testing.T) {
"/prysm/v1/beacon/states/{state_id}/validator_count": {http.MethodGet},
"/prysm/v1/beacon/chain_head": {http.MethodGet},
"/prysm/v1/beacon/blobs": {http.MethodPost},
"/prysm/v1/beacon/states/{state_id}/query": {http.MethodPost},
"/prysm/v1/beacon/blocks/{block_id}/query": {http.MethodPost},
}
prysmNodeRoutes := map[string][]string{

View File

@@ -101,9 +101,9 @@ func versionHeaderFromRequest(body []byte) (string, error) {
// from the request. If the version header is not provided and not required, it attempts
// to derive it from the request body.
func validateVersionHeader(r *http.Request, body []byte, versionRequired bool) (string, error) {
versionHeader := r.Header.Get(api.EthConsensusVersionHeader)
versionHeader := r.Header.Get(api.VersionHeader)
if versionRequired && versionHeader == "" {
return "", fmt.Errorf("%s header is required", api.EthConsensusVersionHeader)
return "", fmt.Errorf("%s header is required", api.VersionHeader)
}
if !versionRequired && versionHeader == "" {
@@ -219,7 +219,7 @@ func (s *Server) getBlockV2Ssz(w http.ResponseWriter, blk interfaces.ReadOnlySig
httputil.HandleError(w, fmt.Sprintf("Unknown block type %T", blk), http.StatusInternalServerError)
return
}
w.Header().Set(api.EthConsensusVersionHeader, version.String(blk.Version()))
w.Header().Set(api.VersionHeader, version.String(blk.Version()))
httputil.WriteSsz(w, result)
}
@@ -254,7 +254,7 @@ func (s *Server) getBlockV2Json(ctx context.Context, w http.ResponseWriter, blk
httputil.HandleError(w, fmt.Sprintf("Unknown block type %T", blk), http.StatusInternalServerError)
return
}
w.Header().Set(api.EthConsensusVersionHeader, result.Version)
w.Header().Set(api.VersionHeader, result.Version)
httputil.WriteJson(w, result)
}
@@ -368,7 +368,7 @@ func (s *Server) GetBlockAttestationsV2(w http.ResponseWriter, r *http.Request)
Finalized: s.FinalizationFetcher.IsFinalized(ctx, root),
Data: attBytes,
}
w.Header().Set(api.EthConsensusVersionHeader, version.String(v))
w.Header().Set(api.VersionHeader, version.String(v))
httputil.WriteJson(w, resp)
}
@@ -1693,7 +1693,7 @@ func (s *Server) GetPendingConsolidations(w http.ResponseWriter, r *http.Request
httputil.HandleError(w, "Could not get pending consolidations: "+err.Error(), http.StatusInternalServerError)
return
}
w.Header().Set(api.EthConsensusVersionHeader, version.String(st.Version()))
w.Header().Set(api.VersionHeader, version.String(st.Version()))
if httputil.RespondWithSsz(r) {
sszData, err := serializeItems(pd)
if err != nil {
@@ -1749,7 +1749,7 @@ func (s *Server) GetPendingDeposits(w http.ResponseWriter, r *http.Request) {
httputil.HandleError(w, "Could not get pending deposits: "+err.Error(), http.StatusInternalServerError)
return
}
w.Header().Set(api.EthConsensusVersionHeader, version.String(st.Version()))
w.Header().Set(api.VersionHeader, version.String(st.Version()))
if httputil.RespondWithSsz(r) {
sszData, err := serializeItems(pd)
if err != nil {
@@ -1805,7 +1805,7 @@ func (s *Server) GetPendingPartialWithdrawals(w http.ResponseWriter, r *http.Req
httputil.HandleError(w, "Could not get pending partial withdrawals: "+err.Error(), http.StatusInternalServerError)
return
}
w.Header().Set(api.EthConsensusVersionHeader, version.String(st.Version()))
w.Header().Set(api.VersionHeader, version.String(st.Version()))
if httputil.RespondWithSsz(r) {
sszData, err := serializeItems(ppw)
if err != nil {
@@ -1858,7 +1858,7 @@ func (s *Server) GetProposerLookahead(w http.ResponseWriter, r *http.Request) {
httputil.HandleError(w, "Could not get proposer look ahead: "+err.Error(), http.StatusInternalServerError)
return
}
w.Header().Set(api.EthConsensusVersionHeader, version.String(st.Version()))
w.Header().Set(api.VersionHeader, version.String(st.Version()))
if httputil.RespondWithSsz(r) {
sszLen := (*primitives.ValidatorIndex)(nil).SizeSSZ()
sszData := make([]byte, len(pl)*sszLen)

View File

@@ -150,7 +150,7 @@ func (s *Server) ListAttestationsV2(w http.ResponseWriter, r *http.Request) {
return
}
w.Header().Set(api.EthConsensusVersionHeader, version.String(v))
w.Header().Set(api.VersionHeader, version.String(v))
httputil.WriteJson(w, &structs.ListAttestationsResponse{
Version: version.String(v),
Data: attsData,
@@ -225,9 +225,9 @@ func (s *Server) SubmitAttestationsV2(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "beacon.SubmitAttestationsV2")
defer span.End()
versionHeader := r.Header.Get(api.EthConsensusVersionHeader)
versionHeader := r.Header.Get(api.VersionHeader)
if versionHeader == "" {
httputil.HandleError(w, api.EthConsensusVersionHeader+" header is required", http.StatusBadRequest)
httputil.HandleError(w, api.VersionHeader+" header is required", http.StatusBadRequest)
return
}
v, err := version.FromString(versionHeader)
@@ -826,7 +826,7 @@ func (s *Server) GetAttesterSlashingsV2(w http.ResponseWriter, r *http.Request)
Version: version.String(v),
Data: attBytes,
}
w.Header().Set(api.EthConsensusVersionHeader, version.String(v))
w.Header().Set(api.VersionHeader, version.String(v))
httputil.WriteJson(w, resp)
}
@@ -861,9 +861,9 @@ func (s *Server) SubmitAttesterSlashingsV2(w http.ResponseWriter, r *http.Reques
ctx, span := trace.StartSpan(r.Context(), "beacon.SubmitAttesterSlashingsV2")
defer span.End()
versionHeader := r.Header.Get(api.EthConsensusVersionHeader)
versionHeader := r.Header.Get(api.VersionHeader)
if versionHeader == "" {
httputil.HandleError(w, api.EthConsensusVersionHeader+" header is required", http.StatusBadRequest)
httputil.HandleError(w, api.VersionHeader+" header is required", http.StatusBadRequest)
}
v, err := version.FromString(versionHeader)
if err != nil {

View File

@@ -656,7 +656,7 @@ func TestSubmitAttestations(t *testing.T) {
_, err := body.WriteString(singleAtt)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.EthConsensusVersionHeader, version.String(version.Phase0))
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -685,7 +685,7 @@ func TestSubmitAttestations(t *testing.T) {
_, err := body.WriteString(multipleAtts)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.EthConsensusVersionHeader, version.String(version.Phase0))
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -705,7 +705,7 @@ func TestSubmitAttestations(t *testing.T) {
_, err := body.WriteString(singleAtt)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.EthConsensusVersionHeader, version.String(version.Phase0))
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -721,7 +721,7 @@ func TestSubmitAttestations(t *testing.T) {
_, err := body.WriteString(singleAttElectra)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.EthConsensusVersionHeader, version.String(version.Electra))
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -734,7 +734,7 @@ func TestSubmitAttestations(t *testing.T) {
})
t.Run("no body", func(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
request.Header.Set(api.EthConsensusVersionHeader, version.String(version.Phase0))
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -750,7 +750,7 @@ func TestSubmitAttestations(t *testing.T) {
_, err := body.WriteString("[]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.EthConsensusVersionHeader, version.String(version.Phase0))
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -766,7 +766,7 @@ func TestSubmitAttestations(t *testing.T) {
_, err := body.WriteString(invalidAtt)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.EthConsensusVersionHeader, version.String(version.Phase0))
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -794,7 +794,7 @@ func TestSubmitAttestations(t *testing.T) {
_, err := body.WriteString(singleAttElectra)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.EthConsensusVersionHeader, version.String(version.Electra))
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -823,7 +823,7 @@ func TestSubmitAttestations(t *testing.T) {
_, err := body.WriteString(multipleAttsElectra)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.EthConsensusVersionHeader, version.String(version.Electra))
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -835,7 +835,7 @@ func TestSubmitAttestations(t *testing.T) {
})
t.Run("no body", func(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
request.Header.Set(api.EthConsensusVersionHeader, version.String(version.Electra))
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -851,7 +851,7 @@ func TestSubmitAttestations(t *testing.T) {
_, err := body.WriteString("[]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.EthConsensusVersionHeader, version.String(version.Electra))
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -867,7 +867,7 @@ func TestSubmitAttestations(t *testing.T) {
_, err := body.WriteString(invalidAttElectra)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.EthConsensusVersionHeader, version.String(version.Electra))
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -2168,7 +2168,7 @@ func TestSubmitAttesterSlashings(t *testing.T) {
_, err = body.Write(b)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com/beacon/pool/attester_electras", &body)
request.Header.Set(api.EthConsensusVersionHeader, version.String(version.Electra))
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -2232,7 +2232,7 @@ func TestSubmitAttesterSlashings(t *testing.T) {
_, err = body.Write(b)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com/beacon/pool/attester_slashings", &body)
request.Header.Set(api.EthConsensusVersionHeader, version.String(version.Electra))
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -2262,7 +2262,7 @@ func TestSubmitAttesterSlashings(t *testing.T) {
_, err = body.WriteString(invalidAttesterSlashing)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com/beacon/pool/attester_slashings", &body)
request.Header.Set(api.EthConsensusVersionHeader, version.String(version.Electra))
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}

File diff suppressed because it is too large

View File

@@ -1442,7 +1442,7 @@ func TestGetValidatorIdentities(t *testing.T) {
_, err := body.WriteString("[]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com/eth/v1/beacon/states/{state_id}/validator_identities", &body)
request.Header.Set(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Set("Accept", api.OctetStreamMediaType)
request.SetPathValue("state_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -1466,7 +1466,7 @@ func TestGetValidatorIdentities(t *testing.T) {
_, err := body.WriteString("[\"0\",\"1\"]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com/eth/v1/beacon/states/{state_id}/validator_identities", &body)
request.Header.Set(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Set("Accept", api.OctetStreamMediaType)
request.SetPathValue("state_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -1494,7 +1494,7 @@ func TestGetValidatorIdentities(t *testing.T) {
_, err := body.WriteString(fmt.Sprintf("[\"%s\",\"%s\"]", hexPubkey1, hexPubkey2))
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com/eth/v1/beacon/states/{state_id}/validator_identities", &body)
request.Header.Set(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Set("Accept", api.OctetStreamMediaType)
request.SetPathValue("state_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -1521,7 +1521,7 @@ func TestGetValidatorIdentities(t *testing.T) {
_, err := body.WriteString(fmt.Sprintf("[\"%s\",\"1\"]", hexPubkey))
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com/eth/v1/beacon/states/{state_id}/validator_identities", &body)
request.Header.Set(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Set("Accept", api.OctetStreamMediaType)
request.SetPathValue("state_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -1548,7 +1548,7 @@ func TestGetValidatorIdentities(t *testing.T) {
_, err := body.WriteString(fmt.Sprintf("[\"%s\",\"%s\"]", hexPubkey, hexutil.Encode([]byte(strings.Repeat("x", fieldparams.BLSPubkeyLength)))))
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com/eth/v1/beacon/states/{state_id}/validator_identities", &body)
request.Header.Set(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Set("Accept", api.OctetStreamMediaType)
request.SetPathValue("state_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -1572,7 +1572,7 @@ func TestGetValidatorIdentities(t *testing.T) {
_, err := body.WriteString("[\"1\",\"99999\"]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com/eth/v1/beacon/states/{state_id}/validator_identities", &body)
request.Header.Set(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Set("Accept", api.OctetStreamMediaType)
request.SetPathValue("state_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}


@@ -68,7 +68,7 @@ func (s *Server) Blobs(w http.ResponseWriter, r *http.Request) {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
w.Header().Set(api.EthConsensusVersionHeader, version.String(blk.Version()))
w.Header().Set(api.VersionHeader, version.String(blk.Version()))
httputil.WriteSsz(w, sszResp)
return
}
@@ -91,7 +91,7 @@ func (s *Server) Blobs(w http.ResponseWriter, r *http.Request) {
ExecutionOptimistic: isOptimistic,
Finalized: s.FinalizationFetcher.IsFinalized(ctx, blkRoot),
}
w.Header().Set(api.EthConsensusVersionHeader, version.String(blk.Version()))
w.Header().Set(api.VersionHeader, version.String(blk.Version()))
httputil.WriteJson(w, resp)
}
@@ -180,7 +180,7 @@ func (s *Server) GetBlobs(w http.ResponseWriter, r *http.Request) {
copy(sszData[i*sszLen:(i+1)*sszLen], verifiedBlobs[i].Blob)
}
w.Header().Set(api.EthConsensusVersionHeader, version.String(blk.Version()))
w.Header().Set(api.VersionHeader, version.String(blk.Version()))
httputil.WriteSsz(w, sszData)
return
}
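
The SSZ branch above concatenates fixed-size blobs back-to-back, so element i can be recovered by slicing at a fixed offset, which is also why the tests below can size-check the raw body. A minimal, self-contained sketch with hypothetical sizes (blobSize stands in for fieldparams.BlobSize):

```go
package main

import "fmt"

func main() {
	const blobSize = 4 // stand-in for fieldparams.BlobSize
	blobs := [][]byte{{1, 1, 1, 1}, {2, 2, 2, 2}, {3, 3, 3, 3}}

	// Pack the blobs back-to-back, mirroring the copy(sszData[i*sszLen:(i+1)*sszLen], ...) loop above.
	flat := make([]byte, len(blobs)*blobSize)
	for i, b := range blobs {
		copy(flat[i*blobSize:(i+1)*blobSize], b)
	}

	// Recover the second blob by offset.
	fmt.Println(flat[1*blobSize : 2*blobSize]) // [2 2 2 2]
}
```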
@@ -205,7 +205,7 @@ func (s *Server) GetBlobs(w http.ResponseWriter, r *http.Request) {
ExecutionOptimistic: isOptimistic,
Finalized: s.FinalizationFetcher.IsFinalized(ctx, blkRoot),
}
w.Header().Set(api.EthConsensusVersionHeader, version.String(blk.Version()))
w.Header().Set(api.VersionHeader, version.String(blk.Version()))
httputil.WriteJson(w, resp)
}


@@ -227,7 +227,7 @@ func TestBlobs(t *testing.T) {
}
s.Blobs(writer, request)
assert.Equal(t, version.String(version.Deneb), writer.Header().Get(api.EthConsensusVersionHeader))
assert.Equal(t, version.String(version.Deneb), writer.Header().Get(api.VersionHeader))
assert.Equal(t, http.StatusOK, writer.Code)
resp := &structs.SidecarsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
@@ -381,7 +381,7 @@ func TestBlobs(t *testing.T) {
t.Run("ssz", func(t *testing.T) {
u := "http://foo.example/finalized?indices=0"
request := httptest.NewRequest("GET", u, nil)
request.Header.Add(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
@@ -393,7 +393,7 @@ func TestBlobs(t *testing.T) {
BlobStorage: bs,
}
s.Blobs(writer, request)
assert.Equal(t, version.String(version.Deneb), writer.Header().Get(api.EthConsensusVersionHeader))
assert.Equal(t, version.String(version.Deneb), writer.Header().Get(api.VersionHeader))
assert.Equal(t, http.StatusOK, writer.Code)
require.Equal(t, len(writer.Body.Bytes()), fieldparams.BlobSidecarSize) // size of each sidecar
// can directly unmarshal to sidecar since there's only 1
@@ -404,7 +404,7 @@ func TestBlobs(t *testing.T) {
t.Run("ssz multiple blobs", func(t *testing.T) {
u := "http://foo.example/finalized"
request := httptest.NewRequest("GET", u, nil)
request.Header.Add(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
@@ -467,7 +467,7 @@ func TestBlobs_Electra(t *testing.T) {
}
s.Blobs(writer, request)
assert.Equal(t, version.String(version.Electra), writer.Header().Get(api.EthConsensusVersionHeader))
assert.Equal(t, version.String(version.Electra), writer.Header().Get(api.VersionHeader))
assert.Equal(t, http.StatusOK, writer.Code)
resp := &structs.SidecarsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
@@ -499,7 +499,7 @@ func TestBlobs_Electra(t *testing.T) {
}
s.Blobs(writer, request)
assert.Equal(t, version.String(version.Electra), writer.Header().Get(api.EthConsensusVersionHeader))
assert.Equal(t, version.String(version.Electra), writer.Header().Get(api.VersionHeader))
assert.Equal(t, http.StatusOK, writer.Code)
resp := &structs.SidecarsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
@@ -867,7 +867,7 @@ func TestGetBlobs(t *testing.T) {
t.Run("ssz", func(t *testing.T) {
u := "http://foo.example/finalized"
request := httptest.NewRequest("GET", u, nil)
request.Header.Add(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
@@ -879,7 +879,7 @@ func TestGetBlobs(t *testing.T) {
BlobStorage: bs,
}
s.GetBlobs(writer, request)
assert.Equal(t, version.String(version.Deneb), writer.Header().Get(api.EthConsensusVersionHeader))
assert.Equal(t, version.String(version.Deneb), writer.Header().Get(api.VersionHeader))
assert.Equal(t, http.StatusOK, writer.Code)
require.Equal(t, fieldparams.BlobSize*4, len(writer.Body.Bytes())) // size of 4 sidecars
// unmarshal all 4 blobs
@@ -889,7 +889,7 @@ func TestGetBlobs(t *testing.T) {
t.Run("ssz multiple blobs", func(t *testing.T) {
u := "http://foo.example/finalized"
request := httptest.NewRequest("GET", u, nil)
request.Header.Add(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
@@ -1038,7 +1038,7 @@ func TestGetBlobs(t *testing.T) {
}
s.GetBlobs(writer, request)
assert.Equal(t, version.String(version.Electra), writer.Header().Get(api.EthConsensusVersionHeader))
assert.Equal(t, version.String(version.Electra), writer.Header().Get(api.VersionHeader))
assert.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetBlobsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
@@ -1078,7 +1078,7 @@ func TestGetBlobs(t *testing.T) {
}
s.GetBlobs(writer, request)
assert.Equal(t, version.String(version.Fulu), writer.Header().Get(api.EthConsensusVersionHeader))
assert.Equal(t, version.String(version.Fulu), writer.Header().Get(api.VersionHeader))
assert.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetBlobsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
@@ -1132,7 +1132,7 @@ func TestGetBlobs(t *testing.T) {
}
s.GetBlobs(writer, request)
assert.Equal(t, version.String(version.Fulu), writer.Header().Get(api.EthConsensusVersionHeader))
assert.Equal(t, version.String(version.Fulu), writer.Header().Get(api.VersionHeader))
assert.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetBlobsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))


@@ -129,7 +129,7 @@ func (s *Server) getBeaconStateV2(ctx context.Context, w http.ResponseWriter, id
Finalized: isFinalized,
Data: jsonBytes,
}
w.Header().Set(api.EthConsensusVersionHeader, ver)
w.Header().Set(api.VersionHeader, ver)
httputil.WriteJson(w, resp)
}
@@ -145,7 +145,7 @@ func (s *Server) getBeaconStateSSZV2(ctx context.Context, w http.ResponseWriter,
httputil.HandleError(w, "Could not marshal state into SSZ: "+err.Error(), http.StatusInternalServerError)
return
}
w.Header().Set(api.EthConsensusVersionHeader, version.String(st.Version()))
w.Header().Set(api.VersionHeader, version.String(st.Version()))
httputil.WriteSsz(w, sszState)
}
@@ -279,7 +279,7 @@ func (s *Server) DataColumnSidecars(w http.ResponseWriter, r *http.Request) {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
w.Header().Set(api.EthConsensusVersionHeader, version.String(blk.Version()))
w.Header().Set(api.VersionHeader, version.String(blk.Version()))
httputil.WriteSsz(w, sszResp)
return
}
@@ -302,7 +302,7 @@ func (s *Server) DataColumnSidecars(w http.ResponseWriter, r *http.Request) {
ExecutionOptimistic: isOptimistic,
Finalized: s.FinalizationFetcher.IsFinalized(ctx, blkRoot),
}
w.Header().Set(api.EthConsensusVersionHeader, version.String(blk.Version()))
w.Header().Set(api.VersionHeader, version.String(blk.Version()))
httputil.WriteJson(w, resp)
}


@@ -320,13 +320,13 @@ func TestGetBeaconStateSSZV2(t *testing.T) {
request := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v2/debug/beacon/states/{state_id}", nil)
request.SetPathValue("state_id", "head")
request.Header.Set(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Set("Accept", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetBeaconStateV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, version.String(version.Phase0), writer.Header().Get(api.EthConsensusVersionHeader))
assert.Equal(t, version.String(version.Phase0), writer.Header().Get(api.VersionHeader))
sszExpected, err := fakeState.MarshalSSZ()
require.NoError(t, err)
assert.DeepEqual(t, sszExpected, writer.Body.Bytes())
@@ -344,13 +344,13 @@ func TestGetBeaconStateSSZV2(t *testing.T) {
request := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v2/debug/beacon/states/{state_id}", nil)
request.SetPathValue("state_id", "head")
request.Header.Set(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Set("Accept", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetBeaconStateV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, version.String(version.Altair), writer.Header().Get(api.EthConsensusVersionHeader))
assert.Equal(t, version.String(version.Altair), writer.Header().Get(api.VersionHeader))
sszExpected, err := fakeState.MarshalSSZ()
require.NoError(t, err)
assert.DeepEqual(t, sszExpected, writer.Body.Bytes())
@@ -368,13 +368,13 @@ func TestGetBeaconStateSSZV2(t *testing.T) {
request := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v2/debug/beacon/states/{state_id}", nil)
request.SetPathValue("state_id", "head")
request.Header.Set(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Set("Accept", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetBeaconStateV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, version.String(version.Bellatrix), writer.Header().Get(api.EthConsensusVersionHeader))
assert.Equal(t, version.String(version.Bellatrix), writer.Header().Get(api.VersionHeader))
sszExpected, err := fakeState.MarshalSSZ()
require.NoError(t, err)
assert.DeepEqual(t, sszExpected, writer.Body.Bytes())
@@ -392,13 +392,13 @@ func TestGetBeaconStateSSZV2(t *testing.T) {
request := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v2/debug/beacon/states/{state_id}", nil)
request.SetPathValue("state_id", "head")
request.Header.Set(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Set("Accept", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetBeaconStateV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, version.String(version.Capella), writer.Header().Get(api.EthConsensusVersionHeader))
assert.Equal(t, version.String(version.Capella), writer.Header().Get(api.VersionHeader))
sszExpected, err := fakeState.MarshalSSZ()
require.NoError(t, err)
assert.DeepEqual(t, sszExpected, writer.Body.Bytes())
@@ -416,13 +416,13 @@ func TestGetBeaconStateSSZV2(t *testing.T) {
request := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v2/debug/beacon/states/{state_id}", nil)
request.SetPathValue("state_id", "head")
request.Header.Set(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Set("Accept", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetBeaconStateV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, version.String(version.Deneb), writer.Header().Get(api.EthConsensusVersionHeader))
assert.Equal(t, version.String(version.Deneb), writer.Header().Get(api.VersionHeader))
sszExpected, err := fakeState.MarshalSSZ()
require.NoError(t, err)
assert.DeepEqual(t, sszExpected, writer.Body.Bytes())


@@ -101,7 +101,7 @@ func (w *StreamingResponseWriterRecorder) RequireStatus(t *testing.T, status int
}
func (w *StreamingResponseWriterRecorder) Flush() {
w.WriteHeader(http.StatusOK)
w.WriteHeader(200)
fw, ok := w.ResponseWriter.(http.Flusher)
if ok {
fw.Flush()


@@ -31,7 +31,6 @@ go_test(
srcs = ["handlers_test.go"],
embed = [":go_default_library"],
deps = [
"//api:go_default_library",
"//api/server/structs:go_default_library",
"//async/event:go_default_library",
"//beacon-chain/blockchain/testing:go_default_library",


@@ -43,7 +43,7 @@ func (s *Server) GetLightClientBootstrap(w http.ResponseWriter, req *http.Reques
return
}
w.Header().Set(api.EthConsensusVersionHeader, version.String(bootstrap.Version()))
w.Header().Set(api.VersionHeader, version.String(bootstrap.Version()))
if httputil.RespondWithSsz(req) {
ssz, err := bootstrap.MarshalSSZ()
@@ -111,7 +111,7 @@ func (s *Server) GetLightClientUpdatesByRange(w http.ResponseWriter, req *http.R
}
if httputil.RespondWithSsz(req) {
w.Header().Set(api.ContentTypeHeader, api.OctetStreamMediaType)
w.Header().Set("Content-Type", "application/octet-stream")
for _, update := range updates {
if ctx.Err() != nil {
@@ -174,7 +174,7 @@ func (s *Server) GetLightClientFinalityUpdate(w http.ResponseWriter, req *http.R
return
}
w.Header().Set(api.EthConsensusVersionHeader, version.String(update.Version()))
w.Header().Set(api.VersionHeader, version.String(update.Version()))
if httputil.RespondWithSsz(req) {
data, err := update.MarshalSSZ()
if err != nil {
@@ -207,7 +207,7 @@ func (s *Server) GetLightClientOptimisticUpdate(w http.ResponseWriter, req *http
return
}
w.Header().Set(api.EthConsensusVersionHeader, version.String(update.Version()))
w.Header().Set(api.VersionHeader, version.String(update.Version()))
if httputil.RespondWithSsz(req) {
data, err := update.MarshalSSZ()
if err != nil {


@@ -11,7 +11,6 @@ import (
"strconv"
"testing"
"github.com/OffchainLabs/prysm/v6/api"
"github.com/OffchainLabs/prysm/v6/api/server/structs"
"github.com/OffchainLabs/prysm/v6/async/event"
blockchainTest "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
@@ -115,7 +114,7 @@ func TestLightClientHandler_GetLightClientBootstrap(t *testing.T) {
}
request := httptest.NewRequest("GET", "http://foo.com/", nil)
request.SetPathValue("block_root", hexutil.Encode(blockRoot[:]))
request.Header.Add(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -236,7 +235,7 @@ func TestLightClientHandler_GetLightClientByRange(t *testing.T) {
t.Run("single update ssz", func(t *testing.T) {
url := fmt.Sprintf("http://foo.com/?count=1&start_period=%d", startPeriod)
request := httptest.NewRequest("GET", url, nil)
request.Header.Add(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -291,7 +290,7 @@ func TestLightClientHandler_GetLightClientByRange(t *testing.T) {
t.Run("multiple updates ssz", func(t *testing.T) {
url := fmt.Sprintf("http://foo.com/?count=100&start_period=%d", startPeriod)
request := httptest.NewRequest("GET", url, nil)
request.Header.Add(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -406,7 +405,7 @@ func TestLightClientHandler_GetLightClientByRange(t *testing.T) {
t.Run("ssz", func(t *testing.T) {
url := fmt.Sprintf("http://foo.com/?count=100&start_period=%d", startPeriod)
request := httptest.NewRequest("GET", url, nil)
request.Header.Add(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -778,7 +777,7 @@ func TestLightClientHandler_GetLightClientFinalityUpdate(t *testing.T) {
s.LCStore.SetLastFinalityUpdate(update, false)
request := httptest.NewRequest("GET", "http://foo.com", nil)
request.Header.Add(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetLightClientFinalityUpdate(writer, request)
@@ -871,7 +870,7 @@ func TestLightClientHandler_GetLightClientOptimisticUpdate(t *testing.T) {
s.LCStore.SetLastOptimisticUpdate(update, false)
request := httptest.NewRequest("GET", "http://foo.com", nil)
request.Header.Add(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetLightClientOptimisticUpdate(writer, request)


@@ -4,7 +4,6 @@ import (
"encoding/json"
"fmt"
"net/http"
"slices"
"strconv"
"strings"
@@ -389,9 +388,12 @@ func syncRewardsVals(
scIndices := make([]primitives.ValidatorIndex, 0, len(allScIndices))
scVals := make([]*precompute.Validator, 0, len(allScIndices))
for _, valIdx := range valIndices {
if slices.Contains(allScIndices, valIdx) {
scVals = append(scVals, allVals[valIdx])
scIndices = append(scIndices, valIdx)
for _, scIdx := range allScIndices {
if valIdx == scIdx {
scVals = append(scVals, allVals[valIdx])
scIndices = append(scIndices, valIdx)
break
}
}
}
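
The two variants in this hunk are behaviourally equivalent: slices.Contains performs the same linear scan as the hand-rolled inner loop with break. A minimal sketch with made-up indices illustrating the filter both forms implement:

```go
package main

import (
	"fmt"
	"slices"
)

func main() {
	allScIndices := []uint64{3, 7, 42}   // hypothetical sync-committee validator indices
	valIndices := []uint64{1, 7, 99, 42} // hypothetical requested validator indices

	var filtered []uint64
	for _, valIdx := range valIndices {
		// Same membership test as the nested loop with break in the hunk above.
		if slices.Contains(allScIndices, valIdx) {
			filtered = append(filtered, valIdx)
		}
	}
	fmt.Println(filtered) // [7 42]
}
```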


@@ -127,7 +127,7 @@ func (s *Server) GetAggregateAttestationV2(w http.ResponseWriter, r *http.Reques
}
}
w.Header().Set(api.EthConsensusVersionHeader, version.String(v))
w.Header().Set(api.VersionHeader, version.String(v))
httputil.WriteSsz(w, data)
return
}
@@ -160,7 +160,7 @@ func (s *Server) GetAggregateAttestationV2(w http.ResponseWriter, r *http.Reques
}
resp.Data = data
}
w.Header().Set(api.EthConsensusVersionHeader, version.String(v))
w.Header().Set(api.VersionHeader, version.String(v))
httputil.WriteJson(w, resp)
}
@@ -397,9 +397,9 @@ func (s *Server) SubmitAggregateAndProofsV2(w http.ResponseWriter, r *http.Reque
return
}
versionHeader := r.Header.Get(api.EthConsensusVersionHeader)
versionHeader := r.Header.Get(api.VersionHeader)
if versionHeader == "" {
httputil.HandleError(w, api.EthConsensusVersionHeader+" header is required", http.StatusBadRequest)
httputil.HandleError(w, api.VersionHeader+" header is required", http.StatusBadRequest)
return
}
v, err := version.FromString(versionHeader)
@@ -597,18 +597,7 @@ func (s *Server) SubmitSyncCommitteeSubscription(w http.ResponseWriter, r *http.
epochDuration := time.Duration(params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().SecondsPerSlot)) * time.Second
totalDuration := epochDuration * time.Duration(epochsToWatch)
subcommitteeSize := params.BeaconConfig().SyncCommitteeSize / params.BeaconConfig().SyncCommitteeSubnetCount
seen := make(map[uint64]bool)
var subnetIndices []uint64
for _, idx := range sub.SyncCommitteeIndices {
subnetIdx := idx / subcommitteeSize
if !seen[subnetIdx] {
seen[subnetIdx] = true
subnetIndices = append(subnetIndices, subnetIdx)
}
}
cache.SyncSubnetIDs.AddSyncCommitteeSubnets(pubkey48[:], startEpoch, subnetIndices, totalDuration)
cache.SyncSubnetIDs.AddSyncCommitteeSubnets(pubkey48[:], startEpoch, sub.SyncCommitteeIndices, totalDuration)
}
}
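
This hunk is the sync-committee subscription change: one side hands the raw committee positions to the cache, the other first maps each position to its subnet via integer division by the subcommittee size and deduplicates. A minimal sketch of that mapping, assuming mainnet-style parameters (512 sync committee members over 4 subnets, so a subcommittee size of 128):

```go
package main

import "fmt"

// subnetsFromCommitteeIndices maps sync-committee positions to subnet indices by
// integer division and deduplicates them, like the seen-map loop in the hunk above.
func subnetsFromCommitteeIndices(committeeIndices []uint64, subcommitteeSize uint64) []uint64 {
	seen := make(map[uint64]bool)
	var subnets []uint64
	for _, idx := range committeeIndices {
		subnet := idx / subcommitteeSize // positions 0..127 -> subnet 0, 128..255 -> subnet 1, ...
		if !seen[subnet] {
			seen[subnet] = true
			subnets = append(subnets, subnet)
		}
	}
	return subnets
}

func main() {
	// 512 / 4 = 128 under the assumed mainnet-style configuration.
	fmt.Println(subnetsFromCommitteeIndices([]uint64{5, 130, 131, 400}, 128)) // [0 1 3]
}
```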


@@ -123,20 +123,20 @@ func (s *Server) produceBlockV3(ctx context.Context, w http.ResponseWriter, r *h
consensusBlockValue = "0"
}
w.Header().Set(api.EthExecutionPayloadBlindedHeader, fmt.Sprintf("%v", v1alpha1resp.IsBlinded))
w.Header().Set(api.EthExecutionPayloadValueHeader, v1alpha1resp.PayloadValue)
w.Header().Set(api.EthConsensusBlockValueHeader, consensusBlockValue)
w.Header().Set(api.ExecutionPayloadBlindedHeader, fmt.Sprintf("%v", v1alpha1resp.IsBlinded))
w.Header().Set(api.ExecutionPayloadValueHeader, v1alpha1resp.PayloadValue)
w.Header().Set(api.ConsensusBlockValueHeader, consensusBlockValue)
phase0Block, ok := v1alpha1resp.Block.(*eth.GenericBeaconBlock_Phase0)
if ok {
w.Header().Set(api.EthConsensusVersionHeader, version.String(version.Phase0))
w.Header().Set(api.VersionHeader, version.String(version.Phase0))
// rewards aren't used in phase 0
handleProducePhase0V3(w, isSSZ, phase0Block, v1alpha1resp.PayloadValue)
return
}
altairBlock, ok := v1alpha1resp.Block.(*eth.GenericBeaconBlock_Altair)
if ok {
w.Header().Set(api.EthConsensusVersionHeader, version.String(version.Altair))
w.Header().Set(api.VersionHeader, version.String(version.Altair))
handleProduceAltairV3(w, isSSZ, altairBlock, v1alpha1resp.PayloadValue, consensusBlockValue)
return
}
@@ -151,61 +151,61 @@ func (s *Server) produceBlockV3(ctx context.Context, w http.ResponseWriter, r *h
}
blindedBellatrixBlock, ok := v1alpha1resp.Block.(*eth.GenericBeaconBlock_BlindedBellatrix)
if ok {
w.Header().Set(api.EthConsensusVersionHeader, version.String(version.Bellatrix))
w.Header().Set(api.VersionHeader, version.String(version.Bellatrix))
handleProduceBlindedBellatrixV3(w, isSSZ, blindedBellatrixBlock, v1alpha1resp.PayloadValue, consensusBlockValue)
return
}
bellatrixBlock, ok := v1alpha1resp.Block.(*eth.GenericBeaconBlock_Bellatrix)
if ok {
w.Header().Set(api.EthConsensusVersionHeader, version.String(version.Bellatrix))
w.Header().Set(api.VersionHeader, version.String(version.Bellatrix))
handleProduceBellatrixV3(w, isSSZ, bellatrixBlock, v1alpha1resp.PayloadValue, consensusBlockValue)
return
}
blindedCapellaBlock, ok := v1alpha1resp.Block.(*eth.GenericBeaconBlock_BlindedCapella)
if ok {
w.Header().Set(api.EthConsensusVersionHeader, version.String(version.Capella))
w.Header().Set(api.VersionHeader, version.String(version.Capella))
handleProduceBlindedCapellaV3(w, isSSZ, blindedCapellaBlock, v1alpha1resp.PayloadValue, consensusBlockValue)
return
}
capellaBlock, ok := v1alpha1resp.Block.(*eth.GenericBeaconBlock_Capella)
if ok {
w.Header().Set(api.EthConsensusVersionHeader, version.String(version.Capella))
w.Header().Set(api.VersionHeader, version.String(version.Capella))
handleProduceCapellaV3(w, isSSZ, capellaBlock, v1alpha1resp.PayloadValue, consensusBlockValue)
return
}
blindedDenebBlockContents, ok := v1alpha1resp.Block.(*eth.GenericBeaconBlock_BlindedDeneb)
if ok {
w.Header().Set(api.EthConsensusVersionHeader, version.String(version.Deneb))
w.Header().Set(api.VersionHeader, version.String(version.Deneb))
handleProduceBlindedDenebV3(w, isSSZ, blindedDenebBlockContents, v1alpha1resp.PayloadValue, consensusBlockValue)
return
}
denebBlockContents, ok := v1alpha1resp.Block.(*eth.GenericBeaconBlock_Deneb)
if ok {
w.Header().Set(api.EthConsensusVersionHeader, version.String(version.Deneb))
w.Header().Set(api.VersionHeader, version.String(version.Deneb))
handleProduceDenebV3(w, isSSZ, denebBlockContents, v1alpha1resp.PayloadValue, consensusBlockValue)
return
}
blindedElectraBlockContents, ok := v1alpha1resp.Block.(*eth.GenericBeaconBlock_BlindedElectra)
if ok {
w.Header().Set(api.EthConsensusVersionHeader, version.String(version.Electra))
w.Header().Set(api.VersionHeader, version.String(version.Electra))
handleProduceBlindedElectraV3(w, isSSZ, blindedElectraBlockContents, v1alpha1resp.PayloadValue, consensusBlockValue)
return
}
electraBlockContents, ok := v1alpha1resp.Block.(*eth.GenericBeaconBlock_Electra)
if ok {
w.Header().Set(api.EthConsensusVersionHeader, version.String(version.Electra))
w.Header().Set(api.VersionHeader, version.String(version.Electra))
handleProduceElectraV3(w, isSSZ, electraBlockContents, v1alpha1resp.PayloadValue, consensusBlockValue)
return
}
blindedFuluBlockContents, ok := v1alpha1resp.Block.(*eth.GenericBeaconBlock_BlindedFulu)
if ok {
w.Header().Set(api.EthConsensusVersionHeader, version.String(version.Fulu))
w.Header().Set(api.VersionHeader, version.String(version.Fulu))
handleProduceBlindedFuluV3(w, isSSZ, blindedFuluBlockContents, v1alpha1resp.PayloadValue, consensusBlockValue)
return
}
fuluBlockContents, ok := v1alpha1resp.Block.(*eth.GenericBeaconBlock_Fulu)
if ok {
w.Header().Set(api.EthConsensusVersionHeader, version.String(version.Fulu))
w.Header().Set(api.VersionHeader, version.String(version.Fulu))
handleProduceFuluV3(w, isSSZ, fuluBlockContents, v1alpha1resp.PayloadValue, consensusBlockValue)
return
}
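
produceBlockV3 advertises block metadata through response headers before the body is parsed. A hypothetical client-side sketch (stdlib only; the literal header names and the /eth/v2/beacon/... submit paths are assumptions mirroring the api.* constants used above) showing how a caller could pick the submission endpoint from those headers:

```go
package main

import (
	"fmt"
	"net/http"
)

// submitPathFor picks a publish endpoint from the block-production response headers.
// Header names and endpoint paths are assumptions for illustration only.
func submitPathFor(h http.Header) string {
	if h.Get("Eth-Execution-Payload-Blinded") == "true" {
		return "/eth/v2/beacon/blinded_blocks"
	}
	return "/eth/v2/beacon/blocks"
}

func main() {
	h := http.Header{}
	h.Set("Eth-Execution-Payload-Blinded", "true")
	h.Set("Eth-Consensus-Version", "electra")
	fmt.Println(submitPathFor(h), "version:", h.Get("Eth-Consensus-Version"))
}
```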


@@ -64,10 +64,10 @@ func TestProduceBlockV3(t *testing.T) {
want := fmt.Sprintf(`{"version":"phase0","execution_payload_blinded":false,"execution_payload_value":"","consensus_block_value":"","data":%s}`, string(jsonBytes))
body := strings.ReplaceAll(writer.Body.String(), "\n", "")
require.Equal(t, want, body)
require.Equal(t, "false", writer.Header().Get(api.EthExecutionPayloadBlindedHeader))
require.Equal(t, "", writer.Header().Get(api.EthExecutionPayloadValueHeader))
require.Equal(t, "phase0", writer.Header().Get(api.EthConsensusVersionHeader))
require.Equal(t, "", writer.Header().Get(api.EthConsensusBlockValueHeader))
require.Equal(t, "false", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "phase0", writer.Header().Get(api.VersionHeader))
require.Equal(t, "", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("Altair", func(t *testing.T) {
var block *structs.SignedBeaconBlockAltair
@@ -100,10 +100,10 @@ func TestProduceBlockV3(t *testing.T) {
want := fmt.Sprintf(`{"version":"altair","execution_payload_blinded":false,"execution_payload_value":"","consensus_block_value":"10000000000","data":%s}`, string(jsonBytes))
body := strings.ReplaceAll(writer.Body.String(), "\n", "")
require.Equal(t, want, body)
require.Equal(t, "false", writer.Header().Get(api.EthExecutionPayloadBlindedHeader))
require.Equal(t, "", writer.Header().Get(api.EthExecutionPayloadValueHeader))
require.Equal(t, "altair", writer.Header().Get(api.EthConsensusVersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.EthConsensusBlockValueHeader))
require.Equal(t, "false", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "altair", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("Bellatrix", func(t *testing.T) {
var block *structs.SignedBeaconBlockBellatrix
@@ -138,10 +138,10 @@ func TestProduceBlockV3(t *testing.T) {
want := fmt.Sprintf(`{"version":"bellatrix","execution_payload_blinded":false,"execution_payload_value":"2000","consensus_block_value":"10000000000","data":%s}`, string(jsonBytes))
body := strings.ReplaceAll(writer.Body.String(), "\n", "")
require.Equal(t, want, body)
require.Equal(t, "false", writer.Header().Get(api.EthExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.EthExecutionPayloadValueHeader))
require.Equal(t, "bellatrix", writer.Header().Get(api.EthConsensusVersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.EthConsensusBlockValueHeader))
require.Equal(t, "false", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "bellatrix", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("BlindedBellatrix", func(t *testing.T) {
var block *structs.SignedBlindedBeaconBlockBellatrix
@@ -176,10 +176,10 @@ func TestProduceBlockV3(t *testing.T) {
want := fmt.Sprintf(`{"version":"bellatrix","execution_payload_blinded":true,"execution_payload_value":"2000","consensus_block_value":"10000000000","data":%s}`, string(jsonBytes))
body := strings.ReplaceAll(writer.Body.String(), "\n", "")
require.Equal(t, want, body)
require.Equal(t, "true", writer.Header().Get(api.EthExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.EthExecutionPayloadValueHeader))
require.Equal(t, "bellatrix", writer.Header().Get(api.EthConsensusVersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.EthConsensusBlockValueHeader))
require.Equal(t, "true", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "bellatrix", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("Capella", func(t *testing.T) {
var block *structs.SignedBeaconBlockCapella
@@ -214,10 +214,10 @@ func TestProduceBlockV3(t *testing.T) {
want := fmt.Sprintf(`{"version":"capella","execution_payload_blinded":false,"execution_payload_value":"2000","consensus_block_value":"10000000000","data":%s}`, string(jsonBytes))
body := strings.ReplaceAll(writer.Body.String(), "\n", "")
require.Equal(t, want, body)
require.Equal(t, "false", writer.Header().Get(api.EthExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.EthExecutionPayloadValueHeader))
require.Equal(t, "capella", writer.Header().Get(api.EthConsensusVersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.EthConsensusBlockValueHeader))
require.Equal(t, "false", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "capella", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("Blinded Capella", func(t *testing.T) {
var block *structs.SignedBlindedBeaconBlockCapella
@@ -252,10 +252,10 @@ func TestProduceBlockV3(t *testing.T) {
want := fmt.Sprintf(`{"version":"capella","execution_payload_blinded":true,"execution_payload_value":"2000","consensus_block_value":"10000000000","data":%s}`, string(jsonBytes))
body := strings.ReplaceAll(writer.Body.String(), "\n", "")
require.Equal(t, want, body)
require.Equal(t, "true", writer.Header().Get(api.EthExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.EthExecutionPayloadValueHeader))
require.Equal(t, "capella", writer.Header().Get(api.EthConsensusVersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.EthConsensusBlockValueHeader))
require.Equal(t, "true", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "capella", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("Deneb", func(t *testing.T) {
var block *structs.SignedBeaconBlockContentsDeneb
@@ -290,10 +290,10 @@ func TestProduceBlockV3(t *testing.T) {
want := fmt.Sprintf(`{"version":"deneb","execution_payload_blinded":false,"execution_payload_value":"2000","consensus_block_value":"10000000000","data":%s}`, string(jsonBytes))
body := strings.ReplaceAll(writer.Body.String(), "\n", "")
require.Equal(t, want, body)
require.Equal(t, "false", writer.Header().Get(api.EthExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.EthExecutionPayloadValueHeader))
require.Equal(t, "deneb", writer.Header().Get(api.EthConsensusVersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.EthConsensusBlockValueHeader))
require.Equal(t, "false", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "deneb", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("Blinded Deneb", func(t *testing.T) {
var block *structs.SignedBlindedBeaconBlockDeneb
@@ -328,10 +328,10 @@ func TestProduceBlockV3(t *testing.T) {
want := fmt.Sprintf(`{"version":"deneb","execution_payload_blinded":true,"execution_payload_value":"2000","consensus_block_value":"10000000000","data":%s}`, string(jsonBytes))
body := strings.ReplaceAll(writer.Body.String(), "\n", "")
require.Equal(t, want, body)
require.Equal(t, "true", writer.Header().Get(api.EthExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.EthExecutionPayloadValueHeader))
require.Equal(t, "deneb", writer.Header().Get(api.EthConsensusVersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.EthConsensusBlockValueHeader))
require.Equal(t, "true", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "deneb", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("Electra", func(t *testing.T) {
var block *structs.SignedBeaconBlockContentsElectra
@@ -366,10 +366,10 @@ func TestProduceBlockV3(t *testing.T) {
want := fmt.Sprintf(`{"version":"electra","execution_payload_blinded":false,"execution_payload_value":"2000","consensus_block_value":"10000000000","data":%s}`, string(jsonBytes))
body := strings.ReplaceAll(writer.Body.String(), "\n", "")
require.Equal(t, want, body)
require.Equal(t, "false", writer.Header().Get(api.EthExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.EthExecutionPayloadValueHeader))
require.Equal(t, "electra", writer.Header().Get(api.EthConsensusVersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.EthConsensusBlockValueHeader))
require.Equal(t, "false", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "electra", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("Blinded Electra", func(t *testing.T) {
var block *structs.SignedBlindedBeaconBlockElectra
@@ -404,10 +404,10 @@ func TestProduceBlockV3(t *testing.T) {
want := fmt.Sprintf(`{"version":"electra","execution_payload_blinded":true,"execution_payload_value":"2000","consensus_block_value":"10000000000","data":%s}`, string(jsonBytes))
body := strings.ReplaceAll(writer.Body.String(), "\n", "")
require.Equal(t, want, body)
require.Equal(t, "true", writer.Header().Get(api.EthExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.EthExecutionPayloadValueHeader))
require.Equal(t, "electra", writer.Header().Get(api.EthConsensusVersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.EthConsensusBlockValueHeader))
require.Equal(t, "true", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "electra", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("Fulu", func(t *testing.T) {
var block *structs.SignedBeaconBlockContentsFulu
@@ -442,10 +442,10 @@ func TestProduceBlockV3(t *testing.T) {
want := fmt.Sprintf(`{"version":"fulu","execution_payload_blinded":false,"execution_payload_value":"2000","consensus_block_value":"10000000000","data":%s}`, string(jsonBytes))
body := strings.ReplaceAll(writer.Body.String(), "\n", "")
require.Equal(t, want, body)
require.Equal(t, "false", writer.Header().Get(api.EthExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.EthExecutionPayloadValueHeader))
require.Equal(t, "fulu", writer.Header().Get(api.EthConsensusVersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.EthConsensusBlockValueHeader))
require.Equal(t, "false", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "fulu", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("Blinded Fulu", func(t *testing.T) {
var block *structs.SignedBlindedBeaconBlockFulu
@@ -480,10 +480,10 @@ func TestProduceBlockV3(t *testing.T) {
want := fmt.Sprintf(`{"version":"fulu","execution_payload_blinded":true,"execution_payload_value":"2000","consensus_block_value":"10000000000","data":%s}`, string(jsonBytes))
body := strings.ReplaceAll(writer.Body.String(), "\n", "")
require.Equal(t, want, body)
require.Equal(t, "true", writer.Header().Get(api.EthExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.EthExecutionPayloadValueHeader))
require.Equal(t, "fulu", writer.Header().Get(api.EthConsensusVersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.EthConsensusBlockValueHeader))
require.Equal(t, "true", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "fulu", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("invalid query parameter slot empty", func(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
@@ -573,10 +573,10 @@ func TestProduceBlockV3(t *testing.T) {
want := fmt.Sprintf(`{"version":"fulu","execution_payload_blinded":false,"execution_payload_value":"2000","consensus_block_value":"0","data":%s}`, string(jsonBytes))
body := strings.ReplaceAll(writer.Body.String(), "\n", "")
require.Equal(t, want, body)
require.Equal(t, "false", writer.Header().Get(api.EthExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.EthExecutionPayloadValueHeader))
require.Equal(t, "fulu", writer.Header().Get(api.EthConsensusVersionHeader))
require.Equal(t, "0", writer.Header().Get(api.EthConsensusBlockValueHeader))
require.Equal(t, "false", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "fulu", writer.Header().Get(api.VersionHeader))
require.Equal(t, "0", writer.Header().Get(api.ConsensusBlockValueHeader))
})
}
@@ -611,7 +611,7 @@ func TestProduceBlockV3SSZ(t *testing.T) {
SyncChecker: syncChecker,
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://foo.example/eth/v3/validator/blocks/1?randao_reveal=%s&graffiti=%s", randao, graffiti), nil)
request.Header.Set(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Set("Accept", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.ProduceBlockV3(writer, request)
@@ -623,10 +623,10 @@ func TestProduceBlockV3SSZ(t *testing.T) {
ssz, err := bl.Phase0.Block.MarshalSSZ()
require.NoError(t, err)
require.Equal(t, string(ssz), writer.Body.String())
require.Equal(t, "false", writer.Header().Get(api.EthExecutionPayloadBlindedHeader))
require.Equal(t, "", writer.Header().Get(api.EthExecutionPayloadValueHeader))
require.Equal(t, "phase0", writer.Header().Get(api.EthConsensusVersionHeader))
require.Equal(t, "", writer.Header().Get(api.EthConsensusBlockValueHeader))
require.Equal(t, "false", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "phase0", writer.Header().Get(api.VersionHeader))
require.Equal(t, "", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("Altair", func(t *testing.T) {
var block *structs.SignedBeaconBlockAltair
@@ -649,7 +649,7 @@ func TestProduceBlockV3SSZ(t *testing.T) {
BlockRewardFetcher: rewardFetcher,
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://foo.example/eth/v3/validator/blocks/1?randao_reveal=%s&graffiti=%s", randao, graffiti), nil)
request.Header.Set(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Set("Accept", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.ProduceBlockV3(writer, request)
@@ -661,10 +661,10 @@ func TestProduceBlockV3SSZ(t *testing.T) {
ssz, err := bl.Altair.Block.MarshalSSZ()
require.NoError(t, err)
require.Equal(t, string(ssz), writer.Body.String())
require.Equal(t, "false", writer.Header().Get(api.EthExecutionPayloadBlindedHeader))
require.Equal(t, "", writer.Header().Get(api.EthExecutionPayloadValueHeader))
require.Equal(t, "altair", writer.Header().Get(api.EthConsensusVersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.EthConsensusBlockValueHeader))
require.Equal(t, "false", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "altair", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("Bellatrix", func(t *testing.T) {
var block *structs.SignedBeaconBlockBellatrix
@@ -691,7 +691,7 @@ func TestProduceBlockV3SSZ(t *testing.T) {
BlockRewardFetcher: rewardFetcher,
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://foo.example/eth/v3/validator/blocks/1?randao_reveal=%s&graffiti=%s", randao, graffiti), nil)
request.Header.Set(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Set("Accept", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.ProduceBlockV3(writer, request)
@@ -703,10 +703,10 @@ func TestProduceBlockV3SSZ(t *testing.T) {
ssz, err := bl.Bellatrix.Block.MarshalSSZ()
require.NoError(t, err)
require.Equal(t, string(ssz), writer.Body.String())
require.Equal(t, "false", writer.Header().Get(api.EthExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.EthExecutionPayloadValueHeader))
require.Equal(t, "bellatrix", writer.Header().Get(api.EthConsensusVersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.EthConsensusBlockValueHeader))
require.Equal(t, "false", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "bellatrix", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("BlindedBellatrix", func(t *testing.T) {
var block *structs.SignedBlindedBeaconBlockBellatrix
@@ -732,7 +732,7 @@ func TestProduceBlockV3SSZ(t *testing.T) {
BlockRewardFetcher: rewardFetcher,
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://foo.example/eth/v3/validator/blocks/1?randao_reveal=%s&graffiti=%s", randao, graffiti), nil)
request.Header.Set(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Set("Accept", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.ProduceBlockV3(writer, request)
@@ -744,10 +744,10 @@ func TestProduceBlockV3SSZ(t *testing.T) {
ssz, err := bl.BlindedBellatrix.Block.MarshalSSZ()
require.NoError(t, err)
require.Equal(t, string(ssz), writer.Body.String())
require.Equal(t, "true", writer.Header().Get(api.EthExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.EthExecutionPayloadValueHeader))
require.Equal(t, "bellatrix", writer.Header().Get(api.EthConsensusVersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.EthConsensusBlockValueHeader))
require.Equal(t, "true", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "bellatrix", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("Capella", func(t *testing.T) {
var block *structs.SignedBeaconBlockCapella
@@ -773,7 +773,7 @@ func TestProduceBlockV3SSZ(t *testing.T) {
BlockRewardFetcher: rewardFetcher,
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://foo.example/eth/v3/validator/blocks/1?randao_reveal=%s&graffiti=%s", randao, graffiti), nil)
request.Header.Set(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Set("Accept", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.ProduceBlockV3(writer, request)
@@ -785,10 +785,10 @@ func TestProduceBlockV3SSZ(t *testing.T) {
ssz, err := bl.Capella.Block.MarshalSSZ()
require.NoError(t, err)
require.Equal(t, string(ssz), writer.Body.String())
require.Equal(t, "false", writer.Header().Get(api.EthExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.EthExecutionPayloadValueHeader))
require.Equal(t, "capella", writer.Header().Get(api.EthConsensusVersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.EthConsensusBlockValueHeader))
require.Equal(t, "false", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "capella", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("Blinded Capella", func(t *testing.T) {
var block *structs.SignedBlindedBeaconBlockCapella
@@ -814,7 +814,7 @@ func TestProduceBlockV3SSZ(t *testing.T) {
BlockRewardFetcher: rewardFetcher,
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://foo.example/eth/v3/validator/blocks/1?randao_reveal=%s&graffiti=%s", randao, graffiti), nil)
request.Header.Set(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Set("Accept", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.ProduceBlockV3(writer, request)
@@ -826,10 +826,10 @@ func TestProduceBlockV3SSZ(t *testing.T) {
ssz, err := bl.BlindedCapella.Block.MarshalSSZ()
require.NoError(t, err)
require.Equal(t, string(ssz), writer.Body.String())
require.Equal(t, "true", writer.Header().Get(api.EthExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.EthExecutionPayloadValueHeader))
require.Equal(t, "capella", writer.Header().Get(api.EthConsensusVersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.EthConsensusBlockValueHeader))
require.Equal(t, "true", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "capella", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("Deneb", func(t *testing.T) {
var block *structs.SignedBeaconBlockContentsDeneb
@@ -855,7 +855,7 @@ func TestProduceBlockV3SSZ(t *testing.T) {
BlockRewardFetcher: rewardFetcher,
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://foo.example/eth/v3/validator/blocks/1?randao_reveal=%s&graffiti=%s", randao, graffiti), nil)
request.Header.Set(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Set("Accept", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.ProduceBlockV3(writer, request)
@@ -867,10 +867,10 @@ func TestProduceBlockV3SSZ(t *testing.T) {
ssz, err := bl.Deneb.MarshalSSZ()
require.NoError(t, err)
require.Equal(t, string(ssz), writer.Body.String())
require.Equal(t, "false", writer.Header().Get(api.EthExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.EthExecutionPayloadValueHeader))
require.Equal(t, "deneb", writer.Header().Get(api.EthConsensusVersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.EthConsensusBlockValueHeader))
require.Equal(t, "false", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "deneb", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("Blinded Deneb", func(t *testing.T) {
var block *structs.SignedBlindedBeaconBlockDeneb
@@ -896,7 +896,7 @@ func TestProduceBlockV3SSZ(t *testing.T) {
BlockRewardFetcher: rewardFetcher,
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://foo.example/eth/v3/validator/blocks/1?randao_reveal=%s&graffiti=%s", randao, graffiti), nil)
request.Header.Set(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Set("Accept", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.ProduceBlockV3(writer, request)
@@ -908,10 +908,10 @@ func TestProduceBlockV3SSZ(t *testing.T) {
ssz, err := bl.BlindedDeneb.MarshalSSZ()
require.NoError(t, err)
require.Equal(t, string(ssz), writer.Body.String())
require.Equal(t, "true", writer.Header().Get(api.EthExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.EthExecutionPayloadValueHeader))
require.Equal(t, "deneb", writer.Header().Get(api.EthConsensusVersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.EthConsensusBlockValueHeader))
require.Equal(t, "true", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "deneb", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("Electra", func(t *testing.T) {
var block *structs.SignedBeaconBlockContentsElectra
@@ -937,7 +937,7 @@ func TestProduceBlockV3SSZ(t *testing.T) {
BlockRewardFetcher: rewardFetcher,
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://foo.example/eth/v3/validator/blocks/1?randao_reveal=%s&graffiti=%s", randao, graffiti), nil)
request.Header.Set(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Set("Accept", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.ProduceBlockV3(writer, request)
@@ -949,10 +949,10 @@ func TestProduceBlockV3SSZ(t *testing.T) {
ssz, err := bl.Electra.MarshalSSZ()
require.NoError(t, err)
require.Equal(t, string(ssz), writer.Body.String())
require.Equal(t, "false", writer.Header().Get(api.EthExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.EthExecutionPayloadValueHeader))
require.Equal(t, "electra", writer.Header().Get(api.EthConsensusVersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.EthConsensusBlockValueHeader))
require.Equal(t, "false", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "electra", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("Blinded Electra", func(t *testing.T) {
var block *structs.SignedBlindedBeaconBlockElectra
@@ -978,7 +978,7 @@ func TestProduceBlockV3SSZ(t *testing.T) {
BlockRewardFetcher: rewardFetcher,
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://foo.example/eth/v3/validator/blocks/1?randao_reveal=%s&graffiti=%s", randao, graffiti), nil)
request.Header.Set(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Set("Accept", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.ProduceBlockV3(writer, request)
@@ -990,10 +990,10 @@ func TestProduceBlockV3SSZ(t *testing.T) {
ssz, err := bl.BlindedElectra.MarshalSSZ()
require.NoError(t, err)
require.Equal(t, string(ssz), writer.Body.String())
require.Equal(t, "true", writer.Header().Get(api.EthExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.EthExecutionPayloadValueHeader))
require.Equal(t, "electra", writer.Header().Get(api.EthConsensusVersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.EthConsensusBlockValueHeader))
require.Equal(t, "true", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "electra", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("Fulu", func(t *testing.T) {
var block *structs.SignedBeaconBlockContentsFulu
@@ -1019,7 +1019,7 @@ func TestProduceBlockV3SSZ(t *testing.T) {
BlockRewardFetcher: rewardFetcher,
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://foo.example/eth/v3/validator/blocks/1?randao_reveal=%s&graffiti=%s", randao, graffiti), nil)
request.Header.Set(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Set("Accept", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.ProduceBlockV3(writer, request)
@@ -1031,10 +1031,10 @@ func TestProduceBlockV3SSZ(t *testing.T) {
ssz, err := bl.Fulu.MarshalSSZ()
require.NoError(t, err)
require.Equal(t, string(ssz), writer.Body.String())
require.Equal(t, "false", writer.Header().Get(api.EthExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.EthExecutionPayloadValueHeader))
require.Equal(t, "fulu", writer.Header().Get(api.EthConsensusVersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.EthConsensusBlockValueHeader))
require.Equal(t, "false", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "fulu", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("Blinded Fulu", func(t *testing.T) {
var block *structs.SignedBlindedBeaconBlockFulu
@@ -1060,7 +1060,7 @@ func TestProduceBlockV3SSZ(t *testing.T) {
BlockRewardFetcher: rewardFetcher,
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://foo.example/eth/v3/validator/blocks/1?randao_reveal=%s&graffiti=%s", randao, graffiti), nil)
request.Header.Set(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Set("Accept", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.ProduceBlockV3(writer, request)
@@ -1072,10 +1072,10 @@ func TestProduceBlockV3SSZ(t *testing.T) {
ssz, err := bl.BlindedFulu.MarshalSSZ()
require.NoError(t, err)
require.Equal(t, string(ssz), writer.Body.String())
require.Equal(t, "true", writer.Header().Get(api.EthExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.EthExecutionPayloadValueHeader))
require.Equal(t, "fulu", writer.Header().Get(api.EthConsensusVersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.EthConsensusBlockValueHeader))
require.Equal(t, "true", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "fulu", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("0 block value is returned on error", func(t *testing.T) {
rewardFetcher := &rewardtesting.MockBlockRewardFetcher{Error: &httputil.DefaultJsonError{}}
@@ -1103,7 +1103,7 @@ func TestProduceBlockV3SSZ(t *testing.T) {
BlockRewardFetcher: rewardFetcher,
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://foo.example/eth/v3/validator/blocks/1?randao_reveal=%s&graffiti=%s", randao, graffiti), nil)
request.Header.Set(api.AcceptHeader, api.OctetStreamMediaType)
request.Header.Set("Accept", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.ProduceBlockV3(writer, request)
@@ -1115,9 +1115,9 @@ func TestProduceBlockV3SSZ(t *testing.T) {
ssz, err := bl.Fulu.MarshalSSZ()
require.NoError(t, err)
require.Equal(t, string(ssz), writer.Body.String())
require.Equal(t, "false", writer.Header().Get(api.EthExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.EthExecutionPayloadValueHeader))
require.Equal(t, "fulu", writer.Header().Get(api.EthConsensusVersionHeader))
require.Equal(t, "0", writer.Header().Get(api.EthConsensusBlockValueHeader))
require.Equal(t, "false", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "fulu", writer.Header().Get(api.VersionHeader))
require.Equal(t, "0", writer.Header().Get(api.ConsensusBlockValueHeader))
})
}

Some files were not shown because too many files have changed in this diff