Mirror of https://github.com/OffchainLabs/prysm.git, synced 2026-01-09 21:38:05 -05:00
Compare commits
72 Commits
subscribe-...version-up
| SHA1 |
|---|
| a7565164e4 |
| 94d76a5685 |
| 2ef2dd16ba |
| d6005026e0 |
| 33476f5d7b |
| 35e9c1752e |
| 8b6f187b15 |
| 8ad547c969 |
| d3d7f67bec |
| 9959782f1c |
| f0a099b275 |
| 1e7d74cf02 |
| d945b1d905 |
| 7df60e8c9b |
| 165c4b0af1 |
| d0f5253b8d |
| 1f926142b8 |
| d394f00e9f |
| 11c6325b54 |
| ec524ce99c |
| b2a9db0826 |
| 040661bd68 |
| d3bd0eaa30 |
| 577899bfec |
| 374bae9c81 |
| 3e0492a636 |
| 5a1a5b5ae5 |
| dbb2f0b047 |
| 7b3c11c818 |
| c9b34d556d |
| 10a2f0687b |
| 4fb75d6d0b |
| 6d596edea2 |
| 9153c5a202 |
| 26ce94e224 |
| 255ea2fac1 |
| 46bc81b4c8 |
| 9c4774b82e |
| 7dd4f5948c |
| 2f090c52d9 |
| 3ecb5d0b67 |
| 253f91930a |
| 7c3e45637f |
| 96429c5089 |
| d613f3a262 |
| 5751dbf134 |
| 426fbcc3b0 |
| a3baf98b05 |
| 5a897dfa6b |
| 90190883bc |
| 64ec665890 |
| fdb06ea461 |
| 0486631d73 |
| 47764696ce |
| b2d350b988 |
| 41e7607092 |
| cd429dc253 |
| 5ced1125f2 |
| f67ca6ae5e |
| 9742333f68 |
| c811fadf33 |
| 55b9448d41 |
| 10f8d8c26e |
| 4eab41ea4c |
| 683608e34a |
| fbbf2a1404 |
| 82f556c50f |
| c88aa77ac1 |
| 0568bec935 |
| e463bcd1e1 |
| 5f8eb69201 |
| 4b98451649 |
CHANGELOG.md (63 changes)
@@ -4,6 +4,67 @@ All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

## [v6.1.4](https://github.com/prysmaticlabs/prysm/compare/v6.1.3...v6.1.4) - 2025-10-24

This release includes a bug fix affecting block proposals in rare cases, along with an important update for Windows users running the post-Fusaka fork.

### Added

- SSZ-QL: Add endpoints for `BeaconState`/`BeaconBlock`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15888)
- Add native state diff type and marshalling functions. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15250)
- Update the earliest available slot after pruning operations in the beacon chain database pruner. This ensures the P2P layer accurately knows which historical data is available after pruning, preventing nodes from advertising or attempting to serve data that has been pruned. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15694)

### Fixed

- Correctly advertise (in the ENR and beacon API) attestation subnets when using `--subscribe-all-subnets`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15880)
- `randomPeer`: Return if the context is cancelled while waiting for peers. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15876)
- Improve the error message when the byte count read from disk for a data column sidecar is lower than expected (usually because the file is truncated). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15881)
- Delete the genesis state file when `--clear-db`/`--force-clear-db` is specified. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15883)
- Fix sync committee subscription to use subnet indices instead of committee indices. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15885)
- Fixed metadata extraction on Windows by correctly splitting file paths. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15899)
- `VerifyDataColumnsSidecarKZGProofs`: Check if sizes match. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15892)
- Fix `recoverStateSummary` to persist state summaries in `stateSummaryBucket` instead of `stateBucket`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15896)
- `updateCustodyInfoInDB`: Use `NumberOfCustodyGroups` instead of `NumberOfColumns`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15908)
- Sync committee uses the correct state to calculate position. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15905)

## [v6.1.3](https://github.com/prysmaticlabs/prysm/compare/v6.1.2...v6.1.3) - 2025-10-20

This release has several important beacon API and p2p fixes.

### Added

- Add Grandine to P2P known agents (useful for metrics). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15829)
- Delegate sszInfo HashTreeRoot to FastSSZ-generated implementations via SSZObject, enabling root calculation for generated types while avoiding duplicate logic. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15805)
- SSZ-QL: Use `fastssz`'s `SizeSSZ` method for calculating the size of the `Container` type. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15864)
- SSZ-QL: Access the n-th element in `List`/`Vector`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15767)

### Changed

- Do not verify block data when calculating rewards. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15819)
- Process pending attestations after pending blocks are cleared. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15824)
- Updated web3signer to 25.9.1. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15832)
- Gracefully handle submit blind block returning 502 errors. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15848)
- Improve returning individual message errors from the Beacon API. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15835)
- SSZ-QL: Clarify the `Size` method with more sophisticated `SSZType`s. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15864)

### Fixed

- Use the service context and continue on slasher attestation errors. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15803)
- The block event is no longer sent on certain block processing failures; it is now sent only in three cases: the block is non-canonical, the block is canonical but `getFCUArgs` fails, or processing fully succeeds. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15814)
- Fixed web3signer e2e test issues caused by a regression in old fork support. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15832)
- Do not mark blocks as invalid from `ErrNotDescendantOfFinalized`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15846)
- Fixed [#15812](https://github.com/OffchainLabs/prysm/issues/15812): Gossip attestation validation incorrectly rejected attestations that arrive before their referenced blocks. Previously, attestations were saved to the pending queue but immediately rejected by forkchoice validation, causing "not descendant of finalized checkpoint" errors. Now attestations for missing blocks return `ValidationIgnore` without error, allowing them to be properly processed when their blocks arrive. This eliminates false positive rejections and prevents potential incorrect peer downscoring during network congestion. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15840)
- Mark the block as invalid if it has an invalid signature. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15847)
- Display error messages from the server verbatim when they are not encoded as `application/json`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15860)
- `HasAtLeastOneIndex`: Check that the index is not too high. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15865)
- Fix the `/eth/v1/beacon/blob_sidecars/` beacon API when the Fulu fork epoch is set to the far-future epoch. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15867)
- `dataColumnSidecarsByRangeRPCHandler`: Gracefully close the stream if there is no data to return. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15866)
- `VerifyDataColumnSidecar`: Check that there are not too many commitments. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15859)
- `WithDataColumnRetentionEpochs`: Use `dataColumnRetentionEpoch` instead of `blobColumnRetentionEpoch`. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15872)
- Mark epoch transition correctly on new head events. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15871)
- Reject committee index >= committees_per_slot in unaggregated attestation validation. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15855)
- Decreased the attestation gossip validation batch deadline to 5ms. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15882)

## [v6.1.2](https://github.com/prysmaticlabs/prysm/compare/v6.1.1...v6.1.2) - 2025-10-10

This release has several important fixes to improve Prysm's peering, stability, and attestation inclusion on mainnet and all testnets. All node operators are encouraged to update to this release as soon as practical for the best mainnet performance.

@@ -3759,4 +3820,4 @@ There are no security updates in this release.

# Older than v2.0.0

For changelog history for releases older than v2.0.0, please refer to https://github.com/prysmaticlabs/prysm/releases
WORKSPACE (16 changes)
@@ -253,16 +253,16 @@ filegroup(
    url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)

-consensus_spec_version = "v1.6.0-beta.0"
+consensus_spec_version = "v1.6.0"

load("@prysm//tools:download_spectests.bzl", "consensus_spec_tests")

consensus_spec_tests(
    name = "consensus_spec_tests",
    flavors = {
-        "general": "sha256-rT3jQp2+ZaDiO66gIQggetzqr+kGeexaLqEhbx4HDMY=",
-        "minimal": "sha256-wowwwyvd0KJLsE+oDOtPkrhZyJndJpJ0lbXYsLH6XBw=",
-        "mainnet": "sha256-4ZLrLNeO7NihZ4TuWH5V5fUhvW9Y3mAPBQDCqrfShps=",
+        "general": "sha256-54hTaUNF9nLg+hRr3oHoq0yjZpW3MNiiUUuCQu6Rajk=",
+        "minimal": "sha256-1JHIGg3gVMjvcGYRHR5cwdDgOvX47oR/MWp6gyAeZfA=",
+        "mainnet": "sha256-292h3W2Ffts0YExgDTyxYe9Os7R0bZIXuAaMO8P6kl4=",
    },
    version = consensus_spec_version,
)

@@ -278,7 +278,7 @@ filegroup(
    visibility = ["//visibility:public"],
)
""",
-    integrity = "sha256-sBe3Rx8zGq9IrvfgIhZQpYidGjy3mE1SiCb6/+pjLdY=",
+    integrity = "sha256-VzBgrEokvYSMIIXVnSA5XS9I3m9oxpvToQGxC1N5lzw=",
    strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
    url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)

@@ -327,9 +327,9 @@ filegroup(
    visibility = ["//visibility:public"],
)
""",
-    integrity = "sha256-NZr/gsQK9rBHRnznlPBiNzJpK8MPMrfUa3f+QYqn1+g=",
-    strip_prefix = "mainnet-978f1794eada6f85bee76e4d2d5959a5fb8e0cc5",
-    url = "https://github.com/eth-clients/mainnet/archive/978f1794eada6f85bee76e4d2d5959a5fb8e0cc5.tar.gz",
+    integrity = "sha256-+mqMXyboedVw8Yp0v+U9GDz98QoC1SZET8mjaKPX+AI=",
+    strip_prefix = "mainnet-980aee8893a2291d473c38f63797d5bc370fa381",
+    url = "https://github.com/eth-clients/mainnet/archive/980aee8893a2291d473c38f63797d5bc370fa381.tar.gz",
)

http_archive(
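The `integrity` values above are Bazel SRI-style pins: `sha256-` followed by the base64-encoded SHA-256 digest of the archive. A minimal Go sketch for reproducing such a value locally; the archive filename is an example, not taken from the diff:

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"io"
	"log"
	"os"
)

// Compute a Bazel-style SRI integrity value ("sha256-<base64>") for a
// downloaded archive, so a WORKSPACE bump like the one above can be verified.
func main() {
	f, err := os.Open("v1.6.0.tar.gz") // example archive path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("sha256-%s\n", base64.StdEncoding.EncodeToString(h.Sum(nil)))
}
```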
api/client/BUILD.bazel

@@ -6,6 +6,7 @@ go_library(
        "client.go",
        "errors.go",
        "options.go",
+       "transport.go",
    ],
    importpath = "github.com/OffchainLabs/prysm/v6/api/client",
    visibility = ["//visibility:public"],

@@ -14,7 +15,13 @@ go_library(

go_test(
    name = "go_default_test",
-   srcs = ["client_test.go"],
+   srcs = [
+       "client_test.go",
+       "transport_test.go",
+   ],
    embed = [":go_default_library"],
-   deps = ["//testing/require:go_default_library"],
+   deps = [
+       "//testing/assert:go_default_library",
+       "//testing/require:go_default_library",
+   ],
)
api/client/client.go

@@ -284,7 +284,7 @@ func (c *Client) SubmitChangeBLStoExecution(ctx context.Context, request []*stru
    if resp.StatusCode != http.StatusOK {
        decoder := json.NewDecoder(resp.Body)
        decoder.DisallowUnknownFields()
-       errorJson := &server.IndexedVerificationFailureError{}
+       errorJson := &server.IndexedErrorContainer{}
        if err := decoder.Decode(errorJson); err != nil {
            return errors.Wrapf(err, "failed to decode error JSON for %s", resp.Request.URL)
        }

@@ -726,6 +726,12 @@ func unexpectedStatusErr(response *http.Response, expected int) error {
            return errors.Wrap(jsonErr, "unable to read response body")
        }
        return errors.Wrap(ErrNotOK, errMessage.Message)
+   case http.StatusBadGateway:
+       log.WithError(ErrBadGateway).Debug(msg)
+       if jsonErr := json.Unmarshal(bodyBytes, &errMessage); jsonErr != nil {
+           return errors.Wrap(jsonErr, "unable to read response body")
+       }
+       return errors.Wrap(ErrBadGateway, errMessage.Message)
    default:
        log.WithError(ErrNotOK).Debug(msg)
        return errors.Wrap(ErrNotOK, fmt.Sprintf("unsupported error code: %d", response.StatusCode))
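With the new `http.StatusBadGateway` case, a 502 surfaces as `ErrBadGateway` (defined in `api/client/errors.go` below) rather than the generic `ErrNotOK`, so callers can branch on it with `errors.Is`. A self-contained sketch; the sentinel values here are local stand-ins mirroring the repo's definitions, not the definitions themselves:

```go
package main

import (
	"errors"
	"fmt"
)

// Local stand-ins for the sentinels in api/client/errors.go, wrapped the same
// way unexpectedStatusErr wraps them, so errors.Is can unwrap the chain.
var (
	errNotOK      = errors.New("did not receive 2xx response from API")
	errBadGateway = fmt.Errorf("recv 502 BadGateway response from API: %w", errNotOK)
)

func submit() error { // stand-in for a client call that hit a 502
	return fmt.Errorf("submit blind block: %w", errBadGateway)
}

func main() {
	if err := submit(); err != nil {
		switch {
		case errors.Is(err, errBadGateway):
			fmt.Println("502 from upstream, degrade gracefully:", err)
		case errors.Is(err, errNotOK):
			fmt.Println("other non-OK response:", err)
		}
	}
}
```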
api/client/client_test.go

@@ -10,9 +10,9 @@ import (
    "net/url"
    "testing"

+   "github.com/OffchainLabs/go-bitfield"
    "github.com/OffchainLabs/prysm/v6/api"
    "github.com/OffchainLabs/prysm/v6/api/server/structs"
+   "github.com/OffchainLabs/prysm/v6/testing/util"
    "github.com/OffchainLabs/prysm/v6/config/params"
    "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
    "github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"

@@ -22,7 +22,7 @@ import (
    eth "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
    "github.com/OffchainLabs/prysm/v6/testing/assert"
    "github.com/OffchainLabs/prysm/v6/testing/require"
-   "github.com/prysmaticlabs/go-bitfield"
-   "github.com/OffchainLabs/prysm/v6/testing/util"
    log "github.com/sirupsen/logrus"
)
|
||||
@@ -21,3 +21,4 @@ var ErrUnsupportedMediaType = errors.Wrap(ErrNotOK, "The media type in \"Content
|
||||
|
||||
// ErrNotAcceptable specifically means that a '406 - Not Acceptable' was received from the API.
|
||||
var ErrNotAcceptable = errors.Wrap(ErrNotOK, "The accept header value is not acceptable")
|
||||
var ErrBadGateway = errors.Wrap(ErrNotOK, "recv 502 BadGateway response from API")
|
||||
|
||||
@@ -11,6 +11,7 @@ import (
    "os"
    "testing"

+   "github.com/OffchainLabs/go-bitfield"
    "github.com/OffchainLabs/prysm/v6/api/server/structs"
    fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
    consensusblocks "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"

@@ -22,7 +23,6 @@ import (
    "github.com/OffchainLabs/prysm/v6/testing/require"
    "github.com/ethereum/go-ethereum/common/hexutil"
    "github.com/pkg/errors"
-   "github.com/prysmaticlabs/go-bitfield"
)

func ezDecode(t *testing.T, s string) []byte {
api/client/transport.go (new file, 25 lines)
@@ -0,0 +1,25 @@
package client

import "net/http"

// CustomHeadersTransport adds custom headers to each request
type CustomHeadersTransport struct {
    base    http.RoundTripper
    headers map[string][]string
}

func NewCustomHeadersTransport(base http.RoundTripper, headers map[string][]string) *CustomHeadersTransport {
    return &CustomHeadersTransport{
        base:    base,
        headers: headers,
    }
}

func (t *CustomHeadersTransport) RoundTrip(req *http.Request) (*http.Response, error) {
    for header, values := range t.headers {
        for _, value := range values {
            req.Header.Add(header, value)
        }
    }
    return t.base.RoundTrip(req)
}
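The new transport composes with any `http.Client`. A minimal usage sketch; the header value and URL are illustrative only:

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/OffchainLabs/prysm/v6/api/client"
)

// Inject static headers into every request made by an http.Client via the
// new CustomHeadersTransport.
func main() {
	t := client.NewCustomHeadersTransport(http.DefaultTransport, map[string][]string{
		"Authorization": {"Bearer example-token"}, // example value
	})
	hc := &http.Client{Transport: t}
	resp, err := hc.Get("http://localhost:3500/eth/v1/node/version") // example URL
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // headers were added by the transport
}
```

One design note: `RoundTrip` here adds headers to the incoming request in place, while the `net/http` documentation recommends that a `RoundTripper` not modify the request; a stricter variant would clone the request first. For static headers this in-place approach is nonetheless common.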
api/client/transport_test.go (new file, 25 lines)
@@ -0,0 +1,25 @@
package client

import (
    "net/http"
    "net/http/httptest"
    "testing"

    "github.com/OffchainLabs/prysm/v6/testing/assert"
    "github.com/OffchainLabs/prysm/v6/testing/require"
)

type noopTransport struct{}

func (*noopTransport) RoundTrip(*http.Request) (*http.Response, error) {
    return nil, nil
}

func TestRoundTrip(t *testing.T) {
    tr := &CustomHeadersTransport{base: &noopTransport{}, headers: map[string][]string{"key1": []string{"value1", "value2"}, "key2": []string{"value3"}}}
    req := httptest.NewRequest("GET", "http://foo", nil)
    _, err := tr.RoundTrip(req)
    require.NoError(t, err)
    assert.DeepEqual(t, []string{"value1", "value2"}, req.Header.Values("key1"))
    assert.DeepEqual(t, []string{"value3"}, req.Header.Values("key2"))
}
@@ -6,6 +6,11 @@ import (
    "strings"
)

+var (
+   ErrIndexedValidationFail = "One or more messages failed validation"
+   ErrIndexedBroadcastFail  = "One or more messages failed broadcast"
+)

// DecodeError represents an error resulting from trying to decode an HTTP request.
// It tracks the full field name for which decoding failed.
type DecodeError struct {

@@ -29,19 +34,38 @@ func (e *DecodeError) Error() string {
    return fmt.Sprintf("could not decode %s: %s", strings.Join(e.path, "."), e.err.Error())
}

-// IndexedVerificationFailureError wraps a collection of verification failures.
-type IndexedVerificationFailureError struct {
-   Message  string                        `json:"message"`
-   Code     int                           `json:"code"`
-   Failures []*IndexedVerificationFailure `json:"failures"`
+// IndexedErrorContainer wraps a collection of indexed errors.
+type IndexedErrorContainer struct {
+   Message  string          `json:"message"`
+   Code     int             `json:"code"`
+   Failures []*IndexedError `json:"failures"`
}

-func (e *IndexedVerificationFailureError) StatusCode() int {
+func (e *IndexedErrorContainer) StatusCode() int {
    return e.Code
}

-// IndexedVerificationFailure represents an issue when verifying a single indexed object e.g. an item in an array.
-type IndexedVerificationFailure struct {
+// IndexedError represents an issue when processing a single indexed object e.g. an item in an array.
+type IndexedError struct {
    Index   int    `json:"index"`
    Message string `json:"message"`
}

+// BroadcastFailedError represents an error scenario where broadcasting a published message failed.
+type BroadcastFailedError struct {
+   msg string
+   err error
+}

+// NewBroadcastFailedError creates a new instance of BroadcastFailedError.
+func NewBroadcastFailedError(msg string, err error) *BroadcastFailedError {
+   return &BroadcastFailedError{
+       msg: msg,
+       err: err,
+   }
+}

+// Error returns the underlying error message.
+func (e *BroadcastFailedError) Error() string {
+   return fmt.Sprintf("could not broadcast %s: %s", e.msg, e.err.Error())
+}
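The renamed container keeps the same JSON wire shape (`message`, `code`, and `failures` with per-item `index`/`message`), so existing clients keep working. A self-contained sketch of decoding such a body, using local mirrors of the types and an illustrative payload:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Local mirrors of the renamed server types, so the sketch runs standalone.
type indexedError struct {
	Index   int    `json:"index"`
	Message string `json:"message"`
}

type indexedErrorContainer struct {
	Message  string          `json:"message"`
	Code     int             `json:"code"`
	Failures []*indexedError `json:"failures"`
}

func main() {
	// Example body, shaped like the "One or more messages failed validation" response.
	body := []byte(`{"message":"One or more messages failed validation","code":400,
		"failures":[{"index":2,"message":"invalid signature"}]}`)
	var c indexedErrorContainer
	if err := json.Unmarshal(body, &c); err != nil {
		panic(err)
	}
	for _, f := range c.Failures {
		fmt.Printf("item %d failed: %s\n", f.Index, f.Message) // item 2 failed: invalid signature
	}
}
```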
@@ -13,7 +13,6 @@ go_library(
    "conversions_state.go",
    "endpoints_beacon.go",
    "endpoints_blob.go",
    "endpoints_builder.go",
    "endpoints_config.go",
    "endpoints_debug.go",
    "endpoints_events.go",
@@ -1492,20 +1492,6 @@ func sszBytesToUint256String(b []byte) (string, error) {
    return bi.String(), nil
}

-func DepositSnapshotFromConsensus(ds *eth.DepositSnapshot) *DepositSnapshot {
-   finalized := make([]string, 0, len(ds.Finalized))
-   for _, f := range ds.Finalized {
-       finalized = append(finalized, hexutil.Encode(f))
-   }
-   return &DepositSnapshot{
-       Finalized:            finalized,
-       DepositRoot:          hexutil.Encode(ds.DepositRoot),
-       DepositCount:         fmt.Sprintf("%d", ds.DepositCount),
-       ExecutionBlockHash:   hexutil.Encode(ds.ExecutionHash),
-       ExecutionBlockHeight: fmt.Sprintf("%d", ds.ExecutionDepth),
-   }
-}

func PendingDepositsFromConsensus(ds []*eth.PendingDeposit) []*PendingDeposit {
    deposits := make([]*PendingDeposit, len(ds))
    for i, d := range ds {
@@ -9,24 +9,6 @@ import (
    "github.com/ethereum/go-ethereum/common/hexutil"
)

-func TestDepositSnapshotFromConsensus(t *testing.T) {
-   ds := &eth.DepositSnapshot{
-       Finalized:      [][]byte{{0xde, 0xad, 0xbe, 0xef}, {0xca, 0xfe, 0xba, 0xbe}},
-       DepositRoot:    []byte{0xab, 0xcd},
-       DepositCount:   12345,
-       ExecutionHash:  []byte{0x12, 0x34},
-       ExecutionDepth: 67890,
-   }
-
-   res := DepositSnapshotFromConsensus(ds)
-   require.NotNil(t, res)
-   require.DeepEqual(t, []string{"0xdeadbeef", "0xcafebabe"}, res.Finalized)
-   require.Equal(t, "0xabcd", res.DepositRoot)
-   require.Equal(t, "12345", res.DepositCount)
-   require.Equal(t, "0x1234", res.ExecutionBlockHash)
-   require.Equal(t, "67890", res.ExecutionBlockHeight)
-}

func TestSignedBLSToExecutionChange_ToConsensus(t *testing.T) {
    s := &SignedBLSToExecutionChange{Message: nil, Signature: ""}
    _, err := s.ToConsensus()
@@ -206,18 +206,6 @@ type WeakSubjectivityData struct {
    StateRoot string `json:"state_root"`
}

-type GetDepositSnapshotResponse struct {
-   Data *DepositSnapshot `json:"data"`
-}

-type DepositSnapshot struct {
-   Finalized            []string `json:"finalized"`
-   DepositRoot          string   `json:"deposit_root"`
-   DepositCount         string   `json:"deposit_count"`
-   ExecutionBlockHash   string   `json:"execution_block_hash"`
-   ExecutionBlockHeight string   `json:"execution_block_height"`
-}

type GetIndividualVotesRequest struct {
    Epoch      string   `json:"epoch"`
    PublicKeys []string `json:"public_keys,omitempty"`

@@ -296,3 +284,8 @@ type GetBlobsResponse struct {
    Finalized bool     `json:"finalized"`
    Data      []string `json:"data"` // blobs
}

+type SSZQueryRequest struct {
+   Query        string `json:"query"`
+   IncludeProof bool   `json:"include_proof,omitempty"`
+}
@@ -1,14 +0,0 @@ (file deleted)
-package structs

-type ExpectedWithdrawalsResponse struct {
-   Data                []*ExpectedWithdrawal `json:"data"`
-   ExecutionOptimistic bool                  `json:"execution_optimistic"`
-   Finalized           bool                  `json:"finalized"`
-}

-type ExpectedWithdrawal struct {
-   Address        string `json:"address" hex:"true"`
-   Amount         string `json:"amount"`
-   Index          string `json:"index"`
-   ValidatorIndex string `json:"validator_index"`
-}
@@ -173,6 +173,7 @@ go_test(
    "//beacon-chain/state/state-native:go_default_library",
    "//beacon-chain/state/stategen:go_default_library",
    "//beacon-chain/verification:go_default_library",
+   "//cmd/beacon-chain/flags:go_default_library",
    "//config/features:go_default_library",
    "//config/fieldparams:go_default_library",
    "//config/params:go_default_library",
@@ -346,13 +346,24 @@ func (s *Service) notifyNewHeadEvent(
    if err != nil {
        return errors.Wrap(err, "could not check if node is optimistically synced")
    }

+   parentRoot, err := s.ParentRoot([32]byte(newHeadRoot))
+   if err != nil {
+       return errors.Wrap(err, "could not obtain parent root in forkchoice")
+   }
+   parentSlot, err := s.RecentBlockSlot(parentRoot)
+   if err != nil {
+       return errors.Wrap(err, "could not obtain parent slot in forkchoice")
+   }
+   epochTransition := slots.ToEpoch(newHeadSlot) > slots.ToEpoch(parentSlot)

    s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
        Type: statefeed.NewHead,
        Data: &ethpbv1.EventHead{
            Slot:  newHeadSlot,
            Block: newHeadRoot,
            State: newHeadStateRoot,
-           EpochTransition:           slots.IsEpochStart(newHeadSlot),
+           EpochTransition:           epochTransition,
            PreviousDutyDependentRoot: previousDutyDependentRoot[:],
            CurrentDutyDependentRoot:  currentDutyDependentRoot[:],
            ExecutionOptimistic:       isOptimistic,
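The old check fired only when the head slot was exactly the first slot of an epoch, so if that slot was skipped the transition was never marked. Comparing the parent's epoch with the head's epoch catches that case. A runnable sketch, assuming the mainnet value of 32 slots per epoch:

```go
package main

import "fmt"

const slotsPerEpoch = 32 // mainnet assumption

func toEpoch(slot uint64) uint64 { return slot / slotsPerEpoch }

// An epoch transition is detected whenever the head's epoch exceeds its
// parent's epoch, even when the first slot of the new epoch was skipped.
func main() {
	parentSlot, headSlot := uint64(31), uint64(33) // slot 32 was skipped

	oldCheck := headSlot%slotsPerEpoch == 0             // slots.IsEpochStart: misses the transition
	newCheck := toEpoch(headSlot) > toEpoch(parentSlot) // transition correctly marked

	fmt.Println(oldCheck, newCheck) // false true
}
```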
@@ -162,6 +162,9 @@ func Test_notifyNewHeadEvent(t *testing.T) {
    require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
    newHeadStateRoot := [32]byte{2}
    newHeadRoot := [32]byte{3}
+   st, blk, err = prepareForkchoiceState(t.Context(), 1, newHeadRoot, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
+   require.NoError(t, err)
+   require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
    require.NoError(t, srv.notifyNewHeadEvent(t.Context(), 1, bState, newHeadStateRoot[:], newHeadRoot[:]))
    events := notifier.ReceivedEvents()
    require.Equal(t, 1, len(events))

@@ -196,6 +199,9 @@ func Test_notifyNewHeadEvent(t *testing.T) {

    newHeadStateRoot := [32]byte{2}
    newHeadRoot := [32]byte{3}
+   st, blk, err = prepareForkchoiceState(t.Context(), 0, newHeadRoot, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
+   require.NoError(t, err)
+   require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
    err = srv.notifyNewHeadEvent(t.Context(), epoch2Start, bState, newHeadStateRoot[:], newHeadRoot[:])
    require.NoError(t, err)
    events := notifier.ReceivedEvents()

@@ -213,6 +219,37 @@ func Test_notifyNewHeadEvent(t *testing.T) {
        }
        require.DeepSSZEqual(t, wanted, eventHead)
    })
+   t.Run("epoch transition", func(t *testing.T) {
+       bState, _ := util.DeterministicGenesisState(t, 10)
+       srv := testServiceWithDB(t)
+       srv.SetGenesisTime(time.Now())
+       notifier := srv.cfg.StateNotifier.(*mock.MockStateNotifier)
+       srv.originBlockRoot = [32]byte{1}
+       st, blk, err := prepareForkchoiceState(t.Context(), 0, [32]byte{}, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
+       require.NoError(t, err)
+       require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
+       newHeadStateRoot := [32]byte{2}
+       newHeadRoot := [32]byte{3}
+       st, blk, err = prepareForkchoiceState(t.Context(), 32, newHeadRoot, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
+       require.NoError(t, err)
+       require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
+       newHeadSlot := params.BeaconConfig().SlotsPerEpoch
+       require.NoError(t, srv.notifyNewHeadEvent(t.Context(), newHeadSlot, bState, newHeadStateRoot[:], newHeadRoot[:]))
+       events := notifier.ReceivedEvents()
+       require.Equal(t, 1, len(events))
+
+       eventHead, ok := events[0].Data.(*ethpbv1.EventHead)
+       require.Equal(t, true, ok)
+       wanted := &ethpbv1.EventHead{
+           Slot:                      newHeadSlot,
+           Block:                     newHeadRoot[:],
+           State:                     newHeadStateRoot[:],
+           EpochTransition:           true,
+           PreviousDutyDependentRoot: params.BeaconConfig().ZeroHash[:],
+           CurrentDutyDependentRoot:  srv.originBlockRoot[:],
+       }
+       require.DeepSSZEqual(t, wanted, eventHead)
+   })
}

func TestRetrieveHead_ReadOnly(t *testing.T) {
@@ -1,6 +1,7 @@
package blockchain

import (
+   "bytes"
    "context"
    "fmt"
    "strconv"

@@ -16,6 +17,7 @@ import (
    ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
    "github.com/OffchainLabs/prysm/v6/time/slots"
    "github.com/pkg/errors"
+   "github.com/sirupsen/logrus"
)

// The caller of this function must have a lock on forkchoice.

@@ -27,6 +29,20 @@ func (s *Service) getRecentPreState(ctx context.Context, c *ethpb.Checkpoint) st
    if !s.cfg.ForkChoiceStore.IsCanonical([32]byte(c.Root)) {
        return nil
    }
+   // Only use head state if the head state is compatible with the target checkpoint.
+   headRoot, err := s.HeadRoot(ctx)
+   if err != nil {
+       return nil
+   }
+   headTarget, err := s.cfg.ForkChoiceStore.TargetRootForEpoch([32]byte(headRoot), c.Epoch)
+   if err != nil {
+       return nil
+   }
+   if !bytes.Equal(c.Root, headTarget[:]) {
+       return nil
+   }

    // If the head state alone is enough, we can return it directly read only.
    if c.Epoch == headEpoch {
        st, err := s.HeadStateReadOnly(ctx)
        if err != nil {

@@ -34,11 +50,13 @@ func (s *Service) getRecentPreState(ctx context.Context, c *ethpb.Checkpoint) st
        }
        return st
    }
+   // Otherwise we need to advance the head state to the start of the target epoch.
+   // This point can only be reached if c.Root == headRoot and c.Epoch > headEpoch.
    slot, err := slots.EpochStart(c.Epoch)
    if err != nil {
        return nil
    }
-   // Try if we have already set the checkpoint cache
+   // Try if we have already set the checkpoint cache. This will be tried again if we fail here but the check is cheap anyway.
    epochKey := strconv.FormatUint(uint64(c.Epoch), 10 /* base 10 */)
    lock := async.NewMultilock(string(c.Root) + epochKey)
    lock.Lock()

@@ -50,6 +68,7 @@ func (s *Service) getRecentPreState(ctx context.Context, c *ethpb.Checkpoint) st
    if cachedState != nil && !cachedState.IsNil() {
        return cachedState
    }
+   // If we haven't advanced yet then process the slots from head state.
    st, err := s.HeadState(ctx)
    if err != nil {
        return nil

@@ -114,6 +133,7 @@ func (s *Service) getAttPreState(ctx context.Context, c *ethpb.Checkpoint) (stat
    }

    // Fallback to state regeneration.
+   log.WithFields(logrus.Fields{"epoch": c.Epoch, "root": fmt.Sprintf("%#x", c.Root)}).Debug("Regenerating attestation pre-state")
    baseState, err := s.cfg.StateGen.StateByRoot(ctx, bytesutil.ToBytes32(c.Root))
    if err != nil {
        return nil, errors.Wrapf(err, "could not get pre state for epoch %d", c.Epoch)
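For reference, the advance target used above is the first slot of the checkpoint epoch. A tiny sketch of the arithmetic behind `slots.EpochStart`, again under the mainnet assumption of 32 slots per epoch:

```go
package main

import "fmt"

// The epoch-to-slot conversion behind slots.EpochStart (mainnet assumption:
// 32 slots per epoch).
func epochStart(epoch uint64) uint64 { return epoch * 32 }

func main() {
	// A head state at, say, slot 130 must be advanced through slot 160 before
	// it can serve as pre-state for a checkpoint in epoch 5.
	fmt.Println(epochStart(5)) // 160
}
```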
@@ -5,6 +5,7 @@ import (
    "fmt"
    "time"

+   "github.com/OffchainLabs/go-bitfield"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"

@@ -29,7 +30,6 @@ import (
    "github.com/OffchainLabs/prysm/v6/runtime/version"
    "github.com/OffchainLabs/prysm/v6/time/slots"
    "github.com/pkg/errors"
-   "github.com/prysmaticlabs/go-bitfield"
    "github.com/sirupsen/logrus"
)
@@ -9,6 +9,7 @@ import (
    "testing"
    "time"

+   "github.com/OffchainLabs/go-bitfield"
    mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"

@@ -49,7 +50,6 @@ import (
    "github.com/ethereum/go-ethereum/common"
    gethtypes "github.com/ethereum/go-ethereum/core/types"
    "github.com/pkg/errors"
-   "github.com/prysmaticlabs/go-bitfield"
    logTest "github.com/sirupsen/logrus/hooks/test"
)
@@ -3302,7 +3302,6 @@ func Test_postBlockProcess_EventSending(t *testing.T) {
    }
}

func setupLightClientTestRequirements(ctx context.Context, t *testing.T, s *Service, v int, options ...util.LightClientOption) (*util.TestLightClient, *postBlockProcessConfig) {
    var l *util.TestLightClient
    switch v {
@@ -472,8 +472,8 @@ func (s *Service) removeStartupState() {
func (s *Service) updateCustodyInfoInDB(slot primitives.Slot) (primitives.Slot, uint64, error) {
    isSubscribedToAllDataSubnets := flags.Get().SubscribeAllDataSubnets

-   beaconConfig := params.BeaconConfig()
-   custodyRequirement := beaconConfig.CustodyRequirement
+   cfg := params.BeaconConfig()
+   custodyRequirement := cfg.CustodyRequirement

    // Check if the node was previously subscribed to all data subnets, and if so,
    // store the new status accordingly.

@@ -493,7 +493,7 @@ func (s *Service) updateCustodyInfoInDB(slot primitives.Slot) (primitives.Slot,
    // Compute the custody group count.
    custodyGroupCount := custodyRequirement
    if isSubscribedToAllDataSubnets {
-       custodyGroupCount = beaconConfig.NumberOfColumns
+       custodyGroupCount = cfg.NumberOfCustodyGroups
    }

    // Safely compute the fulu fork slot.

@@ -536,11 +536,11 @@ func spawnCountdownIfPreGenesis(ctx context.Context, genesisTime time.Time, db d
}

func fuluForkSlot() (primitives.Slot, error) {
-   beaconConfig := params.BeaconConfig()
+   cfg := params.BeaconConfig()

-   fuluForkEpoch := beaconConfig.FuluForkEpoch
-   if fuluForkEpoch == beaconConfig.FarFutureEpoch {
-       return beaconConfig.FarFutureSlot, nil
+   fuluForkEpoch := cfg.FuluForkEpoch
+   if fuluForkEpoch == cfg.FarFutureEpoch {
+       return cfg.FarFutureSlot, nil
    }

    forkFuluSlot, err := slots.EpochStart(fuluForkEpoch)
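The bug fixed here conflated two distinct PeerDAS constants: the total number of data columns and the number of custody groups (the test below uses 128 and 64 respectively). A node subscribed to all data subnets must custody every group, not every column. A runnable sketch using the test's configuration values:

```go
package main

import "fmt"

// Constants taken from TestUpdateCustodyInfoInDB below: custody is counted in
// groups, and each group maps to one or more columns.
const (
	custodyRequirement    = 4   // default custody group count
	numberOfCustodyGroups = 64  // upper bound for a custody group count
	numberOfColumns       = 128 // total data columns; NOT a valid group count
)

func custodyGroupCount(subscribeAllDataSubnets bool) uint64 {
	if subscribeAllDataSubnets {
		return numberOfCustodyGroups // the fix: was numberOfColumns (128), out of range
	}
	return custodyRequirement
}

func main() {
	fmt.Println(custodyGroupCount(false), custodyGroupCount(true)) // 4 64
}
```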
@@ -23,9 +23,11 @@ import (
    "github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
    state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
+   "github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
+   "github.com/OffchainLabs/prysm/v6/config/features"
    fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
    "github.com/OffchainLabs/prysm/v6/config/params"
    "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
    consensusblocks "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
    "github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
    "github.com/OffchainLabs/prysm/v6/consensus-types/primitives"

@@ -596,3 +598,103 @@ func TestNotifyIndex(t *testing.T) {
        t.Errorf("Notifier channel did not receive the index")
    }
}

func TestUpdateCustodyInfoInDB(t *testing.T) {
    const (
        fuluForkEpoch         = 10
        custodyRequirement    = uint64(4)
        earliestStoredSlot    = primitives.Slot(12)
        numberOfCustodyGroups = uint64(64)
        numberOfColumns       = uint64(128)
    )

    params.SetupTestConfigCleanup(t)
    cfg := params.BeaconConfig()
    cfg.FuluForkEpoch = fuluForkEpoch
    cfg.CustodyRequirement = custodyRequirement
    cfg.NumberOfCustodyGroups = numberOfCustodyGroups
    cfg.NumberOfColumns = numberOfColumns
    params.OverrideBeaconConfig(cfg)

    ctx := t.Context()
    pbBlock := util.NewBeaconBlock()
    pbBlock.Block.Slot = 12
    signedBeaconBlock, err := blocks.NewSignedBeaconBlock(pbBlock)
    require.NoError(t, err)

    roBlock, err := blocks.NewROBlock(signedBeaconBlock)
    require.NoError(t, err)

    t.Run("CGC increases before fulu", func(t *testing.T) {
        service, requirements := minimalTestService(t)
        err = requirements.db.SaveBlock(ctx, roBlock)
        require.NoError(t, err)

        // Before Fulu
        // -----------
        actualEas, actualCgc, err := service.updateCustodyInfoInDB(15)
        require.NoError(t, err)
        require.Equal(t, earliestStoredSlot, actualEas)
        require.Equal(t, custodyRequirement, actualCgc)

        actualEas, actualCgc, err = service.updateCustodyInfoInDB(17)
        require.NoError(t, err)
        require.Equal(t, earliestStoredSlot, actualEas)
        require.Equal(t, custodyRequirement, actualCgc)

        resetFlags := flags.Get()
        gFlags := new(flags.GlobalFlags)
        gFlags.SubscribeAllDataSubnets = true
        flags.Init(gFlags)
        defer flags.Init(resetFlags)

        actualEas, actualCgc, err = service.updateCustodyInfoInDB(19)
        require.NoError(t, err)
        require.Equal(t, earliestStoredSlot, actualEas)
        require.Equal(t, numberOfCustodyGroups, actualCgc)

        // After Fulu
        // ----------
        actualEas, actualCgc, err = service.updateCustodyInfoInDB(fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1)
        require.NoError(t, err)
        require.Equal(t, earliestStoredSlot, actualEas)
        require.Equal(t, numberOfCustodyGroups, actualCgc)
    })

    t.Run("CGC increases after fulu", func(t *testing.T) {
        service, requirements := minimalTestService(t)
        err = requirements.db.SaveBlock(ctx, roBlock)
        require.NoError(t, err)

        // Before Fulu
        // -----------
        actualEas, actualCgc, err := service.updateCustodyInfoInDB(15)
        require.NoError(t, err)
        require.Equal(t, earliestStoredSlot, actualEas)
        require.Equal(t, custodyRequirement, actualCgc)

        actualEas, actualCgc, err = service.updateCustodyInfoInDB(17)
        require.NoError(t, err)
        require.Equal(t, earliestStoredSlot, actualEas)
        require.Equal(t, custodyRequirement, actualCgc)

        // After Fulu
        // ----------
        resetFlags := flags.Get()
        gFlags := new(flags.GlobalFlags)
        gFlags.SubscribeAllDataSubnets = true
        flags.Init(gFlags)
        defer flags.Init(resetFlags)

        slot := fuluForkEpoch*primitives.Slot(cfg.SlotsPerEpoch) + 1
        actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot)
        require.NoError(t, err)
        require.Equal(t, slot, actualEas)
        require.Equal(t, numberOfCustodyGroups, actualCgc)

        actualEas, actualCgc, err = service.updateCustodyInfoInDB(slot + 2)
        require.NoError(t, err)
        require.Equal(t, slot, actualEas)
        require.Equal(t, numberOfCustodyGroups, actualCgc)
    })
}
@@ -130,6 +130,14 @@ func (dch *mockCustodyManager) UpdateCustodyInfo(earliestAvailableSlot primitive
    return earliestAvailableSlot, custodyGroupCount, nil
}

+func (dch *mockCustodyManager) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
+   dch.mut.Lock()
+   defer dch.mut.Unlock()
+
+   dch.earliestAvailableSlot = earliestAvailableSlot
+   return nil
+}

func (dch *mockCustodyManager) CustodyGroupCountFromPeer(peer.ID) uint64 {
    return 0
}
@@ -472,6 +472,36 @@ func (s *ChainService) HasBlock(ctx context.Context, rt [32]byte) bool {
    return s.InitSyncBlockRoots[rt]
}

func (s *ChainService) AvailableBlocks(ctx context.Context, blockRoots [][32]byte) map[[32]byte]bool {
    if s.DB == nil {
        return nil
    }

    count := len(blockRoots)
    availableRoots := make(map[[32]byte]bool, count)
    notInDBRoots := make([][32]byte, 0, count)
    for _, root := range blockRoots {
        if s.DB.HasBlock(ctx, root) {
            availableRoots[root] = true
            continue
        }

        notInDBRoots = append(notInDBRoots, root)
    }

    if s.InitSyncBlockRoots == nil {
        return availableRoots
    }

    for _, root := range notInDBRoots {
        if s.InitSyncBlockRoots[root] {
            availableRoots[root] = true
        }
    }

    return availableRoots
}

// RecentBlockSlot mocks the same method in the chain service.
func (s *ChainService) RecentBlockSlot([32]byte) (primitives.Slot, error) {
    return s.BlockSlot, nil
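This mock mirrors a chain-service method that batch-checks availability against the database first and falls back to the initial-sync cache. A hypothetical consumer sketch (not from the diff) of how a batched result replaces per-root `HasBlock` calls:

```go
package main

import "fmt"

// Hypothetical consumer: partition block roots by availability, as a batched
// AvailableBlocks result allows, instead of one HasBlock lookup per root.
func missingRoots(available map[[32]byte]bool, roots [][32]byte) [][32]byte {
	var missing [][32]byte
	for _, r := range roots {
		if !available[r] {
			missing = append(missing, r) // queue for pending-block processing
		}
	}
	return missing
}

func main() {
	a, b := [32]byte{1}, [32]byte{2}
	available := map[[32]byte]bool{a: true}
	fmt.Println(len(missingRoots(available, [][32]byte{a, b}))) // 1
}
```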
@@ -3,6 +3,7 @@ package blockchain
import (
    "context"
    "fmt"
+   "slices"

    "github.com/OffchainLabs/prysm/v6/beacon-chain/db/filters"
    "github.com/OffchainLabs/prysm/v6/config/params"

@@ -81,12 +82,10 @@ func (v *WeakSubjectivityVerifier) VerifyWeakSubjectivity(ctx context.Context, f
    if err != nil {
        return errors.Wrap(err, "error while retrieving block roots to verify weak subjectivity")
    }
-   for _, root := range roots {
-       if v.root == root {
-           log.Info("Weak subjectivity check has passed!!")
-           v.verified = true
-           return nil
-       }
+   if slices.Contains(roots, v.root) {
+       log.Info("Weak subjectivity check has passed!!")
+       v.verified = true
+       return nil
    }
    return errors.Wrap(errWSBlockNotFoundInEpoch, fmt.Sprintf("root=%#x, epoch=%d", v.root, v.epoch))
}
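`slices.Contains` (standard library, Go 1.21+) performs the same linear scan as the removed loop and works for any comparable element type, including the `[32]byte` roots used here:

```go
package main

import (
	"fmt"
	"slices"
)

// slices.Contains replaces the hand-rolled membership loop; [32]byte is a
// comparable type, so it can be used directly.
func main() {
	roots := [][32]byte{{1}, {2}, {3}}
	target := [32]byte{2}
	fmt.Println(slices.Contains(roots, target)) // true
}
```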
beacon-chain/cache/attestation_test.go (vendored, 2 changes)
@@ -3,6 +3,7 @@ package cache
import (
    "testing"

+   "github.com/OffchainLabs/go-bitfield"
    "github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
    "github.com/OffchainLabs/prysm/v6/crypto/bls/blst"
    "github.com/OffchainLabs/prysm/v6/encoding/bytesutil"

@@ -10,7 +11,6 @@ import (
    "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1/attestation"
    "github.com/OffchainLabs/prysm/v6/testing/assert"
    "github.com/OffchainLabs/prysm/v6/testing/require"
-   "github.com/prysmaticlabs/go-bitfield"
)

func TestAdd(t *testing.T) {
@@ -4,6 +4,7 @@ import (
    "fmt"
    "testing"

+   "github.com/OffchainLabs/go-bitfield"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"

@@ -22,7 +23,6 @@ import (
    "github.com/OffchainLabs/prysm/v6/testing/require"
    "github.com/OffchainLabs/prysm/v6/testing/util"
    gofuzz "github.com/google/gofuzz"
-   "github.com/prysmaticlabs/go-bitfield"
)

func TestProcessAttestations_InclusionDelayFailure(t *testing.T) {
@@ -4,6 +4,7 @@ import (
    "math"
    "testing"

+   "github.com/OffchainLabs/go-bitfield"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"

@@ -19,7 +20,6 @@ import (
    "github.com/OffchainLabs/prysm/v6/testing/require"
    "github.com/OffchainLabs/prysm/v6/testing/util"
    "github.com/OffchainLabs/prysm/v6/time/slots"
-   "github.com/prysmaticlabs/go-bitfield"
)

func TestProcessSyncCommittee_PerfectParticipation(t *testing.T) {
@@ -12,6 +12,46 @@ import (
    "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1/attestation"
)

// ConvertToAltair converts a Phase 0 beacon state to an Altair beacon state.
func ConvertToAltair(state state.BeaconState) (state.BeaconState, error) {
    epoch := time.CurrentEpoch(state)

    numValidators := state.NumValidators()
    s := &ethpb.BeaconStateAltair{
        GenesisTime:           uint64(state.GenesisTime().Unix()),
        GenesisValidatorsRoot: state.GenesisValidatorsRoot(),
        Slot:                  state.Slot(),
        Fork: &ethpb.Fork{
            PreviousVersion: state.Fork().CurrentVersion,
            CurrentVersion:  params.BeaconConfig().AltairForkVersion,
            Epoch:           epoch,
        },
        LatestBlockHeader:           state.LatestBlockHeader(),
        BlockRoots:                  state.BlockRoots(),
        StateRoots:                  state.StateRoots(),
        HistoricalRoots:             state.HistoricalRoots(),
        Eth1Data:                    state.Eth1Data(),
        Eth1DataVotes:               state.Eth1DataVotes(),
        Eth1DepositIndex:            state.Eth1DepositIndex(),
        Validators:                  state.Validators(),
        Balances:                    state.Balances(),
        RandaoMixes:                 state.RandaoMixes(),
        Slashings:                   state.Slashings(),
        PreviousEpochParticipation:  make([]byte, numValidators),
        CurrentEpochParticipation:   make([]byte, numValidators),
        JustificationBits:           state.JustificationBits(),
        PreviousJustifiedCheckpoint: state.PreviousJustifiedCheckpoint(),
        CurrentJustifiedCheckpoint:  state.CurrentJustifiedCheckpoint(),
        FinalizedCheckpoint:         state.FinalizedCheckpoint(),
        InactivityScores:            make([]uint64, numValidators),
    }
    newState, err := state_native.InitializeFromProtoUnsafeAltair(s)
    if err != nil {
        return nil, err
    }
    return newState, nil
}

// UpgradeToAltair updates input state to return the version Altair state.
//
// Spec code:

@@ -64,39 +104,7 @@ import (
//  post.next_sync_committee = get_next_sync_committee(post)
//  return post
func UpgradeToAltair(ctx context.Context, state state.BeaconState) (state.BeaconState, error) {
-   epoch := time.CurrentEpoch(state)
-
-   numValidators := state.NumValidators()
-   s := &ethpb.BeaconStateAltair{
-       GenesisTime:           uint64(state.GenesisTime().Unix()),
-       GenesisValidatorsRoot: state.GenesisValidatorsRoot(),
-       Slot:                  state.Slot(),
-       Fork: &ethpb.Fork{
-           PreviousVersion: state.Fork().CurrentVersion,
-           CurrentVersion:  params.BeaconConfig().AltairForkVersion,
-           Epoch:           epoch,
-       },
-       LatestBlockHeader:           state.LatestBlockHeader(),
-       BlockRoots:                  state.BlockRoots(),
-       StateRoots:                  state.StateRoots(),
-       HistoricalRoots:             state.HistoricalRoots(),
-       Eth1Data:                    state.Eth1Data(),
-       Eth1DataVotes:               state.Eth1DataVotes(),
-       Eth1DepositIndex:            state.Eth1DepositIndex(),
-       Validators:                  state.Validators(),
-       Balances:                    state.Balances(),
-       RandaoMixes:                 state.RandaoMixes(),
-       Slashings:                   state.Slashings(),
-       PreviousEpochParticipation:  make([]byte, numValidators),
-       CurrentEpochParticipation:   make([]byte, numValidators),
-       JustificationBits:           state.JustificationBits(),
-       PreviousJustifiedCheckpoint: state.PreviousJustifiedCheckpoint(),
-       CurrentJustifiedCheckpoint:  state.CurrentJustifiedCheckpoint(),
-       FinalizedCheckpoint:         state.FinalizedCheckpoint(),
-       InactivityScores:            make([]uint64, numValidators),
-   }
-
-   newState, err := state_native.InitializeFromProtoUnsafeAltair(s)
+   newState, err := ConvertToAltair(state)
    if err != nil {
        return nil, err
    }
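Extracting the pure shape conversion into `ConvertToAltair` lets callers obtain an Altair-shaped state without the fork-transition logic that `UpgradeToAltair` layers on top (useful, for instance, for the native state-diff work mentioned in the changelog). A hedged sketch of the call pattern; the import paths come from the diff, while the wrapper function itself is illustrative:

```go
package example

import (
	"context"

	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
)

// Hedged sketch: the conversion alone versus the full upgrade. st is assumed
// to be a valid Phase 0 state.BeaconState.
func convertVsUpgrade(ctx context.Context, st state.BeaconState) error {
	if _, err := altair.ConvertToAltair(st); err != nil { // shape change only
		return err
	}
	if _, err := altair.UpgradeToAltair(ctx, st); err != nil { // conversion plus fork-transition logic
		return err
	}
	return nil
}
```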
@@ -3,6 +3,7 @@ package altair_test
import (
    "testing"

+   "github.com/OffchainLabs/go-bitfield"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"

@@ -12,7 +13,6 @@ import (
    "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1/attestation"
    "github.com/OffchainLabs/prysm/v6/testing/require"
    "github.com/OffchainLabs/prysm/v6/testing/util"
-   "github.com/prysmaticlabs/go-bitfield"
)

func TestTranslateParticipation(t *testing.T) {
@@ -5,6 +5,7 @@ import (
    "os"
    "testing"

+   "github.com/OffchainLabs/go-bitfield"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
    state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
    "github.com/OffchainLabs/prysm/v6/config/params"

@@ -12,7 +13,6 @@ import (
    "github.com/OffchainLabs/prysm/v6/testing/assert"
    "github.com/OffchainLabs/prysm/v6/testing/require"
    "github.com/OffchainLabs/prysm/v6/testing/util"
-   "github.com/prysmaticlabs/go-bitfield"
)

// Beaconfuzz discovered an off by one issue where an attestation could be produced which would pass
@@ -4,6 +4,7 @@ import (
    "context"
    "testing"

+   "github.com/OffchainLabs/go-bitfield"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"

@@ -20,7 +21,6 @@ import (
    "github.com/OffchainLabs/prysm/v6/testing/assert"
    "github.com/OffchainLabs/prysm/v6/testing/require"
    "github.com/OffchainLabs/prysm/v6/testing/util"
-   "github.com/prysmaticlabs/go-bitfield"
)

func TestProcessAggregatedAttestation_OverlappingBits(t *testing.T) {
@@ -6,3 +6,4 @@ var errNilSignedWithdrawalMessage = errors.New("nil SignedBLSToExecutionChange m
var errNilWithdrawalMessage = errors.New("nil BLSToExecutionChange message")
var errInvalidBLSPrefix = errors.New("withdrawal credential prefix is not a BLS prefix")
var errInvalidWithdrawalCredentials = errors.New("withdrawal credentials do not match")
+var ErrInvalidSignature = errors.New("invalid signature")
@@ -86,7 +86,7 @@ func IsExecutionBlock(body interfaces.ReadOnlyBeaconBlockBody) (bool, error) {
// def is_execution_enabled(state: BeaconState, body: ReadOnlyBeaconBlockBody) -> bool:
//
//  return is_merge_block(state, body) or is_merge_complete(state)
-func IsExecutionEnabled(st state.BeaconState, body interfaces.ReadOnlyBeaconBlockBody) (bool, error) {
+func IsExecutionEnabled(st state.ReadOnlyBeaconState, body interfaces.ReadOnlyBeaconBlockBody) (bool, error) {
    if st == nil || body == nil {
        return false, errors.New("nil state or block body")
    }
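Narrowing the parameter to `state.ReadOnlyBeaconState` documents that the check never mutates the state, and any mutable state still satisfies the read-only interface. A toy sketch of the idiom; these are illustrative types, not Prysm's:

```go
package example

// A function that only reads should accept the read-only interface, so both
// mutable and read-only callers can use it without assertions or copies.
type readOnlyState interface{ Slot() uint64 }

type mutableState interface {
	readOnlyState
	SetSlot(uint64)
}

func isEnabled(st readOnlyState) bool { return st.Slot() > 0 } // accepts either kind
```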
@@ -114,9 +114,12 @@ func VerifyBlockSignatureUsingCurrentFork(beaconState state.ReadOnlyBeaconState,
    }
    proposerPubKey := proposer.PublicKey
    sig := blk.Signature()
-   return signing.VerifyBlockSigningRoot(proposerPubKey, sig[:], domain, func() ([32]byte, error) {
+   if err := signing.VerifyBlockSigningRoot(proposerPubKey, sig[:], domain, func() ([32]byte, error) {
        return blkRoot, nil
-   })
+   }); err != nil {
+       return ErrInvalidSignature
+   }
+   return nil
}

// BlockSignatureBatch retrieves the block signature batch from the provided block and its corresponding state.
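Returning the exported `ErrInvalidSignature` sentinel (added in the errors diff above) lets callers separate a definitively bad signature, which should mark the block invalid per the changelog, from transient verification failures. A runnable sketch using a local stand-in for the sentinel:

```go
package main

import (
	"errors"
	"fmt"
)

// Local stand-in for blocks.ErrInvalidSignature, so the sketch runs standalone.
var errInvalidSignature = errors.New("invalid signature")

func verify(ok bool) error {
	if !ok {
		return errInvalidSignature // sentinel returned directly, so errors.Is matches
	}
	return nil
}

func main() {
	if err := verify(false); errors.Is(err, errInvalidSignature) {
		fmt.Println("mark block invalid and reject it") // versus retrying on other errors
	}
}
```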
@@ -89,3 +89,36 @@ func TestVerifyBlockSignatureUsingCurrentFork(t *testing.T) {
    require.NoError(t, err)
    assert.NoError(t, blocks.VerifyBlockSignatureUsingCurrentFork(bState, wsb, blkRoot))
}

func TestVerifyBlockSignatureUsingCurrentFork_InvalidSignature(t *testing.T) {
    params.SetupTestConfigCleanup(t)
    bCfg := params.BeaconConfig()
    bCfg.AltairForkEpoch = 100
    bCfg.ForkVersionSchedule[bytesutil.ToBytes4(bCfg.AltairForkVersion)] = 100
    params.OverrideBeaconConfig(bCfg)
    bState, keys := util.DeterministicGenesisState(t, 100)
    altairBlk := util.NewBeaconBlockAltair()
    altairBlk.Block.ProposerIndex = 0
    altairBlk.Block.Slot = params.BeaconConfig().SlotsPerEpoch * 100
    blkRoot, err := altairBlk.Block.HashTreeRoot()
    assert.NoError(t, err)

    // Sign with wrong key (proposer index 0, but using key 1)
    fData := &ethpb.Fork{
        Epoch:           100,
        CurrentVersion:  params.BeaconConfig().AltairForkVersion,
        PreviousVersion: params.BeaconConfig().GenesisForkVersion,
    }
    domain, err := signing.Domain(fData, 100, params.BeaconConfig().DomainBeaconProposer, bState.GenesisValidatorsRoot())
    assert.NoError(t, err)
    rt, err := signing.ComputeSigningRoot(altairBlk.Block, domain)
    assert.NoError(t, err)
    wrongSig := keys[1].Sign(rt[:]).Marshal()
    altairBlk.Signature = wrongSig

    wsb, err := consensusblocks.NewSignedBeaconBlock(altairBlk)
    require.NoError(t, err)

    err = blocks.VerifyBlockSignatureUsingCurrentFork(bState, wsb, blkRoot)
    require.ErrorIs(t, err, blocks.ErrInvalidSignature, "Expected ErrInvalidSignature for invalid signature")
}
@@ -15,6 +15,129 @@ import (
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
// ConvertToElectra converts a Deneb beacon state to an Electra beacon state. It does not perform any fork logic.
|
||||
func ConvertToElectra(beaconState state.BeaconState) (state.BeaconState, error) {
|
||||
currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
nextSyncCommittee, err := beaconState.NextSyncCommittee()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
prevEpochParticipation, err := beaconState.PreviousEpochParticipation()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
currentEpochParticipation, err := beaconState.CurrentEpochParticipation()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
inactivityScores, err := beaconState.InactivityScores()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
    payloadHeader, err := beaconState.LatestExecutionPayloadHeader()
    if err != nil {
        return nil, err
    }
    txRoot, err := payloadHeader.TransactionsRoot()
    if err != nil {
        return nil, err
    }
    wdRoot, err := payloadHeader.WithdrawalsRoot()
    if err != nil {
        return nil, err
    }
    wi, err := beaconState.NextWithdrawalIndex()
    if err != nil {
        return nil, err
    }
    vi, err := beaconState.NextWithdrawalValidatorIndex()
    if err != nil {
        return nil, err
    }
    summaries, err := beaconState.HistoricalSummaries()
    if err != nil {
        return nil, err
    }
    excessBlobGas, err := payloadHeader.ExcessBlobGas()
    if err != nil {
        return nil, err
    }
    blobGasUsed, err := payloadHeader.BlobGasUsed()
    if err != nil {
        return nil, err
    }

    s := &ethpb.BeaconStateElectra{
        GenesisTime: uint64(beaconState.GenesisTime().Unix()),
        GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
        Slot: beaconState.Slot(),
        Fork: &ethpb.Fork{
            PreviousVersion: beaconState.Fork().CurrentVersion,
            CurrentVersion: params.BeaconConfig().ElectraForkVersion,
            Epoch: time.CurrentEpoch(beaconState),
        },
        LatestBlockHeader: beaconState.LatestBlockHeader(),
        BlockRoots: beaconState.BlockRoots(),
        StateRoots: beaconState.StateRoots(),
        HistoricalRoots: beaconState.HistoricalRoots(),
        Eth1Data: beaconState.Eth1Data(),
        Eth1DataVotes: beaconState.Eth1DataVotes(),
        Eth1DepositIndex: beaconState.Eth1DepositIndex(),
        Validators: beaconState.Validators(),
        Balances: beaconState.Balances(),
        RandaoMixes: beaconState.RandaoMixes(),
        Slashings: beaconState.Slashings(),
        PreviousEpochParticipation: prevEpochParticipation,
        CurrentEpochParticipation: currentEpochParticipation,
        JustificationBits: beaconState.JustificationBits(),
        PreviousJustifiedCheckpoint: beaconState.PreviousJustifiedCheckpoint(),
        CurrentJustifiedCheckpoint: beaconState.CurrentJustifiedCheckpoint(),
        FinalizedCheckpoint: beaconState.FinalizedCheckpoint(),
        InactivityScores: inactivityScores,
        CurrentSyncCommittee: currentSyncCommittee,
        NextSyncCommittee: nextSyncCommittee,
        LatestExecutionPayloadHeader: &enginev1.ExecutionPayloadHeaderDeneb{
            ParentHash: payloadHeader.ParentHash(),
            FeeRecipient: payloadHeader.FeeRecipient(),
            StateRoot: payloadHeader.StateRoot(),
            ReceiptsRoot: payloadHeader.ReceiptsRoot(),
            LogsBloom: payloadHeader.LogsBloom(),
            PrevRandao: payloadHeader.PrevRandao(),
            BlockNumber: payloadHeader.BlockNumber(),
            GasLimit: payloadHeader.GasLimit(),
            GasUsed: payloadHeader.GasUsed(),
            Timestamp: payloadHeader.Timestamp(),
            ExtraData: payloadHeader.ExtraData(),
            BaseFeePerGas: payloadHeader.BaseFeePerGas(),
            BlockHash: payloadHeader.BlockHash(),
            TransactionsRoot: txRoot,
            WithdrawalsRoot: wdRoot,
            ExcessBlobGas: excessBlobGas,
            BlobGasUsed: blobGasUsed,
        },
        NextWithdrawalIndex: wi,
        NextWithdrawalValidatorIndex: vi,
        HistoricalSummaries: summaries,

        DepositRequestsStartIndex: params.BeaconConfig().UnsetDepositRequestsStartIndex,
        DepositBalanceToConsume: 0,
        EarliestConsolidationEpoch: helpers.ActivationExitEpoch(slots.ToEpoch(beaconState.Slot())),
        PendingDeposits: make([]*ethpb.PendingDeposit, 0),
        PendingPartialWithdrawals: make([]*ethpb.PendingPartialWithdrawal, 0),
        PendingConsolidations: make([]*ethpb.PendingConsolidation, 0),
    }

    // need to cast the beaconState to use in helper functions
    post, err := state_native.InitializeFromProtoUnsafeElectra(s)
    if err != nil {
        return nil, errors.Wrap(err, "failed to initialize post electra beaconState")
    }
    return post, nil
}

// UpgradeToElectra upgrades a generic state and returns an Electra-version state.
//
// nolint:dupword
@@ -126,55 +249,7 @@ import (
//
// return post
func UpgradeToElectra(beaconState state.BeaconState) (state.BeaconState, error) {
-    currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
-    if err != nil {
-        return nil, err
-    }
-    nextSyncCommittee, err := beaconState.NextSyncCommittee()
-    if err != nil {
-        return nil, err
-    }
-    prevEpochParticipation, err := beaconState.PreviousEpochParticipation()
-    if err != nil {
-        return nil, err
-    }
-    currentEpochParticipation, err := beaconState.CurrentEpochParticipation()
-    if err != nil {
-        return nil, err
-    }
-    inactivityScores, err := beaconState.InactivityScores()
-    if err != nil {
-        return nil, err
-    }
-    payloadHeader, err := beaconState.LatestExecutionPayloadHeader()
-    if err != nil {
-        return nil, err
-    }
-    txRoot, err := payloadHeader.TransactionsRoot()
-    if err != nil {
-        return nil, err
-    }
-    wdRoot, err := payloadHeader.WithdrawalsRoot()
-    if err != nil {
-        return nil, err
-    }
-    wi, err := beaconState.NextWithdrawalIndex()
-    if err != nil {
-        return nil, err
-    }
-    vi, err := beaconState.NextWithdrawalValidatorIndex()
-    if err != nil {
-        return nil, err
-    }
-    summaries, err := beaconState.HistoricalSummaries()
-    if err != nil {
-        return nil, err
-    }
-    excessBlobGas, err := payloadHeader.ExcessBlobGas()
-    if err != nil {
-        return nil, err
-    }
-    blobGasUsed, err := payloadHeader.BlobGasUsed()
+    s, err := ConvertToElectra(beaconState)
    if err != nil {
        return nil, err
    }
@@ -206,97 +281,38 @@ func UpgradeToElectra(beaconState state.BeaconState) (state.BeaconState, error)
    if err != nil {
        return nil, errors.Wrap(err, "failed to get total active balance")
    }

-    s := &ethpb.BeaconStateElectra{
-        GenesisTime: uint64(beaconState.GenesisTime().Unix()),
-        GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
-        Slot: beaconState.Slot(),
-        Fork: &ethpb.Fork{
-            PreviousVersion: beaconState.Fork().CurrentVersion,
-            CurrentVersion: params.BeaconConfig().ElectraForkVersion,
-            Epoch: time.CurrentEpoch(beaconState),
-        },
-        LatestBlockHeader: beaconState.LatestBlockHeader(),
-        BlockRoots: beaconState.BlockRoots(),
-        StateRoots: beaconState.StateRoots(),
-        HistoricalRoots: beaconState.HistoricalRoots(),
-        Eth1Data: beaconState.Eth1Data(),
-        Eth1DataVotes: beaconState.Eth1DataVotes(),
-        Eth1DepositIndex: beaconState.Eth1DepositIndex(),
-        Validators: beaconState.Validators(),
-        Balances: beaconState.Balances(),
-        RandaoMixes: beaconState.RandaoMixes(),
-        Slashings: beaconState.Slashings(),
-        PreviousEpochParticipation: prevEpochParticipation,
-        CurrentEpochParticipation: currentEpochParticipation,
-        JustificationBits: beaconState.JustificationBits(),
-        PreviousJustifiedCheckpoint: beaconState.PreviousJustifiedCheckpoint(),
-        CurrentJustifiedCheckpoint: beaconState.CurrentJustifiedCheckpoint(),
-        FinalizedCheckpoint: beaconState.FinalizedCheckpoint(),
-        InactivityScores: inactivityScores,
-        CurrentSyncCommittee: currentSyncCommittee,
-        NextSyncCommittee: nextSyncCommittee,
-        LatestExecutionPayloadHeader: &enginev1.ExecutionPayloadHeaderDeneb{
-            ParentHash: payloadHeader.ParentHash(),
-            FeeRecipient: payloadHeader.FeeRecipient(),
-            StateRoot: payloadHeader.StateRoot(),
-            ReceiptsRoot: payloadHeader.ReceiptsRoot(),
-            LogsBloom: payloadHeader.LogsBloom(),
-            PrevRandao: payloadHeader.PrevRandao(),
-            BlockNumber: payloadHeader.BlockNumber(),
-            GasLimit: payloadHeader.GasLimit(),
-            GasUsed: payloadHeader.GasUsed(),
-            Timestamp: payloadHeader.Timestamp(),
-            ExtraData: payloadHeader.ExtraData(),
-            BaseFeePerGas: payloadHeader.BaseFeePerGas(),
-            BlockHash: payloadHeader.BlockHash(),
-            TransactionsRoot: txRoot,
-            WithdrawalsRoot: wdRoot,
-            ExcessBlobGas: excessBlobGas,
-            BlobGasUsed: blobGasUsed,
-        },
-        NextWithdrawalIndex: wi,
-        NextWithdrawalValidatorIndex: vi,
-        HistoricalSummaries: summaries,
-
-        DepositRequestsStartIndex: params.BeaconConfig().UnsetDepositRequestsStartIndex,
-        DepositBalanceToConsume: 0,
-        ExitBalanceToConsume: helpers.ActivationExitChurnLimit(primitives.Gwei(tab)),
-        EarliestExitEpoch: earliestExitEpoch,
-        ConsolidationBalanceToConsume: helpers.ConsolidationChurnLimit(primitives.Gwei(tab)),
-        EarliestConsolidationEpoch: helpers.ActivationExitEpoch(slots.ToEpoch(beaconState.Slot())),
-        PendingDeposits: make([]*ethpb.PendingDeposit, 0),
-        PendingPartialWithdrawals: make([]*ethpb.PendingPartialWithdrawal, 0),
-        PendingConsolidations: make([]*ethpb.PendingConsolidation, 0),
+    if err := s.SetExitBalanceToConsume(helpers.ActivationExitChurnLimit(primitives.Gwei(tab))); err != nil {
+        return nil, errors.Wrap(err, "failed to set exit balance to consume")
+    }
+    if err := s.SetEarliestExitEpoch(earliestExitEpoch); err != nil {
+        return nil, errors.Wrap(err, "failed to set earliest exit epoch")
+    }
+    if err := s.SetConsolidationBalanceToConsume(helpers.ConsolidationChurnLimit(primitives.Gwei(tab))); err != nil {
+        return nil, errors.Wrap(err, "failed to set consolidation balance to consume")
+    }

    // Sorting preActivationIndices based on a custom criteria
+    vals := s.Validators()
    sort.Slice(preActivationIndices, func(i, j int) bool {
        // Comparing based on ActivationEligibilityEpoch and then by index if the epochs are the same
-        if s.Validators[preActivationIndices[i]].ActivationEligibilityEpoch == s.Validators[preActivationIndices[j]].ActivationEligibilityEpoch {
+        if vals[preActivationIndices[i]].ActivationEligibilityEpoch == vals[preActivationIndices[j]].ActivationEligibilityEpoch {
            return preActivationIndices[i] < preActivationIndices[j]
        }
-        return s.Validators[preActivationIndices[i]].ActivationEligibilityEpoch < s.Validators[preActivationIndices[j]].ActivationEligibilityEpoch
+        return vals[preActivationIndices[i]].ActivationEligibilityEpoch < vals[preActivationIndices[j]].ActivationEligibilityEpoch
    })

-    // need to cast the beaconState to use in helper functions
-    post, err := state_native.InitializeFromProtoUnsafeElectra(s)
-    if err != nil {
-        return nil, errors.Wrap(err, "failed to initialize post electra beaconState")
-    }

    for _, index := range preActivationIndices {
-        if err := QueueEntireBalanceAndResetValidator(post, index); err != nil {
+        if err := QueueEntireBalanceAndResetValidator(s, index); err != nil {
            return nil, errors.Wrap(err, "failed to queue entire balance and reset validator")
        }
    }

    // Ensure early adopters of compounding credentials go through the activation churn
    for _, index := range compoundWithdrawalIndices {
-        if err := QueueExcessActiveBalance(post, index); err != nil {
+        if err := QueueExcessActiveBalance(s, index); err != nil {
            return nil, errors.Wrap(err, "failed to queue excess active balance")
        }
    }

-    return post, nil
+    return s, nil
}
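The sort above orders pre-activation indices by activation eligibility epoch, breaking ties by validator index so the result is deterministic. A self-contained sketch of that comparator (illustrative names, not the Prysm types):

```go
package main

import (
	"fmt"
	"sort"
)

type validator struct {
	activationEligibilityEpoch uint64
}

func main() {
	vals := []validator{{5}, {2}, {5}, {1}}
	indices := []int{0, 1, 2, 3}
	sort.Slice(indices, func(i, j int) bool {
		a, b := vals[indices[i]], vals[indices[j]]
		if a.activationEligibilityEpoch == b.activationEligibilityEpoch {
			// Tie-break on the index itself for a stable, deterministic order.
			return indices[i] < indices[j]
		}
		return a.activationEligibilityEpoch < b.activationEligibilityEpoch
	})
	fmt.Println(indices) // [3 1 0 2]
}
```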

@@ -3,6 +3,7 @@ package precompute_test
import (
    "testing"

+    "github.com/OffchainLabs/go-bitfield"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/epoch/precompute"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
    "github.com/OffchainLabs/prysm/v6/config/params"
@@ -12,7 +13,6 @@ import (
    "github.com/OffchainLabs/prysm/v6/testing/assert"
    "github.com/OffchainLabs/prysm/v6/testing/require"
    "github.com/OffchainLabs/prysm/v6/testing/util"
-    "github.com/prysmaticlabs/go-bitfield"
)

func TestUpdateValidator_Works(t *testing.T) {

@@ -1,6 +1,7 @@
package precompute

import (
+    "github.com/OffchainLabs/go-bitfield"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/state"
@@ -8,7 +9,6 @@ import (
    ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
    "github.com/OffchainLabs/prysm/v6/time/slots"
    "github.com/pkg/errors"
-    "github.com/prysmaticlabs/go-bitfield"
)

var errNilState = errors.New("nil state")

@@ -3,6 +3,7 @@ package precompute_test
import (
    "testing"

+    "github.com/OffchainLabs/go-bitfield"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/epoch/precompute"
    state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
@@ -12,7 +13,6 @@ import (
    ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
    "github.com/OffchainLabs/prysm/v6/testing/assert"
    "github.com/OffchainLabs/prysm/v6/testing/require"
-    "github.com/prysmaticlabs/go-bitfield"
)

func TestProcessJustificationAndFinalizationPreCompute_ConsecutiveEpochs(t *testing.T) {

@@ -3,6 +3,7 @@ package precompute
import (
    "testing"

+    "github.com/OffchainLabs/go-bitfield"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/state"
@@ -16,7 +17,6 @@ import (
    "github.com/OffchainLabs/prysm/v6/testing/assert"
    "github.com/OffchainLabs/prysm/v6/testing/require"
    "github.com/pkg/errors"
-    "github.com/prysmaticlabs/go-bitfield"
)

func TestProcessRewardsAndPenaltiesPrecompute(t *testing.T) {

@@ -7,6 +7,7 @@ go_library(
    visibility = [
        "//beacon-chain:__subpackages__",
        "//cmd/prysmctl/testnet:__pkg__",
+        "//consensus-types/hdiff:__subpackages__",
        "//testing/spectest:__subpackages__",
        "//validator/client:__pkg__",
    ],

@@ -15,6 +15,7 @@ go_library(
    "//beacon-chain/state:go_default_library",
    "//beacon-chain/state/state-native:go_default_library",
    "//config/params:go_default_library",
    "//consensus-types/primitives:go_default_library",
    "//monitoring/tracing/trace:go_default_library",
    "//proto/engine/v1:go_default_library",
    "//proto/prysm/v1alpha1:go_default_library",

@@ -8,6 +8,7 @@ import (
    "github.com/OffchainLabs/prysm/v6/beacon-chain/state"
    state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
    "github.com/OffchainLabs/prysm/v6/config/params"
+    "github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
    enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
    ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
    "github.com/OffchainLabs/prysm/v6/time/slots"
@@ -17,6 +18,25 @@ import (
// UpgradeToFulu upgrades a generic state and returns a Fulu-version state.
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/fork.md#upgrading-the-state
func UpgradeToFulu(ctx context.Context, beaconState state.BeaconState) (state.BeaconState, error) {
    s, err := ConvertToFulu(beaconState)
    if err != nil {
        return nil, errors.Wrap(err, "could not convert to fulu")
    }
    proposerLookahead, err := helpers.InitializeProposerLookahead(ctx, beaconState, slots.ToEpoch(beaconState.Slot()))
    if err != nil {
        return nil, err
    }
    pl := make([]primitives.ValidatorIndex, len(proposerLookahead))
    for i, v := range proposerLookahead {
        pl[i] = primitives.ValidatorIndex(v)
    }
    if err := s.SetProposerLookahead(pl); err != nil {
        return nil, errors.Wrap(err, "failed to set proposer lookahead")
    }
    return s, nil
}
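UpgradeToFulu now follows a convert-then-set shape: build the Fulu state first, then populate the fork-specific proposer lookahead. The element-wise copy is needed because Go will not convert a []uint64 to a slice of a defined uint64 type directly. A minimal sketch (hypothetical names, not the Prysm types):

```go
package main

import "fmt"

type ValidatorIndex uint64

func main() {
	lookahead := []uint64{7, 11, 13} // e.g. output of a proposer-lookahead helper
	pl := make([]ValidatorIndex, len(lookahead))
	for i, v := range lookahead {
		pl[i] = ValidatorIndex(v) // element-wise conversion; []uint64 cannot be cast as a whole
	}
	fmt.Println(pl) // [7 11 13]
}
```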

func ConvertToFulu(beaconState state.BeaconState) (state.BeaconState, error) {
    currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
    if err != nil {
        return nil, err
@@ -105,11 +125,6 @@ func UpgradeToFulu(ctx context.Context, beaconState state.BeaconState) (state.Be
    if err != nil {
        return nil, err
    }
-    proposerLookahead, err := helpers.InitializeProposerLookahead(ctx, beaconState, slots.ToEpoch(beaconState.Slot()))
-    if err != nil {
-        return nil, err
-    }

    s := &ethpb.BeaconStateFulu{
        GenesisTime: uint64(beaconState.GenesisTime().Unix()),
        GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
@@ -171,14 +186,6 @@ func UpgradeToFulu(ctx context.Context, beaconState state.BeaconState) (state.Be
        PendingDeposits: pendingDeposits,
        PendingPartialWithdrawals: pendingPartialWithdrawals,
        PendingConsolidations: pendingConsolidations,
-        ProposerLookahead: proposerLookahead,
    }

-    // Need to cast the beaconState to use in helper functions
-    post, err := state_native.InitializeFromProtoUnsafeFulu(s)
-    if err != nil {
-        return nil, errors.Wrap(err, "failed to initialize post fulu beaconState")
-    }

-    return post, nil
+    return state_native.InitializeFromProtoUnsafeFulu(s)
}

@@ -7,6 +7,7 @@ import (
    "fmt"
    "sort"

+    "github.com/OffchainLabs/go-bitfield"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
    forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
@@ -23,7 +24,6 @@ import (
    "github.com/OffchainLabs/prysm/v6/runtime/version"
    "github.com/OffchainLabs/prysm/v6/time/slots"
    "github.com/pkg/errors"
-    "github.com/prysmaticlabs/go-bitfield"
)

var (

@@ -5,6 +5,7 @@ import (
    "strconv"
    "testing"

+    "github.com/OffchainLabs/go-bitfield"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
    state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
@@ -18,7 +19,6 @@ import (
    "github.com/OffchainLabs/prysm/v6/testing/require"
    "github.com/OffchainLabs/prysm/v6/testing/util"
    "github.com/OffchainLabs/prysm/v6/time/slots"
-    "github.com/prysmaticlabs/go-bitfield"
)

func TestComputeCommittee_WithoutCache(t *testing.T) {

@@ -401,7 +401,7 @@ func ComputeProposerIndex(bState state.ReadOnlyBeaconState, activeIndices []prim
        return 0, errors.New("empty active indices list")
    }
    hashFunc := hash.CustomSHA256Hasher()
-    beaconConfig := params.BeaconConfig()
+    cfg := params.BeaconConfig()
    seedBuffer := make([]byte, len(seed)+8)
    copy(seedBuffer, seed[:])

@@ -426,14 +426,14 @@ func ComputeProposerIndex(bState state.ReadOnlyBeaconState, activeIndices []prim
            offset := (i % 16) * 2
            randomValue := uint64(randomBytes[offset]) | uint64(randomBytes[offset+1])<<8

-            if effectiveBal*fieldparams.MaxRandomValueElectra >= beaconConfig.MaxEffectiveBalanceElectra*randomValue {
+            if effectiveBal*fieldparams.MaxRandomValueElectra >= cfg.MaxEffectiveBalanceElectra*randomValue {
                return candidateIndex, nil
            }
        } else {
            binary.LittleEndian.PutUint64(seedBuffer[len(seed):], i/32)
            randomByte := hashFunc(seedBuffer)[i%32]

-            if effectiveBal*fieldparams.MaxRandomByte >= beaconConfig.MaxEffectiveBalance*uint64(randomByte) {
+            if effectiveBal*fieldparams.MaxRandomByte >= cfg.MaxEffectiveBalance*uint64(randomByte) {
                return candidateIndex, nil
            }
        }

@@ -201,14 +201,3 @@ func ParseWeakSubjectivityInputString(wsCheckpointString string) (*v1alpha1.Chec
        Root: bRoot,
    }, nil
}

-// MinEpochsForBlockRequests computes the number of epochs of block history that we need to maintain,
-// relative to the current epoch, per the p2p specs. This is used to compute the slot where backfill is complete.
-// value defined:
-// https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#configuration
-// MIN_VALIDATOR_WITHDRAWABILITY_DELAY + CHURN_LIMIT_QUOTIENT // 2 (= 33024, ~5 months)
-// detailed rationale: https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#why-are-blocksbyrange-requests-only-required-to-be-served-for-the-latest-min_epochs_for_block_requests-epochs
-func MinEpochsForBlockRequests() primitives.Epoch {
-    return params.BeaconConfig().MinValidatorWithdrawabilityDelay +
-        primitives.Epoch(params.BeaconConfig().ChurnLimitQuotient/2)
-}
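The 33024 figure cited in the comment above follows directly from the mainnet constants, assuming MIN_VALIDATOR_WITHDRAWABILITY_DELAY = 256 epochs and CHURN_LIMIT_QUOTIENT = 65536; a worked sketch:

```go
package main

import "fmt"

func main() {
	const minValidatorWithdrawabilityDelay = 256 // epochs (mainnet)
	const churnLimitQuotient = 65536             // mainnet
	// 256 + 65536/2 = 33024 epochs, roughly five months of history.
	fmt.Println(minValidatorWithdrawabilityDelay + churnLimitQuotient/2) // 33024
}
```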

@@ -285,21 +285,3 @@ func genState(t *testing.T, valCount, avgBalance uint64) state.BeaconState {

    return beaconState
}

-func TestMinEpochsForBlockRequests(t *testing.T) {
-    helpers.ClearCache()
-
-    params.SetActiveTestCleanup(t, params.MainnetConfig())
-    var expected primitives.Epoch = 33024
-    // expected value of 33024 via spec commentary:
-    // https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#why-are-blocksbyrange-requests-only-required-to-be-served-for-the-latest-min_epochs_for_block_requests-epochs
-    // MIN_EPOCHS_FOR_BLOCK_REQUESTS is calculated using the arithmetic from compute_weak_subjectivity_period found in the weak subjectivity guide. Specifically to find this max epoch range, we use the worst case event of a very large validator size (>= MIN_PER_EPOCH_CHURN_LIMIT * CHURN_LIMIT_QUOTIENT).
-    //
-    // MIN_EPOCHS_FOR_BLOCK_REQUESTS = (
-    //     MIN_VALIDATOR_WITHDRAWABILITY_DELAY
-    //     + MAX_SAFETY_DECAY * CHURN_LIMIT_QUOTIENT // (2 * 100)
-    // )
-    //
-    // Where MAX_SAFETY_DECAY = 100 and thus MIN_EPOCHS_FOR_BLOCK_REQUESTS = 33024 (~5 months).
-    require.Equal(t, expected, helpers.MinEpochsForBlockRequests())
-}

@@ -89,14 +89,14 @@ func CustodyGroups(nodeId enode.ID, custodyGroupCount uint64) ([]uint64, error)
// ComputeColumnsForCustodyGroup computes the columns for a given custody group.
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/das-core.md#compute_columns_for_custody_group
func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
-    beaconConfig := params.BeaconConfig()
-    numberOfCustodyGroups := beaconConfig.NumberOfCustodyGroups
+    cfg := params.BeaconConfig()
+    numberOfCustodyGroups := cfg.NumberOfCustodyGroups

    if custodyGroup >= numberOfCustodyGroups {
        return nil, ErrCustodyGroupTooLarge
    }

-    numberOfColumns := beaconConfig.NumberOfColumns
+    numberOfColumns := cfg.NumberOfColumns

    columnsPerGroup := numberOfColumns / numberOfCustodyGroups

@@ -112,9 +112,9 @@ func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
// ComputeCustodyGroupForColumn computes the custody group for a given column.
// It is the reciprocal function of ComputeColumnsForCustodyGroup.
func ComputeCustodyGroupForColumn(columnIndex uint64) (uint64, error) {
-    beaconConfig := params.BeaconConfig()
-    numberOfColumns := beaconConfig.NumberOfColumns
-    numberOfCustodyGroups := beaconConfig.NumberOfCustodyGroups
+    cfg := params.BeaconConfig()
+    numberOfColumns := cfg.NumberOfColumns
+    numberOfCustodyGroups := cfg.NumberOfCustodyGroups

    if columnIndex >= numberOfColumns {
        return 0, ErrIndexTooLarge
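A sketch of the spec arithmetic these helpers implement, assuming the mainnet values NUMBER_OF_COLUMNS = 128 and NUMBER_OF_CUSTODY_GROUPS = 128 (the general mapping per the spec is column = numberOfCustodyGroups*i + custodyGroup):

```go
package main

import "fmt"

// columnsForGroup mirrors compute_columns_for_custody_group from the Fulu spec.
func columnsForGroup(group, numberOfColumns, numberOfCustodyGroups uint64) []uint64 {
	columnsPerGroup := numberOfColumns / numberOfCustodyGroups
	columns := make([]uint64, 0, columnsPerGroup)
	for i := uint64(0); i < columnsPerGroup; i++ {
		columns = append(columns, numberOfCustodyGroups*i+group)
	}
	return columns
}

func main() {
	// With 128 columns and 128 groups, each group custodies exactly one column.
	fmt.Println(columnsForGroup(3, 128, 128)) // [3]
}
```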

@@ -43,6 +43,13 @@ func VerifyDataColumnSidecar(sidecar blocks.RODataColumn) error {
        return ErrNoKzgCommitments
    }

+    // A sidecar with more commitments than the max blob count for this block is invalid.
+    slot := sidecar.Slot()
+    maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
+    if len(sidecar.KzgCommitments) > maxBlobsPerBlock {
+        return ErrTooManyCommitments
+    }

    // The column length must be equal to the number of commitments/proofs.
    if len(sidecar.Column) != len(sidecar.KzgCommitments) || len(sidecar.Column) != len(sidecar.KzgProofs) {
        return ErrMismatchLength
@@ -72,10 +79,30 @@ func VerifyDataColumnsSidecarKZGProofs(sidecars []blocks.RODataColumn) error {

    for _, sidecar := range sidecars {
        for i := range sidecar.Column {
-            commitments = append(commitments, kzg.Bytes48(sidecar.KzgCommitments[i]))
+            var (
+                commitment kzg.Bytes48
+                cell kzg.Cell
+                proof kzg.Bytes48
+            )

+            commitmentBytes := sidecar.KzgCommitments[i]
+            cellBytes := sidecar.Column[i]
+            proofBytes := sidecar.KzgProofs[i]

+            if len(commitmentBytes) != len(commitment) ||
+                len(cellBytes) != len(cell) ||
+                len(proofBytes) != len(proof) {
+                return ErrMismatchLength
+            }

+            copy(commitment[:], commitmentBytes)
+            copy(cell[:], cellBytes)
+            copy(proof[:], proofBytes)

+            commitments = append(commitments, commitment)
            indices = append(indices, sidecar.Index)
-            cells = append(cells, kzg.Cell(sidecar.Column[i]))
-            proofs = append(proofs, kzg.Bytes48(sidecar.KzgProofs[i]))
+            cells = append(cells, cell)
+            proofs = append(proofs, proof)
        }
    }
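The refactor above replaces direct slice-to-array conversions, which panic in Go when the slice is shorter than the array, with an explicit length check followed by copy. A self-contained sketch of the pattern:

```go
package main

import (
	"errors"
	"fmt"
)

type Bytes48 [48]byte

// toBytes48 validates the length of a variable-size []byte before copying it
// into a fixed-size array, instead of converting directly (which would panic
// on short input).
func toBytes48(b []byte) (Bytes48, error) {
	var out Bytes48
	if len(b) != len(out) {
		return out, errors.New("mismatched length")
	}
	copy(out[:], b)
	return out, nil
}

func main() {
	_, err := toBytes48(make([]byte, 47))
	fmt.Println(err) // mismatched length
}
```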


@@ -18,38 +18,46 @@ import (
)

func TestVerifyDataColumnSidecar(t *testing.T) {
-    t.Run("index too large", func(t *testing.T) {
-        roSidecar := createTestSidecar(t, 1_000_000, nil, nil, nil)
-        err := peerdas.VerifyDataColumnSidecar(roSidecar)
-        require.ErrorIs(t, err, peerdas.ErrIndexTooLarge)
-    })
+    testCases := []struct {
+        name string
+        index uint64
+        blobCount int
+        commitmentCount int
+        proofCount int
+        maxBlobsPerBlock uint64
+        expectedError error
+    }{
+        {name: "index too large", index: 1_000_000, expectedError: peerdas.ErrIndexTooLarge},
+        {name: "no commitments", expectedError: peerdas.ErrNoKzgCommitments},
+        {name: "too many commitments", blobCount: 10, commitmentCount: 10, proofCount: 10, maxBlobsPerBlock: 2, expectedError: peerdas.ErrTooManyCommitments},
+        {name: "commitments size mismatch", commitmentCount: 1, maxBlobsPerBlock: 1, expectedError: peerdas.ErrMismatchLength},
+        {name: "proofs size mismatch", blobCount: 1, commitmentCount: 1, maxBlobsPerBlock: 1, expectedError: peerdas.ErrMismatchLength},
+        {name: "nominal", blobCount: 1, commitmentCount: 1, proofCount: 1, maxBlobsPerBlock: 1, expectedError: nil},
+    }

-    t.Run("no commitments", func(t *testing.T) {
-        roSidecar := createTestSidecar(t, 0, nil, nil, nil)
-        err := peerdas.VerifyDataColumnSidecar(roSidecar)
-        require.ErrorIs(t, err, peerdas.ErrNoKzgCommitments)
-    })
+    for _, tc := range testCases {
+        t.Run(tc.name, func(t *testing.T) {
+            params.SetupTestConfigCleanup(t)
+            cfg := params.BeaconConfig()
+            cfg.FuluForkEpoch = 0
+            cfg.BlobSchedule = []params.BlobScheduleEntry{{Epoch: 0, MaxBlobsPerBlock: tc.maxBlobsPerBlock}}
+            params.OverrideBeaconConfig(cfg)

-    t.Run("KZG commitments size mismatch", func(t *testing.T) {
-        kzgCommitments := make([][]byte, 1)
-        roSidecar := createTestSidecar(t, 0, nil, kzgCommitments, nil)
-        err := peerdas.VerifyDataColumnSidecar(roSidecar)
-        require.ErrorIs(t, err, peerdas.ErrMismatchLength)
-    })
+            column := make([][]byte, tc.blobCount)
+            kzgCommitments := make([][]byte, tc.commitmentCount)
+            kzgProof := make([][]byte, tc.proofCount)

-    t.Run("KZG proofs size mismatch", func(t *testing.T) {
-        column, kzgCommitments := make([][]byte, 1), make([][]byte, 1)
-        roSidecar := createTestSidecar(t, 0, column, kzgCommitments, nil)
-        err := peerdas.VerifyDataColumnSidecar(roSidecar)
-        require.ErrorIs(t, err, peerdas.ErrMismatchLength)
-    })
+            roSidecar := createTestSidecar(t, tc.index, column, kzgCommitments, kzgProof)
+            err := peerdas.VerifyDataColumnSidecar(roSidecar)

-    t.Run("nominal", func(t *testing.T) {
-        column, kzgCommitments, kzgProofs := make([][]byte, 1), make([][]byte, 1), make([][]byte, 1)
-        roSidecar := createTestSidecar(t, 0, column, kzgCommitments, kzgProofs)
-        err := peerdas.VerifyDataColumnSidecar(roSidecar)
-        require.NoError(t, err)
-    })
+            if tc.expectedError != nil {
+                require.ErrorIs(t, err, tc.expectedError)
+                return
+            }

+            require.NoError(t, err)
+        })
+    }
}

func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
@@ -60,6 +68,14 @@ func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
    err := kzg.Start()
    require.NoError(t, err)

+    t.Run("size mismatch", func(t *testing.T) {
+        sidecars := generateRandomSidecars(t, seed, blobCount)
+        sidecars[0].Column[0] = sidecars[0].Column[0][:len(sidecars[0].Column[0])-1] // Remove one byte to create size mismatch
+
+        err := peerdas.VerifyDataColumnsSidecarKZGProofs(sidecars)
+        require.ErrorIs(t, err, peerdas.ErrMismatchLength)
+    })

    t.Run("invalid proof", func(t *testing.T) {
        sidecars := generateRandomSidecars(t, seed, blobCount)
        sidecars[0].Column[0][0]++ // It is OK to overflow

@@ -257,7 +257,7 @@ func ComputeCellsAndProofsFromStructured(blobsAndProofs []*pb.BlobAndProofV2) ([
        return nil, errors.Wrap(err, "compute cells")
    }

-    kzgProofs := make([]kzg.Proof, 0, numberOfColumns*kzg.BytesPerProof)
+    kzgProofs := make([]kzg.Proof, 0, numberOfColumns)
    for _, kzgProofBytes := range blobAndProof.KzgProofs {
        if len(kzgProofBytes) != kzg.BytesPerProof {
            return nil, errors.New("wrong KZG proof size - should never happen")

@@ -441,6 +441,7 @@ func TestComputeCellsAndProofsFromStructured(t *testing.T) {
    for i := range blobCount {
        require.Equal(t, len(expectedCellsAndProofs[i].Cells), len(actualCellsAndProofs[i].Cells))
        require.Equal(t, len(expectedCellsAndProofs[i].Proofs), len(actualCellsAndProofs[i].Proofs))
+        require.Equal(t, len(expectedCellsAndProofs[i].Proofs), cap(actualCellsAndProofs[i].Proofs))

        // Compare cells
        for j, expectedCell := range expectedCellsAndProofs[i].Cells {

@@ -84,10 +84,10 @@ func ValidatorsCustodyRequirement(state beaconState.ReadOnlyBeaconState, validat
        totalNodeBalance += validator.EffectiveBalance()
    }

-    beaconConfig := params.BeaconConfig()
-    numberOfCustodyGroups := beaconConfig.NumberOfCustodyGroups
-    validatorCustodyRequirement := beaconConfig.ValidatorCustodyRequirement
-    balancePerAdditionalCustodyGroup := beaconConfig.BalancePerAdditionalCustodyGroup
+    cfg := params.BeaconConfig()
+    numberOfCustodyGroups := cfg.NumberOfCustodyGroups
+    validatorCustodyRequirement := cfg.ValidatorCustodyRequirement
+    balancePerAdditionalCustodyGroup := cfg.BalancePerAdditionalCustodyGroup

    count := totalNodeBalance / balancePerAdditionalCustodyGroup
    return min(max(count, validatorCustodyRequirement), numberOfCustodyGroups), nil
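Illustrative arithmetic for that final clamp (the numbers below are assumptions for the example, not the real constants): the requirement scales linearly with the attached balance and is bounded below by the validator floor and above by the total group count:

```go
package main

import "fmt"

// custodyRequirement mirrors the clamp above: scale with balance, then clamp.
func custodyRequirement(totalBalance, balancePerGroup, floor, numberOfGroups uint64) uint64 {
	count := totalBalance / balancePerGroup
	return min(max(count, floor), numberOfGroups)
}

func main() {
	// 96 units of balance at 32 per additional group -> 3, clamped to [8, 128] -> 8.
	fmt.Println(custodyRequirement(96, 32, 8, 128)) // 8
}
```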

@@ -196,7 +196,7 @@ func TestAltairCompatible(t *testing.T) {
}

func TestCanUpgradeTo(t *testing.T) {
-    beaconConfig := params.BeaconConfig()
+    cfg := params.BeaconConfig()

    outerTestCases := []struct {
        name string
@@ -205,32 +205,32 @@ func TestCanUpgradeTo(t *testing.T) {
    }{
        {
            name: "Altair",
-            forkEpoch: &beaconConfig.AltairForkEpoch,
+            forkEpoch: &cfg.AltairForkEpoch,
            upgradeFunc: time.CanUpgradeToAltair,
        },
        {
            name: "Bellatrix",
-            forkEpoch: &beaconConfig.BellatrixForkEpoch,
+            forkEpoch: &cfg.BellatrixForkEpoch,
            upgradeFunc: time.CanUpgradeToBellatrix,
        },
        {
            name: "Capella",
-            forkEpoch: &beaconConfig.CapellaForkEpoch,
+            forkEpoch: &cfg.CapellaForkEpoch,
            upgradeFunc: time.CanUpgradeToCapella,
        },
        {
            name: "Deneb",
-            forkEpoch: &beaconConfig.DenebForkEpoch,
+            forkEpoch: &cfg.DenebForkEpoch,
            upgradeFunc: time.CanUpgradeToDeneb,
        },
        {
            name: "Electra",
-            forkEpoch: &beaconConfig.ElectraForkEpoch,
+            forkEpoch: &cfg.ElectraForkEpoch,
            upgradeFunc: time.CanUpgradeToElectra,
        },
        {
            name: "Fulu",
-            forkEpoch: &beaconConfig.FuluForkEpoch,
+            forkEpoch: &cfg.FuluForkEpoch,
            upgradeFunc: time.CanUpgradeToFulu,
        },
    }
@@ -238,7 +238,7 @@ func TestCanUpgradeTo(t *testing.T) {
    for _, otc := range outerTestCases {
        params.SetupTestConfigCleanup(t)
        *otc.forkEpoch = 5
-        params.OverrideBeaconConfig(beaconConfig)
+        params.OverrideBeaconConfig(cfg)

        innerTestCases := []struct {
            name string

@@ -4,6 +4,7 @@ import (
    "math"
    "testing"

+    "github.com/OffchainLabs/go-bitfield"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
@@ -20,7 +21,6 @@ import (
    "github.com/OffchainLabs/prysm/v6/testing/require"
    "github.com/OffchainLabs/prysm/v6/testing/util"
    "github.com/OffchainLabs/prysm/v6/time/slots"
-    "github.com/prysmaticlabs/go-bitfield"
)

func TestExecuteAltairStateTransitionNoVerify_FullProcess(t *testing.T) {

@@ -4,6 +4,7 @@ import (
    "math"
    "testing"

+    "github.com/OffchainLabs/go-bitfield"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
@@ -22,7 +23,6 @@ import (
    "github.com/OffchainLabs/prysm/v6/testing/require"
    "github.com/OffchainLabs/prysm/v6/testing/util"
    "github.com/OffchainLabs/prysm/v6/time/slots"
-    "github.com/prysmaticlabs/go-bitfield"
)

func TestExecuteBellatrixStateTransitionNoVerify_FullProcess(t *testing.T) {

@@ -4,6 +4,7 @@ import (
    "fmt"
    "testing"

+    "github.com/OffchainLabs/go-bitfield"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
    "github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
@@ -23,7 +24,6 @@ import (
    "github.com/OffchainLabs/prysm/v6/testing/assert"
    "github.com/OffchainLabs/prysm/v6/testing/require"
    "github.com/OffchainLabs/prysm/v6/testing/util"
-    "github.com/prysmaticlabs/go-bitfield"
)

func init() {

@@ -51,16 +51,20 @@ func Test_commitmentsToCheck(t *testing.T) {
        name: "commitments within da",
        block: func(t *testing.T) blocks.ROBlock {
            d := util.NewBeaconBlockFulu()
-            d.Block.Body.BlobKzgCommitments = commits[:maxBlobs]
            d.Block.Slot = fulu + 100
+            mb := params.GetNetworkScheduleEntry(slots.ToEpoch(d.Block.Slot)).MaxBlobsPerBlock
+            d.Block.Body.BlobKzgCommitments = commits[:mb]
            sb, err := blocks.NewSignedBeaconBlock(d)
            require.NoError(t, err)
            rb, err := blocks.NewROBlock(sb)
            require.NoError(t, err)
            return rb
        },
-        commits: commits[:maxBlobs],
-        slot: fulu + 100,
+        commits: func() [][]byte {
+            mb := params.GetNetworkScheduleEntry(slots.ToEpoch(fulu + 100)).MaxBlobsPerBlock
+            return commits[:mb]
+        }(),
+        slot: fulu + 100,
    },
    {
        name: "commitments outside da",

@@ -200,6 +200,7 @@ func (dcs *DataColumnStorage) WarmCache() {
        fileMetadata, err := extractFileMetadata(path)
        if err != nil {
            log.WithError(err).Error("Error encountered while extracting file metadata")
+            return nil
        }

        // Open the data column filesystem file.
@@ -988,8 +989,8 @@ func filePath(root [fieldparams.RootLength]byte, epoch primitives.Epoch) string
// extractFileMetadata extracts the metadata from a file path.
// If the path is not a leaf, it returns nil.
func extractFileMetadata(path string) (*fileMetadata, error) {
-    // Is this Windows friendly?
-    parts := strings.Split(path, "/")
+    // Use filepath.Separator to handle both Windows (\) and Unix (/) path separators
+    parts := strings.Split(path, string(filepath.Separator))
    if len(parts) != 3 {
        return nil, errors.Errorf("unexpected file %s", path)
    }
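The fix above matters because filepath.Walk reports paths using the host separator, so splitting on a hard-coded "/" silently fails on Windows. A minimal sketch of the portable split:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

func main() {
	// filepath.Join uses the host separator, just like paths produced by a walk.
	path := filepath.Join("12", "1234", "0xabc.sszs")
	parts := strings.Split(path, string(filepath.Separator))
	fmt.Println(len(parts), parts) // 3 [12 1234 0xabc.sszs]
}
```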
@@ -1032,5 +1033,5 @@ func extractFileMetadata(path string) (*fileMetadata, error) {

// period computes the period of a given epoch.
func period(epoch primitives.Epoch) uint64 {
-    return uint64(epoch / params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
+    return uint64(epoch / params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest)
}

@@ -35,8 +35,9 @@ func (s DataColumnStorageSummary) HasIndex(index uint64) bool {

// HasAtLeastOneIndex returns true if at least one of the DataColumnSidecars at the given indices is available in the filesystem.
func (s DataColumnStorageSummary) HasAtLeastOneIndex(indices []uint64) bool {
+    size := uint64(len(s.mask))
    for _, index := range indices {
-        if s.mask[index] {
+        if index < size && s.mask[index] {
            return true
        }
    }
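The added guard makes the lookup total: out-of-range indices are skipped instead of panicking. A self-contained sketch:

```go
package main

import "fmt"

// hasAtLeastOne reports whether any in-range index is set in the mask.
func hasAtLeastOne(mask []bool, indices []uint64) bool {
	size := uint64(len(mask))
	for _, index := range indices {
		if index < size && mask[index] {
			return true
		}
	}
	return false
}

func main() {
	mask := []bool{false, true}
	fmt.Println(hasAtLeastOne(mask, []uint64{3, 1, 999})) // true; 999 is safely skipped
}
```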

@@ -25,11 +25,11 @@ func TestHasIndex(t *testing.T) {
func TestHasAtLeastOneIndex(t *testing.T) {
    summary := NewDataColumnStorageSummary(0, [fieldparams.NumberOfColumns]bool{false, true})

-    hasAtLeastOneIndex := summary.HasAtLeastOneIndex([]uint64{3, 1, 2})
-    require.Equal(t, true, hasAtLeastOneIndex)
+    actual := summary.HasAtLeastOneIndex([]uint64{3, 1, fieldparams.NumberOfColumns, 2})
+    require.Equal(t, true, actual)

-    hasAtLeastOneIndex = summary.HasAtLeastOneIndex([]uint64{3, 4, 2})
-    require.Equal(t, false, hasAtLeastOneIndex)
+    actual = summary.HasAtLeastOneIndex([]uint64{3, 4, fieldparams.NumberOfColumns, 2})
+    require.Equal(t, false, actual)
}

func TestCount(t *testing.T) {

@@ -3,6 +3,7 @@ package filesystem
import (
    "encoding/binary"
    "os"
+    "path/filepath"
    "testing"

    fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
@@ -725,3 +726,37 @@ func TestPrune(t *testing.T) {
        require.Equal(t, true, compareSlices([]string{"0x0de28a18cae63cbc6f0b20dc1afb0b1df38da40824a5f09f92d485ade04de97f.sszs"}, dirs))
    })
}

+func TestExtractFileMetadata(t *testing.T) {
+    t.Run("Unix", func(t *testing.T) {
+        // Test with Unix-style path separators (/)
+        path := "12/1234/0x8bb2f09de48c102635622dc27e6de03ae2b22639df7c33edbc8222b2ec423746.sszs"
+        metadata, err := extractFileMetadata(path)
+        if filepath.Separator == '/' {
+            // On Unix systems, this should succeed
+            require.NoError(t, err)
+            require.Equal(t, uint64(12), metadata.period)
+            require.Equal(t, primitives.Epoch(1234), metadata.epoch)
+            return
+        }
+
+        // On Windows systems, this should fail because it uses the wrong separator
+        require.NotNil(t, err)
+    })
+
+    t.Run("Windows", func(t *testing.T) {
+        // Test with Windows-style path separators (\)
+        path := "12\\1234\\0x8bb2f09de48c102635622dc27e6de03ae2b22639df7c33edbc8222b2ec423746.sszs"
+        metadata, err := extractFileMetadata(path)
+        if filepath.Separator == '\\' {
+            // On Windows systems, this should succeed
+            require.NoError(t, err)
+            require.Equal(t, uint64(12), metadata.period)
+            require.Equal(t, primitives.Epoch(1234), metadata.epoch)
+            return
+        }
+
+        // On Unix systems, this should fail because it uses the wrong separator
+        require.NotNil(t, err)
+    })
+}

@@ -212,7 +212,8 @@ func filterNoop(_ string) bool {
    return true
}

-func isRootDir(p string) bool {
+// IsBlockRootDir returns true if the path segment looks like a block root directory.
+func IsBlockRootDir(p string) bool {
    dir := filepath.Base(p)
    return len(dir) == rootStringLen && strings.HasPrefix(dir, "0x")
}

@@ -188,7 +188,7 @@ func TestListDir(t *testing.T) {
    name: "root filter",
    dirPath: ".",
    expected: []string{childlessBlob.name, blobWithSsz.name, blobWithSszAndTmp.name},
-    filter: isRootDir,
+    filter: IsBlockRootDir,
},
{
    name: "ssz filter",

@@ -19,12 +19,14 @@ import (
const (
    // Full root in directory will be 66 chars, eg:
    // >>> len('0x0002fb4db510b8618b04dc82d023793739c26346a8b02eb73482e24b0fec0555') == 66
-    rootStringLen = 66
-    sszExt = "ssz"
-    partExt = "part"
-    periodicEpochBaseDir = "by-epoch"
+    rootStringLen = 66
+    sszExt = "ssz"
+    partExt = "part"
)

+// PeriodicEpochBaseDir is the name of the base directory for the by-epoch layout.
+const PeriodicEpochBaseDir = "by-epoch"

const (
    LayoutNameFlat = "flat"
    LayoutNameByEpoch = "by-epoch"
@@ -130,11 +132,11 @@ func migrateLayout(fs afero.Fs, from, to fsLayout, cache *blobStorageSummaryCach
    if iter.atEOF() {
        return errLayoutNotDetected
    }
-    log.WithField("fromLayout", from.name()).WithField("toLayout", to.name()).Info("Migrating blob filesystem layout. This one-time operation can take extra time (up to a few minutes for systems with extended blob storage and a cold disk cache).")
    lastMoved := ""
    parentDirs := make(map[string]bool) // this map should have < 65k keys by design
    moved := 0
    dc := newDirCleaner()
+    migrationLogged := false
    for ident, err := iter.next(); !errors.Is(err, io.EOF); ident, err = iter.next() {
        if err != nil {
            if errors.Is(err, errIdentFailure) {
@@ -146,6 +148,11 @@ func migrateLayout(fs afero.Fs, from, to fsLayout, cache *blobStorageSummaryCach
            }
            return errors.Wrapf(errMigrationFailure, "failed to iterate previous layout structure while migrating blobs, err=%s", err.Error())
        }
+        if !migrationLogged {
+            log.WithField("fromLayout", from.name()).WithField("toLayout", to.name()).
+                Info("Migrating blob filesystem layout. This one-time operation can take extra time (up to a few minutes for systems with extended blob storage and a cold disk cache).")
+            migrationLogged = true
+        }
        src := from.dir(ident)
        target := to.dir(ident)
        if src != lastMoved {

@@ -34,7 +34,7 @@ func (l *periodicEpochLayout) name() string {

func (l *periodicEpochLayout) blockParentDirs(ident blobIdent) []string {
    return []string{
-        periodicEpochBaseDir,
+        PeriodicEpochBaseDir,
        l.periodDir(ident.epoch),
        l.epochDir(ident.epoch),
    }
@@ -50,28 +50,28 @@ func (l *periodicEpochLayout) notify(ident blobIdent) error {

// If before == 0, it won't be used as a filter and all idents will be returned.
func (l *periodicEpochLayout) iterateIdents(before primitives.Epoch) (*identIterator, error) {
-    _, err := l.fs.Stat(periodicEpochBaseDir)
+    _, err := l.fs.Stat(PeriodicEpochBaseDir)
    if err != nil {
        if os.IsNotExist(err) {
            return &identIterator{eof: true}, nil // The directory is non-existent, which is fine; stop iteration.
        }
-        return nil, errors.Wrapf(err, "error reading path %s", periodicEpochBaseDir)
+        return nil, errors.Wrapf(err, "error reading path %s", PeriodicEpochBaseDir)
    }
    // iterate root, which should have directories named by "period"
-    entries, err := listDir(l.fs, periodicEpochBaseDir)
+    entries, err := listDir(l.fs, PeriodicEpochBaseDir)
    if err != nil {
-        return nil, errors.Wrapf(err, "failed to list %s", periodicEpochBaseDir)
+        return nil, errors.Wrapf(err, "failed to list %s", PeriodicEpochBaseDir)
    }

    return &identIterator{
        fs: l.fs,
-        path: periodicEpochBaseDir,
+        path: PeriodicEpochBaseDir,
        // Please see comments on the `layers` field in `identIterator` if the role of the layers is unclear.
        layers: []layoutLayer{
            {populateIdent: populateNoop, filter: isBeforePeriod(before)},
            {populateIdent: populateEpoch, filter: isBeforeEpoch(before)},
-            {populateIdent: populateRoot, filter: isRootDir}, // extract root from path
-            {populateIdent: populateIndex, filter: isSszFile}, // extract index from filename
+            {populateIdent: populateRoot, filter: IsBlockRootDir}, // extract root from path
+            {populateIdent: populateIndex, filter: isSszFile}, // extract index from filename
        },
        entries: entries,
    }, nil
@@ -98,7 +98,7 @@ func (l *periodicEpochLayout) epochDir(epoch primitives.Epoch) string {
}

func (l *periodicEpochLayout) periodDir(epoch primitives.Epoch) string {
-    return filepath.Join(periodicEpochBaseDir, fmt.Sprintf("%d", periodForEpoch(epoch)))
+    return filepath.Join(PeriodicEpochBaseDir, fmt.Sprintf("%d", periodForEpoch(epoch)))
}

func (l *periodicEpochLayout) sszPath(n blobIdent) string {

@@ -30,7 +30,7 @@ func (l *flatLayout) iterateIdents(before primitives.Epoch) (*identIterator, err
    if os.IsNotExist(err) {
        return &identIterator{eof: true}, nil // The directory is non-existent, which is fine; stop iteration.
    }
-    return nil, errors.Wrapf(err, "error reading path %s", periodicEpochBaseDir)
+    return nil, errors.Wrap(err, "error reading blob base dir")
}
entries, err := listDir(l.fs, ".")
if err != nil {
@@ -199,10 +199,10 @@ func (l *flatSlotReader) isSSZAndBefore(fname string) bool {
// the epoch can be determined.
func isFlatCachedAndBefore(cache *blobStorageSummaryCache, before primitives.Epoch) func(string) bool {
    if before == 0 {
-        return isRootDir
+        return IsBlockRootDir
    }
    return func(p string) bool {
-        if !isRootDir(p) {
+        if !IsBlockRootDir(p) {
            return false
        }
        root, err := rootFromPath(p)

@@ -126,7 +126,7 @@ func NewWarmedEphemeralDataColumnStorageUsingFs(t testing.TB, fs afero.Fs, opts

func NewEphemeralDataColumnStorageUsingFs(t testing.TB, fs afero.Fs, opts ...DataColumnStorageOption) *DataColumnStorage {
    opts = append(opts,
-        WithDataColumnRetentionEpochs(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest),
+        WithDataColumnRetentionEpochs(params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest),
        WithDataColumnFs(fs),
    )

@@ -28,6 +28,7 @@ type ReadOnlyDatabase interface {
    BlocksBySlot(ctx context.Context, slot primitives.Slot) ([]interfaces.ReadOnlySignedBeaconBlock, error)
    BlockRootsBySlot(ctx context.Context, slot primitives.Slot) (bool, [][32]byte, error)
    HasBlock(ctx context.Context, blockRoot [32]byte) bool
+    AvailableBlocks(ctx context.Context, blockRoots [][32]byte) map[[32]byte]bool
    GenesisBlock(ctx context.Context) (interfaces.ReadOnlySignedBeaconBlock, error)
    GenesisBlockRoot(ctx context.Context) ([32]byte, error)
    IsFinalizedBlock(ctx context.Context, blockRoot [32]byte) bool
@@ -129,6 +130,7 @@ type NoHeadAccessDatabase interface {
    // Custody operations.
    UpdateSubscribedToAllDataSubnets(ctx context.Context, subscribed bool) (bool, error)
    UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error)
+    UpdateEarliestAvailableSlot(ctx context.Context, earliestAvailableSlot primitives.Slot) error

    // P2P Metadata operations.
    SaveMetadataSeqNum(ctx context.Context, seqNum uint64) error

@@ -336,6 +336,42 @@ func (s *Store) HasBlock(ctx context.Context, blockRoot [32]byte) bool {
    return exists
}

// AvailableBlocks returns a set of roots indicating which blocks corresponding to `blockRoots` are available in the storage.
func (s *Store) AvailableBlocks(ctx context.Context, blockRoots [][32]byte) map[[32]byte]bool {
    _, span := trace.StartSpan(ctx, "BeaconDB.AvailableBlocks")
    defer span.End()

    count := len(blockRoots)
    availableRoots := make(map[[32]byte]bool, count)

    // First, check the cache for each block root.
    notInCacheRoots := make([][32]byte, 0, count)
    for _, root := range blockRoots {
        if v, ok := s.blockCache.Get(string(root[:])); v != nil && ok {
            availableRoots[root] = true
            continue
        }

        notInCacheRoots = append(notInCacheRoots, root)
    }

    // Next, check the database for the remaining block roots.
    if err := s.db.View(func(tx *bolt.Tx) error {
        bkt := tx.Bucket(blocksBucket)
        for _, root := range notInCacheRoots {
            if bkt.Get(root[:]) != nil {
                availableRoots[root] = true
            }
        }

        return nil
    }); err != nil {
        panic(err) // lint:nopanic -- View never returns an error.
    }

    return availableRoots
}
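A hypothetical usage sketch of this batch-availability pattern; the point is that absent map keys read as false, so callers can range over their own roots without separate presence checks:

```go
package main

import "fmt"

func main() {
	r0, r1 := [32]byte{1}, [32]byte{2}
	// Stand-in for store.AvailableBlocks(ctx, roots); only r0 is "stored".
	available := map[[32]byte]bool{r0: true}
	for _, r := range [][32]byte{r0, r1} {
		// Missing keys read as false: prints "1 true" then "2 false".
		fmt.Println(r[0], available[r])
	}
}
```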

// BlocksBySlot retrieves a list of beacon blocks and its respective roots by slot.
func (s *Store) BlocksBySlot(ctx context.Context, slot primitives.Slot) ([]interfaces.ReadOnlySignedBeaconBlock, error) {
    ctx, span := trace.StartSpan(ctx, "BeaconDB.BlocksBySlot")

@@ -656,6 +656,44 @@ func TestStore_BlocksCRUD_NoCache(t *testing.T) {
    }
}

func TestAvailableBlocks(t *testing.T) {
    ctx := t.Context()
    db := setupDB(t)

    b0, b1, b2 := util.NewBeaconBlock(), util.NewBeaconBlock(), util.NewBeaconBlock()
    b0.Block.Slot, b1.Block.Slot, b2.Block.Slot = 10, 20, 30

    sb0, err := blocks.NewSignedBeaconBlock(b0)
    require.NoError(t, err)
    r0, err := b0.Block.HashTreeRoot()
    require.NoError(t, err)

    // Save b0 but remove it from cache.
    err = db.SaveBlock(ctx, sb0)
    require.NoError(t, err)
    db.blockCache.Del(string(r0[:]))

    // b1 is not saved at all.
    r1, err := b1.Block.HashTreeRoot()
    require.NoError(t, err)

    // Save b2 in cache and DB.
    sb2, err := blocks.NewSignedBeaconBlock(b2)
    require.NoError(t, err)
    r2, err := b2.Block.HashTreeRoot()
    require.NoError(t, err)
    require.NoError(t, db.SaveBlock(ctx, sb2))

    expected := map[[32]byte]bool{r0: true, r2: true}
    actual := db.AvailableBlocks(ctx, [][32]byte{r0, r1, r2})

    require.Equal(t, len(expected), len(actual))
    for i := range expected {
        require.Equal(t, true, actual[i])
    }
}

func TestStore_Blocks_FiltersCorrectly(t *testing.T) {
    for _, tt := range blockTests {
        t.Run(tt.name, func(t *testing.T) {

@@ -132,6 +132,6 @@ func recoverStateSummary(ctx context.Context, tx *bolt.Tx, root []byte) error {
    if err != nil {
        return err
    }
-    summaryBucket := tx.Bucket(stateBucket)
+    summaryBucket := tx.Bucket(stateSummaryBucket)
    return summaryBucket.Put(root, summaryEnc)
}

@@ -137,3 +137,32 @@ func TestStore_FinalizedCheckpoint_StateMustExist(t *testing.T) {

    require.ErrorContains(t, errMissingStateForCheckpoint.Error(), db.SaveFinalizedCheckpoint(ctx, cp))
}

// Regression test: verify that saving a checkpoint triggers recovery which writes
// the state summary into the correct stateSummaryBucket so that HasStateSummary/StateSummary see it.
func TestRecoverStateSummary_WritesToStateSummaryBucket(t *testing.T) {
    db := setupDB(t)
    ctx := t.Context()

    // Create a block without saving a state or summary, so recovery is needed.
    blk := util.HydrateSignedBeaconBlock(&ethpb.SignedBeaconBlock{})
    root, err := blk.Block.HashTreeRoot()
    require.NoError(t, err)
    wsb, err := blocks.NewSignedBeaconBlock(blk)
    require.NoError(t, err)
    require.NoError(t, db.SaveBlock(ctx, wsb))

    // Precondition: summary not present yet.
    require.Equal(t, false, db.HasStateSummary(ctx, root))

    // Saving justified checkpoint should trigger recovery path calling recoverStateSummary.
    cp := &ethpb.Checkpoint{Epoch: 2, Root: root[:]}
    require.NoError(t, db.SaveJustifiedCheckpoint(ctx, cp))

    // Postcondition: summary is visible via the public summary APIs (which read stateSummaryBucket).
    require.Equal(t, true, db.HasStateSummary(ctx, root))
    summary, err := db.StateSummary(ctx, root)
    require.NoError(t, err)
    require.NotNil(t, summary)
    assert.DeepEqual(t, &ethpb.StateSummary{Slot: blk.Block.Slot, Root: root[:]}, summary)
}

@@ -2,16 +2,19 @@ package kv

import (
    "context"
+    "time"

+    "github.com/OffchainLabs/prysm/v6/config/params"
    "github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
    "github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
    "github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
+    "github.com/OffchainLabs/prysm/v6/time/slots"
    "github.com/pkg/errors"
    "github.com/sirupsen/logrus"
    bolt "go.etcd.io/bbolt"
)

-// UpdateCustodyInfo atomically updates the custody group count only it is greater than the stored one.
+// UpdateCustodyInfo atomically updates the custody group count only if it is greater than the stored one.
// In this case, it also updates the earliest available slot with the provided value.
// It returns the (potentially updated) custody group count and earliest available slot.
func (s *Store) UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot primitives.Slot, custodyGroupCount uint64) (primitives.Slot, uint64, error) {
@@ -70,6 +73,79 @@ func (s *Store) UpdateCustodyInfo(ctx context.Context, earliestAvailableSlot pri
    return storedEarliestAvailableSlot, storedGroupCount, nil
}

// UpdateEarliestAvailableSlot updates the earliest available slot.
func (s *Store) UpdateEarliestAvailableSlot(ctx context.Context, earliestAvailableSlot primitives.Slot) error {
    _, span := trace.StartSpan(ctx, "BeaconDB.UpdateEarliestAvailableSlot")
    defer span.End()

    storedEarliestAvailableSlot := primitives.Slot(0)
    if err := s.db.Update(func(tx *bolt.Tx) error {
        // Retrieve the custody bucket.
        bucket, err := tx.CreateBucketIfNotExists(custodyBucket)
        if err != nil {
            return errors.Wrap(err, "create custody bucket")
        }

        // Retrieve the stored earliest available slot.
        storedEarliestAvailableSlotBytes := bucket.Get(earliestAvailableSlotKey)
        if len(storedEarliestAvailableSlotBytes) != 0 {
            storedEarliestAvailableSlot = primitives.Slot(bytesutil.BytesToUint64BigEndian(storedEarliestAvailableSlotBytes))
        }

        // Allow decrease (for backfill scenarios)
        if earliestAvailableSlot <= storedEarliestAvailableSlot {
            storedEarliestAvailableSlot = earliestAvailableSlot
            bytes := bytesutil.Uint64ToBytesBigEndian(uint64(earliestAvailableSlot))
            if err := bucket.Put(earliestAvailableSlotKey, bytes); err != nil {
                return errors.Wrap(err, "put earliest available slot")
            }
            return nil
        }

        // Prevent increase within the MIN_EPOCHS_FOR_BLOCK_REQUESTS period
        // This ensures we don't voluntarily refuse to serve mandatory block data
        genesisTime := time.Unix(int64(params.BeaconConfig().MinGenesisTime+params.BeaconConfig().GenesisDelay), 0)
        currentSlot := slots.CurrentSlot(genesisTime)
        currentEpoch := slots.ToEpoch(currentSlot)
        minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)

        // Calculate the minimum required epoch (or 0 if we're early in the chain)
        minRequiredEpoch := primitives.Epoch(0)
        if currentEpoch > minEpochsForBlocks {
            minRequiredEpoch = currentEpoch - minEpochsForBlocks
        }

        // Convert to slot to ensure we compare at slot-level granularity
        minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
        if err != nil {
            return errors.Wrap(err, "calculate minimum required slot")
        }

        // Prevent any increase that would put earliest available slot beyond the minimum required slot
        if earliestAvailableSlot > minRequiredSlot {
            return errors.Errorf(
                "cannot increase earliest available slot to %d (epoch %d) as it exceeds minimum required slot %d (epoch %d)",
                earliestAvailableSlot, slots.ToEpoch(earliestAvailableSlot),
                minRequiredSlot, minRequiredEpoch,
            )
        }

        storedEarliestAvailableSlot = earliestAvailableSlot
        bytes := bytesutil.Uint64ToBytesBigEndian(uint64(earliestAvailableSlot))
        if err := bucket.Put(earliestAvailableSlotKey, bytes); err != nil {
            return errors.Wrap(err, "put earliest available slot")
        }

        return nil
    }); err != nil {
        return err
    }

    log.WithField("earliestAvailableSlot", storedEarliestAvailableSlot).Debug("Updated earliest available slot")

    return nil
}
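A worked sketch of the increase guard with assumed numbers: at epoch 40000 with MIN_EPOCHS_FOR_BLOCK_REQUESTS = 33024, the node must keep blocks from epoch 6976 onward, so raising the earliest available slot past the start of that epoch (slot 223232 at 32 slots per epoch) would be rejected:

```go
package main

import "fmt"

func main() {
	const slotsPerEpoch = 32
	currentEpoch := uint64(40000)
	minEpochsForBlocks := uint64(33024)

	// Saturating subtraction, mirroring the guard's early-chain handling.
	minRequiredEpoch := uint64(0)
	if currentEpoch > minEpochsForBlocks {
		minRequiredEpoch = currentEpoch - minEpochsForBlocks
	}
	fmt.Println(minRequiredEpoch, minRequiredEpoch*slotsPerEpoch) // 6976 223232
}
```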
|
||||
|
||||
// UpdateSubscribedToAllDataSubnets updates the "subscribed to all data subnets" status in the database
|
||||
// only if `subscribed` is `true`.
|
||||
// It returns the previous subscription status.
|
||||
|
||||
@@ -3,10 +3,13 @@ package kv

import (
	"context"
	"testing"
+	"time"

+	"github.com/OffchainLabs/prysm/v6/config/params"
	"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
	"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
	"github.com/OffchainLabs/prysm/v6/testing/require"
+	"github.com/OffchainLabs/prysm/v6/time/slots"
	bolt "go.etcd.io/bbolt"
)

@@ -132,6 +135,131 @@ func TestUpdateCustodyInfo(t *testing.T) {
	})
}

func TestUpdateEarliestAvailableSlot(t *testing.T) {
	ctx := t.Context()

	t.Run("allow decreasing earliest slot (backfill scenario)", func(t *testing.T) {
		const (
			initialSlot  = primitives.Slot(300)
			initialCount = uint64(10)
			earliestSlot = primitives.Slot(200) // Lower than initial (backfill discovered earlier blocks)
		)

		db := setupDB(t)

		// Initialize custody info
		_, _, err := db.UpdateCustodyInfo(ctx, initialSlot, initialCount)
		require.NoError(t, err)

		// Update with a lower slot (should update for backfill)
		err = db.UpdateEarliestAvailableSlot(ctx, earliestSlot)
		require.NoError(t, err)

		storedSlot, storedCount := getCustodyInfoFromDB(t, db)
		require.Equal(t, earliestSlot, storedSlot)
		require.Equal(t, initialCount, storedCount)
	})

	t.Run("allow increasing slot within MIN_EPOCHS_FOR_BLOCK_REQUESTS (pruning scenario)", func(t *testing.T) {
		db := setupDB(t)

		// Calculate the current slot and minimum required slot based on the actual current time
		genesisTime := time.Unix(int64(params.BeaconConfig().MinGenesisTime+params.BeaconConfig().GenesisDelay), 0)
		currentSlot := slots.CurrentSlot(genesisTime)
		currentEpoch := slots.ToEpoch(currentSlot)
		minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)

		var minRequiredEpoch primitives.Epoch
		if currentEpoch > minEpochsForBlocks {
			minRequiredEpoch = currentEpoch - minEpochsForBlocks
		} else {
			minRequiredEpoch = 0
		}

		minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
		require.NoError(t, err)

		// Initial setup: set the earliest slot well before minRequiredSlot
		const groupCount = uint64(5)
		initialSlot := primitives.Slot(1000)

		_, _, err = db.UpdateCustodyInfo(ctx, initialSlot, groupCount)
		require.NoError(t, err)

		// Try to increase to a slot that's still BEFORE minRequiredSlot (should succeed)
		validSlot := minRequiredSlot - 100

		err = db.UpdateEarliestAvailableSlot(ctx, validSlot)
		require.NoError(t, err)

		// Verify the database was updated
		storedSlot, storedCount := getCustodyInfoFromDB(t, db)
		require.Equal(t, validSlot, storedSlot)
		require.Equal(t, groupCount, storedCount)
	})

	t.Run("prevent increasing slot beyond MIN_EPOCHS_FOR_BLOCK_REQUESTS", func(t *testing.T) {
		db := setupDB(t)

		// Calculate the current slot and minimum required slot based on the actual current time
		genesisTime := time.Unix(int64(params.BeaconConfig().MinGenesisTime+params.BeaconConfig().GenesisDelay), 0)
		currentSlot := slots.CurrentSlot(genesisTime)
		currentEpoch := slots.ToEpoch(currentSlot)
		minEpochsForBlocks := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests)

		var minRequiredEpoch primitives.Epoch
		if currentEpoch > minEpochsForBlocks {
			minRequiredEpoch = currentEpoch - minEpochsForBlocks
		} else {
			minRequiredEpoch = 0
		}

		minRequiredSlot, err := slots.EpochStart(minRequiredEpoch)
		require.NoError(t, err)

		// Initial setup: set a valid earliest slot (well before minRequiredSlot)
		const initialCount = uint64(5)
		initialSlot := primitives.Slot(1000)

		_, _, err = db.UpdateCustodyInfo(ctx, initialSlot, initialCount)
		require.NoError(t, err)

		// Try to set the earliest slot beyond the minimum required slot
		invalidSlot := minRequiredSlot + 100

		// This should fail
		err = db.UpdateEarliestAvailableSlot(ctx, invalidSlot)
		require.ErrorContains(t, "cannot increase earliest available slot", err)
		require.ErrorContains(t, "exceeds minimum required slot", err)

		// Verify the database wasn't updated
		storedSlot, storedCount := getCustodyInfoFromDB(t, db)
		require.Equal(t, initialSlot, storedSlot)
		require.Equal(t, initialCount, storedCount)
	})

	t.Run("no change when slot equals current slot", func(t *testing.T) {
		const (
			initialSlot  = primitives.Slot(100)
			initialCount = uint64(5)
		)

		db := setupDB(t)

		// Initialize custody info
		_, _, err := db.UpdateCustodyInfo(ctx, initialSlot, initialCount)
		require.NoError(t, err)

		// Update with the same slot
		err = db.UpdateEarliestAvailableSlot(ctx, initialSlot)
		require.NoError(t, err)

		storedSlot, storedCount := getCustodyInfoFromDB(t, db)
		require.Equal(t, initialSlot, storedSlot)
		require.Equal(t, initialCount, storedCount)
	})
}

func TestUpdateSubscribedToAllDataSubnets(t *testing.T) {
	ctx := context.Background()
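Both the store code and these tests persist the slot as an 8-byte big-endian value (bytesutil.Uint64ToBytesBigEndian). For illustration only, the same encoding round-trips with nothing but the standard library:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	// 8-byte big-endian encoding, matching bytesutil.Uint64ToBytesBigEndian.
	var buf [8]byte
	binary.BigEndian.PutUint64(buf[:], 223232)
	fmt.Printf("% x\n", buf[:])                  // 00 00 00 00 00 03 68 00
	fmt.Println(binary.BigEndian.Uint64(buf[:])) // 223232
}
```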
@@ -8,7 +8,6 @@ go_library(
        "//beacon-chain:__subpackages__",
    ],
    deps = [
-       "//beacon-chain/core/helpers:go_default_library",
        "//beacon-chain/db:go_default_library",
        "//beacon-chain/db/iface:go_default_library",
        "//config/params:go_default_library",
@@ -29,6 +28,7 @@ go_test(
        "//consensus-types/blocks:go_default_library",
        "//consensus-types/primitives:go_default_library",
        "//proto/prysm/v1alpha1:go_default_library",
+       "//testing/assert:go_default_library",
        "//testing/require:go_default_library",
        "//testing/util:go_default_library",
        "//time/slots/testing:go_default_library",
@@ -4,7 +4,6 @@ import (
	"context"
	"time"

-	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/db/iface"
	"github.com/OffchainLabs/prysm/v6/config/params"
@@ -25,17 +24,24 @@ const (
	defaultNumBatchesToPrune = 15
)

+// custodyUpdater is a tiny interface that the p2p service implements; it is kept here to avoid
+// importing the p2p package and creating an import cycle.
+type custodyUpdater interface {
+	UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error
+}
+
type ServiceOption func(*Service)

// WithRetentionPeriod allows the user to specify a different data retention period than the spec default.
// The retention period is specified in epochs, and must be >= MIN_EPOCHS_FOR_BLOCK_REQUESTS.
func WithRetentionPeriod(retentionEpochs primitives.Epoch) ServiceOption {
	return func(s *Service) {
-		defaultRetentionEpochs := helpers.MinEpochsForBlockRequests() + 1
+		defaultRetentionEpochs := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests) + 1
		if retentionEpochs < defaultRetentionEpochs {
			log.WithField("userEpochs", retentionEpochs).
				WithField("minRequired", defaultRetentionEpochs).
-				Warn("Retention period too low, using minimum required value")
+				Warn("Retention period too low, ignoring and using minimum required value")
			retentionEpochs = defaultRetentionEpochs
		}

		s.ps = pruneStartSlotFunc(retentionEpochs)
@@ -58,17 +64,23 @@ type Service struct {
	slotTicker     slots.Ticker
	backfillWaiter func() error
	initSyncWaiter func() error
+	custody        custodyUpdater
}

-func New(ctx context.Context, db iface.Database, genesisTime time.Time, initSyncWaiter, backfillWaiter func() error, opts ...ServiceOption) (*Service, error) {
+func New(ctx context.Context, db iface.Database, genesisTime time.Time, initSyncWaiter, backfillWaiter func() error, custody custodyUpdater, opts ...ServiceOption) (*Service, error) {
+	if custody == nil {
+		return nil, errors.New("custody updater is required for pruner but was not provided")
+	}
+
	p := &Service{
		ctx:            ctx,
		db:             db,
-		ps:             pruneStartSlotFunc(helpers.MinEpochsForBlockRequests() + 1), // Default retention epochs is MIN_EPOCHS_FOR_BLOCK_REQUESTS + 1 from the current slot.
+		ps:             pruneStartSlotFunc(primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests) + 1), // Default retention epochs is MIN_EPOCHS_FOR_BLOCK_REQUESTS + 1 from the current slot.
		done:           make(chan struct{}),
		slotTicker:     slots.NewSlotTicker(slots.UnsafeStartTime(genesisTime, 0), params.BeaconConfig().SecondsPerSlot),
		initSyncWaiter: initSyncWaiter,
		backfillWaiter: backfillWaiter,
+		custody:        custody,
	}

	for _, o := range opts {
@@ -157,17 +169,45 @@ func (p *Service) prune(slot primitives.Slot) error {
		return errors.Wrap(err, "failed to prune batches")
	}

-	log.WithFields(logrus.Fields{
-		"prunedUpto":  pruneUpto,
-		"duration":    time.Since(tt),
-		"currentSlot": slot,
-		"batchSize":   defaultPrunableBatchSize,
-		"numBatches":  numBatches,
-	}).Debug("Successfully pruned chain data")
+	earliestAvailableSlot := pruneUpto + 1

	// Update pruning checkpoint.
	p.prunedUpto = pruneUpto

+	// Update the earliest available slot after pruning
+	if err := p.updateEarliestAvailableSlot(earliestAvailableSlot); err != nil {
+		return errors.Wrap(err, "update earliest available slot")
+	}
+
+	log.WithFields(logrus.Fields{
+		"prunedUpto":            pruneUpto,
+		"earliestAvailableSlot": earliestAvailableSlot,
+		"duration":              time.Since(tt),
+		"currentSlot":           slot,
+		"batchSize":             defaultPrunableBatchSize,
+		"numBatches":            numBatches,
+	}).Debug("Successfully pruned chain data")

	return nil
}

+// updateEarliestAvailableSlot updates the earliest available slot via the injected custody updater
+// and also persists it to the database.
+func (p *Service) updateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
+	if !params.FuluEnabled() {
+		return nil
+	}
+
+	// Update the p2p in-memory state
+	if err := p.custody.UpdateEarliestAvailableSlot(earliestAvailableSlot); err != nil {
+		return errors.Wrapf(err, "update earliest available slot after pruning to %d", earliestAvailableSlot)
+	}
+
+	// Persist to database to ensure it survives restarts
+	if err := p.db.UpdateEarliestAvailableSlot(p.ctx, earliestAvailableSlot); err != nil {
+		return errors.Wrapf(err, "update earliest available slot in database for slot %d", earliestAvailableSlot)
+	}
+
+	return nil
+}
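To see what the new constructor contract implies for callers: any value with the single UpdateEarliestAvailableSlot method satisfies the unexported custodyUpdater interface. The stub and call below are illustrative only; in the beacon node itself the p2p service is injected (see the b.fetchP2P() line in the node.go hunk further down).

```go
import "github.com/OffchainLabs/prysm/v6/consensus-types/primitives"

// stubCustody is a no-op implementation suitable for tests or sketches.
type stubCustody struct{}

func (stubCustody) UpdateEarliestAvailableSlot(_ primitives.Slot) error { return nil }

// Hypothetical call site, combining the new required argument with an option:
//
//	p, err := pruner.New(ctx, beaconDB, genesisTime, initSyncWaiter, backfillWaiter,
//		stubCustody{}, pruner.WithRetentionPeriod(40000))
```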
@@ -2,6 +2,7 @@ package pruner

import (
	"context"
+	"errors"
	"testing"
	"time"

@@ -15,6 +16,7 @@ import (

	dbtest "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
	"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
+	"github.com/OffchainLabs/prysm/v6/testing/assert"
	"github.com/OffchainLabs/prysm/v6/testing/require"
	logTest "github.com/sirupsen/logrus/hooks/test"
)
@@ -62,7 +64,9 @@ func TestPruner_PruningConditions(t *testing.T) {
			if !tt.backfillCompleted {
				backfillWaiter = waiter
			}
-			p, err := New(ctx, beaconDB, time.Now(), initSyncWaiter, backfillWaiter, WithSlotTicker(slotTicker))
+
+			mockCustody := &mockCustodyUpdater{}
+			p, err := New(ctx, beaconDB, time.Now(), initSyncWaiter, backfillWaiter, mockCustody, WithSlotTicker(slotTicker))
			require.NoError(t, err)

			go p.Start()
@@ -97,12 +101,14 @@ func TestPruner_PruneSuccess(t *testing.T) {
	retentionEpochs := primitives.Epoch(2)
	slotTicker := &slottest.MockTicker{Channel: make(chan primitives.Slot)}

+	mockCustody := &mockCustodyUpdater{}
	p, err := New(
		ctx,
		beaconDB,
		time.Now(),
		nil,
		nil,
+		mockCustody,
		WithSlotTicker(slotTicker),
	)
	require.NoError(t, err)
@@ -133,3 +139,242 @@ func TestPruner_PruneSuccess(t *testing.T) {

	require.NoError(t, p.Stop())
}

// Mock custody updater for testing
type mockCustodyUpdater struct {
	custodyGroupCount     uint64
	earliestAvailableSlot primitives.Slot
	updateCallCount       int
}

func (m *mockCustodyUpdater) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
	m.updateCallCount++
	m.earliestAvailableSlot = earliestAvailableSlot
	return nil
}

func TestPruner_UpdatesEarliestAvailableSlot(t *testing.T) {
	params.SetupTestConfigCleanup(t)
	config := params.BeaconConfig()
	config.FuluForkEpoch = 0 // Enable Fulu from epoch 0
	params.OverrideBeaconConfig(config)

	logrus.SetLevel(logrus.DebugLevel)
	hook := logTest.NewGlobal()
	ctx, cancel := context.WithCancel(t.Context())
	defer cancel()

	beaconDB := dbtest.SetupDB(t)
	retentionEpochs := primitives.Epoch(2)

	slotTicker := &slottest.MockTicker{Channel: make(chan primitives.Slot)}

	// Create mock custody updater
	mockCustody := &mockCustodyUpdater{
		custodyGroupCount:     4,
		earliestAvailableSlot: 0,
	}

	// Create pruner with mock custody updater
	p, err := New(
		ctx,
		beaconDB,
		time.Now(),
		nil,
		nil,
		mockCustody,
		WithSlotTicker(slotTicker),
	)
	require.NoError(t, err)

	p.ps = func(current primitives.Slot) primitives.Slot {
		return current - primitives.Slot(retentionEpochs)*params.BeaconConfig().SlotsPerEpoch
	}

	// Save some blocks to be pruned
	for i := primitives.Slot(1); i <= 32; i++ {
		blk := util.NewBeaconBlock()
		blk.Block.Slot = i
		wsb, err := blocks.NewSignedBeaconBlock(blk)
		require.NoError(t, err)
		require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
	}

	// Start pruner and trigger at slot 80 (middle of 3rd epoch)
	go p.Start()
	currentSlot := primitives.Slot(80)
	slotTicker.Channel <- currentSlot

	// Wait for pruning to complete
	time.Sleep(100 * time.Millisecond)

	// Check that UpdateEarliestAvailableSlot was called
	assert.Equal(t, true, mockCustody.updateCallCount > 0, "UpdateEarliestAvailableSlot should have been called")

	// The earliest available slot should be pruneUpto + 1:
	// pruneUpto = currentSlot - retentionEpochs*slotsPerEpoch = 80 - 2*32 = 16,
	// so the earliest available slot should be 16 + 1 = 17.
	expectedEarliestSlot := primitives.Slot(17)
	require.Equal(t, expectedEarliestSlot, mockCustody.earliestAvailableSlot, "Earliest available slot should be updated correctly")
	require.Equal(t, uint64(4), mockCustody.custodyGroupCount, "Custody group count should be preserved")

	// Verify that no error was logged
	for _, entry := range hook.AllEntries() {
		if entry.Level == logrus.ErrorLevel {
			t.Errorf("Unexpected error log: %s", entry.Message)
		}
	}

	require.NoError(t, p.Stop())
}

// Mock custody updater that returns an error for UpdateEarliestAvailableSlot
type mockCustodyUpdaterWithUpdateError struct {
	updateCallCount int
}

func (m *mockCustodyUpdaterWithUpdateError) UpdateEarliestAvailableSlot(earliestAvailableSlot primitives.Slot) error {
	m.updateCallCount++
	return errors.New("failed to update earliest available slot")
}

func TestWithRetentionPeriod_EnforcesMinimum(t *testing.T) {
	// Use minimal config for testing
	params.SetupTestConfigCleanup(t)
	config := params.MinimalSpecConfig()
	params.OverrideBeaconConfig(config)

	ctx := t.Context()
	beaconDB := dbtest.SetupDB(t)

	// Get the minimum required epochs (272 + 1 = 273 for minimal)
	minRequiredEpochs := primitives.Epoch(params.BeaconConfig().MinEpochsForBlockRequests + 1)

	// Use a slot that's guaranteed to be after the minimum retention period
	currentSlot := primitives.Slot(minRequiredEpochs+100) * params.BeaconConfig().SlotsPerEpoch

	tests := []struct {
		name                string
		userRetentionEpochs primitives.Epoch
		expectedPruneSlot   primitives.Slot
		description         string
	}{
		{
			name:                "User value below minimum - should use minimum",
			userRetentionEpochs: 2, // Way below minimum
			expectedPruneSlot:   currentSlot - primitives.Slot(minRequiredEpochs)*params.BeaconConfig().SlotsPerEpoch,
			description:         "Should use minimum when user value is too low",
		},
		{
			name:                "User value at minimum",
			userRetentionEpochs: minRequiredEpochs,
			expectedPruneSlot:   currentSlot - primitives.Slot(minRequiredEpochs)*params.BeaconConfig().SlotsPerEpoch,
			description:         "Should use user value when at minimum",
		},
		{
			name:                "User value above minimum",
			userRetentionEpochs: minRequiredEpochs + 10,
			expectedPruneSlot:   currentSlot - primitives.Slot(minRequiredEpochs+10)*params.BeaconConfig().SlotsPerEpoch,
			description:         "Should use user value when above minimum",
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			hook := logTest.NewGlobal()
			logrus.SetLevel(logrus.WarnLevel)

			mockCustody := &mockCustodyUpdater{}
			// Create pruner with retention period
			p, err := New(
				ctx,
				beaconDB,
				time.Now(),
				nil,
				nil,
				mockCustody,
				WithRetentionPeriod(tt.userRetentionEpochs),
			)
			require.NoError(t, err)

			// Test the pruning calculation
			pruneUptoSlot := p.ps(currentSlot)

			// Verify the pruning slot
			assert.Equal(t, tt.expectedPruneSlot, pruneUptoSlot, tt.description)

			// Check if a warning was logged when the value was too low
			if tt.userRetentionEpochs < minRequiredEpochs {
				assert.LogsContain(t, hook, "Retention period too low, ignoring and using minimum required value")
			}
		})
	}
}

func TestPruner_UpdateEarliestSlotError(t *testing.T) {
	params.SetupTestConfigCleanup(t)
	config := params.BeaconConfig()
	config.FuluForkEpoch = 0 // Enable Fulu from epoch 0
	params.OverrideBeaconConfig(config)

	logrus.SetLevel(logrus.DebugLevel)
	hook := logTest.NewGlobal()
	ctx, cancel := context.WithCancel(t.Context())
	defer cancel()

	beaconDB := dbtest.SetupDB(t)
	retentionEpochs := primitives.Epoch(2)

	slotTicker := &slottest.MockTicker{Channel: make(chan primitives.Slot)}

	// Create a mock custody updater that returns an error for UpdateEarliestAvailableSlot
	mockCustody := &mockCustodyUpdaterWithUpdateError{}

	// Create pruner with mock custody updater
	p, err := New(
		ctx,
		beaconDB,
		time.Now(),
		nil,
		nil,
		mockCustody,
		WithSlotTicker(slotTicker),
	)
	require.NoError(t, err)

	p.ps = func(current primitives.Slot) primitives.Slot {
		return current - primitives.Slot(retentionEpochs)*params.BeaconConfig().SlotsPerEpoch
	}

	// Save some blocks to be pruned
	for i := primitives.Slot(1); i <= 32; i++ {
		blk := util.NewBeaconBlock()
		blk.Block.Slot = i
		wsb, err := blocks.NewSignedBeaconBlock(blk)
		require.NoError(t, err)
		require.NoError(t, beaconDB.SaveBlock(ctx, wsb))
	}

	// Start pruner and trigger at slot 80
	go p.Start()
	currentSlot := primitives.Slot(80)
	slotTicker.Channel <- currentSlot

	// Wait for pruning to complete
	time.Sleep(100 * time.Millisecond)

	// Should have called UpdateEarliestAvailableSlot
	assert.Equal(t, 1, mockCustody.updateCallCount, "UpdateEarliestAvailableSlot should be called")

	// Check that the error was logged by the prune function
	found := false
	for _, entry := range hook.AllEntries() {
		if entry.Level == logrus.ErrorLevel && entry.Message == "Failed to prune database" {
			found = true
			break
		}
	}
	assert.Equal(t, true, found, "Should log error when UpdateEarliestAvailableSlot fails")

	require.NoError(t, p.Stop())
}
@@ -27,6 +27,7 @@ go_library(
        "//beacon-chain/forkchoice:go_default_library",
        "//beacon-chain/forkchoice/types:go_default_library",
        "//beacon-chain/state:go_default_library",
+       "//config/features:go_default_library",
        "//config/fieldparams:go_default_library",
        "//config/params:go_default_library",
        "//consensus-types/blocks:go_default_library",

@@ -8,6 +8,7 @@ import (
	"github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice"
	forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
+	"github.com/OffchainLabs/prysm/v6/config/features"
	fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
	"github.com/OffchainLabs/prysm/v6/config/params"
	consensus_blocks "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
@@ -239,9 +240,12 @@ func (f *ForkChoice) IsViableForCheckpoint(cp *forkchoicetypes.Checkpoint) (bool
	if node.slot == epochStart {
		return true, nil
	}
-	nodeEpoch := slots.ToEpoch(node.slot)
-	if nodeEpoch >= cp.Epoch {
-		return false, nil
-	}
+	if !features.Get().DisableLastEpochTargets {
+		// Allow any node from the checkpoint epoch - 1 to be viable.
+		nodeEpoch := slots.ToEpoch(node.slot)
+		if nodeEpoch+1 == cp.Epoch {
+			return true, nil
+		}
+	}
	for _, child := range node.children {
		if child.slot > epochStart {
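A minimal, self-contained sketch of the relaxed viability rule introduced above; the function name is invented for illustration. A target from the epoch immediately before the checkpoint epoch is now treated as viable unless the DisableLastEpochTargets feature flag is set.

```go
package main

import "fmt"

// viableLastEpochTarget mirrors the new branch: a node whose epoch is exactly
// one less than the checkpoint epoch is accepted as a viable target.
func viableLastEpochTarget(nodeEpoch, checkpointEpoch uint64) bool {
	return nodeEpoch+1 == checkpointEpoch
}

func main() {
	fmt.Println(viableLastEpochTarget(0, 1)) // true: epoch-0 block for an epoch-1 checkpoint
	fmt.Println(viableLastEpochTarget(0, 2)) // false: two epochs behind is still rejected
}
```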
@@ -813,9 +813,10 @@ func TestForkChoiceIsViableForCheckpoint(t *testing.T) {
	require.NoError(t, err)
	require.Equal(t, true, viable)

+	// Last epoch blocks are still viable
	viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: blk.Root(), Epoch: 1})
	require.NoError(t, err)
-	require.Equal(t, false, viable)
+	require.Equal(t, true, viable)

	// No Children but impossible checkpoint
	viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: blk2.Root()})
@@ -835,9 +836,10 @@ func TestForkChoiceIsViableForCheckpoint(t *testing.T) {
	require.NoError(t, err)
	require.Equal(t, false, viable)

+	// Last epoch blocks are still viable
	viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: blk2.Root(), Epoch: 1})
	require.NoError(t, err)
-	require.Equal(t, false, viable)
+	require.Equal(t, true, viable)

	st, blk4, err := prepareForkchoiceState(ctx, params.BeaconConfig().SlotsPerEpoch, [32]byte{'d'}, blk2.Root(), [32]byte{'D'}, 0, 0)
	require.NoError(t, err)
@@ -848,9 +850,10 @@ func TestForkChoiceIsViableForCheckpoint(t *testing.T) {
	require.NoError(t, err)
	require.Equal(t, false, viable)

+	// Last epoch blocks are still viable
	viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: blk2.Root(), Epoch: 1})
	require.NoError(t, err)
-	require.Equal(t, false, viable)
+	require.Equal(t, true, viable)

	// Boundary block
	viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: blk4.Root(), Epoch: 1})
@@ -6,6 +6,7 @@ go_library(
        "cache.go",
        "helpers.go",
        "lightclient.go",
+       "log.go",
        "store.go",
    ],
    importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/light-client",

beacon-chain/light-client/log.go (new file)

@@ -0,0 +1,5 @@
+package light_client
+
+import "github.com/sirupsen/logrus"
+
+var log = logrus.WithField("prefix", "light-client")

@@ -14,7 +14,6 @@ import (
	"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
	"github.com/OffchainLabs/prysm/v6/time/slots"
	"github.com/pkg/errors"
-	log "github.com/sirupsen/logrus"
)

var ErrLightClientBootstrapNotFound = errors.New("light client bootstrap not found")
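The new log.go follows the usual package-logger pattern in this codebase: a package-scoped *logrus.Entry that stamps every message with a stable prefix field, so call sites no longer import logrus directly. A self-contained illustration of the same pattern:

```go
package main

import "github.com/sirupsen/logrus"

// Mirrors log.go above: every message logged through this entry carries
// the prefix field.
var log = logrus.WithField("prefix", "light-client")

func main() {
	log.WithField("slot", 123).Info("Updated light client store")
	// level=info msg="Updated light client store" prefix=light-client slot=123
}
```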
@@ -4,12 +4,12 @@ import (
	"bytes"
	"testing"

+	"github.com/OffchainLabs/go-bitfield"
	"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
	"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
	ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
	"github.com/OffchainLabs/prysm/v6/testing/require"
	"github.com/OffchainLabs/prysm/v6/testing/util"
-	"github.com/prysmaticlabs/go-bitfield"
	"github.com/sirupsen/logrus"
	logTest "github.com/sirupsen/logrus/hooks/test"
)

@@ -3,11 +3,11 @@ package monitor

import (
	"testing"

+	"github.com/OffchainLabs/go-bitfield"
	"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
	ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
	"github.com/OffchainLabs/prysm/v6/testing/require"
	"github.com/OffchainLabs/prysm/v6/testing/util"
-	"github.com/prysmaticlabs/go-bitfield"
	logTest "github.com/sirupsen/logrus/hooks/test"
)
@@ -58,7 +58,6 @@ go_library(
        "//config/params:go_default_library",
        "//consensus-types/primitives:go_default_library",
        "//container/slice:go_default_library",
-       "//encoding/bytesutil:go_default_library",
        "//genesis:go_default_library",
        "//monitoring/prometheus:go_default_library",
        "//monitoring/tracing:go_default_library",

@@ -2,11 +2,13 @@ package node

import (
	"context"
+	"os"

	"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/db/kv"
	"github.com/OffchainLabs/prysm/v6/beacon-chain/db/slasherkv"
	"github.com/OffchainLabs/prysm/v6/cmd"
+	"github.com/OffchainLabs/prysm/v6/genesis"
	"github.com/pkg/errors"
	"github.com/urfave/cli/v2"
)
@@ -36,6 +38,22 @@ func (c *dbClearer) clearKV(ctx context.Context, db *kv.Store) (*kv.Store, error
	return kv.NewKVStore(ctx, db.DatabasePath())
}

func (c *dbClearer) clearGenesis(dir string) error {
	if !c.shouldProceed() {
		return nil
	}

	gfile, err := genesis.FindStateFile(dir)
	if err != nil {
		// No genesis state file was found; there is nothing to clear.
		return nil
	}

	if err := os.Remove(gfile.FilePath()); err != nil {
		return errors.Wrapf(err, "genesis state file not removed: %s", gfile.FilePath())
	}
	return nil
}

func (c *dbClearer) clearBlobs(bs *filesystem.BlobStorage) error {
	if !c.shouldProceed() {
		return nil
@@ -12,6 +12,7 @@ import (
	"os"
	"os/signal"
	"path/filepath"
+	"slices"
	"strconv"
	"strings"
	"sync"
@@ -60,7 +61,6 @@ import (
	"github.com/OffchainLabs/prysm/v6/config/params"
	"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
	"github.com/OffchainLabs/prysm/v6/container/slice"
-	"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
	"github.com/OffchainLabs/prysm/v6/genesis"
	"github.com/OffchainLabs/prysm/v6/monitoring/prometheus"
	"github.com/OffchainLabs/prysm/v6/runtime"
@@ -178,6 +178,9 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
	}
	beacon.db = kvdb

+	if err := dbClearer.clearGenesis(dataDir); err != nil {
+		return nil, errors.Wrap(err, "could not clear genesis state")
+	}
	providers := append(beacon.GenesisProviders, kv.NewLegacyGenesisProvider(kvdb))
	if err := genesis.Initialize(ctx, dataDir, providers...); err != nil {
		return nil, errors.Wrap(err, "could not initialize genesis state")
@@ -598,22 +601,7 @@ func (b *BeaconNode) startStateGen(ctx context.Context, bfs coverage.AvailableBl
		return err
	}

-	r := bytesutil.ToBytes32(cp.Root)
-	// Consider edge case where finalized root are zeros instead of genesis root hash.
-	if r == params.BeaconConfig().ZeroHash {
-		genesisBlock, err := b.db.GenesisBlock(ctx)
-		if err != nil {
-			return err
-		}
-		if genesisBlock != nil && !genesisBlock.IsNil() {
-			r, err = genesisBlock.Block().HashTreeRoot()
-			if err != nil {
-				return err
-			}
-		}
-	}
-
-	b.finalizedStateAtStartUp, err = sg.StateByRoot(ctx, r)
+	b.finalizedStateAtStartUp, err = sg.StateByRoot(ctx, [32]byte(cp.Root))
	if err != nil {
		return err
	}
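The replacement line above uses Go's direct slice-to-array conversion (available since Go 1.20). Unlike bytesutil.ToBytes32, which copies into a fresh array and zero-pads short input, the conversion panics if the slice length is not exactly 32, so it doubles as a length assertion. A standalone illustration:

```go
package main

import "fmt"

func main() {
	root := make([]byte, 32)
	root[0] = 0xAA

	arr := [32]byte(root) // panics at runtime if len(root) != 32
	fmt.Println(arr[0])   // 170
}
```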
@@ -1121,6 +1109,7 @@ func (b *BeaconNode) registerPrunerService(cliCtx *cli.Context) error {
		genesis,
		initSyncWaiter(cliCtx.Context, b.initialSyncComplete),
		backfillService.WaitForCompletion,
+		b.fetchP2P(),
		opts...,
	)
	if err != nil {
@@ -1147,10 +1136,8 @@ func (b *BeaconNode) registerLightClientStore() {

func hasNetworkFlag(cliCtx *cli.Context) bool {
	for _, flag := range features.NetworkFlags {
-		for _, name := range flag.Names() {
-			if cliCtx.IsSet(name) {
-				return true
-			}
-		}
+		if slices.ContainsFunc(flag.Names(), cliCtx.IsSet) {
+			return true
+		}
	}
	return false
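The refactor above swaps a hand-written inner loop for slices.ContainsFunc from the standard library (Go 1.21+), which reports whether any element satisfies a predicate. A self-contained equivalent of the check, with an invented stand-in for the CLI context:

```go
package main

import (
	"fmt"
	"slices"
)

func main() {
	// Stand-in for cliCtx.IsSet: reports whether a flag name was set.
	set := map[string]bool{"sepolia": true}
	isSet := func(name string) bool { return set[name] }

	fmt.Println(slices.ContainsFunc([]string{"goerli", "sepolia"}, isSet)) // true
	fmt.Println(slices.ContainsFunc([]string{"mainnet"}, isSet))           // false
}
```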
@@ -4,6 +4,7 @@ import (
	"sort"
	"testing"

+	"github.com/OffchainLabs/go-bitfield"
	"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
	"github.com/OffchainLabs/prysm/v6/crypto/bls"
	ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
@@ -13,7 +14,6 @@ import (
	"github.com/OffchainLabs/prysm/v6/testing/util"
	c "github.com/patrickmn/go-cache"
	"github.com/pkg/errors"
-	"github.com/prysmaticlabs/go-bitfield"
)

func TestKV_Aggregated_AggregateUnaggregatedAttestations(t *testing.T) {

@@ -4,11 +4,11 @@ import (
	"sort"
	"testing"

+	"github.com/OffchainLabs/go-bitfield"
	ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
	"github.com/OffchainLabs/prysm/v6/testing/assert"
	"github.com/OffchainLabs/prysm/v6/testing/require"
	"github.com/OffchainLabs/prysm/v6/testing/util"
-	"github.com/prysmaticlabs/go-bitfield"
)

func TestKV_BlockAttestation_CanSaveRetrieve(t *testing.T) {

@@ -4,11 +4,11 @@ import (
	"sort"
	"testing"

+	"github.com/OffchainLabs/go-bitfield"
	ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
	"github.com/OffchainLabs/prysm/v6/testing/assert"
	"github.com/OffchainLabs/prysm/v6/testing/require"
	"github.com/OffchainLabs/prysm/v6/testing/util"
-	"github.com/prysmaticlabs/go-bitfield"
)

func TestKV_Forkchoice_CanSaveRetrieve(t *testing.T) {

@@ -1,11 +1,11 @@
package kv

import (
+	"github.com/OffchainLabs/go-bitfield"
	ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
	"github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1/attestation"
	"github.com/patrickmn/go-cache"
	"github.com/pkg/errors"
-	"github.com/prysmaticlabs/go-bitfield"
)

func (c *AttCaches) insertSeenBit(att ethpb.Att) error {

@@ -3,11 +3,11 @@ package kv

import (
	"testing"

+	"github.com/OffchainLabs/go-bitfield"
	ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
	"github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1/attestation"
	"github.com/OffchainLabs/prysm/v6/testing/require"
	"github.com/OffchainLabs/prysm/v6/testing/util"
-	"github.com/prysmaticlabs/go-bitfield"
)

func TestAttCaches_hasSeenBit(t *testing.T) {

@@ -5,6 +5,7 @@ import (
	"sort"
	"testing"

+	"github.com/OffchainLabs/go-bitfield"
	fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
	"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
	ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
@@ -13,7 +14,6 @@ import (
	"github.com/OffchainLabs/prysm/v6/testing/require"
	"github.com/OffchainLabs/prysm/v6/testing/util"
	c "github.com/patrickmn/go-cache"
-	"github.com/prysmaticlabs/go-bitfield"
)

func TestKV_Unaggregated_UnaggregatedAttestations(t *testing.T) {

@@ -5,6 +5,7 @@ import (
	"context"
	"time"

+	"github.com/OffchainLabs/go-bitfield"
	"github.com/OffchainLabs/prysm/v6/config/features"
	"github.com/OffchainLabs/prysm/v6/config/params"
	"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
@@ -13,7 +14,6 @@ import (
	attaggregation "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1/attestation/aggregation/attestations"
	"github.com/OffchainLabs/prysm/v6/time/slots"
	"github.com/pkg/errors"
-	"github.com/prysmaticlabs/go-bitfield"
)

// This prepares fork choice attestations by running batchForkChoiceAtts
Some files were not shown because too many files have changed in this diff.