Compare commits

...

33 Commits

Author SHA1 Message Date
james-prysm
34857ce004 moving s.registeredNetworkEntry registration 2025-09-29 13:49:02 -05:00
james-prysm
c3027862b2 fixing naming 2025-09-29 11:58:31 -05:00
james-prysm
7e2b730a04 Merge branch 'develop' into fix-subscriber-race 2025-09-29 11:54:55 -05:00
james-prysm
0cded0c1d6 gaz 2025-09-29 11:54:37 -05:00
james-prysm
6139d58fa5 fixing config string parsing regression (#15773)
* adding string parsing and test

* gaz
2025-09-29 16:15:01 +00:00
james-prysm
f08ae428b2 fixing test 2025-09-29 08:32:16 -05:00
james-prysm
94ef8d780d first attempt 2025-09-28 23:39:20 -05:00
Sahil Sojitra
0ea5e2cf9d refactor to use reflect.TypeFor (#15627)
* refactor to use reflect.TypeFor

* added changelog fragment file

* update changelog

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-09-26 17:26:44 +00:00
Jun Song
29fe707143 SSZ-QL: Support nested List type (#15725)
* Add nested 2d list cases

* Add elementSize member for listInfo to track each element's byte size

* Fix misleading variable in RunStructTest

* Changelog

* Regen pb file

* Update encoding/ssz/query/list.go

Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>

* Rename elementSize into plural

* Update changelog/syjn99_ssz-ql-nested-list.md

---------

Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>
2025-09-26 14:23:22 +00:00
kasey
d68196822b additional log information around invalid payloads (#15754)
* additional log information around invalid payloads

* fix test with reversed require.ErrorIs args

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-09-26 02:33:57 +00:00
terence
924fe4de98 Restructure golangci-lint config: explicit opt-in (#15744)
* update golangci-lint configuration to enable basic linters only

* Add back formatter

* feedback

* Add nolint
2025-09-25 17:01:22 +00:00
Preston Van Loon
fe9dd255c7 slasherkv: Set a 1 minute timeout on PruneAttestationOnEpoch operations (#15746)
* slasherkv: Set a 1 minute timeout on PruneAttestationOnEpoch operations to prevent very large bolt transactions.

* Fix CI

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-09-25 15:46:28 +00:00
Potuz
83d75bcb78 Update quick-go (#15749) 2025-09-25 11:02:10 +00:00
james-prysm
ba5f7361ad flipping if statement check to fix metric (#15743) 2025-09-24 20:43:39 +00:00
Manu NALEPA
8fa956036a Update go.mod to v1.25.1. (#15740)
* Updated go.mod to v1.25.1.

Run `golangci-lint migrate`
GolangCI lint: Add `noinlineerr`.
Run `golangci-lint run --config=.golangci.yml`.
`golangci`: Add `--new`.

* `go.yml`: Added `fetch-depth: 0` to have something working with merge queue according to

https://github.com/golangci/golangci-lint-action/issues/956

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-09-24 19:57:17 +00:00
james-prysm
58ce1c25f5 fixing error handling of unfound block (#15742) 2025-09-24 19:03:55 +00:00
Potuz
98532a2df3 update spectests to 1.6.0-beta.0 (#15741)
* update spectests to 1.6.0-beta.0

* start fixing ethspecify
2025-09-24 18:13:46 +00:00
Ragnar
08be6fde92 fix: replace fmt.Printf with proper test error handling in web3signer… (#15723)
* fix: replace fmt.Printf with proper test error handling in web3signer test

* Update keymanager_test.go

* Create DeVikingMark_fix-web3signer-test-error-handling.md
2025-09-23 19:22:11 +00:00
Manu NALEPA
4585cdc932 createLocalNode: Wait before retrying to retrieve the custody group count if not present. (#15735) 2025-09-23 15:46:39 +00:00
Preston Van Loon
aa47435c91 Update eth clients pinned deps (#15733)
* Update eth-clients/hoodi

* Update eth-clients/holesky

* Update eth-clients/sepolia

* Changelog fragment

* Remove deprecated and unused eth2-networks dependency.
2025-09-23 14:26:46 +00:00
Manu NALEPA
80eba4e6dd Fix no custody info available at start (#15732)
* Change wrap message to avoid the could not...: could not...: could not... effect.

Reference: https://github.com/uber-go/guide/blob/master/style.md#error-wrapping.

* Log: remove period at the end of the latest sentence.

* Dirty quick fix to ensure that the custody group count is set at P2P service start.

A real fix would involve a channel implementing a proper synchronization scheme.

* Add changelog.
2025-09-23 14:09:29 +00:00
Manu NALEPA
606294e17f Improve logging of data column sidecars (#15728)
* Implement `SortedSliceFromMap`, `PrettySlice`, and `SortedPrettySliceFromMap`.

* Use `SortedPrettySliceFromMap` and `SortedSliceFromMap` when needed.
2025-09-23 02:10:23 +00:00
terence
977e923692 Set Fulu fork epochs for Holesky, Hoodi, and Sepolia testnets (#15721)
* Set Fulu fork epochs for Holesky, Hoodi, and Sepolia testnets

* Update commits

* Add fulu.yaml to the presets file path loader

* Add the following placeholder fields:
- CELLS_PER_EXT_BLOB
- FIELD_ELEMENTS_PER_CELL
- FIELD_ELEMENTS_PER_EXT_BLOB
- KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH

---------

Co-authored-by: Preston Van Loon <preston@pvl.dev>
2025-09-23 02:10:20 +00:00
sashass1315
013b6b1d60 fix: race in PriorityQueue.Pop by checking emptiness under write lock (#15726)
* fix: race in PriorityQueue.Pop by checking emptiness under write lock

* Create sashass1315_fix-priority-queue-pop-lock-race

* Update changelog/sashass1315_fix-priority-queue-pop-lock-race

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* Move changelog/sashass1315_fix-priority-queue-pop-lock-race to changelog/sashass1315_fix-priority-queue-pop-lock-race.md

* Fix bullet in changelog

---------

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2025-09-22 20:12:56 +00:00
Preston Van Loon
66ff6f70b8 Update to Go 1.25 (#15641)
* Update rules_go to v0.54.1

* Fix NotEmpty assertion for new protobuf private fields.

* Update rules_go to v0.55.0

* Update protobuf to 28.3

* Update rules_go to v0.57.0

* Update go to v1.25.0

* Changelog fragment

* Update go to v1.25.1

* Update generated protobuf and ssz files
2025-09-22 19:22:32 +00:00
Preston Van Loon
5b1a9fb077 blst: Change blst from http_archive to go_repository (#15709) 2025-09-22 19:00:57 +00:00
JihoonSong
35bc9b1a0f Support Fulu genesis block (#15652)
* Support Fulu genesis block

* `NewGenesisBlockForState`: Factorize Electra and Fulu, which share the same `BeaconBlock`.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-09-22 19:00:26 +00:00
Muzry
02dca85251 fix getStateRandao not returning historic RANDAO mix values (#15653)
* fix getStateRandao not returning historic RANDAO mix values

* update: the lower bound statement

* Update changelog/muzry_fix_state_randao.md

* Update changelog/muzry_fix_state_randao.md

---------

Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>
2025-09-21 23:35:03 +00:00
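Context for the RANDAO fix above: the beacon state keeps RANDAO mixes in a ring buffer of `EPOCHS_PER_HISTORICAL_VECTOR` entries, so a historic epoch's mix is only readable while it has not been overwritten. A hedged sketch of the index computation and the lower-bound check the commit adjusts (the exact bound in Prysm may differ by one; constant is mainnet's):

```go
package main

import (
	"errors"
	"fmt"
)

const epochsPerHistoricalVector = 65536 // mainnet EPOCHS_PER_HISTORICAL_VECTOR

// randaoMixIndex maps an epoch to its slot in the circular mixes vector,
// rejecting epochs outside the retained window.
func randaoMixIndex(current, requested uint64) (uint64, error) {
	if requested > current {
		return 0, errors.New("epoch is in the future")
	}
	if current >= epochsPerHistoricalVector && requested <= current-epochsPerHistoricalVector {
		return 0, errors.New("epoch too old: mix already overwritten")
	}
	return requested % epochsPerHistoricalVector, nil
}

func main() {
	idx, err := randaoMixIndex(100000, 70000)
	fmt.Println(idx, err) // 4464 <nil>
	_, err = randaoMixIndex(100000, 30000)
	fmt.Println(err != nil) // true
}
```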
Manu NALEPA
39b2163702 Fix orphaned blocks (#15720)
* `Broadcasted data column sidecar` log: Add `blobCount`.

* `broadcastAndReceiveDataColumns`: Broadcast and receive data columns in parallel.

* `ProposeBeaconBlock`: First broadcast/receive block, and then sidecars.

* `broadcastReceiveBlock`: Add log.

* Add changelog

* Fix deadlock-option 1.

* Fix deadlock-option 2.

* Take notifier out of the critical section

* only compute common info once, for all sidecars

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-09-19 15:45:44 +00:00
james-prysm
d67ee62efa Debug data columns API endpoint (#15701)
* initial

* fixing from self review

* changelog

* fixing endpoints and adding test

* removed unneeded test

* self review

* fixing mock columns

* fixing tests

* gofmt

* fixing endpoint

* gaz

* gofmt

* fixing tests

* gofmt

* gaz

* radek comments

* gaz

* fixing formatting

* deduplicating and fixing an old bug, will break into separate PR

* better way for version

* optimizing post merge and fixing tests

* Update beacon-chain/rpc/eth/debug/handlers.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/eth/debug/handlers.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/lookup/blocker.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/lookup/blocker.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/lookup/blocker.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* adding some of radek's feedback

* reverting and gaz

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-09-19 15:20:43 +00:00
Jun Song
9f9401e615 SSZ-QL: Handle Bitlist and Bitvector (#15704)
* Add bitvector field for FixedTestContainer

* Handle Bitvector type using isBitfield flag

* Add Bitvector case for Stringify

* Add bitlist field for VariableTestContainer

* Add bitlistInfo

* Changelog

* Add bitvectorInfo

* Remove analyzeBit* functions and just inline them

* Fix misleading comments

* Add comments for bitlistInfo's Size

* Apply reviews from Radek
2025-09-19 13:58:56 +00:00
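For the bitfield handling above, the size arithmetic is the interesting part: an SSZ `Bitvector[N]` always serializes to ceil(N/8) bytes, while a `Bitlist` of length N carries one extra trailing delimiter bit encoding its length, so it serializes to ceil((N+1)/8) bytes. A small sketch of those two formulas (helper names are mine, not the SSZ-QL code's):

```go
package main

import "fmt"

// bitvectorBytes: a Bitvector[N] has a fixed size of ceil(N/8) bytes.
func bitvectorBytes(n uint64) uint64 { return (n + 7) / 8 }

// bitlistBytes: a Bitlist of length n adds one delimiter bit marking its
// length, so it occupies ceil((n+1)/8) bytes.
func bitlistBytes(n uint64) uint64 { return (n + 8) / 8 }

func main() {
	fmt.Println(bitvectorBytes(512)) // 64: e.g. a 512-bit vector packs into 64 bytes
	fmt.Println(bitlistBytes(8))     // 2: 8 data bits + delimiter spill into a second byte
	fmt.Println(bitlistBytes(7))     // 1: 7 data bits + delimiter still fit in one byte
}
```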
Muzry
6b89d839f6 Fix prysmctl panic when baseFee is not set in genesis.json (#15687)
* Fix prysmctl panic when baseFee is not set in genesis.json

* update add unit test

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-09-18 21:48:28 +00:00
james-prysm
d826a3c7fe adding justified support to blocker and optimizing logic (#15715)
* adding justified support to blocker and optimizing logic

* gaz

* fixing tests

* Update beacon-chain/rpc/lookup/blocker.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/lookup/blocker.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* better error handling from radek feedback

* unneeded change

* more radek review

* addressing more feedback

* fixing small comment

* fixing errors

* removing fallback

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-09-18 17:01:27 +00:00
153 changed files with 6628 additions and 8891 deletions


@@ -1,4 +1,4 @@
FROM golang:1.24-alpine
FROM golang:1.25.1-alpine
COPY entrypoint.sh /entrypoint.sh


@@ -16,7 +16,7 @@ jobs:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
with:
go-version: '1.23.5'
go-version: '1.25.1'
- id: list
uses: shogo82148/actions-go-fuzz/list@v0
with:
@@ -36,7 +36,7 @@ jobs:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
with:
go-version: '1.23.5'
go-version: '1.25.1'
- uses: shogo82148/actions-go-fuzz/run@v0
with:
packages: ${{ matrix.package }}


@@ -31,7 +31,7 @@ jobs:
- name: Set up Go 1.24
uses: actions/setup-go@v4
with:
go-version: '1.24.0'
go-version: '1.25.1'
- name: Run Gosec Security Scanner
run: | # https://github.com/securego/gosec/issues/469
export PATH=$PATH:$(go env GOPATH)/bin
@@ -44,27 +44,27 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Set up Go 1.24
uses: actions/setup-go@v4
with:
go-version: '1.24.0'
id: go
fetch-depth: 0
- name: Set up Go 1.25.1
uses: actions/setup-go@v5
with:
go-version: '1.25.1'
- name: Golangci-lint
uses: golangci/golangci-lint-action@v5
uses: golangci/golangci-lint-action@v8
with:
version: v1.64.5
args: --config=.golangci.yml --out-${NO_FUTURE}format colored-line-number
version: v2.4
build:
name: Build
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.x
- name: Set up Go 1.25.1
uses: actions/setup-go@v4
with:
go-version: '1.24.0'
go-version: '1.25.1'
id: go
- name: Check out code into the Go module directory


@@ -1,90 +1,41 @@
version: "2"
run:
timeout: 10m
go: '1.23.5'
issues:
exclude-files:
- validator/web/site_data.go
- .*_test.go
exclude-dirs:
- proto
- tools/analyzers
go: 1.23.5
linters:
enable-all: true
disable:
# Deprecated linters:
enable:
- errcheck
- ineffassign
- govet
# Disabled for now:
- asasalint
- bodyclose
- containedctx
- contextcheck
- cyclop
- depguard
- dogsled
- dupl
- durationcheck
- errname
- err113
- exhaustive
- exhaustruct
- forbidigo
- forcetypeassert
- funlen
- gci
- gochecknoglobals
- gochecknoinits
- goconst
- gocritic
- gocyclo
- godot
- godox
- gofumpt
- gomoddirectives
- gosec
- inamedparam
- interfacebloat
- intrange
- ireturn
- lll
- maintidx
- makezero
- mnd
- musttag
- nakedret
- nestif
- nilnil
- nlreturn
- noctx
- nolintlint
- nonamedreturns
- nosprintfhostport
- perfsprint
- prealloc
- predeclared
- promlinter
- protogetter
- recvcheck
- revive
- spancheck
disable:
- staticcheck
- stylecheck
- tagalign
- tagliatelle
- thelper
- unparam
- usetesting
- varnamelen
- wrapcheck
- wsl
- unused
exclusions:
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
paths:
- validator/web/site_data.go
- .*_test.go
- proto
- tools/analyzers
- third_party$
- builtin$
- examples$
linters-settings:
gocognit:
# TODO: We should target for < 50
min-complexity: 65
output:
print-issued-lines: true
sort-results: true
formatters:
enable:
- gofmt
- goimports
exclusions:
generated: lax
paths:
- validator/web/site_data.go
- .*_test.go
- proto
- tools/analyzers
- third_party$
- builtin$
- examples$


@@ -158,15 +158,15 @@ oci_register_toolchains(
http_archive(
name = "io_bazel_rules_go",
integrity = "sha256-JD8o94crTb2DFiJJR8nMAGdBAW95zIENB4cbI+JnrI4=",
patch_args = ["-p1"],
patches = [
# Expose internals of go_test for custom build transitions.
"//third_party:io_bazel_rules_go_test.patch",
],
strip_prefix = "rules_go-cf3c3af34bd869b864f5f2b98e2f41c2b220d6c9",
sha256 = "a729c8ed2447c90fe140077689079ca0acfb7580ec41637f312d650ce9d93d96",
urls = [
"https://github.com/bazel-contrib/rules_go/archive/cf3c3af34bd869b864f5f2b98e2f41c2b220d6c9.tar.gz",
"https://mirror.bazel.build/github.com/bazel-contrib/rules_go/releases/download/v0.57.0/rules_go-v0.57.0.zip",
"https://github.com/bazel-contrib/rules_go/releases/download/v0.57.0/rules_go-v0.57.0.zip",
],
)
@@ -208,7 +208,7 @@ load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_depe
go_rules_dependencies()
go_register_toolchains(
go_version = "1.24.6",
go_version = "1.25.1",
nogo = "@//:nogo",
)
@@ -253,16 +253,16 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.6.0-alpha.6"
consensus_spec_version = "v1.6.0-beta.0"
load("@prysm//tools:download_spectests.bzl", "consensus_spec_tests")
consensus_spec_tests(
name = "consensus_spec_tests",
flavors = {
"general": "sha256-7wkWuahuCO37uVYnxq8Badvi+jY907pBj68ixL8XDOI=",
"minimal": "sha256-Qy/f27N0LffS/ej7VhIubwDejD6LMK0VdenKkqtZVt4=",
"mainnet": "sha256-3H7mu5yE+FGz2Wr/nc8Nd9aEu93YoEpsYtn0zBSoeDE=",
"general": "sha256-rT3jQp2+ZaDiO66gIQggetzqr+kGeexaLqEhbx4HDMY=",
"minimal": "sha256-wowwwyvd0KJLsE+oDOtPkrhZyJndJpJ0lbXYsLH6XBw=",
"mainnet": "sha256-4ZLrLNeO7NihZ4TuWH5V5fUhvW9Y3mAPBQDCqrfShps=",
},
version = consensus_spec_version,
)
@@ -278,7 +278,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-uvz3XfMTGfy3/BtQQoEp5XQOgrWgcH/5Zo/gR0iiP+k=",
integrity = "sha256-sBe3Rx8zGq9IrvfgIhZQpYidGjy3mE1SiCb6/+pjLdY=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)
@@ -300,22 +300,6 @@ filegroup(
url = "https://github.com/ethereum/bls12-381-tests/releases/download/%s/bls_tests_yaml.tar.gz" % bls_test_version,
)
http_archive(
name = "eth2_networks",
build_file_content = """
filegroup(
name = "configs",
srcs = glob([
"shared/**/config.yaml",
]),
visibility = ["//visibility:public"],
)
""",
sha256 = "77e7e3ed65e33b7bb19d30131f4c2bb39e4dfeb188ab9ae84651c3cc7600131d",
strip_prefix = "eth2-networks-934c948e69205dcf2deb87e4ae6cc140c335f94d",
url = "https://github.com/eth-clients/eth2-networks/archive/934c948e69205dcf2deb87e4ae6cc140c335f94d.tar.gz",
)
http_archive(
name = "holesky_testnet",
build_file_content = """
@@ -327,9 +311,9 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-YVFFrCmjoGZ3fXMWpsCpSsYbANy1grnqYwOLKIg2SsA=",
strip_prefix = "holesky-32a72e21c6e53c262f27d50dd540cb654517d03a",
url = "https://github.com/eth-clients/holesky/archive/32a72e21c6e53c262f27d50dd540cb654517d03a.tar.gz", # 2025-03-17
integrity = "sha256-htyxg8Ln2o8eCiifFN7/hcHGZg8Ir9CPzCEx+FUnnCs=",
strip_prefix = "holesky-8aec65f11f0c986d6b76b2eb902420635eb9b815",
url = "https://github.com/eth-clients/holesky/archive/8aec65f11f0c986d6b76b2eb902420635eb9b815.tar.gz",
)
http_archive(
@@ -359,9 +343,9 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-b5F7Wg9LLMqGRIpP2uqb/YsSFVn2ynzlV7g/Nb1EFLk=",
strip_prefix = "sepolia-562d9938f08675e9ba490a1dfba21fb05843f39f",
url = "https://github.com/eth-clients/sepolia/archive/562d9938f08675e9ba490a1dfba21fb05843f39f.tar.gz", # 2025-03-17
integrity = "sha256-+UZgfvBcea0K0sbvAJZOz5ZNmxdWZYbohP38heUuc6w=",
strip_prefix = "sepolia-f9158732adb1a2a6440613ad2232eb50e7384c4f",
url = "https://github.com/eth-clients/sepolia/archive/f9158732adb1a2a6440613ad2232eb50e7384c4f.tar.gz",
)
http_archive(
@@ -375,17 +359,17 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-dPiEWUd8QvbYGwGtIm0QtCekitVLOLsW5rpQIGzz8PU=",
strip_prefix = "hoodi-828c2c940e1141092bd4bb979cef547ea926d272",
url = "https://github.com/eth-clients/hoodi/archive/828c2c940e1141092bd4bb979cef547ea926d272.tar.gz",
integrity = "sha256-G+4c9c/vci1OyPrQJnQCI+ZCv/E0cWN4hrHDY3i7ns0=",
strip_prefix = "hoodi-b6ee51b2045a5e7fe3efac52534f75b080b049c6",
url = "https://github.com/eth-clients/hoodi/archive/b6ee51b2045a5e7fe3efac52534f75b080b049c6.tar.gz",
)
http_archive(
name = "com_google_protobuf",
sha256 = "9bd87b8280ef720d3240514f884e56a712f2218f0d693b48050c836028940a42",
strip_prefix = "protobuf-25.1",
sha256 = "7c3ebd7aaedd86fa5dc479a0fda803f602caaf78d8aff7ce83b89e1b8ae7442a",
strip_prefix = "protobuf-28.3",
urls = [
"https://github.com/protocolbuffers/protobuf/archive/v25.1.tar.gz",
"https://github.com/protocolbuffers/protobuf/archive/v28.3.tar.gz",
],
)


@@ -56,3 +56,19 @@ type ForkChoiceNodeExtraData struct {
TimeStamp string `json:"timestamp"`
Target string `json:"target"`
}
type GetDebugDataColumnSidecarsResponse struct {
Version string `json:"version"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data []*DataColumnSidecar `json:"data"`
}
type DataColumnSidecar struct {
Index string `json:"index"`
Column []string `json:"column"`
KzgCommitments []string `json:"kzg_commitments"`
KzgProofs []string `json:"kzg_proofs"`
SignedBeaconBlockHeader *SignedBeaconBlockHeader `json:"signed_block_header"`
KzgCommitmentsInclusionProof []string `json:"kzg_commitments_inclusion_proof"`
}


@@ -16,7 +16,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/features"
"github.com/OffchainLabs/prysm/v6/config/params"
consensusblocks "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
blocktypes "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
payloadattribute "github.com/OffchainLabs/prysm/v6/consensus-types/payload-attribute"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
@@ -218,24 +218,18 @@ func (s *Service) getPayloadHash(ctx context.Context, root []byte) ([32]byte, er
// notifyNewPayload signals execution engine on a new payload.
// It returns true if the EL has returned VALID for the block
func (s *Service) notifyNewPayload(ctx context.Context, preStateVersion int,
preStateHeader interfaces.ExecutionData, blk interfaces.ReadOnlySignedBeaconBlock) (bool, error) {
// stVersion should represent the version of the pre-state; header should also be from the pre-state.
func (s *Service) notifyNewPayload(ctx context.Context, stVersion int, header interfaces.ExecutionData, blk blocktypes.ROBlock) (bool, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.notifyNewPayload")
defer span.End()
// Execution payload is only supported in Bellatrix and beyond. Pre
// merge blocks are never optimistic
if blk == nil {
return false, errors.New("signed beacon block can't be nil")
}
if preStateVersion < version.Bellatrix {
if stVersion < version.Bellatrix {
return true, nil
}
if err := consensusblocks.BeaconBlockIsNil(blk); err != nil {
return false, err
}
body := blk.Block().Body()
enabled, err := blocks.IsExecutionEnabledUsingHeader(preStateHeader, body)
enabled, err := blocks.IsExecutionEnabledUsingHeader(header, body)
if err != nil {
return false, errors.Wrap(invalidBlock{error: err}, "could not determine if execution is enabled")
}
@@ -268,28 +262,32 @@ func (s *Service) notifyNewPayload(ctx context.Context, preStateVersion int,
return false, errors.New("nil execution requests")
}
}
lastValidHash, err = s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, versionedHashes, parentRoot, requests)
switch {
case err == nil:
lastValidHash, err = s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, versionedHashes, parentRoot, requests)
if err == nil {
newPayloadValidNodeCount.Inc()
return true, nil
case errors.Is(err, execution.ErrAcceptedSyncingPayloadStatus):
}
logFields := logrus.Fields{
"slot": blk.Block().Slot(),
"parentRoot": fmt.Sprintf("%#x", parentRoot),
"root": fmt.Sprintf("%#x", blk.Root()),
"payloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash())),
}
if errors.Is(err, execution.ErrAcceptedSyncingPayloadStatus) {
newPayloadOptimisticNodeCount.Inc()
log.WithFields(logrus.Fields{
"slot": blk.Block().Slot(),
"payloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash())),
}).Info("Called new payload with optimistic block")
log.WithFields(logFields).Info("Called new payload with optimistic block")
return false, nil
case errors.Is(err, execution.ErrInvalidPayloadStatus):
lvh := bytesutil.ToBytes32(lastValidHash)
}
if errors.Is(err, execution.ErrInvalidPayloadStatus) {
log.WithFields(logFields).WithError(err).Error("Invalid payload status")
return false, invalidBlock{
error: ErrInvalidPayload,
lastValidHash: lvh,
lastValidHash: bytesutil.ToBytes32(lastValidHash),
}
default:
return false, errors.WithMessage(ErrUndefinedExecutionEngineError, err.Error())
}
log.WithFields(logFields).WithError(err).Error("Unexpected execution engine error")
return false, errors.WithMessage(ErrUndefinedExecutionEngineError, err.Error())
}
// reportInvalidBlock deals with the event that an invalid block was detected by the execution layer


@@ -481,33 +481,12 @@ func Test_NotifyNewPayload(t *testing.T) {
phase0State, _ := util.DeterministicGenesisState(t, 1)
altairState, _ := util.DeterministicGenesisStateAltair(t, 1)
bellatrixState, _ := util.DeterministicGenesisStateBellatrix(t, 2)
a := &ethpb.SignedBeaconBlockAltair{
Block: &ethpb.BeaconBlockAltair{
Body: &ethpb.BeaconBlockBodyAltair{},
},
}
a := util.NewBeaconBlockAltair()
altairBlk, err := consensusblocks.NewSignedBeaconBlock(a)
require.NoError(t, err)
blk := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Slot: 1,
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{
BlockNumber: 1,
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
},
},
},
}
blk := util.NewBeaconBlockBellatrix()
blk.Block.Slot = 1
blk.Block.Body.ExecutionPayload.BlockNumber = 1
bellatrixBlk, err := consensusblocks.NewSignedBeaconBlock(util.HydrateSignedBeaconBlockBellatrix(blk))
require.NoError(t, err)
st := params.BeaconConfig().SlotsPerEpoch.Mul(uint64(epochsSinceFinalitySaveHotStateDB))
@@ -544,12 +523,6 @@ func Test_NotifyNewPayload(t *testing.T) {
blk: altairBlk,
isValidPayload: true,
},
{
name: "nil beacon block",
postState: bellatrixState,
errString: "signed beacon block can't be nil",
isValidPayload: false,
},
{
name: "new payload with optimistic block",
postState: bellatrixState,
@@ -576,15 +549,8 @@ func Test_NotifyNewPayload(t *testing.T) {
name: "altair pre state, happy case",
postState: bellatrixState,
blk: func() interfaces.ReadOnlySignedBeaconBlock {
blk := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{
ParentHash: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
},
},
},
}
blk := util.NewBeaconBlockBellatrix()
blk.Block.Body.ExecutionPayload.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
b, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
return b
@@ -595,24 +561,7 @@ func Test_NotifyNewPayload(t *testing.T) {
name: "not at merge transition",
postState: bellatrixState,
blk: func() interfaces.ReadOnlySignedBeaconBlock {
blk := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
},
},
},
}
blk := util.NewBeaconBlockBellatrix()
b, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
return b
@@ -623,15 +572,8 @@ func Test_NotifyNewPayload(t *testing.T) {
name: "happy case",
postState: bellatrixState,
blk: func() interfaces.ReadOnlySignedBeaconBlock {
blk := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{
ParentHash: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
},
},
},
}
blk := util.NewBeaconBlockBellatrix()
blk.Block.Body.ExecutionPayload.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
b, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
return b
@@ -642,15 +584,8 @@ func Test_NotifyNewPayload(t *testing.T) {
name: "undefined error from ee",
postState: bellatrixState,
blk: func() interfaces.ReadOnlySignedBeaconBlock {
blk := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{
ParentHash: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
},
},
},
}
blk := util.NewBeaconBlockBellatrix()
blk.Block.Body.ExecutionPayload.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
b, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
return b
@@ -662,15 +597,8 @@ func Test_NotifyNewPayload(t *testing.T) {
name: "invalid block hash error from ee",
postState: bellatrixState,
blk: func() interfaces.ReadOnlySignedBeaconBlock {
blk := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{
ParentHash: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
},
},
},
}
blk := util.NewBeaconBlockBellatrix()
blk.Block.Body.ExecutionPayload.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
b, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
return b
@@ -701,7 +629,9 @@ func Test_NotifyNewPayload(t *testing.T) {
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
postVersion, postHeader, err := getStateVersionAndPayload(tt.postState)
require.NoError(t, err)
isValidPayload, err := service.notifyNewPayload(ctx, postVersion, postHeader, tt.blk)
rob, err := consensusblocks.NewROBlock(tt.blk)
require.NoError(t, err)
isValidPayload, err := service.notifyNewPayload(ctx, postVersion, postHeader, rob)
if tt.errString != "" {
require.ErrorContains(t, tt.errString, err)
if tt.invalidBlock {
@@ -725,17 +655,12 @@ func Test_NotifyNewPayload_SetOptimisticToValid(t *testing.T) {
ctx := tr.ctx
bellatrixState, _ := util.DeterministicGenesisStateBellatrix(t, 2)
blk := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{
ParentHash: bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength),
},
},
},
}
blk := util.NewBeaconBlockBellatrix()
blk.Block.Body.ExecutionPayload.ParentHash = bytesutil.PadTo([]byte{'a'}, fieldparams.RootLength)
bellatrixBlk, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
rob, err := consensusblocks.NewROBlock(bellatrixBlk)
require.NoError(t, err)
e := &mockExecution.EngineClient{BlockByHashMap: map[[32]byte]*v1.ExecutionBlock{}}
e.BlockByHashMap[[32]byte{'a'}] = &v1.ExecutionBlock{
Header: gethtypes.Header{
@@ -752,7 +677,7 @@ func Test_NotifyNewPayload_SetOptimisticToValid(t *testing.T) {
service.cfg.ExecutionEngineCaller = e
postVersion, postHeader, err := getStateVersionAndPayload(bellatrixState)
require.NoError(t, err)
validated, err := service.notifyNewPayload(ctx, postVersion, postHeader, bellatrixBlk)
validated, err := service.notifyNewPayload(ctx, postVersion, postHeader, rob)
require.NoError(t, err)
require.Equal(t, true, validated)
}


@@ -3,7 +3,6 @@ package blockchain
import (
"context"
"fmt"
"slices"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
@@ -746,14 +745,14 @@ func (s *Service) areDataColumnsAvailable(
}
// Get a map of data column indices that are not currently available.
missingMap, err := missingDataColumnIndices(s.dataColumnStorage, root, peerInfo.CustodyColumns)
missing, err := missingDataColumnIndices(s.dataColumnStorage, root, peerInfo.CustodyColumns)
if err != nil {
return errors.Wrap(err, "missing data columns")
}
// If there are no missing indices, all data column sidecars are available.
// This is the happy path.
if len(missingMap) == 0 {
if len(missing) == 0 {
return nil
}
@@ -770,33 +769,17 @@ func (s *Service) areDataColumnsAvailable(
// Avoid logging if DA check is called after next slot start.
if nextSlot.After(time.Now()) {
timer := time.AfterFunc(time.Until(nextSlot), func() {
missingMapCount := uint64(len(missingMap))
missingCount := uint64(len(missing))
if missingMapCount == 0 {
if missingCount == 0 {
return
}
var (
expected interface{} = "all"
missing interface{} = "all"
)
numberOfColumns := params.BeaconConfig().NumberOfColumns
colMapCount := uint64(len(peerInfo.CustodyColumns))
if colMapCount < numberOfColumns {
expected = uint64MapToSortedSlice(peerInfo.CustodyColumns)
}
if missingMapCount < numberOfColumns {
missing = uint64MapToSortedSlice(missingMap)
}
log.WithFields(logrus.Fields{
"slot": block.Slot(),
"root": fmt.Sprintf("%#x", root),
"columnsExpected": expected,
"columnsWaiting": missing,
"columnsExpected": helpers.SortedPrettySliceFromMap(peerInfo.CustodyColumns),
"columnsWaiting": helpers.SortedPrettySliceFromMap(missing),
}).Warning("Data columns still missing at slot end")
})
defer timer.Stop()
@@ -812,7 +795,7 @@ func (s *Service) areDataColumnsAvailable(
for _, index := range idents.Indices {
// This is a data column we are expecting.
if _, ok := missingMap[index]; ok {
if _, ok := missing[index]; ok {
storedDataColumnsCount++
}
@@ -823,10 +806,10 @@ func (s *Service) areDataColumnsAvailable(
}
// Remove the index from the missing map.
delete(missingMap, index)
delete(missing, index)
// Return if there is no more missing data columns.
if len(missingMap) == 0 {
if len(missing) == 0 {
return nil
}
}
@@ -834,10 +817,10 @@ func (s *Service) areDataColumnsAvailable(
case <-ctx.Done():
var missingIndices interface{} = "all"
numberOfColumns := params.BeaconConfig().NumberOfColumns
missingIndicesCount := uint64(len(missingMap))
missingIndicesCount := uint64(len(missing))
if missingIndicesCount < numberOfColumns {
missingIndices = uint64MapToSortedSlice(missingMap)
missingIndices = helpers.SortedPrettySliceFromMap(missing)
}
return errors.Wrapf(ctx.Err(), "data column sidecars slot: %d, BlockRoot: %#x, missing: %v", block.Slot(), root, missingIndices)
@@ -921,16 +904,6 @@ func (s *Service) areBlobsAvailable(ctx context.Context, root [fieldparams.RootL
}
}
// uint64MapToSortedSlice produces a sorted uint64 slice from a map.
func uint64MapToSortedSlice(input map[uint64]bool) []uint64 {
output := make([]uint64, 0, len(input))
for idx := range input {
output = append(output, idx)
}
slices.Sort[[]uint64](output)
return output
}
// lateBlockTasks is called 4 seconds into the slot and performs tasks
// related to late blocks. It emits a MissedSlot state feed event.
// It calls FCU and sets the right attributes if we are proposing next slot


@@ -248,7 +248,7 @@ func (s *Service) handleDA(ctx context.Context, avs das.AvailabilityStore, block
err = s.isDataAvailable(ctx, block)
}
elapsed := time.Since(start)
if err != nil {
if err == nil {
dataAvailWaitedTime.Observe(float64(elapsed.Milliseconds()))
}
return elapsed, err


@@ -184,45 +184,54 @@ func NewGenesisBlockForState(ctx context.Context, st state.BeaconState) (interfa
})
case *ethpb.BeaconStateElectra:
return blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockElectra{
Block: &ethpb.BeaconBlockElectra{
ParentRoot: params.BeaconConfig().ZeroHash[:],
StateRoot: root[:],
Body: &ethpb.BeaconBlockBodyElectra{
RandaoReveal: make([]byte, 96),
Eth1Data: &ethpb.Eth1Data{
DepositRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
},
Graffiti: make([]byte, 32),
SyncAggregate: &ethpb.SyncAggregate{
SyncCommitteeBits: make([]byte, fieldparams.SyncCommitteeLength/8),
SyncCommitteeSignature: make([]byte, fieldparams.BLSSignatureLength),
},
ExecutionPayload: &enginev1.ExecutionPayloadDeneb{
ParentHash: make([]byte, 32),
FeeRecipient: make([]byte, 20),
StateRoot: make([]byte, 32),
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
Transactions: make([][]byte, 0),
Withdrawals: make([]*enginev1.Withdrawal, 0),
},
BlsToExecutionChanges: make([]*ethpb.SignedBLSToExecutionChange, 0),
BlobKzgCommitments: make([][]byte, 0),
ExecutionRequests: &enginev1.ExecutionRequests{
Withdrawals: make([]*enginev1.WithdrawalRequest, 0),
Deposits: make([]*enginev1.DepositRequest, 0),
Consolidations: make([]*enginev1.ConsolidationRequest, 0),
},
},
},
Block: electraGenesisBlock(root),
Signature: params.BeaconConfig().EmptySignature[:],
})
case *ethpb.BeaconStateFulu:
return blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockFulu{
Block: electraGenesisBlock(root),
Signature: params.BeaconConfig().EmptySignature[:],
})
default:
return nil, ErrUnrecognizedState
}
}
func electraGenesisBlock(root [fieldparams.RootLength]byte) *ethpb.BeaconBlockElectra {
return &ethpb.BeaconBlockElectra{
ParentRoot: params.BeaconConfig().ZeroHash[:],
StateRoot: root[:],
Body: &ethpb.BeaconBlockBodyElectra{
RandaoReveal: make([]byte, 96),
Eth1Data: &ethpb.Eth1Data{
DepositRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
},
Graffiti: make([]byte, 32),
SyncAggregate: &ethpb.SyncAggregate{
SyncCommitteeBits: make([]byte, fieldparams.SyncCommitteeLength/8),
SyncCommitteeSignature: make([]byte, fieldparams.BLSSignatureLength),
},
ExecutionPayload: &enginev1.ExecutionPayloadDeneb{
ParentHash: make([]byte, 32),
FeeRecipient: make([]byte, 20),
StateRoot: make([]byte, 32),
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
Transactions: make([][]byte, 0),
Withdrawals: make([]*enginev1.Withdrawal, 0),
},
BlsToExecutionChanges: make([]*ethpb.SignedBLSToExecutionChange, 0),
BlobKzgCommitments: make([][]byte, 0),
ExecutionRequests: &enginev1.ExecutionRequests{
Withdrawals: make([]*enginev1.WithdrawalRequest, 0),
Deposits: make([]*enginev1.DepositRequest, 0),
Consolidations: make([]*enginev1.ConsolidationRequest, 0),
},
},
}
}


@@ -10,6 +10,7 @@ go_library(
"legacy.go",
"metrics.go",
"randao.go",
"ranges.go",
"rewards_penalties.go",
"shuffle.go",
"sync_committee.go",
@@ -56,6 +57,7 @@ go_test(
"private_access_fuzz_noop_test.go", # keep
"private_access_test.go",
"randao_test.go",
"ranges_test.go",
"rewards_penalties_test.go",
"shuffle_test.go",
"sync_committee_test.go",


@@ -0,0 +1,62 @@
package helpers
import (
"fmt"
"slices"
)
// SortedSliceFromMap takes a map with uint64 keys and returns a sorted slice of the keys.
func SortedSliceFromMap(toSort map[uint64]bool) []uint64 {
slice := make([]uint64, 0, len(toSort))
for key := range toSort {
slice = append(slice, key)
}
slices.Sort(slice)
return slice
}
// PrettySlice returns a pretty string representation of a sorted slice of uint64.
// `sortedSlice` must be sorted in ascending order.
// Example: [1,2,3,5,6,7,8,10] -> "1-3,5-8,10"
func PrettySlice(sortedSlice []uint64) string {
if len(sortedSlice) == 0 {
return ""
}
var result string
start := sortedSlice[0]
end := sortedSlice[0]
for i := 1; i < len(sortedSlice); i++ {
if sortedSlice[i] == end+1 {
end = sortedSlice[i]
continue
}
if start == end {
result += fmt.Sprintf("%d,", start)
start = sortedSlice[i]
end = sortedSlice[i]
continue
}
result += fmt.Sprintf("%d-%d,", start, end)
start = sortedSlice[i]
end = sortedSlice[i]
}
if start == end {
result += fmt.Sprintf("%d", start)
return result
}
result += fmt.Sprintf("%d-%d", start, end)
return result
}
// SortedPrettySliceFromMap combines SortedSliceFromMap and PrettySlice to return a pretty string representation of the keys in a map.
func SortedPrettySliceFromMap(toSort map[uint64]bool) string {
sorted := SortedSliceFromMap(toSort)
return PrettySlice(sorted)
}
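The run-length grouping in PrettySlice above can be reproduced as a standalone sketch; `prettyRanges` is an illustrative name, not from the source:

```go
package main

import (
	"fmt"
	"slices"
	"strings"
)

// prettyRanges collapses a sorted slice of uint64 into a compact
// "1-3,5-8,10" style string, mirroring the helper above.
func prettyRanges(sorted []uint64) string {
	if len(sorted) == 0 {
		return ""
	}
	var parts []string
	start, end := sorted[0], sorted[0]
	flush := func() {
		if start == end {
			parts = append(parts, fmt.Sprintf("%d", start))
		} else {
			parts = append(parts, fmt.Sprintf("%d-%d", start, end))
		}
	}
	for _, v := range sorted[1:] {
		if v == end+1 {
			end = v // extend the current run
			continue
		}
		flush() // close the finished run
		start, end = v, v
	}
	flush()
	return strings.Join(parts, ",")
}

func main() {
	keys := map[uint64]bool{5: true, 7: true, 8: true, 10: true}
	sorted := make([]uint64, 0, len(keys))
	for k := range keys {
		sorted = append(sorted, k)
	}
	slices.Sort(sorted)
	fmt.Println(prettyRanges(sorted)) // 5,7-8,10
}
```

Accumulating parts and joining once avoids the repeated string concatenation of the original, while producing identical output.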


@@ -0,0 +1,64 @@
package helpers_test
import (
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
func TestSortedSliceFromMap(t *testing.T) {
input := map[uint64]bool{5: true, 3: true, 8: true, 1: true}
expected := []uint64{1, 3, 5, 8}
actual := helpers.SortedSliceFromMap(input)
require.Equal(t, len(expected), len(actual))
for i := range expected {
require.Equal(t, expected[i], actual[i])
}
}
func TestPrettySlice(t *testing.T) {
tests := []struct {
name string
input []uint64
expected string
}{
{
name: "empty slice",
input: []uint64{},
expected: "",
},
{
name: "only distinct elements",
input: []uint64{1, 3, 5, 7, 9},
expected: "1,3,5,7,9",
},
{
name: "single range",
input: []uint64{1, 2, 3, 4, 5},
expected: "1-5",
},
{
name: "multiple ranges and distinct elements",
input: []uint64{1, 2, 3, 5, 6, 7, 8, 10, 12, 13, 14},
expected: "1-3,5-8,10,12-14",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
actual := helpers.PrettySlice(tt.input)
require.Equal(t, tt.expected, actual)
})
}
}
func TestSortedPrettySliceFromMap(t *testing.T) {
input := map[uint64]bool{5: true, 7: true, 8: true, 10: true}
expected := "5,7-8,10"
actual := helpers.SortedPrettySliceFromMap(input)
require.Equal(t, expected, actual)
}


@@ -109,15 +109,14 @@ func DataColumnSidecars(rows []kzg.CellsAndProofs, src ConstructionPopulator) ([
if err != nil {
return nil, errors.Wrap(err, "rotate cells and proofs")
}
info, err := src.extract()
if err != nil {
return nil, errors.Wrap(err, "extract block info")
}
maxIdx := params.BeaconConfig().NumberOfColumns
roSidecars := make([]blocks.RODataColumn, 0, maxIdx)
for idx := range maxIdx {
info, err := src.extract()
if err != nil {
return nil, errors.Wrap(err, "extract block info")
}
sidecar := &ethpb.DataColumnSidecar{
Index: idx,
Column: cells[idx],


@@ -122,18 +122,18 @@ type BlobStorage struct {
func (bs *BlobStorage) WarmCache() {
start := time.Now()
if bs.layoutName == LayoutNameFlat {
log.Info("Blob filesystem cache warm-up started. This may take a few minutes.")
log.Info("Blob filesystem cache warm-up started. This may take a few minutes")
} else {
log.Info("Blob filesystem cache warm-up started.")
log.Info("Blob filesystem cache warm-up started")
}
if err := warmCache(bs.layout, bs.cache); err != nil {
log.WithError(err).Error("Error encountered while warming up blob filesystem cache.")
log.WithError(err).Error("Error encountered while warming up blob filesystem cache")
}
if err := bs.migrateLayouts(); err != nil {
log.WithError(err).Error("Error encountered while migrating blob storage.")
log.WithError(err).Error("Error encountered while migrating blob storage")
}
log.WithField("elapsed", time.Since(start)).Info("Blob filesystem cache warm-up complete.")
log.WithField("elapsed", time.Since(start)).Info("Blob filesystem cache warm-up complete")
}
// If any blob storage directories are found for layouts besides the configured layout, migrate them.


@@ -4,17 +4,26 @@ import (
"bytes"
"context"
"encoding/binary"
"time"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
bolt "go.etcd.io/bbolt"
)
var errTimeOut = errors.New("operation timed out")
// PruneAttestationsAtEpoch deletes all attestations from the slasher DB with target epoch
// less than or equal to the specified epoch.
func (s *Store) PruneAttestationsAtEpoch(
_ context.Context, maxEpoch primitives.Epoch,
ctx context.Context, maxEpoch primitives.Epoch,
) (numPruned uint, err error) {
// In some cases, pruning may take a very long time and consume significant memory in the
// open Update transaction. Therefore, we impose a 1 minute timeout on this operation.
ctx, cancel := context.WithTimeout(ctx, 1*time.Minute)
defer cancel()
// We can prune everything less than the current epoch - history length.
encodedEndPruneEpoch := make([]byte, 8)
binary.BigEndian.PutUint64(encodedEndPruneEpoch, uint64(maxEpoch))
@@ -48,13 +57,18 @@ func (s *Store) PruneAttestationsAtEpoch(
return
}
if err = s.db.Update(func(tx *bolt.Tx) error {
err = s.db.Update(func(tx *bolt.Tx) error {
signingRootsBkt := tx.Bucket(attestationDataRootsBucket)
attRecordsBkt := tx.Bucket(attestationRecordsBucket)
c := signingRootsBkt.Cursor()
// We begin a pruning iteration starting from the first item in the bucket.
for k, v := c.First(); k != nil; k, v = c.Next() {
if ctx.Err() != nil {
// Exit the routine if the context has expired.
return errTimeOut
}
// We check the epoch from the current key in the database.
// If we have hit an epoch that is greater than the end epoch of the pruning process,
// we then completely exit the process as we are done.
@@ -67,18 +81,27 @@ func (s *Store) PruneAttestationsAtEpoch(
// so it is possible we have a few adjacent objects that have the same slot, such as
// (target_epoch = 3 ++ _) => encode(attestation)
if err := signingRootsBkt.Delete(k); err != nil {
return err
return errors.Wrap(err, "delete attestation signing root")
}
if err := attRecordsBkt.Delete(v); err != nil {
return err
return errors.Wrap(err, "delete attestation record")
}
slasherAttestationsPrunedTotal.Inc()
numPruned++
}
return nil
}); err != nil {
})
if errors.Is(err, errTimeOut) {
log.Warning("Aborting pruning routine")
return
}
if err != nil {
log.WithError(err).Error("Failed to prune attestations")
return
}
return
}


@@ -142,10 +142,9 @@ var ErrEmptyBlockHash = errors.New("Block hash is empty 0x0000...")
func (s *Service) NewPayload(ctx context.Context, payload interfaces.ExecutionData, versionedHashes []common.Hash, parentBlockRoot *common.Hash, executionRequests *pb.ExecutionRequests) ([]byte, error) {
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.NewPayload")
defer span.End()
start := time.Now()
defer func() {
defer func(start time.Time) {
newPayloadLatency.Observe(float64(time.Since(start).Milliseconds()))
}()
}(time.Now())
d := time.Now().Add(time.Duration(params.BeaconConfig().ExecutionEngineTimeoutValue) * time.Second)
ctx, cancel := context.WithDeadline(ctx, d)
@@ -183,7 +182,10 @@ func (s *Service) NewPayload(ctx context.Context, payload interfaces.ExecutionDa
return nil, errors.New("unknown execution data type")
}
if result.ValidationError != "" {
log.WithError(errors.New(result.ValidationError)).Error("Got a validation error in newPayload")
log.WithField("status", result.Status.String()).
WithField("parentRoot", fmt.Sprintf("%#x", parentBlockRoot)).
WithError(errors.New(result.ValidationError)).
Error("Got a validation error in newPayload")
}
switch result.Status {
case pb.PayloadStatus_INVALID_BLOCK_HASH:
@@ -195,7 +197,7 @@ func (s *Service) NewPayload(ctx context.Context, payload interfaces.ExecutionDa
case pb.PayloadStatus_VALID:
return result.LatestValidHash, nil
default:
return nil, ErrUnknownPayloadStatus
return nil, errors.Wrapf(ErrUnknownPayloadStatus, "unknown payload status: %s", result.Status.String())
}
}
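The latency refactor above relies on deferred-function arguments being evaluated when `defer` is registered, so `time.Now()` captures the start even though the body runs at return. A minimal sketch with an illustrative `timedWork` and observer callback:

```go
package main

import (
	"fmt"
	"time"
)

// timedWork measures its own elapsed time: the defer's argument
// (time.Now()) is evaluated immediately at the defer statement,
// while the function body runs when timedWork returns.
func timedWork(observe func(ms float64)) {
	defer func(start time.Time) {
		observe(float64(time.Since(start).Milliseconds()))
	}(time.Now())

	time.Sleep(25 * time.Millisecond) // the work being measured
}

func main() {
	timedWork(func(ms float64) {
		fmt.Printf("took ~%.0fms\n", ms)
	})
}
```

This removes the separate `start` variable from the enclosing scope without changing behavior.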


@@ -928,7 +928,7 @@ func TestClient_HTTP(t *testing.T) {
wrappedPayload, err := blocks.WrappedExecutionPayload(execPayload)
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{}, nil)
require.ErrorIs(t, ErrUnknownPayloadStatus, err)
require.ErrorIs(t, err, ErrUnknownPayloadStatus)
require.DeepEqual(t, []uint8(nil), resp)
})
t.Run(BlockByNumberMethod, func(t *testing.T) {


@@ -381,6 +381,7 @@ func (s *Service) internalBroadcastDataColumnSidecar(
"timeSinceSlotStart": time.Since(slotStartTime),
"root": fmt.Sprintf("%#x", dataColumnSidecar.BlockRoot()),
"columnSubnet": columnSubnet,
"blobCount": len(dataColumnSidecar.Column),
}).Debug("Broadcasted data column sidecar")
// Increase the number of successful broadcasts.


@@ -10,6 +10,8 @@ import (
"github.com/sirupsen/logrus"
)
var errNoCustodyInfo = errors.New("no custody info available")
var _ CustodyManager = (*Service)(nil)
// EarliestAvailableSlot returns the earliest available slot.
@@ -30,7 +32,7 @@ func (s *Service) CustodyGroupCount() (uint64, error) {
defer s.custodyInfoLock.Unlock()
if s.custodyInfo == nil {
return 0, errors.New("no custody info available")
return 0, errNoCustodyInfo
}
return s.custodyInfo.groupCount, nil


@@ -79,7 +79,7 @@ func (quicProtocol) ENRKey() string { return quickProtocolEnrKey }
func newListener(listenerCreator func() (*discover.UDPv5, error)) (*listenerWrapper, error) {
rawListener, err := listenerCreator()
if err != nil {
return nil, errors.Wrap(err, "could not create new listener")
return nil, errors.Wrap(err, "create new listener")
}
return &listenerWrapper{
listener: rawListener,
@@ -536,7 +536,7 @@ func (s *Service) createListener(
int(s.cfg.QUICPort),
)
if err != nil {
return nil, errors.Wrap(err, "could not create local node")
return nil, errors.Wrap(err, "create local node")
}
bootNodes := make([]*enode.Node, 0, len(s.cfg.Discv5BootStrapAddrs))
@@ -604,13 +604,27 @@ func (s *Service) createLocalNode(
localNode = initializeSyncCommSubnets(localNode)
if params.FuluEnabled() {
custodyGroupCount, err := s.CustodyGroupCount()
if err != nil {
return nil, errors.Wrap(err, "could not retrieve custody group count")
}
// TODO: Replace this quick fix with a proper synchronization scheme (chan?)
const delay = 1 * time.Second
custodyGroupCountEntry := peerdas.Cgc(custodyGroupCount)
localNode.Set(custodyGroupCountEntry)
var custodyGroupCount uint64
err := errNoCustodyInfo
for errors.Is(err, errNoCustodyInfo) {
custodyGroupCount, err = s.CustodyGroupCount()
if errors.Is(err, errNoCustodyInfo) {
log.WithField("delay", delay).Debug("No custody info available yet, retrying later")
time.Sleep(delay)
continue
}
if err != nil {
return nil, errors.Wrap(err, "retrieve custody group count")
}
custodyGroupCountEntry := peerdas.Cgc(custodyGroupCount)
localNode.Set(custodyGroupCountEntry)
}
}
if s.cfg != nil && s.cfg.HostAddress != "" {
@@ -652,7 +666,7 @@ func (s *Service) startDiscoveryV5(
}
wrappedListener, err := newListener(createListener)
if err != nil {
return nil, errors.Wrap(err, "could not create listener")
return nil, errors.Wrap(err, "create listener")
}
record := wrappedListener.Self()


@@ -106,7 +106,7 @@ func (s *Service) endpoints(
}
if enableDebug {
endpoints = append(endpoints, s.debugEndpoints(stater)...)
endpoints = append(endpoints, s.debugEndpoints(stater, blocker)...)
}
return endpoints
@@ -1097,7 +1097,7 @@ func (s *Service) lightClientEndpoints() []endpoint {
}
}
func (s *Service) debugEndpoints(stater lookup.Stater) []endpoint {
func (s *Service) debugEndpoints(stater lookup.Stater, blocker lookup.Blocker) []endpoint {
server := &debug.Server{
BeaconDB: s.cfg.BeaconDB,
HeadFetcher: s.cfg.HeadFetcher,
@@ -1107,6 +1107,8 @@ func (s *Service) debugEndpoints(stater lookup.Stater) []endpoint {
ForkchoiceFetcher: s.cfg.ForkchoiceFetcher,
FinalizationFetcher: s.cfg.FinalizationFetcher,
ChainInfoFetcher: s.cfg.ChainInfoFetcher,
GenesisTimeFetcher: s.cfg.GenesisTimeFetcher,
Blocker: blocker,
}
const namespace = "debug"
@@ -1141,6 +1143,16 @@ func (s *Service) debugEndpoints(stater lookup.Stater) []endpoint {
handler: server.GetForkChoice,
methods: []string{http.MethodGet},
},
{
template: "/eth/v1/debug/beacon/data_column_sidecars/{block_id}",
name: namespace + ".GetDataColumnSidecars",
middleware: []middleware.Middleware{
middleware.AcceptHeaderHandler([]string{api.JsonMediaType, api.OctetStreamMediaType}),
middleware.AcceptEncodingHeaderHandler(),
},
handler: server.DataColumnSidecars,
methods: []string{http.MethodGet},
},
}
}


@@ -80,9 +80,10 @@ func Test_endpoints(t *testing.T) {
}
debugRoutes := map[string][]string{
"/eth/v2/debug/beacon/states/{state_id}": {http.MethodGet},
"/eth/v2/debug/beacon/heads": {http.MethodGet},
"/eth/v1/debug/fork_choice": {http.MethodGet},
"/eth/v2/debug/beacon/states/{state_id}": {http.MethodGet},
"/eth/v2/debug/beacon/heads": {http.MethodGet},
"/eth/v1/debug/fork_choice": {http.MethodGet},
"/eth/v1/debug/beacon/data_column_sidecars/{block_id}": {http.MethodGet},
}
eventsRoutes := map[string][]string{


@@ -1465,8 +1465,7 @@ func (s *Server) GetBlockHeader(w http.ResponseWriter, r *http.Request) {
}
blk, err := s.Blocker.Block(ctx, []byte(blockID))
ok := shared.WriteBlockFetchError(w, blk, err)
if !ok {
if !shared.WriteBlockFetchError(w, blk, err) {
return
}
blockHeader, err := blk.Header()


@@ -109,10 +109,10 @@ func (s *Server) GetRandao(w http.ResponseWriter, r *http.Request) {
// future epochs and epochs too far back are not supported.
randaoEpochLowerBound := uint64(0)
// Lower bound should not underflow.
if uint64(stEpoch) > uint64(st.RandaoMixesLength()) {
randaoEpochLowerBound = uint64(stEpoch) - uint64(st.RandaoMixesLength())
if uint64(stEpoch) >= uint64(st.RandaoMixesLength()) {
randaoEpochLowerBound = uint64(stEpoch) - uint64(st.RandaoMixesLength()) + 1
}
if epoch > stEpoch || uint64(epoch) < randaoEpochLowerBound+1 {
if epoch > stEpoch || (uint64(epoch) < randaoEpochLowerBound) {
httputil.HandleError(w, "Epoch is out of range for the randao mixes of the state", http.StatusBadRequest)
return
}
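The corrected bound above keeps exactly `RandaoMixesLength` epochs queryable from the ring buffer. A small sketch of the arithmetic; `randaoLowerBound` and the sample constant are illustrative:

```go
package main

import "fmt"

// randaoLowerBound returns the oldest epoch whose mix is still held in
// a ring buffer of mixesLength entries, given the state's epoch.
func randaoLowerBound(stateEpoch, mixesLength uint64) uint64 {
	// Once stateEpoch reaches mixesLength, the oldest retained epoch is
	// stateEpoch - mixesLength + 1; before that, epoch 0 is still held.
	if stateEpoch >= mixesLength {
		return stateEpoch - mixesLength + 1
	}
	return 0
}

func main() {
	const mixes = 65536 // assumed EpochsPerHistoricalVector, as in mainnet config
	// At state epoch mixes-1, epoch 0 is still in range; one epoch later it is not.
	fmt.Println(randaoLowerBound(mixes-1, mixes)) // 0
	fmt.Println(randaoLowerBound(mixes, mixes))   // 1
}
```

This matches the two new test cases in the diff: querying epoch 0 succeeds from a state at epoch `EpochsPerHistoricalVector - 1` and fails from a state at `EpochsPerHistoricalVector`.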


@@ -192,8 +192,14 @@ func TestGetRandao(t *testing.T) {
assert.Equal(t, hexutil.Encode(mixOld[:]), resp.Data.Randao)
})
t.Run("head state below `EpochsPerHistoricalVector`", func(t *testing.T) {
s.Stater = &testutil.MockStater{
BeaconState: headSt,
s := &Server{
Stater: &testutil.MockStater{
BeaconState: headSt,
},
HeadFetcher: chainService,
OptimisticModeFetcher: chainService,
FinalizationFetcher: chainService,
BeaconDB: db,
}
request := httptest.NewRequest(http.MethodGet, "http://example.com//eth/v1/beacon/states/{state_id}/randao", nil)
@@ -303,6 +309,74 @@ func TestGetRandao(t *testing.T) {
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.DeepEqual(t, true, resp.Finalized)
})
t.Run("early epoch scenario - epoch 0 from state at epoch (EpochsPerHistoricalVector - 1)", func(t *testing.T) {
// Create a state at early epoch
earlyEpochState, err := util.NewBeaconState()
require.NoError(t, err)
earlyEpoch := params.BeaconConfig().EpochsPerHistoricalVector - 1
require.NoError(t, earlyEpochState.SetSlot(params.BeaconConfig().SlotsPerEpoch*primitives.Slot(earlyEpoch)))
// Set up RANDAO mix for epoch 0
// In real networks, this would be the ETH1 block hash used for genesis
epoch0Randao := bytesutil.ToBytes32([]byte("epoch0"))
require.NoError(t, earlyEpochState.UpdateRandaoMixesAtIndex(0, epoch0Randao))
earlyServer := &Server{
Stater: &testutil.MockStater{
BeaconState: earlyEpochState,
},
HeadFetcher: &chainMock.ChainService{},
OptimisticModeFetcher: &chainMock.ChainService{},
FinalizationFetcher: &chainMock.ChainService{},
}
// Query epoch 0 from state at epoch (EpochsPerHistoricalVector - 1) - should succeed
request := httptest.NewRequest(http.MethodGet, "http://example.com//eth/v1/beacon/states/{state_id}/randao?epoch=0", nil)
request.SetPathValue("state_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
earlyServer.GetRandao(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Early epoch queries should succeed when within bounds")
resp := &structs.GetRandaoResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, hexutil.Encode(epoch0Randao[:]), resp.Data.Randao)
})
t.Run("early epoch scenario - epoch 0 from state at epoch EpochsPerHistoricalVector", func(t *testing.T) {
// Create a state at early epoch
earlyEpochState, err := util.NewBeaconState()
require.NoError(t, err)
earlyEpoch := params.BeaconConfig().EpochsPerHistoricalVector
require.NoError(t, earlyEpochState.SetSlot(params.BeaconConfig().SlotsPerEpoch*primitives.Slot(earlyEpoch)))
// Set up RANDAO mix for epoch 0
// In real networks, this would be the ETH1 block hash used for genesis
epoch0Randao := bytesutil.ToBytes32([]byte("epoch0"))
require.NoError(t, earlyEpochState.UpdateRandaoMixesAtIndex(0, epoch0Randao))
earlyServer := &Server{
Stater: &testutil.MockStater{
BeaconState: earlyEpochState,
},
HeadFetcher: &chainMock.ChainService{},
OptimisticModeFetcher: &chainMock.ChainService{},
FinalizationFetcher: &chainMock.ChainService{},
}
// Query epoch 0 from state at epoch EpochsPerHistoricalVector - should fail
request := httptest.NewRequest(http.MethodGet, "http://example.com//eth/v1/beacon/states/{state_id}/randao?epoch=0", nil)
request.SetPathValue("state_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
earlyServer.GetRandao(writer, request)
require.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
require.StringContains(t, "Epoch is out of range for the randao mixes of the state", e.Message)
})
}
func Test_currentCommitteeIndicesFromState(t *testing.T) {


@@ -58,12 +58,7 @@ func (s *Server) Blobs(w http.ResponseWriter, r *http.Request) {
}
blk, err := s.Blocker.Block(ctx, []byte(blockId))
if err != nil {
httputil.HandleError(w, "Could not fetch block: "+err.Error(), http.StatusInternalServerError)
return
}
if blk == nil {
httputil.HandleError(w, "Block not found", http.StatusNotFound)
if !shared.WriteBlockFetchError(w, blk, err) {
return
}
@@ -174,12 +169,7 @@ func (s *Server) GetBlobs(w http.ResponseWriter, r *http.Request) {
}
blk, err := s.Blocker.Block(ctx, []byte(blockId))
if err != nil {
httputil.HandleError(w, "Could not fetch block: "+err.Error(), http.StatusInternalServerError)
return
}
if blk == nil {
httputil.HandleError(w, "Block not found", http.StatusNotFound)
if !shared.WriteBlockFetchError(w, blk, err) {
return
}


@@ -71,7 +71,7 @@ func TestBlobs(t *testing.T) {
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.StringContains(t, "blobs are not supported for Phase 0 fork", e.Message)
assert.StringContains(t, "not supported for Phase 0 fork", e.Message)
})
t.Run("head", func(t *testing.T) {
u := "http://foo.example/head"
@@ -336,11 +336,23 @@ func TestBlobs(t *testing.T) {
require.Equal(t, false, resp.Finalized)
})
t.Run("slot before Deneb fork", func(t *testing.T) {
// Create and save a pre-Deneb block at slot 31
predenebBlock := util.NewBeaconBlock()
predenebBlock.Block.Slot = 31
util.SaveBlock(t, t.Context(), db, predenebBlock)
u := "http://foo.example/31"
request := httptest.NewRequest("GET", u, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{}
s.Blocker = &lookup.BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
BlobStorage: bs,
}
s.Blobs(writer, request)
@@ -348,7 +360,7 @@ func TestBlobs(t *testing.T) {
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.StringContains(t, "blobs are not supported before Deneb fork", e.Message)
assert.StringContains(t, "not supported before", e.Message)
})
t.Run("malformed block ID", func(t *testing.T) {
u := "http://foo.example/foo"
@@ -609,7 +621,7 @@ func TestGetBlobs(t *testing.T) {
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.StringContains(t, "blobs are not supported for Phase 0 fork", e.Message)
assert.StringContains(t, "not supported for Phase 0 fork", e.Message)
})
t.Run("head", func(t *testing.T) {
u := "http://foo.example/head"
@@ -808,11 +820,22 @@ func TestGetBlobs(t *testing.T) {
require.Equal(t, false, resp.Finalized)
})
t.Run("slot before Deneb fork", func(t *testing.T) {
// Create and save a pre-Deneb block at slot 31
predenebBlock := util.NewBeaconBlock()
predenebBlock.Block.Slot = 31
util.SaveBlock(t, t.Context(), db, predenebBlock)
u := "http://foo.example/31"
request := httptest.NewRequest("GET", u, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{}
s.Blocker = &lookup.BeaconDbBlocker{
BeaconDB: db,
ChainInfoFetcher: &mockChain.ChainService{},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
}
s.GetBlobs(writer, request)
@@ -820,7 +843,7 @@ func TestGetBlobs(t *testing.T) {
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.StringContains(t, "blobs are not supported before Deneb fork", e.Message)
assert.StringContains(t, "not supported before", e.Message)
})
t.Run("malformed block ID", func(t *testing.T) {
u := "http://foo.example/foo"


@@ -27,5 +27,7 @@ go_test(
"//testing/require:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
],
)


@@ -132,6 +132,10 @@ func convertValueForJSON(v reflect.Value, tag string) interface{} {
}
return m
// ===== String =====
case reflect.String:
return v.String()
// ===== Default =====
default:
log.WithFields(log.Fields{


@@ -8,6 +8,7 @@ import (
"math"
"net/http"
"net/http/httptest"
"reflect"
"testing"
"github.com/OffchainLabs/prysm/v6/api/server/structs"
@@ -17,6 +18,8 @@ import (
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
log "github.com/sirupsen/logrus"
logTest "github.com/sirupsen/logrus/hooks/test"
)
func TestGetDepositContract(t *testing.T) {
@@ -715,3 +718,35 @@ func TestGetSpec_BlobSchedule_NotFulu(t *testing.T) {
_, exists := data["BLOB_SCHEDULE"]
require.Equal(t, false, exists)
}
func TestConvertValueForJSON_NoErrorLogsForStrings(t *testing.T) {
logHook := logTest.NewLocal(log.StandardLogger())
defer logHook.Reset()
stringTestCases := []struct {
tag string
value string
}{
{"CONFIG_NAME", "mainnet"},
{"PRESET_BASE", "mainnet"},
{"DEPOSIT_CONTRACT_ADDRESS", "0x00000000219ab540356cBB839Cbe05303d7705Fa"},
{"TERMINAL_TOTAL_DIFFICULTY", "58750000000000000000000"},
}
for _, tc := range stringTestCases {
t.Run(tc.tag, func(t *testing.T) {
logHook.Reset()
// Convert the string value
v := reflect.ValueOf(tc.value)
result := convertValueForJSON(v, tc.tag)
// Verify the result is correct
require.Equal(t, tc.value, result)
// Verify NO error was logged about unsupported field kind
require.LogsDoNotContain(t, logHook, "Unsupported config field kind")
require.LogsDoNotContain(t, logHook, "kind=string")
})
}
}


@@ -13,13 +13,19 @@ go_library(
"//api/server/structs:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/rpc/core:go_default_library",
"//beacon-chain/rpc/eth/helpers:go_default_library",
"//beacon-chain/rpc/eth/shared:go_default_library",
"//beacon-chain/rpc/lookup:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//network/httputil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)
@@ -34,9 +40,13 @@ go_test(
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/forkchoice/doubly-linked-tree:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/rpc/core:go_default_library",
"//beacon-chain/rpc/testutil:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",


@@ -4,19 +4,31 @@ import (
"context"
"encoding/json"
"fmt"
"math"
"net/http"
"net/url"
"strconv"
"strings"
"github.com/OffchainLabs/prysm/v6/api"
"github.com/OffchainLabs/prysm/v6/api/server/structs"
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/core"
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/eth/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/eth/shared"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
"github.com/OffchainLabs/prysm/v6/network/httputil"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
)
const errMsgStateFromConsensus = "Could not convert consensus state to response"
const (
errMsgStateFromConsensus = "Could not convert consensus state to response"
)
// GetBeaconStateV2 returns the full beacon state for a given state ID.
func (s *Server) GetBeaconStateV2(w http.ResponseWriter, r *http.Request) {
@@ -208,3 +220,176 @@ func (s *Server) GetForkChoice(w http.ResponseWriter, r *http.Request) {
}
httputil.WriteJson(w, resp)
}
// DataColumnSidecars retrieves data column sidecars for a given block id.
func (s *Server) DataColumnSidecars(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "debug.DataColumnSidecars")
defer span.End()
// Check if we're before Fulu fork - data columns are only available from Fulu onwards
fuluForkEpoch := params.BeaconConfig().FuluForkEpoch
if fuluForkEpoch == math.MaxUint64 {
httputil.HandleError(w, "Data columns are not supported - Fulu fork not configured", http.StatusBadRequest)
return
}
// Check if we're before Fulu fork based on current slot
currentSlot := s.GenesisTimeFetcher.CurrentSlot()
currentEpoch := primitives.Epoch(currentSlot / params.BeaconConfig().SlotsPerEpoch)
if currentEpoch < fuluForkEpoch {
httputil.HandleError(w, "Data columns are not supported - before Fulu fork", http.StatusBadRequest)
return
}
indices, err := parseDataColumnIndices(r.URL)
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
segments := strings.Split(r.URL.Path, "/")
blockId := segments[len(segments)-1]
verifiedDataColumns, rpcErr := s.Blocker.DataColumns(ctx, blockId, indices)
if rpcErr != nil {
code := core.ErrorReasonToHTTP(rpcErr.Reason)
switch code {
case http.StatusBadRequest:
httputil.HandleError(w, "Bad request: "+rpcErr.Err.Error(), code)
return
case http.StatusNotFound:
httputil.HandleError(w, "Not found: "+rpcErr.Err.Error(), code)
return
case http.StatusInternalServerError:
httputil.HandleError(w, "Internal server error: "+rpcErr.Err.Error(), code)
return
default:
httputil.HandleError(w, rpcErr.Err.Error(), code)
return
}
}
blk, err := s.Blocker.Block(ctx, []byte(blockId))
if !shared.WriteBlockFetchError(w, blk, err) {
return
}
if httputil.RespondWithSsz(r) {
sszResp, err := buildDataColumnSidecarsSSZResponse(verifiedDataColumns)
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
w.Header().Set(api.VersionHeader, version.String(blk.Version()))
httputil.WriteSsz(w, sszResp)
return
}
blkRoot, err := blk.Block().HashTreeRoot()
if err != nil {
httputil.HandleError(w, "Could not hash block: "+err.Error(), http.StatusInternalServerError)
return
}
isOptimistic, err := s.OptimisticModeFetcher.IsOptimisticForRoot(ctx, blkRoot)
if err != nil {
httputil.HandleError(w, "Could not check if block is optimistic: "+err.Error(), http.StatusInternalServerError)
return
}
data := buildDataColumnSidecarsJsonResponse(verifiedDataColumns)
resp := &structs.GetDebugDataColumnSidecarsResponse{
Version: version.String(blk.Version()),
Data: data,
ExecutionOptimistic: isOptimistic,
Finalized: s.FinalizationFetcher.IsFinalized(ctx, blkRoot),
}
w.Header().Set(api.VersionHeader, version.String(blk.Version()))
httputil.WriteJson(w, resp)
}
// parseDataColumnIndices parses the requested data column indices, deduplicating them and
// returning an error if any index is not a valid integer or is out of range.
func parseDataColumnIndices(url *url.URL) ([]int, error) {
numberOfColumns := params.BeaconConfig().NumberOfColumns
rawIndices := url.Query()["indices"]
indices := make([]int, 0, numberOfColumns)
invalidIndices := make([]string, 0)
loop:
for _, raw := range rawIndices {
ix, err := strconv.Atoi(raw)
if err != nil {
invalidIndices = append(invalidIndices, raw)
continue
}
if ix < 0 || uint64(ix) >= numberOfColumns {
invalidIndices = append(invalidIndices, raw)
continue
}
for i := range indices {
if ix == indices[i] {
continue loop
}
}
indices = append(indices, ix)
}
if len(invalidIndices) > 0 {
return nil, fmt.Errorf("requested data column indices %v are invalid", invalidIndices)
}
return indices, nil
}
func buildDataColumnSidecarsJsonResponse(verifiedDataColumns []blocks.VerifiedRODataColumn) []*structs.DataColumnSidecar {
sidecars := make([]*structs.DataColumnSidecar, len(verifiedDataColumns))
for i, dc := range verifiedDataColumns {
column := make([]string, len(dc.Column))
for j, cell := range dc.Column {
column[j] = hexutil.Encode(cell)
}
kzgCommitments := make([]string, len(dc.KzgCommitments))
for j, commitment := range dc.KzgCommitments {
kzgCommitments[j] = hexutil.Encode(commitment)
}
kzgProofs := make([]string, len(dc.KzgProofs))
for j, proof := range dc.KzgProofs {
kzgProofs[j] = hexutil.Encode(proof)
}
kzgCommitmentsInclusionProof := make([]string, len(dc.KzgCommitmentsInclusionProof))
for j, proof := range dc.KzgCommitmentsInclusionProof {
kzgCommitmentsInclusionProof[j] = hexutil.Encode(proof)
}
sidecars[i] = &structs.DataColumnSidecar{
Index: strconv.FormatUint(dc.Index, 10),
Column: column,
KzgCommitments: kzgCommitments,
KzgProofs: kzgProofs,
SignedBeaconBlockHeader: structs.SignedBeaconBlockHeaderFromConsensus(dc.SignedBlockHeader),
KzgCommitmentsInclusionProof: kzgCommitmentsInclusionProof,
}
}
return sidecars
}
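Each cell, commitment, and proof above is rendered as a 0x-prefixed hex string. The conversion `hexutil.Encode` performs can be sketched with only the standard library (the helper name is illustrative):

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// encode0x mimics go-ethereum's hexutil.Encode: "0x" followed by lowercase hex.
func encode0x(b []byte) string {
	return "0x" + hex.EncodeToString(b)
}

func main() {
	fmt.Println(encode0x([]byte{0xde, 0xad, 0xbe, 0xef})) // 0xdeadbeef
}
```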
// buildDataColumnSidecarsSSZResponse builds the SSZ response body by concatenating
// each sidecar's SSZ encoding in order.
func buildDataColumnSidecarsSSZResponse(verifiedDataColumns []blocks.VerifiedRODataColumn) ([]byte, error) {
if len(verifiedDataColumns) == 0 {
return []byte{}, nil
}
// Pre-allocate buffer for all sidecars using the known SSZ size
sizePerSidecar := (&ethpb.DataColumnSidecar{}).SizeSSZ()
ssz := make([]byte, 0, sizePerSidecar*len(verifiedDataColumns))
// Marshal and append each sidecar
for i, sidecar := range verifiedDataColumns {
sszrep, err := sidecar.MarshalSSZ()
if err != nil {
return nil, errors.Wrapf(err, "failed to marshal data column sidecar at index %d", i)
}
ssz = append(ssz, sszrep...)
}
return ssz, nil
}

View File

@@ -2,9 +2,13 @@ package debug
import (
"bytes"
"context"
"encoding/json"
"errors"
"math"
"net/http"
"net/http/httptest"
"net/url"
"testing"
"github.com/OffchainLabs/prysm/v6/api"
@@ -13,9 +17,13 @@ import (
dbtest "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
doublylinkedtree "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/core"
"github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/testutil"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
@@ -515,3 +523,285 @@ func TestGetForkChoice(t *testing.T) {
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, "2", resp.FinalizedCheckpoint.Epoch)
}
func TestDataColumnSidecars(t *testing.T) {
t.Run("Fulu fork not configured", func(t *testing.T) {
// Save the original config
originalConfig := params.BeaconConfig()
defer func() { params.OverrideBeaconConfig(originalConfig) }()
// Set Fulu fork epoch to MaxUint64 (unconfigured)
config := params.BeaconConfig().Copy()
config.FuluForkEpoch = math.MaxUint64
params.OverrideBeaconConfig(config)
chainService := &blockchainmock.ChainService{}
// Create a mock blocker to avoid nil pointer
mockBlocker := &testutil.MockBlocker{}
s := &Server{
GenesisTimeFetcher: chainService,
Blocker: mockBlocker,
}
request := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/debug/beacon/data_column_sidecars/head", nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.DataColumnSidecars(writer, request)
require.Equal(t, http.StatusBadRequest, writer.Code)
assert.StringContains(t, "Data columns are not supported - Fulu fork not configured", writer.Body.String())
})
t.Run("Before Fulu fork", func(t *testing.T) {
// Save the original config
originalConfig := params.BeaconConfig()
defer func() { params.OverrideBeaconConfig(originalConfig) }()
// Set Fulu fork epoch to 100
config := params.BeaconConfig().Copy()
config.FuluForkEpoch = 100
params.OverrideBeaconConfig(config)
chainService := &blockchainmock.ChainService{}
currentSlot := primitives.Slot(0) // Current slot 0 (epoch 0, before epoch 100)
chainService.Slot = &currentSlot
// Create a mock blocker to avoid nil pointer
mockBlocker := &testutil.MockBlocker{
DataColumnsFunc: func(ctx context.Context, id string, indices []int) ([]blocks.VerifiedRODataColumn, *core.RpcError) {
return nil, &core.RpcError{Err: errors.New("before Fulu fork"), Reason: core.BadRequest}
},
}
s := &Server{
GenesisTimeFetcher: chainService,
Blocker: mockBlocker,
}
request := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/debug/beacon/data_column_sidecars/head", nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.DataColumnSidecars(writer, request)
require.Equal(t, http.StatusBadRequest, writer.Code)
assert.StringContains(t, "Data columns are not supported - before Fulu fork", writer.Body.String())
})
t.Run("Invalid indices", func(t *testing.T) {
// Save the original config
originalConfig := params.BeaconConfig()
defer func() { params.OverrideBeaconConfig(originalConfig) }()
// Set Fulu fork epoch to 0 (already activated)
config := params.BeaconConfig().Copy()
config.FuluForkEpoch = 0
params.OverrideBeaconConfig(config)
chainService := &blockchainmock.ChainService{}
currentSlot := primitives.Slot(0) // Current slot 0 (epoch 0, at fork)
chainService.Slot = &currentSlot
// Create a mock blocker to avoid nil pointer
mockBlocker := &testutil.MockBlocker{
DataColumnsFunc: func(ctx context.Context, id string, indices []int) ([]blocks.VerifiedRODataColumn, *core.RpcError) {
return nil, &core.RpcError{Err: errors.New("invalid index"), Reason: core.BadRequest}
},
}
s := &Server{
GenesisTimeFetcher: chainService,
Blocker: mockBlocker,
}
// Test with invalid index (out of range)
request := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/debug/beacon/data_column_sidecars/head?indices=9999", nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.DataColumnSidecars(writer, request)
require.Equal(t, http.StatusBadRequest, writer.Code)
assert.StringContains(t, "requested data column indices [9999] are invalid", writer.Body.String())
})
t.Run("Block not found", func(t *testing.T) {
// Save the original config
originalConfig := params.BeaconConfig()
defer func() { params.OverrideBeaconConfig(originalConfig) }()
// Set Fulu fork epoch to 0 (already activated)
config := params.BeaconConfig().Copy()
config.FuluForkEpoch = 0
params.OverrideBeaconConfig(config)
chainService := &blockchainmock.ChainService{}
currentSlot := primitives.Slot(0) // Current slot 0
chainService.Slot = &currentSlot
// Create a mock blocker that returns block not found
mockBlocker := &testutil.MockBlocker{
DataColumnsFunc: func(ctx context.Context, id string, indices []int) ([]blocks.VerifiedRODataColumn, *core.RpcError) {
return nil, &core.RpcError{Err: errors.New("block not found"), Reason: core.NotFound}
},
BlockToReturn: nil, // Block not found
}
s := &Server{
GenesisTimeFetcher: chainService,
Blocker: mockBlocker,
}
request := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/debug/beacon/data_column_sidecars/head", nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.DataColumnSidecars(writer, request)
require.Equal(t, http.StatusNotFound, writer.Code)
})
t.Run("Empty data columns", func(t *testing.T) {
// Save the original config
originalConfig := params.BeaconConfig()
defer func() { params.OverrideBeaconConfig(originalConfig) }()
// Set Fulu fork epoch to 0
config := params.BeaconConfig().Copy()
config.FuluForkEpoch = 0
params.OverrideBeaconConfig(config)
// Create a simple test block
signedTestBlock := util.NewBeaconBlock()
roBlock, err := blocks.NewSignedBeaconBlock(signedTestBlock)
require.NoError(t, err)
chainService := &blockchainmock.ChainService{}
currentSlot := primitives.Slot(0) // Current slot 0
chainService.Slot = &currentSlot
chainService.OptimisticRoots = make(map[[32]byte]bool)
chainService.FinalizedRoots = make(map[[32]byte]bool)
mockBlocker := &testutil.MockBlocker{
DataColumnsFunc: func(ctx context.Context, id string, indices []int) ([]blocks.VerifiedRODataColumn, *core.RpcError) {
return []blocks.VerifiedRODataColumn{}, nil // Empty data columns
},
BlockToReturn: roBlock,
}
s := &Server{
GenesisTimeFetcher: chainService,
OptimisticModeFetcher: chainService,
FinalizationFetcher: chainService,
Blocker: mockBlocker,
}
request := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/debug/beacon/data_column_sidecars/head", nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.DataColumnSidecars(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetDebugDataColumnSidecarsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 0, len(resp.Data))
})
}
func TestParseDataColumnIndices(t *testing.T) {
// Save the original config
originalConfig := params.BeaconConfig()
defer func() { params.OverrideBeaconConfig(originalConfig) }()
// Set NumberOfColumns to 128 for testing
config := params.BeaconConfig().Copy()
config.NumberOfColumns = 128
params.OverrideBeaconConfig(config)
tests := []struct {
name string
queryParams map[string][]string
expected []int
expectError bool
}{
{
name: "no indices",
queryParams: map[string][]string{},
expected: []int{},
expectError: false,
},
{
name: "valid indices",
queryParams: map[string][]string{"indices": {"0", "1", "127"}},
expected: []int{0, 1, 127},
expectError: false,
},
{
name: "duplicate indices",
queryParams: map[string][]string{"indices": {"0", "1", "0"}},
expected: []int{0, 1},
expectError: false,
},
{
name: "invalid string index",
queryParams: map[string][]string{"indices": {"abc"}},
expected: nil,
expectError: true,
},
{
name: "negative index",
queryParams: map[string][]string{"indices": {"-1"}},
expected: nil,
expectError: true,
},
{
name: "index too large",
queryParams: map[string][]string{"indices": {"128"}}, // 128 >= NumberOfColumns (128)
expected: nil,
expectError: true,
},
{
name: "mixed valid and invalid",
queryParams: map[string][]string{"indices": {"0", "abc", "1"}},
expected: nil,
expectError: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
u, err := url.Parse("http://example.com/test")
require.NoError(t, err)
q := u.Query()
for key, values := range tt.queryParams {
for _, value := range values {
q.Add(key, value)
}
}
u.RawQuery = q.Encode()
result, err := parseDataColumnIndices(u)
if tt.expectError {
assert.NotNil(t, err)
} else {
require.NoError(t, err)
assert.DeepEqual(t, tt.expected, result)
}
})
}
}
func TestBuildDataColumnSidecarsSSZResponse(t *testing.T) {
t.Run("empty data columns", func(t *testing.T) {
result, err := buildDataColumnSidecarsSSZResponse([]blocks.VerifiedRODataColumn{})
require.NoError(t, err)
require.DeepEqual(t, []byte{}, result)
})
t.Run("get SSZ size", func(t *testing.T) {
size := (&ethpb.DataColumnSidecar{}).SizeSSZ()
assert.Equal(t, true, size > 0)
})
}

View File

@@ -20,4 +20,6 @@ type Server struct {
ForkchoiceFetcher blockchain.ForkchoiceFetcher
FinalizationFetcher blockchain.FinalizationFetcher
ChainInfoFetcher blockchain.ChainInfoFetcher
GenesisTimeFetcher blockchain.TimeFetcher
Blocker lookup.Blocker
}

View File

@@ -39,8 +39,8 @@ func HandleIsOptimisticError(w http.ResponseWriter, err error) {
return
}
var blockRootsNotFoundErr *lookup.BlockRootsNotFoundError
if errors.As(err, &blockRootsNotFoundErr) {
var blockNotFoundErr *lookup.BlockNotFoundError
if errors.As(err, &blockNotFoundErr) {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusNotFound)
return
}

View File

@@ -138,7 +138,7 @@ func isStateRootOptimistic(
return true, errors.Wrapf(err, "could not get block roots for slot %d", st.Slot())
}
if !has {
return true, lookup.NewBlockRootsNotFoundError()
return true, lookup.NewBlockNotFoundError("no block roots returned from the database")
}
for _, r := range roots {
b, err := database.Block(ctx, r)

View File

@@ -286,8 +286,8 @@ func TestIsOptimistic(t *testing.T) {
require.NoError(t, err)
mf := &testutil.MockStater{BeaconState: st}
_, err = IsOptimistic(ctx, []byte(hexutil.Encode(bytesutil.PadTo([]byte("root"), 32))), cs, mf, cs, db)
var blockRootsNotFoundErr *lookup.BlockRootsNotFoundError
require.Equal(t, true, errors.As(err, &blockRootsNotFoundErr))
var blockNotFoundErr *lookup.BlockNotFoundError
require.Equal(t, true, errors.As(err, &blockNotFoundErr))
})
})
@@ -395,11 +395,11 @@ func TestHandleIsOptimisticError(t *testing.T) {
})
t.Run("no block roots error handled as 404", func(t *testing.T) {
rr := httptest.NewRecorder()
blockRootsErr := lookup.NewBlockRootsNotFoundError()
HandleIsOptimisticError(rr, blockRootsErr)
blockNotFoundErr := lookup.NewBlockNotFoundError("no block roots returned from the database")
HandleIsOptimisticError(rr, blockNotFoundErr)
require.Equal(t, http.StatusNotFound, rr.Code)
require.StringContains(t, blockRootsErr.Error(), rr.Body.String())
require.StringContains(t, blockNotFoundErr.Error(), rr.Body.String())
})
t.Run("generic error handled as 500", func(t *testing.T) {

View File

@@ -29,6 +29,11 @@ func WriteStateFetchError(w http.ResponseWriter, err error) {
// WriteBlockFetchError writes an appropriate error based on the supplied argument.
// The argument error should be a result of fetching block.
func WriteBlockFetchError(w http.ResponseWriter, blk interfaces.ReadOnlySignedBeaconBlock, err error) bool {
var blockNotFoundErr *lookup.BlockNotFoundError
if errors.As(err, &blockNotFoundErr) {
httputil.HandleError(w, "Block not found: "+blockNotFoundErr.Error(), http.StatusNotFound)
return false
}
var invalidBlockIdErr *lookup.BlockIdParseError
if errors.As(err, &invalidBlockIdErr) {
httputil.HandleError(w, "Invalid block ID: "+invalidBlockIdErr.Error(), http.StatusBadRequest)

View File

@@ -49,3 +49,59 @@ func TestWriteStateFetchError(t *testing.T) {
assert.NoError(t, json.Unmarshal(writer.Body.Bytes(), e), "failed to unmarshal response")
}
}
// TestWriteBlockFetchError tests the WriteBlockFetchError function
// to ensure that the correct error message and code are written to the response
// and that the function returns the correct boolean value.
func TestWriteBlockFetchError(t *testing.T) {
cases := []struct {
name string
err error
expectedMessage string
expectedCode int
expectedReturn bool
}{
{
name: "BlockNotFoundError should return 404",
err: lookup.NewBlockNotFoundError("block not found at slot 123"),
expectedMessage: "Block not found",
expectedCode: http.StatusNotFound,
expectedReturn: false,
},
{
name: "BlockIdParseError should return 400",
err: &lookup.BlockIdParseError{},
expectedMessage: "Invalid block ID",
expectedCode: http.StatusBadRequest,
expectedReturn: false,
},
{
name: "Generic error should return 500",
err: errors.New("database connection failed"),
expectedMessage: "Could not get block from block ID",
expectedCode: http.StatusInternalServerError,
expectedReturn: false,
},
{
name: "Nil block should return 404",
err: nil,
expectedMessage: "Could not find requested block",
expectedCode: http.StatusNotFound,
expectedReturn: false,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
writer := httptest.NewRecorder()
result := WriteBlockFetchError(writer, nil, c.err)
assert.Equal(t, c.expectedReturn, result, "incorrect return value")
assert.Equal(t, c.expectedCode, writer.Code, "incorrect status code")
assert.StringContains(t, c.expectedMessage, writer.Body.String(), "incorrect error message")
e := &httputil.DefaultJsonError{}
assert.NoError(t, json.Unmarshal(writer.Body.Bytes(), e), "failed to unmarshal response")
})
}
}

View File

@@ -24,17 +24,19 @@ import (
"github.com/pkg/errors"
)
type BlockRootsNotFoundError struct {
// BlockNotFoundError represents an error when a block cannot be found.
type BlockNotFoundError struct {
message string
}
func NewBlockRootsNotFoundError() *BlockRootsNotFoundError {
return &BlockRootsNotFoundError{
message: "no block roots returned from the database",
// NewBlockNotFoundError creates a new BlockNotFoundError with a custom message.
func NewBlockNotFoundError(msg string) *BlockNotFoundError {
return &BlockNotFoundError{
message: msg,
}
}
func (e BlockRootsNotFoundError) Error() string {
func (e *BlockNotFoundError) Error() string {
return e.message
}
@@ -59,6 +61,7 @@ func (e BlockIdParseError) Error() string {
type Blocker interface {
Block(ctx context.Context, id []byte) (interfaces.ReadOnlySignedBeaconBlock, error)
Blobs(ctx context.Context, id string, opts ...options.BlobsOption) ([]*blocks.VerifiedROBlob, *core.RpcError)
DataColumns(ctx context.Context, id string, indices []int) ([]blocks.VerifiedRODataColumn, *core.RpcError)
}
// BeaconDbBlocker is an implementation of Blocker. It retrieves blocks from the beacon chain database.
@@ -70,6 +73,141 @@ type BeaconDbBlocker struct {
DataColumnStorage *filesystem.DataColumnStorage
}
// resolveBlockIDByRootOrSlot resolves a block ID that is either a root (hex string or raw bytes) or a slot number.
func (p *BeaconDbBlocker) resolveBlockIDByRootOrSlot(ctx context.Context, id string) ([fieldparams.RootLength]byte, interfaces.ReadOnlySignedBeaconBlock, error) {
var rootSlice []byte
// Check if it's a hex-encoded root
if bytesutil.IsHex([]byte(id)) {
var err error
rootSlice, err = bytesutil.DecodeHexWithLength(id, fieldparams.RootLength)
if err != nil {
e := NewBlockIdParseError(err)
return [32]byte{}, nil, &e
}
} else if len(id) == 32 {
// Handle raw 32-byte root
rootSlice = []byte(id)
} else {
// Try to parse as slot number
slot, err := strconv.ParseUint(id, 10, 64)
if err != nil {
e := NewBlockIdParseError(err)
return [32]byte{}, nil, &e
}
// Get block roots for the slot
ok, roots, err := p.BeaconDB.BlockRootsBySlot(ctx, primitives.Slot(slot))
if err != nil {
return [32]byte{}, nil, errors.Wrapf(err, "could not retrieve block roots for slot %d", slot)
}
if !ok || len(roots) == 0 {
return [32]byte{}, nil, NewBlockNotFoundError(fmt.Sprintf("no blocks found at slot %d", slot))
}
// Find the canonical block root
if p.ChainInfoFetcher == nil {
return [32]byte{}, nil, errors.New("chain info fetcher is not configured")
}
for _, root := range roots {
canonical, err := p.ChainInfoFetcher.IsCanonical(ctx, root)
if err != nil {
return [32]byte{}, nil, errors.Wrap(err, "could not determine if block root is canonical")
}
if canonical {
rootSlice = root[:]
break
}
}
// If no canonical block found, rootSlice remains nil
if rootSlice == nil {
// No canonical block at this slot
return [32]byte{}, nil, NewBlockNotFoundError(fmt.Sprintf("no canonical block found at slot %d", slot))
}
}
// Fetch the block using the root
root := bytesutil.ToBytes32(rootSlice)
blk, err := p.BeaconDB.Block(ctx, root)
if err != nil {
return [32]byte{}, nil, errors.Wrapf(err, "failed to retrieve block %#x from db", rootSlice)
}
if blk == nil {
return [32]byte{}, nil, NewBlockNotFoundError(fmt.Sprintf("block %#x not found in db", rootSlice))
}
return root, blk, nil
}
// resolveBlockID resolves a block ID to root and signed block.
// Fork validation is handled outside this function by the calling methods.
func (p *BeaconDbBlocker) resolveBlockID(ctx context.Context, id string) ([fieldparams.RootLength]byte, interfaces.ReadOnlySignedBeaconBlock, error) {
switch id {
case "genesis":
blk, err := p.BeaconDB.GenesisBlock(ctx)
if err != nil {
return [32]byte{}, nil, errors.Wrap(err, "could not retrieve genesis block")
}
if blk == nil {
return [32]byte{}, nil, NewBlockNotFoundError("genesis block not found")
}
root, err := blk.Block().HashTreeRoot()
if err != nil {
return [32]byte{}, nil, errors.Wrap(err, "could not get genesis block root")
}
return root, blk, nil
case "head":
blk, err := p.ChainInfoFetcher.HeadBlock(ctx)
if err != nil {
return [32]byte{}, nil, errors.Wrap(err, "could not retrieve head block")
}
if blk == nil {
return [32]byte{}, nil, NewBlockNotFoundError("head block not found")
}
root, err := blk.Block().HashTreeRoot()
if err != nil {
return [32]byte{}, nil, errors.Wrap(err, "could not get head block root")
}
return root, blk, nil
case "finalized":
finalized := p.ChainInfoFetcher.FinalizedCheckpt()
if finalized == nil {
return [32]byte{}, nil, errors.New("received nil finalized checkpoint")
}
finalizedRoot := bytesutil.ToBytes32(finalized.Root)
blk, err := p.BeaconDB.Block(ctx, finalizedRoot)
if err != nil {
return [32]byte{}, nil, errors.Wrap(err, "could not retrieve finalized block")
}
if blk == nil {
return [32]byte{}, nil, NewBlockNotFoundError(fmt.Sprintf("finalized block %#x not found", finalizedRoot))
}
return finalizedRoot, blk, nil
case "justified":
jcp := p.ChainInfoFetcher.CurrentJustifiedCheckpt()
if jcp == nil {
return [32]byte{}, nil, errors.New("received nil justified checkpoint")
}
justifiedRoot := bytesutil.ToBytes32(jcp.Root)
blk, err := p.BeaconDB.Block(ctx, justifiedRoot)
if err != nil {
return [32]byte{}, nil, errors.Wrap(err, "could not retrieve justified block")
}
if blk == nil {
return [32]byte{}, nil, NewBlockNotFoundError(fmt.Sprintf("justified block %#x not found", justifiedRoot))
}
return justifiedRoot, blk, nil
default:
return p.resolveBlockIDByRootOrSlot(ctx, id)
}
}
// Block returns the beacon block for a given identifier. The identifier can be one of:
// - "head" (canonical head in node's view)
// - "genesis"
@@ -79,71 +217,9 @@ type BeaconDbBlocker struct {
// - <hex encoded block root with '0x' prefix>
// - <block root>
func (p *BeaconDbBlocker) Block(ctx context.Context, id []byte) (interfaces.ReadOnlySignedBeaconBlock, error) {
var err error
var blk interfaces.ReadOnlySignedBeaconBlock
switch string(id) {
case "head":
blk, err = p.ChainInfoFetcher.HeadBlock(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not retrieve head block")
}
case "finalized":
finalized := p.ChainInfoFetcher.FinalizedCheckpt()
finalizedRoot := bytesutil.ToBytes32(finalized.Root)
blk, err = p.BeaconDB.Block(ctx, finalizedRoot)
if err != nil {
return nil, errors.New("could not get finalized block from db")
}
case "genesis":
blk, err = p.BeaconDB.GenesisBlock(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not retrieve genesis block")
}
default:
if bytesutil.IsHex(id) {
decoded, err := hexutil.Decode(string(id))
if err != nil {
e := NewBlockIdParseError(err)
return nil, &e
}
blk, err = p.BeaconDB.Block(ctx, bytesutil.ToBytes32(decoded))
if err != nil {
return nil, errors.Wrap(err, "could not retrieve block")
}
} else if len(id) == 32 {
blk, err = p.BeaconDB.Block(ctx, bytesutil.ToBytes32(id))
if err != nil {
return nil, errors.Wrap(err, "could not retrieve block")
}
} else {
slot, err := strconv.ParseUint(string(id), 10, 64)
if err != nil {
e := NewBlockIdParseError(err)
return nil, &e
}
blks, err := p.BeaconDB.BlocksBySlot(ctx, primitives.Slot(slot))
if err != nil {
return nil, errors.Wrapf(err, "could not retrieve blocks for slot %d", slot)
}
_, roots, err := p.BeaconDB.BlockRootsBySlot(ctx, primitives.Slot(slot))
if err != nil {
return nil, errors.Wrapf(err, "could not retrieve block roots for slot %d", slot)
}
numBlks := len(blks)
if numBlks == 0 {
return nil, nil
}
for i, b := range blks {
canonical, err := p.ChainInfoFetcher.IsCanonical(ctx, roots[i])
if err != nil {
return nil, errors.Wrapf(err, "could not determine if block root is canonical")
}
if canonical {
blk = b
break
}
}
}
_, blk, err := p.resolveBlockID(ctx, string(id))
if err != nil {
return nil, err
}
return blk, nil
}
@@ -171,88 +247,36 @@ func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, opts ...options.
opt(cfg)
}
// Resolve block ID to root
var rootSlice []byte
switch id {
case "genesis":
return nil, &core.RpcError{Err: errors.New("blobs are not supported for Phase 0 fork"), Reason: core.BadRequest}
case "head":
var err error
rootSlice, err = p.ChainInfoFetcher.HeadRoot(ctx)
if err != nil {
return nil, &core.RpcError{Err: errors.Wrapf(err, "could not retrieve head root"), Reason: core.Internal}
}
case "finalized":
fcp := p.ChainInfoFetcher.FinalizedCheckpt()
if fcp == nil {
return nil, &core.RpcError{Err: errors.New("received nil finalized checkpoint"), Reason: core.Internal}
}
rootSlice = fcp.Root
case "justified":
jcp := p.ChainInfoFetcher.CurrentJustifiedCheckpt()
if jcp == nil {
return nil, &core.RpcError{Err: errors.New("received nil justified checkpoint"), Reason: core.Internal}
}
rootSlice = jcp.Root
default:
if bytesutil.IsHex([]byte(id)) {
var err error
rootSlice, err = bytesutil.DecodeHexWithLength(id, fieldparams.RootLength)
if err != nil {
return nil, &core.RpcError{Err: NewBlockIdParseError(err), Reason: core.BadRequest}
}
} else {
slot, err := strconv.ParseUint(id, 10, 64)
if err != nil {
return nil, &core.RpcError{Err: NewBlockIdParseError(err), Reason: core.BadRequest}
}
denebStart, err := slots.EpochStart(params.BeaconConfig().DenebForkEpoch)
if err != nil {
return nil, &core.RpcError{Err: errors.Wrap(err, "could not calculate Deneb start slot"), Reason: core.Internal}
}
if primitives.Slot(slot) < denebStart {
return nil, &core.RpcError{Err: errors.New("blobs are not supported before Deneb fork"), Reason: core.BadRequest}
}
ok, roots, err := p.BeaconDB.BlockRootsBySlot(ctx, primitives.Slot(slot))
if !ok {
return nil, &core.RpcError{Err: fmt.Errorf("no block roots at slot %d", slot), Reason: core.NotFound}
}
if err != nil {
return nil, &core.RpcError{Err: errors.Wrapf(err, "failed to get block roots for slot %d", slot), Reason: core.Internal}
}
rootSlice = roots[0][:]
if len(roots) == 1 {
break
}
for _, blockRoot := range roots {
canonical, err := p.ChainInfoFetcher.IsCanonical(ctx, blockRoot)
if err != nil {
return nil, &core.RpcError{Err: errors.Wrapf(err, "could not determine if block %#x is canonical", blockRoot), Reason: core.Internal}
}
if canonical {
rootSlice = blockRoot[:]
break
}
}
}
// Check for genesis block first (not supported for blobs)
if id == "genesis" {
return nil, &core.RpcError{Err: errors.New("blobs are not supported for Phase 0 fork"), Reason: core.BadRequest}
}
root := bytesutil.ToBytes32(rootSlice)
roSignedBlock, err := p.BeaconDB.Block(ctx, root)
// Resolve block ID to root and block
root, roSignedBlock, err := p.resolveBlockID(ctx, id)
if err != nil {
return nil, &core.RpcError{Err: errors.Wrapf(err, "failed to retrieve block %#x from db", rootSlice), Reason: core.Internal}
var blockNotFound *BlockNotFoundError
var blockIdParseErr *BlockIdParseError
reason := core.Internal // Default to Internal for unexpected errors
if errors.As(err, &blockNotFound) {
reason = core.NotFound
} else if errors.As(err, &blockIdParseErr) {
reason = core.BadRequest
}
return nil, &core.RpcError{Err: err, Reason: reason}
}
if roSignedBlock == nil {
return nil, &core.RpcError{Err: fmt.Errorf("block %#x not found in db", rootSlice), Reason: core.NotFound}
slot := roSignedBlock.Block().Slot()
if slots.ToEpoch(slot) < params.BeaconConfig().DenebForkEpoch {
return nil, &core.RpcError{Err: errors.New("blobs are not supported before Deneb fork"), Reason: core.BadRequest}
}
roBlock := roSignedBlock.Block()
commitments, err := roBlock.Body().BlobKzgCommitments()
if err != nil {
return nil, &core.RpcError{Err: errors.Wrapf(err, "failed to retrieve kzg commitments from block %#x", rootSlice), Reason: core.Internal}
return nil, &core.RpcError{Err: errors.Wrapf(err, "failed to retrieve kzg commitments from block %#x", root), Reason: core.Internal}
}
// If there are no commitments return 200 w/ empty list
@@ -266,7 +290,7 @@ func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, opts ...options.
if fuluForkEpoch != primitives.Epoch(math.MaxUint64) {
fuluForkSlot, err = slots.EpochStart(fuluForkEpoch)
if err != nil {
return nil, &core.RpcError{Err: errors.Wrap(err, "could not calculate peerDAS start slot"), Reason: core.Internal}
return nil, &core.RpcError{Err: errors.Wrap(err, "could not calculate Fulu start slot"), Reason: core.Internal}
}
}
@@ -461,3 +485,93 @@ func (p *BeaconDbBlocker) neededDataColumnSidecars(root [fieldparams.RootLength]
return verifiedRoSidecars, nil
}
// DataColumns returns the data column sidecars for a given block ID and column indices. The identifier can be one of:
// - "head" (canonical head in node's view)
// - "genesis"
// - "finalized"
// - "justified"
// - <slot>
// - <hex encoded block root with '0x' prefix>
// - <block root>
//
// cases:
// - no block, 404
// - block exists, before Fulu fork, 400 (data columns are not supported before Fulu fork)
func (p *BeaconDbBlocker) DataColumns(ctx context.Context, id string, indices []int) ([]blocks.VerifiedRODataColumn, *core.RpcError) {
// Check for genesis block first (not supported for data columns)
if id == "genesis" {
return nil, &core.RpcError{Err: errors.New("data columns are not supported for Phase 0 fork"), Reason: core.BadRequest}
}
// Resolve block ID to root and block
root, roSignedBlock, err := p.resolveBlockID(ctx, id)
if err != nil {
var blockNotFound *BlockNotFoundError
var blockIdParseErr *BlockIdParseError
reason := core.Internal // Default to Internal for unexpected errors
if errors.As(err, &blockNotFound) {
reason = core.NotFound
} else if errors.As(err, &blockIdParseErr) {
reason = core.BadRequest
}
return nil, &core.RpcError{Err: err, Reason: core.ErrorReason(reason)}
}
slot := roSignedBlock.Block().Slot()
fuluForkEpoch := params.BeaconConfig().FuluForkEpoch
fuluForkSlot, err := slots.EpochStart(fuluForkEpoch)
if err != nil {
return nil, &core.RpcError{Err: errors.Wrap(err, "could not calculate Fulu start slot"), Reason: core.Internal}
}
if slot < fuluForkSlot {
return nil, &core.RpcError{Err: errors.New("data columns are not supported before Fulu fork"), Reason: core.BadRequest}
}
roBlock := roSignedBlock.Block()
commitments, err := roBlock.Body().BlobKzgCommitments()
if err != nil {
return nil, &core.RpcError{Err: errors.Wrapf(err, "failed to retrieve kzg commitments from block %#x", root), Reason: core.Internal}
}
// If there are no commitments return 200 w/ empty list
if len(commitments) == 0 {
return make([]blocks.VerifiedRODataColumn, 0), nil
}
// Get column indices to retrieve
columnIndices := make([]uint64, 0)
if len(indices) == 0 {
// If no indices specified, return all columns this node is custodying
summary := p.DataColumnStorage.Summary(root)
stored := summary.Stored()
for index := range stored {
columnIndices = append(columnIndices, index)
}
} else {
// Validate and convert indices
numberOfColumns := params.BeaconConfig().NumberOfColumns
for _, index := range indices {
if index < 0 || uint64(index) >= numberOfColumns {
return nil, &core.RpcError{
Err: fmt.Errorf("requested index %d is outside valid range [0, %d)", index, numberOfColumns),
Reason: core.BadRequest,
}
}
columnIndices = append(columnIndices, uint64(index))
}
}
// Retrieve data column sidecars from storage
verifiedRoDataColumns, err := p.DataColumnStorage.Get(root, columnIndices)
if err != nil {
return nil, &core.RpcError{
Err: errors.Wrapf(err, "could not retrieve data columns for block root %#x", root),
Reason: core.Internal,
}
}
return verifiedRoDataColumns, nil
}
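The index handling in `DataColumns` above uses a half-open range check, `[0, NumberOfColumns)`, and only widens to `uint64` after the negative check short-circuits. A self-contained sketch of that pattern follows; `validateColumnIndices` and the column count of 128 are illustrative stand-ins, not part of the Prysm API.

```go
package main

import "fmt"

// validateColumnIndices mirrors the check above: reject negatives and
// anything at or beyond numberOfColumns, then widen valid values to uint64.
// The short-circuit on index < 0 keeps the uint64 conversion safe.
func validateColumnIndices(indices []int, numberOfColumns uint64) ([]uint64, error) {
	out := make([]uint64, 0, len(indices))
	for _, index := range indices {
		if index < 0 || uint64(index) >= numberOfColumns {
			return nil, fmt.Errorf("requested index %d is outside valid range [0, %d)", index, numberOfColumns)
		}
		out = append(out, uint64(index))
	}
	return out, nil
}

func main() {
	got, err := validateColumnIndices([]int{0, 1, 127}, 128)
	fmt.Println(got, err) // [0 1 127] <nil>
	_, err = validateColumnIndices([]int{-1}, 128)
	fmt.Println(err != nil) // true
}
```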


@@ -61,11 +61,12 @@ func TestGetBlock(t *testing.T) {
fetcher := &BeaconDbBlocker{
BeaconDB: beaconDB,
ChainInfoFetcher: &mockChain.ChainService{
DB: beaconDB,
Block: wsb,
Root: headBlock.BlockRoot,
FinalizedCheckPoint: &ethpb.Checkpoint{Root: blkContainers[64].BlockRoot},
CurrentJustifiedCheckPoint: &ethpb.Checkpoint{Root: blkContainers[32].BlockRoot},
CanonicalRoots: canonicalRoots,
},
}
@@ -108,6 +109,11 @@ func TestGetBlock(t *testing.T) {
blockID: []byte("finalized"),
want: blkContainers[64].Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block,
},
{
name: "justified",
blockID: []byte("justified"),
want: blkContainers[32].Block.(*ethpb.BeaconBlockContainer_Phase0Block).Phase0Block,
},
{
name: "genesis",
blockID: []byte("genesis"),
@@ -162,6 +168,112 @@ func TestGetBlock(t *testing.T) {
}
}
func TestBlobsErrorHandling(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.DenebForkEpoch = 1
params.OverrideBeaconConfig(cfg)
ctx := t.Context()
db := testDB.SetupDB(t)
t.Run("non-existent block by root returns 404", func(t *testing.T) {
blocker := &BeaconDbBlocker{
BeaconDB: db,
}
_, rpcErr := blocker.Blobs(ctx, "0x1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef")
require.NotNil(t, rpcErr)
require.Equal(t, core.ErrorReason(core.NotFound), rpcErr.Reason)
require.StringContains(t, "not found", rpcErr.Err.Error())
})
t.Run("non-existent block by slot returns 404", func(t *testing.T) {
blocker := &BeaconDbBlocker{
BeaconDB: db,
ChainInfoFetcher: &mockChain.ChainService{},
}
_, rpcErr := blocker.Blobs(ctx, "999999")
require.NotNil(t, rpcErr)
require.Equal(t, core.ErrorReason(core.NotFound), rpcErr.Reason)
require.StringContains(t, "no blocks found at slot", rpcErr.Err.Error())
})
t.Run("genesis block not found returns 404", func(t *testing.T) {
blocker := &BeaconDbBlocker{
BeaconDB: db,
}
// Note: genesis blocks don't support blobs, so this returns BadRequest
_, rpcErr := blocker.Blobs(ctx, "genesis")
require.NotNil(t, rpcErr)
require.Equal(t, core.ErrorReason(core.BadRequest), rpcErr.Reason)
require.StringContains(t, "not supported for Phase 0", rpcErr.Err.Error())
})
t.Run("finalized block not found returns 404", func(t *testing.T) {
// Set up a finalized checkpoint pointing to a non-existent block
nonExistentRoot := bytesutil.PadTo([]byte("nonexistent"), 32)
blocker := &BeaconDbBlocker{
BeaconDB: db,
ChainInfoFetcher: &mockChain.ChainService{
FinalizedCheckPoint: &ethpb.Checkpoint{Root: nonExistentRoot},
},
}
_, rpcErr := blocker.Blobs(ctx, "finalized")
require.NotNil(t, rpcErr)
require.Equal(t, core.ErrorReason(core.NotFound), rpcErr.Reason)
require.StringContains(t, "finalized block", rpcErr.Err.Error())
require.StringContains(t, "not found", rpcErr.Err.Error())
})
t.Run("justified block not found returns 404", func(t *testing.T) {
// Set up a justified checkpoint pointing to a non-existent block
nonExistentRoot := bytesutil.PadTo([]byte("nonexistent2"), 32)
blocker := &BeaconDbBlocker{
BeaconDB: db,
ChainInfoFetcher: &mockChain.ChainService{
CurrentJustifiedCheckPoint: &ethpb.Checkpoint{Root: nonExistentRoot},
},
}
_, rpcErr := blocker.Blobs(ctx, "justified")
require.NotNil(t, rpcErr)
require.Equal(t, core.ErrorReason(core.NotFound), rpcErr.Reason)
require.StringContains(t, "justified block", rpcErr.Err.Error())
require.StringContains(t, "not found", rpcErr.Err.Error())
})
t.Run("invalid block ID returns 400", func(t *testing.T) {
blocker := &BeaconDbBlocker{
BeaconDB: db,
}
_, rpcErr := blocker.Blobs(ctx, "invalid-hex")
require.NotNil(t, rpcErr)
require.Equal(t, core.ErrorReason(core.BadRequest), rpcErr.Reason)
require.StringContains(t, "could not parse block ID", rpcErr.Err.Error())
})
t.Run("database error returns 500", func(t *testing.T) {
// Create a pre-Deneb block with valid slot
predenebBlock := util.NewBeaconBlock()
predenebBlock.Block.Slot = 100
util.SaveBlock(t, ctx, db, predenebBlock)
// Create blocker without ChainInfoFetcher to trigger internal error when checking canonical status
blocker := &BeaconDbBlocker{
BeaconDB: db,
}
_, rpcErr := blocker.Blobs(ctx, "100")
require.NotNil(t, rpcErr)
require.Equal(t, core.ErrorReason(core.Internal), rpcErr.Reason)
})
}
func TestGetBlob(t *testing.T) {
const (
slot = 123
@@ -217,9 +329,9 @@ func TestGetBlob(t *testing.T) {
for _, blob := range fuluBlobSidecars {
var kzgBlob kzg.Blob
copy(kzgBlob[:], blob.Blob)
cellsAndProofs, err := kzg.ComputeCellsAndKZGProofs(&kzgBlob)
require.NoError(t, err)
cellsAndProofsList = append(cellsAndProofsList, cellsAndProofs)
}
roDataColumnSidecars, err := peerdas.DataColumnSidecars(cellsAndProofsList, peerdas.PopulateFromBlock(fuluBlock))
@@ -240,14 +352,17 @@ func TestGetBlob(t *testing.T) {
blocker := &BeaconDbBlocker{}
_, rpcErr := blocker.Blobs(ctx, "genesis")
require.Equal(t, http.StatusBadRequest, core.ErrorReasonToHTTP(rpcErr.Reason))
require.StringContains(t, "not supported for Phase 0 fork", rpcErr.Err.Error())
})
t.Run("head", func(t *testing.T) {
setupDeneb(t)
blocker := &BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{
Root: denebBlockRoot[:],
Block: denebBlock,
},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
@@ -326,6 +441,7 @@ func TestGetBlob(t *testing.T) {
setupDeneb(t)
blocker := &BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
@@ -491,6 +607,31 @@ func TestGetBlob(t *testing.T) {
require.DeepSSZEqual(t, initialBlobSidecarPb, retrievedBlobSidecarPb)
}
})
t.Run("pre-deneb block should return 400", func(t *testing.T) {
// Setup with Deneb fork at epoch 1, so slot 0 is before Deneb
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.DenebForkEpoch = 1
params.OverrideBeaconConfig(cfg)
// Create a pre-Deneb block (slot 0, which is epoch 0)
predenebBlock := util.NewBeaconBlock()
predenebBlock.Block.Slot = 0
util.SaveBlock(t, ctx, db, predenebBlock)
predenebBlockRoot, err := predenebBlock.Block.HashTreeRoot()
require.NoError(t, err)
blocker := &BeaconDbBlocker{
BeaconDB: db,
}
_, rpcErr := blocker.Blobs(ctx, hexutil.Encode(predenebBlockRoot[:]))
require.NotNil(t, rpcErr)
require.Equal(t, core.ErrorReason(core.BadRequest), rpcErr.Reason)
require.Equal(t, http.StatusBadRequest, core.ErrorReasonToHTTP(rpcErr.Reason))
require.StringContains(t, "not supported before", rpcErr.Err.Error())
})
}
func TestBlobs_CommitmentOrdering(t *testing.T) {
@@ -653,3 +794,300 @@ func TestBlobs_CommitmentOrdering(t *testing.T) {
require.StringContains(t, "0xbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb", rpcErr.Err.Error())
})
}
func TestGetDataColumns(t *testing.T) {
const (
blobCount = 4
fuluForkEpoch = 2
)
setupFulu := func(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.DenebForkEpoch = 1
cfg.FuluForkEpoch = fuluForkEpoch
params.OverrideBeaconConfig(cfg)
}
setupPreFulu := func(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.DenebForkEpoch = 1
cfg.FuluForkEpoch = 1000 // Set to a high epoch to ensure we're before Fulu
params.OverrideBeaconConfig(cfg)
}
ctx := t.Context()
db := testDB.SetupDB(t)
// Start the trusted setup.
err := kzg.Start()
require.NoError(t, err)
// Create Fulu block and convert blob sidecars to data column sidecars.
fuluForkSlot := fuluForkEpoch * params.BeaconConfig().SlotsPerEpoch
fuluBlock, fuluBlobSidecars := util.GenerateTestElectraBlockWithSidecar(t, [fieldparams.RootLength]byte{}, fuluForkSlot, blobCount)
fuluBlockRoot := fuluBlock.Root()
cellsAndProofsList := make([]kzg.CellsAndProofs, 0, len(fuluBlobSidecars))
for _, blob := range fuluBlobSidecars {
var kzgBlob kzg.Blob
copy(kzgBlob[:], blob.Blob)
cellsAndProofs, err := kzg.ComputeCellsAndKZGProofs(&kzgBlob)
require.NoError(t, err)
cellsAndProofsList = append(cellsAndProofsList, cellsAndProofs)
}
roDataColumnSidecars, err := peerdas.DataColumnSidecars(cellsAndProofsList, peerdas.PopulateFromBlock(fuluBlock))
require.NoError(t, err)
verifiedRoDataColumnSidecars := make([]blocks.VerifiedRODataColumn, 0, len(roDataColumnSidecars))
for _, roDataColumn := range roDataColumnSidecars {
verifiedRoDataColumn := blocks.NewVerifiedRODataColumn(roDataColumn)
verifiedRoDataColumnSidecars = append(verifiedRoDataColumnSidecars, verifiedRoDataColumn)
}
err = db.SaveBlock(t.Context(), fuluBlock)
require.NoError(t, err)
_, dataColumnStorage := filesystem.NewEphemeralDataColumnStorageAndFs(t)
err = dataColumnStorage.Save(verifiedRoDataColumnSidecars)
require.NoError(t, err)
t.Run("pre-fulu fork", func(t *testing.T) {
setupPreFulu(t)
// Create a block at slot 123 (before Fulu fork since FuluForkEpoch is set to MaxUint64)
preFuluBlock := util.NewBeaconBlock()
preFuluBlock.Block.Slot = 123
util.SaveBlock(t, ctx, db, preFuluBlock)
blocker := &BeaconDbBlocker{
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
ChainInfoFetcher: &mockChain.ChainService{},
BeaconDB: db,
}
_, rpcErr := blocker.DataColumns(ctx, "123", nil)
require.NotNil(t, rpcErr)
require.Equal(t, core.ErrorReason(core.BadRequest), rpcErr.Reason)
require.StringContains(t, "not supported before Fulu fork", rpcErr.Err.Error())
})
t.Run("genesis", func(t *testing.T) {
setupFulu(t)
blocker := &BeaconDbBlocker{
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
ChainInfoFetcher: &mockChain.ChainService{},
}
_, rpcErr := blocker.DataColumns(ctx, "genesis", nil)
require.NotNil(t, rpcErr)
require.Equal(t, http.StatusBadRequest, core.ErrorReasonToHTTP(rpcErr.Reason))
require.StringContains(t, "not supported for Phase 0 fork", rpcErr.Err.Error())
})
t.Run("head", func(t *testing.T) {
setupFulu(t)
blocker := &BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{
Root: fuluBlockRoot[:],
Block: fuluBlock,
},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
DataColumnStorage: dataColumnStorage,
}
retrievedDataColumns, rpcErr := blocker.DataColumns(ctx, "head", nil)
require.IsNil(t, rpcErr)
require.Equal(t, len(verifiedRoDataColumnSidecars), len(retrievedDataColumns))
// Create a map of expected indices for easier verification
expectedIndices := make(map[uint64]bool)
for _, expected := range verifiedRoDataColumnSidecars {
expectedIndices[expected.RODataColumn.DataColumnSidecar.Index] = true
}
// Verify we got data columns with the expected indices
for _, actual := range retrievedDataColumns {
require.Equal(t, true, expectedIndices[actual.RODataColumn.DataColumnSidecar.Index])
}
})
t.Run("finalized", func(t *testing.T) {
setupFulu(t)
blocker := &BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &ethpb.Checkpoint{Root: fuluBlockRoot[:]}},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
DataColumnStorage: dataColumnStorage,
}
retrievedDataColumns, rpcErr := blocker.DataColumns(ctx, "finalized", nil)
require.IsNil(t, rpcErr)
require.Equal(t, len(verifiedRoDataColumnSidecars), len(retrievedDataColumns))
})
t.Run("justified", func(t *testing.T) {
setupFulu(t)
blocker := &BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{CurrentJustifiedCheckPoint: &ethpb.Checkpoint{Root: fuluBlockRoot[:]}},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
DataColumnStorage: dataColumnStorage,
}
retrievedDataColumns, rpcErr := blocker.DataColumns(ctx, "justified", nil)
require.IsNil(t, rpcErr)
require.Equal(t, len(verifiedRoDataColumnSidecars), len(retrievedDataColumns))
})
t.Run("root", func(t *testing.T) {
setupFulu(t)
blocker := &BeaconDbBlocker{
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
DataColumnStorage: dataColumnStorage,
}
retrievedDataColumns, rpcErr := blocker.DataColumns(ctx, hexutil.Encode(fuluBlockRoot[:]), nil)
require.IsNil(t, rpcErr)
require.Equal(t, len(verifiedRoDataColumnSidecars), len(retrievedDataColumns))
})
t.Run("slot", func(t *testing.T) {
setupFulu(t)
blocker := &BeaconDbBlocker{
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
ChainInfoFetcher: &mockChain.ChainService{},
BeaconDB: db,
DataColumnStorage: dataColumnStorage,
}
slotStr := fmt.Sprintf("%d", fuluForkSlot)
retrievedDataColumns, rpcErr := blocker.DataColumns(ctx, slotStr, nil)
require.IsNil(t, rpcErr)
require.Equal(t, len(verifiedRoDataColumnSidecars), len(retrievedDataColumns))
})
t.Run("specific indices", func(t *testing.T) {
setupFulu(t)
blocker := &BeaconDbBlocker{
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
DataColumnStorage: dataColumnStorage,
}
// Request specific indices (first 3 data columns)
indices := []int{0, 1, 2}
retrievedDataColumns, rpcErr := blocker.DataColumns(ctx, hexutil.Encode(fuluBlockRoot[:]), indices)
require.IsNil(t, rpcErr)
require.Equal(t, 3, len(retrievedDataColumns))
for i, dataColumn := range retrievedDataColumns {
require.Equal(t, uint64(indices[i]), dataColumn.RODataColumn.DataColumnSidecar.Index)
}
})
t.Run("no data columns returns empty array", func(t *testing.T) {
setupFulu(t)
_, emptyDataColumnStorage := filesystem.NewEphemeralDataColumnStorageAndFs(t)
blocker := &BeaconDbBlocker{
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
DataColumnStorage: emptyDataColumnStorage,
}
retrievedDataColumns, rpcErr := blocker.DataColumns(ctx, hexutil.Encode(fuluBlockRoot[:]), nil)
require.IsNil(t, rpcErr)
require.Equal(t, 0, len(retrievedDataColumns))
})
t.Run("index too big", func(t *testing.T) {
setupFulu(t)
blocker := &BeaconDbBlocker{
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
DataColumnStorage: dataColumnStorage,
}
_, rpcErr := blocker.DataColumns(ctx, hexutil.Encode(fuluBlockRoot[:]), []int{0, math.MaxInt})
require.NotNil(t, rpcErr)
require.Equal(t, core.ErrorReason(core.BadRequest), rpcErr.Reason)
})
t.Run("outside retention period", func(t *testing.T) {
setupFulu(t)
// Create a data column storage with very short retention period
shortRetentionStorage, err := filesystem.NewDataColumnStorage(ctx,
filesystem.WithDataColumnBasePath(t.TempDir()),
filesystem.WithDataColumnRetentionEpochs(1), // Only 1 epoch retention
)
require.NoError(t, err)
// Mock genesis time to make current slot much later than the block slot
// This simulates being outside retention period
genesisTime := time.Now().Add(-time.Duration(fuluForkSlot+1000) * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second)
blocker := &BeaconDbBlocker{
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: genesisTime,
},
BeaconDB: db,
DataColumnStorage: shortRetentionStorage,
}
// Since the block is outside retention period, should return empty array
retrievedDataColumns, rpcErr := blocker.DataColumns(ctx, hexutil.Encode(fuluBlockRoot[:]), nil)
require.IsNil(t, rpcErr)
require.Equal(t, 0, len(retrievedDataColumns))
})
t.Run("block not found", func(t *testing.T) {
setupFulu(t)
blocker := &BeaconDbBlocker{
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
DataColumnStorage: dataColumnStorage,
}
nonExistentRoot := bytesutil.PadTo([]byte("nonexistent"), 32)
_, rpcErr := blocker.DataColumns(ctx, hexutil.Encode(nonExistentRoot), nil)
require.NotNil(t, rpcErr)
require.Equal(t, core.ErrorReason(core.NotFound), rpcErr.Reason)
})
}


@@ -324,18 +324,18 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
wg.Add(1)
go func() {
defer wg.Done()
if err := vs.broadcastReceiveBlock(ctx, &wg, block, root); err != nil {
errChan <- errors.Wrap(err, "broadcast/receive block failed")
return
}
errChan <- nil
}()
if err := vs.broadcastAndReceiveSidecars(ctx, block, root, blobSidecars, dataColumnSidecars); err != nil {
return nil, status.Errorf(codes.Internal, "Could not broadcast/receive sidecars: %v", err)
}
wg.Wait()
if err := <-errChan; err != nil {
return nil, status.Errorf(codes.Internal, "Could not broadcast/receive block: %v", err)
}
@@ -432,7 +432,26 @@ func (vs *Server) handleUnblindedBlock(
}
// broadcastReceiveBlock broadcasts a block and handles its reception.
func (vs *Server) broadcastReceiveBlock(ctx context.Context, wg *sync.WaitGroup, block interfaces.SignedBeaconBlock, root [fieldparams.RootLength]byte) error {
if err := vs.broadcastBlock(ctx, wg, block, root); err != nil {
return errors.Wrap(err, "broadcast block")
}
vs.BlockNotifier.BlockFeed().Send(&feed.Event{
Type: blockfeed.ReceivedBlock,
Data: &blockfeed.ReceivedBlockData{SignedBlock: block},
})
if err := vs.BlockReceiver.ReceiveBlock(ctx, block, root, nil); err != nil {
return errors.Wrap(err, "receive block")
}
return nil
}
func (vs *Server) broadcastBlock(ctx context.Context, wg *sync.WaitGroup, block interfaces.SignedBeaconBlock, root [fieldparams.RootLength]byte) error {
defer wg.Done()
protoBlock, err := block.Proto()
if err != nil {
return errors.Wrap(err, "protobuf conversion failed")
@@ -440,11 +459,13 @@ func (vs *Server) broadcastReceiveBlock(ctx context.Context, block interfaces.Si
if err := vs.P2P.Broadcast(ctx, protoBlock); err != nil {
return errors.Wrap(err, "broadcast failed")
}
log.WithFields(logrus.Fields{
"slot": block.Block().Slot(),
"root": fmt.Sprintf("%#x", root),
}).Debug("Broadcasted block")
return nil
}
// broadcastAndReceiveBlobs handles the broadcasting and reception of blob sidecars.
@@ -498,10 +519,6 @@ func (vs *Server) broadcastAndReceiveDataColumns(
})
}
if err := vs.DataColumnReceiver.ReceiveDataColumns(verifiedRODataColumns); err != nil {
return errors.Wrap(err, "receive data column")
}
@@ -512,6 +529,11 @@ func (vs *Server) broadcastAndReceiveDataColumns(
Data: &operation.DataColumnSidecarReceivedData{DataColumn: &verifiedRODataColumn}, // #nosec G601
})
}
if err := eg.Wait(); err != nil {
return errors.Wrap(err, "wait for data columns to be broadcasted")
}
return nil
}


@@ -14,10 +14,13 @@ import (
// MockBlocker is a fake implementation of lookup.Blocker.
type MockBlocker struct {
BlockToReturn interfaces.ReadOnlySignedBeaconBlock
ErrorToReturn error
SlotBlockMap map[primitives.Slot]interfaces.ReadOnlySignedBeaconBlock
RootBlockMap map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
DataColumnsFunc func(ctx context.Context, id string, indices []int) ([]blocks.VerifiedRODataColumn, *core.RpcError)
DataColumnsToReturn []blocks.VerifiedRODataColumn
DataColumnsErrorToReturn *core.RpcError
}
// Block --
@@ -40,3 +43,17 @@ func (m *MockBlocker) Block(_ context.Context, b []byte) (interfaces.ReadOnlySig
func (*MockBlocker) Blobs(_ context.Context, _ string, _ ...options.BlobsOption) ([]*blocks.VerifiedROBlob, *core.RpcError) {
return nil, &core.RpcError{}
}
// DataColumns --
func (m *MockBlocker) DataColumns(ctx context.Context, id string, indices []int) ([]blocks.VerifiedRODataColumn, *core.RpcError) {
if m.DataColumnsFunc != nil {
return m.DataColumnsFunc(ctx, id, indices)
}
if m.DataColumnsErrorToReturn != nil {
return nil, m.DataColumnsErrorToReturn
}
if m.DataColumnsToReturn != nil {
return m.DataColumnsToReturn, nil
}
return nil, &core.RpcError{}
}


@@ -8,6 +8,7 @@ import (
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
prysmP2P "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
@@ -187,7 +188,7 @@ func requestSidecarsFromStorage(
requestedIndicesMap map[uint64]bool,
roots map[[fieldparams.RootLength]byte]bool,
) (map[[fieldparams.RootLength]byte][]blocks.VerifiedRODataColumn, error) {
requestedIndices := helpers.SortedSliceFromMap(requestedIndicesMap)
result := make(map[[fieldparams.RootLength]byte][]blocks.VerifiedRODataColumn, len(roots))
@@ -599,7 +600,7 @@ func assembleAvailableSidecarsForRoot(
root [fieldparams.RootLength]byte,
indices map[uint64]bool,
) ([]blocks.VerifiedRODataColumn, error) {
stored, err := storage.Get(root, helpers.SortedSliceFromMap(indices))
if err != nil {
return nil, errors.Wrapf(err, "storage get for root %#x", root)
}
@@ -802,25 +803,27 @@ func sendDataColumnSidecarsRequest(
roDataColumns = append(roDataColumns, localRoDataColumns...)
}
if logrus.GetLevel() >= logrus.DebugLevel {
prettyByRangeRequests := make([]map[string]any, 0, len(byRangeRequests))
for _, request := range byRangeRequests {
prettyRequest := map[string]any{
"startSlot": request.StartSlot,
"count": request.Count,
"columns": helpers.PrettySlice(request.Columns),
}
prettyByRangeRequests = append(prettyByRangeRequests, prettyRequest)
}
log.WithFields(logrus.Fields{
"respondedSidecars": len(roDataColumns),
"requestCount": len(byRangeRequests),
"type": "byRange",
"duration": time.Since(start),
"requests": prettyByRangeRequests,
}).Debug("Received data column sidecars")
}
return roDataColumns, nil
}
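The change above wraps the pretty-request construction in a `logrus.GetLevel() >= logrus.DebugLevel` check, so the per-request formatting work is skipped entirely unless debug logging is on. A minimal sketch of that gating pattern, with `infoLevel`/`debugLevel` as illustrative stand-ins for the logrus level constants:

```go
package main

import "fmt"

// Illustrative stand-ins for logrus.InfoLevel / logrus.DebugLevel, ordered so
// that a higher value means more verbose, as in logrus.
const (
	infoLevel = iota
	debugLevel
)

// logRequests builds the pretty request list only when the configured level
// is at least debug, so the formatting cost is skipped otherwise. It reports
// whether the formatting actually ran.
func logRequests(level int, requests []uint64) (built bool) {
	if level >= debugLevel {
		pretty := make([]string, 0, len(requests))
		for _, r := range requests {
			pretty = append(pretty, fmt.Sprintf("col=%d", r))
		}
		fmt.Println("requests:", pretty)
		return true
	}
	return false
}

func main() {
	fmt.Println(logRequests(infoLevel, []uint64{1, 2}))  // false: formatting skipped
	fmt.Println(logRequests(debugLevel, []uint64{1, 2})) // requests: [col=1 col=2], then true
}
```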
@@ -895,7 +898,7 @@ func buildByRangeRequests(
}
}
columns := sortedSliceFromMap(reference)
columns := helpers.SortedSliceFromMap(reference)
startSlot, endSlot := slots[0], slots[len(slots)-1]
totalCount := uint64(endSlot - startSlot + 1)
@@ -920,7 +923,7 @@ func buildByRootRequest(indicesByRoot map[[fieldparams.RootLength]byte]map[uint6
for root, indices := range indicesByRoot {
identifier := &eth.DataColumnsByRootIdentifier{
BlockRoot: root[:],
Columns: sortedSliceFromMap(indices),
Columns: helpers.SortedSliceFromMap(indices),
}
identifiers = append(identifiers, identifier)
}
@@ -1173,17 +1176,6 @@ func compareIndices(left, right map[uint64]bool) bool {
return true
}
// sortedSliceFromMap converts a map[uint64]bool to a sorted slice of keys.
func sortedSliceFromMap(m map[uint64]bool) []uint64 {
result := make([]uint64, 0, len(m))
for k := range m {
result = append(result, k)
}
slices.Sort(result)
return result
}
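The duplicated local helper removed here is, per the updated call sites, consolidated into `helpers.SortedSliceFromMap`. For reference, a self-contained stand-in with the same behavior (ascending sort of the map's keys):

```go
package main

import (
	"fmt"
	"slices"
)

// sortedSliceFromMap returns the keys of m in ascending order, matching the
// behavior of the removed local helpers now replaced by helpers.SortedSliceFromMap.
func sortedSliceFromMap(m map[uint64]bool) []uint64 {
	result := make([]uint64, 0, len(m))
	for k := range m {
		result = append(result, k)
	}
	slices.Sort(result)
	return result
}

func main() {
	fmt.Println(sortedSliceFromMap(map[uint64]bool{54: true, 23: true, 35: true})) // [23 35 54]
}
```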
// computeTotalCount calculates the total count of indices across all roots.
func computeTotalCount(input map[[fieldparams.RootLength]byte]map[uint64]bool) int {
totalCount := 0


@@ -1007,13 +1007,6 @@ func TestCompareIndices(t *testing.T) {
require.Equal(t, true, compareIndices(left, right))
}
func TestSlortedSliceFromMap(t *testing.T) {
input := map[uint64]bool{54: true, 23: true, 35: true}
expected := []uint64{23, 35, 54}
actual := sortedSliceFromMap(input)
require.DeepEqual(t, expected, actual)
}
func TestComputeTotalCount(t *testing.T) {
input := map[[fieldparams.RootLength]byte]map[uint64]bool{
[fieldparams.RootLength]byte{1}: {1: true, 3: true},


@@ -6,6 +6,7 @@ import (
"sync"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
@@ -95,7 +96,7 @@ func (s *Service) processDataColumnSidecarsFromReconstruction(ctx context.Contex
"slot": slot,
"proposerIndex": proposerIndex,
"count": len(unseenIndices),
"indices": sortedSliceFromMap(unseenIndices),
"indices": helpers.SortedPrettySliceFromMap(unseenIndices),
"duration": duration,
}).Debug("Reconstructed data column sidecars")
}


@@ -13,7 +13,13 @@ import (
// it will be in charge of subscribing/unsubscribing the relevant topics at the fork boundaries.
func (s *Service) forkWatcher() {
<-s.initialSyncComplete
slotTicker := slots.NewSlotTicker(s.cfg.clock.GenesisTime(), params.BeaconConfig().SecondsPerSlot)
genesisTime := s.cfg.clock.GenesisTime()
currentEpoch := slots.ToEpoch(slots.CurrentSlot(genesisTime))
// Initialize registeredNetworkEntry to the current network schedule entry to avoid
// duplicate subscriber registration on the first forkWatcher tick when the next
// epoch has the same digest.
s.registeredNetworkEntry = params.GetNetworkScheduleEntry(currentEpoch)
slotTicker := slots.NewSlotTicker(genesisTime, params.BeaconConfig().SecondsPerSlot)
for {
select {
// In the event of a node restart, we will still end up subscribing to the correct


@@ -20,6 +20,7 @@ go_library(
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/core/feed/block:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/das:go_default_library",


@@ -3,12 +3,12 @@ package initialsync
import (
"context"
"fmt"
"sort"
"strings"
"sync"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
@@ -430,9 +430,9 @@ func (f *blocksFetcher) fetchSidecars(ctx context.Context, pid peer.ID, peers []
}
if len(missingIndicesByRoot) > 0 {
prettyMissingIndicesByRoot := make(map[string]string, len(missingIndicesByRoot))
for root, indices := range missingIndicesByRoot {
prettyMissingIndicesByRoot[fmt.Sprintf("%#x", root)] = helpers.SortedPrettySliceFromMap(indices)
}
return "", errors.Errorf("some sidecars are still missing after fetch: %v", prettyMissingIndicesByRoot)
}
@@ -727,17 +727,6 @@ func (f *blocksFetcher) fetchBlobsFromPeer(ctx context.Context, bwb []blocks.Blo
return "", errNoPeersAvailable
}
// sortedSliceFromMap returns a sorted slice of keys from a map.
func sortedSliceFromMap(m map[uint64]bool) []uint64 {
result := make([]uint64, 0, len(m))
for k := range m {
result = append(result, k)
}
slices.Sort(result)
return result
}
// requestBlocks is a wrapper for handling BeaconBlocksByRangeRequest requests/streams.
func (f *blocksFetcher) requestBlocks(
ctx context.Context,


@@ -1349,14 +1349,6 @@ func TestBlockFetcher_HasSufficientBandwidth(t *testing.T) {
assert.Equal(t, 2, len(receivedPeers))
}
func TestSortedSliceFromMap(t *testing.T) {
m := map[uint64]bool{1: true, 3: true, 2: true, 4: true}
expected := []uint64{1, 2, 3, 4}
actual := sortedSliceFromMap(m)
require.DeepSSZEqual(t, expected, actual)
}
func TestFetchSidecars(t *testing.T) {
ctx := t.Context()
t.Run("No blocks", func(t *testing.T) {


@@ -12,6 +12,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain"
blockfeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/block"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/das"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
@@ -479,7 +480,7 @@ func (s *Service) fetchOriginDataColumnSidecars(roBlock blocks.ROBlock, delay ti
// Some sidecars are still missing.
log := log.WithFields(logrus.Fields{
"attempt": attempt,
"missingIndices": sortedSliceFromMap(missingIndicesByRoot[root]),
"missingIndices": helpers.SortedPrettySliceFromMap(missingIndicesByRoot[root]),
"delay": delay,
})


@@ -4,6 +4,7 @@ import (
"context"
"fmt"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/execution"
@@ -121,9 +122,9 @@ func (s *Service) requestAndSaveMissingDataColumnSidecars(blks []blocks.ROBlock)
}
if len(missingIndicesByRoot) > 0 {
prettyMissingIndicesByRoot := make(map[string]string, len(missingIndicesByRoot))
for root, indices := range missingIndicesByRoot {
prettyMissingIndicesByRoot[fmt.Sprintf("%#x", root)] = sortedSliceFromMap(indices)
prettyMissingIndicesByRoot[fmt.Sprintf("%#x", root)] = helpers.SortedPrettySliceFromMap(indices)
}
return errors.Errorf("some sidecars are still missing after fetch: %v", prettyMissingIndicesByRoot)
}

View File

@@ -7,6 +7,7 @@ import (
"slices"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/encoder"
p2ptypes "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
@@ -598,7 +599,7 @@ func isSidecarIndexRequested(request *ethpb.DataColumnSidecarsByRangeRequest) Da
return func(sidecar blocks.RODataColumn) error {
columnIndex := sidecar.Index
if !requestedIndices[columnIndex] {
requested := sortedSliceFromMap(requestedIndices)
requested := helpers.SortedPrettySliceFromMap(requestedIndices)
return errors.Errorf("data column sidecar index %d returned by the peer but not found in requested indices %v", columnIndex, requested)
}

View File

@@ -85,12 +85,14 @@ type subnetTracker struct {
subscribeParameters
mu sync.RWMutex
subscriptions map[uint64]*pubsub.Subscription
trackedTopics map[uint64]string // Maps subnet to full topic string for unsubscribing
}
func newSubnetTracker(p subscribeParameters) *subnetTracker {
return &subnetTracker{
subscribeParameters: p,
subscriptions: make(map[uint64]*pubsub.Subscription),
trackedTopics: make(map[uint64]string),
}
}
@@ -117,31 +119,84 @@ func (t *subnetTracker) missing(wanted map[uint64]bool) []uint64 {
missing = append(missing, subnet)
}
}
if len(missing) > 0 {
log.WithFields(logrus.Fields{
"wanted": len(wanted),
"tracked": len(t.subscriptions),
"missing": missing,
"trackedSubnets": func() []uint64 {
var tracked []uint64
for k := range t.subscriptions {
tracked = append(tracked, k)
}
return tracked
}(),
}).Debug("Subnet tracker missing analysis")
}
return missing
}
// tryReserveSubnets atomically checks for missing subnets and reserves them for subscription.
// Returns a list of subnets that were successfully reserved by this call.
func (t *subnetTracker) tryReserveSubnets(wanted map[uint64]bool) []uint64 {
t.mu.Lock()
defer t.mu.Unlock()
var reserved []uint64
for subnet := range wanted {
if _, ok := t.subscriptions[subnet]; !ok {
// Reserve by marking as nil (pending subscription)
t.subscriptions[subnet] = nil
reserved = append(reserved, subnet)
}
}
return reserved
}
// cancelReservation removes a reservation for a subnet that failed to subscribe.
func (t *subnetTracker) cancelReservation(subnet uint64) {
t.mu.Lock()
defer t.mu.Unlock()
if sub, ok := t.subscriptions[subnet]; ok && sub == nil {
delete(t.subscriptions, subnet)
}
}
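The reserve/cancel pair above uses a `nil` map entry as a "pending subscription" sentinel so that concurrent callers skip a subnet another goroutine is already working on. A minimal standalone sketch of that pattern (types simplified; not Prysm's actual structs):

```go
package main

import (
	"fmt"
	"sync"
)

// tracker mimics subnetTracker: a nil value means "reserved, pending
// subscription"; a non-nil value would be a live subscription.
type tracker struct {
	mu   sync.Mutex
	subs map[uint64]*struct{}
}

// tryReserve marks each wanted subnet not yet present as pending and
// returns the subnets this caller won.
func (t *tracker) tryReserve(wanted []uint64) []uint64 {
	t.mu.Lock()
	defer t.mu.Unlock()
	var reserved []uint64
	for _, s := range wanted {
		if _, ok := t.subs[s]; !ok {
			t.subs[s] = nil // reserve
			reserved = append(reserved, s)
		}
	}
	return reserved
}

// cancelReservation drops only pending (nil) entries, never live ones.
func (t *tracker) cancelReservation(s uint64) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if sub, ok := t.subs[s]; ok && sub == nil {
		delete(t.subs, s)
	}
}

func main() {
	tr := &tracker{subs: make(map[uint64]*struct{})}
	fmt.Println(tr.tryReserve([]uint64{1, 2})) // both won
	fmt.Println(tr.tryReserve([]uint64{2, 3})) // only 3: 2 is reserved
}
```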
// cancelSubscription cancels and removes the subscription for a given subnet.
func (t *subnetTracker) cancelSubscription(subnet uint64) {
func (t *subnetTracker) cancelSubscription(subnet uint64) string {
t.mu.Lock()
defer t.mu.Unlock()
defer delete(t.subscriptions, subnet)
defer delete(t.trackedTopics, subnet)
topic := t.trackedTopics[subnet]
sub := t.subscriptions[subnet]
if sub == nil {
return
// Debug: Check for inconsistent state
if sub != nil && topic == "" {
log.WithField("subnet", subnet).Warn("Found subscription but no tracked topic")
} else if sub == nil && topic != "" {
log.WithField("subnet", subnet).WithField("topic", topic).Debug("Found tracked topic but no subscription object")
}
sub.Cancel()
if sub != nil {
sub.Cancel()
}
return topic
}
// track asks subscriptionTracker to hold on to the subscription for a given subnet so
// that we can remember that it is tracked and cancel its context when it's time to unsubscribe.
func (t *subnetTracker) track(subnet uint64, sub *pubsub.Subscription) {
if sub == nil {
return
}
func (t *subnetTracker) track(subnet uint64, sub *pubsub.Subscription, expectedTopic string) {
t.mu.Lock()
defer t.mu.Unlock()
t.subscriptions[subnet] = sub
// Use the topic from the subscription if available, otherwise use the expected one
// (when sub is nil, it means the subscription already existed)
if sub != nil {
t.trackedTopics[subnet] = sub.Topic()
} else {
t.trackedTopics[subnet] = expectedTopic
}
}
// noopValidator is a no-op that only decodes the message, but does not check its contents.
@@ -200,21 +255,27 @@ func (s *Service) spawn(f func()) {
// Register PubSub subscribers
func (s *Service) registerSubscribers(epoch primitives.Epoch, digest [4]byte) {
s.spawn(func() {
s.subscribe(p2p.BlockSubnetTopicFormat, s.validateBeaconBlockPubSub, s.beaconBlockSubscriber, digest)
})
s.spawn(func() {
s.subscribe(p2p.AggregateAndProofSubnetTopicFormat, s.validateAggregateAndProof, s.beaconAggregateProofSubscriber, digest)
})
s.spawn(func() {
s.subscribe(p2p.ExitSubnetTopicFormat, s.validateVoluntaryExit, s.voluntaryExitSubscriber, digest)
})
s.spawn(func() {
s.subscribe(p2p.ProposerSlashingSubnetTopicFormat, s.validateProposerSlashing, s.proposerSlashingSubscriber, digest)
})
s.spawn(func() {
s.subscribe(p2p.AttesterSlashingSubnetTopicFormat, s.validateAttesterSlashing, s.attesterSlashingSubscriber, digest)
})
// Idempotent fixed-topic subscriptions: skip if already active.
fixed := []struct {
topicFormat string
val wrappedVal
handle subHandler
}{
{p2p.BlockSubnetTopicFormat, s.validateBeaconBlockPubSub, s.beaconBlockSubscriber},
{p2p.AggregateAndProofSubnetTopicFormat, s.validateAggregateAndProof, s.beaconAggregateProofSubscriber},
{p2p.ExitSubnetTopicFormat, s.validateVoluntaryExit, s.voluntaryExitSubscriber},
{p2p.ProposerSlashingSubnetTopicFormat, s.validateProposerSlashing, s.proposerSlashingSubscriber},
{p2p.AttesterSlashingSubnetTopicFormat, s.validateAttesterSlashing, s.attesterSlashingSubscriber},
}
for _, f := range fixed {
full := s.addDigestToTopic(f.topicFormat, digest) + s.cfg.p2p.Encoding().ProtocolSuffix()
if s.subHandler.topicExists(full) {
continue
}
s.spawn(func(fmt string, val wrappedVal, handle subHandler) func() {
return func() { s.subscribe(fmt, val, handle, digest) }
}(f.topicFormat, f.val, f.handle))
}
s.spawn(func() {
s.subscribeWithParameters(subscribeParameters{
topicFormat: p2p.AttestationSubnetTopicFormat,
@@ -228,14 +289,20 @@ func (s *Service) registerSubscribers(epoch primitives.Epoch, digest [4]byte) {
// New gossip topic in Altair
if params.BeaconConfig().AltairForkEpoch <= epoch {
s.spawn(func() {
s.subscribe(
p2p.SyncContributionAndProofSubnetTopicFormat,
s.validateSyncContributionAndProof,
s.syncContributionAndProofSubscriber,
digest,
)
})
// SyncContributionAndProof fixed topic
{
full := s.addDigestToTopic(p2p.SyncContributionAndProofSubnetTopicFormat, digest) + s.cfg.p2p.Encoding().ProtocolSuffix()
if !s.subHandler.topicExists(full) {
s.spawn(func() {
s.subscribe(
p2p.SyncContributionAndProofSubnetTopicFormat,
s.validateSyncContributionAndProof,
s.syncContributionAndProofSubscriber,
digest,
)
})
}
}
s.spawn(func() {
s.subscribeWithParameters(subscribeParameters{
topicFormat: p2p.SyncCommitteeSubnetTopicFormat,
@@ -247,35 +314,49 @@ func (s *Service) registerSubscribers(epoch primitives.Epoch, digest [4]byte) {
})
if features.Get().EnableLightClient {
s.spawn(func() {
s.subscribe(
p2p.LightClientOptimisticUpdateTopicFormat,
s.validateLightClientOptimisticUpdate,
s.lightClientOptimisticUpdateSubscriber,
digest,
)
})
s.spawn(func() {
s.subscribe(
p2p.LightClientFinalityUpdateTopicFormat,
s.validateLightClientFinalityUpdate,
s.lightClientFinalityUpdateSubscriber,
digest,
)
})
// Light client fixed topics
{
full := s.addDigestToTopic(p2p.LightClientOptimisticUpdateTopicFormat, digest) + s.cfg.p2p.Encoding().ProtocolSuffix()
if !s.subHandler.topicExists(full) {
s.spawn(func() {
s.subscribe(
p2p.LightClientOptimisticUpdateTopicFormat,
s.validateLightClientOptimisticUpdate,
s.lightClientOptimisticUpdateSubscriber,
digest,
)
})
}
}
{
full := s.addDigestToTopic(p2p.LightClientFinalityUpdateTopicFormat, digest) + s.cfg.p2p.Encoding().ProtocolSuffix()
if !s.subHandler.topicExists(full) {
s.spawn(func() {
s.subscribe(
p2p.LightClientFinalityUpdateTopicFormat,
s.validateLightClientFinalityUpdate,
s.lightClientFinalityUpdateSubscriber,
digest,
)
})
}
}
}
}
// New gossip topic in Capella
if params.BeaconConfig().CapellaForkEpoch <= epoch {
s.spawn(func() {
s.subscribe(
p2p.BlsToExecutionChangeSubnetTopicFormat,
s.validateBlsToExecutionChange,
s.blsToExecutionChangeSubscriber,
digest,
)
})
full := s.addDigestToTopic(p2p.BlsToExecutionChangeSubnetTopicFormat, digest) + s.cfg.p2p.Encoding().ProtocolSuffix()
if !s.subHandler.topicExists(full) {
s.spawn(func() {
s.subscribe(
p2p.BlsToExecutionChangeSubnetTopicFormat,
s.validateBlsToExecutionChange,
s.blsToExecutionChangeSubscriber,
digest,
)
})
}
}
// New gossip topic in Deneb, removed in Electra
@@ -337,22 +418,32 @@ func (s *Service) subscribe(topic string, validator wrappedVal, handle subHandle
// Impossible condition as it would mean topic does not exist.
panic(fmt.Sprintf("%s is not mapped to any message in GossipTopicMappings", topic)) // lint:nopanic -- Impossible condition.
}
s.subscribeWithBase(s.addDigestToTopic(topic, digest), validator, handle)
fullTopic := s.addDigestToTopic(topic, digest) + s.cfg.p2p.Encoding().ProtocolSuffix()
_ = s.subscribeWithBase(fullTopic, validator, handle)
}
// subscribeWithBase subscribes to a topic that already includes the protocol suffix.
func (s *Service) subscribeWithBase(topic string, validator wrappedVal, handle subHandler) *pubsub.Subscription {
topic += s.cfg.p2p.Encoding().ProtocolSuffix()
log := log.WithField("topic", topic)
// Do not resubscribe already seen subscriptions.
ok := s.subHandler.topicExists(topic)
if ok {
// Fast-path: avoid attempting if already actively subscribed
if s.subHandler.topicExists(topic) {
log.WithField("topic", topic).Debug("Provided topic already has an active subscription running")
return nil
}
// Atomically check if topic exists and reserve it for subscription if it doesn't
if !s.subHandler.tryReserveTopic(topic) {
// Another goroutine may have just subscribed or reserved it; report and skip
log.WithField("topic", topic).Debug("Provided topic already has an active subscription running")
return nil
}
// Register validator after successfully reserving the topic
if err := s.cfg.p2p.PubSub().RegisterTopicValidator(s.wrapAndReportValidation(topic, validator)); err != nil {
log.WithError(err).Error("Could not register validator for topic")
// Cancel the reservation since validator registration failed
s.subHandler.cancelReservation(topic)
return nil
}
@@ -362,6 +453,8 @@ func (s *Service) subscribeWithBase(topic string, validator wrappedVal, handle s
// libp2p PubSub library or a subscription request to a topic that fails to match the topic
// subscription filter.
log.WithError(err).Error("Could not subscribe topic")
// Cancel the reservation since subscription failed
s.subHandler.cancelReservation(topic)
return nil
}
@@ -504,8 +597,8 @@ func (s *Service) wrapAndReportValidation(topic string, v wrappedVal) (string, p
// This function mutates the `subscriptionBySubnet` map, which is used to keep track of the current subscriptions.
func (s *Service) pruneSubscriptions(t *subnetTracker, wantedSubnets map[uint64]bool) {
for _, subnet := range t.unwanted(wantedSubnets) {
t.cancelSubscription(subnet)
s.unSubscribeFromTopic(t.fullTopic(subnet, s.cfg.p2p.Encoding().ProtocolSuffix()))
topic := t.cancelSubscription(subnet)
s.unSubscribeFromTopic(topic)
}
}
@@ -530,10 +623,50 @@ func (s *Service) subscribeToSubnets(t *subnetTracker) error {
subnetsToJoin := t.getSubnetsToJoin(s.cfg.clock.CurrentSlot())
s.pruneSubscriptions(t, subnetsToJoin)
for _, subnet := range t.missing(subnetsToJoin) {
// TODO: subscribeWithBase appends the protocol suffix, other methods don't. Make this consistent.
topic := t.fullTopic(subnet, "")
t.track(subnet, s.subscribeWithBase(topic, t.validate, t.handle))
// Filter out subnets already globally subscribed and mark them tracked to prevent re-reserve.
// Work on a copy to avoid mutating the original map while iterating later code paths.
wanted := make(map[uint64]bool, len(subnetsToJoin))
for subnet := range subnetsToJoin {
full := t.fullTopic(subnet, s.cfg.p2p.Encoding().ProtocolSuffix())
if s.subHandler.topicExists(full) {
t.track(subnet, nil, full)
continue
}
wanted[subnet] = true
}
// Atomically reserve subnets that still need subscription
reservedSubnets := t.tryReserveSubnets(wanted)
if len(reservedSubnets) > 0 {
log.WithFields(logrus.Fields{
"reservedSubnets": reservedSubnets,
"totalWanted": len(subnetsToJoin),
}).Debug("Reserved missing subnets for subscription")
}
for _, subnet := range reservedSubnets {
fullTopic := t.fullTopic(subnet, s.cfg.p2p.Encoding().ProtocolSuffix())
// Skip if already subscribed globally (e.g., another goroutine won the race).
// Mark as tracked so we don't repeatedly reserve and skip on subsequent ticks.
if s.subHandler.topicExists(fullTopic) {
t.track(subnet, nil, fullTopic)
continue
}
log.WithFields(logrus.Fields{
"subnet": subnet,
"topic": fullTopic,
}).Debug("Attempting to subscribe to reserved subnet")
sub := s.subscribeWithBase(fullTopic, t.validate, t.handle)
if sub != nil {
t.track(subnet, sub, fullTopic)
} else {
// Subscription failed, cancel the reservation
t.cancelReservation(subnet)
log.WithFields(logrus.Fields{
"subnet": subnet,
"topic": fullTopic,
}).Debug("Subscription failed, canceled reservation")
}
}
return nil

View File

@@ -8,6 +8,7 @@ import (
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition/interop"
"github.com/OffchainLabs/prysm/v6/config/features"
@@ -227,7 +228,7 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
"iteration": iteration,
"type": source.Type(),
"count": len(unseenIndices),
"indices": sortedSliceFromMap(unseenIndices),
"indices": helpers.SortedPrettySliceFromMap(unseenIndices),
}).Debug("Constructed data column sidecars from the execution client")
}

View File

@@ -317,7 +317,7 @@ func TestRevalidateSubscription_CorrectlyFormatsTopic(t *testing.T) {
require.NoError(t, r.cfg.p2p.PubSub().RegisterTopicValidator(fullTopic, topVal))
sub1, err := r.cfg.p2p.SubscribeToTopic(fullTopic)
require.NoError(t, err)
tracker.track(c1, sub1)
tracker.track(c1, sub1, fullTopic)
// committee index 2
c2 := uint64(2)
@@ -327,7 +327,7 @@ func TestRevalidateSubscription_CorrectlyFormatsTopic(t *testing.T) {
require.NoError(t, err)
sub2, err := r.cfg.p2p.SubscribeToTopic(fullTopic)
require.NoError(t, err)
tracker.track(c2, sub2)
tracker.track(c2, sub2, fullTopic)
r.pruneSubscriptions(tracker, map[uint64]bool{c2: true})
require.LogsDoNotContain(t, hook, "Could not unregister topic validator")

View File

@@ -26,20 +26,31 @@ func newSubTopicHandler() *subTopicHandler {
func (s *subTopicHandler) addTopic(topic string, sub *pubsub.Subscription) {
s.Lock()
defer s.Unlock()
// Check if this topic was previously reserved (nil value means reserved)
prevSub, wasReserved := s.subTopics[topic]
s.subTopics[topic] = sub
digest, err := p2p.ExtractGossipDigest(topic)
if err != nil {
log.WithError(err).Error("Could not retrieve digest")
return
}
s.digestMap[digest] += 1
// Only increment digest count if this is truly a new subscription:
// - Topic didn't exist before (!wasReserved)
// - Topic was reserved (nil) and now getting a real subscription
if !wasReserved || prevSub == nil {
s.digestMap[digest] += 1
}
}
func (s *subTopicHandler) topicExists(topic string) bool {
if s == nil {
return false
}
s.RLock()
defer s.RUnlock()
_, ok := s.subTopics[topic]
return ok
sub, ok := s.subTopics[topic]
// Topic exists if it's in the map and has a real subscription (not just reserved)
return ok && sub != nil
}
func (s *subTopicHandler) removeTopic(topic string) {
@@ -88,3 +99,34 @@ func (s *subTopicHandler) subForTopic(topic string) *pubsub.Subscription {
defer s.RUnlock()
return s.subTopics[topic]
}
// tryReserveTopic atomically checks if a topic has an active subscription or reservation and reserves it if not.
// Returns true if the topic was successfully reserved, false if it already has an active subscription or reservation.
// If true is returned, the caller is responsible for calling addTopic with the actual subscription.
func (s *subTopicHandler) tryReserveTopic(topic string) bool {
if s == nil {
return false
}
s.Lock()
defer s.Unlock()
_, exists := s.subTopics[topic]
// Reject if topic already exists (either reserved with nil or has real subscription)
if exists {
return false
}
// Reserve the topic by adding a nil entry to prevent other goroutines from registering
s.subTopics[topic] = nil
return true
}
// cancelReservation removes a topic reservation if the subscription failed.
func (s *subTopicHandler) cancelReservation(topic string) {
if s == nil {
return
}
s.Lock()
defer s.Unlock()
if sub, exists := s.subTopics[topic]; exists && sub == nil {
delete(s.subTopics, topic)
}
}
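The subtlety in the `addTopic` change earlier in this file is the digest-count condition `!wasReserved || prevSub == nil`: the counter increments for a brand-new topic or when a nil reservation is upgraded to a real subscription, but not when an existing live subscription is replaced. A minimal sketch of just that counting rule (types invented for illustration):

```go
package main

import "fmt"

type sub struct{ topic string }

type counter struct {
	topics map[string]*sub
	count  int // stands in for the per-digest count in digestMap
}

// add mirrors the addTopic logic: count a topic only the first time it
// gains a real subscription — upgrading a nil reservation counts,
// replacing a live subscription does not.
func (c *counter) add(topic string, s *sub) {
	prev, existed := c.topics[topic]
	c.topics[topic] = s
	if !existed || prev == nil {
		c.count++
	}
}

func main() {
	c := &counter{topics: make(map[string]*sub)}
	c.topics["a"] = nil   // reserved
	c.add("a", &sub{"a"}) // upgrade reservation: counts
	c.add("a", &sub{"a"}) // replace live sub: does not count
	fmt.Println(c.count)
}
```

This is also why `topicExists` now requires `sub != nil`: a reservation alone must not look like an active subscription.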

View File

@@ -0,0 +1,3 @@
### Fixed
- Replace fmt.Printf with proper test error handling in web3signer keymanager tests, using require.NoError(t, err) instead of t.Fatalf for better error handling and debugging.

View File

@@ -0,0 +1,3 @@
### Added
- Adding `/eth/v1/debug/beacon/data_column_sidecars/{block_id}` endpoint.

View File

@@ -0,0 +1,3 @@
### Fixed
- Fixed `Unsupported config field kind; value forwarded verbatim` errors for type string.

View File

@@ -0,0 +1,3 @@
### Fixed
- DA metric was not written correctly because the if statement on err was accidentally flipped

View File

@@ -0,0 +1,3 @@
### Fixed
- Fixed 'justified' block support missing on blocker.Block and optimized logic between blocker.Block and blocker.Blob.

View File

@@ -0,0 +1,3 @@
### Fixed
- Fixed regression introduced in PR #15715: blocker now returns an error for not found, and error handling correctly returns 404 instead of 500

View File

@@ -0,0 +1,3 @@
### Added
- Support Fulu genesis block.

View File

@@ -0,0 +1,2 @@
### Ignored
- Adding context to logs around execution engine notification failures.

View File

@@ -0,0 +1,4 @@
### Changed
- Broadcast the block first, then the sidecars, instead of broadcasting block and sidecars concurrently
- Broadcast and receive sidecars concurrently instead of sequentially

View File

@@ -0,0 +1,2 @@
### Fixed
- In P2P service start, wait for the custody info to be correctly initialized.

View File

@@ -0,0 +1,3 @@
### Changed
- Updated go.mod to v1.25.1

View File

@@ -0,0 +1,2 @@
### Changed
- Improve logging of data column sidecars

changelog/manu-wait.md Normal file
View File

@@ -0,0 +1,2 @@
### Fixed
- `createLocalNode`: Wait before retrying to retrieve the custody group count if not present.

View File

@@ -0,0 +1,3 @@
### Fixed
- Fix prysmctl panic when baseFee is not set in genesis.json

View File

@@ -0,0 +1,3 @@
### Fixed
- Fix getStateRandao not returning historic RANDAO mix values

View File

@@ -0,0 +1,3 @@
### Changed
- Updated quic-go to latest version.

View File

@@ -0,0 +1,3 @@
### Added
- Update spectests to 1.6.0-beta.0

View File

@@ -0,0 +1,3 @@
### Changed
- Changed blst dependency from `http_archive` to `go_repository` so that gazelle can keep it in sync with go.mod.

changelog/pvl-go-1.25.md Normal file
View File

@@ -0,0 +1,5 @@
### Changed
- Updated go to v1.25.1
- Updated rules_go to v0.57.0
- Updated protobuf to 28.3

View File

@@ -0,0 +1,2 @@
### Added
- Added a 1 minute timeout on PruneAttestationOnEpoch operations to prevent very large bolt transactions

View File

@@ -0,0 +1,6 @@
### Ignored
- Updated pinned commit for eth-clients/holesky
- Updated pinned commit for eth-clients/hoodi
- Updated pinned commit for eth-clients/sepolia
- Removed deprecated dependency for eth-clients/eth2-networks

View File

@@ -0,0 +1,3 @@
### Changed
- Replaced reflect.TypeOf with reflect.TypeFor

View File

@@ -0,0 +1,2 @@
### Fixed
- Fix race in PriorityQueue.Pop by checking emptiness under the write lock. (#15726)

View File

@@ -0,0 +1,3 @@
### Added
- SSZ-QL: Handle `Bitlist` and `Bitvector` types.

View File

@@ -0,0 +1,3 @@
### Fixed
- SSZ-QL: Support nested `List` type (e.g., `ExecutionPayload.Transactions`)

View File

@@ -0,0 +1,3 @@
### Changed
- Set Fulu fork epochs for Holesky, Hoodi, and Sepolia testnets

View File

@@ -0,0 +1,3 @@
### Ignored
- Update golangci-lint config to enable only basic linters that currently pass

View File

@@ -34,7 +34,11 @@ go_test(
deps = [
"//crypto/bls:go_default_library",
"//runtime/interop:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_ethereum_go_ethereum//core:go_default_library",
"@com_github_ethereum_go_ethereum//core/types:go_default_library",
"@com_github_ethereum_go_ethereum//params:go_default_library",
],
)

View File

@@ -279,6 +279,14 @@ func generateGenesis(ctx context.Context) (state.BeaconState, error) {
if v > version.Altair {
// set ttd to zero so EL goes post-merge immediately
gen.Config.TerminalTotalDifficulty = big.NewInt(0)
if gen.BaseFee == nil {
return nil, errors.New("baseFeePerGas must be set in genesis.json for Post-Merge networks (after Altair)")
}
} else {
if gen.BaseFee == nil {
gen.BaseFee = big.NewInt(1000000000) // 1 Gwei default
log.WithField("baseFeePerGas", "1000000000").Warn("BaseFeePerGas not specified in genesis.json, using default value of 1 Gwei")
}
}
} else {
gen = interop.GethTestnetGenesis(time.Unix(int64(f.GenesisTime), 0), params.BeaconConfig())

View File

@@ -1,14 +1,21 @@
package testnet
import (
"context"
"encoding/json"
"fmt"
"math/big"
"os"
"testing"
"github.com/OffchainLabs/prysm/v6/crypto/bls"
"github.com/OffchainLabs/prysm/v6/runtime/interop"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/ethereum/go-ethereum/core"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/params"
)
func Test_genesisStateFromJSONValidators(t *testing.T) {
@@ -48,3 +55,76 @@ func createGenesisDepositData(t *testing.T, numKeys int) []*depositDataJSON {
}
return jsonData
}
func Test_generateGenesis_BaseFeeValidation(t *testing.T) {
tests := []struct {
name string
forkVersion int
baseFee *big.Int
expectError bool
errorMsg string
}{
{
name: "Pre-merge Altair network without baseFee - should use default",
forkVersion: version.Altair,
baseFee: nil,
expectError: false,
},
{
name: "Post-merge Bellatrix network without baseFee - should error",
forkVersion: version.Bellatrix,
baseFee: nil,
expectError: true,
errorMsg: "baseFeePerGas must be set in genesis.json for Post-Merge networks (after Altair)",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
// Save original flags
originalFlags := generateGenesisStateFlags
defer func() {
generateGenesisStateFlags = originalFlags
}()
// Set up test flags
generateGenesisStateFlags.NumValidators = 2
generateGenesisStateFlags.GenesisTime = 1609459200
generateGenesisStateFlags.ForkName = version.String(tt.forkVersion)
// Create a minimal genesis JSON for testing
genesis := &core.Genesis{
BaseFee: tt.baseFee,
Difficulty: big.NewInt(0),
GasLimit: 15000000,
Alloc: types.GenesisAlloc{},
Config: &params.ChainConfig{
ChainID: big.NewInt(32382),
},
}
// Create temporary genesis JSON file
genesisJSON, err := json.Marshal(genesis)
require.NoError(t, err)
tmpFile := t.TempDir() + "/genesis.json"
err = writeFile(tmpFile, genesisJSON)
require.NoError(t, err)
generateGenesisStateFlags.GethGenesisJsonIn = tmpFile
ctx := context.Background()
_, err = generateGenesis(ctx)
if tt.expectError {
require.ErrorContains(t, tt.errorMsg, err)
} else {
require.NoError(t, err)
}
})
}
}
func writeFile(path string, data []byte) error {
return os.WriteFile(path, data, 0644)
}

View File

@@ -67,7 +67,6 @@ go_test(
"testdata/e2e_config.yaml",
"@consensus_spec//:spec_data",
"@consensus_spec_tests//:test_data",
"@eth2_networks//:configs",
"@holesky_testnet//:configs",
"@hoodi_testnet//:configs",
"@mainnet//:configs",

View File

@@ -30,6 +30,7 @@ var placeholderFields = []string{
"ATTESTATION_DUE_BPS",
"ATTESTATION_DUE_BPS_GLOAS",
"BLOB_SIDECAR_SUBNET_COUNT_FULU",
"CELLS_PER_EXT_BLOB",
"CONTRIBUTION_DUE_BPS",
"CONTRIBUTION_DUE_BPS_GLOAS",
"EIP6110_FORK_EPOCH",
@@ -45,10 +46,13 @@ var placeholderFields = []string{
"EIP7928_FORK_EPOCH",
"EIP7928_FORK_VERSION",
"EPOCHS_PER_SHUFFLING_PHASE",
"FIELD_ELEMENTS_PER_CELL", // Configured as a constant in config/fieldparams/mainnet.go
"FIELD_ELEMENTS_PER_EXT_BLOB", // Configured in proto/ssz_proto_library.bzl
"GLOAS_FORK_EPOCH",
"GLOAS_FORK_VERSION",
"INCLUSION_LIST_SUBMISSION_DEADLINE",
"INCLUSION_LIST_SUBMISSION_DUE_BPS",
"KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH", // Configured in proto/ssz_proto_library.bzl
"MAX_BYTES_PER_INCLUSION_LIST",
"MAX_REQUEST_BLOB_SIDECARS_FULU",
"MAX_REQUEST_INCLUSION_LIST",
@@ -388,6 +392,7 @@ func presetsFilePath(t *testing.T, config string) []string {
path.Join(fPath, "presets", config, "capella.yaml"),
path.Join(fPath, "presets", config, "deneb.yaml"),
path.Join(fPath, "presets", config, "electra.yaml"),
path.Join(fPath, "presets", config, "fulu.yaml"),
}
}

View File

@@ -1,7 +1,5 @@
package params
import "math"
// UseHoleskyNetworkConfig uses the Holesky beacon chain specific network config.
func UseHoleskyNetworkConfig() {
cfg := BeaconNetworkConfig().Copy()
@@ -41,12 +39,21 @@ func HoleskyConfig() *BeaconChainConfig {
cfg.DenebForkVersion = []byte{0x05, 0x1, 0x70, 0x0}
cfg.ElectraForkEpoch = 115968 // Mon, Feb 24 at 21:55:12 UTC
cfg.ElectraForkVersion = []byte{0x06, 0x1, 0x70, 0x0}
cfg.FuluForkEpoch = math.MaxUint64
cfg.FuluForkVersion = []byte{0x07, 0x1, 0x70, 0x0} // TODO: Define holesky fork version for fulu. This is a placeholder value.
cfg.FuluForkEpoch = 165120 // 2025-10-01 08:48:00 UTC
cfg.FuluForkVersion = []byte{0x07, 0x1, 0x70, 0x0}
cfg.TerminalTotalDifficulty = "0"
cfg.DepositContractAddress = "0x4242424242424242424242424242424242424242"
cfg.EjectionBalance = 28000000000
cfg.BlobSchedule = []BlobScheduleEntry{}
cfg.BlobSchedule = []BlobScheduleEntry{
{
MaxBlobsPerBlock: 15,
Epoch: 166400, // 2025-10-07 01:20:00 UTC
},
{
MaxBlobsPerBlock: 21,
Epoch: 167936, // 2025-10-13 21:10:24 UTC
},
}
cfg.InitializeForkSchedule()
return cfg
}

View File

@@ -1,9 +1,5 @@
package params
import (
"math"
)
// UseHoodiNetworkConfig uses the Hoodi beacon chain specific network config.
func UseHoodiNetworkConfig() {
cfg := BeaconNetworkConfig().Copy()
@@ -49,11 +45,20 @@ func HoodiConfig() *BeaconChainConfig {
cfg.DenebForkVersion = []byte{0x50, 0x00, 0x09, 0x10}
cfg.ElectraForkEpoch = 2048
cfg.ElectraForkVersion = []byte{0x60, 0x00, 0x09, 0x10}
cfg.FuluForkEpoch = math.MaxUint64
cfg.FuluForkEpoch = 50688 // 2025-10-28 18:53:12 UTC
cfg.FuluForkVersion = []byte{0x70, 0x00, 0x09, 0x10}
cfg.TerminalTotalDifficulty = "0"
cfg.DepositContractAddress = "0x00000000219ab540356cBB839Cbe05303d7705Fa"
cfg.BlobSchedule = []BlobScheduleEntry{}
cfg.BlobSchedule = []BlobScheduleEntry{
{
MaxBlobsPerBlock: 15,
Epoch: 52480, // 2025-11-05 18:02:00 UTC
},
{
MaxBlobsPerBlock: 21,
Epoch: 54016, // 2025-11-12 13:52:24 UTC
},
}
cfg.DefaultBuilderGasLimit = uint64(60000000)
cfg.InitializeForkSchedule()
return cfg

View File

@@ -1,8 +1,6 @@
package params
import (
"math"
eth1Params "github.com/ethereum/go-ethereum/params"
)
@@ -46,12 +44,21 @@ func SepoliaConfig() *BeaconChainConfig {
cfg.DenebForkVersion = []byte{0x90, 0x00, 0x00, 0x73}
cfg.ElectraForkEpoch = 222464 // Wed, Mar 5 at 07:29:36 UTC
cfg.ElectraForkVersion = []byte{0x90, 0x00, 0x00, 0x74}
cfg.FuluForkEpoch = math.MaxUint64
cfg.FuluForkVersion = []byte{0x90, 0x00, 0x00, 0x75} // TODO: Define sepolia fork version for fulu. This is a placeholder value.
cfg.FuluForkEpoch = 272640 // 2025-10-14 07:36:00 UTC
cfg.FuluForkVersion = []byte{0x90, 0x00, 0x00, 0x75}
cfg.TerminalTotalDifficulty = "17000000000000000"
cfg.DepositContractAddress = "0x7f02C3E3c98b133055B8B348B2Ac625669Ed295D"
cfg.DefaultBuilderGasLimit = uint64(60000000)
cfg.BlobSchedule = []BlobScheduleEntry{}
cfg.BlobSchedule = []BlobScheduleEntry{
{
MaxBlobsPerBlock: 15,
Epoch: 274176, // 2025-10-21 03:26:24 UTC
},
{
MaxBlobsPerBlock: 21,
Epoch: 275712, // 2025-10-27 23:16:48 UTC
},
}
cfg.InitializeForkSchedule()
return cfg
}

View File

@@ -86,13 +86,13 @@ func (pq *PriorityQueue) Len() int {
// wrapper/convenience method that calls heap.Pop, so consumers do not need to
// invoke heap functions directly
func (pq *PriorityQueue) Pop() (*Item, error) {
if pq.Len() == 0 {
return nil, ErrEmpty
}
pq.lock.Lock()
defer pq.lock.Unlock()
if pq.data.Len() == 0 {
return nil, ErrEmpty
}
item, ok := heap.Pop(&pq.data).(*Item)
if !ok {
return nil, errors.New("unknown type")

View File

@@ -2915,8 +2915,8 @@ def prysm_deps():
"gazelle:exclude tools.go",
],
importpath = "github.com/quic-go/quic-go",
sum = "h1:w5iJHXwHxs1QxyBv1EHKuC50GX5to8mJAxvtnttJp94=",
version = "v0.49.0",
sum = "h1:x09Agz4ATTMEP3qb5P0MRxNZfd6O9wAyK3qwwqQZVQc=",
version = "v0.49.1-0.20250925085836-275c172fec2b",
)
go_repository(
name = "com_github_quic_go_webtransport_go",
@@ -3312,6 +3312,15 @@ def prysm_deps():
sum = "h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=",
version = "v1.10.0",
)
go_repository(
name = "com_github_supranational_blst",
build_file_generation = "off",
importpath = "github.com/supranational/blst",
patch_args = ["-p1"],
patches = ["//third_party:com_github_supranational_blst.patch"],
sum = "h1:xNMoHRJOTwMn63ip6qoWJ2Ymgvj7E2b9jY2FAwY+qRo=",
version = "v0.3.14",
)
go_repository(
name = "com_github_syndtr_goleveldb",
importpath = "github.com/syndtr/goleveldb",
@@ -4939,13 +4948,3 @@ def prysm_deps():
sum = "h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=",
version = "v1.27.0",
)
http_archive(
name = "com_github_supranational_blst",
urls = [
"https://github.com/supranational/blst/archive/8c7db7fe8d2ce6e76dc398ebd4d475c0ec564355.tar.gz",
],
strip_prefix = "blst-8c7db7fe8d2ce6e76dc398ebd4d475c0ec564355",
build_file = "//third_party:blst/blst.BUILD",
sha256 = "e9041d03594271c9739d22d9f013ea8b5c28403285a2e8938f6e41a2437c6ff8",
)

View File

@@ -320,6 +320,6 @@ func IsProto(item interface{}) bool {
return ok
}
elemTyp := typ.Elem()
modelType := reflect.TypeOf((*proto.Message)(nil)).Elem()
modelType := reflect.TypeFor[proto.Message]()
return elemTyp.Implements(modelType)
}

View File

@@ -4,6 +4,8 @@ go_library(
name = "go_default_library",
srcs = [
"analyzer.go",
"bitlist.go",
"bitvector.go",
"container.go",
"list.go",
"path.go",
@@ -15,6 +17,7 @@ go_library(
],
importpath = "github.com/OffchainLabs/prysm/v6/encoding/ssz/query",
visibility = ["//visibility:public"],
deps = ["@com_github_prysmaticlabs_go_bitfield//:go_default_library"],
)
go_test(
@@ -30,5 +33,6 @@ go_test(
"//encoding/ssz/query/testutil:go_default_library",
"//proto/ssz_query:go_default_library",
"//testing/require:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
],
)

View File

@@ -60,13 +60,44 @@ func PopulateVariableLengthInfo(sszInfo *sszInfo, value any) error {
 		if val.Kind() != reflect.Slice {
 			return fmt.Errorf("expected slice for List type, got %v", val.Kind())
 		}
-		length := val.Len()
-		if err := listInfo.SetLength(length); err != nil {
+		length := uint64(val.Len())
+		if listInfo.element.isVariable {
+			listInfo.elementSizes = make([]uint64, 0, length)
+			// Populate nested variable-sized type element lengths recursively.
+			for i := range length {
+				if err := PopulateVariableLengthInfo(listInfo.element, val.Index(i).Interface()); err != nil {
+					return fmt.Errorf("could not populate nested list element at index %d: %w", i, err)
+				}
+				listInfo.elementSizes = append(listInfo.elementSizes, listInfo.element.Size())
+			}
+		}
+		if err := listInfo.SetLength(uint64(length)); err != nil {
 			return fmt.Errorf("could not set list length: %w", err)
 		}
 		return nil
+	// In Bitlist case, we have to set the actual length of the bitlist.
+	case Bitlist:
+		bitlistInfo, err := sszInfo.BitlistInfo()
+		if err != nil {
+			return fmt.Errorf("could not get bitlist info: %w", err)
+		}
+		if bitlistInfo == nil {
+			return errors.New("bitlistInfo is nil")
+		}
+		val := reflect.ValueOf(value)
+		if err := bitlistInfo.SetLengthFromBytes(val.Bytes()); err != nil {
+			return fmt.Errorf("could not set bitlist length from bytes: %w", err)
+		}
+		return nil
 	// In Container case, we need to recursively populate variable-sized fields.
 	case Container:
 		containerInfo, err := sszInfo.ContainerInfo()
@@ -92,15 +123,20 @@ func PopulateVariableLengthInfo(sszInfo *sszInfo, value any) error {
 				continue
 			}
-			// Set the actual offset for variable-sized fields.
-			fieldInfo.offset = currentOffset
 			// Recursively populate variable-sized fields.
 			fieldValue := derefValue.FieldByName(fieldInfo.goFieldName)
 			if err := PopulateVariableLengthInfo(childSszInfo, fieldValue.Interface()); err != nil {
 				return fmt.Errorf("could not populate from value for field %s: %w", fieldName, err)
 			}
+			// Each variable-sized element needs an offset entry.
+			if childSszInfo.sszType == List {
+				currentOffset += childSszInfo.listInfo.OffsetBytes()
+			}
+			// Set the actual offset for variable-sized fields.
+			fieldInfo.offset = currentOffset
 			currentOffset += childSszInfo.Size()
 		}
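The `SetLengthFromBytes` call in the new Bitlist branch above depends on SSZ's sentinel-bit encoding: the highest set bit of the final byte marks the end of the bitlist, so the bit length is recoverable from the raw bytes alone. The real method lives on prysm's `bitlistInfo`; this standalone helper is only an illustrative sketch of that computation:

```go
package main

import (
	"errors"
	"fmt"
	"math/bits"
)

// bitlistLen returns the number of data bits in an SSZ-encoded bitlist.
// The most significant set bit of the last byte is the sentinel and is
// not counted as data.
func bitlistLen(b []byte) (uint64, error) {
	if len(b) == 0 {
		return 0, errors.New("bitlist must contain at least the sentinel byte")
	}
	last := b[len(b)-1]
	if last == 0 {
		return 0, errors.New("missing sentinel bit in final byte")
	}
	// bits.Len8 gives the 1-based position of the highest set bit,
	// i.e. the sentinel; everything below it is data.
	return 8*uint64(len(b)-1) + uint64(bits.Len8(last)) - 1, nil
}

func main() {
	n, _ := bitlistLen([]byte{0x0b}) // 0b00001011: sentinel at bit 3 → 3 data bits
	fmt.Println(n)
}
```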
@@ -194,7 +230,7 @@ func analyzeHomogeneousColType(typ reflect.Type, tag *reflect.StructTag) (*sszIn
 			return nil, fmt.Errorf("could not get list limit: %w", err)
 		}
-		return analyzeListType(typ, elementInfo, limit)
+		return analyzeListType(typ, elementInfo, limit, sszDimension.isBitfield)
 	}

 	// 2. Handle Vector/Bitvector type
@@ -204,7 +240,7 @@ func analyzeHomogeneousColType(typ reflect.Type, tag *reflect.StructTag) (*sszIn
 			return nil, fmt.Errorf("could not get vector length: %w", err)
 		}
-		return analyzeVectorType(typ, elementInfo, length)
+		return analyzeVectorType(typ, elementInfo, length, sszDimension.isBitfield)
 	}

 	// Parsing ssz tag doesn't provide enough information to determine the collection type,
@@ -212,8 +248,22 @@ func analyzeHomogeneousColType(typ reflect.Type, tag *reflect.StructTag) (*sszIn
 	return nil, errors.New("could not determine collection type from tags")
 }

-// analyzeListType analyzes SSZ List type and returns its SSZ info.
-func analyzeListType(typ reflect.Type, elementInfo *sszInfo, limit uint64) (*sszInfo, error) {
+// analyzeListType analyzes SSZ List/Bitlist type and returns its SSZ info.
+func analyzeListType(typ reflect.Type, elementInfo *sszInfo, limit uint64, isBitfield bool) (*sszInfo, error) {
+	if isBitfield {
+		return &sszInfo{
+			sszType:    Bitlist,
+			typ:        typ,
+			fixedSize:  offsetBytes,
+			isVariable: true,
+			bitlistInfo: &bitlistInfo{
+				limit: limit,
+			},
+		}, nil
+	}
 	if elementInfo == nil {
 		return nil, errors.New("element info is required for List")
 	}
@@ -232,10 +282,25 @@ func analyzeListType(typ reflect.Type, elementInfo *sszInfo, limit uint64) (*ssz
 		}, nil
 	}

-// analyzeVectorType analyzes SSZ Vector type and returns its SSZ info.
-func analyzeVectorType(typ reflect.Type, elementInfo *sszInfo, length uint64) (*sszInfo, error) {
+// analyzeVectorType analyzes SSZ Vector/Bitvector type and returns its SSZ info.
+func analyzeVectorType(typ reflect.Type, elementInfo *sszInfo, length uint64, isBitfield bool) (*sszInfo, error) {
+	if isBitfield {
+		return &sszInfo{
+			sszType: Bitvector,
+			typ:     typ,
+			// Size in bytes
+			fixedSize:  length,
+			isVariable: false,
+			bitvectorInfo: &bitvectorInfo{
+				length: length * 8, // length in bits
+			},
+		}, nil
+	}
 	if elementInfo == nil {
-		return nil, errors.New("element info is required for Vector")
+		return nil, errors.New("element info is required for Vector/Bitvector")
 	}
 	// Validate the given length.
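The `fixedSize: offsetBytes` assignment in the Bitlist branch above reflects how SSZ serializes containers: each variable-sized field occupies a fixed 4-byte offset slot in the fixed part, and its actual payload is appended after all fixed parts, which is also what the `currentOffset` bookkeeping in `PopulateVariableLengthInfo` tracks. A hedged standalone sketch of that layout rule (the `field` struct and the field names are hypothetical, invented for illustration):

```go
package main

import "fmt"

const offsetBytes = 4 // SSZ: each variable-sized field gets a 4-byte offset slot

// field describes one container field for this sketch.
type field struct {
	name     string
	variable bool
	size     uint64 // fixed size, or serialized size once lengths are known
}

// offsets returns, for each variable-sized field, where its payload starts
// in the serialized container: fixed parts and offset slots come first,
// then the variable payloads in field order.
func offsets(fields []field) map[string]uint64 {
	fixed := uint64(0)
	for _, f := range fields {
		if f.variable {
			fixed += offsetBytes // placeholder slot in the fixed part
		} else {
			fixed += f.size
		}
	}
	out := make(map[string]uint64)
	cur := fixed
	for _, f := range fields {
		if f.variable {
			out[f.name] = cur
			cur += f.size
		}
	}
	return out
}

func main() {
	fs := []field{
		{name: "slot", variable: false, size: 8},
		{name: "attestations", variable: true, size: 100},
		{name: "root", variable: false, size: 32},
		{name: "extra", variable: true, size: 12},
	}
	fmt.Println(offsets(fs))
}
```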


@@ -13,5 +13,5 @@ func TestAnalyzeSSZInfo(t *testing.T) {
 	require.NoError(t, err)
 	require.NotNil(t, info, "Expected non-nil SSZ info")
-	require.Equal(t, uint64(493), info.FixedSize(), "Expected fixed size to be 333")
+	require.Equal(t, uint64(565), info.FixedSize())
 }
