Compare commits

...

43 Commits

Author SHA1 Message Date
Manu NALEPA
d5454096be Revert "Separate type for unaggregated network attestations (#14659)"
This reverts commit 153d1872ae.
2025-01-15 21:17:24 +01:00
Manu NALEPA
f6aa197c4e Merge branch 'develop' into peerDAS 2025-01-15 20:59:22 +01:00
Manu NALEPA
e07341e1d5 ToBlinded: Use Fulu. (#14797)
* `ToBlinded`: Use Fulu.

* Fix Sammy's comment.

* `unmarshalState`: Use `hasFuluKey`.
2025-01-15 19:50:48 +00:00
Manu NALEPA
6fb349ea76 unmarshalState: Use hasFuluKey. 2025-01-15 20:48:25 +01:00
Preston Van Loon
ef293e52f8 Use ip.addr.tools for DNS resolution test in p2p (#14800)
* Use ip.addr.tools for DNS resolution test in p2p

* Changelog fragment
2025-01-15 18:07:00 +00:00
Manu NALEPA
e5a425f5c7 Merge branch 'develop' into peerDAS 2025-01-15 17:18:34 +01:00
Jun Song
72cc63a6a3 Clean TestCanUpgrade* tests (#14791)
* Clean TestCanUpgrade* tests

* Add new changelog file
2025-01-15 15:42:21 +00:00
Jun Song
34ff4c3ea9 Replace new exampleIP for example.org (#14795)
* Replace new exampleIP for example.org

* Add changelog
2025-01-15 06:14:25 +00:00
kasey
e8c968326a switch to unclog latest release instead of run artifact (#14793)
* use latest unclog release instead of run artifact

* add missing changelog entries to pre-unclog starter pack

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-01-14 21:47:20 +00:00
kasey
2000ef457b Move prysm to unclog for changelog management (#14782)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-01-14 18:22:38 +00:00
Manu NALEPA
f157d37e4c peerDAS: Decouple network subnets from das-core. (#14784)
https://github.com/ethereum/consensus-specs/pull/3832/
2025-01-14 10:45:05 +01:00
james-prysm
e36564c4d3 goethereum dependency to v1.14~ (#14351)
* updating the goethereum dependency

* fixing dependencies

* reverting workspace

* more fixes, work in progress

* trying with upgraded geth version

* fixing deprecated functions except for the time related ones on eth1 distance due to time issues

* fixing time issues

* gaz

* fixing test and upgrading some dependencies and reverting others

* Disable cgo in hid, delete old vendored usb library

* changelog

* rolling back dependencies

* fixing go mod tidy

* Geth v1.13.6

* fix tests

* Add ping interval, set to 500ms for tests. This didnt work

* Update to v1.14.8

* Spread it out to different bootnodes

* Fix it

* Remove Memsize

* Update all out of date dependencies

* Fix geth body change

* Fix Test

* Fix Build

* Fix Tests

* Fix Tests Again

* Fix Tests Again

* Fix Tests

* Fix Test

* Copy USB Package for HID

* Push it up

* Finally fix all tests with felix's changes

* updating geth dependency

* Update go-ethereum to v1.14.11

* fixing import

* reverting blob change

* fixing Implicit memory aliasing in for loop.

* WIP changes

* wip getting a little further on e2e runs

* getting a little further

* getting a little further

* setting everything to capella

* more partial fixes

* more fixes but still WIP

* fixing access list transactions"

* some cleanup

* making configs dynamic

* reverting time

* skip lower bound in builder

* updating to geth v1.14.12

* fixing verify blob to pointer

* go mod tidy

* fixing linting

* missed removing another terminal difficulty item

* another missed update

* updating more dependencies to fix cicd

* fixing holiman dependency update

* downgrading geth to 1.14.11 due to p2p loop issue

* reverting builder middleware caused by downgrade

* fixing more rollback issues

* upgrading back to 1.14.12 after discussing with preston

* mod tidy

* gofmt

* partial review feedback

* trying to start e2e from bellatrix instead

* reverting some changes

---------

Co-authored-by: Preston Van Loon <preston@pvl.dev>
Co-authored-by: nisdas <nishdas93@gmail.com>
2025-01-14 08:35:49 +00:00
terence
e5784d09f0 Update spec test to v1.5.0-beta.0 (#14788) 2025-01-13 20:05:34 +00:00
terence
e577bb0dcf Revert BlobSidecarsByRoot/Range version to v1 (#14785)
* Update blobs by range rpc topics to v1

* Update blobs by range rpc topics to v1

* RPC handler comments: Use "Added", "Modified" and "Upgraded".

- Added: No message with this message name was previously existing.
- Upgraded: A message with this message name was existing in the previous fork, but the schema version is upgraded in the current fork.
- Modified: The couple message name, schema version is the same than in the previous fork, but the implementation of the handler is modified in the current fork.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-01-13 16:58:44 +00:00
Radosław Kapka
153d1872ae Separate type for unaggregated network attestations (#14659)
* definitions and gossip

* validator

* broadcast

* broadcast the correct att depending on version

* small updates

* don't check bits after Electra

* nitpick

* tests

* changelog <3

* review

* more review

* review yet again

* try a different design

* fix gossip issues

* cleanup

* tests

* reduce cognitive complexity

* Preston's review

* move changelog entry to unreleased section

* fix pending atts pool issues

* reviews

* Potuz's comments

* test fixes
2025-01-13 16:48:20 +00:00
terence
80aa811ab9 Fix kzg commitment inclusion proof depth minimal value (#14787) 2025-01-13 15:51:30 +00:00
Bastin
39cf2c8f06 Read LC Bootstraps from DB (#14718)
* fix bootstrap handler

* add tests for get bootstrap handler

* changelog

* make CreateDefaultLightClientBootstrap private

* remove fulu test

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-01-10 16:51:55 +00:00
Bastin
e99df5e489 Save LC Bootstrap DB Space Optimization (#14775)
* add db optimization altair

* add db optimization

* add tests

* fix tests

* fix changelog

* remove debug code

* remove unused code

* fix comment

* remove unused code
2025-01-09 18:00:08 +00:00
oftenoccur
393e63e8e5 refactor: using slices.Contains to simplify the code (#14781)
Signed-off-by: oftenoccur <ezc5@sina.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2025-01-09 00:04:45 +00:00
Manu NALEPA
5f08559bef Merge branch 'develop' into peerDAS 2025-01-08 10:18:18 +01:00
Manu NALEPA
c48d40907c Add Fulu fork boilerplate (#14771)
* Prepare for future fork boilerplate.

* Implement the Fulu fork boilerplate.

* `Upgraded state to <fork> log`: Move from debug to info.

Rationale:
This log is the only one notifying the user a new fork happened.
A new fork is always a little bit stressful for a node operator.
Having at least one log indicating the client switched fork is something useful.

* Update testing/util/helpers.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Fix Radek's comment.

* Fix Radek's comment.

* Update beacon-chain/state/state-native/state_trie.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/state/state-native/state_trie.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Fix Radek's comment.

* Fix Radek's comment.

* Fix Radek's comment.

* Remove Electra struct type aliasing.

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-01-07 20:09:12 +00:00
Nishant Das
a21f544219 Update to v0.33 (#14780) 2025-01-07 09:20:45 +00:00
Preston Van Loon
705a9e8dcd Electra: Spec review of process_slashings (#14766)
* Electra: Review process_slashings

* Update signature not to return the state, there is no need

* Update CHANGELOG.md
2025-01-06 13:24:58 +00:00
Skylar Ray
2dcb015470 Update CHANGELOG.md (#14753) 2025-01-06 13:09:25 +00:00
Manu NALEPA
a082d2aecd Merge branch 'fulu-boilerplate' into peerDAS 2025-01-06 13:45:33 +01:00
Nishant Das
211e1a4b7c Close Streams For Metadata Requests Better (#14776)
* Close streams better

* Changelog
2025-01-06 11:01:58 +00:00
Nishant Das
1efca9c28d Add Ability to Trace IDONTWANT Control Messages (#14778)
* Trace IDONTWANT Requests

* Changelog
2025-01-06 10:13:22 +00:00
Nishant Das
97d7ca828b Security Update For Dependencies (#14777)
* Update Golang Crypto

* Changelog
2025-01-06 10:00:59 +00:00
Manu NALEPA
bcfaff8504 Upgraded state to <fork> log: Move from debug to info.
Rationale:
This log is the only one notifying the user a new fork happened.
A new fork is always a little bit stressful for a node operator.
Having at least one log indicating the client switched fork is something useful.
2025-01-05 16:22:43 +01:00
Manu NALEPA
d8e09c346f Implement the Fulu fork boilerplate. 2025-01-05 16:22:38 +01:00
Manu NALEPA
876519731b Prepare for future fork boilerplate. 2025-01-05 16:14:02 +01:00
Preston Van Loon
31df250496 Electra: Update spec definition for process_registry_updates (#14767)
* Electra: review process_registry_updates

* Update CHANGELOG.md
2025-01-03 16:15:02 +00:00
Manu NALEPA
8a439a6f5d Simplify next Fork boilerplate creation. (#14761)
* Simplify next Fork boilerplate creation.

* Update beacon-chain/blockchain/execution_engine.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/blockchain/execution_engine.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/blockchain/execution_engine.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/state/state-native/spec_parameters.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/state/state-native/spec_parameters.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/state/state-native/spec_parameters.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/state/state-native/spec_parameters.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Fix Radek's comments.

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-01-03 14:39:08 +00:00
Radosław Kapka
8cff9356f1 Remove duplicate imports (#14772) 2025-01-03 11:42:14 +00:00
Nishant Das
afeb05c9a1 Update Go-Libp2p-Pubsub (#14770)
* Update to Current Master Head

* Changelog

* Update CHANGELOG.md
2025-01-03 10:52:46 +00:00
Preston Van Loon
fa16232924 Electra: Update spec definition for process_epoch (#14768)
* Electra: Review process_epoch

* Update CHANGELOG.md
2025-01-03 09:40:53 +00:00
Preston Van Loon
1f720bdbf4 Fix all typos (#14769) 2025-01-03 09:40:13 +00:00
Brindrajsinh Chauhan
79ea77ff57 update go version to 1.22.10 (#14729)
* update to go 1.22.10

* update change log

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2025-01-02 23:07:38 +00:00
Radosław Kapka
d35cd0788e Remove v2 protos (#14765)
* Remove v2 protos

* changelog <3
2025-01-02 19:40:07 +00:00
Preston Van Loon
093e3df80a Remove slashingMultiplier argument from ProcessSlashings (#14762)
* Remove slashingMultiplier argument from ProcessSlashings

* Update CHANGELOG.md
2025-01-01 18:21:52 +00:00
Manu NALEPA
699a3b07a7 SSZ generation: Remove the // Hash: ... header. (#14760)
* `update-go-ssz.sh`: Remove the `// Hash: ...` line from the generated files header.

* Generate SSZ files.
2024-12-31 12:07:41 +00:00
Manu NALEPA
937d441e2e registerSubscribers: Register correctly. (#14759)
Issue before this commit:
The `currentSlot` in the `getSubnetsToSubscribe` is really set to the current slot. So, even if the `registerSubscribers` function is called
one epoch in advance, as done in `registerForUpcomingFork`, then
`currentSlot` will still be - really - the current slot.
==> The new blobs subnets will be subscribed only at the start of the
Electra fork, and not one epoch in advance.

With this commit:
The new blobs subnets will be effectively subscribed one epoch in advance.
2024-12-31 08:42:56 +00:00
terence
ead08d56d0 Implement EIP7691: Blob throughput increase (#14750)
* Add EIP-7691 blob throughput increase

* Review feedback

* Factorize blobs subscriptions.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-12-30 19:23:56 +00:00
311 changed files with 16097 additions and 20259 deletions

View File

@@ -33,5 +33,5 @@ Fixes #
**Acknowledgements**
- [ ] I have read [CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have made an appropriate entry to [CHANGELOG.md](https://github.com/prysmaticlabs/prysm/blob/develop/CHANGELOG.md).
- [ ] I have included a uniquely named [changelog fragment file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [ ] I have added a description to this PR with sufficient context for reviewers to understand this PR.

View File

@@ -1,33 +1,33 @@
name: CI
# This workflow will build a golang project
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-go
name: changelog
on:
pull_request:
branches:
- develop
branches: [ "develop" ]
jobs:
changed_files:
runs-on: ubuntu-latest
name: Check CHANGELOG.md
run-changelog-check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: changelog modified
id: changelog-modified
- name: Checkout source code
uses: actions/checkout@v3
- name: Download unclog binary
uses: dsaltares/fetch-gh-release-asset@aa2ab1243d6e0d5b405b973c89fa4d06a2d0fff7 # 1.1.2
with:
repo: OffchainLabs/unclog
file: "unclog"
- name: Get new changelog files
id: new-changelog-files
uses: tj-actions/changed-files@v45
with:
files: CHANGELOG.md
files: |
changelog/**.md
- name: List all changed files
- name: Run lint command
env:
ALL_CHANGED_FILES: ${{ steps.changelog-modified.outputs.all_changed_files }}
run: |
if [[ ${ALL_CHANGED_FILES[*]} =~ (^|[[:space:]])"CHANGELOG.md"($|[[:space:]]) ]];
then
echo "CHANGELOG.md was modified.";
exit 0;
else
echo "CHANGELOG.md was not modified.";
echo "Please see CHANGELOG.md and follow the instructions to add your changes to that file."
echo "In some rare scenarios, a changelog entry is not required and this CI check can be ignored."
exit 1;
fi
ALL_ADDED_MARKDOWN: ${{ steps.new-changelog-files.outputs.added_files }}
run: chmod +x unclog && ./unclog check -fragment-env=ALL_ADDED_MARKDOWN

View File

@@ -16,7 +16,7 @@ jobs:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
with:
go-version: '1.22.3'
go-version: '1.22.10'
- id: list
uses: shogo82148/actions-go-fuzz/list@v0
with:
@@ -36,7 +36,7 @@ jobs:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
with:
go-version: '1.22.3'
go-version: '1.22.10'
- uses: shogo82148/actions-go-fuzz/run@v0
with:
packages: ${{ matrix.package }}

View File

@@ -31,7 +31,7 @@ jobs:
- name: Set up Go 1.22
uses: actions/setup-go@v4
with:
go-version: '1.22.6'
go-version: '1.22.10'
- name: Run Gosec Security Scanner
run: | # https://github.com/securego/gosec/issues/469
export PATH=$PATH:$(go env GOPATH)/bin
@@ -48,7 +48,7 @@ jobs:
- name: Set up Go 1.22
uses: actions/setup-go@v4
with:
go-version: '1.22.6'
go-version: '1.22.10'
id: go
- name: Golangci-lint
@@ -64,7 +64,7 @@ jobs:
- name: Set up Go 1.x
uses: actions/setup-go@v4
with:
go-version: '1.22.6'
go-version: '1.22.10'
id: go
- name: Check out code into the Go module directory

View File

@@ -6,7 +6,7 @@ run:
- proto
- tools/analyzers
timeout: 10m
go: '1.22.6'
go: '1.22.10'
linters:
enable-all: true

View File

@@ -4,45 +4,6 @@ All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [Unreleased](https://github.com/prysmaticlabs/prysm/compare/v5.2.0...HEAD)
### Added
- Added proper gas limit check for header from the builder.
- Added an error field to log `Finished building block`.
- Implemented a new `EmptyExecutionPayloadHeader` function.
- `Finished building block`: Display error only if not nil.
- Added support to update target and max blob count to different values per hard fork config.
- Log before blob filesystem cache warm-up.
- New design for the attestation pool. [PR](https://github.com/prysmaticlabs/prysm/pull/14324)
- Add field param placeholder for Electra blob target and max to pass spec tests.
### Changed
- Process light client finality updates only for new finalized epochs instead of doing it for every block.
- Refactor subnets subscriptions.
- Refactor RPC handlers subscriptions.
- Go deps upgrade, from `ioutil` to `io`
- Move successfully registered validator(s) on builder log to debug.
- Update some test files to use `crypto/rand` instead of `math/rand`
- Enforce Compound prefix (0x02) for target when processing pending consolidation request.
- Limit consolidating by validator's effective balance.
- Use 16-bit random value for proposer and sync committee selection filter.
- Re-organize the content of the `*.proto` files (No functional change).
### Deprecated
### Removed
### Fixed
- Added check to prevent nil pointer deference or out of bounds array access when validating the BLSToExecutionChange on an impossibly nil validator.
### Security
## [v5.2.0](https://github.com/prysmaticlabs/prysm/compare/v5.1.2...v5.2.0)
Updating to this release is highly recommended, especially for users running v5.1.1 or v5.1.2.
@@ -145,7 +106,7 @@ Notable features:
- Removed finalized validator index cache, no longer needed.
- Removed validator queue position log on key reload and wait for activation.
- Removed outdated spectest exclusions for EIP-6110.
- Removed support for starting a beacon node with a deterministic interop genesis state via interop flags. Alteratively, create a genesis state with prysmctl and use `--genesis-state`. This removes about 9Mb (~11%) of unnecessary code and dependencies from the final production binary.
- Removed support for starting a beacon node with a deterministic interop genesis state via interop flags. Alternatively, create a genesis state with prysmctl and use `--genesis-state`. This removes about 9Mb (~11%) of unnecessary code and dependencies from the final production binary.
- Removed kzg proof check from blob reconstructor.
### Fixed
@@ -159,7 +120,7 @@ Notable features:
- Fix `--backfill-oldest-slot` handling - this flag was totally broken, the code would always backfill to the default slot [pr](https://github.com/prysmaticlabs/prysm/pull/14584)
- Fix keymanager API should return corrected error format for malformed tokens
- Fix keymanager API so that get keys returns an empty response instead of a 500 error when using an unsupported keystore.
- Small log imporvement, removing some redundant or duplicate logs
- Small log improvement, removing some redundant or duplicate logs
- EIP7521 - Fixes withdrawal bug by accounting for pending partial withdrawals and deducting already withdrawn amounts from the sweep balance. [PR](https://github.com/prysmaticlabs/prysm/pull/14578)
- unskip electra merkle spec test
- Fix panic in validator REST mode when checking status after removing all keys
@@ -189,7 +150,7 @@ meantime we are issuing a patch that recovers from the panic to prevent the node
This only impacts the v5.1.1 release beacon api event stream endpoints. This endpoint is used by the
prysm REST mode validator (a feature which requires the validator to be configured to use the beacon
api intead of prysm's stock grpc endpoints) or accessory software that connects to the events api,
api instead of prysm's stock grpc endpoints) or accessory software that connects to the events api,
like https://github.com/ethpandaops/ethereum-metrics-exporter
### Fixed
@@ -432,7 +393,7 @@ Updating to this release is recommended at your earliest convenience, especially
- use time.NewTimer() to avoid possible memory leaks
- paranoid underflow protection without error handling
- Fix CommitteeAssignments to not return every validator
- Fix dependent root retrival genesis case
- Fix dependent root retrieval genesis case
- Restrict Dials From Discovery
- Always close cache warm chan to prevent blocking
- Keep only the latest value in the health channel
@@ -584,7 +545,7 @@ block profit. If you want to preserve the existing behavior, set --local-block-v
- handle special case of batch size=1
- Always Set Inprogress Boolean In Cache
- Builder APIs: adding headers to post endpoint
- Rename mispelled variable
- Rename misspelled variable
- allow blob by root within da period
- Rewrite Pruning Implementation To Handle EIP 7045
- Set default fee recipient if tracked val fails
@@ -654,7 +615,7 @@ Known Issues
- Support beacon_committee_selections
- /eth/v1/beacon/deposit_snapshot
- Docker images now have coreutils pre-installed
- da_waited_time_milliseconds tracks total time waiting for data availablity check in ReceiveBlock
- da_waited_time_milliseconds tracks total time waiting for data availability check in ReceiveBlock
- blob_written, blob_disk_count, blob_disk_bytes new metrics for tracking blobs on disk
- Backfill supports blob backfilling
- Add mainnet deneb fork epoch config
@@ -829,7 +790,7 @@ and Raspberry Pi users.
- Add Goerli Deneb Fork Epoch
- Use deneb key for deneb state in saveStatesEfficientInternal
- Initialize Inactivity Scores Correctly
- Excluse DA wait time for chain processing time
- Excludes DA wait time for chain processing time
- Initialize sig cache for verification.Initializer
- Verify roblobs
- KZG Commitment inclusion proof verifier
@@ -862,7 +823,7 @@ and Raspberry Pi users.
- Exit early if blob by root request is empty
- Request missing blobs while processing pending queue
- Check blob exists before requesting from peer
- Passing block as arugment for sidecar validation
- Passing block as argument for sidecar validation
#### Blob Management
@@ -1159,13 +1120,13 @@ _Most of the PRs here involve shifting our http endpoints to using vanilla http
- Remove no-op cancel func
- Update Terms of Service
- fix head slot in log
- DEPRECTATION: Remove exchange transition configuration call
- DEPRECATION: Remove exchange transition configuration call
- fix segmentation fork when Capella for epoch is MaxUint64
- Return Error Gracefully When Removing 4881 Flag
- Add zero length check on indices during NextSyncCommitteeIndices
- Replace Empty Slice Literals with Nil Slices
- Refactor Error String Formatting According to Go Best Practices
- Fix redundant type converstion
- Fix redundant type conversion
- docs: fix typo
- Add Clarification To Sync Committee Cache
- Fix typos
@@ -1187,7 +1148,7 @@ small set of users.
### Security
No security issues in thsi release.
No security issues in this release.
## [v4.1.0](https://github.com/prysmaticlabs/prysm/compare/v4.0.8...v4.1.0) - 2023-08-22
@@ -1703,7 +1664,7 @@ notes [here](https://hackmd.io/TtyFurRJRKuklG3n8lMO9Q). This release is **strong
Note: The released docker images are using the portable version of the blst cryptography library. The Prysm team will
release docker images with the non-portable blst library as the default image. In the meantime, you can compile docker
images with blst non-portable locally with the `--define=blst_modern=true` bazel flag, use the "-modern-" assets
attached to releases, or set environment varaible USE_PRYSM_MODERN=true when using prysm.sh.
attached to releases, or set environment variable USE_PRYSM_MODERN=true when using prysm.sh.
### Added
@@ -2052,7 +2013,7 @@ There are some known issues with this release.
- Beacon node can bootstrap from non-genesis state (i.e bellatrix state)
- Refactor bytesutil, add support for go1.20 slice to array conversions
- Add Span information for attestation record save request
- Matric addition
- Metric addition
- Identify invalid signature within batch verification
- Support for getting consensus values from beacon config
- EIP-4881: Spec implementation
@@ -2118,7 +2079,7 @@ See [flashbots/mev-boost#404](https://github.com/flashbots/mev-boost/issues/404)
- Added more histogram metrics for block arrival latency times block_arrival_latency_milliseconds
- Priority queue RetrieveByKey now uses read lock instead of write lock
- Use custom types for certain ethclient requests. Fixes an issue when using prysm on gnosis chain.
- Updted forkchoice endpoint /eth/v1/debug/forkchoice (was /eth/v1/debug/beacon/forkchoice)
- Updated forkchoice endpoint /eth/v1/debug/forkchoice (was /eth/v1/debug/beacon/forkchoice)
- Include empty fields in builder json client.
- Computing committee assignments for slots older than the oldest historical root in the beacon state is now forbidden
@@ -2350,7 +2311,7 @@ There are no security updates in this release.
removed: `GetBeaconState`, `ProduceBlock`, `ListForkChoiceHeads`, `ListBlocks`, `SubmitValidatorRegistration`, `GetBlock`, `ProposeBlock`
- API: Forkchoice method `GetForkChoice` has been removed.
- All previously deprecated feature flags have been
removed. `--enable-active-balance-cache`, `--correctly-prune-canonical-atts`, `--correctly-insert-orphaned-atts`, `--enable-next-slot-state-cache`, `--enable-batch-gossip-verification`, `--enable-get-block-optimizations`, `--enable-balance-trie-computation`, `--disable-next-slot-state-cache`, `--attestation-aggregation-strategy`, `--attestation-aggregation-force-opt-maxcover`, `--pyrmont`, `--disable-get-block-optimizations`, `--disable-proposer-atts-selection-using-max-cover`, `--disable-optimized-balance-update`, `--disable-active-balance-cache`, `--disable-balance-trie-computation`, `--disable-batch-gossip-verification`, `--disable-correctly-prune-canonical-atts`, `--disable-correctly-insert-orphaned-atts`, `--enable-native-state`, `--enable-peer-scorer`, `--enable-gossip-batch-aggregation`, `--experimental-disable-boundry-checks`
removed. `--enable-active-balance-cache`, `--correctly-prune-canonical-atts`, `--correctly-insert-orphaned-atts`, `--enable-next-slot-state-cache`, `--enable-batch-gossip-verification`, `--enable-get-block-optimizations`, `--enable-balance-trie-computation`, `--disable-next-slot-state-cache`, `--attestation-aggregation-strategy`, `--attestation-aggregation-force-opt-maxcover`, `--pyrmont`, `--disable-get-block-optimizations`, `--disable-proposer-atts-selection-using-max-cover`, `--disable-optimized-balance-update`, `--disable-active-balance-cache`, `--disable-balance-trie-computation`, `--disable-batch-gossip-verification`, `--disable-correctly-prune-canonical-atts`, `--disable-correctly-insert-orphaned-atts`, `--enable-native-state`, `--enable-peer-scorer`, `--enable-gossip-batch-aggregation`, `--experimental-disable-boundary-checks`
- Validator Web API: Removed unused ImportAccounts and DeleteAccounts rpc options
### Fixed
@@ -2566,7 +2527,7 @@ There are two known issues with this release:
- Bellatrix support. See [kiln testnet instructions](https://hackmd.io/OqIoTiQvS9KOIataIFksBQ?view)
- Weak subjectivity sync / checkpoint sync. This is an experimental feature and may have unintended side effects for
certain operators serving historical data. See
the [documentation](https://docs.prylabs.network/docs/next/prysm-usage/checkpoint-sync) for more details.
the [documentation](https://docs.prylabs.network/docs/prysm-usage/checkpoint-sync) for more details.
- A faster build of blst for beacon chain on linux amd64. Use the environment variable `USE_PRYSM_MODERN=true` with
prysm.sh, use the "modern" binary, or bazel build with `--define=blst_modern=true`.
- Vectorized sha256. This may have performance improvements with use of the new flag `--enable-vectorized-htr`.
@@ -2747,7 +2708,7 @@ notes [here](https://github.com/prysmaticlabs/prysm-web-ui/releases/tag/v1.0.0)
- Added uint64 overflow protection
- Sync committee pool returns empty slice instead of nil on cache miss
- Improved description of datadir flag
- Simplied web password requirements
- Simplified web password requirements
- Web JWT tokens no longer expire.
- Updated keymanager protos
- Watch and update jwt secret when auth token file updated on disk.
@@ -2758,7 +2719,7 @@ notes [here](https://github.com/prysmaticlabs/prysm-web-ui/releases/tag/v1.0.0)
- Refactor for weak subjectivity sync implementation
- Update naming for Atlair previous epoch attester
- Remove duplicate MerkleizeTrieLeaves method.
- Add explict error for validator flag checks on out of bound positions
- Add explicit error for validator flag checks on out of bound positions
- Simplify method to check if the beacon chain client should update the justified epoch value.
- Rename web UI performance endpoint to "summary"
- Refactor powchain service to be more functional
@@ -2784,7 +2745,7 @@ notes [here](https://github.com/prysmaticlabs/prysm-web-ui/releases/tag/v1.0.0)
Upstream go-ethereum is now used with familiar go.mod tooling.
- Removed duplicate aggergation validation p2p pipelines.
- Metrics calculation removed extra condition
- Removed superflous errors from peer scoring parameters registration
- Removed superfluous errors from peer scoring parameters registration
### Fixed

View File

@@ -125,7 +125,7 @@ Navigate to your fork of the repo on GitHub. On the upper left where the current
**16. Add an entry to CHANGELOG.md.**
If your change is user facing, you must include a CHANGELOG.md entry. See the [Maintaining CHANGELOG.md](#maintaining-changelogmd) section for more information.
All PRs must must include a changelog fragment file in the `changelog` directory. If your change is not user-facing or should not be mentioned in the changelog for some other reason, you may use the `Ignored` changelog section in your fragment's header to satisfy this requirement without altering the final release changelog. See the [Maintaining CHANGELOG.md](#maintaining-changelogmd) section for more information.
**17. Create a pull request.**
@@ -177,16 +177,10 @@ $ git push myrepo feature-in-progress-branch -f
## Maintaining CHANGELOG.md
This project follows the changelog guidelines from [keepachangelog.com](https://keepachangelog.com/en/1.1.0/).
All PRs with user facing changes should have an entry in the CHANGELOG.md file and the change should be categorized in the appropriate category within the "Unreleased" section. The categories are:
- `Added` for new features.
- `Changed` for changes in existing functionality.
- `Deprecated` for soon-to-be removed features.
- `Removed` for now removed features.
- `Fixed` for any bug fixes.
- `Security` in case of vulnerabilities. Please see the [Security Policy](SECURITY.md) for responsible disclosure before adding a change with this category.
This project follows the changelog guidelines from [keepachangelog.com](https://keepachangelog.com/en/1.1.0/). In order to minimize conflicts and workflow headaches, we chose to implement a changelog management
strategy that uses changelog "fragment" files, managed by our changelog management tool called `unclog`. Each PR must include a new changelog fragment file in the `changelog` directory, as specified by unclog's
[README.md](https://github.com/OffchainLabs/unclog?tab=readme-ov-file#what-is-a-changelog-fragment). As the `unclog` README suggests in the [Best Practices](https://github.com/OffchainLabs/unclog?tab=readme-ov-file#best-practices) section,
the standard naming convention for your PR's fragment file, to avoid conflicting with another fragment file, is `changelog/<github user name>_<PR branch name>.md`.
### Releasing

View File

@@ -182,7 +182,7 @@ load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_depe
go_rules_dependencies()
go_register_toolchains(
go_version = "1.22.4",
go_version = "1.22.10",
nogo = "@//:nogo",
)
@@ -227,7 +227,7 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.5.0-alpha.10"
consensus_spec_version = "v1.5.0-beta.0"
bls_test_version = "v0.1.1"
@@ -243,7 +243,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-NtWIhbO/mVMb1edq5jqABL0o8R1tNFiuG8PCMAsUHcs=",
integrity = "sha256-HdMlTN3wv+hUMCkIRPk+EHcLixY1cSZlvkx3obEp4AM=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
)
@@ -259,7 +259,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-DFlFlnzls1bBrDm+/xD8NK2ivvkhxR+rSNVLLqScVKc=",
integrity = "sha256-eX/ihmHQ+OvfoGJxSMgy22yAU3SZ3xjsX0FU0EaZrSs=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
)
@@ -275,7 +275,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-G9ENPF8udZL/BqRHbi60GhFPnZDPZAH6UjcjRiOlvbk=",
integrity = "sha256-k3Onf42vOzIqyddecR6G82sDy3mmDA+R8RN66QjB0GI=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
)
@@ -290,7 +290,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-ClOLKkmAcEi8/uKi6LDeqthask5+E3sgxVoA0bqmQ0c=",
integrity = "sha256-N/d4AwdOSlb70Dr+2l20dfXxNSzJDj/qKA9Rkn8Gb5w=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)

View File

@@ -16,7 +16,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
types "github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
v1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -124,7 +123,7 @@ func TestClient_RegisterValidator(t *testing.T) {
func TestClient_GetHeader(t *testing.T) {
ctx := context.Background()
expectedPath := "/eth/v1/builder/header/23/0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2/0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"
var slot types.Slot = 23
var slot primitives.Slot = 23
parentHash := ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
pubkey := ezDecode(t, "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a")
t.Run("server error", func(t *testing.T) {
@@ -370,7 +369,7 @@ func TestSubmitBlindedBlock(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 1, len(withdrawals))
assert.Equal(t, uint64(1), withdrawals[0].Index)
assert.Equal(t, types.ValidatorIndex(1), withdrawals[0].ValidatorIndex)
assert.Equal(t, primitives.ValidatorIndex(1), withdrawals[0].ValidatorIndex)
assert.DeepEqual(t, ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943"), withdrawals[0].Address)
assert.Equal(t, uint64(1), withdrawals[0].Amount)
})
@@ -409,7 +408,7 @@ func TestSubmitBlindedBlock(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 1, len(withdrawals))
assert.Equal(t, uint64(1), withdrawals[0].Index)
assert.Equal(t, types.ValidatorIndex(1), withdrawals[0].ValidatorIndex)
assert.Equal(t, primitives.ValidatorIndex(1), withdrawals[0].ValidatorIndex)
assert.DeepEqual(t, ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943"), withdrawals[0].Address)
assert.Equal(t, uint64(1), withdrawals[0].Amount)
require.NotNil(t, blobBundle)

View File

@@ -14,6 +14,10 @@ type SignedMessageJsoner interface {
SigString() string
}
// ----------------------------------------------------------------------------
// Phase 0
// ----------------------------------------------------------------------------
type SignedBeaconBlock struct {
Message *BeaconBlock `json:"message"`
Signature string `json:"signature"`
@@ -48,6 +52,29 @@ type BeaconBlockBody struct {
VoluntaryExits []*SignedVoluntaryExit `json:"voluntary_exits"`
}
type SignedBeaconBlockHeaderContainer struct {
Header *SignedBeaconBlockHeader `json:"header"`
Root string `json:"root"`
Canonical bool `json:"canonical"`
}
type SignedBeaconBlockHeader struct {
Message *BeaconBlockHeader `json:"message"`
Signature string `json:"signature"`
}
type BeaconBlockHeader struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
ParentRoot string `json:"parent_root"`
StateRoot string `json:"state_root"`
BodyRoot string `json:"body_root"`
}
// ----------------------------------------------------------------------------
// Altair
// ----------------------------------------------------------------------------
type SignedBeaconBlockAltair struct {
Message *BeaconBlockAltair `json:"message"`
Signature string `json:"signature"`
@@ -83,6 +110,10 @@ type BeaconBlockBodyAltair struct {
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
}
// ----------------------------------------------------------------------------
// Bellatrix
// ----------------------------------------------------------------------------
type SignedBeaconBlockBellatrix struct {
Message *BeaconBlockBellatrix `json:"message"`
Signature string `json:"signature"`
@@ -155,6 +186,44 @@ type BlindedBeaconBlockBodyBellatrix struct {
ExecutionPayloadHeader *ExecutionPayloadHeader `json:"execution_payload_header"`
}
type ExecutionPayload struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
Transactions []string `json:"transactions"`
}
type ExecutionPayloadHeader struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
TransactionsRoot string `json:"transactions_root"`
}
// ----------------------------------------------------------------------------
// Capella
// ----------------------------------------------------------------------------
type SignedBeaconBlockCapella struct {
Message *BeaconBlockCapella `json:"message"`
Signature string `json:"signature"`
@@ -229,6 +298,46 @@ type BlindedBeaconBlockBodyCapella struct {
BLSToExecutionChanges []*SignedBLSToExecutionChange `json:"bls_to_execution_changes"`
}
type ExecutionPayloadCapella struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
Transactions []string `json:"transactions"`
Withdrawals []*Withdrawal `json:"withdrawals"`
}
type ExecutionPayloadHeaderCapella struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
TransactionsRoot string `json:"transactions_root"`
WithdrawalsRoot string `json:"withdrawals_root"`
}
// ----------------------------------------------------------------------------
// Deneb
// ----------------------------------------------------------------------------
type SignedBeaconBlockContentsDeneb struct {
SignedBlock *SignedBeaconBlockDeneb `json:"signed_block"`
KzgProofs []string `json:"kzg_proofs"`
@@ -317,6 +426,50 @@ type BlindedBeaconBlockBodyDeneb struct {
BlobKzgCommitments []string `json:"blob_kzg_commitments"`
}
type ExecutionPayloadDeneb struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
Transactions []string `json:"transactions"`
Withdrawals []*Withdrawal `json:"withdrawals"`
BlobGasUsed string `json:"blob_gas_used"`
ExcessBlobGas string `json:"excess_blob_gas"`
}
type ExecutionPayloadHeaderDeneb struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
TransactionsRoot string `json:"transactions_root"`
WithdrawalsRoot string `json:"withdrawals_root"`
BlobGasUsed string `json:"blob_gas_used"`
ExcessBlobGas string `json:"excess_blob_gas"`
}
// ----------------------------------------------------------------------------
// Electra
// ----------------------------------------------------------------------------
type SignedBeaconBlockContentsElectra struct {
SignedBlock *SignedBeaconBlockElectra `json:"signed_block"`
KzgProofs []string `json:"kzg_proofs"`
@@ -362,7 +515,7 @@ type BeaconBlockBodyElectra struct {
Deposits []*Deposit `json:"deposits"`
VoluntaryExits []*SignedVoluntaryExit `json:"voluntary_exits"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
ExecutionPayload *ExecutionPayloadElectra `json:"execution_payload"`
ExecutionPayload *ExecutionPayloadDeneb `json:"execution_payload"`
BLSToExecutionChanges []*SignedBLSToExecutionChange `json:"bls_to_execution_changes"`
BlobKzgCommitments []string `json:"blob_kzg_commitments"`
ExecutionRequests *ExecutionRequests `json:"execution_requests"`
@@ -392,156 +545,119 @@ func (s *SignedBlindedBeaconBlockElectra) SigString() string {
}
type BlindedBeaconBlockBodyElectra struct {
RandaoReveal string `json:"randao_reveal"`
Eth1Data *Eth1Data `json:"eth1_data"`
Graffiti string `json:"graffiti"`
ProposerSlashings []*ProposerSlashing `json:"proposer_slashings"`
AttesterSlashings []*AttesterSlashingElectra `json:"attester_slashings"`
Attestations []*AttestationElectra `json:"attestations"`
Deposits []*Deposit `json:"deposits"`
VoluntaryExits []*SignedVoluntaryExit `json:"voluntary_exits"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
ExecutionPayloadHeader *ExecutionPayloadHeaderElectra `json:"execution_payload_header"`
BLSToExecutionChanges []*SignedBLSToExecutionChange `json:"bls_to_execution_changes"`
BlobKzgCommitments []string `json:"blob_kzg_commitments"`
ExecutionRequests *ExecutionRequests `json:"execution_requests"`
RandaoReveal string `json:"randao_reveal"`
Eth1Data *Eth1Data `json:"eth1_data"`
Graffiti string `json:"graffiti"`
ProposerSlashings []*ProposerSlashing `json:"proposer_slashings"`
AttesterSlashings []*AttesterSlashingElectra `json:"attester_slashings"`
Attestations []*AttestationElectra `json:"attestations"`
Deposits []*Deposit `json:"deposits"`
VoluntaryExits []*SignedVoluntaryExit `json:"voluntary_exits"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
ExecutionPayloadHeader *ExecutionPayloadHeaderDeneb `json:"execution_payload_header"`
BLSToExecutionChanges []*SignedBLSToExecutionChange `json:"bls_to_execution_changes"`
BlobKzgCommitments []string `json:"blob_kzg_commitments"`
ExecutionRequests *ExecutionRequests `json:"execution_requests"`
}
type SignedBeaconBlockHeaderContainer struct {
Header *SignedBeaconBlockHeader `json:"header"`
Root string `json:"root"`
Canonical bool `json:"canonical"`
type (
ExecutionRequests struct {
Deposits []*DepositRequest `json:"deposits"`
Withdrawals []*WithdrawalRequest `json:"withdrawals"`
Consolidations []*ConsolidationRequest `json:"consolidations"`
}
)
// ----------------------------------------------------------------------------
// Fulu
// ----------------------------------------------------------------------------
type SignedBeaconBlockContentsFulu struct {
SignedBlock *SignedBeaconBlockFulu `json:"signed_block"`
KzgProofs []string `json:"kzg_proofs"`
Blobs []string `json:"blobs"`
}
type SignedBeaconBlockHeader struct {
Message *BeaconBlockHeader `json:"message"`
Signature string `json:"signature"`
type BeaconBlockContentsFulu struct {
Block *BeaconBlockFulu `json:"block"`
KzgProofs []string `json:"kzg_proofs"`
Blobs []string `json:"blobs"`
}
type BeaconBlockHeader struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
ParentRoot string `json:"parent_root"`
StateRoot string `json:"state_root"`
BodyRoot string `json:"body_root"`
type SignedBeaconBlockFulu struct {
Message *BeaconBlockFulu `json:"message"`
Signature string `json:"signature"`
}
type ExecutionPayload struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
Transactions []string `json:"transactions"`
var _ SignedMessageJsoner = &SignedBeaconBlockFulu{}
func (s *SignedBeaconBlockFulu) MessageRawJson() ([]byte, error) {
return json.Marshal(s.Message)
}
type ExecutionPayloadHeader struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
TransactionsRoot string `json:"transactions_root"`
func (s *SignedBeaconBlockFulu) SigString() string {
return s.Signature
}
type ExecutionPayloadCapella struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
Transactions []string `json:"transactions"`
Withdrawals []*Withdrawal `json:"withdrawals"`
type BeaconBlockFulu struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
ParentRoot string `json:"parent_root"`
StateRoot string `json:"state_root"`
Body *BeaconBlockBodyFulu `json:"body"`
}
type ExecutionPayloadHeaderCapella struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
TransactionsRoot string `json:"transactions_root"`
WithdrawalsRoot string `json:"withdrawals_root"`
type BeaconBlockBodyFulu struct {
RandaoReveal string `json:"randao_reveal"`
Eth1Data *Eth1Data `json:"eth1_data"`
Graffiti string `json:"graffiti"`
ProposerSlashings []*ProposerSlashing `json:"proposer_slashings"`
AttesterSlashings []*AttesterSlashingElectra `json:"attester_slashings"`
Attestations []*AttestationElectra `json:"attestations"`
Deposits []*Deposit `json:"deposits"`
VoluntaryExits []*SignedVoluntaryExit `json:"voluntary_exits"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
ExecutionPayload *ExecutionPayloadDeneb `json:"execution_payload"`
BLSToExecutionChanges []*SignedBLSToExecutionChange `json:"bls_to_execution_changes"`
BlobKzgCommitments []string `json:"blob_kzg_commitments"`
ExecutionRequests *ExecutionRequests `json:"execution_requests"`
}
type ExecutionPayloadDeneb struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
Transactions []string `json:"transactions"`
Withdrawals []*Withdrawal `json:"withdrawals"`
BlobGasUsed string `json:"blob_gas_used"`
ExcessBlobGas string `json:"excess_blob_gas"`
type BlindedBeaconBlockFulu struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
ParentRoot string `json:"parent_root"`
StateRoot string `json:"state_root"`
Body *BlindedBeaconBlockBodyFulu `json:"body"`
}
type ExecutionPayloadElectra = ExecutionPayloadDeneb
type ExecutionPayloadHeaderDeneb struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
TransactionsRoot string `json:"transactions_root"`
WithdrawalsRoot string `json:"withdrawals_root"`
BlobGasUsed string `json:"blob_gas_used"`
ExcessBlobGas string `json:"excess_blob_gas"`
type SignedBlindedBeaconBlockFulu struct {
Message *BlindedBeaconBlockFulu `json:"message"`
Signature string `json:"signature"`
}
type ExecutionPayloadHeaderElectra = ExecutionPayloadHeaderDeneb
var _ SignedMessageJsoner = &SignedBlindedBeaconBlockFulu{}
type ExecutionRequests struct {
Deposits []*DepositRequest `json:"deposits"`
Withdrawals []*WithdrawalRequest `json:"withdrawals"`
Consolidations []*ConsolidationRequest `json:"consolidations"`
func (s *SignedBlindedBeaconBlockFulu) MessageRawJson() ([]byte, error) {
return json.Marshal(s.Message)
}
func (s *SignedBlindedBeaconBlockFulu) SigString() string {
return s.Signature
}
type BlindedBeaconBlockBodyFulu struct {
RandaoReveal string `json:"randao_reveal"`
Eth1Data *Eth1Data `json:"eth1_data"`
Graffiti string `json:"graffiti"`
ProposerSlashings []*ProposerSlashing `json:"proposer_slashings"`
AttesterSlashings []*AttesterSlashingElectra `json:"attester_slashings"`
Attestations []*AttestationElectra `json:"attestations"`
Deposits []*Deposit `json:"deposits"`
VoluntaryExits []*SignedVoluntaryExit `json:"voluntary_exits"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
ExecutionPayloadHeader *ExecutionPayloadHeaderDeneb `json:"execution_payload_header"`
BLSToExecutionChanges []*SignedBLSToExecutionChange `json:"bls_to_execution_changes"`
BlobKzgCommitments []string `json:"blob_kzg_commitments"`
ExecutionRequests *ExecutionRequests `json:"execution_requests"`
}

File diff suppressed because it is too large Load Diff

View File

@@ -176,9 +176,9 @@ func lightClientHeaderToJSON(header interfaces.LightClientHeader) (json.RawMessa
if err != nil {
return nil, err
}
ex, ok := exInterface.Proto().(*enginev1.ExecutionPayloadHeaderElectra)
ex, ok := exInterface.Proto().(*enginev1.ExecutionPayloadHeaderDeneb)
if !ok {
return nil, fmt.Errorf("execution data is not %T", &enginev1.ExecutionPayloadHeaderElectra{})
return nil, fmt.Errorf("execution data is not %T", &enginev1.ExecutionPayloadHeaderDeneb{})
}
execution, err := ExecutionPayloadHeaderElectraFromConsensus(ex)
if err != nil {

View File

@@ -11,6 +11,10 @@ import (
var errPayloadHeaderNotFound = errors.New("expected payload header not found")
// ----------------------------------------------------------------------------
// Phase 0
// ----------------------------------------------------------------------------
func BeaconStateFromConsensus(st beaconState.BeaconState) (*BeaconState, error) {
srcBr := st.BlockRoots()
br := make([]string, len(srcBr))
@@ -97,6 +101,10 @@ func BeaconStateFromConsensus(st beaconState.BeaconState) (*BeaconState, error)
}, nil
}
// ----------------------------------------------------------------------------
// Altair
// ----------------------------------------------------------------------------
func BeaconStateAltairFromConsensus(st beaconState.BeaconState) (*BeaconStateAltair, error) {
srcBr := st.BlockRoots()
br := make([]string, len(srcBr))
@@ -202,6 +210,10 @@ func BeaconStateAltairFromConsensus(st beaconState.BeaconState) (*BeaconStateAlt
}, nil
}
// ----------------------------------------------------------------------------
// Bellatrix
// ----------------------------------------------------------------------------
func BeaconStateBellatrixFromConsensus(st beaconState.BeaconState) (*BeaconStateBellatrix, error) {
srcBr := st.BlockRoots()
br := make([]string, len(srcBr))
@@ -320,6 +332,10 @@ func BeaconStateBellatrixFromConsensus(st beaconState.BeaconState) (*BeaconState
}, nil
}
// ----------------------------------------------------------------------------
// Capella
// ----------------------------------------------------------------------------
func BeaconStateCapellaFromConsensus(st beaconState.BeaconState) (*BeaconStateCapella, error) {
srcBr := st.BlockRoots()
br := make([]string, len(srcBr))
@@ -457,6 +473,10 @@ func BeaconStateCapellaFromConsensus(st beaconState.BeaconState) (*BeaconStateCa
}, nil
}
// ----------------------------------------------------------------------------
// Deneb
// ----------------------------------------------------------------------------
func BeaconStateDenebFromConsensus(st beaconState.BeaconState) (*BeaconStateDeneb, error) {
srcBr := st.BlockRoots()
br := make([]string, len(srcBr))
@@ -594,6 +614,10 @@ func BeaconStateDenebFromConsensus(st beaconState.BeaconState) (*BeaconStateDene
}, nil
}
// ----------------------------------------------------------------------------
// Electra
// ----------------------------------------------------------------------------
func BeaconStateElectraFromConsensus(st beaconState.BeaconState) (*BeaconStateElectra, error) {
srcBr := st.BlockRoots()
br := make([]string, len(srcBr))
@@ -775,3 +799,189 @@ func BeaconStateElectraFromConsensus(st beaconState.BeaconState) (*BeaconStateEl
PendingConsolidations: PendingConsolidationsFromConsensus(pc),
}, nil
}
// ----------------------------------------------------------------------------
// Fulu
// ----------------------------------------------------------------------------
func BeaconStateFuluFromConsensus(st beaconState.BeaconState) (*BeaconStateFulu, error) {
srcBr := st.BlockRoots()
br := make([]string, len(srcBr))
for i, r := range srcBr {
br[i] = hexutil.Encode(r)
}
srcSr := st.StateRoots()
sr := make([]string, len(srcSr))
for i, r := range srcSr {
sr[i] = hexutil.Encode(r)
}
srcHr, err := st.HistoricalRoots()
if err != nil {
return nil, err
}
hr := make([]string, len(srcHr))
for i, r := range srcHr {
hr[i] = hexutil.Encode(r)
}
srcVotes := st.Eth1DataVotes()
votes := make([]*Eth1Data, len(srcVotes))
for i, e := range srcVotes {
votes[i] = Eth1DataFromConsensus(e)
}
srcVals := st.Validators()
vals := make([]*Validator, len(srcVals))
for i, v := range srcVals {
vals[i] = ValidatorFromConsensus(v)
}
srcBals := st.Balances()
bals := make([]string, len(srcBals))
for i, b := range srcBals {
bals[i] = fmt.Sprintf("%d", b)
}
srcRm := st.RandaoMixes()
rm := make([]string, len(srcRm))
for i, m := range srcRm {
rm[i] = hexutil.Encode(m)
}
srcSlashings := st.Slashings()
slashings := make([]string, len(srcSlashings))
for i, s := range srcSlashings {
slashings[i] = fmt.Sprintf("%d", s)
}
srcPrevPart, err := st.PreviousEpochParticipation()
if err != nil {
return nil, err
}
prevPart := make([]string, len(srcPrevPart))
for i, p := range srcPrevPart {
prevPart[i] = fmt.Sprintf("%d", p)
}
srcCurrPart, err := st.CurrentEpochParticipation()
if err != nil {
return nil, err
}
currPart := make([]string, len(srcCurrPart))
for i, p := range srcCurrPart {
currPart[i] = fmt.Sprintf("%d", p)
}
srcIs, err := st.InactivityScores()
if err != nil {
return nil, err
}
is := make([]string, len(srcIs))
for i, s := range srcIs {
is[i] = fmt.Sprintf("%d", s)
}
currSc, err := st.CurrentSyncCommittee()
if err != nil {
return nil, err
}
nextSc, err := st.NextSyncCommittee()
if err != nil {
return nil, err
}
execData, err := st.LatestExecutionPayloadHeader()
if err != nil {
return nil, err
}
srcPayload, ok := execData.Proto().(*enginev1.ExecutionPayloadHeaderDeneb)
if !ok {
return nil, errPayloadHeaderNotFound
}
payload, err := ExecutionPayloadHeaderElectraFromConsensus(srcPayload)
if err != nil {
return nil, err
}
srcHs, err := st.HistoricalSummaries()
if err != nil {
return nil, err
}
hs := make([]*HistoricalSummary, len(srcHs))
for i, s := range srcHs {
hs[i] = HistoricalSummaryFromConsensus(s)
}
nwi, err := st.NextWithdrawalIndex()
if err != nil {
return nil, err
}
nwvi, err := st.NextWithdrawalValidatorIndex()
if err != nil {
return nil, err
}
drsi, err := st.DepositRequestsStartIndex()
if err != nil {
return nil, err
}
dbtc, err := st.DepositBalanceToConsume()
if err != nil {
return nil, err
}
ebtc, err := st.ExitBalanceToConsume()
if err != nil {
return nil, err
}
eee, err := st.EarliestExitEpoch()
if err != nil {
return nil, err
}
cbtc, err := st.ConsolidationBalanceToConsume()
if err != nil {
return nil, err
}
ece, err := st.EarliestConsolidationEpoch()
if err != nil {
return nil, err
}
pbd, err := st.PendingDeposits()
if err != nil {
return nil, err
}
ppw, err := st.PendingPartialWithdrawals()
if err != nil {
return nil, err
}
pc, err := st.PendingConsolidations()
if err != nil {
return nil, err
}
return &BeaconStateFulu{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),
LatestBlockHeader: BeaconBlockHeaderFromConsensus(st.LatestBlockHeader()),
BlockRoots: br,
StateRoots: sr,
HistoricalRoots: hr,
Eth1Data: Eth1DataFromConsensus(st.Eth1Data()),
Eth1DataVotes: votes,
Eth1DepositIndex: fmt.Sprintf("%d", st.Eth1DepositIndex()),
Validators: vals,
Balances: bals,
RandaoMixes: rm,
Slashings: slashings,
PreviousEpochParticipation: prevPart,
CurrentEpochParticipation: currPart,
JustificationBits: hexutil.Encode(st.JustificationBits()),
PreviousJustifiedCheckpoint: CheckpointFromConsensus(st.PreviousJustifiedCheckpoint()),
CurrentJustifiedCheckpoint: CheckpointFromConsensus(st.CurrentJustifiedCheckpoint()),
FinalizedCheckpoint: CheckpointFromConsensus(st.FinalizedCheckpoint()),
InactivityScores: is,
CurrentSyncCommittee: SyncCommitteeFromConsensus(currSc),
NextSyncCommittee: SyncCommitteeFromConsensus(nextSc),
LatestExecutionPayloadHeader: payload,
NextWithdrawalIndex: fmt.Sprintf("%d", nwi),
NextWithdrawalValidatorIndex: fmt.Sprintf("%d", nwvi),
HistoricalSummaries: hs,
DepositRequestsStartIndex: fmt.Sprintf("%d", drsi),
DepositBalanceToConsume: fmt.Sprintf("%d", dbtc),
ExitBalanceToConsume: fmt.Sprintf("%d", ebtc),
EarliestExitEpoch: fmt.Sprintf("%d", eee),
ConsolidationBalanceToConsume: fmt.Sprintf("%d", cbtc),
EarliestConsolidationEpoch: fmt.Sprintf("%d", ece),
PendingDeposits: PendingDepositsFromConsensus(pbd),
PendingPartialWithdrawals: PendingPartialWithdrawalsFromConsensus(ppw),
PendingConsolidations: PendingConsolidationsFromConsensus(pc),
}, nil
}

View File

@@ -142,41 +142,81 @@ type BeaconStateDeneb struct {
}
type BeaconStateElectra struct {
GenesisTime string `json:"genesis_time"`
GenesisValidatorsRoot string `json:"genesis_validators_root"`
Slot string `json:"slot"`
Fork *Fork `json:"fork"`
LatestBlockHeader *BeaconBlockHeader `json:"latest_block_header"`
BlockRoots []string `json:"block_roots"`
StateRoots []string `json:"state_roots"`
HistoricalRoots []string `json:"historical_roots"`
Eth1Data *Eth1Data `json:"eth1_data"`
Eth1DataVotes []*Eth1Data `json:"eth1_data_votes"`
Eth1DepositIndex string `json:"eth1_deposit_index"`
Validators []*Validator `json:"validators"`
Balances []string `json:"balances"`
RandaoMixes []string `json:"randao_mixes"`
Slashings []string `json:"slashings"`
PreviousEpochParticipation []string `json:"previous_epoch_participation"`
CurrentEpochParticipation []string `json:"current_epoch_participation"`
JustificationBits string `json:"justification_bits"`
PreviousJustifiedCheckpoint *Checkpoint `json:"previous_justified_checkpoint"`
CurrentJustifiedCheckpoint *Checkpoint `json:"current_justified_checkpoint"`
FinalizedCheckpoint *Checkpoint `json:"finalized_checkpoint"`
InactivityScores []string `json:"inactivity_scores"`
CurrentSyncCommittee *SyncCommittee `json:"current_sync_committee"`
NextSyncCommittee *SyncCommittee `json:"next_sync_committee"`
LatestExecutionPayloadHeader *ExecutionPayloadHeaderElectra `json:"latest_execution_payload_header"`
NextWithdrawalIndex string `json:"next_withdrawal_index"`
NextWithdrawalValidatorIndex string `json:"next_withdrawal_validator_index"`
HistoricalSummaries []*HistoricalSummary `json:"historical_summaries"`
DepositRequestsStartIndex string `json:"deposit_requests_start_index"`
DepositBalanceToConsume string `json:"deposit_balance_to_consume"`
ExitBalanceToConsume string `json:"exit_balance_to_consume"`
EarliestExitEpoch string `json:"earliest_exit_epoch"`
ConsolidationBalanceToConsume string `json:"consolidation_balance_to_consume"`
EarliestConsolidationEpoch string `json:"earliest_consolidation_epoch"`
PendingDeposits []*PendingDeposit `json:"pending_deposits"`
PendingPartialWithdrawals []*PendingPartialWithdrawal `json:"pending_partial_withdrawals"`
PendingConsolidations []*PendingConsolidation `json:"pending_consolidations"`
GenesisTime string `json:"genesis_time"`
GenesisValidatorsRoot string `json:"genesis_validators_root"`
Slot string `json:"slot"`
Fork *Fork `json:"fork"`
LatestBlockHeader *BeaconBlockHeader `json:"latest_block_header"`
BlockRoots []string `json:"block_roots"`
StateRoots []string `json:"state_roots"`
HistoricalRoots []string `json:"historical_roots"`
Eth1Data *Eth1Data `json:"eth1_data"`
Eth1DataVotes []*Eth1Data `json:"eth1_data_votes"`
Eth1DepositIndex string `json:"eth1_deposit_index"`
Validators []*Validator `json:"validators"`
Balances []string `json:"balances"`
RandaoMixes []string `json:"randao_mixes"`
Slashings []string `json:"slashings"`
PreviousEpochParticipation []string `json:"previous_epoch_participation"`
CurrentEpochParticipation []string `json:"current_epoch_participation"`
JustificationBits string `json:"justification_bits"`
PreviousJustifiedCheckpoint *Checkpoint `json:"previous_justified_checkpoint"`
CurrentJustifiedCheckpoint *Checkpoint `json:"current_justified_checkpoint"`
FinalizedCheckpoint *Checkpoint `json:"finalized_checkpoint"`
InactivityScores []string `json:"inactivity_scores"`
CurrentSyncCommittee *SyncCommittee `json:"current_sync_committee"`
NextSyncCommittee *SyncCommittee `json:"next_sync_committee"`
LatestExecutionPayloadHeader *ExecutionPayloadHeaderDeneb `json:"latest_execution_payload_header"`
NextWithdrawalIndex string `json:"next_withdrawal_index"`
NextWithdrawalValidatorIndex string `json:"next_withdrawal_validator_index"`
HistoricalSummaries []*HistoricalSummary `json:"historical_summaries"`
DepositRequestsStartIndex string `json:"deposit_requests_start_index"`
DepositBalanceToConsume string `json:"deposit_balance_to_consume"`
ExitBalanceToConsume string `json:"exit_balance_to_consume"`
EarliestExitEpoch string `json:"earliest_exit_epoch"`
ConsolidationBalanceToConsume string `json:"consolidation_balance_to_consume"`
EarliestConsolidationEpoch string `json:"earliest_consolidation_epoch"`
PendingDeposits []*PendingDeposit `json:"pending_deposits"`
PendingPartialWithdrawals []*PendingPartialWithdrawal `json:"pending_partial_withdrawals"`
PendingConsolidations []*PendingConsolidation `json:"pending_consolidations"`
}
type BeaconStateFulu struct {
GenesisTime string `json:"genesis_time"`
GenesisValidatorsRoot string `json:"genesis_validators_root"`
Slot string `json:"slot"`
Fork *Fork `json:"fork"`
LatestBlockHeader *BeaconBlockHeader `json:"latest_block_header"`
BlockRoots []string `json:"block_roots"`
StateRoots []string `json:"state_roots"`
HistoricalRoots []string `json:"historical_roots"`
Eth1Data *Eth1Data `json:"eth1_data"`
Eth1DataVotes []*Eth1Data `json:"eth1_data_votes"`
Eth1DepositIndex string `json:"eth1_deposit_index"`
Validators []*Validator `json:"validators"`
Balances []string `json:"balances"`
RandaoMixes []string `json:"randao_mixes"`
Slashings []string `json:"slashings"`
PreviousEpochParticipation []string `json:"previous_epoch_participation"`
CurrentEpochParticipation []string `json:"current_epoch_participation"`
JustificationBits string `json:"justification_bits"`
PreviousJustifiedCheckpoint *Checkpoint `json:"previous_justified_checkpoint"`
CurrentJustifiedCheckpoint *Checkpoint `json:"current_justified_checkpoint"`
FinalizedCheckpoint *Checkpoint `json:"finalized_checkpoint"`
InactivityScores []string `json:"inactivity_scores"`
CurrentSyncCommittee *SyncCommittee `json:"current_sync_committee"`
NextSyncCommittee *SyncCommittee `json:"next_sync_committee"`
LatestExecutionPayloadHeader *ExecutionPayloadHeaderDeneb `json:"latest_execution_payload_header"`
NextWithdrawalIndex string `json:"next_withdrawal_index"`
NextWithdrawalValidatorIndex string `json:"next_withdrawal_validator_index"`
HistoricalSummaries []*HistoricalSummary `json:"historical_summaries"`
DepositRequestsStartIndex string `json:"deposit_requests_start_index"`
DepositBalanceToConsume string `json:"deposit_balance_to_consume"`
ExitBalanceToConsume string `json:"exit_balance_to_consume"`
EarliestExitEpoch string `json:"earliest_exit_epoch"`
ConsolidationBalanceToConsume string `json:"consolidation_balance_to_consume"`
EarliestConsolidationEpoch string `json:"earliest_consolidation_epoch"`
PendingDeposits []*PendingDeposit `json:"pending_deposits"`
PendingPartialWithdrawals []*PendingPartialWithdrawal `json:"pending_partial_withdrawals"`
PendingConsolidations []*PendingConsolidation `json:"pending_consolidations"`
}

View File

@@ -362,15 +362,16 @@ func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState,
return emptyAttri
}
var attr payloadattribute.Attributer
switch st.Version() {
case version.Deneb, version.Electra:
v := st.Version()
if v >= version.Deneb {
withdrawals, _, err := st.ExpectedWithdrawals()
if err != nil {
log.WithError(err).Error("Could not get expected withdrawals to get payload attribute")
return emptyAttri
}
attr, err = payloadattribute.New(&enginev1.PayloadAttributesV3{
attr, err := payloadattribute.New(&enginev1.PayloadAttributesV3{
Timestamp: uint64(t.Unix()),
PrevRandao: prevRando,
SuggestedFeeRecipient: val.FeeRecipient[:],
@@ -381,13 +382,18 @@ func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState,
log.WithError(err).Error("Could not get payload attribute")
return emptyAttri
}
case version.Capella:
return attr
}
if v >= version.Capella {
withdrawals, _, err := st.ExpectedWithdrawals()
if err != nil {
log.WithError(err).Error("Could not get expected withdrawals to get payload attribute")
return emptyAttri
}
attr, err = payloadattribute.New(&enginev1.PayloadAttributesV2{
attr, err := payloadattribute.New(&enginev1.PayloadAttributesV2{
Timestamp: uint64(t.Unix()),
PrevRandao: prevRando,
SuggestedFeeRecipient: val.FeeRecipient[:],
@@ -397,8 +403,12 @@ func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState,
log.WithError(err).Error("Could not get payload attribute")
return emptyAttri
}
case version.Bellatrix:
attr, err = payloadattribute.New(&enginev1.PayloadAttributes{
return attr
}
if v >= version.Bellatrix {
attr, err := payloadattribute.New(&enginev1.PayloadAttributes{
Timestamp: uint64(t.Unix()),
PrevRandao: prevRando,
SuggestedFeeRecipient: val.FeeRecipient[:],
@@ -407,12 +417,12 @@ func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState,
log.WithError(err).Error("Could not get payload attribute")
return emptyAttri
}
default:
log.WithField("version", st.Version()).Error("Could not get payload attribute due to unknown state version")
return emptyAttri
return attr
}
return attr
log.WithField("version", version.String(st.Version())).Error("Could not get payload attribute due to unknown state version")
return emptyAttri
}
// removeInvalidBlockAndState removes the invalid block, blob and its corresponding state from the cache and DB.

View File

@@ -39,7 +39,8 @@ type CellsAndProofs struct {
}
func BlobToKZGCommitment(blob *Blob) (Commitment, error) {
comm, err := kzg4844.BlobToCommitment(kzg4844.Blob(*blob))
kzgBlob := kzg4844.Blob(*blob)
comm, err := kzg4844.BlobToCommitment(&kzgBlob)
if err != nil {
return Commitment{}, err
}
@@ -47,7 +48,8 @@ func BlobToKZGCommitment(blob *Blob) (Commitment, error) {
}
func ComputeBlobKZGProof(blob *Blob, commitment Commitment) (Proof, error) {
proof, err := kzg4844.ComputeBlobProof(kzg4844.Blob(*blob), kzg4844.Commitment(commitment))
kzgBlob := kzg4844.Blob(*blob)
proof, err := kzg4844.ComputeBlobProof(&kzgBlob, kzg4844.Commitment(commitment))
if err != nil {
return [48]byte{}, err
}

View File

@@ -20,16 +20,17 @@ func Verify(sidecars ...blocks.ROBlob) error {
cmts := make([]GoKZG.KZGCommitment, len(sidecars))
proofs := make([]GoKZG.KZGProof, len(sidecars))
for i, sidecar := range sidecars {
blobs[i] = bytesToBlob(sidecar.Blob)
blobs[i] = *bytesToBlob(sidecar.Blob)
cmts[i] = bytesToCommitment(sidecar.KzgCommitment)
proofs[i] = bytesToKZGProof(sidecar.KzgProof)
}
return kzgContext.VerifyBlobKZGProofBatch(blobs, cmts, proofs)
}
func bytesToBlob(blob []byte) (ret GoKZG.Blob) {
func bytesToBlob(blob []byte) *GoKZG.Blob {
var ret GoKZG.Blob
copy(ret[:], blob)
return
return &ret
}
func bytesToCommitment(commitment []byte) (ret GoKZG.KZGCommitment) {

View File

@@ -10,11 +10,11 @@ import (
)
func GenerateCommitmentAndProof(blob GoKZG.Blob) (GoKZG.KZGCommitment, GoKZG.KZGProof, error) {
commitment, err := kzgContext.BlobToKZGCommitment(blob, 0)
commitment, err := kzgContext.BlobToKZGCommitment(&blob, 0)
if err != nil {
return GoKZG.KZGCommitment{}, GoKZG.KZGProof{}, err
}
proof, err := kzgContext.ComputeBlobKZGProof(blob, commitment, 0)
proof, err := kzgContext.ComputeBlobKZGProof(&blob, commitment, 0)
if err != nil {
return GoKZG.KZGCommitment{}, GoKZG.KZGProof{}, err
}
@@ -31,7 +31,7 @@ func TestBytesToAny(t *testing.T) {
blob := GoKZG.Blob{0x01, 0x02}
commitment := GoKZG.KZGCommitment{0x01, 0x02}
proof := GoKZG.KZGProof{0x01, 0x02}
require.DeepEqual(t, blob, bytesToBlob(bytes))
require.DeepEqual(t, blob, *bytesToBlob(bytes))
require.DeepEqual(t, commitment, bytesToCommitment(bytes))
require.DeepEqual(t, proof, bytesToKZGProof(bytes))
}

View File

@@ -71,6 +71,7 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
if features.Get().EnableLightClient && slots.ToEpoch(s.CurrentSlot()) >= params.BeaconConfig().AltairForkEpoch {
defer s.processLightClientUpdates(cfg)
defer s.saveLightClientUpdate(cfg)
defer s.saveLightClientBootstrap(cfg)
}
defer s.sendStateFeedOnBlock(cfg)
defer reportProcessingTime(startTime)
@@ -652,7 +653,7 @@ func uint64MapToSortedSlice(input map[uint64]bool) []uint64 {
}
func (s *Service) areDataColumnsAvailable(ctx context.Context, root [32]byte, signed interfaces.ReadOnlySignedBeaconBlock) error {
if signed.Version() < version.Deneb {
if signed.Version() < version.Fulu {
return nil
}
@@ -660,8 +661,12 @@ func (s *Service) areDataColumnsAvailable(ctx context.Context, root [32]byte, si
if block == nil {
return errors.New("invalid nil beacon block")
}
// We are only required to check within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS
if !params.WithinDAPeriod(slots.ToEpoch(block.Slot()), slots.ToEpoch(s.CurrentSlot())) {
blockSlot, currentSlot := block.Slot(), s.CurrentSlot()
blockEpoch, currentEpoch := slots.ToEpoch(blockSlot), slots.ToEpoch(currentSlot)
if !params.WithinDAPeriod(blockEpoch, currentEpoch) {
return nil
}
@@ -681,20 +686,26 @@ func (s *Service) areDataColumnsAvailable(ctx context.Context, root [32]byte, si
}
// All columns to sample need to be available for the block to be considered available.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/_features/eip7594/das-core.md#subnet-sampling
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.10/specs/fulu/das-core.md#custody-sampling
nodeID := s.cfg.P2P.NodeID()
subnetSamplingSize := peerdas.SubnetSamplingSize()
custodyGroupSamplingSize := peerdas.CustodyGroupSamplingSize()
colMap, err := peerdas.CustodyColumns(nodeID, subnetSamplingSize)
custodyGroups, err := peerdas.CustodyGroups(nodeID, custodyGroupSamplingSize)
if err != nil {
return errors.Wrap(err, "custody columns")
return errors.Wrap(err, "custody groups")
}
// colMap represents the data columnns a node is expected to custody.
if len(colMap) == 0 {
// Exit early if the node is not expected to custody any data columns.
if len(custodyGroups) == 0 {
return nil
}
// Get the custody columns from the groups.
columnsMap, err := peerdas.CustodyColumns(custodyGroups)
if err != nil {
return errors.Wrap(err, "custody columns")
}
// Subscribe to newsly data columns stored in the database.
rootIndexChan := make(chan filesystem.RootIndexPair)
subscription := s.blobStorage.DataColumnFeed.Subscribe(rootIndexChan)
@@ -715,7 +726,7 @@ func (s *Service) areDataColumnsAvailable(ctx context.Context, root [32]byte, si
}
// Get a map of data column indices that are not currently available.
missingMap, err := missingDataColumns(s.blobStorage, root, colMap)
missingMap, err := missingDataColumns(s.blobStorage, root, columnsMap)
if err != nil {
return err
}
@@ -743,10 +754,10 @@ func (s *Service) areDataColumnsAvailable(ctx context.Context, root [32]byte, si
)
numberOfColumns := params.BeaconConfig().NumberOfColumns
colMapCount := uint64(len(colMap))
colMapCount := uint64(len(columnsMap))
if colMapCount < numberOfColumns {
expected = uint64MapToSortedSlice(colMap)
expected = uint64MapToSortedSlice(columnsMap)
}
if missingMapCount < numberOfColumns {

View File

@@ -53,7 +53,7 @@ func NewAttestationCache() *AttestationCache {
//
// - For unaggregated attestations, it adds the attestation bit to attestation bits of the running aggregate,
// which is the first aggregate for the slot.
// - For aggregated attestations, it appends the attestation to the existng list of attestations for the slot.
// - For aggregated attestations, it appends the attestation to the existing list of attestations for the slot.
func (c *AttestationCache) Add(att ethpb.Att) error {
if att.IsNil() {
log.Debug("Attempted to add a nil attestation to the attestation cache")

View File

@@ -105,10 +105,9 @@ func TestProcessSlashings_NotSlashed(t *testing.T) {
}
s, err := state_native.InitializeFromProtoAltair(base)
require.NoError(t, err)
newState, err := epoch.ProcessSlashings(s, params.BeaconConfig().ProportionalSlashingMultiplierAltair)
require.NoError(t, err)
require.NoError(t, epoch.ProcessSlashings(s))
wanted := params.BeaconConfig().MaxEffectiveBalance
assert.Equal(t, wanted, newState.Balances()[0], "Unexpected slashed balance")
assert.Equal(t, wanted, s.Balances()[0], "Unexpected slashed balance")
}
func TestProcessSlashings_SlashedLess(t *testing.T) {
@@ -176,9 +175,8 @@ func TestProcessSlashings_SlashedLess(t *testing.T) {
original := proto.Clone(tt.state)
s, err := state_native.InitializeFromProtoAltair(tt.state)
require.NoError(t, err)
newState, err := epoch.ProcessSlashings(s, params.BeaconConfig().ProportionalSlashingMultiplierAltair)
require.NoError(t, err)
assert.Equal(t, tt.want, newState.Balances()[0], "ProcessSlashings({%v}) = newState; newState.Balances[0] = %d", original, newState.Balances()[0])
require.NoError(t, epoch.ProcessSlashings(s))
assert.Equal(t, tt.want, s.Balances()[0], "ProcessSlashings({%v}) = newState; newState.Balances[0] = %d", original, s.Balances()[0])
})
}
}
@@ -192,6 +190,5 @@ func TestProcessSlashings_BadValue(t *testing.T) {
}
s, err := state_native.InitializeFromProtoAltair(base)
require.NoError(t, err)
_, err = epoch.ProcessSlashings(s, params.BeaconConfig().ProportionalSlashingMultiplierAltair)
require.ErrorContains(t, "addition overflows", err)
require.ErrorContains(t, "addition overflows", epoch.ProcessSlashings(s))
}

View File

@@ -69,12 +69,7 @@ func ProcessEpoch(ctx context.Context, state state.BeaconState) error {
}
// Modified in Altair and Bellatrix.
proportionalSlashingMultiplier, err := state.ProportionalSlashingMultiplier()
if err != nil {
return err
}
state, err = e.ProcessSlashings(state, proportionalSlashingMultiplier)
if err != nil {
if err := e.ProcessSlashings(state); err != nil {
return err
}
state, err = e.ProcessEth1DataReset(state)

View File

@@ -198,7 +198,7 @@ func NewGenesisBlockForState(ctx context.Context, st state.BeaconState) (interfa
SyncCommitteeBits: make([]byte, fieldparams.SyncCommitteeLength/8),
SyncCommitteeSignature: make([]byte, fieldparams.BLSSignatureLength),
},
ExecutionPayload: &enginev1.ExecutionPayloadElectra{
ExecutionPayload: &enginev1.ExecutionPayloadDeneb{
ParentHash: make([]byte, 32),
FeeRecipient: make([]byte, 20),
StateRoot: make([]byte, 32),

View File

@@ -1152,7 +1152,7 @@ func TestProcessWithdrawals(t *testing.T) {
}
st, err = state_native.InitializeFromProtoUnsafeElectra(spb)
require.NoError(t, err)
p, err = consensusblocks.WrappedExecutionPayloadElectra(&enginev1.ExecutionPayloadElectra{Withdrawals: test.Args.Withdrawals})
p, err = consensusblocks.WrappedExecutionPayloadDeneb(&enginev1.ExecutionPayloadDeneb{Withdrawals: test.Args.Withdrawals})
require.NoError(t, err)
default:
t.Fatalf("Add a beacon state setup for version %s", version.String(fork))

View File

@@ -15,7 +15,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
@@ -273,7 +272,7 @@ func ProcessPendingDeposits(ctx context.Context, st state.BeaconState, activeBal
isChurnLimitReached := false
var pendingDepositsToBatchVerify []*ethpb.PendingDeposit
var pendingDepositsToPostpone []*eth.PendingDeposit
var pendingDepositsToPostpone []*ethpb.PendingDeposit
depBalToConsume, err := st.DepositBalanceToConsume()
if err != nil {

View File

@@ -18,23 +18,24 @@ import (
//
// Spec pseudocode definition:
//
// def process_registry_updates(state: BeaconState) -> None:
// # Process activation eligibility and ejections
// for index, validator in enumerate(state.validators):
// if is_eligible_for_activation_queue(validator):
// validator.activation_eligibility_epoch = get_current_epoch(state) + 1
// def process_registry_updates(state: BeaconState) -> None:
// # Process activation eligibility and ejections
// for index, validator in enumerate(state.validators):
// if is_eligible_for_activation_queue(validator): # [Modified in Electra:EIP7251]
// validator.activation_eligibility_epoch = get_current_epoch(state) + 1
//
// if (
// is_active_validator(validator, get_current_epoch(state))
// and validator.effective_balance <= EJECTION_BALANCE
// ):
// initiate_validator_exit(state, ValidatorIndex(index))
// if (
// is_active_validator(validator, get_current_epoch(state))
// and validator.effective_balance <= EJECTION_BALANCE
// ):
// initiate_validator_exit(state, ValidatorIndex(index)) # [Modified in Electra:EIP7251]
//
// # Activate all eligible validators
// activation_epoch = compute_activation_exit_epoch(get_current_epoch(state))
// for validator in state.validators:
// if is_eligible_for_activation(state, validator):
// validator.activation_epoch = activation_epoch
// # Activate all eligible validators
// # [Modified in Electra:EIP7251]
// activation_epoch = compute_activation_exit_epoch(get_current_epoch(state))
// for validator in state.validators:
// if is_eligible_for_activation(state, validator):
// validator.activation_epoch = activation_epoch
func ProcessRegistryUpdates(ctx context.Context, st state.BeaconState) error {
currentEpoch := time.CurrentEpoch(st)
ejectionBal := params.BeaconConfig().EjectionBalance
@@ -90,8 +91,8 @@ func ProcessRegistryUpdates(ctx context.Context, st state.BeaconState) error {
}
}
// Activate all eligible validators.
for _, idx := range eligibleForActivation {
// Activate all eligible validators.
v, err := st.ValidatorAtIndex(idx)
if err != nil {
return err

View File

@@ -40,14 +40,17 @@ var (
// process_justification_and_finalization(state)
// process_inactivity_updates(state)
// process_rewards_and_penalties(state)
// process_registry_updates(state)
// process_slashings(state)
// process_registry_updates(state) # [Modified in Electra:EIP7251]
// process_slashings(state) # [Modified in Electra:EIP7251]
// process_eth1_data_reset(state)
// process_pending_deposits(state) # New in EIP7251
// process_pending_consolidations(state) # New in EIP7251
// process_effective_balance_updates(state)
// process_pending_deposits(state) # [New in Electra:EIP7251]
// process_pending_consolidations(state) # [New in Electra:EIP7251]
// process_effective_balance_updates(state) # [Modified in Electra:EIP7251]
// process_slashings_reset(state)
// process_randao_mixes_reset(state)
// process_historical_summaries_update(state)
// process_participation_flag_updates(state)
// process_sync_committee_updates(state)
func ProcessEpoch(ctx context.Context, state state.BeaconState) error {
_, span := trace.StartSpan(ctx, "electra.ProcessEpoch")
defer span.End()
@@ -75,24 +78,16 @@ func ProcessEpoch(ctx context.Context, state state.BeaconState) error {
if err != nil {
return errors.Wrap(err, "could not process rewards and penalties")
}
if err := ProcessRegistryUpdates(ctx, state); err != nil {
return errors.Wrap(err, "could not process registry updates")
}
proportionalSlashingMultiplier, err := state.ProportionalSlashingMultiplier()
if err != nil {
return err
}
state, err = ProcessSlashings(state, proportionalSlashingMultiplier)
if err != nil {
if err := ProcessSlashings(state); err != nil {
return err
}
state, err = ProcessEth1DataReset(state)
if err != nil {
return err
}
if err = ProcessPendingDeposits(ctx, state, primitives.Gwei(bp.ActiveCurrentEpoch)); err != nil {
return err
}
@@ -102,7 +97,6 @@ func ProcessEpoch(ctx context.Context, state state.BeaconState) error {
if err = ProcessEffectiveBalanceUpdates(state); err != nil {
return err
}
state, err = ProcessSlashingsReset(state)
if err != nil {
return err
@@ -115,17 +109,14 @@ func ProcessEpoch(ctx context.Context, state state.BeaconState) error {
if err != nil {
return err
}
state, err = ProcessParticipationFlagUpdates(state)
if err != nil {
return err
}
_, err = ProcessSyncCommitteeUpdates(ctx, state)
if err != nil {
return err
}
return nil
}

View File

@@ -240,7 +240,7 @@ func UpgradeToElectra(beaconState state.BeaconState) (state.BeaconState, error)
InactivityScores: inactivityScores,
CurrentSyncCommittee: currentSyncCommittee,
NextSyncCommittee: nextSyncCommittee,
LatestExecutionPayloadHeader: &enginev1.ExecutionPayloadHeaderElectra{
LatestExecutionPayloadHeader: &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: payloadHeader.ParentHash(),
FeeRecipient: payloadHeader.FeeRecipient(),
StateRoot: payloadHeader.StateRoot(),

View File

@@ -102,7 +102,7 @@ func TestUpgradeToElectra(t *testing.T) {
header, err := mSt.LatestExecutionPayloadHeader()
require.NoError(t, err)
protoHeader, ok := header.Proto().(*enginev1.ExecutionPayloadHeaderElectra)
protoHeader, ok := header.Proto().(*enginev1.ExecutionPayloadHeaderDeneb)
require.Equal(t, true, ok)
prevHeader, err := preForkState.LatestExecutionPayloadHeader()
require.NoError(t, err)
@@ -111,7 +111,7 @@ func TestUpgradeToElectra(t *testing.T) {
wdRoot, err := prevHeader.WithdrawalsRoot()
require.NoError(t, err)
wanted := &enginev1.ExecutionPayloadHeaderElectra{
wanted := &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: prevHeader.ParentHash(),
FeeRecipient: prevHeader.FeeRecipient(),
StateRoot: prevHeader.StateRoot(),

View File

@@ -141,29 +141,76 @@ func ProcessRegistryUpdates(ctx context.Context, st state.BeaconState) (state.Be
return st, nil
}
// ProcessSlashings processes the slashed validators during epoch processing,
// ProcessSlashings processes the slashed validators during epoch processing. This is a state mutating method.
//
// Electra spec definition:
//
// def process_slashings(state: BeaconState) -> None:
// epoch = get_current_epoch(state)
// total_balance = get_total_active_balance(state)
// adjusted_total_slashing_balance = min(sum(state.slashings) * PROPORTIONAL_SLASHING_MULTIPLIER, total_balance)
// if state.version == electra:
// increment = EFFECTIVE_BALANCE_INCREMENT # Factored out from total balance to avoid uint64 overflow
// penalty_per_effective_balance_increment = adjusted_total_slashing_balance // (total_balance // increment)
// for index, validator in enumerate(state.validators):
// if validator.slashed and epoch + EPOCHS_PER_SLASHINGS_VECTOR // 2 == validator.withdrawable_epoch:
// increment = EFFECTIVE_BALANCE_INCREMENT # Factored out from penalty numerator to avoid uint64 overflow
// penalty_numerator = validator.effective_balance // increment * adjusted_total_slashing_balance
// penalty = penalty_numerator // total_balance * increment
// if state.version == electra:
// epoch = get_current_epoch(state)
// total_balance = get_total_active_balance(state)
// adjusted_total_slashing_balance = min(
// sum(state.slashings) * PROPORTIONAL_SLASHING_MULTIPLIER_BELLATRIX,
// total_balance
// )
// increment = EFFECTIVE_BALANCE_INCREMENT # Factored out from total balance to avoid uint64 overflow
// penalty_per_effective_balance_increment = adjusted_total_slashing_balance // (total_balance // increment)
// for index, validator in enumerate(state.validators):
// if validator.slashed and epoch + EPOCHS_PER_SLASHINGS_VECTOR // 2 == validator.withdrawable_epoch:
// effective_balance_increments = validator.effective_balance // increment
// # [Modified in Electra:EIP7251]
// penalty = penalty_per_effective_balance_increment * effective_balance_increments
// decrease_balance(state, ValidatorIndex(index), penalty)
func ProcessSlashings(st state.BeaconState, slashingMultiplier uint64) (state.BeaconState, error) {
// decrease_balance(state, ValidatorIndex(index), penalty)
//
// Bellatrix spec definition:
//
// def process_slashings(state: BeaconState) -> None:
// epoch = get_current_epoch(state)
// total_balance = get_total_active_balance(state)
// adjusted_total_slashing_balance = min(
// sum(state.slashings) * PROPORTIONAL_SLASHING_MULTIPLIER_BELLATRIX, # [Modified in Bellatrix]
// total_balance
// )
// for index, validator in enumerate(state.validators):
// if validator.slashed and epoch + EPOCHS_PER_SLASHINGS_VECTOR // 2 == validator.withdrawable_epoch:
// increment = EFFECTIVE_BALANCE_INCREMENT # Factored out from penalty numerator to avoid uint64 overflow
// penalty_numerator = validator.effective_balance // increment * adjusted_total_slashing_balance
// penalty = penalty_numerator // total_balance * increment
// decrease_balance(state, ValidatorIndex(index), penalty)
//
// Altair spec definition:
//
// def process_slashings(state: BeaconState) -> None:
// epoch = get_current_epoch(state)
// total_balance = get_total_active_balance(state)
// adjusted_total_slashing_balance = min(sum(state.slashings) * PROPORTIONAL_SLASHING_MULTIPLIER_ALTAIR, total_balance)
// for index, validator in enumerate(state.validators):
// if validator.slashed and epoch + EPOCHS_PER_SLASHINGS_VECTOR // 2 == validator.withdrawable_epoch:
// increment = EFFECTIVE_BALANCE_INCREMENT # Factored out from penalty numerator to avoid uint64 overflow
// penalty_numerator = validator.effective_balance // increment * adjusted_total_slashing_balance
// penalty = penalty_numerator // total_balance * increment
// decrease_balance(state, ValidatorIndex(index), penalty)
//
// Phase0 spec definition:
//
// def process_slashings(state: BeaconState) -> None:
// epoch = get_current_epoch(state)
// total_balance = get_total_active_balance(state)
// adjusted_total_slashing_balance = min(sum(state.slashings) * PROPORTIONAL_SLASHING_MULTIPLIER, total_balance)
// for index, validator in enumerate(state.validators):
// if validator.slashed and epoch + EPOCHS_PER_SLASHINGS_VECTOR // 2 == validator.withdrawable_epoch:
// increment = EFFECTIVE_BALANCE_INCREMENT # Factored out from penalty numerator to avoid uint64 overflow
// penalty_numerator = validator.effective_balance // increment * adjusted_total_slashing_balance
// penalty = penalty_numerator // total_balance * increment
// decrease_balance(state, ValidatorIndex(index), penalty)
func ProcessSlashings(st state.BeaconState) error {
slashingMultiplier, err := st.ProportionalSlashingMultiplier()
if err != nil {
return errors.Wrap(err, "could not get proportional slashing multiplier")
}
currentEpoch := time.CurrentEpoch(st)
totalBalance, err := helpers.TotalActiveBalance(st)
if err != nil {
return nil, errors.Wrap(err, "could not get total active balance")
return errors.Wrap(err, "could not get total active balance")
}
// Compute slashed balances in the current epoch
@@ -175,7 +222,7 @@ func ProcessSlashings(st state.BeaconState, slashingMultiplier uint64) (state.Be
for _, slashing := range slashings {
totalSlashing, err = math.Add64(totalSlashing, slashing)
if err != nil {
return nil, err
return err
}
}
@@ -209,14 +256,14 @@ func ProcessSlashings(st state.BeaconState, slashingMultiplier uint64) (state.Be
return nil
})
if err != nil {
return nil, err
return err
}
if changed {
if err := st.SetBalances(bals); err != nil {
return nil, err
return err
}
}
return st, nil
return nil
}
// ProcessEth1DataReset processes updates to ETH1 data votes during epoch processing.

View File

@@ -32,10 +32,9 @@ func TestProcessSlashings_NotSlashed(t *testing.T) {
}
s, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
newState, err := epoch.ProcessSlashings(s, params.BeaconConfig().ProportionalSlashingMultiplier)
require.NoError(t, err)
require.NoError(t, epoch.ProcessSlashings(s))
wanted := params.BeaconConfig().MaxEffectiveBalance
assert.Equal(t, wanted, newState.Balances()[0], "Unexpected slashed balance")
assert.Equal(t, wanted, s.Balances()[0], "Unexpected slashed balance")
}
func TestProcessSlashings_SlashedLess(t *testing.T) {
@@ -111,9 +110,8 @@ func TestProcessSlashings_SlashedLess(t *testing.T) {
s, err := state_native.InitializeFromProtoPhase0(tt.state)
require.NoError(t, err)
helpers.ClearCache()
newState, err := epoch.ProcessSlashings(s, params.BeaconConfig().ProportionalSlashingMultiplier)
require.NoError(t, err)
assert.Equal(t, tt.want, newState.Balances()[0], "ProcessSlashings({%v}) = newState; newState.Balances[0] = %d", original, newState.Balances()[0])
require.NoError(t, epoch.ProcessSlashings(s))
assert.Equal(t, tt.want, s.Balances()[0], "ProcessSlashings({%v}) = newState; newState.Balances[0] = %d", original, s.Balances()[0])
})
}
}
@@ -365,8 +363,7 @@ func TestProcessSlashings_BadValue(t *testing.T) {
}
s, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
_, err = epoch.ProcessSlashings(s, params.BeaconConfig().ProportionalSlashingMultiplier)
require.ErrorContains(t, "addition overflows", err)
require.ErrorContains(t, "addition overflows", epoch.ProcessSlashings(s))
}
func TestProcessHistoricalDataUpdate(t *testing.T) {
@@ -514,9 +511,8 @@ func TestProcessSlashings_SlashedElectra(t *testing.T) {
s, err := state_native.InitializeFromProtoElectra(tt.state)
require.NoError(t, err)
helpers.ClearCache()
newState, err := epoch.ProcessSlashings(s, params.BeaconConfig().ProportionalSlashingMultiplierBellatrix)
require.NoError(t, err)
assert.Equal(t, tt.want, newState.Balances()[0], "ProcessSlashings({%v}) = newState; newState.Balances[0] = %d", original, newState.Balances()[0])
require.NoError(t, epoch.ProcessSlashings(s))
assert.Equal(t, tt.want, s.Balances()[0], "ProcessSlashings({%v}); s.Balances[0] = %d", original, s.Balances()[0])
})
}
}

View File

@@ -0,0 +1,37 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["upgrade.go"],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/fulu",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["upgrade_test.go"],
deps = [
":go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
],
)

View File

@@ -0,0 +1,184 @@
package fulu
import (
"sort"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
// UpgradeToFulu updates inputs a generic state to return the version Fulu state.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/fork.md#upgrading-the-state
func UpgradeToFulu(beaconState state.BeaconState) (state.BeaconState, error) {
currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
if err != nil {
return nil, err
}
nextSyncCommittee, err := beaconState.NextSyncCommittee()
if err != nil {
return nil, err
}
prevEpochParticipation, err := beaconState.PreviousEpochParticipation()
if err != nil {
return nil, err
}
currentEpochParticipation, err := beaconState.CurrentEpochParticipation()
if err != nil {
return nil, err
}
inactivityScores, err := beaconState.InactivityScores()
if err != nil {
return nil, err
}
payloadHeader, err := beaconState.LatestExecutionPayloadHeader()
if err != nil {
return nil, err
}
txRoot, err := payloadHeader.TransactionsRoot()
if err != nil {
return nil, err
}
wdRoot, err := payloadHeader.WithdrawalsRoot()
if err != nil {
return nil, err
}
wi, err := beaconState.NextWithdrawalIndex()
if err != nil {
return nil, err
}
vi, err := beaconState.NextWithdrawalValidatorIndex()
if err != nil {
return nil, err
}
summaries, err := beaconState.HistoricalSummaries()
if err != nil {
return nil, err
}
historicalRoots, err := beaconState.HistoricalRoots()
if err != nil {
return nil, err
}
excessBlobGas, err := payloadHeader.ExcessBlobGas()
if err != nil {
return nil, err
}
blobGasUsed, err := payloadHeader.BlobGasUsed()
if err != nil {
return nil, err
}
earliestExitEpoch := helpers.ActivationExitEpoch(time.CurrentEpoch(beaconState))
preActivationIndices := make([]primitives.ValidatorIndex, 0)
compoundWithdrawalIndices := make([]primitives.ValidatorIndex, 0)
if err = beaconState.ReadFromEveryValidator(func(index int, val state.ReadOnlyValidator) error {
if val.ExitEpoch() != params.BeaconConfig().FarFutureEpoch && val.ExitEpoch() > earliestExitEpoch {
earliestExitEpoch = val.ExitEpoch()
}
if val.ActivationEpoch() == params.BeaconConfig().FarFutureEpoch {
preActivationIndices = append(preActivationIndices, primitives.ValidatorIndex(index))
}
if helpers.HasCompoundingWithdrawalCredential(val) {
compoundWithdrawalIndices = append(compoundWithdrawalIndices, primitives.ValidatorIndex(index))
}
return nil
}); err != nil {
return nil, err
}
earliestExitEpoch++ // Increment to find the earliest possible exit epoch
// note: should be the same in prestate and post beaconState.
// we are deviating from the specs a bit as it calls for using the post beaconState
tab, err := helpers.TotalActiveBalance(beaconState)
if err != nil {
return nil, errors.Wrap(err, "failed to get total active balance")
}
s := &ethpb.BeaconStateFulu{
GenesisTime: beaconState.GenesisTime(),
GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
Slot: beaconState.Slot(),
Fork: &ethpb.Fork{
PreviousVersion: beaconState.Fork().CurrentVersion,
CurrentVersion: params.BeaconConfig().FuluForkVersion,
Epoch: time.CurrentEpoch(beaconState),
},
LatestBlockHeader: beaconState.LatestBlockHeader(),
BlockRoots: beaconState.BlockRoots(),
StateRoots: beaconState.StateRoots(),
HistoricalRoots: historicalRoots,
Eth1Data: beaconState.Eth1Data(),
Eth1DataVotes: beaconState.Eth1DataVotes(),
Eth1DepositIndex: beaconState.Eth1DepositIndex(),
Validators: beaconState.Validators(),
Balances: beaconState.Balances(),
RandaoMixes: beaconState.RandaoMixes(),
Slashings: beaconState.Slashings(),
PreviousEpochParticipation: prevEpochParticipation,
CurrentEpochParticipation: currentEpochParticipation,
JustificationBits: beaconState.JustificationBits(),
PreviousJustifiedCheckpoint: beaconState.PreviousJustifiedCheckpoint(),
CurrentJustifiedCheckpoint: beaconState.CurrentJustifiedCheckpoint(),
FinalizedCheckpoint: beaconState.FinalizedCheckpoint(),
InactivityScores: inactivityScores,
CurrentSyncCommittee: currentSyncCommittee,
NextSyncCommittee: nextSyncCommittee,
LatestExecutionPayloadHeader: &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: payloadHeader.ParentHash(),
FeeRecipient: payloadHeader.FeeRecipient(),
StateRoot: payloadHeader.StateRoot(),
ReceiptsRoot: payloadHeader.ReceiptsRoot(),
LogsBloom: payloadHeader.LogsBloom(),
PrevRandao: payloadHeader.PrevRandao(),
BlockNumber: payloadHeader.BlockNumber(),
GasLimit: payloadHeader.GasLimit(),
GasUsed: payloadHeader.GasUsed(),
Timestamp: payloadHeader.Timestamp(),
ExtraData: payloadHeader.ExtraData(),
BaseFeePerGas: payloadHeader.BaseFeePerGas(),
BlockHash: payloadHeader.BlockHash(),
TransactionsRoot: txRoot,
WithdrawalsRoot: wdRoot,
ExcessBlobGas: excessBlobGas,
BlobGasUsed: blobGasUsed,
},
NextWithdrawalIndex: wi,
NextWithdrawalValidatorIndex: vi,
HistoricalSummaries: summaries,
DepositRequestsStartIndex: params.BeaconConfig().UnsetDepositRequestsStartIndex,
DepositBalanceToConsume: 0,
ExitBalanceToConsume: helpers.ActivationExitChurnLimit(primitives.Gwei(tab)),
EarliestExitEpoch: earliestExitEpoch,
ConsolidationBalanceToConsume: helpers.ConsolidationChurnLimit(primitives.Gwei(tab)),
EarliestConsolidationEpoch: helpers.ActivationExitEpoch(slots.ToEpoch(beaconState.Slot())),
PendingDeposits: make([]*ethpb.PendingDeposit, 0),
PendingPartialWithdrawals: make([]*ethpb.PendingPartialWithdrawal, 0),
PendingConsolidations: make([]*ethpb.PendingConsolidation, 0),
}
// Sorting preActivationIndices based on a custom criteria
sort.Slice(preActivationIndices, func(i, j int) bool {
// Comparing based on ActivationEligibilityEpoch and then by index if the epochs are the same
if s.Validators[preActivationIndices[i]].ActivationEligibilityEpoch == s.Validators[preActivationIndices[j]].ActivationEligibilityEpoch {
return preActivationIndices[i] < preActivationIndices[j]
}
return s.Validators[preActivationIndices[i]].ActivationEligibilityEpoch < s.Validators[preActivationIndices[j]].ActivationEligibilityEpoch
})
// Need to cast the beaconState to use in helper functions
post, err := state_native.InitializeFromProtoUnsafeFulu(s)
if err != nil {
return nil, errors.Wrap(err, "failed to initialize post fulu beaconState")
}
return post, nil
}

View File

@@ -0,0 +1,188 @@
package fulu_test
import (
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/fulu"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
func TestUpgradeToFulu(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, st.SetHistoricalRoots([][]byte{{1}}))
vals := st.Validators()
vals[0].ActivationEpoch = params.BeaconConfig().FarFutureEpoch
vals[1].WithdrawalCredentials = []byte{params.BeaconConfig().CompoundingWithdrawalPrefixByte}
require.NoError(t, st.SetValidators(vals))
bals := st.Balances()
bals[1] = params.BeaconConfig().MinActivationBalance + 1000
require.NoError(t, st.SetBalances(bals))
preForkState := st.Copy()
mSt, err := fulu.UpgradeToFulu(st)
require.NoError(t, err)
require.Equal(t, preForkState.GenesisTime(), mSt.GenesisTime())
require.DeepSSZEqual(t, preForkState.GenesisValidatorsRoot(), mSt.GenesisValidatorsRoot())
require.Equal(t, preForkState.Slot(), mSt.Slot())
require.DeepSSZEqual(t, preForkState.LatestBlockHeader(), mSt.LatestBlockHeader())
require.DeepSSZEqual(t, preForkState.BlockRoots(), mSt.BlockRoots())
require.DeepSSZEqual(t, preForkState.StateRoots(), mSt.StateRoots())
require.DeepSSZEqual(t, preForkState.Validators()[2:], mSt.Validators()[2:])
require.DeepSSZEqual(t, preForkState.Balances()[2:], mSt.Balances()[2:])
require.DeepSSZEqual(t, preForkState.Eth1Data(), mSt.Eth1Data())
require.DeepSSZEqual(t, preForkState.Eth1DataVotes(), mSt.Eth1DataVotes())
require.DeepSSZEqual(t, preForkState.Eth1DepositIndex(), mSt.Eth1DepositIndex())
require.DeepSSZEqual(t, preForkState.RandaoMixes(), mSt.RandaoMixes())
require.DeepSSZEqual(t, preForkState.Slashings(), mSt.Slashings())
require.DeepSSZEqual(t, preForkState.JustificationBits(), mSt.JustificationBits())
require.DeepSSZEqual(t, preForkState.PreviousJustifiedCheckpoint(), mSt.PreviousJustifiedCheckpoint())
require.DeepSSZEqual(t, preForkState.CurrentJustifiedCheckpoint(), mSt.CurrentJustifiedCheckpoint())
require.DeepSSZEqual(t, preForkState.FinalizedCheckpoint(), mSt.FinalizedCheckpoint())
require.Equal(t, len(preForkState.Validators()), len(mSt.Validators()))
preVal, err := preForkState.ValidatorAtIndex(0)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MaxEffectiveBalance, preVal.EffectiveBalance)
preVal2, err := preForkState.ValidatorAtIndex(1)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MaxEffectiveBalance, preVal2.EffectiveBalance)
// TODO: Fix this test
// mVal, err := mSt.ValidatorAtIndex(0)
_, err = mSt.ValidatorAtIndex(0)
require.NoError(t, err)
// require.Equal(t, uint64(0), mVal.EffectiveBalance)
mVal2, err := mSt.ValidatorAtIndex(1)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MinActivationBalance, mVal2.EffectiveBalance)
numValidators := mSt.NumValidators()
p, err := mSt.PreviousEpochParticipation()
require.NoError(t, err)
require.DeepSSZEqual(t, make([]byte, numValidators), p)
p, err = mSt.CurrentEpochParticipation()
require.NoError(t, err)
require.DeepSSZEqual(t, make([]byte, numValidators), p)
s, err := mSt.InactivityScores()
require.NoError(t, err)
require.DeepSSZEqual(t, make([]uint64, numValidators), s)
hr1, err := preForkState.HistoricalRoots()
require.NoError(t, err)
hr2, err := mSt.HistoricalRoots()
require.NoError(t, err)
require.DeepEqual(t, hr1, hr2)
f := mSt.Fork()
require.DeepSSZEqual(t, &ethpb.Fork{
PreviousVersion: st.Fork().CurrentVersion,
CurrentVersion: params.BeaconConfig().FuluForkVersion,
Epoch: time.CurrentEpoch(st),
}, f)
csc, err := mSt.CurrentSyncCommittee()
require.NoError(t, err)
psc, err := preForkState.CurrentSyncCommittee()
require.NoError(t, err)
require.DeepSSZEqual(t, psc, csc)
nsc, err := mSt.NextSyncCommittee()
require.NoError(t, err)
psc, err = preForkState.NextSyncCommittee()
require.NoError(t, err)
require.DeepSSZEqual(t, psc, nsc)
header, err := mSt.LatestExecutionPayloadHeader()
require.NoError(t, err)
protoHeader, ok := header.Proto().(*enginev1.ExecutionPayloadHeaderDeneb)
require.Equal(t, true, ok)
prevHeader, err := preForkState.LatestExecutionPayloadHeader()
require.NoError(t, err)
txRoot, err := prevHeader.TransactionsRoot()
require.NoError(t, err)
wdRoot, err := prevHeader.WithdrawalsRoot()
require.NoError(t, err)
wanted := &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: prevHeader.ParentHash(),
FeeRecipient: prevHeader.FeeRecipient(),
StateRoot: prevHeader.StateRoot(),
ReceiptsRoot: prevHeader.ReceiptsRoot(),
LogsBloom: prevHeader.LogsBloom(),
PrevRandao: prevHeader.PrevRandao(),
BlockNumber: prevHeader.BlockNumber(),
GasLimit: prevHeader.GasLimit(),
GasUsed: prevHeader.GasUsed(),
Timestamp: prevHeader.Timestamp(),
ExtraData: prevHeader.ExtraData(),
BaseFeePerGas: prevHeader.BaseFeePerGas(),
BlockHash: prevHeader.BlockHash(),
TransactionsRoot: txRoot,
WithdrawalsRoot: wdRoot,
}
require.DeepEqual(t, wanted, protoHeader)
nwi, err := mSt.NextWithdrawalIndex()
require.NoError(t, err)
require.Equal(t, uint64(0), nwi)
lwvi, err := mSt.NextWithdrawalValidatorIndex()
require.NoError(t, err)
require.Equal(t, primitives.ValidatorIndex(0), lwvi)
summaries, err := mSt.HistoricalSummaries()
require.NoError(t, err)
require.Equal(t, 0, len(summaries))
startIndex, err := mSt.DepositRequestsStartIndex()
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().UnsetDepositRequestsStartIndex, startIndex)
balance, err := mSt.DepositBalanceToConsume()
require.NoError(t, err)
require.Equal(t, primitives.Gwei(0), balance)
tab, err := helpers.TotalActiveBalance(mSt)
require.NoError(t, err)
ebtc, err := mSt.ExitBalanceToConsume()
require.NoError(t, err)
require.Equal(t, helpers.ActivationExitChurnLimit(primitives.Gwei(tab)), ebtc)
eee, err := mSt.EarliestExitEpoch()
require.NoError(t, err)
require.Equal(t, helpers.ActivationExitEpoch(primitives.Epoch(1)), eee)
cbtc, err := mSt.ConsolidationBalanceToConsume()
require.NoError(t, err)
require.Equal(t, helpers.ConsolidationChurnLimit(primitives.Gwei(tab)), cbtc)
earliestConsolidationEpoch, err := mSt.EarliestConsolidationEpoch()
require.NoError(t, err)
require.Equal(t, helpers.ActivationExitEpoch(slots.ToEpoch(preForkState.Slot())), earliestConsolidationEpoch)
// TODO: Fix this test
// pendingDeposits, err := mSt.PendingDeposits()
_, err = mSt.PendingDeposits()
require.NoError(t, err)
// require.Equal(t, 2, len(pendingDeposits))
// require.Equal(t, uint64(1000), pendingDeposits[1].Amount)
numPendingPartialWithdrawals, err := mSt.NumPendingPartialWithdrawals()
require.NoError(t, err)
require.Equal(t, uint64(0), numPendingPartialWithdrawals)
consolidations, err := mSt.PendingConsolidations()
require.NoError(t, err)
require.Equal(t, 0, len(consolidations))
}

View File

@@ -697,7 +697,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
excessBlobGas, err := payload.ExcessBlobGas()
require.NoError(t, err)
executionHeader := &v11.ExecutionPayloadHeaderElectra{
executionHeader := &v11.ExecutionPayloadHeaderDeneb{
ParentHash: payload.ParentHash(),
FeeRecipient: payload.FeeRecipient(),
StateRoot: payload.StateRoot(),
@@ -762,7 +762,7 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
excessBlobGas, err := payload.ExcessBlobGas()
require.NoError(t, err)
executionHeader := &v11.ExecutionPayloadHeaderElectra{
executionHeader := &v11.ExecutionPayloadHeaderDeneb{
ParentHash: payload.ParentHash(),
FeeRecipient: payload.FeeRecipient(),
StateRoot: payload.StateRoot(),

View File

@@ -30,41 +30,41 @@ import (
)
const (
CustodySubnetCountEnrKey = "csc"
CustodyGroupCountEnrKey = "cgc"
)
// https://github.com/ethereum/consensus-specs/blob/dev/specs/_features/eip7594/p2p-interface.md#the-discovery-domain-discv5
type Csc uint64
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.10/specs/fulu/p2p-interface.md#the-discovery-domain-discv5
type Cgc uint64
func (Csc) ENRKey() string { return CustodySubnetCountEnrKey }
func (Cgc) ENRKey() string { return CustodyGroupCountEnrKey }
var (
// Custom errors
errCustodySubnetCountTooLarge = errors.New("custody subnet count larger than data column sidecar subnet count")
errIndexTooLarge = errors.New("column index is larger than the specified columns count")
errMismatchLength = errors.New("mismatch in the length of the commitments and proofs")
errRecordNil = errors.New("record is nil")
errCannotLoadCustodySubnetCount = errors.New("cannot load the custody subnet count from peer")
errCustodyGroupCountTooLarge = errors.New("custody group count too large")
errWrongComputedCustodyGroupCount = errors.New("wrong computed custody group count, should never happen")
errIndexTooLarge = errors.New("column index is larger than the specified columns count")
errMismatchLength = errors.New("mismatch in the length of the commitments and proofs")
errRecordNil = errors.New("record is nil")
errCannotLoadCustodyGroupCount = errors.New("cannot load the custody group count from peer")
// maxUint256 is the maximum value of a uint256.
maxUint256 = &uint256.Int{math.MaxUint64, math.MaxUint64, math.MaxUint64, math.MaxUint64}
)
// CustodyColumnSubnets computes the subnets the node should participate in for custody.
func CustodyColumnSubnets(nodeId enode.ID, custodySubnetCount uint64) (map[uint64]bool, error) {
dataColumnSidecarSubnetCount := params.BeaconConfig().DataColumnSidecarSubnetCount
// CustodyGroups computes the custody groups the node should participate in for custody.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.10/specs/fulu/das-core.md#get_custody_groups
func CustodyGroups(nodeId enode.ID, custodyGroupCount uint64) (map[uint64]bool, error) {
numberOfCustodyGroup := params.BeaconConfig().NumberOfCustodyGroups
// Check if the custody subnet count is larger than the data column sidecar subnet count.
if custodySubnetCount > dataColumnSidecarSubnetCount {
return nil, errCustodySubnetCountTooLarge
// Check if the custody group count is larger than the number of custody groups.
if custodyGroupCount > numberOfCustodyGroup {
return nil, errCustodyGroupCountTooLarge
}
// First, compute the subnet IDs that the node should participate in.
subnetIds := make(map[uint64]bool, custodySubnetCount)
custodyGroups := make(map[uint64]bool, custodyGroupCount)
one := uint256.NewInt(1)
for currentId := new(uint256.Int).SetBytes(nodeId.Bytes()); uint64(len(subnetIds)) < custodySubnetCount; currentId.Add(currentId, one) {
for currentId := new(uint256.Int).SetBytes(nodeId.Bytes()); uint64(len(custodyGroups)) < custodyGroupCount; currentId.Add(currentId, one) {
// Convert to big endian bytes.
currentIdBytesBigEndian := currentId.Bytes32()
@@ -74,11 +74,11 @@ func CustodyColumnSubnets(nodeId enode.ID, custodySubnetCount uint64) (map[uint6
// Hash the result.
hashedCurrentId := hash.Hash(currentIdBytesLittleEndian)
// Get the subnet ID.
subnetId := binary.LittleEndian.Uint64(hashedCurrentId[:8]) % dataColumnSidecarSubnetCount
// Get the custody group ID.
custodyGroupId := binary.LittleEndian.Uint64(hashedCurrentId[:8]) % numberOfCustodyGroup
// Add the subnet to the map.
subnetIds[subnetId] = true
// Add the custody group to the map.
custodyGroups[custodyGroupId] = true
// Overflow prevention.
if currentId.Cmp(maxUint256) == 0 {
@@ -86,37 +86,100 @@ func CustodyColumnSubnets(nodeId enode.ID, custodySubnetCount uint64) (map[uint6
}
}
return subnetIds, nil
// Final check.
if uint64(len(custodyGroups)) != custodyGroupCount {
return nil, errWrongComputedCustodyGroupCount
}
return custodyGroups, nil
}
// ComputeColumnsForCustodyGroup computes the columns for a given custody group.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.10/specs/fulu/das-core.md#compute_columns_for_custody_group
func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
beaconConfig := params.BeaconConfig()
numberOfCustodyGroup := beaconConfig.NumberOfCustodyGroups
if custodyGroup > numberOfCustodyGroup {
return nil, errCustodyGroupCountTooLarge
}
numberOfColumns := beaconConfig.NumberOfColumns
columnsPerGroup := numberOfColumns / numberOfCustodyGroup
columns := make([]uint64, 0, columnsPerGroup)
for i := range columnsPerGroup {
column := numberOfCustodyGroup*i + custodyGroup
columns = append(columns, column)
}
return columns, nil
}
// ComputeCustodyGroupForColumn computes the custody group for a given column.
// It is the reciprocal function of ComputeColumnsForCustodyGroup.
func ComputeCustodyGroupForColumn(columnIndex uint64) (uint64, error) {
beaconConfig := params.BeaconConfig()
numberOfColumns := beaconConfig.NumberOfColumns
if columnIndex >= numberOfColumns {
return 0, errIndexTooLarge
}
numberOfCustodyGroups := beaconConfig.NumberOfCustodyGroups
columnsPerGroup := numberOfColumns / numberOfCustodyGroups
return columnIndex / columnsPerGroup, nil
}
// ComputeSubnetForDataColumnSidecar computes the subnet for a data column sidecar.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/p2p-interface.md#compute_subnet_for_data_column_sidecar
func ComputeSubnetForDataColumnSidecar(columnIndex uint64) uint64 {
dataColumnSidecarSubnetCount := params.BeaconConfig().DataColumnSidecarSubnetCount
return columnIndex % dataColumnSidecarSubnetCount
}
// CustodyColumns computes the columns the node should custody.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/_features/eip7594/das-core.md#helper-functions
func CustodyColumns(nodeId enode.ID, custodySubnetCount uint64) (map[uint64]bool, error) {
dataColumnSidecarSubnetCount := params.BeaconConfig().DataColumnSidecarSubnetCount
func CustodyColumns(custodyGroups map[uint64]bool) (map[uint64]bool, error) {
numberOfCustodyGroups := params.BeaconConfig().NumberOfCustodyGroups
// Compute the custody subnets.
subnetIds, err := CustodyColumnSubnets(nodeId, custodySubnetCount)
if err != nil {
return nil, errors.Wrap(err, "custody subnets")
}
custodyGroupCount := len(custodyGroups)
columnsPerSubnet := fieldparams.NumberOfColumns / dataColumnSidecarSubnetCount
// Compute the columns for each custody group.
columns := make(map[uint64]bool, custodyGroupCount)
for group := range custodyGroups {
if group >= numberOfCustodyGroups {
return nil, errCustodyGroupCountTooLarge
}
// Knowing the subnet ID and the number of columns per subnet, select all the columns the node should custody.
// Columns belonging to the same subnet are contiguous.
columnIndices := make(map[uint64]bool, custodySubnetCount*columnsPerSubnet)
for i := uint64(0); i < columnsPerSubnet; i++ {
for subnetId := range subnetIds {
columnIndex := dataColumnSidecarSubnetCount*i + subnetId
columnIndices[columnIndex] = true
groupColumns, err := ComputeColumnsForCustodyGroup(group)
if err != nil {
return nil, errors.Wrap(err, "compute columns for custody group")
}
for _, column := range groupColumns {
columns[column] = true
}
}
return columnIndices, nil
return columns, nil
}
// DataColumnSubnets computes the subnets for the data columns.
func DataColumnSubnets(dataColumns map[uint64]bool) map[uint64]bool {
subnets := make(map[uint64]bool, len(dataColumns))
for column := range dataColumns {
subnet := ComputeSubnetForDataColumnSidecar(column)
subnets[subnet] = true
}
return subnets
}
// DataColumnSidecars computes the data column sidecars from the signed block and blobs.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/_features/eip7594/das-core.md#recover_matrix
// https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/das-core.md#get_data_column_sidecars
func DataColumnSidecars(signedBlock interfaces.ReadOnlySignedBeaconBlock, blobs []kzg.Blob) ([]*ethpb.DataColumnSidecar, error) {
startTime := time.Now()
blobsCount := len(blobs)
@@ -454,39 +517,22 @@ func VerifyDataColumnsSidecarKZGProofs(sidecars []blocks.RODataColumn) (bool, er
return verified, nil
}
// CustodySubnetCount returns the number of subnets the node should participate in for custody.
func CustodySubnetCount() uint64 {
// CustodyGroupCount returns the number of groups the node should participate in for custody.
func CustodyGroupCount() uint64 {
if flags.Get().SubscribeToAllSubnets {
return params.BeaconConfig().DataColumnSidecarSubnetCount
return params.BeaconConfig().NumberOfCustodyGroups
}
return params.BeaconConfig().CustodyRequirement
}
// SubnetSamplingSize returns the number of subnets the node should sample from.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/_features/eip7594/das-core.md#subnet-sampling
func SubnetSamplingSize() uint64 {
// CustodyGroupSamplingSize returns the number of custody groups the node should sample from.
// https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.10/specs/fulu/das-core.md#custody-sampling
func CustodyGroupSamplingSize() uint64 {
samplesPerSlot := params.BeaconConfig().SamplesPerSlot
custodySubnetCount := CustodySubnetCount()
custodyGroupCount := CustodyGroupCount()
return max(samplesPerSlot, custodySubnetCount)
}
// CustodyColumnCount returns the number of columns the node should custody.
func CustodyColumnCount() uint64 {
// Get the number of subnets.
dataColumnSidecarSubnetCount := params.BeaconConfig().DataColumnSidecarSubnetCount
// Compute the number of columns per subnet.
columnsPerSubnet := fieldparams.NumberOfColumns / dataColumnSidecarSubnetCount
// Get the number of subnets we custody
custodySubnetCount := CustodySubnetCount()
// Finally, compute the number of columns we should custody.
custodyColumnCount := custodySubnetCount * columnsPerSubnet
return custodyColumnCount
return max(samplesPerSlot, custodyGroupCount)
}
// HypergeomCDF computes the hypergeometric cumulative distribution function.
@@ -538,27 +584,27 @@ func ExtendedSampleCount(samplesPerSlot, allowedFailures uint64) uint64 {
return sampleCount
}
func CustodyCountFromRecord(record *enr.Record) (uint64, error) {
// By default, we assume the peer custodies the minimum number of subnets.
// CustodyGroupCountFromRecord extracts the custody group count from an ENR record.
func CustodyGroupCountFromRecord(record *enr.Record) (uint64, error) {
if record == nil {
return 0, errRecordNil
}
// Load the `custody_subnet_count`
var csc Csc
if err := record.Load(&csc); err != nil {
return 0, errCannotLoadCustodySubnetCount
// Load the `cgc`
var cgc Cgc
if cgc := record.Load(&cgc); cgc != nil {
return 0, errCannotLoadCustodyGroupCount
}
return uint64(csc), nil
return uint64(cgc), nil
}
func CanSelfReconstruct(numCol uint64) bool {
total := params.BeaconConfig().NumberOfColumns
// if total is odd, then we need total / 2 + 1 columns to reconstruct
// if total is even, then we need total / 2 columns to reconstruct
columnsNeeded := total/2 + total%2
return numCol >= columnsNeeded
func CanSelfReconstruct(custodyGroupCount uint64) bool {
total := params.BeaconConfig().NumberOfCustodyGroups
// If total is odd, then we need total / 2 + 1 columns to reconstruct.
// If total is even, then we need total / 2 columns to reconstruct.
custodyGroupsNeeded := total/2 + total%2
return custodyGroupCount >= custodyGroupsNeeded
}
// RecoverCellsAndProofs recovers the cells and proofs from the data column sidecars.

View File

@@ -234,7 +234,7 @@ func TestDataColumnsSidecarsBlobsRoundtrip(t *testing.T) {
require.DeepSSZEqual(t, verifiedROBlobs, roundtripBlobs)
}
func TestCustodySubnetCount(t *testing.T) {
func TestCustodyGroupCount(t *testing.T) {
testCases := []struct {
name string
subscribeToAllSubnets bool
@@ -266,25 +266,12 @@ func TestCustodySubnetCount(t *testing.T) {
flags.Init(gFlags)
// Get the custody subnet count.
actual := peerdas.CustodySubnetCount()
actual := peerdas.CustodyGroupCount()
require.Equal(t, tc.expected, actual)
})
}
}
func TestCustodyColumnCount(t *testing.T) {
const expected uint64 = 8
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig().Copy()
config.DataColumnSidecarSubnetCount = 32
config.CustodyRequirement = 2
params.OverrideBeaconConfig(config)
actual := peerdas.CustodyColumnCount()
require.Equal(t, expected, actual)
}
func TestHypergeomCDF(t *testing.T) {
// Test case from https://en.wikipedia.org/wiki/Hypergeometric_distribution
// Population size: 1000, number of successes in population: 500, sample size: 10, number of successes in sample: 5
@@ -337,48 +324,48 @@ func TestExtendedSampleCount(t *testing.T) {
}
}
func TestCustodyCountFromRecord(t *testing.T) {
func TestCustodyGroupCountFromRecord(t *testing.T) {
const expected uint64 = 7
// Create an Ethereum record.
record := &enr.Record{}
record.Set(peerdas.Csc(expected))
record.Set(peerdas.Cgc(expected))
actual, err := peerdas.CustodyCountFromRecord(record)
actual, err := peerdas.CustodyGroupCountFromRecord(record)
require.NoError(t, err)
require.Equal(t, expected, actual)
}
func TestCanSelfReconstruct(t *testing.T) {
testCases := []struct {
name string
totalNumberOfColumns uint64
custodyNumberOfColumns uint64
expected bool
name string
totalNumberOfCustodyGroups uint64
custodyNumberOfGroups uint64
expected bool
}{
{
name: "totalNumberOfColumns=64, custodyNumberOfColumns=31",
totalNumberOfColumns: 64,
custodyNumberOfColumns: 31,
expected: false,
name: "totalNumberOfCustodyGroups=64, custodyNumberOfGroups=31",
totalNumberOfCustodyGroups: 64,
custodyNumberOfGroups: 31,
expected: false,
},
{
name: "totalNumberOfColumns=64, custodyNumberOfColumns=32",
totalNumberOfColumns: 64,
custodyNumberOfColumns: 32,
expected: true,
name: "totalNumberOfCustodyGroups=64, custodyNumberOfGroups=32",
totalNumberOfCustodyGroups: 64,
custodyNumberOfGroups: 32,
expected: true,
},
{
name: "totalNumberOfColumns=65, custodyNumberOfColumns=32",
totalNumberOfColumns: 65,
custodyNumberOfColumns: 32,
expected: false,
name: "totalNumberOfCustodyGroups=65, custodyNumberOfGroups=32",
totalNumberOfCustodyGroups: 65,
custodyNumberOfGroups: 32,
expected: false,
},
{
name: "totalNumberOfColumns=63, custodyNumberOfColumns=33",
totalNumberOfColumns: 65,
custodyNumberOfColumns: 33,
expected: true,
name: "totalNumberOfCustodyGroups=63, custodyNumberOfGroups=33",
totalNumberOfCustodyGroups: 65,
custodyNumberOfGroups: 33,
expected: true,
},
}
@@ -387,11 +374,11 @@ func TestCanSelfReconstruct(t *testing.T) {
// Set the total number of columns.
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.NumberOfColumns = tc.totalNumberOfColumns
cfg.NumberOfCustodyGroups = tc.totalNumberOfCustodyGroups
params.OverrideBeaconConfig(cfg)
// Check if reconstuction is possible.
actual := peerdas.CanSelfReconstruct(tc.custodyNumberOfColumns)
actual := peerdas.CanSelfReconstruct(tc.custodyNumberOfGroups)
require.Equal(t, tc.expected, actual)
})
}

View File

@@ -55,7 +55,7 @@ func HigherEqualThanAltairVersionAndEpoch(s state.BeaconState, e primitives.Epoc
// PeerDASIsActive checks whether peerDAS is active at the provided slot.
func PeerDASIsActive(slot primitives.Slot) bool {
return params.PeerDASEnabled() && slots.ToEpoch(slot) >= params.BeaconConfig().ElectraForkEpoch
return params.FuluEnabled() && slots.ToEpoch(slot) >= params.BeaconConfig().FuluForkEpoch
}
// CanUpgradeToAltair returns true if the input `slot` can upgrade to Altair.
@@ -104,6 +104,15 @@ func CanUpgradeToElectra(slot primitives.Slot) bool {
return epochStart && electraEpoch
}
// CanUpgradeToFulu returns true if the input `slot` can upgrade to Fulu.
// Spec code:
// If state.slot % SLOTS_PER_EPOCH == 0 and compute_epoch_at_slot(state.slot) == FULU_FORK_EPOCH
func CanUpgradeToFulu(slot primitives.Slot) bool {
epochStart := slots.IsEpochStart(slot)
fuluEpoch := slots.ToEpoch(slot) == params.BeaconConfig().FuluForkEpoch
return epochStart && fuluEpoch
}
// CanProcessEpoch checks the eligibility to process epoch.
// The epoch can be processed at the end of the last slot of every epoch.
//

View File

@@ -1,6 +1,7 @@
package time_test
import (
"fmt"
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
@@ -83,76 +84,6 @@ func TestNextEpoch_OK(t *testing.T) {
}
}
func TestCanUpgradeToAltair(t *testing.T) {
params.SetupTestConfigCleanup(t)
bc := params.BeaconConfig()
bc.AltairForkEpoch = 5
params.OverrideBeaconConfig(bc)
tests := []struct {
name string
slot primitives.Slot
want bool
}{
{
name: "not epoch start",
slot: 1,
want: false,
},
{
name: "not altair epoch",
slot: params.BeaconConfig().SlotsPerEpoch,
want: false,
},
{
name: "altair epoch",
slot: primitives.Slot(params.BeaconConfig().AltairForkEpoch) * params.BeaconConfig().SlotsPerEpoch,
want: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := time.CanUpgradeToAltair(tt.slot); got != tt.want {
t.Errorf("canUpgradeToAltair() = %v, want %v", got, tt.want)
}
})
}
}
func TestCanUpgradeBellatrix(t *testing.T) {
params.SetupTestConfigCleanup(t)
bc := params.BeaconConfig()
bc.BellatrixForkEpoch = 5
params.OverrideBeaconConfig(bc)
tests := []struct {
name string
slot primitives.Slot
want bool
}{
{
name: "not epoch start",
slot: 1,
want: false,
},
{
name: "not bellatrix epoch",
slot: params.BeaconConfig().SlotsPerEpoch,
want: false,
},
{
name: "bellatrix epoch",
slot: primitives.Slot(params.BeaconConfig().BellatrixForkEpoch) * params.BeaconConfig().SlotsPerEpoch,
want: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := time.CanUpgradeToBellatrix(tt.slot); got != tt.want {
t.Errorf("CanUpgradeToBellatrix() = %v, want %v", got, tt.want)
}
})
}
}
func TestCanProcessEpoch_TrueOnEpochsLastSlot(t *testing.T) {
tests := []struct {
slot primitives.Slot
@@ -264,107 +195,79 @@ func TestAltairCompatible(t *testing.T) {
}
}
func TestCanUpgradeToCapella(t *testing.T) {
params.SetupTestConfigCleanup(t)
bc := params.BeaconConfig()
bc.CapellaForkEpoch = 5
params.OverrideBeaconConfig(bc)
tests := []struct {
name string
slot primitives.Slot
want bool
}{
{
name: "not epoch start",
slot: 1,
want: false,
},
{
name: "not capella epoch",
slot: params.BeaconConfig().SlotsPerEpoch,
want: false,
},
{
name: "capella epoch",
slot: primitives.Slot(params.BeaconConfig().CapellaForkEpoch) * params.BeaconConfig().SlotsPerEpoch,
want: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := time.CanUpgradeToCapella(tt.slot); got != tt.want {
t.Errorf("CanUpgradeToCapella() = %v, want %v", got, tt.want)
}
})
}
}
func TestCanUpgradeTo(t *testing.T) {
beaconConfig := params.BeaconConfig()
func TestCanUpgradeToDeneb(t *testing.T) {
params.SetupTestConfigCleanup(t)
bc := params.BeaconConfig()
bc.DenebForkEpoch = 5
params.OverrideBeaconConfig(bc)
tests := []struct {
name string
slot primitives.Slot
want bool
outerTestCases := []struct {
name string
forkEpoch *primitives.Epoch
upgradeFunc func(primitives.Slot) bool
}{
{
name: "not epoch start",
slot: 1,
want: false,
name: "Altair",
forkEpoch: &beaconConfig.AltairForkEpoch,
upgradeFunc: time.CanUpgradeToAltair,
},
{
name: "not deneb epoch",
slot: params.BeaconConfig().SlotsPerEpoch,
want: false,
name: "Bellatrix",
forkEpoch: &beaconConfig.BellatrixForkEpoch,
upgradeFunc: time.CanUpgradeToBellatrix,
},
{
name: "deneb epoch",
slot: primitives.Slot(params.BeaconConfig().DenebForkEpoch) * params.BeaconConfig().SlotsPerEpoch,
want: true,
name: "Capella",
forkEpoch: &beaconConfig.CapellaForkEpoch,
upgradeFunc: time.CanUpgradeToCapella,
},
{
name: "Deneb",
forkEpoch: &beaconConfig.DenebForkEpoch,
upgradeFunc: time.CanUpgradeToDeneb,
},
{
name: "Electra",
forkEpoch: &beaconConfig.ElectraForkEpoch,
upgradeFunc: time.CanUpgradeToElectra,
},
{
name: "Fulu",
forkEpoch: &beaconConfig.FuluForkEpoch,
upgradeFunc: time.CanUpgradeToFulu,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := time.CanUpgradeToDeneb(tt.slot); got != tt.want {
t.Errorf("CanUpgradeToDeneb() = %v, want %v", got, tt.want)
}
})
}
}
func TestCanUpgradeToElectra(t *testing.T) {
params.SetupTestConfigCleanup(t)
bc := params.BeaconConfig()
bc.ElectraForkEpoch = 5
params.OverrideBeaconConfig(bc)
tests := []struct {
name string
slot primitives.Slot
want bool
}{
{
name: "not epoch start",
slot: 1,
want: false,
},
{
name: "not electra epoch",
slot: params.BeaconConfig().SlotsPerEpoch,
want: false,
},
{
name: "electra epoch",
slot: primitives.Slot(params.BeaconConfig().ElectraForkEpoch) * params.BeaconConfig().SlotsPerEpoch,
want: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := time.CanUpgradeToElectra(tt.slot); got != tt.want {
t.Errorf("CanUpgradeToElectra() = %v, want %v", got, tt.want)
}
})
for _, otc := range outerTestCases {
params.SetupTestConfigCleanup(t)
*otc.forkEpoch = 5
params.OverrideBeaconConfig(beaconConfig)
innerTestCases := []struct {
name string
slot primitives.Slot
want bool
}{
{
name: "not epoch start",
slot: 1,
want: false,
},
{
name: fmt.Sprintf("not %s epoch", otc.name),
slot: params.BeaconConfig().SlotsPerEpoch,
want: false,
},
{
name: fmt.Sprintf("%s epoch", otc.name),
slot: primitives.Slot(*otc.forkEpoch) * params.BeaconConfig().SlotsPerEpoch,
want: true,
},
}
for _, itc := range innerTestCases {
t.Run(fmt.Sprintf("%s-%s", otc.name, itc.name), func(t *testing.T) {
if got := otc.upgradeFunc(itc.slot); got != itc.want {
t.Errorf("CanUpgradeTo%s() = %v, want %v", otc.name, got, itc.want)
}
})
}
}
}

View File

@@ -23,6 +23,7 @@ go_library(
"//beacon-chain/core/epoch:go_default_library",
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/execution:go_default_library",
"//beacon-chain/core/fulu:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition/interop:go_default_library",

View File

@@ -17,6 +17,7 @@ import (
e "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/epoch"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/epoch/precompute"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/execution"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/fulu"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/features"
@@ -298,7 +299,7 @@ func ProcessSlotsCore(ctx context.Context, span trace.Span, state state.BeaconSt
func ProcessEpoch(ctx context.Context, state state.BeaconState) (state.BeaconState, error) {
var err error
if time.CanProcessEpoch(state) {
if state.Version() == version.Electra {
if state.Version() >= version.Electra {
if err = electra.ProcessEpoch(ctx, state); err != nil {
return nil, errors.Wrap(err, fmt.Sprintf("could not process %s epoch", version.String(state.Version())))
}
@@ -322,9 +323,11 @@ func UpgradeState(ctx context.Context, state state.BeaconState) (state.BeaconSta
defer span.End()
var err error
slot := state.Slot()
upgraded := false
if time.CanUpgradeToAltair(state.Slot()) {
if time.CanUpgradeToAltair(slot) {
state, err = altair.UpgradeToAltair(ctx, state)
if err != nil {
tracing.AnnotateError(span, err)
@@ -333,7 +336,7 @@ func UpgradeState(ctx context.Context, state state.BeaconState) (state.BeaconSta
upgraded = true
}
if time.CanUpgradeToBellatrix(state.Slot()) {
if time.CanUpgradeToBellatrix(slot) {
state, err = execution.UpgradeToBellatrix(state)
if err != nil {
tracing.AnnotateError(span, err)
@@ -342,7 +345,7 @@ func UpgradeState(ctx context.Context, state state.BeaconState) (state.BeaconSta
upgraded = true
}
if time.CanUpgradeToCapella(state.Slot()) {
if time.CanUpgradeToCapella(slot) {
state, err = capella.UpgradeToCapella(state)
if err != nil {
tracing.AnnotateError(span, err)
@@ -351,7 +354,7 @@ func UpgradeState(ctx context.Context, state state.BeaconState) (state.BeaconSta
upgraded = true
}
if time.CanUpgradeToDeneb(state.Slot()) {
if time.CanUpgradeToDeneb(slot) {
state, err = deneb.UpgradeToDeneb(state)
if err != nil {
tracing.AnnotateError(span, err)
@@ -360,7 +363,7 @@ func UpgradeState(ctx context.Context, state state.BeaconState) (state.BeaconSta
upgraded = true
}
if time.CanUpgradeToElectra(state.Slot()) {
if time.CanUpgradeToElectra(slot) {
state, err = electra.UpgradeToElectra(state)
if err != nil {
tracing.AnnotateError(span, err)
@@ -369,8 +372,17 @@ func UpgradeState(ctx context.Context, state state.BeaconState) (state.BeaconSta
upgraded = true
}
if time.CanUpgradeToFulu(slot) {
state, err = fulu.UpgradeToFulu(state)
if err != nil {
tracing.AnnotateError(span, err)
return nil, err
}
upgraded = true
}
if upgraded {
log.Debugf("upgraded state to %s", version.String(state.Version()))
log.WithField("version", version.String(state.Version())).Info("Upgraded state to")
}
return state, nil

View File

@@ -665,6 +665,20 @@ func TestProcessSlots_ThroughElectraEpoch(t *testing.T) {
require.Equal(t, params.BeaconConfig().SlotsPerEpoch*10, st.Slot())
}
func TestProcessSlots_ThroughFuluEpoch(t *testing.T) {
transition.SkipSlotCache.Disable()
params.SetupTestConfigCleanup(t)
conf := params.BeaconConfig()
conf.FuluForkEpoch = 5
params.OverrideBeaconConfig(conf)
st, _ := util.DeterministicGenesisStateElectra(t, params.BeaconConfig().MaxValidatorsPerCommittee)
st, err := transition.ProcessSlots(context.Background(), st, params.BeaconConfig().SlotsPerEpoch*10)
require.NoError(t, err)
require.Equal(t, version.Fulu, st.Version())
require.Equal(t, params.BeaconConfig().SlotsPerEpoch*10, st.Slot())
}
func TestProcessSlotsUsingNextSlotCache(t *testing.T) {
s, _ := util.DeterministicGenesisState(t, 1)
r := []byte{'a'}

View File

@@ -9,25 +9,35 @@ import (
// SlashingParamsPerVersion returns the slashing parameters for the given state version.
func SlashingParamsPerVersion(v int) (slashingQuotient, proposerRewardQuotient, whistleblowerRewardQuotient uint64, err error) {
cfg := params.BeaconConfig()
switch v {
case version.Phase0:
slashingQuotient = cfg.MinSlashingPenaltyQuotient
proposerRewardQuotient = cfg.ProposerRewardQuotient
whistleblowerRewardQuotient = cfg.WhistleBlowerRewardQuotient
case version.Altair:
slashingQuotient = cfg.MinSlashingPenaltyQuotientAltair
proposerRewardQuotient = cfg.ProposerRewardQuotient
whistleblowerRewardQuotient = cfg.WhistleBlowerRewardQuotient
case version.Bellatrix, version.Capella, version.Deneb:
slashingQuotient = cfg.MinSlashingPenaltyQuotientBellatrix
proposerRewardQuotient = cfg.ProposerRewardQuotient
whistleblowerRewardQuotient = cfg.WhistleBlowerRewardQuotient
case version.Electra:
if v >= version.Electra {
slashingQuotient = cfg.MinSlashingPenaltyQuotientElectra
proposerRewardQuotient = cfg.ProposerRewardQuotient
whistleblowerRewardQuotient = cfg.WhistleBlowerRewardQuotientElectra
default:
err = errors.New("unknown state version")
return
}
if v >= version.Bellatrix {
slashingQuotient = cfg.MinSlashingPenaltyQuotientBellatrix
proposerRewardQuotient = cfg.ProposerRewardQuotient
whistleblowerRewardQuotient = cfg.WhistleBlowerRewardQuotient
return
}
if v >= version.Altair {
slashingQuotient = cfg.MinSlashingPenaltyQuotientAltair
proposerRewardQuotient = cfg.ProposerRewardQuotient
whistleblowerRewardQuotient = cfg.WhistleBlowerRewardQuotient
return
}
if v >= version.Phase0 {
slashingQuotient = cfg.MinSlashingPenaltyQuotient
proposerRewardQuotient = cfg.ProposerRewardQuotient
whistleblowerRewardQuotient = cfg.WhistleBlowerRewardQuotient
return
}
err = errors.Errorf("unknown state version %s", version.String(v))
return
}

View File

@@ -137,8 +137,8 @@ func (s *LazilyPersistentStoreColumn) IsDataAvailable(
// fullCommitmentsToCheck returns the commitments to check for a given block.
func fullCommitmentsToCheck(nodeID enode.ID, block blocks.ROBlock, currentSlot primitives.Slot) (*safeCommitmentsArray, error) {
// Return early for blocks that are pre-deneb.
if block.Version() < version.Deneb {
// Return early for blocks that are pre-Fulu.
if block.Version() < version.Fulu {
return &safeCommitmentsArray{}, nil
}
@@ -165,9 +165,17 @@ func fullCommitmentsToCheck(nodeID enode.ID, block blocks.ROBlock, currentSlot p
return &safeCommitmentsArray{}, nil
}
// Retrieve the custody columns.
custodySubnetCount := peerdas.CustodySubnetCount()
custodyColumns, err := peerdas.CustodyColumns(nodeID, custodySubnetCount)
// Retrieve the groups count.
custodyGroupCount := peerdas.CustodyGroupCount()
// Retrieve custody groups.
custodyGroups, err := peerdas.CustodyGroups(nodeID, custodyGroupCount)
if err != nil {
return nil, errors.Wrap(err, "custody groups")
}
// Retrieve custody columns.
custodyColumns, err := peerdas.CustodyColumns(custodyGroups)
if err != nil {
return nil, errors.Wrap(err, "custody columns")
}

View File

@@ -31,9 +31,9 @@ func TestFullCommitmentsToCheck(t *testing.T) {
err error
}{
{
name: "pre deneb",
name: "pre fulu",
block: func(t *testing.T) blocks.ROBlock {
bb := util.NewBeaconBlockBellatrix()
bb := util.NewBeaconBlockElectra()
sb, err := blocks.NewSignedBeaconBlock(bb)
require.NoError(t, err)
rb, err := blocks.NewROBlock(sb)
@@ -44,7 +44,7 @@ func TestFullCommitmentsToCheck(t *testing.T) {
{
name: "commitments within da",
block: func(t *testing.T) blocks.ROBlock {
d := util.NewBeaconBlockDeneb()
d := util.NewBeaconBlockFulu()
d.Block.Body.BlobKzgCommitments = commits
d.Block.Slot = 100
sb, err := blocks.NewSignedBeaconBlock(d)
@@ -59,7 +59,7 @@ func TestFullCommitmentsToCheck(t *testing.T) {
{
name: "commitments outside da",
block: func(t *testing.T) blocks.ROBlock {
d := util.NewBeaconBlockDeneb()
d := util.NewBeaconBlockElectra()
// block is from slot 0, "current slot" is window size +1 (so outside the window)
d.Block.Body.BlobKzgCommitments = commits
sb, err := blocks.NewSignedBeaconBlock(d)

View File

@@ -36,7 +36,8 @@ func (s BlobStorageSummary) HasIndex(idx uint64) bool {
// HasDataColumnIndex true if the DataColumnSidecar at the given index is available in the filesystem.
func (s BlobStorageSummary) HasDataColumnIndex(idx uint64) bool {
// Protect from panic, but assume callers are sophisticated enough to not need an error telling them they have an invalid idx.
if idx >= fieldparams.NumberOfColumns {
numberOfColumns := params.BeaconConfig().NumberOfColumns
if idx >= numberOfColumns {
return false
}
@@ -66,7 +67,7 @@ func (s BlobStorageSummary) AllAvailable(count int) bool {
// AllDataColumnsAvailable returns true if we have all datacolumns for corresponding indices.
func (s BlobStorageSummary) AllDataColumnsAvailable(indices map[uint64]bool) bool {
if uint64(len(indices)) > fieldparams.NumberOfColumns {
if len(indices) > len(s.mask) {
return false
}
@@ -119,7 +120,8 @@ func (s *blobStorageCache) ensure(key [32]byte, slot primitives.Slot, idx uint64
v.slot = slot
if v.mask == nil {
// TODO: Separate blobs from data columns
v.mask = make(blobIndexMask, fieldparams.NumberOfColumns)
numberOfColumns := params.BeaconConfig().NumberOfColumns
v.mask = make(blobIndexMask, numberOfColumns)
}
if !v.mask[idx] {
s.updateMetrics(1)

View File

@@ -41,6 +41,7 @@ go_library(
"//beacon-chain/state/genesis:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",

View File

@@ -823,6 +823,16 @@ func unmarshalBlock(_ context.Context, enc []byte) (interfaces.ReadOnlySignedBea
if err := rawBlock.UnmarshalSSZ(enc[len(electraBlindKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal blinded Electra block")
}
case hasFuluKey(enc):
rawBlock = &ethpb.SignedBeaconBlockFulu{}
if err := rawBlock.UnmarshalSSZ(enc[len(fuluKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal Fulu block")
}
case hasFuluBlindKey(enc):
rawBlock = &ethpb.SignedBlindedBeaconBlockFulu{}
if err := rawBlock.UnmarshalSSZ(enc[len(fuluBlindKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal blinded Fulu block")
}
default:
// Marshal block bytes to phase 0 beacon block.
rawBlock = &ethpb.SignedBeaconBlock{}
@@ -851,32 +861,50 @@ func encodeBlock(blk interfaces.ReadOnlySignedBeaconBlock) ([]byte, error) {
}
func keyForBlock(blk interfaces.ReadOnlySignedBeaconBlock) ([]byte, error) {
switch blk.Version() {
case version.Electra:
v := blk.Version()
if v >= version.Fulu {
if blk.IsBlinded() {
return fuluBlindKey, nil
}
return fuluKey, nil
}
if v >= version.Electra {
if blk.IsBlinded() {
return electraBlindKey, nil
}
return electraKey, nil
case version.Deneb:
}
if v >= version.Deneb {
if blk.IsBlinded() {
return denebBlindKey, nil
}
return denebKey, nil
case version.Capella:
}
if v >= version.Capella {
if blk.IsBlinded() {
return capellaBlindKey, nil
}
return capellaKey, nil
case version.Bellatrix:
}
if v >= version.Bellatrix {
if blk.IsBlinded() {
return bellatrixBlindKey, nil
}
return bellatrixKey, nil
case version.Altair:
return altairKey, nil
case version.Phase0:
return nil, nil
default:
return nil, fmt.Errorf("unsupported block version: %v", blk.Version())
}
if v >= version.Altair {
return altairKey, nil
}
if v >= version.Phase0 {
return nil, nil
}
return nil, fmt.Errorf("unsupported block version: %v", blk.Version())
}

View File

@@ -65,3 +65,17 @@ func hasElectraBlindKey(enc []byte) bool {
}
return bytes.Equal(enc[:len(electraBlindKey)], electraBlindKey)
}
func hasFuluKey(enc []byte) bool {
if len(fuluKey) >= len(enc) {
return false
}
return bytes.Equal(enc[:len(fuluKey)], fuluKey)
}
func hasFuluBlindKey(enc []byte) bool {
if len(fuluBlindKey) >= len(enc) {
return false
}
return bytes.Equal(enc[:len(fuluBlindKey)], fuluBlindKey)
}

View File

@@ -109,6 +109,7 @@ var Buckets = [][]byte{
stateValidatorsBucket,
lightClientUpdatesBucket,
lightClientBootstrapBucket,
lightClientSyncCommitteeBucket,
// Indices buckets.
blockSlotIndicesBucket,
stateSlotIndicesBucket,

View File

@@ -7,6 +7,8 @@ import (
"github.com/golang/snappy"
"github.com/pkg/errors"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
light_client "github.com/prysmaticlabs/prysm/v5/consensus-types/light-client"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
@@ -35,9 +37,35 @@ func (s *Store) SaveLightClientBootstrap(ctx context.Context, blockRoot []byte,
_, span := trace.StartSpan(ctx, "BeaconDB.SaveLightClientBootstrap")
defer span.End()
bootstrapCopy, err := light_client.NewWrappedBootstrap(proto.Clone(bootstrap.Proto()))
if err != nil {
return errors.Wrap(err, "could not clone light client bootstrap")
}
syncCommitteeHash, err := bootstrapCopy.CurrentSyncCommittee().HashTreeRoot()
if err != nil {
return errors.Wrap(err, "could not hash current sync committee")
}
return s.db.Update(func(tx *bolt.Tx) error {
syncCommitteeBucket := tx.Bucket(lightClientSyncCommitteeBucket)
syncCommitteeAlreadyExists := syncCommitteeBucket.Get(syncCommitteeHash[:]) != nil
if !syncCommitteeAlreadyExists {
enc, err := bootstrapCopy.CurrentSyncCommittee().MarshalSSZ()
if err != nil {
return errors.Wrap(err, "could not marshal current sync committee")
}
if err := syncCommitteeBucket.Put(syncCommitteeHash[:], enc); err != nil {
return errors.Wrap(err, "could not save current sync committee")
}
}
err = bootstrapCopy.SetCurrentSyncCommittee(createEmptySyncCommittee())
if err != nil {
return errors.Wrap(err, "could not set current sync committee to zero while saving")
}
bkt := tx.Bucket(lightClientBootstrapBucket)
enc, err := encodeLightClientBootstrap(bootstrap)
enc, err := encodeLightClientBootstrap(bootstrapCopy, syncCommitteeHash)
if err != nil {
return err
}
@@ -50,20 +78,49 @@ func (s *Store) LightClientBootstrap(ctx context.Context, blockRoot []byte) (int
defer span.End()
var bootstrap interfaces.LightClientBootstrap
var syncCommitteeHash []byte
err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(lightClientBootstrapBucket)
syncCommitteeBucket := tx.Bucket(lightClientSyncCommitteeBucket)
enc := bkt.Get(blockRoot)
if enc == nil {
return nil
}
var err error
bootstrap, err = decodeLightClientBootstrap(enc)
bootstrap, syncCommitteeHash, err = decodeLightClientBootstrap(enc)
if err != nil {
return errors.Wrap(err, "could not decode light client bootstrap")
}
var syncCommitteeBytes = syncCommitteeBucket.Get(syncCommitteeHash)
if syncCommitteeBytes == nil {
return errors.New("sync committee not found")
}
syncCommittee := &ethpb.SyncCommittee{}
if err := syncCommittee.UnmarshalSSZ(syncCommitteeBytes); err != nil {
return errors.Wrap(err, "could not unmarshal sync committee")
}
err = bootstrap.SetCurrentSyncCommittee(syncCommittee)
if err != nil {
return errors.Wrap(err, "could not set current sync committee while retrieving")
}
return err
})
return bootstrap, err
}
func encodeLightClientBootstrap(bootstrap interfaces.LightClientBootstrap) ([]byte, error) {
func createEmptySyncCommittee() *ethpb.SyncCommittee {
syncCom := make([][]byte, params.BeaconConfig().SyncCommitteeSize)
for i := 0; uint64(i) < params.BeaconConfig().SyncCommitteeSize; i++ {
syncCom[i] = make([]byte, fieldparams.BLSPubkeyLength)
}
return &ethpb.SyncCommittee{
Pubkeys: syncCom,
AggregatePubkey: make([]byte, fieldparams.BLSPubkeyLength),
}
}
func encodeLightClientBootstrap(bootstrap interfaces.LightClientBootstrap, syncCommitteeHash [32]byte) ([]byte, error) {
key, err := keyForLightClientUpdate(bootstrap.Version())
if err != nil {
return nil, err
@@ -72,48 +129,56 @@ func encodeLightClientBootstrap(bootstrap interfaces.LightClientBootstrap) ([]by
if err != nil {
return nil, errors.Wrap(err, "could not marshal light client bootstrap")
}
fullEnc := make([]byte, len(key)+len(enc))
fullEnc := make([]byte, len(key)+32+len(enc))
copy(fullEnc, key)
copy(fullEnc[len(key):], enc)
return snappy.Encode(nil, fullEnc), nil
copy(fullEnc[len(key):len(key)+32], syncCommitteeHash[:])
copy(fullEnc[len(key)+32:], enc)
compressedEnc := snappy.Encode(nil, fullEnc)
return compressedEnc, nil
}
func decodeLightClientBootstrap(enc []byte) (interfaces.LightClientBootstrap, error) {
func decodeLightClientBootstrap(enc []byte) (interfaces.LightClientBootstrap, []byte, error) {
var err error
enc, err = snappy.Decode(nil, enc)
if err != nil {
return nil, errors.Wrap(err, "could not snappy decode light client bootstrap")
return nil, nil, errors.Wrap(err, "could not snappy decode light client bootstrap")
}
var m proto.Message
var syncCommitteeHash []byte
switch {
case hasAltairKey(enc):
bootstrap := &ethpb.LightClientBootstrapAltair{}
if err := bootstrap.UnmarshalSSZ(enc[len(altairKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal Altair light client bootstrap")
if err := bootstrap.UnmarshalSSZ(enc[len(altairKey)+32:]); err != nil {
return nil, nil, errors.Wrap(err, "could not unmarshal Altair light client bootstrap")
}
m = bootstrap
syncCommitteeHash = enc[len(altairKey) : len(altairKey)+32]
case hasCapellaKey(enc):
bootstrap := &ethpb.LightClientBootstrapCapella{}
if err := bootstrap.UnmarshalSSZ(enc[len(capellaKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal Capella light client bootstrap")
if err := bootstrap.UnmarshalSSZ(enc[len(capellaKey)+32:]); err != nil {
return nil, nil, errors.Wrap(err, "could not unmarshal Capella light client bootstrap")
}
m = bootstrap
syncCommitteeHash = enc[len(capellaKey) : len(capellaKey)+32]
case hasDenebKey(enc):
bootstrap := &ethpb.LightClientBootstrapDeneb{}
if err := bootstrap.UnmarshalSSZ(enc[len(denebKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal Deneb light client bootstrap")
if err := bootstrap.UnmarshalSSZ(enc[len(denebKey)+32:]); err != nil {
return nil, nil, errors.Wrap(err, "could not unmarshal Deneb light client bootstrap")
}
m = bootstrap
syncCommitteeHash = enc[len(denebKey) : len(denebKey)+32]
case hasElectraKey(enc):
bootstrap := &ethpb.LightClientBootstrapElectra{}
if err := bootstrap.UnmarshalSSZ(enc[len(electraKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal Electra light client bootstrap")
if err := bootstrap.UnmarshalSSZ(enc[len(electraKey)+32:]); err != nil {
return nil, nil, errors.Wrap(err, "could not unmarshal Electra light client bootstrap")
}
m = bootstrap
syncCommitteeHash = enc[len(electraKey) : len(electraKey)+32]
default:
return nil, errors.New("decoding of saved light client bootstrap is unsupported")
return nil, nil, errors.New("decoding of saved light client bootstrap is unsupported")
}
return light_client.NewWrappedBootstrap(m)
bootstrap, err := light_client.NewWrappedBootstrap(m)
return bootstrap, syncCommitteeHash, err
}
func (s *Store) LightClientUpdates(ctx context.Context, startPeriod, endPeriod uint64) (map[uint64]interfaces.LightClientUpdate, error) {

View File

@@ -18,6 +18,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/time/slots"
bolt "go.etcd.io/bbolt"
"google.golang.org/protobuf/proto"
)
@@ -122,7 +123,7 @@ func createUpdate(t *testing.T, v int) (interfaces.LightClientUpdate, error) {
StateRoot: sampleRoot,
BodyRoot: sampleRoot,
},
Execution: &enginev1.ExecutionPayloadHeaderElectra{
Execution: &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
@@ -140,6 +141,34 @@ func createUpdate(t *testing.T, v int) (interfaces.LightClientUpdate, error) {
require.NoError(t, err)
st, err = util.NewBeaconStateElectra()
require.NoError(t, err)
case version.Fulu:
slot = primitives.Slot(config.FuluForkEpoch * primitives.Epoch(config.SlotsPerEpoch)).Add(1)
header, err = light_client.NewWrappedHeader(&pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{
Slot: 1,
ProposerIndex: primitives.ValidatorIndex(rand.Int()),
ParentRoot: sampleRoot,
StateRoot: sampleRoot,
BodyRoot: sampleRoot,
},
Execution: &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
WithdrawalsRoot: make([]byte, fieldparams.RootLength),
},
ExecutionBranch: sampleExecutionBranch,
})
require.NoError(t, err)
st, err = util.NewBeaconStateFulu()
require.NoError(t, err)
default:
return nil, fmt.Errorf("unsupported version %s", version.String(v))
}
@@ -167,6 +196,7 @@ func TestStore_LightClientUpdate_CanSaveRetrieve(t *testing.T) {
cfg.CapellaForkEpoch = 1
cfg.DenebForkEpoch = 2
cfg.ElectraForkEpoch = 3
cfg.FuluForkEpoch = 3
params.OverrideBeaconConfig(cfg)
db := setupDB(t)
@@ -213,6 +243,17 @@ func TestStore_LightClientUpdate_CanSaveRetrieve(t *testing.T) {
err = db.SaveLightClientUpdate(ctx, period, update)
require.NoError(t, err)
retrievedUpdate, err := db.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.DeepEqual(t, update, retrievedUpdate, "retrieved update does not match saved update")
})
t.Run("Fulu", func(t *testing.T) {
update, err := createUpdate(t, version.Fulu)
require.NoError(t, err)
period := uint64(1)
err = db.SaveLightClientUpdate(ctx, period, update)
require.NoError(t, err)
retrievedUpdate, err := db.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.DeepEqual(t, update, retrievedUpdate, "retrieved update does not match saved update")
@@ -571,7 +612,13 @@ func TestStore_LightClientBootstrap_CanSaveRetrieve(t *testing.T) {
retrievedBootstrap, err := db.LightClientBootstrap(ctx, []byte("blockRootAltair"))
require.NoError(t, err)
require.DeepEqual(t, bootstrap, retrievedBootstrap, "retrieved bootstrap does not match saved bootstrap")
require.DeepEqual(t, bootstrap.Header(), retrievedBootstrap.Header(), "retrieved bootstrap header does not match saved bootstrap header")
require.DeepEqual(t, bootstrap.CurrentSyncCommittee(), retrievedBootstrap.CurrentSyncCommittee(), "retrieved bootstrap sync committee does not match saved bootstrap sync committee")
savedBranch, err := bootstrap.CurrentSyncCommitteeBranch()
require.NoError(t, err)
retrievedBranch, err := retrievedBootstrap.CurrentSyncCommitteeBranch()
require.NoError(t, err)
require.DeepEqual(t, savedBranch, retrievedBranch, "retrieved bootstrap sync committee branch does not match saved bootstrap sync committee branch")
})
t.Run("Capella", func(t *testing.T) {
@@ -586,7 +633,13 @@ func TestStore_LightClientBootstrap_CanSaveRetrieve(t *testing.T) {
retrievedBootstrap, err := db.LightClientBootstrap(ctx, []byte("blockRootCapella"))
require.NoError(t, err)
require.DeepEqual(t, bootstrap, retrievedBootstrap, "retrieved bootstrap does not match saved bootstrap")
require.DeepEqual(t, bootstrap.Header(), retrievedBootstrap.Header(), "retrieved bootstrap header does not match saved bootstrap header")
require.DeepEqual(t, bootstrap.CurrentSyncCommittee(), retrievedBootstrap.CurrentSyncCommittee(), "retrieved bootstrap sync committee does not match saved bootstrap sync committee")
savedBranch, err := bootstrap.CurrentSyncCommitteeBranch()
require.NoError(t, err)
retrievedBranch, err := retrievedBootstrap.CurrentSyncCommitteeBranch()
require.NoError(t, err)
require.DeepEqual(t, savedBranch, retrievedBranch, "retrieved bootstrap sync committee branch does not match saved bootstrap sync committee branch")
})
t.Run("Deneb", func(t *testing.T) {
@@ -601,7 +654,13 @@ func TestStore_LightClientBootstrap_CanSaveRetrieve(t *testing.T) {
retrievedBootstrap, err := db.LightClientBootstrap(ctx, []byte("blockRootDeneb"))
require.NoError(t, err)
require.DeepEqual(t, bootstrap, retrievedBootstrap, "retrieved bootstrap does not match saved bootstrap")
require.DeepEqual(t, bootstrap.Header(), retrievedBootstrap.Header(), "retrieved bootstrap header does not match saved bootstrap header")
require.DeepEqual(t, bootstrap.CurrentSyncCommittee(), retrievedBootstrap.CurrentSyncCommittee(), "retrieved bootstrap sync committee does not match saved bootstrap sync committee")
savedBranch, err := bootstrap.CurrentSyncCommitteeBranch()
require.NoError(t, err)
retrievedBranch, err := retrievedBootstrap.CurrentSyncCommitteeBranch()
require.NoError(t, err)
require.DeepEqual(t, savedBranch, retrievedBranch, "retrieved bootstrap sync committee branch does not match saved bootstrap sync committee branch")
})
t.Run("Electra", func(t *testing.T) {
@@ -616,10 +675,138 @@ func TestStore_LightClientBootstrap_CanSaveRetrieve(t *testing.T) {
retrievedBootstrap, err := db.LightClientBootstrap(ctx, []byte("blockRootElectra"))
require.NoError(t, err)
require.DeepEqual(t, bootstrap, retrievedBootstrap, "retrieved bootstrap does not match saved bootstrap")
require.DeepEqual(t, bootstrap.Header(), retrievedBootstrap.Header(), "retrieved bootstrap header does not match saved bootstrap header")
require.DeepEqual(t, bootstrap.CurrentSyncCommittee(), retrievedBootstrap.CurrentSyncCommittee(), "retrieved bootstrap sync committee does not match saved bootstrap sync committee")
savedBranch, err := bootstrap.CurrentSyncCommitteeBranchElectra()
require.NoError(t, err)
retrievedBranch, err := retrievedBootstrap.CurrentSyncCommitteeBranchElectra()
require.NoError(t, err)
require.DeepEqual(t, savedBranch, retrievedBranch, "retrieved bootstrap sync committee branch does not match saved bootstrap sync committee branch")
})
}
func TestStore_LightClientBootstrap_MultipleBootstrapsWithSameSyncCommittee(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.AltairForkEpoch = 0
cfg.CapellaForkEpoch = 1
cfg.DenebForkEpoch = 2
cfg.ElectraForkEpoch = 3
cfg.EpochsPerSyncCommitteePeriod = 1
params.OverrideBeaconConfig(cfg)
db := setupDB(t)
ctx := context.Background()
bootstrap1, err := createDefaultLightClientBootstrap(primitives.Slot(uint64(params.BeaconConfig().AltairForkEpoch) * uint64(params.BeaconConfig().SlotsPerEpoch)))
require.NoError(t, err)
bootstrap2, err := createDefaultLightClientBootstrap(primitives.Slot(uint64(params.BeaconConfig().AltairForkEpoch) * uint64(params.BeaconConfig().SlotsPerEpoch)))
require.NoError(t, err)
randomSyncCommittee := createRandomSyncCommittee()
err = bootstrap1.SetCurrentSyncCommittee(randomSyncCommittee)
require.NoError(t, err)
err = bootstrap2.SetCurrentSyncCommittee(randomSyncCommittee)
require.NoError(t, err)
err = db.SaveLightClientBootstrap(ctx, []byte("blockRootAltair1"), bootstrap1)
require.NoError(t, err)
err = db.SaveLightClientBootstrap(ctx, []byte("blockRootAltair2"), bootstrap2)
require.NoError(t, err)
retrievedBootstrap1, err := db.LightClientBootstrap(ctx, []byte("blockRootAltair1"))
require.NoError(t, err)
retrievedBootstrap2, err := db.LightClientBootstrap(ctx, []byte("blockRootAltair2"))
require.NoError(t, err)
require.DeepEqual(t, bootstrap1.Header(), retrievedBootstrap1.Header(), "retrieved bootstrap1 header does not match saved bootstrap1 header")
require.DeepEqual(t, randomSyncCommittee, retrievedBootstrap1.CurrentSyncCommittee(), "retrieved bootstrap1 sync committee does not match saved bootstrap1 sync committee")
savedBranch, err := bootstrap1.CurrentSyncCommitteeBranch()
require.NoError(t, err)
retrievedBranch, err := retrievedBootstrap1.CurrentSyncCommitteeBranch()
require.NoError(t, err)
require.DeepEqual(t, savedBranch, retrievedBranch, "retrieved bootstrap1 sync committee branch does not match saved bootstrap1 sync committee branch")
require.DeepEqual(t, bootstrap2.Header(), retrievedBootstrap2.Header(), "retrieved bootstrap1 header does not match saved bootstrap1 header")
require.DeepEqual(t, randomSyncCommittee, retrievedBootstrap2.CurrentSyncCommittee(), "retrieved bootstrap1 sync committee does not match saved bootstrap1 sync committee")
savedBranch2, err := bootstrap2.CurrentSyncCommitteeBranch()
require.NoError(t, err)
retrievedBranch2, err := retrievedBootstrap2.CurrentSyncCommitteeBranch()
require.NoError(t, err)
require.DeepEqual(t, savedBranch2, retrievedBranch2, "retrieved bootstrap1 sync committee branch does not match saved bootstrap1 sync committee branch")
// Ensure that the sync committee is only stored once
err = db.db.View(func(tx *bolt.Tx) error {
bucket := tx.Bucket(lightClientSyncCommitteeBucket)
require.NotNil(t, bucket)
count := bucket.Stats().KeyN
require.Equal(t, 1, count)
return nil
})
require.NoError(t, err)
}
func TestStore_LightClientBootstrap_MultipleBootstrapsWithDifferentSyncCommittees(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.AltairForkEpoch = 0
cfg.CapellaForkEpoch = 1
cfg.DenebForkEpoch = 2
cfg.ElectraForkEpoch = 3
cfg.EpochsPerSyncCommitteePeriod = 1
params.OverrideBeaconConfig(cfg)
db := setupDB(t)
ctx := context.Background()
bootstrap1, err := createDefaultLightClientBootstrap(primitives.Slot(uint64(params.BeaconConfig().AltairForkEpoch) * uint64(params.BeaconConfig().SlotsPerEpoch)))
require.NoError(t, err)
bootstrap2, err := createDefaultLightClientBootstrap(primitives.Slot(uint64(params.BeaconConfig().AltairForkEpoch) * uint64(params.BeaconConfig().SlotsPerEpoch)))
require.NoError(t, err)
err = bootstrap1.SetCurrentSyncCommittee(createRandomSyncCommittee())
require.NoError(t, err)
err = bootstrap2.SetCurrentSyncCommittee(createRandomSyncCommittee())
require.NoError(t, err)
err = db.SaveLightClientBootstrap(ctx, []byte("blockRootAltair1"), bootstrap1)
require.NoError(t, err)
err = db.SaveLightClientBootstrap(ctx, []byte("blockRootAltair2"), bootstrap2)
require.NoError(t, err)
retrievedBootstrap1, err := db.LightClientBootstrap(ctx, []byte("blockRootAltair1"))
require.NoError(t, err)
retrievedBootstrap2, err := db.LightClientBootstrap(ctx, []byte("blockRootAltair2"))
require.NoError(t, err)
require.DeepEqual(t, bootstrap1.Header(), retrievedBootstrap1.Header(), "retrieved bootstrap1 header does not match saved bootstrap1 header")
require.DeepEqual(t, bootstrap1.CurrentSyncCommittee(), retrievedBootstrap1.CurrentSyncCommittee(), "retrieved bootstrap1 sync committee does not match saved bootstrap1 sync committee")
savedBranch, err := bootstrap1.CurrentSyncCommitteeBranch()
require.NoError(t, err)
retrievedBranch, err := retrievedBootstrap1.CurrentSyncCommitteeBranch()
require.NoError(t, err)
require.DeepEqual(t, savedBranch, retrievedBranch, "retrieved bootstrap1 sync committee branch does not match saved bootstrap1 sync committee branch")
require.DeepEqual(t, bootstrap2.Header(), retrievedBootstrap2.Header(), "retrieved bootstrap1 header does not match saved bootstrap1 header")
require.DeepEqual(t, bootstrap2.CurrentSyncCommittee(), retrievedBootstrap2.CurrentSyncCommittee(), "retrieved bootstrap1 sync committee does not match saved bootstrap1 sync committee")
savedBranch2, err := bootstrap2.CurrentSyncCommitteeBranch()
require.NoError(t, err)
retrievedBranch2, err := retrievedBootstrap2.CurrentSyncCommitteeBranch()
require.NoError(t, err)
require.DeepEqual(t, savedBranch2, retrievedBranch2, "retrieved bootstrap1 sync committee branch does not match saved bootstrap1 sync committee branch")
// Ensure that the sync committee is stored twice
err = db.db.View(func(tx *bolt.Tx) error {
bucket := tx.Bucket(lightClientSyncCommitteeBucket)
require.NotNil(t, bucket)
count := bucket.Stats().KeyN
require.Equal(t, 2, count)
return nil
})
require.NoError(t, err)
}
func createDefaultLightClientBootstrap(currentSlot primitives.Slot) (interfaces.LightClientBootstrap, error) {
currentEpoch := slots.ToEpoch(currentSlot)
syncCommitteeSize := params.BeaconConfig().SyncCommitteeSize

View File

@@ -18,8 +18,9 @@ var (
registrationBucket = []byte("registration")
// Light Client Updates Bucket
lightClientUpdatesBucket = []byte("light-client-updates")
lightClientBootstrapBucket = []byte("light-client-bootstrap")
lightClientUpdatesBucket = []byte("light-client-updates")
lightClientBootstrapBucket = []byte("light-client-bootstrap")
lightClientSyncCommitteeBucket = []byte("light-client-sync-committee")
// Deprecated: This bucket was migrated in PR 6461. Do not use, except for migrations.
slotsHasObjectBucket = []byte("slots-has-objects")
@@ -54,6 +55,8 @@ var (
denebBlindKey = []byte("blind-deneb")
electraKey = []byte("electra")
electraBlindKey = []byte("blind-electra")
fuluKey = []byte("fulu")
fuluBlindKey = []byte("blind-fulu")
// block root included in the beacon state used by weak subjectivity initial sync
originCheckpointBlockRootKey = []byte("origin-checkpoint-block-root")

View File

@@ -517,6 +517,19 @@ func (s *Store) unmarshalState(_ context.Context, enc []byte, validatorEntries [
}
switch {
case hasFuluKey(enc):
protoState := &ethpb.BeaconStateFulu{}
if err := protoState.UnmarshalSSZ(enc[len(fuluKey):]); err != nil {
return nil, errors.Wrap(err, "failed to unmarshal encoding for Electra")
}
ok, err := s.isStateValidatorMigrationOver()
if err != nil {
return nil, err
}
if ok {
protoState.Validators = validatorEntries
}
return statenative.InitializeFromProtoUnsafeFulu(protoState)
case hasElectraKey(enc):
protoState := &ethpb.BeaconStateElectra{}
if err := protoState.UnmarshalSSZ(enc[len(electraKey):]); err != nil {
@@ -676,6 +689,19 @@ func marshalState(ctx context.Context, st state.ReadOnlyBeaconState) ([]byte, er
return nil, err
}
return snappy.Encode(nil, append(electraKey, rawObj...)), nil
case version.Fulu:
rState, ok := st.ToProtoUnsafe().(*ethpb.BeaconStateFulu)
if !ok {
return nil, errors.New("non valid inner state")
}
if rState == nil {
return nil, errors.New("nil state")
}
rawObj, err := rState.MarshalSSZ()
if err != nil {
return nil, err
}
return snappy.Encode(nil, append(fuluKey, rawObj...)), nil
default:
return nil, errors.New("invalid inner state")
}

View File

@@ -139,7 +139,7 @@ func TestState_CanSaveRetrieve(t *testing.T) {
st, err := util.NewBeaconStateElectra()
require.NoError(t, err)
require.NoError(t, st.SetSlot(100))
p, err := blocks.WrappedExecutionPayloadHeaderElectra(&enginev1.ExecutionPayloadHeaderElectra{
p, err := blocks.WrappedExecutionPayloadHeaderDeneb(&enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: make([]byte, 32),
FeeRecipient: make([]byte, 20),
StateRoot: make([]byte, 32),

View File

@@ -131,11 +131,11 @@ go_test(
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//:go_default_library",
"@com_github_ethereum_go_ethereum//accounts/abi/bind/backends:go_default_library",
"@com_github_ethereum_go_ethereum//beacon/engine:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_ethereum_go_ethereum//core/types:go_default_library",
"@com_github_ethereum_go_ethereum//ethclient/simulated:go_default_library",
"@com_github_ethereum_go_ethereum//rpc:go_default_library",
"@com_github_holiman_uint256//:go_default_library",
"@com_github_pkg_errors//:go_default_library",

View File

@@ -12,6 +12,7 @@ import (
dbutil "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/testing"
mockExecution "github.com/prysmaticlabs/prysm/v5/beacon-chain/execution/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/execution/types"
"github.com/prysmaticlabs/prysm/v5/config/params"
contracts "github.com/prysmaticlabs/prysm/v5/contracts/deposit"
"github.com/prysmaticlabs/prysm/v5/contracts/deposit/mock"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
@@ -44,7 +45,7 @@ func TestLatestMainchainInfo_OK(t *testing.T) {
web3Service = setDefaultMocks(web3Service)
web3Service.rpcClient = &mockExecution.RPCClient{Backend: testAcc.Backend}
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend.Client())
require.NoError(t, err)
testAcc.Backend.Commit()
@@ -141,7 +142,7 @@ func TestBlockExists_ValidHash(t *testing.T) {
web3Service = setDefaultMocks(web3Service)
web3Service.rpcClient = &mockExecution.RPCClient{Backend: testAcc.Backend}
testAcc.Backend.Commit()
block, err := testAcc.Backend.BlockByNumber(context.Background(), big.NewInt(0))
block, err := testAcc.Backend.Client().BlockByNumber(context.Background(), big.NewInt(0))
assert.NoError(t, err)
exists, height, err := web3Service.BlockExists(context.Background(), block.Hash())
@@ -201,8 +202,10 @@ func TestBlockExists_UsesCachedBlockInfo(t *testing.T) {
}
func TestService_BlockNumberByTimestamp(t *testing.T) {
ctx := context.Background()
beaconDB := dbutil.SetupDB(t)
testAcc, err := mock.Setup()
require.NoError(t, err, "Unable to set up simulated backend")
server, endpoint, err := mockExecution.SetupRPCServer()
require.NoError(t, err)
@@ -216,16 +219,22 @@ func TestService_BlockNumberByTimestamp(t *testing.T) {
require.NoError(t, err)
web3Service = setDefaultMocks(web3Service)
web3Service.rpcClient = &mockExecution.RPCClient{Backend: testAcc.Backend}
// simulated backend sets eth1 block
params.SetupTestConfigCleanup(t)
conf := params.BeaconConfig().Copy()
conf.SecondsPerETH1Block = 1
params.OverrideBeaconConfig(conf)
initialHead, err := testAcc.Backend.Client().HeaderByNumber(ctx, nil)
require.NoError(t, err)
for i := 0; i < 200; i++ {
testAcc.Backend.Commit()
}
ctx := context.Background()
hd, err := testAcc.Backend.HeaderByNumber(ctx, nil)
hd, err := testAcc.Backend.Client().HeaderByNumber(ctx, nil)
require.NoError(t, err)
web3Service.latestEth1Data.BlockTime = hd.Time
web3Service.latestEth1Data.BlockHeight = hd.Number.Uint64()
blk, err := web3Service.BlockByTimestamp(ctx, 1000 /* time */)
blk, err := web3Service.BlockByTimestamp(ctx, initialHead.Time+100 /* time */)
require.NoError(t, err)
if blk.Number.Cmp(big.NewInt(0)) == 0 {
t.Error("Returned a block with zero number, expected to be non zero")
@@ -253,7 +262,7 @@ func TestService_BlockNumberByTimestampLessTargetTime(t *testing.T) {
testAcc.Backend.Commit()
}
ctx := context.Background()
hd, err := testAcc.Backend.HeaderByNumber(ctx, nil)
hd, err := testAcc.Backend.Client().HeaderByNumber(ctx, nil)
require.NoError(t, err)
web3Service.latestEth1Data.BlockTime = hd.Time
// Use extremely small deadline to illustrate that context deadlines are respected.
@@ -291,7 +300,7 @@ func TestService_BlockNumberByTimestampMoreTargetTime(t *testing.T) {
testAcc.Backend.Commit()
}
ctx := context.Background()
hd, err := testAcc.Backend.HeaderByNumber(ctx, nil)
hd, err := testAcc.Backend.Client().HeaderByNumber(ctx, nil)
require.NoError(t, err)
web3Service.latestEth1Data.BlockTime = hd.Time
// Use extremely small deadline to illustrate that context deadlines are respected.

View File

@@ -136,30 +136,18 @@ func (s *Service) NewPayload(ctx context.Context, payload interfaces.ExecutionDa
defer cancel()
result := &pb.PayloadStatus{}
switch payload.Proto().(type) {
switch payloadPb := payload.Proto().(type) {
case *pb.ExecutionPayload:
payloadPb, ok := payload.Proto().(*pb.ExecutionPayload)
if !ok {
return nil, errors.New("execution data must be a Bellatrix or Capella execution payload")
}
err := s.rpcClient.CallContext(ctx, result, NewPayloadMethod, payloadPb)
if err != nil {
return nil, handleRPCError(err)
}
case *pb.ExecutionPayloadCapella:
payloadPb, ok := payload.Proto().(*pb.ExecutionPayloadCapella)
if !ok {
return nil, errors.New("execution data must be a Capella execution payload")
}
err := s.rpcClient.CallContext(ctx, result, NewPayloadMethodV2, payloadPb)
if err != nil {
return nil, handleRPCError(err)
}
case *pb.ExecutionPayloadDeneb:
payloadPb, ok := payload.Proto().(*pb.ExecutionPayloadDeneb)
if !ok {
return nil, errors.New("execution data must be a Deneb execution payload")
}
if executionRequests == nil {
err := s.rpcClient.CallContext(ctx, result, NewPayloadMethodV3, payloadPb, versionedHashes, parentBlockRoot)
if err != nil {
@@ -233,7 +221,7 @@ func (s *Service) ForkchoiceUpdated(
if err != nil {
return nil, nil, handleRPCError(err)
}
case version.Deneb, version.Electra:
case version.Deneb, version.Electra, version.Fulu:
a, err := attrs.PbV3()
if err != nil {
return nil, nil, err
@@ -643,43 +631,7 @@ func fullPayloadFromPayloadBody(
return nil, errors.New("execution block and header cannot be nil")
}
switch bVersion {
case version.Bellatrix:
return blocks.WrappedExecutionPayload(&pb.ExecutionPayload{
ParentHash: header.ParentHash(),
FeeRecipient: header.FeeRecipient(),
StateRoot: header.StateRoot(),
ReceiptsRoot: header.ReceiptsRoot(),
LogsBloom: header.LogsBloom(),
PrevRandao: header.PrevRandao(),
BlockNumber: header.BlockNumber(),
GasLimit: header.GasLimit(),
GasUsed: header.GasUsed(),
Timestamp: header.Timestamp(),
ExtraData: header.ExtraData(),
BaseFeePerGas: header.BaseFeePerGas(),
BlockHash: header.BlockHash(),
Transactions: pb.RecastHexutilByteSlice(body.Transactions),
})
case version.Capella:
return blocks.WrappedExecutionPayloadCapella(&pb.ExecutionPayloadCapella{
ParentHash: header.ParentHash(),
FeeRecipient: header.FeeRecipient(),
StateRoot: header.StateRoot(),
ReceiptsRoot: header.ReceiptsRoot(),
LogsBloom: header.LogsBloom(),
PrevRandao: header.PrevRandao(),
BlockNumber: header.BlockNumber(),
GasLimit: header.GasLimit(),
GasUsed: header.GasUsed(),
Timestamp: header.Timestamp(),
ExtraData: header.ExtraData(),
BaseFeePerGas: header.BaseFeePerGas(),
BlockHash: header.BlockHash(),
Transactions: pb.RecastHexutilByteSlice(body.Transactions),
Withdrawals: body.Withdrawals,
}) // We can't get the block value and don't care about the block value for this instance
case version.Deneb, version.Electra:
if bVersion >= version.Deneb {
ebg, err := header.ExcessBlobGas()
if err != nil {
return nil, errors.Wrap(err, "unable to extract ExcessBlobGas attribute from execution payload header")
@@ -708,9 +660,48 @@ func fullPayloadFromPayloadBody(
ExcessBlobGas: ebg,
BlobGasUsed: bgu,
}) // We can't get the block value and don't care about the block value for this instance
default:
return nil, fmt.Errorf("unknown execution block version for payload %d", bVersion)
}
if bVersion >= version.Capella {
return blocks.WrappedExecutionPayloadCapella(&pb.ExecutionPayloadCapella{
ParentHash: header.ParentHash(),
FeeRecipient: header.FeeRecipient(),
StateRoot: header.StateRoot(),
ReceiptsRoot: header.ReceiptsRoot(),
LogsBloom: header.LogsBloom(),
PrevRandao: header.PrevRandao(),
BlockNumber: header.BlockNumber(),
GasLimit: header.GasLimit(),
GasUsed: header.GasUsed(),
Timestamp: header.Timestamp(),
ExtraData: header.ExtraData(),
BaseFeePerGas: header.BaseFeePerGas(),
BlockHash: header.BlockHash(),
Transactions: pb.RecastHexutilByteSlice(body.Transactions),
Withdrawals: body.Withdrawals,
}) // We can't get the block value and don't care about the block value for this instance
}
if bVersion >= version.Bellatrix {
return blocks.WrappedExecutionPayload(&pb.ExecutionPayload{
ParentHash: header.ParentHash(),
FeeRecipient: header.FeeRecipient(),
StateRoot: header.StateRoot(),
ReceiptsRoot: header.ReceiptsRoot(),
LogsBloom: header.LogsBloom(),
PrevRandao: header.PrevRandao(),
BlockNumber: header.BlockNumber(),
GasLimit: header.GasLimit(),
GasUsed: header.GasUsed(),
Timestamp: header.Timestamp(),
ExtraData: header.ExtraData(),
BaseFeePerGas: header.BaseFeePerGas(),
BlockHash: header.BlockHash(),
Transactions: pb.RecastHexutilByteSlice(body.Transactions),
})
}
return nil, fmt.Errorf("unknown execution block version for payload %s", version.String(bVersion))
}
// Handles errors received from the RPC server according to the specification.
@@ -802,35 +793,7 @@ func tDStringToUint256(td string) (*uint256.Int, error) {
}
func EmptyExecutionPayload(v int) (proto.Message, error) {
switch v {
case version.Bellatrix:
return &pb.ExecutionPayload{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
}, nil
case version.Capella:
return &pb.ExecutionPayloadCapella{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
Withdrawals: make([]*pb.Withdrawal, 0),
}, nil
case version.Deneb, version.Electra:
if v >= version.Deneb {
return &pb.ExecutionPayloadDeneb{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
@@ -844,15 +807,10 @@ func EmptyExecutionPayload(v int) (proto.Message, error) {
Transactions: make([][]byte, 0),
Withdrawals: make([]*pb.Withdrawal, 0),
}, nil
default:
return nil, errors.Wrapf(ErrUnsupportedVersion, "version=%s", version.String(v))
}
}
func EmptyExecutionPayloadHeader(v int) (proto.Message, error) {
switch v {
case version.Bellatrix:
return &pb.ExecutionPayloadHeader{
if v >= version.Capella {
return &pb.ExecutionPayloadCapella{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
@@ -862,22 +820,31 @@ func EmptyExecutionPayloadHeader(v int) (proto.Message, error) {
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
Withdrawals: make([]*pb.Withdrawal, 0),
}, nil
case version.Capella:
return &pb.ExecutionPayloadHeaderCapella{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
WithdrawalsRoot: make([]byte, fieldparams.RootLength),
}
if v >= version.Bellatrix {
return &pb.ExecutionPayload{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
}, nil
case version.Deneb, version.Electra:
}
return nil, errors.Wrapf(ErrUnsupportedVersion, "version=%s", version.String(v))
}
func EmptyExecutionPayloadHeader(v int) (proto.Message, error) {
if v >= version.Deneb {
return &pb.ExecutionPayloadHeaderDeneb{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
@@ -891,9 +858,39 @@ func EmptyExecutionPayloadHeader(v int) (proto.Message, error) {
TransactionsRoot: make([]byte, fieldparams.RootLength),
WithdrawalsRoot: make([]byte, fieldparams.RootLength),
}, nil
default:
return nil, errors.Wrapf(ErrUnsupportedVersion, "version=%s", version.String(v))
}
if v >= version.Capella {
return &pb.ExecutionPayloadHeaderCapella{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
WithdrawalsRoot: make([]byte, fieldparams.RootLength),
}, nil
}
if v >= version.Bellatrix {
return &pb.ExecutionPayloadHeader{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
}, nil
}
return nil, errors.Wrapf(ErrUnsupportedVersion, "version=%s", version.String(v))
}
func toBlockNumArg(number *big.Int) string {

View File

@@ -47,7 +47,7 @@ func TestProcessDepositLog_OK(t *testing.T) {
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend.Client())
require.NoError(t, err)
testAcc.Backend.Commit()
@@ -72,7 +72,7 @@ func TestProcessDepositLog_OK(t *testing.T) {
},
}
logs, err := testAcc.Backend.FilterLogs(web3Service.ctx, query)
logs, err := testAcc.Backend.Client().FilterLogs(web3Service.ctx, query)
require.NoError(t, err, "Unable to retrieve logs")
if len(logs) == 0 {
@@ -116,7 +116,7 @@ func TestProcessDepositLog_InsertsPendingDeposit(t *testing.T) {
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend.Client())
require.NoError(t, err)
testAcc.Backend.Commit()
@@ -144,7 +144,7 @@ func TestProcessDepositLog_InsertsPendingDeposit(t *testing.T) {
},
}
logs, err := testAcc.Backend.FilterLogs(web3Service.ctx, query)
logs, err := testAcc.Backend.Client().FilterLogs(web3Service.ctx, query)
require.NoError(t, err, "Unable to retrieve logs")
web3Service.chainStartData.Chainstarted = true
@@ -176,7 +176,7 @@ func TestUnpackDepositLogData_OK(t *testing.T) {
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend.Client())
require.NoError(t, err)
testAcc.Backend.Commit()
@@ -199,7 +199,7 @@ func TestUnpackDepositLogData_OK(t *testing.T) {
},
}
logz, err := testAcc.Backend.FilterLogs(web3Service.ctx, query)
logz, err := testAcc.Backend.Client().FilterLogs(web3Service.ctx, query)
require.NoError(t, err, "Unable to retrieve logs")
loggedPubkey, withCreds, _, loggedSig, index, err := contracts.UnpackDepositLogData(logz[0].Data)
@@ -232,7 +232,7 @@ func TestProcessETH2GenesisLog_8DuplicatePubkeys(t *testing.T) {
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend.Client())
require.NoError(t, err)
params.SetupTestConfigCleanup(t)
@@ -269,7 +269,7 @@ func TestProcessETH2GenesisLog_8DuplicatePubkeys(t *testing.T) {
},
}
logs, err := testAcc.Backend.FilterLogs(web3Service.ctx, query)
logs, err := testAcc.Backend.Client().FilterLogs(web3Service.ctx, query)
require.NoError(t, err, "Unable to retrieve logs")
for i := range logs {
@@ -307,7 +307,7 @@ func TestProcessETH2GenesisLog(t *testing.T) {
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend.Client())
web3Service.rpcClient = &mockExecution.RPCClient{Backend: testAcc.Backend}
require.NoError(t, err)
params.SetupTestConfigCleanup(t)
@@ -342,7 +342,7 @@ func TestProcessETH2GenesisLog(t *testing.T) {
},
}
logs, err := testAcc.Backend.FilterLogs(web3Service.ctx, query)
logs, err := testAcc.Backend.Client().FilterLogs(web3Service.ctx, query)
require.NoError(t, err, "Unable to retrieve logs")
require.Equal(t, depositsReqForChainStart, len(logs))
@@ -400,16 +400,18 @@ func TestProcessETH2GenesisLog_CorrectNumOfDeposits(t *testing.T) {
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend.Client())
require.NoError(t, err)
web3Service.rpcClient = &mockExecution.RPCClient{Backend: testAcc.Backend}
web3Service.httpLogger = testAcc.Backend
web3Service.httpLogger = testAcc.Backend.Client()
web3Service.latestEth1Data.LastRequestedBlock = 0
web3Service.latestEth1Data.BlockHeight = testAcc.Backend.Blockchain().CurrentBlock().Number.Uint64()
web3Service.latestEth1Data.BlockTime = testAcc.Backend.Blockchain().CurrentBlock().Time
block, err := testAcc.Backend.Client().BlockByNumber(context.Background(), nil)
require.NoError(t, err)
web3Service.latestEth1Data.BlockHeight = block.NumberU64()
web3Service.latestEth1Data.BlockTime = block.Time()
bConfig := params.MinimalSpecConfig().Copy()
bConfig.MinGenesisTime = 0
bConfig.SecondsPerETH1Block = 10
bConfig.SecondsPerETH1Block = 1
params.OverrideBeaconConfig(bConfig)
nConfig := params.BeaconNetworkConfig()
nConfig.ContractDeploymentBlock = 0
@@ -444,8 +446,10 @@ func TestProcessETH2GenesisLog_CorrectNumOfDeposits(t *testing.T) {
for i := uint64(0); i < params.BeaconConfig().Eth1FollowDistance; i++ {
testAcc.Backend.Commit()
}
web3Service.latestEth1Data.BlockHeight = testAcc.Backend.Blockchain().CurrentBlock().Number.Uint64()
web3Service.latestEth1Data.BlockTime = testAcc.Backend.Blockchain().CurrentBlock().Time
b, err := testAcc.Backend.Client().BlockByNumber(context.Background(), nil)
require.NoError(t, err)
web3Service.latestEth1Data.BlockHeight = b.NumberU64()
web3Service.latestEth1Data.BlockTime = b.Time()
// Set up our subscriber now to listen for the chain started event.
stateChannel := make(chan *feed.Event, 1)
@@ -497,13 +501,15 @@ func TestProcessETH2GenesisLog_LargePeriodOfNoLogs(t *testing.T) {
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend.Client())
require.NoError(t, err)
web3Service.rpcClient = &mockExecution.RPCClient{Backend: testAcc.Backend}
web3Service.httpLogger = testAcc.Backend
web3Service.httpLogger = testAcc.Backend.Client()
web3Service.latestEth1Data.LastRequestedBlock = 0
web3Service.latestEth1Data.BlockHeight = testAcc.Backend.Blockchain().CurrentBlock().Number.Uint64()
web3Service.latestEth1Data.BlockTime = testAcc.Backend.Blockchain().CurrentBlock().Time
b, err := testAcc.Backend.Client().BlockByNumber(context.Background(), nil)
require.NoError(t, err)
web3Service.latestEth1Data.BlockHeight = b.NumberU64()
web3Service.latestEth1Data.BlockTime = b.Time()
bConfig := params.MinimalSpecConfig().Copy()
bConfig.SecondsPerETH1Block = 10
params.OverrideBeaconConfig(bConfig)
@@ -540,14 +546,19 @@ func TestProcessETH2GenesisLog_LargePeriodOfNoLogs(t *testing.T) {
for i := uint64(0); i < 1500; i++ {
testAcc.Backend.Commit()
}
wantedGenesisTime := testAcc.Backend.Blockchain().CurrentBlock().Time
genesisBlock, err := testAcc.Backend.Client().BlockByNumber(context.Background(), nil)
require.NoError(t, err)
wantedGenesisTime := genesisBlock.Time()
// Forward the chain to account for the follow distance
for i := uint64(0); i < params.BeaconConfig().Eth1FollowDistance; i++ {
testAcc.Backend.Commit()
}
web3Service.latestEth1Data.BlockHeight = testAcc.Backend.Blockchain().CurrentBlock().Number.Uint64()
web3Service.latestEth1Data.BlockTime = testAcc.Backend.Blockchain().CurrentBlock().Time
currBlock, err := testAcc.Backend.Client().BlockByNumber(context.Background(), nil)
require.NoError(t, err)
web3Service.latestEth1Data.BlockHeight = currBlock.NumberU64()
web3Service.latestEth1Data.BlockTime = currBlock.Time()
// Set the genesis time 500 blocks ahead of the last
// deposit log.
@@ -608,7 +619,7 @@ func newPowchainService(t *testing.T, eth1Backend *mock.TestAccount, beaconDB db
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(eth1Backend.ContractAddr, eth1Backend.Backend)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(eth1Backend.ContractAddr, eth1Backend.Backend.Client())
require.NoError(t, err)
web3Service.rpcClient = &mockExecution.RPCClient{Backend: eth1Backend.Backend}

View File

@@ -17,14 +17,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
type versioner struct {
version int
}
func (v versioner) Version() int {
return v.version
}
func payloadToBody(t *testing.T, ed interfaces.ExecutionData) *pb.ExecutionPayloadBody {
body := &pb.ExecutionPayloadBody{}
txs, err := ed.Transactions()
@@ -45,6 +37,7 @@ type blindedBlockFixtures struct {
emptyDenebBlock *fullAndBlinded
afterSkipDeneb *fullAndBlinded
electra *fullAndBlinded
fulu *fullAndBlinded
}
type fullAndBlinded struct {
@@ -77,6 +70,12 @@ func electraSlot(t *testing.T) primitives.Slot {
return s
}
func fuluSlot(t *testing.T) primitives.Slot {
s, err := slots.EpochStart(params.BeaconConfig().FuluForkEpoch)
require.NoError(t, err)
return s
}
func testBlindedBlockFixtures(t *testing.T) *blindedBlockFixtures {
pfx := fixturesStruct()
fx := &blindedBlockFixtures{}
@@ -104,11 +103,18 @@ func testBlindedBlockFixtures(t *testing.T) *blindedBlockFixtures {
electra.BlockNumber = 5
electraBlock, _ := util.GenerateTestElectraBlockWithSidecar(t, [32]byte{}, electraSlot(t), 0, util.WithElectraPayload(electra))
fx.electra = blindedBlockWithHeader(t, electraBlock)
fulu := fixturesStruct().ExecutionPayloadDeneb
fulu.BlockHash = bytesutil.PadTo([]byte("fulu"), 32)
fulu.BlockNumber = 6
fuluBlock, _ := util.GenerateTestElectraBlockWithSidecar(t, [32]byte{}, fuluSlot(t), 0, util.WithElectraPayload(fulu))
fx.fulu = blindedBlockWithHeader(t, fuluBlock)
return fx
}
func TestPayloadBodiesViaUnblinder(t *testing.T) {
defer util.HackElectraMaxuint(t)()
defer util.HackForksMaxuint(t, []int{version.Electra, version.Fulu})()
fx := testBlindedBlockFixtures(t)
t.Run("mix of non-empty and empty", func(t *testing.T) {
cli, srv := newMockEngine(t)
@@ -145,7 +151,7 @@ func TestPayloadBodiesViaUnblinder(t *testing.T) {
}
func TestFixtureEquivalence(t *testing.T) {
defer util.HackElectraMaxuint(t)()
defer util.HackForksMaxuint(t, []int{version.Electra, version.Fulu})()
fx := testBlindedBlockFixtures(t)
t.Run("full and blinded block equivalence", func(t *testing.T) {
testAssertReconstructedEquivalent(t, fx.denebBlock.blinded.block, fx.denebBlock.full)
@@ -248,7 +254,7 @@ func TestComputeRanges(t *testing.T) {
}
func TestReconstructBlindedBlockBatchFallbackToRange(t *testing.T) {
defer util.HackElectraMaxuint(t)()
defer util.HackForksMaxuint(t, []int{version.Electra, version.Fulu})()
ctx := context.Background()
t.Run("fallback fails", func(t *testing.T) {
cli, srv := newMockEngine(t)
@@ -333,18 +339,19 @@ func TestReconstructBlindedBlockBatchFallbackToRange(t *testing.T) {
})
}
func TestReconstructBlindedBlockBatchDenebAndElectra(t *testing.T) {
defer util.HackElectraMaxuint(t)()
t.Run("deneb and electra", func(t *testing.T) {
func TestReconstructBlindedBlockBatchDenebAndBeyond(t *testing.T) {
defer util.HackForksMaxuint(t, []int{version.Electra, version.Fulu})()
t.Run("deneb and beyond", func(t *testing.T) {
cli, srv := newMockEngine(t)
fx := testBlindedBlockFixtures(t)
srv.register(GetPayloadBodiesByHashV1, func(msg *jsonrpcMessage, w http.ResponseWriter, r *http.Request) {
executionPayloadBodies := []*pb.ExecutionPayloadBody{payloadToBody(t, fx.denebBlock.blinded.header), payloadToBody(t, fx.electra.blinded.header)}
executionPayloadBodies := []*pb.ExecutionPayloadBody{payloadToBody(t, fx.denebBlock.blinded.header), payloadToBody(t, fx.electra.blinded.header), payloadToBody(t, fx.fulu.blinded.header)}
mockWriteResult(t, w, msg, executionPayloadBodies)
})
blinded := []interfaces.ReadOnlySignedBeaconBlock{
fx.denebBlock.blinded.block,
fx.electra.blinded.block,
fx.fulu.blinded.block,
}
unblinded, err := reconstructBlindedBlockBatch(context.Background(), cli, blinded)
require.NoError(t, err)

View File

@@ -8,10 +8,10 @@ import (
"time"
"github.com/ethereum/go-ethereum"
"github.com/ethereum/go-ethereum/accounts/abi/bind/backends"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
gethTypes "github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/ethclient/simulated"
"github.com/ethereum/go-ethereum/rpc"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/async/event"
@@ -44,7 +44,7 @@ var _ POWBlockFetcher = (*Service)(nil)
var _ Chain = (*Service)(nil)
type goodLogger struct {
backend *backends.SimulatedBackend
backend *simulated.Backend
}
func (_ *goodLogger) Close() {}
@@ -53,7 +53,7 @@ func (g *goodLogger) SubscribeFilterLogs(ctx context.Context, q ethereum.FilterQ
if g.backend == nil {
return new(event.Feed).Subscribe(ch), nil
}
return g.backend.SubscribeFilterLogs(ctx, q, ch)
return g.backend.Client().SubscribeFilterLogs(ctx, q, ch)
}
func (g *goodLogger) FilterLogs(ctx context.Context, q ethereum.FilterQuery) ([]gethTypes.Log, error) {
@@ -69,7 +69,7 @@ func (g *goodLogger) FilterLogs(ctx context.Context, q ethereum.FilterQuery) ([]
}
return logs, nil
}
return g.backend.FilterLogs(ctx, q)
return g.backend.Client().FilterLogs(ctx, q)
}
type goodNotifier struct {
@@ -109,7 +109,7 @@ func TestStart_OK(t *testing.T) {
require.NoError(t, err, "unable to setup execution service")
web3Service = setDefaultMocks(web3Service)
web3Service.rpcClient = &mockExecution.RPCClient{Backend: testAcc.Backend}
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend.Client())
require.NoError(t, err)
testAcc.Backend.Commit()
@@ -156,7 +156,7 @@ func TestStop_OK(t *testing.T) {
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend.Client())
require.NoError(t, err)
testAcc.Backend.Commit()
@@ -186,10 +186,12 @@ func TestService_Eth1Synced(t *testing.T) {
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service = setDefaultMocks(web3Service)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend.Client())
require.NoError(t, err)
currTime := testAcc.Backend.Blockchain().CurrentHeader().Time
header, err := testAcc.Backend.Client().HeaderByNumber(context.Background(), nil)
require.NoError(t, err)
currTime := header.Time
now := time.Now()
assert.NoError(t, testAcc.Backend.AdjustTime(now.Sub(time.Unix(int64(currTime), 0))))
testAcc.Backend.Commit()
@@ -212,22 +214,29 @@ func TestFollowBlock_OK(t *testing.T) {
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
// simulated backend sets eth1 block
// time as 10 seconds
params.SetupTestConfigCleanup(t)
conf := params.BeaconConfig().Copy()
conf.SecondsPerETH1Block = 10
conf.SecondsPerETH1Block = 1
params.OverrideBeaconConfig(conf)
web3Service = setDefaultMocks(web3Service)
web3Service.rpcClient = &mockExecution.RPCClient{Backend: testAcc.Backend}
baseHeight := testAcc.Backend.Blockchain().CurrentBlock().Number.Uint64()
block, err := testAcc.Backend.Client().BlockByNumber(context.Background(), nil)
require.NoError(t, err)
baseHeight := block.NumberU64()
// process follow_distance blocks
var lastHash common.Hash
for i := 0; i < int(params.BeaconConfig().Eth1FollowDistance); i++ {
testAcc.Backend.Commit()
lastHash = testAcc.Backend.Commit()
}
lb, err := testAcc.Backend.Client().BlockByHash(context.Background(), lastHash)
require.NoError(t, err)
log.Println(lb.NumberU64())
// set current height
web3Service.latestEth1Data.BlockHeight = testAcc.Backend.Blockchain().CurrentBlock().Number.Uint64()
web3Service.latestEth1Data.BlockTime = testAcc.Backend.Blockchain().CurrentBlock().Time
block, err = testAcc.Backend.Client().BlockByNumber(context.Background(), nil)
require.NoError(t, err)
web3Service.latestEth1Data.BlockHeight = block.NumberU64()
web3Service.latestEth1Data.BlockTime = block.Time()
h, err := web3Service.followedBlockHeight(context.Background())
require.NoError(t, err)
@@ -238,9 +247,12 @@ func TestFollowBlock_OK(t *testing.T) {
for i := uint64(0); i < numToForward; i++ {
testAcc.Backend.Commit()
}
newBlock, err := testAcc.Backend.Client().BlockByNumber(context.Background(), nil)
require.NoError(t, err)
// set current height
web3Service.latestEth1Data.BlockHeight = testAcc.Backend.Blockchain().CurrentBlock().Number.Uint64()
web3Service.latestEth1Data.BlockTime = testAcc.Backend.Blockchain().CurrentBlock().Time
web3Service.latestEth1Data.BlockHeight = newBlock.NumberU64()
web3Service.latestEth1Data.BlockTime = newBlock.Time()
h, err = web3Service.followedBlockHeight(context.Background())
require.NoError(t, err)
@@ -325,11 +337,11 @@ func TestLogTillGenesis_OK(t *testing.T) {
WithDatabase(beaconDB),
)
require.NoError(t, err, "unable to setup web3 ETH1.0 chain service")
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend)
web3Service.depositContractCaller, err = contracts.NewDepositContractCaller(testAcc.ContractAddr, testAcc.Backend.Client())
require.NoError(t, err)
web3Service.rpcClient = &mockExecution.RPCClient{Backend: testAcc.Backend}
web3Service.httpLogger = testAcc.Backend
web3Service.httpLogger = testAcc.Backend.Client()
for i := 0; i < 30; i++ {
testAcc.Backend.Commit()
}
@@ -485,15 +497,18 @@ func TestNewService_EarliestVotingBlock(t *testing.T) {
for i := 0; i < numToForward; i++ {
testAcc.Backend.Commit()
}
currTime := testAcc.Backend.Blockchain().CurrentHeader().Time
currHeader, err := testAcc.Backend.Client().HeaderByNumber(context.Background(), nil)
require.NoError(t, err)
currTime := currHeader.Time
now := time.Now()
err = testAcc.Backend.AdjustTime(now.Sub(time.Unix(int64(currTime), 0)))
require.NoError(t, err)
testAcc.Backend.Commit()
currTime = testAcc.Backend.Blockchain().CurrentHeader().Time
web3Service.latestEth1Data.BlockHeight = testAcc.Backend.Blockchain().CurrentHeader().Number.Uint64()
web3Service.latestEth1Data.BlockTime = testAcc.Backend.Blockchain().CurrentHeader().Time
currHeader, err = testAcc.Backend.Client().HeaderByNumber(context.Background(), nil)
require.NoError(t, err)
currTime = currHeader.Time
web3Service.latestEth1Data.BlockHeight = currHeader.Number.Uint64()
web3Service.latestEth1Data.BlockTime = currHeader.Time
web3Service.chainStartData.GenesisTime = currTime
// With a current slot of zero, only request follow_blocks behind.

View File

@@ -25,10 +25,10 @@ go_library(
"//encoding/bytesutil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_ethereum_go_ethereum//accounts/abi/bind/backends:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_ethereum_go_ethereum//core/types:go_default_library",
"@com_github_ethereum_go_ethereum//ethclient/simulated:go_default_library",
"@com_github_ethereum_go_ethereum//rpc:go_default_library",
"@com_github_holiman_uint256//:go_default_library",
"@com_github_pkg_errors//:go_default_library",

View File

@@ -9,10 +9,10 @@ import (
"net/http/httptest"
"time"
"github.com/ethereum/go-ethereum/accounts/abi/bind/backends"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
gethTypes "github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/ethclient/simulated"
"github.com/ethereum/go-ethereum/rpc"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/async/event"
@@ -141,7 +141,7 @@ func (m *Chain) ETH1ConnectionErrors() []error {
// RPCClient defines the mock rpc client.
type RPCClient struct {
Backend *backends.SimulatedBackend
Backend *simulated.Backend
BlockNumMap map[uint64]*types.HeaderInfo
}
@@ -195,7 +195,7 @@ func (r *RPCClient) CallContext(ctx context.Context, obj interface{}, methodName
return err
}
}
h, err := r.Backend.HeaderByNumber(ctx, num)
h, err := r.Backend.Client().HeaderByNumber(ctx, num)
if err != nil {
return err
}
@@ -213,7 +213,7 @@ func (r *RPCClient) CallContext(ctx context.Context, obj interface{}, methodName
if !ok {
return errors.Errorf("wrong argument type provided: %T", args[0])
}
h, err := r.Backend.HeaderByHash(ctx, val)
h, err := r.Backend.Client().HeaderByHash(ctx, val)
if err != nil {
return err
}
@@ -241,7 +241,7 @@ func (r *RPCClient) BatchCall(b []rpc.BatchElem) error {
if err != nil {
return err
}
h, err := r.Backend.HeaderByNumber(context.Background(), num)
h, err := r.Backend.Client().HeaderByNumber(context.Background(), num)
if err != nil {
return err
}

View File

@@ -236,7 +236,7 @@ func (c *AttCaches) AggregatedAttestationsBySlotIndexElectra(
c.aggregatedAttLock.RLock()
defer c.aggregatedAttLock.RUnlock()
for _, as := range c.aggregatedAtt {
if as[0].Version() == version.Electra && slot == as[0].GetData().Slot && as[0].CommitteeBitsVal().BitAt(uint64(committeeIndex)) {
if as[0].Version() >= version.Electra && slot == as[0].GetData().Slot && as[0].CommitteeBitsVal().BitAt(uint64(committeeIndex)) {
for _, a := range as {
att, ok := a.(*ethpb.AttestationElectra)
// This will never fail in practice because we asserted the version

View File

@@ -115,7 +115,7 @@ func (c *AttCaches) UnaggregatedAttestationsBySlotIndexElectra(
unAggregatedAtts := c.unAggregatedAtt
for _, a := range unAggregatedAtts {
if a.Version() == version.Electra && slot == a.GetData().Slot && a.CommitteeBitsVal().BitAt(uint64(committeeIndex)) {
if a.Version() >= version.Electra && slot == a.GetData().Slot && a.CommitteeBitsVal().BitAt(uint64(committeeIndex)) {
att, ok := a.(*ethpb.AttestationElectra)
// This will never fail in practice because we asserted the version
if ok {

View File

@@ -169,7 +169,7 @@ func (s *Service) broadcastSyncCommittee(ctx context.Context, subnet uint64, sMs
// Ensure we have peers with this subnet.
// This adds in a special value to the subnet
// to ensure that we can re-use the same subnet locker.
// to ensure that we can reuse the same subnet locker.
wrappedSubIdx := subnet + syncLockerVal
s.subnetLocker(wrappedSubIdx).RLock()
hasPeer := s.hasPeerWithSubnet(syncCommitteeToTopic(subnet, forkDigest))

View File

@@ -213,7 +213,7 @@ func TestService_BroadcastAttestation(t *testing.T) {
func TestService_BroadcastAttestationWithDiscoveryAttempts(t *testing.T) {
// Setup bootnode.
cfg := &Config{}
cfg := &Config{PingInterval: testPingInterval}
port := 2000
cfg.UDPPort = uint(port)
_, pkey := createAddrAndPrivKey(t)
@@ -238,12 +238,16 @@ func TestService_BroadcastAttestationWithDiscoveryAttempts(t *testing.T) {
cfg = &Config{
Discv5BootStrapAddrs: []string{bootNode.String()},
MaxPeers: 2,
PingInterval: testPingInterval,
}
// Setup 2 different hosts
for i := 1; i <= 2; i++ {
h, pkey, ipAddr := createHost(t, port+i)
cfg.UDPPort = uint(port + i)
cfg.TCPPort = uint(port + i)
if len(listeners) > 0 {
cfg.Discv5BootStrapAddrs = append(cfg.Discv5BootStrapAddrs, listeners[len(listeners)-1].Self().String())
}
s := &Service{
cfg: cfg,
genesisTime: genesisTime,

View File

@@ -1,6 +1,8 @@
package p2p
import (
"time"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/startup"
@@ -15,6 +17,7 @@ type Config struct {
NoDiscovery bool
EnableUPnP bool
StaticPeerID bool
DisableLivenessCheck bool
StaticPeers []string
Discv5BootStrapAddrs []string
RelayNodeAddr string
@@ -27,6 +30,7 @@ type Config struct {
QUICPort uint
TCPPort uint
UDPPort uint
PingInterval time.Duration
MaxPeers uint
QueueSize uint
AllowListCIDR string

View File

@@ -9,43 +9,44 @@ import (
"github.com/prysmaticlabs/prysm/v5/config/params"
)
// DataColumnsAdmissibleCustodyPeers returns a list of peers that custody a super set of the local node's custody columns.
func (s *Service) DataColumnsAdmissibleCustodyPeers(peers []peer.ID) ([]peer.ID, error) {
localCustodySubnetCount := peerdas.CustodySubnetCount()
return s.dataColumnsAdmissiblePeers(peers, localCustodySubnetCount)
// AdmissibleCustodyGroupsPeers returns a list of peers that custody a super set of the local node's custody groups.
func (s *Service) AdmissibleCustodyGroupsPeers(peers []peer.ID) ([]peer.ID, error) {
localCustodyGroupCount := peerdas.CustodyGroupCount()
return s.custodyGroupsAdmissiblePeers(peers, localCustodyGroupCount)
}
// DataColumnsAdmissibleSubnetSamplingPeers returns a list of peers that custody a super set of the local node's sampling columns.
func (s *Service) DataColumnsAdmissibleSubnetSamplingPeers(peers []peer.ID) ([]peer.ID, error) {
localSubnetSamplingSize := peerdas.SubnetSamplingSize()
return s.dataColumnsAdmissiblePeers(peers, localSubnetSamplingSize)
// AdmissibleCustodySamplingPeers returns a list of peers that custody a super set of the local node's sampling columns.
func (s *Service) AdmissibleCustodySamplingPeers(peers []peer.ID) ([]peer.ID, error) {
localSubnetSamplingSize := peerdas.CustodyGroupSamplingSize()
return s.custodyGroupsAdmissiblePeers(peers, localSubnetSamplingSize)
}
// dataColumnsAdmissiblePeers computes the first columns of the local node corresponding to `subnetCount`, then
// filters out `peers` that do not custody a super set of these columns.
func (s *Service) dataColumnsAdmissiblePeers(peers []peer.ID, subnetCount uint64) ([]peer.ID, error) {
// Get the total number of columns.
numberOfColumns := params.BeaconConfig().NumberOfColumns
// custodyGroupsAdmissiblePeers filters out `peers` that do not custody a super set of our own custody groups.
func (s *Service) custodyGroupsAdmissiblePeers(peers []peer.ID, custodyGroupCount uint64) ([]peer.ID, error) {
// Get the total number of custody groups.
numberOfCustodyGroups := params.BeaconConfig().NumberOfCustodyGroups
// Retrieve the local node ID.
localNodeId := s.NodeID()
// Retrieve the needed columns.
neededColumns, err := peerdas.CustodyColumns(localNodeId, subnetCount)
// Retrieve the needed custody groups.
neededCustodyGroups, err := peerdas.CustodyGroups(localNodeId, custodyGroupCount)
if err != nil {
return nil, errors.Wrap(err, "custody columns for local node")
return nil, errors.Wrap(err, "custody groups")
}
// Get the number of needed columns.
localneededColumnsCount := uint64(len(neededColumns))
// Find the valid peers.
validPeers := make([]peer.ID, 0, len(peers))
loop:
for _, pid := range peers {
// Get the custody subnets count of the remote peer.
remoteCustodySubnetCount := s.DataColumnsCustodyCountFromRemotePeer(pid)
// Get the custody group count of the remote peer.
remoteCustodyGroupCount := s.CustodyGroupCountFromPeer(pid)
// If the remote peer custodies less groups than we do, skip it.
if remoteCustodyGroupCount < custodyGroupCount {
continue
}
// Get the remote node ID from the peer ID.
remoteNodeID, err := ConvertPeerIDToNodeID(pid)
@@ -53,44 +54,39 @@ loop:
return nil, errors.Wrap(err, "convert peer ID to node ID")
}
// Get the custody columns of the remote peer.
remoteCustodyColumns, err := peerdas.CustodyColumns(remoteNodeID, remoteCustodySubnetCount)
// Get the custody groups of the remote peer.
remoteCustodyGroups, err := peerdas.CustodyGroups(remoteNodeID, remoteCustodyGroupCount)
if err != nil {
return nil, errors.Wrap(err, "custody columns")
return nil, errors.Wrap(err, "custody groups")
}
remoteCustodyColumnsCount := uint64(len(remoteCustodyColumns))
// If the remote peer custodies less columns than the local node needs, skip it.
if remoteCustodyColumnsCount < localneededColumnsCount {
continue
}
remoteCustodyGroupsCount := uint64(len(remoteCustodyGroups))
// If the remote peers custodies all the possible columns, add it to the list.
if remoteCustodyColumnsCount == numberOfColumns {
copiedId := pid
validPeers = append(validPeers, copiedId)
if remoteCustodyGroupsCount == numberOfCustodyGroups {
validPeers = append(validPeers, pid)
continue
}
// Filter out invalid peers.
for c := range neededColumns {
if !remoteCustodyColumns[c] {
for custodyGroup := range neededCustodyGroups {
if !remoteCustodyGroups[custodyGroup] {
continue loop
}
}
copiedId := pid
// Add valid peer to list
validPeers = append(validPeers, copiedId)
validPeers = append(validPeers, pid)
}
return validPeers, nil
}
func (s *Service) custodyCountFromRemotePeerEnr(pid peer.ID) uint64 {
// By default, we assume the peer custodies the minimum number of subnets.
// custodyGroupCountFromPeerENR retrieves the custody count from the peer ENR.
// If the ENR is not available, it defaults to the minimum number of custody groups
// an honest node custodies and serves samples from.
func (s *Service) custodyGroupCountFromPeerENR(pid peer.ID) uint64 {
// By default, we assume the peer custodies the minimum number of groups.
custodyRequirement := params.BeaconConfig().CustodyRequirement
// Retrieve the ENR of the peer.
@@ -104,42 +100,47 @@ func (s *Service) custodyCountFromRemotePeerEnr(pid peer.ID) uint64 {
return custodyRequirement
}
// Retrieve the custody subnets count from the ENR.
custodyCount, err := peerdas.CustodyCountFromRecord(record)
// Retrieve the custody group count from the ENR.
custodyGroupCount, err := peerdas.CustodyGroupCountFromRecord(record)
if err != nil {
log.WithError(err).WithFields(logrus.Fields{
"peerID": pid,
"defaultValue": custodyRequirement,
}).Debug("Failed to retrieve custody count from ENR for peer, defaulting to the default value")
}).Debug("Failed to retrieve custody group count from ENR for peer, defaulting to the default value")
return custodyRequirement
}
return custodyCount
return custodyGroupCount
}
// DataColumnsCustodyCountFromRemotePeer retrieves the custody count from a remote peer.
func (s *Service) DataColumnsCustodyCountFromRemotePeer(pid peer.ID) uint64 {
// Try to get the custody count from the peer's metadata.
// CustodyGroupCountFromPeer retrieves custody group count from a peer.
// It first tries to get the custody group count from the peer's metadata,
// then falls back to the ENR value if the metadata is not available, then
// falls back to the minimum number of custody groups an honest node should custodiy
// and serve samples from if ENR is not available.
func (s *Service) CustodyGroupCountFromPeer(pid peer.ID) uint64 {
// Try to get the custody group count from the peer's metadata.
metadata, err := s.peers.Metadata(pid)
if err != nil {
// On error, default to the ENR value.
log.WithError(err).WithField("peerID", pid).Debug("Failed to retrieve metadata for peer, defaulting to the ENR value")
return s.custodyCountFromRemotePeerEnr(pid)
return s.custodyGroupCountFromPeerENR(pid)
}
// If the metadata is nil, default to the ENR value.
if metadata == nil {
log.WithField("peerID", pid).Debug("Metadata is nil, defaulting to the ENR value")
return s.custodyCountFromRemotePeerEnr(pid)
return s.custodyGroupCountFromPeerENR(pid)
}
// Get the custody subnets count from the metadata.
custodyCount := metadata.CustodySubnetCount()
custodyCount := metadata.CustodyGroupCount()
// If the custody count is null, default to the ENR value.
if custodyCount == 0 {
log.WithField("peerID", pid).Debug("The custody count extracted from the metadata equals to 0, defaulting to the ENR value")
return s.custodyCountFromRemotePeerEnr(pid)
return s.custodyGroupCountFromPeerENR(pid)
}
return custodyCount

View File

@@ -41,13 +41,13 @@ func createPeer(t *testing.T, privateKeyOffset int, custodyCount uint64) (*enr.R
require.NoError(t, err)
record := &enr.Record{}
record.Set(peerdas.Csc(custodyCount))
record.Set(peerdas.Cgc(custodyCount))
record.Set(enode.Secp256k1(privateKey.PublicKey))
return record, peerID, privateKey
}
func TestDataColumnsAdmissibleCustodyPeers(t *testing.T) {
func TestAdmissibleCustodyGroupsPeers(t *testing.T) {
genesisValidatorRoot := make([]byte, 32)
for i := 0; i < 32; i++ {
@@ -70,18 +70,18 @@ func TestDataColumnsAdmissibleCustodyPeers(t *testing.T) {
custodyRequirement := params.BeaconConfig().CustodyRequirement
dataColumnSidecarSubnetCount := params.BeaconConfig().DataColumnSidecarSubnetCount
// Peer 1 custodies exactly the same columns than us.
// Peer 1 custodies exactly the same groups than us.
// (We use the same keys pair than ours for simplicity)
peer1Record, peer1ID, localPrivateKey := createPeer(t, 1, custodyRequirement)
// Peer 2 custodies all the columns.
// Peer 2 custodies all the groups.
peer2Record, peer2ID, _ := createPeer(t, 2, dataColumnSidecarSubnetCount)
// Peer 3 custodies different columns than us (but the same count).
// Peer 3 custodies different groups than us (but the same count).
// (We use the same public key than peer 2 for simplicity)
peer3Record, peer3ID, _ := createPeer(t, 3, custodyRequirement)
// Peer 4 custodies less columns than us.
// Peer 4 custodies less groups than us.
peer4Record, peer4ID, _ := createPeer(t, 4, custodyRequirement-1)
createListener := func() (*discover.UDPv5, error) {
@@ -98,40 +98,40 @@ func TestDataColumnsAdmissibleCustodyPeers(t *testing.T) {
service.peers.Add(peer3Record, peer3ID, nil, network.DirOutbound)
service.peers.Add(peer4Record, peer4ID, nil, network.DirOutbound)
actual, err := service.DataColumnsAdmissibleCustodyPeers([]peer.ID{peer1ID, peer2ID, peer3ID, peer4ID})
actual, err := service.AdmissibleCustodyGroupsPeers([]peer.ID{peer1ID, peer2ID, peer3ID, peer4ID})
require.NoError(t, err)
expected := []peer.ID{peer1ID, peer2ID}
require.DeepSSZEqual(t, expected, actual)
}
func TestDataColumnsCustodyCountFromRemotePeer(t *testing.T) {
func TestCustodyGroupCountFromPeer(t *testing.T) {
const (
expectedENR uint64 = 7
expectedMetadata uint64 = 8
pid = "test-id"
)
csc := peerdas.Csc(expectedENR)
cgc := peerdas.Cgc(expectedENR)
// Define a nil record
var nilRecord *enr.Record = nil
// Define an empty record (record with non `csc` entry)
// Define an empty record (record with non `cgc` entry)
emptyRecord := &enr.Record{}
// Define a nominal record
nominalRecord := &enr.Record{}
nominalRecord.Set(csc)
nominalRecord.Set(cgc)
// Define a metadata with zero custody.
zeroMetadata := wrapper.WrappedMetadataV2(&pb.MetaDataV2{
CustodySubnetCount: 0,
CustodyGroupCount: 0,
})
// Define a nominal metadata.
nominalMetadata := wrapper.WrappedMetadataV2(&pb.MetaDataV2{
CustodySubnetCount: expectedMetadata,
CustodyGroupCount: expectedMetadata,
})
testCases := []struct {
@@ -191,7 +191,7 @@ func TestDataColumnsCustodyCountFromRemotePeer(t *testing.T) {
}
// Retrieve the custody count from the remote peer.
actual := service.DataColumnsCustodyCountFromRemotePeer(pid)
actual := service.CustodyGroupCountFromPeer(pid)
// Verify the result.
require.Equal(t, tc.expected, actual)

View File

@@ -228,10 +228,10 @@ func (s *Service) RefreshPersistentSubnets() {
isBitSUpToDate := bytes.Equal(bitS, inRecordBitS) && bytes.Equal(bitS, currentBitSInMetadata)
// Compare current epoch with the Electra fork epoch.
electraForkEpoch := params.BeaconConfig().ElectraForkEpoch
// Compare current epoch with the Fulu fork epoch.
fuluForkEpoch := params.BeaconConfig().FuluForkEpoch
if currentEpoch < electraForkEpoch {
if currentEpoch < fuluForkEpoch {
// Altair behaviour.
if metadataVersion == version.Altair && isBitVUpToDate && isBitSUpToDate {
// Nothing to do, return early.
@@ -247,28 +247,28 @@ func (s *Service) RefreshPersistentSubnets() {
return
}
// Get the current custody subnet count.
custodySubnetCount := peerdas.CustodySubnetCount()
// Get the current custody group count.
custodyGroupCount := peerdas.CustodyGroupCount()
// Get the custody subnet count we store in our record.
inRecordCustodySubnetCount, err := peerdas.CustodyCountFromRecord(record)
// Get the custody group count we store in our record.
inRecordCustodyGroupCount, err := peerdas.CustodyGroupCountFromRecord(record)
if err != nil {
log.WithError(err).Error("Could not retrieve custody subnet count")
return
}
// Get the custody subnet count in our metadata.
inMetadataCustodySubnetCount := s.Metadata().CustodySubnetCount()
// Get the custody group count in our metadata.
inMetadataCustodyGroupCount := s.Metadata().CustodyGroupCount()
isCustodySubnetCountUpToDate := (custodySubnetCount == inRecordCustodySubnetCount && custodySubnetCount == inMetadataCustodySubnetCount)
isCustodyGroupCountUpToDate := (custodyGroupCount == inRecordCustodyGroupCount && custodyGroupCount == inMetadataCustodyGroupCount)
if isBitVUpToDate && isBitSUpToDate && isCustodySubnetCountUpToDate {
if isBitVUpToDate && isBitSUpToDate && isCustodyGroupCountUpToDate {
// Nothing to do, return early.
return
}
// Some data changed. Update the record and the metadata.
s.updateSubnetRecordWithMetadataV3(bitV, bitS, custodySubnetCount)
s.updateSubnetRecordWithMetadataV3(bitV, bitS, custodyGroupCount)
// Ping all peers.
s.pingPeersAndLogEnr()
@@ -458,8 +458,10 @@ func (s *Service) createListener(
}
dv5Cfg := discover.Config{
PrivateKey: privKey,
Bootnodes: bootNodes,
PrivateKey: privKey,
Bootnodes: bootNodes,
PingInterval: s.cfg.PingInterval,
NoFindnodeLivenessCheck: s.cfg.DisableLivenessCheck,
}
listener, err := discover.ListenV5(conn, localNode, dv5Cfg)
@@ -495,9 +497,9 @@ func (s *Service) createLocalNode(
localNode.Set(quicEntry)
}
if params.PeerDASEnabled() {
custodySubnetCount := peerdas.CustodySubnetCount()
localNode.Set(peerdas.Csc(custodySubnetCount))
if params.FuluEnabled() {
custodyGroupCount := peerdas.CustodyGroupCount()
localNode.Set(peerdas.Cgc(custodyGroupCount))
}
localNode.SetFallbackIP(ipAddr)

View File

@@ -88,7 +88,7 @@ func TestStartDiscV5_DiscoverAllPeers(t *testing.T) {
genesisTime := time.Now()
genesisValidatorsRoot := make([]byte, 32)
s := &Service{
cfg: &Config{UDPPort: uint(port)},
cfg: &Config{UDPPort: uint(port), PingInterval: testPingInterval, DisableLivenessCheck: true},
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
}
@@ -96,6 +96,10 @@ func TestStartDiscV5_DiscoverAllPeers(t *testing.T) {
require.NoError(t, err)
defer bootListener.Close()
// Allow bootnode's table to have its initial refresh. This allows
// inbound nodes to be added in.
time.Sleep(5 * time.Second)
bootNode := bootListener.Self()
var listeners []*listenerWrapper
@@ -104,6 +108,8 @@ func TestStartDiscV5_DiscoverAllPeers(t *testing.T) {
cfg := &Config{
Discv5BootStrapAddrs: []string{bootNode.String()},
UDPPort: uint(port),
PingInterval: testPingInterval,
DisableLivenessCheck: true,
}
ipAddr, pkey := createAddrAndPrivKey(t)
s = &Service{
@@ -135,9 +141,15 @@ func TestStartDiscV5_DiscoverAllPeers(t *testing.T) {
func TestCreateLocalNode(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.ElectraForkEpoch = 1
params.OverrideBeaconConfig(cfg)
// Set the fulu fork epoch to something other than the far future epoch.
initFuluForkEpoch := params.BeaconConfig().FuluForkEpoch
params.BeaconConfig().FuluForkEpoch = 42
defer func() {
params.BeaconConfig().FuluForkEpoch = initFuluForkEpoch
}()
testCases := []struct {
name string
cfg *Config
@@ -169,6 +181,7 @@ func TestCreateLocalNode(t *testing.T) {
expectedError: false,
},
}
for _, tt := range testCases {
t.Run(tt.name, func(t *testing.T) {
// Define ports.
@@ -235,10 +248,10 @@ func TestCreateLocalNode(t *testing.T) {
require.NoError(t, localNode.Node().Record().Load(enr.WithEntry(syncCommsSubnetEnrKey, syncSubnets)))
require.DeepSSZEqual(t, []byte{0}, *syncSubnets)
// Check custody_subnet_count config.
custodySubnetCount := new(uint64)
require.NoError(t, localNode.Node().Record().Load(enr.WithEntry(peerdas.CustodySubnetCountEnrKey, custodySubnetCount)))
require.Equal(t, params.BeaconConfig().CustodyRequirement, *custodySubnetCount)
// Check cgc config.
custodyGroupCount := new(uint64)
require.NoError(t, localNode.Node().Record().Load(enr.WithEntry(peerdas.CustodyGroupCountEnrKey, custodyGroupCount)))
require.Equal(t, params.BeaconConfig().CustodyRequirement, *custodyGroupCount)
})
}
}
@@ -357,10 +370,10 @@ func TestStaticPeering_PeersAreAdded(t *testing.T) {
}
func TestHostIsResolved(t *testing.T) {
// As defined in RFC 2606 , example.org is a
// reserved example domain name.
exampleHost := "example.org"
exampleIP := "93.184.215.14"
// ip.addr.tools - construct domain names that resolve to any given IP address
// ex: 192-0-2-1.ip.addr.tools resolves to 192.0.2.1.
exampleHost := "96-7-129-13.ip.addr.tools"
exampleIP := "96.7.129.13"
s := &Service{
cfg: &Config{
@@ -538,7 +551,7 @@ type check struct {
metadataSequenceNumber uint64
attestationSubnets []uint64
syncSubnets []uint64
custodySubnetCount *uint64
custodyGroupCount *uint64
}
func checkPingCountCacheMetadataRecord(
@@ -605,16 +618,16 @@ func checkPingCountCacheMetadataRecord(
require.DeepSSZEqual(t, expectedBitS, actualBitSMetadata)
}
if expected.custodySubnetCount != nil {
if expected.custodyGroupCount != nil {
// Check custody subnet count in ENR.
var actualCustodySubnetCount uint64
err := service.dv5Listener.LocalNode().Node().Record().Load(enr.WithEntry(peerdas.CustodySubnetCountEnrKey, &actualCustodySubnetCount))
var actualCustodyGroupCount uint64
err := service.dv5Listener.LocalNode().Node().Record().Load(enr.WithEntry(peerdas.CustodyGroupCountEnrKey, &actualCustodyGroupCount))
require.NoError(t, err)
require.Equal(t, *expected.custodySubnetCount, actualCustodySubnetCount)
require.Equal(t, *expected.custodyGroupCount, actualCustodyGroupCount)
// Check custody subnet count in metadata.
actualCustodySubnetCountMetadata := service.metaData.CustodySubnetCount()
require.Equal(t, *expected.custodySubnetCount, actualCustodySubnetCountMetadata)
actualGroupCountMetadata := service.metaData.CustodyGroupCount()
require.Equal(t, *expected.custodyGroupCount, actualGroupCountMetadata)
}
}
@@ -626,17 +639,17 @@ func TestRefreshPersistentSubnets(t *testing.T) {
defer cache.SyncSubnetIDs.EmptyAllCaches()
const (
altairForkEpoch = 5
electraForkEpoch = 10
altairForkEpoch = 5
fuluForkEpoch = 10
)
custodySubnetCount := params.BeaconConfig().CustodyRequirement
custodyGroupCount := params.BeaconConfig().CustodyRequirement
// Set up epochs.
defaultCfg := params.BeaconConfig()
cfg := defaultCfg.Copy()
cfg.AltairForkEpoch = altairForkEpoch
cfg.ElectraForkEpoch = electraForkEpoch
cfg.FuluForkEpoch = fuluForkEpoch
params.OverrideBeaconConfig(cfg)
// Compute the number of seconds per epoch.
@@ -706,8 +719,8 @@ func TestRefreshPersistentSubnets(t *testing.T) {
},
},
{
name: "PeerDAS",
epochSinceGenesis: electraForkEpoch,
name: "Fulu",
epochSinceGenesis: fuluForkEpoch,
checks: []check{
{
pingCount: 0,
@@ -720,21 +733,21 @@ func TestRefreshPersistentSubnets(t *testing.T) {
metadataSequenceNumber: 1,
attestationSubnets: []uint64{40, 41},
syncSubnets: nil,
custodySubnetCount: &custodySubnetCount,
custodyGroupCount: &custodyGroupCount,
},
{
pingCount: 2,
metadataSequenceNumber: 2,
attestationSubnets: []uint64{40, 41},
syncSubnets: []uint64{1, 2},
custodySubnetCount: &custodySubnetCount,
custodyGroupCount: &custodyGroupCount,
},
{
pingCount: 2,
metadataSequenceNumber: 2,
attestationSubnets: []uint64{40, 41},
syncSubnets: []uint64{1, 2},
custodySubnetCount: &custodySubnetCount,
custodyGroupCount: &custodyGroupCount,
},
},
},

View File

@@ -34,8 +34,10 @@ func TestStartDiscv5_DifferentForkDigests(t *testing.T) {
genesisValidatorsRoot := make([]byte, fieldparams.RootLength)
s := &Service{
cfg: &Config{
UDPPort: uint(port),
StateNotifier: &mock.MockStateNotifier{},
UDPPort: uint(port),
StateNotifier: &mock.MockStateNotifier{},
PingInterval: testPingInterval,
DisableLivenessCheck: true,
},
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
@@ -44,11 +46,17 @@ func TestStartDiscv5_DifferentForkDigests(t *testing.T) {
require.NoError(t, err)
defer bootListener.Close()
// Allow bootnode's table to have its initial refresh. This allows
// inbound nodes to be added in.
time.Sleep(5 * time.Second)
bootNode := bootListener.Self()
cfg := &Config{
Discv5BootStrapAddrs: []string{bootNode.String()},
UDPPort: uint(port),
StateNotifier: &mock.MockStateNotifier{},
PingInterval: testPingInterval,
DisableLivenessCheck: true,
}
var listeners []*listenerWrapper
@@ -124,7 +132,7 @@ func TestStartDiscv5_SameForkDigests_DifferentNextForkData(t *testing.T) {
genesisTime := time.Now()
genesisValidatorsRoot := make([]byte, 32)
s := &Service{
cfg: &Config{UDPPort: uint(port)},
cfg: &Config{UDPPort: uint(port), PingInterval: testPingInterval, DisableLivenessCheck: true},
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
}
@@ -132,10 +140,16 @@ func TestStartDiscv5_SameForkDigests_DifferentNextForkData(t *testing.T) {
require.NoError(t, err)
defer bootListener.Close()
// Allow bootnode's table to have its initial refresh. This allows
// inbound nodes to be added in.
time.Sleep(5 * time.Second)
bootNode := bootListener.Self()
cfg := &Config{
Discv5BootStrapAddrs: []string{bootNode.String()},
UDPPort: uint(port),
PingInterval: testPingInterval,
DisableLivenessCheck: true,
}
var listeners []*listenerWrapper
@@ -143,7 +157,6 @@ func TestStartDiscv5_SameForkDigests_DifferentNextForkData(t *testing.T) {
port := 3000 + i
cfg.UDPPort = uint(port)
ipAddr, pkey := createAddrAndPrivKey(t)
c := params.BeaconConfig().Copy()
nextForkEpoch := primitives.Epoch(i)
c.ForkVersionSchedule[[4]byte{'A', 'B', 'C', 'D'}] = nextForkEpoch

View File

@@ -18,7 +18,8 @@ func (s *Service) forkWatcher() {
currEpoch == params.BeaconConfig().BellatrixForkEpoch ||
currEpoch == params.BeaconConfig().CapellaForkEpoch ||
currEpoch == params.BeaconConfig().DenebForkEpoch ||
currEpoch == params.BeaconConfig().ElectraForkEpoch {
currEpoch == params.BeaconConfig().ElectraForkEpoch ||
currEpoch == params.BeaconConfig().FuluForkEpoch {
// If we are in the fork epoch, we update our enr with
// the updated fork digest. These repeatedly does
// this over the epoch, which might be slightly wasteful

View File

@@ -30,6 +30,9 @@ var gossipTopicMappings = map[string]func() proto.Message{
func GossipTopicMappings(topic string, epoch primitives.Epoch) proto.Message {
switch topic {
case BlockSubnetTopicFormat:
if epoch >= params.BeaconConfig().FuluForkEpoch {
return &ethpb.SignedBeaconBlockFulu{}
}
if epoch >= params.BeaconConfig().ElectraForkEpoch {
return &ethpb.SignedBeaconBlockElectra{}
}
@@ -92,17 +95,25 @@ func init() {
for k, v := range gossipTopicMappings {
GossipTypeMapping[reflect.TypeOf(v())] = k
}
// Specially handle Altair objects.
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockAltair{})] = BlockSubnetTopicFormat
// Specially handle Bellatrix objects.
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockBellatrix{})] = BlockSubnetTopicFormat
// Specially handle Capella objects.
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockCapella{})] = BlockSubnetTopicFormat
// Specially handle Deneb objects.
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockDeneb{})] = BlockSubnetTopicFormat
// Specially handle Electra objects.
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockElectra{})] = BlockSubnetTopicFormat
GossipTypeMapping[reflect.TypeOf(&ethpb.AttestationElectra{})] = AttestationSubnetTopicFormat
GossipTypeMapping[reflect.TypeOf(&ethpb.AttesterSlashingElectra{})] = AttesterSlashingSubnetTopicFormat
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedAggregateAttestationAndProofElectra{})] = AggregateAndProofSubnetTopicFormat
// Specially handle Fulu objects.
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockFulu{})] = BlockSubnetTopicFormat
}

View File

@@ -30,17 +30,20 @@ func TestGossipTopicMappings_CorrectType(t *testing.T) {
capellaForkEpoch := primitives.Epoch(300)
denebForkEpoch := primitives.Epoch(400)
electraForkEpoch := primitives.Epoch(500)
fuluForkEpoch := primitives.Epoch(600)
bCfg.AltairForkEpoch = altairForkEpoch
bCfg.BellatrixForkEpoch = bellatrixForkEpoch
bCfg.CapellaForkEpoch = capellaForkEpoch
bCfg.DenebForkEpoch = denebForkEpoch
bCfg.ElectraForkEpoch = electraForkEpoch
bCfg.FuluForkEpoch = fuluForkEpoch
bCfg.ForkVersionSchedule[bytesutil.ToBytes4(bCfg.AltairForkVersion)] = primitives.Epoch(100)
bCfg.ForkVersionSchedule[bytesutil.ToBytes4(bCfg.BellatrixForkVersion)] = primitives.Epoch(200)
bCfg.ForkVersionSchedule[bytesutil.ToBytes4(bCfg.CapellaForkVersion)] = primitives.Epoch(300)
bCfg.ForkVersionSchedule[bytesutil.ToBytes4(bCfg.DenebForkVersion)] = primitives.Epoch(400)
bCfg.ForkVersionSchedule[bytesutil.ToBytes4(bCfg.ElectraForkVersion)] = primitives.Epoch(500)
bCfg.ForkVersionSchedule[bytesutil.ToBytes4(bCfg.FuluForkVersion)] = primitives.Epoch(600)
params.OverrideBeaconConfig(bCfg)
// Phase 0

View File

@@ -114,7 +114,7 @@ type MetadataProvider interface {
}
type DataColumnsHandler interface {
DataColumnsCustodyCountFromRemotePeer(peer.ID) uint64
DataColumnsAdmissibleCustodyPeers([]peer.ID) ([]peer.ID, error)
DataColumnsAdmissibleSubnetSamplingPeers([]peer.ID) ([]peer.ID, error)
CustodyGroupCountFromPeer(peer.ID) uint64
AdmissibleCustodyGroupsPeers([]peer.ID) ([]peer.ID, error)
AdmissibleCustodySamplingPeers([]peer.ID) ([]peer.ID, error)
}

View File

@@ -78,6 +78,11 @@ func (s *Service) CanSubscribe(topic string) bool {
log.WithError(err).Error("Could not determine Electra fork digest")
return false
}
fuluForkDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().FuluForkEpoch, s.genesisValidatorsRoot)
if err != nil {
log.WithError(err).Error("Could not determine Fulu fork digest")
return false
}
switch parts[2] {
case fmt.Sprintf("%x", phase0ForkDigest):
case fmt.Sprintf("%x", altairForkDigest):
@@ -85,6 +90,7 @@ func (s *Service) CanSubscribe(topic string) bool {
case fmt.Sprintf("%x", capellaForkDigest):
case fmt.Sprintf("%x", denebForkDigest):
case fmt.Sprintf("%x", electraForkDigest):
case fmt.Sprintf("%x", fuluForkDigest):
default:
return false
}

View File

@@ -108,6 +108,7 @@ func (g gossipTracer) setMetricFromRPC(act action, subCtr prometheus.Counter, pu
ctrlCtr.WithLabelValues("prune").Add(float64(len(rpc.Control.Prune)))
ctrlCtr.WithLabelValues("ihave").Add(float64(len(rpc.Control.Ihave)))
ctrlCtr.WithLabelValues("iwant").Add(float64(len(rpc.Control.Iwant)))
ctrlCtr.WithLabelValues("idontwant").Add(float64(len(rpc.Control.Idontwant)))
}
for _, msg := range rpc.Publish {
// For incoming messages from pubsub, we do not record metrics for them as these values

View File

@@ -71,16 +71,16 @@ const (
// RPCBlobSidecarsByRangeTopicV1 is a topic for requesting blob sidecars
// in the slot range [start_slot, start_slot + count), leading up to the current head block as selected by fork choice.
// Protocol ID: /eth2/beacon_chain/req/blob_sidecars_by_range/1/ - New in deneb.
// /eth2/beacon_chain/req/blob_sidecars_by_range/1/ - New in deneb.
RPCBlobSidecarsByRangeTopicV1 = protocolPrefix + BlobSidecarsByRangeName + SchemaVersionV1
// RPCBlobSidecarsByRootTopicV1 is a topic for requesting blob sidecars by their block root. New in deneb.
// /eth2/beacon_chain/req/blob_sidecars_by_root/1/
// RPCBlobSidecarsByRootTopicV1 is a topic for requesting blob sidecars by their block root.
// /eth2/beacon_chain/req/blob_sidecars_by_root/1/ - New in deneb.
RPCBlobSidecarsByRootTopicV1 = protocolPrefix + BlobSidecarsByRootName + SchemaVersionV1
// RPCDataColumnSidecarsByRootTopicV1 is a topic for requesting data column sidecars by their block root. New in PeerDAS.
// /eth2/beacon_chain/req/data_column_sidecars_by_root/1
// RPCDataColumnSidecarsByRootTopicV1 is a topic for requesting data column sidecars by their block root.
// /eth2/beacon_chain/req/data_column_sidecars_by_root/1 - New in Fulu.
RPCDataColumnSidecarsByRootTopicV1 = protocolPrefix + DataColumnSidecarsByRootName + SchemaVersionV1
// RPCDataColumnSidecarsByRangeTopicV1 is a topic for requesting data column sidecars by their slot. New in PeerDAS.
// /eth2/beacon_chain/req/data_column_sidecars_by_range/1
// RPCDataColumnSidecarsByRangeTopicV1 is a topic for requesting data column sidecars by their slot.
// /eth2/beacon_chain/req/data_column_sidecars_by_range/1 - New in Fulu.
RPCDataColumnSidecarsByRangeTopicV1 = protocolPrefix + DataColumnSidecarsByRangeName + SchemaVersionV1
// V2 RPC Topics
@@ -92,6 +92,7 @@ const (
RPCMetaDataTopicV2 = protocolPrefix + MetadataMessageName + SchemaVersionV2
// V3 RPC Topics
// RPCMetaDataTopicV3 defines the v3 topic for the metadata rpc method.
RPCMetaDataTopicV3 = protocolPrefix + MetadataMessageName + SchemaVersionV3
)
@@ -155,8 +156,8 @@ var altairMapping = map[string]bool{
MetadataMessageName: true,
}
// Maps all the RPC messages which are to updated with peerDAS fork epoch.
var peerDASMapping = map[string]bool{
// Maps all the RPC messages which are to updated in fulu.
var fuluMapping = map[string]bool{
MetadataMessageName: true,
}
@@ -305,9 +306,9 @@ func TopicFromMessage(msg string, epoch primitives.Epoch) (string, error) {
version = SchemaVersionV2
}
// Check if the message is to be updated in peerDAS.
isPeerDAS := epoch >= params.BeaconConfig().ElectraForkEpoch
if isPeerDAS && peerDASMapping[msg] {
// Check if the message is to be updated in fulu.
isFulu := epoch >= params.BeaconConfig().FuluForkEpoch
if isFulu && fuluMapping[msg] {
version = SchemaVersionV3
}

View File

@@ -26,6 +26,8 @@ import (
logTest "github.com/sirupsen/logrus/hooks/test"
)
const testPingInterval = 100 * time.Millisecond
type mockListener struct {
localNode *enode.LocalNode
}
@@ -184,7 +186,7 @@ func TestListenForNewNodes(t *testing.T) {
params.SetupTestConfigCleanup(t)
// Setup bootnode.
notifier := &mock.MockStateNotifier{}
cfg := &Config{StateNotifier: notifier}
cfg := &Config{StateNotifier: notifier, PingInterval: testPingInterval, DisableLivenessCheck: true}
port := 2000
cfg.UDPPort = uint(port)
_, pkey := createAddrAndPrivKey(t)
@@ -200,6 +202,17 @@ func TestListenForNewNodes(t *testing.T) {
require.NoError(t, err)
defer bootListener.Close()
// Allow bootnode's table to have its initial refresh. This allows
// inbound nodes to be added in.
time.Sleep(5 * time.Second)
// Use shorter period for testing.
currentPeriod := pollingPeriod
pollingPeriod = 1 * time.Second
defer func() {
pollingPeriod = currentPeriod
}()
bootNode := bootListener.Self()
var listeners []*listenerWrapper
@@ -208,6 +221,8 @@ func TestListenForNewNodes(t *testing.T) {
cs := startup.NewClockSynchronizer()
cfg = &Config{
Discv5BootStrapAddrs: []string{bootNode.String()},
PingInterval: testPingInterval,
DisableLivenessCheck: true,
MaxPeers: 30,
ClockWaiter: cs,
}

View File

@@ -30,9 +30,9 @@ var (
attestationSubnetCount = params.BeaconConfig().AttestationSubnetCount
syncCommsSubnetCount = params.BeaconConfig().SyncCommitteeSubnetCount
attSubnetEnrKey = params.BeaconNetworkConfig().AttSubnetKey
syncCommsSubnetEnrKey = params.BeaconNetworkConfig().SyncCommsSubnetKey
custodySubnetCountEnrKey = params.BeaconNetworkConfig().CustodySubnetCountKey
attSubnetEnrKey = params.BeaconNetworkConfig().AttSubnetKey
syncCommsSubnetEnrKey = params.BeaconNetworkConfig().SyncCommsSubnetKey
custodyGroupCountEnrKey = params.BeaconNetworkConfig().CustodyGroupCountKey
)
// The value used with the subnet, in order
@@ -56,7 +56,7 @@ const blobSubnetLockerVal = 110
// chosen more than sync, attestation and blob subnet (6) combined.
const dataColumnSubnetVal = 150
// nodeFilter return a function that filters nodes based on the subnet topic and subnet index.
// nodeFilter returns a function that filters nodes based on the subnet topic and subnet index.
func (s *Service) nodeFilter(topic string, index uint64) (func(node *enode.Node) bool, error) {
switch {
case strings.Contains(topic, GossipAttestationMessage):
@@ -346,24 +346,24 @@ func (s *Service) updateSubnetRecordWithMetadataV2(bitVAtt bitfield.Bitvector64,
func (s *Service) updateSubnetRecordWithMetadataV3(
bitVAtt bitfield.Bitvector64,
bitVSync bitfield.Bitvector4,
custodySubnetCount uint64,
custodyGroupCount uint64,
) {
attSubnetsEntry := enr.WithEntry(attSubnetEnrKey, &bitVAtt)
syncSubnetsEntry := enr.WithEntry(syncCommsSubnetEnrKey, &bitVSync)
custodySubnetCountEntry := enr.WithEntry(custodySubnetCountEnrKey, custodySubnetCount)
custodyGroupCountEntry := enr.WithEntry(custodyGroupCountEnrKey, custodyGroupCount)
localNode := s.dv5Listener.LocalNode()
localNode.Set(attSubnetsEntry)
localNode.Set(syncSubnetsEntry)
localNode.Set(custodySubnetCountEntry)
localNode.Set(custodyGroupCountEntry)
newSeqNumber := s.metaData.SequenceNumber() + 1
s.metaData = wrapper.WrappedMetadataV2(&pb.MetaDataV2{
SeqNumber: newSeqNumber,
Attnets: bitVAtt,
Syncnets: bitVSync,
CustodySubnetCount: custodySubnetCount,
SeqNumber: newSeqNumber,
Attnets: bitVAtt,
Syncnets: bitVSync,
CustodyGroupCount: custodyGroupCount,
})
}
@@ -381,7 +381,7 @@ func initializePersistentSubnets(id enode.ID, epoch primitives.Epoch) error {
return nil
}
// initializePersistentColumnSubnets initialize persisten column subnets
// initializePersistentColumnSubnets initialize persistent column subnets
func initializePersistentColumnSubnets(id enode.ID) error {
// Check if the column subnets are already cached.
_, ok, expTime := cache.ColumnSubnetIDs.GetColumnSubnets()
@@ -389,15 +389,25 @@ func initializePersistentColumnSubnets(id enode.ID) error {
return nil
}
// Retrieve the subnets we should be subscribed to.
subnetSamplingSize := peerdas.SubnetSamplingSize()
subnetsMap, err := peerdas.CustodyColumnSubnets(id, subnetSamplingSize)
// Compute the number of custody groups we should sample.
custodyGroupSamplingSize := peerdas.CustodyGroupSamplingSize()
// Compute the custody groups we should sample.
custodyGroups, err := peerdas.CustodyGroups(id, custodyGroupSamplingSize)
if err != nil {
return errors.Wrap(err, "custody column subnets")
return errors.Wrap(err, "custody groups")
}
subnets := make([]uint64, 0, len(subnetsMap))
for subnet := range subnetsMap {
// Compute the column subnets for the custody groups.
custodyColumns, err := peerdas.CustodyColumns(custodyGroups)
if err != nil {
return errors.Wrap(err, "custody columns")
}
// Compute subnets from the custody columns.
subnets := make([]uint64, 0, len(custodyColumns))
for column := range custodyColumns {
subnet := peerdas.ComputeSubnetForDataColumnSidecar(column)
subnets = append(subnets, subnet)
}
@@ -530,23 +540,29 @@ func syncSubnets(record *enr.Record) ([]uint64, error) {
return committeeIdxs, nil
}
// Retrieve the data columns subnets from a node's ENR and node ID.
// TODO: Add tests
func dataColumnSubnets(nodeID enode.ID, record *enr.Record) (map[uint64]bool, error) {
custodyRequirement := params.BeaconConfig().CustodyRequirement
// Retrieve the custody count from the ENR.
custodyCount, err := peerdas.CustodyCountFromRecord(record)
custodyGroupCount, err := peerdas.CustodyGroupCountFromRecord(record)
if err != nil {
// If we fail to retrieve the custody count, we default to the custody requirement.
custodyCount = custodyRequirement
return nil, errors.Wrap(err, "custody group count from record")
}
// Retrieve the custody subnets from the remote peer
custodyColumnsSubnets, err := peerdas.CustodyColumnSubnets(nodeID, custodyCount)
// Retrieve the custody groups from the remote peer.
custodyGroups, err := peerdas.CustodyGroups(nodeID, custodyGroupCount)
if err != nil {
return nil, errors.Wrap(err, "custody column subnets")
return nil, errors.Wrap(err, "custody groups")
}
return custodyColumnsSubnets, nil
// Retrieve the custody columns from the groups.
custodyColumns, err := peerdas.CustodyColumns(custodyGroups)
if err != nil {
return nil, errors.Wrap(err, "custody columns")
}
// Get custody columns subnets from the columns.
return peerdas.DataColumnSubnets(custodyColumns), nil
}
// Parses the attestation subnets ENR entry in a node and extracts its value
@@ -574,7 +590,7 @@ func syncBitvector(record *enr.Record) (bitfield.Bitvector4, error) {
}
// The subnet locker is a map which keeps track of all
// mutexes stored per subnet. This locker is re-used
// mutexes stored per subnet. This locker is reused
// between both the attestation, sync and blob subnets.
// Sync subnets are stored by (subnet+syncLockerVal).
// Blob subnets are stored by (subnet+blobSubnetLockerVal).

View File

@@ -66,7 +66,7 @@ func TestStartDiscV5_FindPeersWithSubnet(t *testing.T) {
genesisTime := time.Now()
bootNodeService := &Service{
cfg: &Config{UDPPort: 2000, TCPPort: 3000, QUICPort: 3000},
cfg: &Config{UDPPort: 2000, TCPPort: 3000, QUICPort: 3000, DisableLivenessCheck: true, PingInterval: testPingInterval},
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
}
@@ -78,6 +78,10 @@ func TestStartDiscV5_FindPeersWithSubnet(t *testing.T) {
require.NoError(t, err)
defer bootListener.Close()
// Allow bootnode's table to have its initial refresh. This allows
// inbound nodes to be added in.
time.Sleep(5 * time.Second)
bootNodeENR := bootListener.Self().String()
// Create 3 nodes, each subscribed to a different subnet.
@@ -92,6 +96,8 @@ func TestStartDiscV5_FindPeersWithSubnet(t *testing.T) {
UDPPort: uint(2000 + i),
TCPPort: uint(3000 + i),
QUICPort: uint(3000 + i),
PingInterval: testPingInterval,
DisableLivenessCheck: true,
})
require.NoError(t, err)
@@ -133,6 +139,8 @@ func TestStartDiscV5_FindPeersWithSubnet(t *testing.T) {
cfg := &Config{
Discv5BootStrapAddrs: []string{bootNodeENR},
PingInterval: testPingInterval,
DisableLivenessCheck: true,
MaxPeers: 30,
UDPPort: 2010,
TCPPort: 3010,

View File

@@ -185,14 +185,14 @@ func (*FakeP2P) InterceptUpgraded(network.Conn) (allow bool, reason control.Disc
return true, 0
}
func (*FakeP2P) DataColumnsCustodyCountFromRemotePeer(peer.ID) uint64 {
func (*FakeP2P) CustodyGroupCountFromPeer(peer.ID) uint64 {
return 0
}
func (*FakeP2P) DataColumnsAdmissibleCustodyPeers(peers []peer.ID) ([]peer.ID, error) {
func (*FakeP2P) AdmissibleCustodyGroupsPeers(peers []peer.ID) ([]peer.ID, error) {
return peers, nil
}
func (*FakeP2P) DataColumnsAdmissibleSubnetSamplingPeers(peers []peer.ID) ([]peer.ID, error) {
func (*FakeP2P) AdmissibleCustodySamplingPeers(peers []peer.ID) ([]peer.ID, error) {
return peers, nil
}

View File

@@ -68,7 +68,10 @@ func NewTestP2P(t *testing.T, userOptions ...config.Option) *TestP2P {
libp2p.DefaultListenAddrs,
}
options = append(options, userOptions...)
// Favour user options if provided.
if len(userOptions) > 0 {
options = append(userOptions, options...)
}
h, err := libp2p.New(options...)
require.NoError(t, err)
@@ -445,8 +448,8 @@ func (*TestP2P) InterceptUpgraded(network.Conn) (allow bool, reason control.Disc
return true, 0
}
func (s *TestP2P) DataColumnsCustodyCountFromRemotePeer(pid peer.ID) uint64 {
// By default, we assume the peer custodies the minimum number of subnets.
func (s *TestP2P) CustodyGroupCountFromPeer(pid peer.ID) uint64 {
// By default, we assume the peer custodies the minimum number of groups.
custodyRequirement := params.BeaconConfig().CustodyRequirement
// Retrieve the ENR of the peer.
@@ -456,18 +459,18 @@ func (s *TestP2P) DataColumnsCustodyCountFromRemotePeer(pid peer.ID) uint64 {
}
// Retrieve the custody subnets count from the ENR.
custodyCount, err := peerdas.CustodyCountFromRecord(record)
custodyGroupCount, err := peerdas.CustodyGroupCountFromRecord(record)
if err != nil {
return custodyRequirement
}
return custodyCount
return custodyGroupCount
}
func (*TestP2P) DataColumnsAdmissibleCustodyPeers(peers []peer.ID) ([]peer.ID, error) {
func (*TestP2P) AdmissibleCustodyGroupsPeers(peers []peer.ID) ([]peer.ID, error) {
return peers, nil
}
func (*TestP2P) DataColumnsAdmissibleSubnetSamplingPeers(peers []peer.ID) ([]peer.ID, error) {
func (*TestP2P) AdmissibleCustodySamplingPeers(peers []peer.ID) ([]peer.ID, error) {
return peers, nil
}

View File

@@ -67,7 +67,12 @@ func InitializeDataMaps() {
},
bytesutil.ToBytes4(params.BeaconConfig().ElectraForkVersion): func() (interfaces.ReadOnlySignedBeaconBlock, error) {
return blocks.NewSignedBeaconBlock(
&ethpb.SignedBeaconBlockElectra{Block: &ethpb.BeaconBlockElectra{Body: &ethpb.BeaconBlockBodyElectra{ExecutionPayload: &enginev1.ExecutionPayloadElectra{}}}},
&ethpb.SignedBeaconBlockElectra{Block: &ethpb.BeaconBlockElectra{Body: &ethpb.BeaconBlockBodyElectra{ExecutionPayload: &enginev1.ExecutionPayloadDeneb{}}}},
)
},
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (interfaces.ReadOnlySignedBeaconBlock, error) {
return blocks.NewSignedBeaconBlock(
&ethpb.SignedBeaconBlockFulu{Block: &ethpb.BeaconBlockFulu{Body: &ethpb.BeaconBlockBodyFulu{ExecutionPayload: &enginev1.ExecutionPayloadDeneb{}}}},
)
},
}
@@ -87,9 +92,12 @@ func InitializeDataMaps() {
return wrapper.WrappedMetadataV1(&ethpb.MetaDataV1{}), nil
},
bytesutil.ToBytes4(params.BeaconConfig().DenebForkVersion): func() (metadata.Metadata, error) {
return wrapper.WrappedMetadataV2(&ethpb.MetaDataV2{}), nil
return wrapper.WrappedMetadataV1(&ethpb.MetaDataV1{}), nil
},
bytesutil.ToBytes4(params.BeaconConfig().ElectraForkVersion): func() (metadata.Metadata, error) {
return wrapper.WrappedMetadataV1(&ethpb.MetaDataV1{}), nil
},
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (metadata.Metadata, error) {
return wrapper.WrappedMetadataV2(&ethpb.MetaDataV2{}), nil
},
}
@@ -114,6 +122,9 @@ func InitializeDataMaps() {
bytesutil.ToBytes4(params.BeaconConfig().ElectraForkVersion): func() (ethpb.Att, error) {
return &ethpb.AttestationElectra{}, nil
},
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (ethpb.Att, error) {
return &ethpb.AttestationElectra{}, nil
},
}
// Reset our aggregate attestation map.
@@ -136,5 +147,8 @@ func InitializeDataMaps() {
bytesutil.ToBytes4(params.BeaconConfig().ElectraForkVersion): func() (ethpb.SignedAggregateAttAndProof, error) {
return &ethpb.SignedAggregateAttestationAndProofElectra{}, nil
},
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (ethpb.SignedAggregateAttAndProof, error) {
return &ethpb.SignedAggregateAttestationAndProofElectra{}, nil
},
}
}

View File

@@ -78,7 +78,7 @@ func privKey(cfg *Config) (*ecdsa.PrivateKey, error) {
}
// If the StaticPeerID flag is not set and if peerDAS is not enabled, return the private key.
if !(cfg.StaticPeerID || params.PeerDASEnabled()) {
if !(cfg.StaticPeerID || params.FuluEnabled()) {
return ecdsaprysm.ConvertFromInterfacePrivKey(priv)
}

View File

@@ -354,6 +354,29 @@ func (s *Server) publishBlindedBlockSSZ(ctx context.Context, w http.ResponseWrit
httputil.HandleError(w, api.VersionHeader+" header is required", http.StatusBadRequest)
}
fuluBlock := &eth.SignedBlindedBeaconBlockFulu{}
if err = fuluBlock.UnmarshalSSZ(body); err == nil {
genericBlock := &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_BlindedFulu{
BlindedFulu: fuluBlock,
},
}
if err = s.validateBroadcast(ctx, r, genericBlock); err != nil {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
s.proposeBlock(ctx, w, genericBlock)
return
}
if versionHeader == version.String(version.Fulu) {
httputil.HandleError(
w,
fmt.Sprintf("Could not decode request body into %s consensus block: %v", version.String(version.Fulu), err.Error()),
http.StatusBadRequest,
)
return
}
electraBlock := &eth.SignedBlindedBeaconBlockElectra{}
if err = electraBlock.UnmarshalSSZ(body); err == nil {
genericBlock := &eth.GenericSignedBeaconBlock{
@@ -508,6 +531,27 @@ func (s *Server) publishBlindedBlock(ctx context.Context, w http.ResponseWriter,
var consensusBlock *eth.GenericSignedBeaconBlock
var fuluBlock *structs.SignedBlindedBeaconBlockFulu
if err = unmarshalStrict(body, &fuluBlock); err == nil {
consensusBlock, err = fuluBlock.ToGeneric()
if err == nil {
if err = s.validateBroadcast(ctx, r, consensusBlock); err != nil {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
s.proposeBlock(ctx, w, consensusBlock)
return
}
}
if versionHeader == version.String(version.Fulu) {
httputil.HandleError(
w,
fmt.Sprintf("Could not decode request body into %s consensus block: %v", version.String(version.Fulu), err.Error()),
http.StatusBadRequest,
)
return
}
var electraBlock *structs.SignedBlindedBeaconBlockElectra
if err = unmarshalStrict(body, &electraBlock); err == nil {
consensusBlock, err = electraBlock.ToGeneric()
@@ -692,6 +736,39 @@ func (s *Server) publishBlockSSZ(ctx context.Context, w http.ResponseWriter, r *
return
}
fuluBlock := &eth.SignedBeaconBlockContentsFulu{}
if err = fuluBlock.UnmarshalSSZ(body); err == nil {
genericBlock := &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_Fulu{
Fulu: fuluBlock,
},
}
if err = s.validateBroadcast(ctx, r, genericBlock); err != nil {
if errors.Is(err, errEquivocatedBlock) {
b, err := blocks.NewSignedBeaconBlock(genericBlock)
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
if err := s.broadcastSeenBlockSidecars(ctx, b, genericBlock.GetFulu().Blobs, genericBlock.GetFulu().KzgProofs); err != nil {
log.WithError(err).Error("Failed to broadcast blob sidecars")
}
}
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
s.proposeBlock(ctx, w, genericBlock)
return
}
if versionHeader == version.String(version.Fulu) {
httputil.HandleError(
w,
fmt.Sprintf("Could not decode request body into %s consensus block: %v", version.String(version.Fulu), err.Error()),
http.StatusBadRequest,
)
return
}
electraBlock := &eth.SignedBeaconBlockContentsElectra{}
if err = electraBlock.UnmarshalSSZ(body); err == nil {
genericBlock := &eth.GenericSignedBeaconBlock{
@@ -867,6 +944,37 @@ func (s *Server) publishBlock(ctx context.Context, w http.ResponseWriter, r *htt
var consensusBlock *eth.GenericSignedBeaconBlock
var fuluBlockContents *structs.SignedBeaconBlockContentsFulu
if err = unmarshalStrict(body, &fuluBlockContents); err == nil {
consensusBlock, err = fuluBlockContents.ToGeneric()
if err == nil {
if err = s.validateBroadcast(ctx, r, consensusBlock); err != nil {
if errors.Is(err, errEquivocatedBlock) {
b, err := blocks.NewSignedBeaconBlock(consensusBlock)
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
if err := s.broadcastSeenBlockSidecars(ctx, b, consensusBlock.GetFulu().Blobs, consensusBlock.GetFulu().KzgProofs); err != nil {
log.WithError(err).Error("Failed to broadcast blob sidecars")
}
}
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
s.proposeBlock(ctx, w, consensusBlock)
return
}
}
if versionHeader == version.String(version.Fulu) {
httputil.HandleError(
w,
fmt.Sprintf("Could not decode request body into %s consensus block: %v", version.String(version.Fulu), err.Error()),
http.StatusBadRequest,
)
return
}
var electraBlockContents *structs.SignedBeaconBlockContentsElectra
if err = unmarshalStrict(body, &electraBlockContents); err == nil {
consensusBlock, err = electraBlockContents.ToGeneric()
@@ -1081,13 +1189,16 @@ func (s *Server) validateConsensus(ctx context.Context, b *eth.GenericSignedBeac
var blobs [][]byte
var proofs [][]byte
switch {
case blk.Version() == version.Electra:
blobs = b.GetElectra().Blobs
proofs = b.GetElectra().KzgProofs
case blk.Version() == version.Deneb:
switch blk.Version() {
case version.Deneb:
blobs = b.GetDeneb().Blobs
proofs = b.GetDeneb().KzgProofs
case version.Electra:
blobs = b.GetElectra().Blobs
proofs = b.GetElectra().KzgProofs
case version.Fulu:
blobs = b.GetFulu().Blobs
proofs = b.GetFulu().KzgProofs
default:
return nil
}
@@ -1118,7 +1229,8 @@ func (s *Server) validateBlobSidecars(blk interfaces.SignedBeaconBlock, blobs []
return errors.New("number of blobs, proofs, and commitments do not match")
}
for i, blob := range blobs {
if err := kzg4844.VerifyBlobProof(kzg4844.Blob(blob), kzg4844.Commitment(kzgs[i]), kzg4844.Proof(proofs[i])); err != nil {
b := kzg4844.Blob(blob)
if err := kzg4844.VerifyBlobProof(&b, kzg4844.Commitment(kzgs[i]), kzg4844.Proof(proofs[i])); err != nil {
return errors.Wrap(err, "could not verify blob proof")
}
}

View File

@@ -1991,7 +1991,7 @@ func TestSubmitAttesterSlashings(t *testing.T) {
_, ok := broadcaster.BroadcastMessages[0].(*ethpbv1alpha1.AttesterSlashing)
assert.Equal(t, true, ok)
})
t.Run("accross-fork", func(t *testing.T) {
t.Run("across-fork", func(t *testing.T) {
attestationData1.Slot = params.BeaconConfig().SlotsPerEpoch
attestationData2.Slot = params.BeaconConfig().SlotsPerEpoch
slashing := &ethpbv1alpha1.AttesterSlashing{
@@ -2147,7 +2147,7 @@ func TestSubmitAttesterSlashings(t *testing.T) {
_, ok := broadcaster.BroadcastMessages[0].(*ethpbv1alpha1.AttesterSlashingElectra)
assert.Equal(t, true, ok)
})
t.Run("accross-fork", func(t *testing.T) {
t.Run("across-fork", func(t *testing.T) {
attestationData1.Slot = params.BeaconConfig().SlotsPerEpoch
attestationData2.Slot = params.BeaconConfig().SlotsPerEpoch
slashing := &ethpbv1alpha1.AttesterSlashingElectra{

View File

@@ -302,6 +302,38 @@ func TestGetBlockV2(t *testing.T) {
require.NoError(t, err)
assert.DeepEqual(t, blk, b)
})
t.Run("fulu", func(t *testing.T) {
b := util.NewBeaconBlockFulu()
b.Block.Slot = 123
sb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
mockBlockFetcher := &testutil.MockBlocker{BlockToReturn: sb}
mockChainService := &chainMock.ChainService{
FinalizedRoots: map[[32]byte]bool{},
}
s := &Server{
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
Blocker: mockBlockFetcher,
}
request := httptest.NewRequest(http.MethodGet, "http://foo.example/eth/v2/beacon/blocks/{block_id}", nil)
request.SetPathValue("block_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetBlockV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetBlockV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, version.String(version.Fulu), resp.Version)
sbb := &structs.SignedBeaconBlockFulu{Message: &structs.BeaconBlockFulu{}}
require.NoError(t, json.Unmarshal(resp.Data.Message, sbb.Message))
sbb.Signature = resp.Data.Signature
blk, err := sbb.ToConsensus()
require.NoError(t, err)
assert.DeepEqual(t, blk, b)
})
t.Run("execution optimistic", func(t *testing.T) {
b := util.NewBeaconBlockBellatrix()
sb, err := blocks.NewSignedBeaconBlock(b)
@@ -518,6 +550,29 @@ func TestGetBlockSSZV2(t *testing.T) {
require.NoError(t, err)
assert.DeepEqual(t, sszExpected, writer.Body.Bytes())
})
t.Run("fulu", func(t *testing.T) {
b := util.NewBeaconBlockFulu()
b.Block.Slot = 123
sb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
s := &Server{
Blocker: &testutil.MockBlocker{BlockToReturn: sb},
}
request := httptest.NewRequest(http.MethodGet, "http://foo.example/eth/v2/beacon/blocks/{block_id}", nil)
request.SetPathValue("block_id", "head")
request.Header.Set("Accept", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetBlockV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, version.String(version.Fulu), writer.Header().Get(api.VersionHeader))
sszExpected, err := b.MarshalSSZ()
require.NoError(t, err)
assert.DeepEqual(t, sszExpected, writer.Body.Bytes())
})
}
func TestGetBlockAttestations(t *testing.T) {
@@ -1035,6 +1090,35 @@ func TestGetBlindedBlock(t *testing.T) {
require.NoError(t, err)
assert.DeepEqual(t, blk, b)
})
t.Run("fulu", func(t *testing.T) {
b := util.NewBlindedBeaconBlockFulu()
sb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
mockChainService := &chainMock.ChainService{}
s := &Server{
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
Blocker: &testutil.MockBlocker{BlockToReturn: sb},
}
request := httptest.NewRequest(http.MethodGet, "http://foo.example/eth/v1/beacon/blinded_blocks/{block_id}", nil)
request.SetPathValue("block_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetBlindedBlock(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetBlockV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, version.String(version.Fulu), resp.Version)
sbb := &structs.SignedBlindedBeaconBlockFulu{Message: &structs.BlindedBeaconBlockFulu{}}
require.NoError(t, json.Unmarshal(resp.Data.Message, sbb.Message))
sbb.Signature = resp.Data.Signature
blk, err := sbb.ToConsensus()
require.NoError(t, err)
assert.DeepEqual(t, blk, b)
})
t.Run("execution optimistic", func(t *testing.T) {
b := util.NewBlindedBeaconBlockBellatrix()
sb, err := blocks.NewSignedBeaconBlock(b)
@@ -1349,11 +1433,12 @@ func TestPublishBlock(t *testing.T) {
t.Run("Electra", func(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_Electra)
converted, err := structs.SignedBeaconBlockContentsElectraFromConsensus(block.Electra)
// Convert back Fulu to Electra when there is at least one difference between Electra and Fulu blocks.
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_Fulu)
converted, err := structs.SignedBeaconBlockContentsFuluFromConsensus(block.Fulu)
require.NoError(t, err)
var signedblock *structs.SignedBeaconBlockContentsElectra
err = json.Unmarshal([]byte(rpctesting.ElectraBlockContents), &signedblock)
var signedblock *structs.SignedBeaconBlockContentsFulu
err = json.Unmarshal([]byte(rpctesting.FuluBlockContents), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, converted, signedblock)
return ok
@@ -1369,6 +1454,29 @@ func TestPublishBlock(t *testing.T) {
server.PublishBlock(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
})
t.Run("Fulu", func(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_Fulu)
converted, err := structs.SignedBeaconBlockContentsFuluFromConsensus(block.Fulu)
require.NoError(t, err)
var signedblock *structs.SignedBeaconBlockContentsFulu
err = json.Unmarshal([]byte(rpctesting.FuluBlockContents), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, converted, signedblock)
return ok
}))
server := &Server{
V1Alpha1ValidatorServer: v1alpha1Server,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(rpctesting.FuluBlockContents)))
request.Header.Set(api.VersionHeader, version.String(version.Fulu))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlock(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
})
t.Run("invalid block", func(t *testing.T) {
server := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: false},
@@ -1555,7 +1663,8 @@ func TestPublishBlockSSZ(t *testing.T) {
t.Run("Electra", func(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
_, ok := req.Block.(*eth.GenericSignedBeaconBlock_Electra)
// Convert back Fulu to Electra when there is at least one difference between Electra and Fulu blocks.
_, ok := req.Block.(*eth.GenericSignedBeaconBlock_Fulu)
return ok
}))
server := &Server{
@@ -1563,16 +1672,42 @@ func TestPublishBlockSSZ(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk structs.SignedBeaconBlockContentsElectra
err := json.Unmarshal([]byte(rpctesting.ElectraBlockContents), &blk)
var blk structs.SignedBeaconBlockContentsFulu
err := json.Unmarshal([]byte(rpctesting.FuluBlockContents), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
require.NoError(t, err)
ssz, err := genericBlock.GetElectra().MarshalSSZ()
ssz, err := genericBlock.GetFulu().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Electra))
request.Header.Set(api.VersionHeader, version.String(version.Fulu))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlock(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
})
t.Run("Fulu", func(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
_, ok := req.Block.(*eth.GenericSignedBeaconBlock_Fulu)
return ok
}))
server := &Server{
V1Alpha1ValidatorServer: v1alpha1Server,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk structs.SignedBeaconBlockContentsFulu
err := json.Unmarshal([]byte(rpctesting.FuluBlockContents), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
require.NoError(t, err)
ssz, err := genericBlock.GetFulu().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Fulu))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlock(writer, request)
@@ -1760,11 +1895,12 @@ func TestPublishBlindedBlock(t *testing.T) {
t.Run("Blinded Electra", func(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_BlindedElectra)
converted, err := structs.BlindedBeaconBlockElectraFromConsensus(block.BlindedElectra.Message)
// Convert back Fulu to Electra when there is at least one difference between Electra and Fulu blocks.
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_BlindedFulu)
converted, err := structs.BlindedBeaconBlockFuluFromConsensus(block.BlindedFulu.Message)
require.NoError(t, err)
var signedblock *structs.SignedBlindedBeaconBlockElectra
err = json.Unmarshal([]byte(rpctesting.BlindedElectraBlock), &signedblock)
var signedblock *structs.SignedBlindedBeaconBlockFulu
err = json.Unmarshal([]byte(rpctesting.BlindedFuluBlock), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, converted, signedblock.Message)
return ok
@@ -1781,6 +1917,30 @@ func TestPublishBlindedBlock(t *testing.T) {
server.PublishBlindedBlock(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
})
t.Run("Blinded Fulu", func(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_BlindedFulu)
converted, err := structs.BlindedBeaconBlockFuluFromConsensus(block.BlindedFulu.Message)
require.NoError(t, err)
var signedblock *structs.SignedBlindedBeaconBlockFulu
err = json.Unmarshal([]byte(rpctesting.BlindedFuluBlock), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, converted, signedblock.Message)
return ok
}))
server := &Server{
V1Alpha1ValidatorServer: v1alpha1Server,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(rpctesting.BlindedFuluBlock)))
request.Header.Set(api.VersionHeader, version.String(version.Fulu))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlindedBlock(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
})
t.Run("invalid block", func(t *testing.T) {
server := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: false},
@@ -1968,7 +2128,8 @@ func TestPublishBlindedBlockSSZ(t *testing.T) {
t.Run("Electra", func(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
_, ok := req.Block.(*eth.GenericSignedBeaconBlock_BlindedElectra)
// Convert back Fulu to Electra when there is at least one difference between Electra and Fulu blocks.
_, ok := req.Block.(*eth.GenericSignedBeaconBlock_BlindedFulu)
return ok
}))
server := &Server{
@@ -1976,16 +2137,42 @@ func TestPublishBlindedBlockSSZ(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk structs.SignedBlindedBeaconBlockElectra
err := json.Unmarshal([]byte(rpctesting.BlindedElectraBlock), &blk)
var blk structs.SignedBlindedBeaconBlockFulu
err := json.Unmarshal([]byte(rpctesting.BlindedFuluBlock), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
require.NoError(t, err)
ssz, err := genericBlock.GetBlindedElectra().MarshalSSZ()
ssz, err := genericBlock.GetBlindedFulu().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Electra))
request.Header.Set(api.VersionHeader, version.String(version.Fulu))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlindedBlock(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
})
t.Run("Fulu", func(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
_, ok := req.Block.(*eth.GenericSignedBeaconBlock_BlindedFulu)
return ok
}))
server := &Server{
V1Alpha1ValidatorServer: v1alpha1Server,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk structs.SignedBlindedBeaconBlockFulu
err := json.Unmarshal([]byte(rpctesting.BlindedFuluBlock), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
require.NoError(t, err)
ssz, err := genericBlock.GetBlindedFulu().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Fulu))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlindedBlock(writer, request)
@@ -2165,11 +2352,12 @@ func TestPublishBlockV2(t *testing.T) {
t.Run("Electra", func(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_Electra)
converted, err := structs.SignedBeaconBlockContentsElectraFromConsensus(block.Electra)
// Convert back Fulu to Electra when there is at least one difference between Electra and Fulu blocks.
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_Fulu)
converted, err := structs.SignedBeaconBlockContentsFuluFromConsensus(block.Fulu)
require.NoError(t, err)
var signedblock *structs.SignedBeaconBlockContentsElectra
err = json.Unmarshal([]byte(rpctesting.ElectraBlockContents), &signedblock)
var signedblock *structs.SignedBeaconBlockContentsFulu
err = json.Unmarshal([]byte(rpctesting.FuluBlockContents), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, converted, signedblock)
return ok
@@ -2186,6 +2374,30 @@ func TestPublishBlockV2(t *testing.T) {
server.PublishBlockV2(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
})
t.Run("Fulu", func(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_Fulu)
converted, err := structs.SignedBeaconBlockContentsFuluFromConsensus(block.Fulu)
require.NoError(t, err)
var signedblock *structs.SignedBeaconBlockContentsFulu
err = json.Unmarshal([]byte(rpctesting.FuluBlockContents), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, converted, signedblock)
return ok
}))
server := &Server{
V1Alpha1ValidatorServer: v1alpha1Server,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(rpctesting.FuluBlockContents)))
request.Header.Set(api.VersionHeader, version.String(version.Fulu))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlockV2(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
})
t.Run("invalid block", func(t *testing.T) {
server := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: false},
@@ -2385,7 +2597,8 @@ func TestPublishBlockV2SSZ(t *testing.T) {
t.Run("Electra", func(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
_, ok := req.Block.(*eth.GenericSignedBeaconBlock_Electra)
// Convert back Fulu to Electra when there is at least one difference between Electra and Fulu blocks.
_, ok := req.Block.(*eth.GenericSignedBeaconBlock_Fulu)
return ok
}))
server := &Server{
@@ -2393,16 +2606,42 @@ func TestPublishBlockV2SSZ(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk structs.SignedBeaconBlockContentsElectra
err := json.Unmarshal([]byte(rpctesting.ElectraBlockContents), &blk)
var blk structs.SignedBeaconBlockContentsFulu
err := json.Unmarshal([]byte(rpctesting.FuluBlockContents), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
require.NoError(t, err)
ssz, err := genericBlock.GetElectra().MarshalSSZ()
ssz, err := genericBlock.GetFulu().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Electra))
request.Header.Set(api.VersionHeader, version.String(version.Fulu))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlockV2(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
})
t.Run("Fulu", func(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
_, ok := req.Block.(*eth.GenericSignedBeaconBlock_Fulu)
return ok
}))
server := &Server{
V1Alpha1ValidatorServer: v1alpha1Server,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk structs.SignedBeaconBlockContentsFulu
err := json.Unmarshal([]byte(rpctesting.FuluBlockContents), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
require.NoError(t, err)
ssz, err := genericBlock.GetFulu().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Fulu))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlockV2(writer, request)
@@ -2603,11 +2842,12 @@ func TestPublishBlindedBlockV2(t *testing.T) {
t.Run("Blinded Electra", func(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_BlindedElectra)
converted, err := structs.BlindedBeaconBlockElectraFromConsensus(block.BlindedElectra.Message)
// Convert back Fulu to Electra when there is at least one difference between Electra and Fulu blocks.
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_BlindedFulu)
converted, err := structs.BlindedBeaconBlockFuluFromConsensus(block.BlindedFulu.Message)
require.NoError(t, err)
var signedblock *structs.SignedBlindedBeaconBlockElectra
err = json.Unmarshal([]byte(rpctesting.BlindedElectraBlock), &signedblock)
var signedblock *structs.SignedBlindedBeaconBlockFulu
err = json.Unmarshal([]byte(rpctesting.BlindedFuluBlock), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, converted, signedblock.Message)
return ok
@@ -2624,6 +2864,30 @@ func TestPublishBlindedBlockV2(t *testing.T) {
server.PublishBlindedBlockV2(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
})
t.Run("Blinded Fulu", func(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_BlindedFulu)
converted, err := structs.BlindedBeaconBlockFuluFromConsensus(block.BlindedFulu.Message)
require.NoError(t, err)
var signedblock *structs.SignedBlindedBeaconBlockFulu
err = json.Unmarshal([]byte(rpctesting.BlindedFuluBlock), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, converted, signedblock.Message)
return ok
}))
server := &Server{
V1Alpha1ValidatorServer: v1alpha1Server,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(rpctesting.BlindedFuluBlock)))
request.Header.Set(api.VersionHeader, version.String(version.Fulu))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlindedBlockV2(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
})
t.Run("invalid block", func(t *testing.T) {
server := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: false},
@@ -2823,7 +3087,8 @@ func TestPublishBlindedBlockV2SSZ(t *testing.T) {
t.Run("Electra", func(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
_, ok := req.Block.(*eth.GenericSignedBeaconBlock_BlindedElectra)
// Convert back Fulu to Electra when there is at least one difference between Electra and Fulu blocks.
_, ok := req.Block.(*eth.GenericSignedBeaconBlock_BlindedFulu)
return ok
}))
server := &Server{
@@ -2831,16 +3096,42 @@ func TestPublishBlindedBlockV2SSZ(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk structs.SignedBlindedBeaconBlockElectra
err := json.Unmarshal([]byte(rpctesting.BlindedElectraBlock), &blk)
var blk structs.SignedBlindedBeaconBlockFulu
err := json.Unmarshal([]byte(rpctesting.BlindedFuluBlock), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
require.NoError(t, err)
ssz, err := genericBlock.GetBlindedElectra().MarshalSSZ()
ssz, err := genericBlock.GetBlindedFulu().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Electra))
request.Header.Set(api.VersionHeader, version.String(version.Fulu))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlindedBlock(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
})
t.Run("Fulu", func(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
_, ok := req.Block.(*eth.GenericSignedBeaconBlock_BlindedFulu)
return ok
}))
server := &Server{
V1Alpha1ValidatorServer: v1alpha1Server,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk structs.SignedBlindedBeaconBlockFulu
err := json.Unmarshal([]byte(rpctesting.BlindedFuluBlock), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
require.NoError(t, err)
ssz, err := genericBlock.GetBlindedFulu().MarshalSSZ()
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader(ssz))
request.Header.Set("Content-Type", api.OctetStreamMediaType)
request.Header.Set(api.VersionHeader, version.String(version.Fulu))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlindedBlock(writer, request)

View File

@@ -79,6 +79,8 @@ func TestGetSpec(t *testing.T) {
config.DenebForkEpoch = 105
config.ElectraForkVersion = []byte("ElectraForkVersion")
config.ElectraForkEpoch = 107
config.FuluForkVersion = []byte("FuluForkVersion")
config.FuluForkEpoch = 109
config.BLSWithdrawalPrefixByte = byte('b')
config.ETH1AddressWithdrawalPrefixByte = byte('c')
config.GenesisDelay = 24
@@ -189,7 +191,7 @@ func TestGetSpec(t *testing.T) {
data, ok := resp.Data.(map[string]interface{})
require.Equal(t, true, ok)
assert.Equal(t, 158, len(data))
assert.Equal(t, 165, len(data))
for k, v := range data {
t.Run(k, func(t *testing.T) {
switch k {
@@ -267,6 +269,10 @@ func TestGetSpec(t *testing.T) {
assert.Equal(t, "0x"+hex.EncodeToString([]byte("ElectraForkVersion")), v)
case "ELECTRA_FORK_EPOCH":
assert.Equal(t, "107", v)
case "FULU_FORK_VERSION":
assert.Equal(t, "0x"+hex.EncodeToString([]byte("FuluForkVersion")), v)
case "FULU_FORK_EPOCH":
assert.Equal(t, "109", v)
case "MIN_ANCHOR_POW_BLOCK_DIFFICULTY":
assert.Equal(t, "1000", v)
case "BLS_WITHDRAWAL_PREFIX":
@@ -530,6 +536,18 @@ func TestGetSpec(t *testing.T) {
assert.Equal(t, "6", v)
case "MAX_BLOBS_PER_BLOCK_ELECTRA":
assert.Equal(t, "9", v)
case "MAX_REQUEST_BLOB_SIDECARS_ELECTRA":
assert.Equal(t, "1152", v)
case "MAX_REQUEST_BLOB_SIDECARS_FULU":
assert.Equal(t, "1536", v)
case "NUMBER_OF_CUSTODY_GROUPS":
assert.Equal(t, "128", v)
case "CUSTODY_REQUIREMENT":
assert.Equal(t, "4", v)
case "SAMPLES_PER_SLOT":
assert.Equal(t, "8", v)
case "MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS":
assert.Equal(t, "4096", v)
default:
t.Errorf("Incorrect key: %s", k)
}

View File

@@ -94,6 +94,12 @@ func (s *Server) getBeaconStateV2(ctx context.Context, w http.ResponseWriter, id
httputil.HandleError(w, errMsgStateFromConsensus+": "+err.Error(), http.StatusInternalServerError)
return
}
case version.Fulu:
respSt, err = structs.BeaconStateFuluFromConsensus(st)
if err != nil {
httputil.HandleError(w, errMsgStateFromConsensus+": "+err.Error(), http.StatusInternalServerError)
return
}
default:
httputil.HandleError(w, "Unsupported state version", http.StatusInternalServerError)
return

View File

@@ -195,6 +195,34 @@ func TestGetBeaconStateV2(t *testing.T) {
require.NoError(t, json.Unmarshal(resp.Data, st))
assert.Equal(t, "123", st.Slot)
})
t.Run("Fulu", func(t *testing.T) {
fakeState, err := util.NewBeaconStateFulu()
require.NoError(t, err)
require.NoError(t, fakeState.SetSlot(123))
chainService := &blockchainmock.ChainService{}
s := &Server{
Stater: &testutil.MockStater{
BeaconState: fakeState,
},
HeadFetcher: chainService,
OptimisticModeFetcher: chainService,
FinalizationFetcher: chainService,
}
request := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v2/debug/beacon/states/{state_id}", nil)
request.SetPathValue("state_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetBeaconStateV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetBeaconStateV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, version.String(version.Fulu), resp.Version)
st := &structs.BeaconStateElectra{}
require.NoError(t, json.Unmarshal(resp.Data, st))
assert.Equal(t, "123", st.Slot)
})
t.Run("execution optimistic", func(t *testing.T) {
parentRoot := [32]byte{'a'}
blk := util.NewBeaconBlock()

View File

@@ -380,7 +380,7 @@ func (es *eventStreamer) writeOutbox(ctx context.Context, w *streamingResponseWr
case <-ctx.Done():
return ctx.Err()
case rf := <-es.outbox:
// We don't want to call Flush until we've exhausted all the writes - it's always preferrable to
// We don't want to call Flush until we've exhausted all the writes - it's always preferable to
// just keep draining the outbox and rely on the underlying Write code to flush+block when it
// needs to based on buffering. Whenever we fill the buffer with a string of writes, the underlying
// code will flush on its own, so it's better to explicitly flush only once, after we've totally

View File

@@ -10,7 +10,6 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
lightclient "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/light-client"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/rpc/eth/shared"
"github.com/prysmaticlabs/prysm/v5/config/features"
"github.com/prysmaticlabs/prysm/v5/config/params"
@@ -35,41 +34,34 @@ func (s *Server) GetLightClientBootstrap(w http.ResponseWriter, req *http.Reques
// Get the block
blockRootParam, err := hexutil.Decode(req.PathValue("block_root"))
if err != nil {
httputil.HandleError(w, "invalid block root: "+err.Error(), http.StatusBadRequest)
httputil.HandleError(w, "Invalid block root: "+err.Error(), http.StatusBadRequest)
return
}
blockRoot := bytesutil.ToBytes32(blockRootParam)
blk, err := s.Blocker.Block(ctx, blockRoot[:])
if !shared.WriteBlockFetchError(w, blk, err) {
bootstrap, err := s.BeaconDB.LightClientBootstrap(ctx, blockRoot[:])
if err != nil {
httputil.HandleError(w, "Could not get light client bootstrap: "+err.Error(), http.StatusInternalServerError)
return
}
if bootstrap == nil {
httputil.HandleError(w, "Light client bootstrap not found", http.StatusNotFound)
return
}
// Get the state
state, err := s.Stater.StateBySlot(ctx, blk.Block().Slot())
if err != nil {
httputil.HandleError(w, "could not get state: "+err.Error(), http.StatusInternalServerError)
return
}
bootstrap, err := lightclient.NewLightClientBootstrapFromBeaconState(ctx, s.ChainInfoFetcher.CurrentSlot(), state, blk)
if err != nil {
httputil.HandleError(w, "could not get light client bootstrap: "+err.Error(), http.StatusInternalServerError)
return
}
w.Header().Set(api.VersionHeader, version.String(bootstrap.Version()))
if httputil.RespondWithSsz(req) {
ssz, err := bootstrap.MarshalSSZ()
if err != nil {
httputil.HandleError(w, "could not marshal bootstrap to SSZ: "+err.Error(), http.StatusInternalServerError)
httputil.HandleError(w, "Could not marshal bootstrap to SSZ: "+err.Error(), http.StatusInternalServerError)
return
}
httputil.WriteSsz(w, ssz, "light_client_bootstrap.ssz")
} else {
data, err := structs.LightClientBootstrapFromConsensus(bootstrap)
if err != nil {
httputil.HandleError(w, "could not marshal bootstrap to JSON: "+err.Error(), http.StatusInternalServerError)
httputil.HandleError(w, "Could not marshal bootstrap to JSON: "+err.Error(), http.StatusInternalServerError)
return
}
response := &structs.LightClientBootstrapResponse{

View File

@@ -46,33 +46,35 @@ func TestLightClientHandler_GetLightClientBootstrap(t *testing.T) {
cfg.CapellaForkEpoch = 2
cfg.DenebForkEpoch = 3
cfg.ElectraForkEpoch = 4
cfg.FuluForkEpoch = 5
params.OverrideBeaconConfig(cfg)
t.Run("altair", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestAltair()
slot := primitives.Slot(params.BeaconConfig().AltairForkEpoch * primitives.Epoch(params.BeaconConfig().SlotsPerEpoch)).Add(1)
stateRoot, err := l.State.HashTreeRoot(l.Ctx)
blockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
bootstrap, err := lightclient.NewLightClientBootstrapFromBeaconState(l.Ctx, slot, l.State, l.Block)
require.NoError(t, err)
db := dbtesting.SetupDB(t)
err = db.SaveLightClientBootstrap(l.Ctx, blockRoot[:], bootstrap)
require.NoError(t, err)
mockBlocker := &testutil.MockBlocker{BlockToReturn: l.Block}
mockChainService := &mock.ChainService{Optimistic: true, Slot: &slot}
mockChainInfoFetcher := &mock.ChainService{Slot: &slot}
s := &Server{
Stater: &testutil.MockStater{StatesBySlot: map[primitives.Slot]state.BeaconState{
slot: l.State,
}},
Blocker: mockBlocker,
HeadFetcher: mockChainService,
ChainInfoFetcher: mockChainInfoFetcher,
BeaconDB: db,
}
request := httptest.NewRequest("GET", "http://foo.com/", nil)
request.SetPathValue("block_root", hexutil.Encode(stateRoot[:]))
request.SetPathValue("block_root", hexutil.Encode(blockRoot[:]))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetLightClientBootstrap(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
var resp structs.LightClientBootstrapResponse
err = json.Unmarshal(writer.Body.Bytes(), &resp)
require.NoError(t, err)
@@ -89,6 +91,32 @@ func TestLightClientHandler_GetLightClientBootstrap(t *testing.T) {
require.NotNil(t, resp.Data.CurrentSyncCommittee)
require.NotNil(t, resp.Data.CurrentSyncCommitteeBranch)
})
t.Run("altair - no bootstrap found", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestAltair()
slot := primitives.Slot(params.BeaconConfig().AltairForkEpoch * primitives.Epoch(params.BeaconConfig().SlotsPerEpoch)).Add(1)
blockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
bootstrap, err := lightclient.NewLightClientBootstrapFromBeaconState(l.Ctx, slot, l.State, l.Block)
require.NoError(t, err)
db := dbtesting.SetupDB(t)
err = db.SaveLightClientBootstrap(l.Ctx, blockRoot[1:], bootstrap)
require.NoError(t, err)
s := &Server{
BeaconDB: db,
}
request := httptest.NewRequest("GET", "http://foo.com/", nil)
request.SetPathValue("block_root", hexutil.Encode(blockRoot[:]))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetLightClientBootstrap(writer, request)
require.Equal(t, http.StatusNotFound, writer.Code)
})
t.Run("bellatrix", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestBellatrix()
@@ -96,16 +124,16 @@ func TestLightClientHandler_GetLightClientBootstrap(t *testing.T) {
blockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
mockBlocker := &testutil.MockBlocker{BlockToReturn: l.Block}
mockChainService := &mock.ChainService{Optimistic: true, Slot: &slot}
mockChainInfoFetcher := &mock.ChainService{Slot: &slot}
bootstrap, err := lightclient.NewLightClientBootstrapFromBeaconState(l.Ctx, slot, l.State, l.Block)
require.NoError(t, err)
db := dbtesting.SetupDB(t)
err = db.SaveLightClientBootstrap(l.Ctx, blockRoot[:], bootstrap)
require.NoError(t, err)
s := &Server{
Stater: &testutil.MockStater{StatesBySlot: map[primitives.Slot]state.BeaconState{
slot: l.State,
}},
Blocker: mockBlocker,
HeadFetcher: mockChainService,
ChainInfoFetcher: mockChainInfoFetcher,
BeaconDB: db,
}
request := httptest.NewRequest("GET", "http://foo.com/", nil)
request.SetPathValue("block_root", hexutil.Encode(blockRoot[:]))
@@ -137,16 +165,16 @@ func TestLightClientHandler_GetLightClientBootstrap(t *testing.T) {
blockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
mockBlocker := &testutil.MockBlocker{BlockToReturn: l.Block}
mockChainService := &mock.ChainService{Optimistic: true, Slot: &slot}
mockChainInfoFetcher := &mock.ChainService{Slot: &slot}
bootstrap, err := lightclient.NewLightClientBootstrapFromBeaconState(l.Ctx, slot, l.State, l.Block)
require.NoError(t, err)
db := dbtesting.SetupDB(t)
err = db.SaveLightClientBootstrap(l.Ctx, blockRoot[:], bootstrap)
require.NoError(t, err)
s := &Server{
Stater: &testutil.MockStater{StatesBySlot: map[primitives.Slot]state.BeaconState{
slot: l.State,
}},
Blocker: mockBlocker,
HeadFetcher: mockChainService,
ChainInfoFetcher: mockChainInfoFetcher,
BeaconDB: db,
}
request := httptest.NewRequest("GET", "http://foo.com/", nil)
request.SetPathValue("block_root", hexutil.Encode(blockRoot[:]))
@@ -178,16 +206,16 @@ func TestLightClientHandler_GetLightClientBootstrap(t *testing.T) {
blockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
mockBlocker := &testutil.MockBlocker{BlockToReturn: l.Block}
mockChainService := &mock.ChainService{Optimistic: true, Slot: &slot}
mockChainInfoFetcher := &mock.ChainService{Slot: &slot}
bootstrap, err := lightclient.NewLightClientBootstrapFromBeaconState(l.Ctx, slot, l.State, l.Block)
require.NoError(t, err)
db := dbtesting.SetupDB(t)
err = db.SaveLightClientBootstrap(l.Ctx, blockRoot[:], bootstrap)
require.NoError(t, err)
s := &Server{
Stater: &testutil.MockStater{StatesBySlot: map[primitives.Slot]state.BeaconState{
slot: l.State,
}},
Blocker: mockBlocker,
HeadFetcher: mockChainService,
ChainInfoFetcher: mockChainInfoFetcher,
BeaconDB: db,
}
request := httptest.NewRequest("GET", "http://foo.com/", nil)
request.SetPathValue("block_root", hexutil.Encode(blockRoot[:]))
@@ -219,16 +247,16 @@ func TestLightClientHandler_GetLightClientBootstrap(t *testing.T) {
blockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
mockBlocker := &testutil.MockBlocker{BlockToReturn: l.Block}
mockChainService := &mock.ChainService{Optimistic: true, Slot: &slot}
mockChainInfoFetcher := &mock.ChainService{Slot: &slot}
bootstrap, err := lightclient.NewLightClientBootstrapFromBeaconState(l.Ctx, slot, l.State, l.Block)
require.NoError(t, err)
db := dbtesting.SetupDB(t)
err = db.SaveLightClientBootstrap(l.Ctx, blockRoot[:], bootstrap)
require.NoError(t, err)
s := &Server{
Stater: &testutil.MockStater{StatesBySlot: map[primitives.Slot]state.BeaconState{
slot: l.State,
}},
Blocker: mockBlocker,
HeadFetcher: mockChainService,
ChainInfoFetcher: mockChainInfoFetcher,
BeaconDB: db,
}
request := httptest.NewRequest("GET", "http://foo.com/", nil)
request.SetPathValue("block_root", hexutil.Encode(blockRoot[:]))
@@ -1821,7 +1849,7 @@ func createUpdate(t *testing.T, v int) (interfaces.LightClientUpdate, error) {
StateRoot: sampleRoot,
BodyRoot: sampleRoot,
},
Execution: &enginev1.ExecutionPayloadHeaderElectra{
Execution: &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
@@ -1839,6 +1867,34 @@ func createUpdate(t *testing.T, v int) (interfaces.LightClientUpdate, error) {
require.NoError(t, err)
st, err = util.NewBeaconStateElectra()
require.NoError(t, err)
case version.Fulu:
slot = primitives.Slot(config.FuluForkEpoch * primitives.Epoch(config.SlotsPerEpoch)).Add(1)
header, err = light_client.NewWrappedHeader(&pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{
Slot: 1,
ProposerIndex: primitives.ValidatorIndex(rand.Int()),
ParentRoot: sampleRoot,
StateRoot: sampleRoot,
BodyRoot: sampleRoot,
},
Execution: &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
WithdrawalsRoot: make([]byte, fieldparams.RootLength),
},
ExecutionBranch: sampleExecutionBranch,
})
require.NoError(t, err)
st, err = util.NewBeaconStateFulu()
require.NoError(t, err)
default:
return nil, fmt.Errorf("unsupported version %s", version.String(v))
}

File diff suppressed because one or more lines are too long

View File

@@ -299,6 +299,18 @@ func (s *Server) produceBlockV3(ctx context.Context, w http.ResponseWriter, r *h
handleProduceElectraV3(w, isSSZ, electraBlockContents, v1alpha1resp.PayloadValue, consensusBlockValue)
return
}
blindedFuluBlockContents, ok := v1alpha1resp.Block.(*eth.GenericBeaconBlock_BlindedFulu)
if ok {
w.Header().Set(api.VersionHeader, version.String(version.Fulu))
handleProduceBlindedFuluV3(w, isSSZ, blindedFuluBlockContents, v1alpha1resp.PayloadValue, consensusBlockValue)
return
}
fuluBlockContents, ok := v1alpha1resp.Block.(*eth.GenericBeaconBlock_Fulu)
if ok {
w.Header().Set(api.VersionHeader, version.String(version.Fulu))
handleProduceFuluV3(w, isSSZ, fuluBlockContents, v1alpha1resp.PayloadValue, consensusBlockValue)
return
}
}
func getConsensusBlockValue(ctx context.Context, blockRewardsFetcher rewards.BlockRewardsFetcher, i interface{} /* block as argument */) (string, *httputil.DefaultJsonError) {
@@ -670,3 +682,74 @@ func handleProduceElectraV3(
Data: jsonBytes,
})
}
func handleProduceBlindedFuluV3(
w http.ResponseWriter,
isSSZ bool,
blk *eth.GenericBeaconBlock_BlindedFulu,
executionPayloadValue string,
consensusPayloadValue string,
) {
if isSSZ {
sszResp, err := blk.BlindedFulu.MarshalSSZ()
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
httputil.WriteSsz(w, sszResp, "blindedFuluBlockContents.ssz")
return
}
blindedBlock, err := structs.BlindedBeaconBlockFuluFromConsensus(blk.BlindedFulu)
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
jsonBytes, err := json.Marshal(blindedBlock)
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
httputil.WriteJson(w, &structs.ProduceBlockV3Response{
Version: version.String(version.Fulu),
ExecutionPayloadBlinded: true,
ExecutionPayloadValue: executionPayloadValue,
ConsensusBlockValue: consensusPayloadValue,
Data: jsonBytes,
})
}
func handleProduceFuluV3(
w http.ResponseWriter,
isSSZ bool,
blk *eth.GenericBeaconBlock_Fulu,
executionPayloadValue string,
consensusBlockValue string,
) {
if isSSZ {
sszResp, err := blk.Fulu.MarshalSSZ()
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
httputil.WriteSsz(w, sszResp, "fuluBlockContents.ssz")
return
}
blockContents, err := structs.BeaconBlockContentsFuluFromConsensus(blk.Fulu)
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
jsonBytes, err := json.Marshal(blockContents)
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
httputil.WriteJson(w, &structs.ProduceBlockV3Response{
Version: version.String(version.Fulu),
ExecutionPayloadBlinded: false,
ExecutionPayloadValue: executionPayloadValue, // mev not available at this point
ConsensusBlockValue: consensusBlockValue,
Data: jsonBytes,
})
}

View File

@@ -291,14 +291,18 @@ func (p *BeaconDbBlocker) blobsFromReconstructedDataColumns(
// This function expects data columns to be stored (aka. no blobs).
// If not enough data columns are available to extract blobs from them (either directly or after reconstruction), an error is returned.
func (p *BeaconDbBlocker) blobsFromStoredDataColumns(indices map[uint64]bool, rootBytes []byte) ([]*blocks.VerifiedROBlob, *core.RpcError) {
// Get our count of columns we should custody.
beaconConfig := params.BeaconConfig()
numberOfColumns := beaconConfig.NumberOfColumns
numberOfCustodyGroups := beaconConfig.NumberOfCustodyGroups
columnsPerGroup := numberOfColumns / numberOfCustodyGroups
root := bytesutil.ToBytes32(rootBytes)
// Get the number of columns we should custody.
custodyColumnsCount := peerdas.CustodyColumnCount()
// Get the number of groups we should custody.
custodyGroupCount := peerdas.CustodyGroupCount()
// Determine if we are theoretically able to reconstruct the data columns.
canTheoreticallyReconstruct := peerdas.CanSelfReconstruct(custodyColumnsCount)
canTheoreticallyReconstruct := peerdas.CanSelfReconstruct(custodyGroupCount)
// Retrieve the data columns indice actually we store.
storedDataColumnsIndices, err := p.BlobStorage.ColumnIndices(root)
@@ -307,10 +311,11 @@ func (p *BeaconDbBlocker) blobsFromStoredDataColumns(indices map[uint64]bool, ro
return nil, &core.RpcError{Err: errors.Wrap(err, "could not retrieve columns indices stored for block root"), Reason: core.Internal}
}
storedDataColumnsCount := uint64(len(storedDataColumnsIndices))
storedDataColumnCount := uint64(len(storedDataColumnsIndices))
storedGroupCount := storedDataColumnCount / columnsPerGroup
// Determine is we acually able to reconstruct the data columns.
canActuallyReconstruct := peerdas.CanSelfReconstruct(storedDataColumnsCount)
canActuallyReconstruct := peerdas.CanSelfReconstruct(storedGroupCount)
if !canTheoreticallyReconstruct && !canActuallyReconstruct {
// There is no way to reconstruct the data columns.
@@ -325,7 +330,7 @@ func (p *BeaconDbBlocker) blobsFromStoredDataColumns(indices map[uint64]bool, ro
if canTheoreticallyReconstruct && !canActuallyReconstruct {
// This case may happen if the node started recently with a big enough custody count, but did not (yet) backfill all the columns.
return nil, &core.RpcError{
Err: errors.Errorf("not all data columns are available for this blob. Wanted: %d, got: %d. Please retry later.", nonExtendedColumnsCount, storedDataColumnsCount),
Err: errors.Errorf("not all data columns are available for this blob. Wanted: %d, got: %d. Please retry later.", nonExtendedColumnsCount, storedDataColumnCount),
Reason: core.NotFound}
}
@@ -487,12 +492,12 @@ func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, indices map[uint
blockSlot := b.Block().Slot()
// Get the first peerDAS epoch.
electraForkEpoch := params.BeaconConfig().ElectraForkEpoch
fuluForkEpoch := params.BeaconConfig().FuluForkEpoch
// Compute the first peerDAS slot.
peerDASStartSlot := primitives.Slot(math.MaxUint64)
if electraForkEpoch != primitives.Epoch(math.MaxUint64) {
peerDASStartSlot, err = slots.EpochStart(electraForkEpoch)
if fuluForkEpoch != primitives.Epoch(math.MaxUint64) {
peerDASStartSlot, err = slots.EpochStart(fuluForkEpoch)
if err != nil {
return nil, &core.RpcError{Err: errors.Wrap(err, "could not calculate peerDAS start slot"), Reason: core.Internal}
}

View File

@@ -56,9 +56,7 @@ func (_ *Server) SetLoggingLevel(_ context.Context, req *pbrpc.LoggingLevelReque
// Libp2p specific logging.
golog.SetAllLoggers(golog.LevelDebug)
// Geth specific logging.
glogger := gethlog.NewGlogHandler(gethlog.StreamHandler(os.Stderr, gethlog.TerminalFormat(true)))
glogger.Verbosity(gethlog.LvlTrace)
gethlog.Root().SetHandler(glogger)
gethlog.SetDefault(gethlog.NewLogger(gethlog.NewTerminalHandlerWithLevel(os.Stderr, gethlog.LvlTrace, true)))
}
return &empty.Empty{}, nil
}

Some files were not shown because too many files have changed in this diff Show More