Compare commits


33 Commits

Author SHA1 Message Date
Taranpreet26311
f265eb7a84 Merge branch 'develop' into Stale_PR_checker 2024-06-25 17:40:29 +04:00
Taranpreet26311
6562036c54 PR to add Stale pull request checker for Prysm 2024-06-25 17:39:46 +04:00
Radosław Kapka
b8aad84285 EIP-7549: attestation pool (#14121)
* implementation

* test fixes

* Electra tests

* remove aggregator tests

* id comments and tests

* make Id equal to [33]byte
2024-06-25 13:18:07 +00:00
Sammy Rosso
5f0d6074d6 Prysmctl ui bug (#14140)
* Fix ui bug

* Better logging
2024-06-25 13:13:42 +00:00
james-prysm
9d6a2f5390 Remove electra duplicate helpers (#14138)
* removing duplicate helper functions to reduce 6110 size

* linting
2024-06-24 21:22:46 +00:00
Preston Van Loon
490ddbf782 Enable golang.org/x/tools/go/analysis/errorsas static analysis check (#14135) 2024-06-24 17:20:45 +00:00
Radosław Kapka
adc875b20d EIP-7549: p2p and sync (#14085)
* EIP-7549: p2p and sync

* small cleanup

* fuzz fix

* deepsource

* review

* fix ineffectual assignment

* fix pubsub

* update ComputeSubnetForAttestation

* review

* review
2024-06-24 13:57:11 +00:00
kasey
8cd249c1c8 update codegen dep and cleanup organization (#14127)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-06-20 23:33:54 +00:00
Preston Van Loon
305d5850e7 ssz: Move stateutil.SliceRoot to ssz package (#14123) 2024-06-20 20:55:15 +00:00
Radosław Kapka
df3a9f218d More tracing in the validator client (#14125)
* More tracing in the validator client

* change context expectation in tests
2024-06-20 16:13:53 +00:00
Preston Van Loon
ae451a3a02 Update github.com/prysmaticlabs/go-bitfield (#14120) 2024-06-18 14:33:06 +00:00
Radosław Kapka
17561a6576 Do not fail production when consensus block value is unavailable (#14111)
* Do not fail production when consensus block value is unavailable

* add log

* use empty string instead of 0

* build fix
2024-06-14 18:30:40 +00:00
james-prysm
b842b7ea01 proposer settings log ux (#14106)
* adding some logs to improve debugging

* fixing log functions

* Update config/proposer/loader/loader.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* addressing feedback

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-06-14 14:16:43 +00:00
sam (jgscripts)
9bbe12e28c Correcting spelling errors (#14107)
* fix small spelling error

* fix small grammar error

* fix small spelling errors

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-06-14 13:41:33 +00:00
Delweng
0674cf64cc chore: make deepsource happy (#14081)
* chore(pruner): return error directly

Signed-off-by: jsvisa <delweng@gmail.com>

* chore(rpc): unused method receiver

Signed-off-by: jsvisa <delweng@gmail.com>

* fix(rpc): use net.JoinHostPort instead of fmt.Sprintf

Signed-off-by: jsvisa <delweng@gmail.com>

* chore(amiddleware):use http.NoBody instead of nil

Signed-off-by: jsvisa <delweng@gmail.com>

* chore(rpc): rm notused params

Signed-off-by: jsvisa <delweng@gmail.com>

* chore(p2p): comment

Signed-off-by: jsvisa <delweng@gmail.com>

* feat(db/prune): reduce complexity

Signed-off-by: jsvisa <delweng@gmail.com>

* chore(db/pruner): name

Signed-off-by: jsvisa <delweng@gmail.com>

* Revert "chore(pruner): return error directly"

This reverts commit d76e745f60.

Signed-off-by: jsvisa <delweng@gmail.com>

* revert back pruner.go

Signed-off-by: jsvisa <delweng@gmail.com>

---------

Signed-off-by: jsvisa <delweng@gmail.com>
2024-06-13 16:12:04 +00:00
james-prysm
3413d05b34 Electra: field renames (#14091)
* renaming functions and fields based on consensus changes

* execution api rename

* fixing test

* reverting spectests changes, it should be changed with new version

* reverting temporarily

* revert exclusions
2024-06-12 15:16:31 +00:00
Patrice Vignola
070a765d24 Add stub for VerifySignature when build tag blst_disabled is set (#12246)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-06-12 12:46:23 +00:00
Preston Van Loon
8ac1647436 Delete unused config (#14016) 2024-06-11 15:51:17 +00:00
james-prysm
dfe31c9242 adding in softer check for content type (#14097) 2024-06-10 17:15:23 +00:00
Radosław Kapka
b7866be3a9 EIP 7549 spectests (#14027)
* EIP 7549 spectests

* merge fix
2024-06-10 16:33:43 +00:00
Radosław Kapka
8413660d5f Keep only the latest value in the health channel (#14087)
* Increase health tracker channel buffer size

* keep only the latest value

* Make health test blocking as a regression test for PR #14807

* Fix new race conditions in the MockHealthClient

---------

Co-authored-by: Preston Van Loon <preston@pvl.dev>
2024-06-06 18:45:35 +00:00
Radosław Kapka
e037491756 Deprecate EnableDebugRPCEndpoints flag (#14015)
* Deprecate `EnableDebugRPCEndpoints` flag

* test fix

* add flag to deprecated list

* disable by default

* test fixes
2024-06-05 12:04:21 +00:00
kasey
ea2624b5ab always close cache warm chan to prevent blocking (#14080)
* always close cache warm chan to prevent blocking

* test that waitForCache does not block

* combine defers to reduce cognitive overhead

* lint

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-06-04 22:08:06 +00:00
james-prysm
1b40f941cf middleware for content type and accept headers (#14075)
* middleware for content type

* adding accept middleware too and tests

* Update beacon-chain/rpc/endpoints.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/endpoints.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/endpoints.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/endpoints.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* including radek's review

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-06-04 20:38:21 +00:00
terence
57830435d7 Process epoch error to use correct state version (#14069)
* Process epoch error to use correct state version

* Fix return instead
2024-06-03 19:54:28 +00:00
Nishant Das
44d850de51 Change It To Debug (#14072) 2024-06-03 10:50:00 +00:00
Radosław Kapka
b08e691127 Organize validator flags (#14028)
* Organize validator flags

* whitespace

* fix comment in test

* remove unneeded flags
2024-05-31 18:36:18 +00:00
Radosław Kapka
968e82b02d EIP-7549 gRPC (part 1) (#14055)
* interfaces move

* build fix

* remove annoying warning

* more build fixes

* review

* core code

* tests part 1

* tests part 2

* TranslateParticipation doesn't need Electra

* remove unused function

* pending atts don't need Electra

* tests part 3

* build fixes

* review

* EIP-7549 gRPC part 1
2024-05-31 17:24:06 +00:00
james-prysm
de04ce8329 EIP-7002:Execution layer triggerable withdrawals (#14031)
* wip fixing 7002 branch

* fixing tests and functions

* fixing linting

* temp fix for transition

* adding unit tests for method

* fixing linting

* partial review from terence

* Update withdrawals.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update withdrawals.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update withdrawals.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update beacon-chain/core/electra/withdrawals.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* addressing feedback

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-05-31 16:55:49 +00:00
terence
5efecff631 Fix mockgen sh (#14068)
* Fix mockgen sh

* Radek's suggestion

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Generate prysm chain client

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-05-31 16:24:17 +00:00
Radosław Kapka
3ab759e163 One more validator client cleanup (#14048)
* interface names

* interface method names

* inspection

* regenerate pb and mock

* Revert beacon node changes

* build fix

* review

* more functions

* combine parameters
2024-05-31 15:53:58 +00:00
james-prysm
836d369c6c api fix for panic on unsynced unfound block (#14063)
* api fix for panic

* adding test

* fixing how we handle the error
2024-05-31 14:46:38 +00:00
Nishant Das
568273453b Update Libp2p Dependencies (#14060)
* Update to v0.35.0 and v0.11.0

* Update Protobuf

* Update bazel deps
2024-05-30 08:56:55 +00:00
441 changed files with 28581 additions and 31454 deletions

View File

@@ -22,7 +22,6 @@ coverage --define=coverage_enabled=1
build --workspace_status_command=./hack/workspace_status.sh
build --define blst_disabled=false
build --compilation_mode=opt
run --define blst_disabled=false
build:blst_disabled --define blst_disabled=true

.github/workflows/stale_pr_checker.yml (vendored, new file, 27 lines)
View File

@@ -0,0 +1,27 @@
name: Find stale PRs
on:
  schedule:
    - cron: '0 13 * * 1'
jobs:
  fetch-PRs:
    runs-on: ubuntu-latest
    steps:
      - name: Fetch pull requests from here
        id: local
        uses: paritytech/stale-pr-finder@main
        with:
          GITHUB_TOKEN: ${{ github.token }}
          repo: prysm
      - name: Post to a Slack channel
        id: slack
        uses: slackapi/slack-github-action@v1.23.0
        with:
          channel-id: ${{ secrets.CHANNEL }}
          slack-message: |
            Stale PRs this week:
            ${{ steps.local.outputs.message }}
        env:
          SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }}
          LOCAL_PR: ${{ steps.local.outputs.message }}

View File

@@ -224,6 +224,7 @@ nogo(
"@org_golang_x_tools//go/analysis/passes/deepequalerrors:go_default_library",
"@org_golang_x_tools//go/analysis/passes/defers:go_default_library",
"@org_golang_x_tools//go/analysis/passes/directive:go_default_library",
"@org_golang_x_tools//go/analysis/passes/errorsas:go_default_library",
# fieldalignment disabled
#"@org_golang_x_tools//go/analysis/passes/fieldalignment:go_default_library",
"@org_golang_x_tools//go/analysis/passes/findcall:go_default_library",

View File

@@ -48,8 +48,13 @@ func (n *NodeHealthTracker) CheckHealth(ctx context.Context) bool {
if isStatusChanged {
// Update the health status
n.isHealthy = &newStatus
// Send the new status to the health channel
n.healthChan <- newStatus
// Send the new status to the health channel, potentially overwriting the existing value
select {
case <-n.healthChan:
n.healthChan <- newStatus
default:
n.healthChan <- newStatus
}
}
return newStatus
}
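Taken on its own, the select above is a non-blocking "replace" on a one-slot buffered channel; a minimal standalone sketch, assuming a single writer and a channel created with make(chan bool, 1):

    // replaceLatest overwrites any unread value so the send never blocks.
    func replaceLatest(ch chan bool, v bool) {
        select {
        case <-ch: // drop the unread previous status
        default: // buffer was already empty
        }
        ch <- v // cannot block: the buffer slot is free under the single-writer assumption
    }

Readers that only care about the most recent status can then receive at their own pace without stalling CheckHealth.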

View File

@@ -87,12 +87,6 @@ func TestNodeHealth_Concurrency(t *testing.T) {
// Number of goroutines to spawn for both reading and writing
numGoroutines := 6
go func() {
for range n.HealthUpdates() {
// Consume values to avoid blocking on channel send.
}
}()
wg.Add(numGoroutines * 2) // for readers and writers
// Concurrently update health status

View File

@@ -3,6 +3,7 @@ package testing
import (
"context"
"reflect"
"sync"
"github.com/prysmaticlabs/prysm/v5/api/client/beacon/iface"
"go.uber.org/mock/gomock"
@@ -16,6 +17,7 @@ var (
type MockHealthClient struct {
ctrl *gomock.Controller
recorder *MockHealthClientMockRecorder
sync.Mutex
}
// MockHealthClientMockRecorder is the mock recorder for MockHealthClient.
@@ -25,6 +27,8 @@ type MockHealthClientMockRecorder struct {
// IsHealthy mocks base method.
func (m *MockHealthClient) IsHealthy(arg0 context.Context) bool {
m.Lock()
defer m.Unlock()
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "IsHealthy", arg0)
ret0, ok := ret[0].(bool)
@@ -41,6 +45,8 @@ func (m *MockHealthClient) EXPECT() *MockHealthClientMockRecorder {
// IsHealthy indicates an expected call of IsHealthy.
func (mr *MockHealthClientMockRecorder) IsHealthy(arg0 any) *gomock.Call {
mr.mock.Lock()
defer mr.mock.Unlock()
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "IsHealthy", reflect.TypeOf((*MockHealthClient)(nil).IsHealthy), arg0)
}

View File

@@ -14,7 +14,7 @@ go_library(
"//validator:__subpackages__",
],
deps = [
"//api/server:go_default_library",
"//api/server/middleware:go_default_library",
"//runtime:go_default_library",
"@com_github_gorilla_mux//:go_default_library",
"@com_github_grpc_ecosystem_grpc_gateway_v2//runtime:go_default_library",

View File

@@ -11,7 +11,7 @@ import (
"github.com/gorilla/mux"
gwruntime "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api/server"
"github.com/prysmaticlabs/prysm/v5/api/server/middleware"
"github.com/prysmaticlabs/prysm/v5/runtime"
"google.golang.org/grpc"
"google.golang.org/grpc/connectivity"
@@ -104,7 +104,7 @@ func (g *Gateway) Start() {
}
}
corsMux := server.CorsHandler(g.cfg.allowedOrigins).Middleware(g.cfg.router)
corsMux := middleware.CorsHandler(g.cfg.allowedOrigins).Middleware(g.cfg.router)
if g.cfg.muxHandler != nil {
g.cfg.router.PathPrefix("/").HandlerFunc(func(w http.ResponseWriter, r *http.Request) {

View File

@@ -2,29 +2,14 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"error.go",
"middleware.go",
"util.go",
],
srcs = ["error.go"],
importpath = "github.com/prysmaticlabs/prysm/v5/api/server",
visibility = ["//visibility:public"],
deps = [
"@com_github_gorilla_mux//:go_default_library",
"@com_github_rs_cors//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"error_test.go",
"middleware_test.go",
"util_test.go",
],
srcs = ["error_test.go"],
embed = [":go_default_library"],
deps = [
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
],
deps = ["//testing/assert:go_default_library"],
)

View File

@@ -1,32 +0,0 @@
package server
import (
"net/http"
"github.com/gorilla/mux"
"github.com/rs/cors"
)
// NormalizeQueryValuesHandler normalizes an input query of "key=value1,value2,value3" to "key=value1&key=value2&key=value3"
func NormalizeQueryValuesHandler(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
query := r.URL.Query()
NormalizeQueryValues(query)
r.URL.RawQuery = query.Encode()
next.ServeHTTP(w, r)
})
}
// CorsHandler sets the cors settings on api endpoints
func CorsHandler(allowOrigins []string) mux.MiddlewareFunc {
c := cors.New(cors.Options{
AllowedOrigins: allowOrigins,
AllowedMethods: []string{http.MethodPost, http.MethodGet, http.MethodDelete, http.MethodOptions},
AllowCredentials: true,
MaxAge: 600,
AllowedHeaders: []string{"*"},
})
return c.Handler
}

View File

@@ -0,0 +1,29 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"middleware.go",
"util.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/api/server/middleware",
visibility = ["//visibility:public"],
deps = [
"@com_github_gorilla_mux//:go_default_library",
"@com_github_rs_cors//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"middleware_test.go",
"util_test.go",
],
embed = [":go_default_library"],
deps = [
"//api:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
],
)

View File

@@ -0,0 +1,112 @@
package middleware
import (
"fmt"
"net/http"
"strings"
"github.com/gorilla/mux"
"github.com/rs/cors"
)
// NormalizeQueryValuesHandler normalizes an input query of "key=value1,value2,value3" to "key=value1&key=value2&key=value3"
func NormalizeQueryValuesHandler(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
query := r.URL.Query()
NormalizeQueryValues(query)
r.URL.RawQuery = query.Encode()
next.ServeHTTP(w, r)
})
}
// CorsHandler sets the cors settings on api endpoints
func CorsHandler(allowOrigins []string) mux.MiddlewareFunc {
c := cors.New(cors.Options{
AllowedOrigins: allowOrigins,
AllowedMethods: []string{http.MethodPost, http.MethodGet, http.MethodDelete, http.MethodOptions},
AllowCredentials: true,
MaxAge: 600,
AllowedHeaders: []string{"*"},
})
return c.Handler
}
// ContentTypeHandler checks request for the appropriate media types otherwise returning a http.StatusUnsupportedMediaType error
func ContentTypeHandler(acceptedMediaTypes []string) mux.MiddlewareFunc {
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// skip the GET request
if r.Method == http.MethodGet {
next.ServeHTTP(w, r)
return
}
contentType := r.Header.Get("Content-Type")
if contentType == "" {
http.Error(w, "Content-Type header is missing", http.StatusUnsupportedMediaType)
return
}
accepted := false
for _, acceptedType := range acceptedMediaTypes {
if strings.Contains(strings.TrimSpace(contentType), strings.TrimSpace(acceptedType)) {
accepted = true
break
}
}
if !accepted {
http.Error(w, fmt.Sprintf("Unsupported media type: %s", contentType), http.StatusUnsupportedMediaType)
return
}
next.ServeHTTP(w, r)
})
}
}
// AcceptHeaderHandler checks if the client's response preference is handled
func AcceptHeaderHandler(serverAcceptedTypes []string) mux.MiddlewareFunc {
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
acceptHeader := r.Header.Get("Accept")
// header is optional and should skip if not provided
if acceptHeader == "" {
next.ServeHTTP(w, r)
return
}
accepted := false
acceptTypes := strings.Split(acceptHeader, ",")
// follows rules defined in https://datatracker.ietf.org/doc/html/rfc2616#section-14.1
for _, acceptType := range acceptTypes {
acceptType = strings.TrimSpace(acceptType)
if acceptType == "*/*" {
accepted = true
break
}
for _, serverAcceptedType := range serverAcceptedTypes {
if strings.HasPrefix(acceptType, serverAcceptedType) {
accepted = true
break
}
if acceptType != "/*" && strings.HasSuffix(acceptType, "/*") && strings.HasPrefix(serverAcceptedType, acceptType[:len(acceptType)-2]) {
accepted = true
break
}
}
if accepted {
break
}
}
if !accepted {
http.Error(w, fmt.Sprintf("Not Acceptable: %s", acceptHeader), http.StatusNotAcceptable)
return
}
next.ServeHTTP(w, r)
})
}
}
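For orientation, a sketch of how these middlewares could be attached to a gorilla/mux router; the route, origins, and accepted types here are illustrative, not taken from the diff:

    package main

    import (
        "net/http"

        "github.com/gorilla/mux"
        "github.com/prysmaticlabs/prysm/v5/api/server/middleware"
    )

    func main() {
        r := mux.NewRouter()
        // All four helpers match mux's MiddlewareFunc shape, so they chain with Use.
        r.Use(middleware.NormalizeQueryValuesHandler)
        r.Use(middleware.CorsHandler([]string{"*"}))
        r.Use(middleware.ContentTypeHandler([]string{"application/json"}))
        r.Use(middleware.AcceptHeaderHandler([]string{"application/json"}))
        r.HandleFunc("/example", func(w http.ResponseWriter, _ *http.Request) {
            w.WriteHeader(http.StatusOK)
        })
        _ = http.ListenAndServe(":8080", r)
    }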

View File

@@ -0,0 +1,204 @@
package middleware
import (
"net/http"
"net/http/httptest"
"testing"
"github.com/prysmaticlabs/prysm/v5/api"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestNormalizeQueryValuesHandler(t *testing.T) {
nextHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, err := w.Write([]byte("next handler"))
require.NoError(t, err)
})
handler := NormalizeQueryValuesHandler(nextHandler)
tests := []struct {
name string
inputQuery string
expectedQuery string
}{
{
name: "3 values",
inputQuery: "key=value1,value2,value3",
expectedQuery: "key=value1&key=value2&key=value3", // replace with expected normalized value
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
req, err := http.NewRequest("GET", "/test?"+test.inputQuery, http.NoBody)
if err != nil {
t.Fatal(err)
}
rr := httptest.NewRecorder()
handler.ServeHTTP(rr, req)
if rr.Code != http.StatusOK {
t.Errorf("handler returned wrong status code: got %v want %v", rr.Code, http.StatusOK)
}
if req.URL.RawQuery != test.expectedQuery {
t.Errorf("query not normalized: got %v want %v", req.URL.RawQuery, test.expectedQuery)
}
if rr.Body.String() != "next handler" {
t.Errorf("next handler was not executed")
}
})
}
}
func TestContentTypeHandler(t *testing.T) {
acceptedMediaTypes := []string{api.JsonMediaType, api.OctetStreamMediaType}
nextHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, err := w.Write([]byte("next handler"))
require.NoError(t, err)
})
handler := ContentTypeHandler(acceptedMediaTypes)(nextHandler)
tests := []struct {
name string
contentType string
expectedStatusCode int
isGet bool
}{
{
name: "Accepted Content-Type - application/json",
contentType: api.JsonMediaType,
expectedStatusCode: http.StatusOK,
},
{
name: "Accepted Content-Type - ssz format",
contentType: api.OctetStreamMediaType,
expectedStatusCode: http.StatusOK,
},
{
name: "Unsupported Content-Type - text/plain",
contentType: "text/plain",
expectedStatusCode: http.StatusUnsupportedMediaType,
},
{
name: "Missing Content-Type",
contentType: "",
expectedStatusCode: http.StatusUnsupportedMediaType,
},
{
name: "GET request skips content type check",
contentType: "",
expectedStatusCode: http.StatusOK,
isGet: true,
},
{
name: "Content type contains charset is ok",
contentType: "application/json; charset=utf-8",
expectedStatusCode: http.StatusOK,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
httpMethod := http.MethodPost
if tt.isGet {
httpMethod = http.MethodGet
}
req := httptest.NewRequest(httpMethod, "/", nil)
if tt.contentType != "" {
req.Header.Set("Content-Type", tt.contentType)
}
rr := httptest.NewRecorder()
handler.ServeHTTP(rr, req)
if status := rr.Code; status != tt.expectedStatusCode {
t.Errorf("handler returned wrong status code: got %v want %v", status, tt.expectedStatusCode)
}
})
}
}
func TestAcceptHeaderHandler(t *testing.T) {
acceptedTypes := []string{"application/json", "application/octet-stream"}
nextHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, err := w.Write([]byte("next handler"))
require.NoError(t, err)
})
handler := AcceptHeaderHandler(acceptedTypes)(nextHandler)
tests := []struct {
name string
acceptHeader string
expectedStatusCode int
}{
{
name: "Accepted Accept-Type - application/json",
acceptHeader: "application/json",
expectedStatusCode: http.StatusOK,
},
{
name: "Accepted Accept-Type - application/octet-stream",
acceptHeader: "application/octet-stream",
expectedStatusCode: http.StatusOK,
},
{
name: "Accepted Accept-Type with parameters",
acceptHeader: "application/json;q=0.9, application/octet-stream;q=0.8",
expectedStatusCode: http.StatusOK,
},
{
name: "Unsupported Accept-Type - text/plain",
acceptHeader: "text/plain",
expectedStatusCode: http.StatusNotAcceptable,
},
{
name: "Missing Accept header",
acceptHeader: "",
expectedStatusCode: http.StatusOK,
},
{
name: "*/* is accepted",
acceptHeader: "*/*",
expectedStatusCode: http.StatusOK,
},
{
name: "application/* is accepted",
acceptHeader: "application/*",
expectedStatusCode: http.StatusOK,
},
{
name: "/* is unsupported",
acceptHeader: "/*",
expectedStatusCode: http.StatusNotAcceptable,
},
{
name: "application/ is unsupported",
acceptHeader: "application/",
expectedStatusCode: http.StatusNotAcceptable,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
req := httptest.NewRequest("GET", "/", nil)
if tt.acceptHeader != "" {
req.Header.Set("Accept", tt.acceptHeader)
}
rr := httptest.NewRecorder()
handler.ServeHTTP(rr, req)
if status := rr.Code; status != tt.expectedStatusCode {
t.Errorf("handler returned wrong status code: got %v want %v", status, tt.expectedStatusCode)
}
})
}
}

View File

@@ -1,4 +1,4 @@
package server
package middleware
import (
"net/url"

View File

@@ -1,4 +1,4 @@
package server
package middleware
import (
"testing"

View File

@@ -1,54 +0,0 @@
package server
import (
"net/http"
"net/http/httptest"
"testing"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestNormalizeQueryValuesHandler(t *testing.T) {
nextHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, err := w.Write([]byte("next handler"))
require.NoError(t, err)
})
handler := NormalizeQueryValuesHandler(nextHandler)
tests := []struct {
name string
inputQuery string
expectedQuery string
}{
{
name: "3 values",
inputQuery: "key=value1,value2,value3",
expectedQuery: "key=value1&key=value2&key=value3", // replace with expected normalized value
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
req, err := http.NewRequest("GET", "/test?"+test.inputQuery, nil)
if err != nil {
t.Fatal(err)
}
rr := httptest.NewRecorder()
handler.ServeHTTP(rr, req)
if rr.Code != http.StatusOK {
t.Errorf("handler returned wrong status code: got %v want %v", rr.Code, http.StatusOK)
}
if req.URL.RawQuery != test.expectedQuery {
t.Errorf("query not normalized: got %v want %v", req.URL.RawQuery, test.expectedQuery)
}
if rr.Body.String() != "next handler" {
t.Errorf("next handler was not executed")
}
})
}
}

View File

@@ -26,7 +26,6 @@ go_library(
"receive_attestation.go",
"receive_blob.go",
"receive_block.go",
"receive_data_column.go",
"service.go",
"tracked_proposer.go",
"weak_subjectivity_checks.go",
@@ -49,7 +48,6 @@ go_library(
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",
@@ -69,7 +67,6 @@ go_library(
"//beacon-chain/startup:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
@@ -160,7 +157,6 @@ go_test(
"//beacon-chain/operations/slashings:go_default_library",
"//beacon-chain/operations/voluntaryexits:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/p2p/testing:go_default_library",
"//beacon-chain/startup:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",

View File

@@ -33,7 +33,6 @@ var (
)
var errMaxBlobsExceeded = errors.New("Expected commitments in block exceeds MAX_BLOBS_PER_BLOCK")
var errMaxDataColumnsExceeded = errors.New("Expected data columns for node exceeds NUMBER_OF_COLUMNS")
// An invalid block is the block that fails state transition based on the core protocol rules.
// The beacon node shall not be accepting nor building blocks that branch off from an invalid block.

View File

@@ -12,8 +12,6 @@ go_library(
deps = [
"//consensus-types/blocks:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_ethereum_c_kzg_4844//bindings/go:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)

View File

@@ -5,8 +5,6 @@ import (
"encoding/json"
GoKZG "github.com/crate-crypto/go-kzg-4844"
CKZG "github.com/ethereum/c-kzg-4844/bindings/go"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
)
@@ -14,38 +12,17 @@ var (
//go:embed trusted_setup.json
embeddedTrustedSetup []byte // 1.2Mb
kzgContext *GoKZG.Context
kzgLoaded bool
)
func Start() error {
parsedSetup := &GoKZG.JSONTrustedSetup{}
err := json.Unmarshal(embeddedTrustedSetup, parsedSetup)
parsedSetup := GoKZG.JSONTrustedSetup{}
err := json.Unmarshal(embeddedTrustedSetup, &parsedSetup)
if err != nil {
return errors.Wrap(err, "could not parse trusted setup JSON")
}
kzgContext, err = GoKZG.NewContext4096(parsedSetup)
kzgContext, err = GoKZG.NewContext4096(&parsedSetup)
if err != nil {
return errors.Wrap(err, "could not initialize go-kzg context")
}
g1Lagrange := &parsedSetup.SetupG1Lagrange
// Length of a G1 point, converted from hex to binary.
g1s := make([]byte, len(g1Lagrange)*(len(g1Lagrange[0])-2)/2)
for i, g1 := range g1Lagrange {
copy(g1s[i*(len(g1)-2)/2:], hexutil.MustDecode(g1))
}
// Length of a G2 point, converted from hex to binary.
g2s := make([]byte, len(parsedSetup.SetupG2)*(len(parsedSetup.SetupG2[0])-2)/2)
for i, g2 := range parsedSetup.SetupG2 {
copy(g2s[i*(len(g2)-2)/2:], hexutil.MustDecode(g2))
}
if !kzgLoaded {
// Free the current trusted setup before running this method. CKZG
// panics if the same setup is run multiple times.
if err = CKZG.LoadTrustedSetup(g1s, g2s); err != nil {
panic(err)
}
}
kzgLoaded = true
return nil
}

View File

@@ -118,9 +118,9 @@ func WithBLSToExecPool(p blstoexec.PoolManager) Option {
}
// WithP2PBroadcaster to broadcast messages after appropriate processing.
func WithP2PBroadcaster(p p2p.Acceser) Option {
func WithP2PBroadcaster(p p2p.Broadcaster) Option {
return func(s *Service) error {
s.cfg.P2P = p
s.cfg.P2p = p
return nil
}
}

View File

@@ -6,8 +6,6 @@ import (
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
@@ -502,7 +500,7 @@ func missingIndices(bs *filesystem.BlobStorage, root [32]byte, expected [][]byte
}
indices, err := bs.Indices(root)
if err != nil {
return nil, errors.Wrap(err, "indices")
return nil, err
}
missing := make(map[uint64]struct{}, len(expected))
for i := range expected {
@@ -516,35 +514,12 @@ func missingIndices(bs *filesystem.BlobStorage, root [32]byte, expected [][]byte
return missing, nil
}
func missingDataColumns(bs *filesystem.BlobStorage, root [32]byte, expected map[uint64]bool) (map[uint64]bool, error) {
if len(expected) == 0 {
return nil, nil
}
if len(expected) > int(params.BeaconConfig().NumberOfColumns) {
return nil, errMaxDataColumnsExceeded
}
indices, err := bs.ColumnIndices(root)
if err != nil {
return nil, err
}
missing := make(map[uint64]bool, len(expected))
for col := range expected {
if !indices[col] {
missing[col] = true
}
}
return missing, nil
}
// isDataAvailable blocks until all BlobSidecars committed to in the block are available,
// or an error or context cancellation occurs. A nil result means that the data availability check is successful.
// The function will first check the database to see if all sidecars have been persisted. If any
// sidecars are missing, it will then read from the blobNotifier channel for the given root until the channel is
// closed, the context hits cancellation/timeout, or notifications have been received for all the missing sidecars.
func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed interfaces.ReadOnlySignedBeaconBlock) error {
if features.Get().EnablePeerDAS {
return s.isDataAvailableDataColumns(ctx, root, signed)
}
if signed.Version() < version.Deneb {
return nil
}
@@ -574,7 +549,7 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
// get a map of BlobSidecar indices that are not currently available.
missing, err := missingIndices(s.blobStorage, root, kzgCommitments)
if err != nil {
return errors.Wrap(err, "missing indices")
return err
}
// If there are no missing indices, all BlobSidecars are available.
if len(missing) == 0 {
@@ -593,13 +568,8 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
if len(missing) == 0 {
return
}
log.WithFields(logrus.Fields{
"slot": signed.Block().Slot(),
"root": fmt.Sprintf("%#x", root),
"blobsExpected": expected,
"blobsWaiting": len(missing),
}).Error("Still waiting for blobs DA check at slot end.")
log.WithFields(daCheckLogFields(root, signed.Block().Slot(), expected, len(missing))).
Error("Still waiting for DA check at slot end.")
})
defer nst.Stop()
}
@@ -621,104 +591,12 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
}
}
func (s *Service) isDataAvailableDataColumns(ctx context.Context, root [32]byte, signed interfaces.ReadOnlySignedBeaconBlock) error {
if signed.Version() < version.Deneb {
return nil
}
block := signed.Block()
if block == nil {
return errors.New("invalid nil beacon block")
}
// We are only required to check within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS
if !params.WithinDAPeriod(slots.ToEpoch(block.Slot()), slots.ToEpoch(s.CurrentSlot())) {
return nil
}
body := block.Body()
if body == nil {
return errors.New("invalid nil beacon block body")
}
kzgCommitments, err := body.BlobKzgCommitments()
if err != nil {
return errors.Wrap(err, "could not get KZG commitments")
}
// If block has not commitments there is nothing to wait for.
if len(kzgCommitments) == 0 {
return nil
}
custodiedSubnetCount := params.BeaconConfig().CustodyRequirement
if flags.Get().SubscribeToAllSubnets {
custodiedSubnetCount = params.BeaconConfig().DataColumnSidecarSubnetCount
}
colMap, err := peerdas.CustodyColumns(s.cfg.P2P.NodeID(), custodiedSubnetCount)
if err != nil {
return err
}
// Expected is the number of custodied data columnns a node is expected to have.
expected := len(colMap)
if expected == 0 {
return nil
}
// Subscribe to newsly data columns stored in the database.
rootIndexChan := make(chan filesystem.RootIndexPair)
subscription := s.blobStorage.DataColumnFeed.Subscribe(rootIndexChan)
defer subscription.Unsubscribe()
// Get a map of data column indices that are not currently available.
missing, err := missingDataColumns(s.blobStorage, root, colMap)
if err != nil {
return err
}
// If there are no missing indices, all data column sidecars are available.
// This is the happy path.
if len(missing) == 0 {
return nil
}
// Log for DA checks that cross over into the next slot; helpful for debugging.
nextSlot := slots.BeginsAt(signed.Block().Slot()+1, s.genesisTime)
// Avoid logging if DA check is called after next slot start.
if nextSlot.After(time.Now()) {
nst := time.AfterFunc(time.Until(nextSlot), func() {
if len(missing) == 0 {
return
}
log.WithFields(logrus.Fields{
"slot": signed.Block().Slot(),
"root": fmt.Sprintf("%#x", root),
"columnsExpected": expected,
"columnsWaiting": len(missing),
}).Error("Still waiting for data columns DA check at slot end.")
})
defer nst.Stop()
}
for {
select {
case rootIndex := <-rootIndexChan:
if rootIndex.Root != root {
// This is not the root we are looking for.
continue
}
// Remove the index from the missing map.
delete(missing, rootIndex.Index)
// Exit if there is no more missing data columns.
if len(missing) == 0 {
return nil
}
case <-ctx.Done():
missingIndexes := make([]uint64, 0, len(missing))
for val := range missing {
copiedVal := val
missingIndexes = append(missingIndexes, copiedVal)
}
return errors.Wrapf(ctx.Err(), "context deadline waiting for data column sidecars slot: %d, BlockRoot: %#x, missing %v", block.Slot(), root, missingIndexes)
}
func daCheckLogFields(root [32]byte, slot primitives.Slot, expected, missing int) logrus.Fields {
return logrus.Fields{
"slot": slot,
"root": fmt.Sprintf("%#x", root),
"blobsExpected": expected,
"blobsWaiting": missing,
}
}
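The retained blob path waits on the per-root notifier channel in the same shape as the deleted data-column loop above; a paraphrased sketch (the blobNotifiers field and forRoot spelling are assumed, not shown in this diff):

    nc := s.blobNotifiers.forRoot(root) // assumed accessor on the blobNotifierMap
    for {
        select {
        case idx := <-nc: // one BlobSidecar for this root was persisted
            delete(missing, idx)
            if len(missing) == 0 {
                return nil // every committed blob is now available
            }
        case <-ctx.Done():
            return errors.Wrap(ctx.Err(), "context deadline waiting for blob sidecars")
        }
    }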

View File

@@ -50,12 +50,6 @@ type BlobReceiver interface {
ReceiveBlob(context.Context, blocks.VerifiedROBlob) error
}
// DataColumnReceiver interface defines the methods of chain service for receiving new
// data columns
type DataColumnReceiver interface {
ReceiveDataColumn(context.Context, blocks.VerifiedRODataColumn) error
}
// SlashingReceiver interface defines the methods of chain service for receiving validated slashing over the wire.
type SlashingReceiver interface {
ReceiveAttesterSlashing(ctx context.Context, slashing ethpb.AttSlashing)

View File

@@ -1,16 +0,0 @@
package blockchain
import (
"context"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
)
func (s *Service) ReceiveDataColumn(ctx context.Context, ds blocks.VerifiedRODataColumn) error {
if err := s.blobStorage.SaveDataColumn(ds); err != nil {
return errors.Wrap(err, "save data column")
}
return nil
}

View File

@@ -82,7 +82,7 @@ type config struct {
ExitPool voluntaryexits.PoolManager
SlashingPool slashings.PoolManager
BLSToExecPool blstoexec.PoolManager
P2P p2p.Acceser
P2p p2p.Broadcaster
MaxRoutines int
StateNotifier statefeed.Notifier
ForkChoiceStore f.ForkChoicer
@@ -107,17 +107,15 @@ var ErrMissingClockSetter = errors.New("blockchain Service initialized without a
type blobNotifierMap struct {
sync.RWMutex
notifiers map[[32]byte]chan uint64
seenIndex map[[32]byte][fieldparams.NumberOfColumns]bool
seenIndex map[[32]byte][fieldparams.MaxBlobsPerBlock]bool
}
// notifyIndex notifies a blob by its index for a given root.
// It uses internal maps to keep track of seen indices and notifier channels.
func (bn *blobNotifierMap) notifyIndex(root [32]byte, idx uint64) {
// TODO: Separate Data Columns from blobs
/*
if idx >= fieldparams.MaxBlobsPerBlock {
return
}*/
if idx >= fieldparams.MaxBlobsPerBlock {
return
}
bn.Lock()
seen := bn.seenIndex[root]
@@ -131,7 +129,7 @@ func (bn *blobNotifierMap) notifyIndex(root [32]byte, idx uint64) {
// Retrieve or create the notifier channel for the given root.
c, ok := bn.notifiers[root]
if !ok {
c = make(chan uint64, fieldparams.NumberOfColumns)
c = make(chan uint64, fieldparams.MaxBlobsPerBlock)
bn.notifiers[root] = c
}
@@ -145,7 +143,7 @@ func (bn *blobNotifierMap) forRoot(root [32]byte) chan uint64 {
defer bn.Unlock()
c, ok := bn.notifiers[root]
if !ok {
c = make(chan uint64, fieldparams.NumberOfColumns)
c = make(chan uint64, fieldparams.MaxBlobsPerBlock)
bn.notifiers[root] = c
}
return c
@@ -171,7 +169,7 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
ctx, cancel := context.WithCancel(ctx)
bn := &blobNotifierMap{
notifiers: make(map[[32]byte]chan uint64),
seenIndex: make(map[[32]byte][fieldparams.NumberOfColumns]bool),
seenIndex: make(map[[32]byte][fieldparams.MaxBlobsPerBlock]bool),
}
srv := &Service{
ctx: ctx,

View File

@@ -95,7 +95,7 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
WithAttestationPool(attestations.NewPool()),
WithSlashingPool(slashings.NewPool()),
WithExitPool(voluntaryexits.NewPool()),
WithP2PBroadcaster(&mockAccesser{}),
WithP2PBroadcaster(&mockBroadcaster{}),
WithStateNotifier(&mockBeaconNode{}),
WithForkChoiceStore(fc),
WithAttestationService(attService),
@@ -518,7 +518,7 @@ func (s *MockClockSetter) SetClock(g *startup.Clock) error {
func TestNotifyIndex(t *testing.T) {
// Initialize a blobNotifierMap
bn := &blobNotifierMap{
seenIndex: make(map[[32]byte][fieldparams.NumberOfColumns]bool),
seenIndex: make(map[[32]byte][fieldparams.MaxBlobsPerBlock]bool),
notifiers: make(map[[32]byte]chan uint64),
}

View File

@@ -19,7 +19,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/blstoexec"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
p2pTesting "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/startup"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state/stategen"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -46,11 +45,6 @@ type mockBroadcaster struct {
broadcastCalled bool
}
type mockAccesser struct {
mockBroadcaster
p2pTesting.MockPeerManager
}
func (mb *mockBroadcaster) Broadcast(_ context.Context, _ proto.Message) error {
mb.broadcastCalled = true
return nil
@@ -71,11 +65,6 @@ func (mb *mockBroadcaster) BroadcastBlob(_ context.Context, _ uint64, _ *ethpb.B
return nil
}
func (mb *mockBroadcaster) BroadcastDataColumn(_ context.Context, _ uint64, _ *ethpb.DataColumnSidecar) error {
mb.broadcastCalled = true
return nil
}
func (mb *mockBroadcaster) BroadcastBLSChanges(_ context.Context, _ []*ethpb.SignedBLSToExecutionChange) {
}

View File

@@ -628,11 +628,6 @@ func (c *ChainService) ReceiveBlob(_ context.Context, b blocks.VerifiedROBlob) e
return nil
}
// ReceiveDataColumn implements the same method in chain service
func (c *ChainService) ReceiveDataColumn(_ context.Context, _ blocks.VerifiedRODataColumn) error {
return nil
}
// TargetRootForEpoch mocks the same method in the chain service
func (c *ChainService) TargetRootForEpoch(_ [32]byte, _ primitives.Epoch) ([32]byte, error) {
return c.TargetRoot, nil

View File

@@ -8,7 +8,6 @@ go_library(
"attestation_data.go",
"balance_cache_key.go",
"checkpoint_state.go",
"column_subnet_ids.go",
"committee.go",
"committee_disabled.go", # keep
"committees.go",

View File

@@ -1,65 +0,0 @@
package cache
import (
"sync"
"time"
"github.com/patrickmn/go-cache"
"github.com/prysmaticlabs/prysm/v5/config/params"
)
type columnSubnetIDs struct {
colSubCache *cache.Cache
colSubLock sync.RWMutex
}
// ColumnSubnetIDs for column subnet participants
var ColumnSubnetIDs = newColumnSubnetIDs()
const columnKey = "columns"
func newColumnSubnetIDs() *columnSubnetIDs {
epochDuration := time.Duration(params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().SecondsPerSlot))
// Set the default duration of a column subnet subscription as the column expiry period.
subLength := epochDuration * time.Duration(params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest)
persistentCache := cache.New(subLength*time.Second, epochDuration*time.Second)
return &columnSubnetIDs{colSubCache: persistentCache}
}
// GetColumnSubnets retrieves the data column subnets.
func (s *columnSubnetIDs) GetColumnSubnets() ([]uint64, bool, time.Time) {
s.colSubLock.RLock()
defer s.colSubLock.RUnlock()
id, duration, ok := s.colSubCache.GetWithExpiration(columnKey)
if !ok {
return nil, false, time.Time{}
}
// Retrieve indices from the cache.
idxs, ok := id.([]uint64)
if !ok {
return nil, false, time.Time{}
}
return idxs, ok, duration
}
// AddColumnSubnets adds the relevant data column subnets.
func (s *columnSubnetIDs) AddColumnSubnets(colIdx []uint64) {
s.colSubLock.Lock()
defer s.colSubLock.Unlock()
s.colSubCache.Set(columnKey, colIdx, 0)
}
// EmptyAllCaches empties out all the related caches and flushes any stored
// entries on them. This should only ever be used for testing, in normal
// production, handling of the relevant subnets for each role is done
// separately.
func (s *columnSubnetIDs) EmptyAllCaches() {
// Clear the cache.
s.colSubLock.Lock()
defer s.colSubLock.Unlock()
s.colSubCache.Flush()
}

View File

@@ -28,87 +28,87 @@ import (
// process_historical_roots_update(state)
// process_participation_flag_updates(state) # [New in Altair]
// process_sync_committee_updates(state) # [New in Altair]
func ProcessEpoch(ctx context.Context, state state.BeaconState) (state.BeaconState, error) {
func ProcessEpoch(ctx context.Context, state state.BeaconState) error {
ctx, span := trace.StartSpan(ctx, "altair.ProcessEpoch")
defer span.End()
if state == nil || state.IsNil() {
return nil, errors.New("nil state")
return errors.New("nil state")
}
vp, bp, err := InitializePrecomputeValidators(ctx, state)
if err != nil {
return nil, err
return err
}
// New in Altair.
vp, bp, err = ProcessEpochParticipation(ctx, state, bp, vp)
if err != nil {
return nil, err
return err
}
state, err = precompute.ProcessJustificationAndFinalizationPreCompute(state, bp)
if err != nil {
return nil, errors.Wrap(err, "could not process justification")
return errors.Wrap(err, "could not process justification")
}
// New in Altair.
state, vp, err = ProcessInactivityScores(ctx, state, vp)
if err != nil {
return nil, errors.Wrap(err, "could not process inactivity updates")
return errors.Wrap(err, "could not process inactivity updates")
}
// New in Altair.
state, err = ProcessRewardsAndPenaltiesPrecompute(state, bp, vp)
if err != nil {
return nil, errors.Wrap(err, "could not process rewards and penalties")
return errors.Wrap(err, "could not process rewards and penalties")
}
state, err = e.ProcessRegistryUpdates(ctx, state)
if err != nil {
return nil, errors.Wrap(err, "could not process registry updates")
return errors.Wrap(err, "could not process registry updates")
}
// Modified in Altair and Bellatrix.
proportionalSlashingMultiplier, err := state.ProportionalSlashingMultiplier()
if err != nil {
return nil, err
return err
}
state, err = e.ProcessSlashings(state, proportionalSlashingMultiplier)
if err != nil {
return nil, err
return err
}
state, err = e.ProcessEth1DataReset(state)
if err != nil {
return nil, err
return err
}
state, err = e.ProcessEffectiveBalanceUpdates(state)
if err != nil {
return nil, err
return err
}
state, err = e.ProcessSlashingsReset(state)
if err != nil {
return nil, err
return err
}
state, err = e.ProcessRandaoMixesReset(state)
if err != nil {
return nil, err
return err
}
state, err = e.ProcessHistoricalDataUpdate(state)
if err != nil {
return nil, err
return err
}
// New in Altair.
state, err = ProcessParticipationFlagUpdates(state)
if err != nil {
return nil, err
return err
}
// New in Altair.
state, err = ProcessSyncCommitteeUpdates(ctx, state)
_, err = ProcessSyncCommitteeUpdates(ctx, state)
if err != nil {
return nil, err
return err
}
return state, nil
return nil
}
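With the new signature the caller no longer rebinds a returned state; ProcessEpoch mutates st in place, as the updated tests in the next file show. A caller sketch:

    if err := altair.ProcessEpoch(ctx, st); err != nil {
        return errors.Wrap(err, "could not process epoch")
    }
    // st itself now carries the post-epoch values, e.g. st.Slashings().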

View File

@@ -13,9 +13,9 @@ import (
func TestProcessEpoch_CanProcess(t *testing.T) {
st, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, st.SetSlot(10*params.BeaconConfig().SlotsPerEpoch))
newState, err := altair.ProcessEpoch(context.Background(), st)
err := altair.ProcessEpoch(context.Background(), st)
require.NoError(t, err)
require.Equal(t, uint64(0), newState.Slashings()[2], "Unexpected slashed balance")
require.Equal(t, uint64(0), st.Slashings()[2], "Unexpected slashed balance")
b := st.Balances()
require.Equal(t, params.BeaconConfig().MaxValidatorsPerCommittee, uint64(len(b)))
@@ -45,9 +45,9 @@ func TestProcessEpoch_CanProcess(t *testing.T) {
func TestProcessEpoch_CanProcessBellatrix(t *testing.T) {
st, _ := util.DeterministicGenesisStateBellatrix(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, st.SetSlot(10*params.BeaconConfig().SlotsPerEpoch))
newState, err := altair.ProcessEpoch(context.Background(), st)
err := altair.ProcessEpoch(context.Background(), st)
require.NoError(t, err)
require.Equal(t, uint64(0), newState.Slashings()[2], "Unexpected slashed balance")
require.Equal(t, uint64(0), st.Slashings()[2], "Unexpected slashed balance")
b := st.Balances()
require.Equal(t, params.BeaconConfig().MaxValidatorsPerCommittee, uint64(len(b)))

View File

@@ -96,24 +96,6 @@ func VerifyBlockHeaderSignature(beaconState state.BeaconState, header *ethpb.Sig
return signing.VerifyBlockHeaderSigningRoot(header.Header, proposerPubKey, header.Signature, domain)
}
func VerifyBlockHeaderSignatureUsingCurrentFork(beaconState state.BeaconState, header *ethpb.SignedBeaconBlockHeader) error {
currentEpoch := slots.ToEpoch(header.Header.Slot)
fork, err := forks.Fork(currentEpoch)
if err != nil {
return err
}
domain, err := signing.Domain(fork, currentEpoch, params.BeaconConfig().DomainBeaconProposer, beaconState.GenesisValidatorsRoot())
if err != nil {
return err
}
proposer, err := beaconState.ValidatorAtIndex(header.Header.ProposerIndex)
if err != nil {
return err
}
proposerPubKey := proposer.PublicKey
return signing.VerifyBlockHeaderSigningRoot(header.Header, proposerPubKey, header.Signature, domain)
}
// VerifyBlockSignatureUsingCurrentFork verifies the proposer signature of a beacon block. This differs
// from the above method by not using fork data from the state and instead retrieving it
// via the respective epoch.

View File

@@ -3,6 +3,7 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"attestation.go",
"churn.go",
"consolidations.go",
"deposits.go",
@@ -22,6 +23,7 @@ go_library(
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/validators:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/params:go_default_library",
@@ -32,7 +34,9 @@ go_library(
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
)
@@ -46,6 +50,7 @@ go_test(
"effective_balance_updates_test.go",
"upgrade_test.go",
"validator_test.go",
"withdrawals_test.go",
],
deps = [
":go_default_library",
@@ -63,8 +68,12 @@ go_test(
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/interop:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
],
)

View File

@@ -0,0 +1,7 @@
package electra
import "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
var (
ProcessAttestationsNoVerifySignature = altair.ProcessAttestationsNoVerifySignature
)

View File

@@ -40,7 +40,7 @@ import (
//
// state.pending_consolidations = state.pending_consolidations[next_pending_consolidation:]
func ProcessPendingConsolidations(ctx context.Context, st state.BeaconState) error {
ctx, span := trace.StartSpan(ctx, "electra.ProcessPendingConsolidations")
_, span := trace.StartSpan(ctx, "electra.ProcessPendingConsolidations")
defer span.End()
if st == nil || st.IsNil() {
@@ -68,7 +68,7 @@ func ProcessPendingConsolidations(ctx context.Context, st state.BeaconState) err
break
}
if err := SwitchToCompoundingValidator(ctx, st, pc.TargetIndex); err != nil {
if err := SwitchToCompoundingValidator(st, pc.TargetIndex); err != nil {
return err
}

View File

@@ -77,11 +77,11 @@ func ProcessPendingBalanceDeposits(ctx context.Context, st state.BeaconState, ac
}
}
// ProcessDepositReceipts is a function as part of electra to process execution layer deposits
func ProcessDepositReceipts(ctx context.Context, beaconState state.BeaconState, receipts []*enginev1.DepositReceipt) (state.BeaconState, error) {
_, span := trace.StartSpan(ctx, "electra.ProcessDepositReceipts")
// ProcessDepositRequests is a function as part of electra to process execution layer deposits
func ProcessDepositRequests(ctx context.Context, beaconState state.BeaconState, requests []*enginev1.DepositRequest) (state.BeaconState, error) {
_, span := trace.StartSpan(ctx, "electra.ProcessDepositRequests")
defer span.End()
// TODO: replace with 6110 logic
// return b.ProcessDepositReceipts(beaconState, receipts)
// return b.ProcessDepositRequests(beaconState, requests)
return beaconState, nil
}

View File

@@ -46,80 +46,84 @@ var (
// process_effective_balance_updates(state)
// process_slashings_reset(state)
// process_randao_mixes_reset(state)
func ProcessEpoch(ctx context.Context, state state.BeaconState) (state.BeaconState, error) {
func ProcessEpoch(ctx context.Context, state state.BeaconState) error {
_, span := trace.StartSpan(ctx, "electra.ProcessEpoch")
defer span.End()
if state == nil || state.IsNil() {
return nil, errors.New("nil state")
return errors.New("nil state")
}
vp, bp, err := InitializePrecomputeValidators(ctx, state)
if err != nil {
return nil, err
return err
}
vp, bp, err = ProcessEpochParticipation(ctx, state, bp, vp)
if err != nil {
return nil, err
return err
}
state, err = precompute.ProcessJustificationAndFinalizationPreCompute(state, bp)
if err != nil {
return nil, errors.Wrap(err, "could not process justification")
return errors.Wrap(err, "could not process justification")
}
state, vp, err = ProcessInactivityScores(ctx, state, vp)
if err != nil {
return nil, errors.Wrap(err, "could not process inactivity updates")
return errors.Wrap(err, "could not process inactivity updates")
}
state, err = ProcessRewardsAndPenaltiesPrecompute(state, bp, vp)
if err != nil {
return nil, errors.Wrap(err, "could not process rewards and penalties")
return errors.Wrap(err, "could not process rewards and penalties")
}
state, err = ProcessRegistryUpdates(ctx, state)
if err != nil {
return nil, errors.Wrap(err, "could not process registry updates")
return errors.Wrap(err, "could not process registry updates")
}
proportionalSlashingMultiplier, err := state.ProportionalSlashingMultiplier()
if err != nil {
return nil, err
return err
}
state, err = ProcessSlashings(state, proportionalSlashingMultiplier)
if err != nil {
return nil, err
return err
}
state, err = ProcessEth1DataReset(state)
if err != nil {
return nil, err
return err
}
if err = ProcessPendingBalanceDeposits(ctx, state, primitives.Gwei(bp.ActiveCurrentEpoch)); err != nil {
return nil, err
return err
}
if err := ProcessPendingConsolidations(ctx, state); err != nil {
return nil, err
if err = ProcessPendingConsolidations(ctx, state); err != nil {
return err
}
if err := ProcessEffectiveBalanceUpdates(state); err != nil {
return nil, err
if err = ProcessEffectiveBalanceUpdates(state); err != nil {
return err
}
state, err = ProcessSlashingsReset(state)
if err != nil {
return nil, err
return err
}
state, err = ProcessRandaoMixesReset(state)
if err != nil {
return nil, err
return err
}
state, err = ProcessHistoricalDataUpdate(state)
if err != nil {
return nil, err
return err
}
state, err = ProcessParticipationFlagUpdates(state)
if err != nil {
return nil, err
return err
}
state, err = ProcessSyncCommitteeUpdates(ctx, state)
_, err = ProcessSyncCommitteeUpdates(ctx, state)
if err != nil {
return nil, err
return err
}
return state, nil
return nil
}

View File

@@ -38,7 +38,7 @@ import (
// withdrawals_root=pre.latest_execution_payload_header.withdrawals_root,
// blob_gas_used=pre.latest_execution_payload_header.blob_gas_used,
// excess_blob_gas=pre.latest_execution_payload_header.excess_blob_gas,
// deposit_receipts_root=Root(), # [New in Electra:EIP6110]
// deposit_requests_root=Root(), # [New in Electra:EIP6110]
// withdrawal_requests_root=Root(), # [New in Electra:EIP7002],
// )
//
@@ -94,7 +94,7 @@ import (
// # Deep history valid from Capella onwards
// historical_summaries=pre.historical_summaries,
// # [New in Electra:EIP6110]
// deposit_receipts_start_index=UNSET_DEPOSIT_RECEIPTS_START_INDEX,
// deposit_requests_start_index=UNSET_DEPOSIT_REQUESTS_START_INDEX,
// # [New in Electra:EIP7251]
// deposit_balance_to_consume=0,
// exit_balance_to_consume=0,
@@ -261,14 +261,14 @@ func UpgradeToElectra(beaconState state.BeaconState) (state.BeaconState, error)
WithdrawalsRoot: wdRoot,
ExcessBlobGas: excessBlobGas,
BlobGasUsed: blobGasUsed,
DepositReceiptsRoot: bytesutil.Bytes32(0), // [New in Electra:EIP6110]
DepositRequestsRoot: bytesutil.Bytes32(0), // [New in Electra:EIP6110]
WithdrawalRequestsRoot: bytesutil.Bytes32(0), // [New in Electra:EIP7002]
},
NextWithdrawalIndex: wi,
NextWithdrawalValidatorIndex: vi,
HistoricalSummaries: summaries,
DepositReceiptsStartIndex: params.BeaconConfig().UnsetDepositReceiptsStartIndex,
DepositRequestsStartIndex: params.BeaconConfig().UnsetDepositRequestsStartIndex,
DepositBalanceToConsume: 0,
ExitBalanceToConsume: helpers.ActivationExitChurnLimit(primitives.Gwei(tab)),
EarliestExitEpoch: earliestExitEpoch,
@@ -295,14 +295,14 @@ func UpgradeToElectra(beaconState state.BeaconState) (state.BeaconState, error)
}
for _, index := range preActivationIndices {
if err := helpers.QueueEntireBalanceAndResetValidator(post, index); err != nil {
if err := QueueEntireBalanceAndResetValidator(post, index); err != nil {
return nil, errors.Wrap(err, "failed to queue entire balance and reset validator")
}
}
// Ensure early adopters of compounding credentials go through the activation churn
for _, index := range compoundWithdrawalIndices {
if err := helpers.QueueExcessActiveBalance(post, index); err != nil {
if err := QueueExcessActiveBalance(post, index); err != nil {
return nil, errors.Wrap(err, "failed to queue excess active balance")
}
}

View File

@@ -128,7 +128,7 @@ func TestUpgradeToElectra(t *testing.T) {
BlockHash: prevHeader.BlockHash(),
TransactionsRoot: txRoot,
WithdrawalsRoot: wdRoot,
DepositReceiptsRoot: bytesutil.Bytes32(0),
DepositRequestsRoot: bytesutil.Bytes32(0),
WithdrawalRequestsRoot: bytesutil.Bytes32(0),
}
require.DeepEqual(t, wanted, protoHeader)
@@ -145,9 +145,9 @@ func TestUpgradeToElectra(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 0, len(summaries))
startIndex, err := mSt.DepositReceiptsStartIndex()
startIndex, err := mSt.DepositRequestsStartIndex()
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().UnsetDepositReceiptsStartIndex, startIndex)
require.Equal(t, params.BeaconConfig().UnsetDepositRequestsStartIndex, startIndex)
balance, err := mSt.DepositBalanceToConsume()
require.NoError(t, err)

View File

@@ -1,7 +1,6 @@
package electra
import (
"context"
"errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
@@ -19,7 +18,7 @@ import (
// if has_eth1_withdrawal_credential(validator):
// validator.withdrawal_credentials = COMPOUNDING_WITHDRAWAL_PREFIX + validator.withdrawal_credentials[1:]
// queue_excess_active_balance(state, index)
func SwitchToCompoundingValidator(ctx context.Context, s state.BeaconState, idx primitives.ValidatorIndex) error {
func SwitchToCompoundingValidator(s state.BeaconState, idx primitives.ValidatorIndex) error {
v, err := s.ValidatorAtIndex(idx)
if err != nil {
return err
@@ -32,12 +31,12 @@ func SwitchToCompoundingValidator(ctx context.Context, s state.BeaconState, idx
if err := s.UpdateValidatorAtIndex(idx, v); err != nil {
return err
}
return queueExcessActiveBalance(ctx, s, idx)
return QueueExcessActiveBalance(s, idx)
}
return nil
}
// queueExcessActiveBalance
// QueueExcessActiveBalance
//
// Spec definition:
//
@@ -49,7 +48,7 @@ func SwitchToCompoundingValidator(ctx context.Context, s state.BeaconState, idx
// state.pending_balance_deposits.append(
// PendingBalanceDeposit(index=index, amount=excess_balance)
// )
func queueExcessActiveBalance(ctx context.Context, s state.BeaconState, idx primitives.ValidatorIndex) error {
func QueueExcessActiveBalance(s state.BeaconState, idx primitives.ValidatorIndex) error {
bal, err := s.BalanceAtIndex(idx)
if err != nil {
return err
@@ -80,7 +79,7 @@ func queueExcessActiveBalance(ctx context.Context, s state.BeaconState, idx prim
// )
//
//nolint:dupword
func QueueEntireBalanceAndResetValidator(ctx context.Context, s state.BeaconState, idx primitives.ValidatorIndex) error {
func QueueEntireBalanceAndResetValidator(s state.BeaconState, idx primitives.ValidatorIndex) error {
bal, err := s.BalanceAtIndex(idx)
if err != nil {
return err

View File

@@ -2,7 +2,6 @@ package electra_test
import (
"bytes"
"context"
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra"
@@ -11,6 +10,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
func TestSwitchToCompoundingValidator(t *testing.T) {
@@ -34,10 +34,10 @@ func TestSwitchToCompoundingValidator(t *testing.T) {
})
// Test that a validator with no withdrawal credentials cannot be switched to compounding.
require.NoError(t, err)
require.ErrorContains(t, "validator has no withdrawal credentials", electra.SwitchToCompoundingValidator(context.TODO(), s, 0))
require.ErrorContains(t, "validator has no withdrawal credentials", electra.SwitchToCompoundingValidator(s, 0))
// Test that a validator with withdrawal credentials can be switched to compounding.
require.NoError(t, electra.SwitchToCompoundingValidator(context.TODO(), s, 1))
require.NoError(t, electra.SwitchToCompoundingValidator(s, 1))
v, err := s.ValidatorAtIndex(1)
require.NoError(t, err)
require.Equal(t, true, bytes.HasPrefix(v.WithdrawalCredentials, []byte{params.BeaconConfig().CompoundingWithdrawalPrefixByte}), "withdrawal credentials were not updated")
@@ -50,7 +50,7 @@ func TestSwitchToCompoundingValidator(t *testing.T) {
require.Equal(t, 0, len(pbd), "pending balance deposits should be empty")
// Test that a validator with excess balance can be switched to compounding, excess balance is queued.
require.NoError(t, electra.SwitchToCompoundingValidator(context.TODO(), s, 2))
require.NoError(t, electra.SwitchToCompoundingValidator(s, 2))
b, err = s.BalanceAtIndex(2)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MinActivationBalance, b, "balance was not changed")
@@ -74,7 +74,7 @@ func TestQueueEntireBalanceAndResetValidator(t *testing.T) {
},
})
require.NoError(t, err)
require.NoError(t, electra.QueueEntireBalanceAndResetValidator(context.TODO(), s, 0))
require.NoError(t, electra.QueueEntireBalanceAndResetValidator(s, 0))
b, err := s.BalanceAtIndex(0)
require.NoError(t, err)
require.Equal(t, uint64(0), b, "balance was not changed")
@@ -88,3 +88,57 @@ func TestQueueEntireBalanceAndResetValidator(t *testing.T) {
require.Equal(t, params.BeaconConfig().MinActivationBalance+100_000, pbd[0].Amount, "pending balance deposit amount is incorrect")
require.Equal(t, primitives.ValidatorIndex(0), pbd[0].Index, "pending balance deposit index is incorrect")
}
func TestSwitchToCompoundingValidator_Ok(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, params.BeaconConfig().MaxValidatorsPerCommittee)
vals := st.Validators()
vals[0].WithdrawalCredentials = []byte{params.BeaconConfig().ETH1AddressWithdrawalPrefixByte}
require.NoError(t, st.SetValidators(vals))
bals := st.Balances()
bals[0] = params.BeaconConfig().MinActivationBalance + 1010
require.NoError(t, st.SetBalances(bals))
require.NoError(t, electra.SwitchToCompoundingValidator(st, 0))
pbd, err := st.PendingBalanceDeposits()
require.NoError(t, err)
require.Equal(t, uint64(1010), pbd[0].Amount) // the excess balance is appended to the end of the queue
val, err := st.ValidatorAtIndex(0)
require.NoError(t, err)
require.Equal(t, true, bytes.HasPrefix(val.WithdrawalCredentials, []byte{params.BeaconConfig().CompoundingWithdrawalPrefixByte}), "withdrawal credentials were not updated")
}
func TestQueueExcessActiveBalance_Ok(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, params.BeaconConfig().MaxValidatorsPerCommittee)
bals := st.Balances()
bals[0] = params.BeaconConfig().MinActivationBalance + 1000
require.NoError(t, st.SetBalances(bals))
err := electra.QueueExcessActiveBalance(st, 0)
require.NoError(t, err)
pbd, err := st.PendingBalanceDeposits()
require.NoError(t, err)
require.Equal(t, uint64(1000), pbd[0].Amount) // the excess balance is appended to the end of the queue
bals = st.Balances()
require.Equal(t, params.BeaconConfig().MinActivationBalance, bals[0])
}
func TestQueueEntireBalanceAndResetValidator_Ok(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, params.BeaconConfig().MaxValidatorsPerCommittee)
// Manually set the balance: after EIP-6110, genesis balances start at 0 and pending balance deposits are populated instead.
bals := st.Balances()
bals[0] = params.BeaconConfig().MinActivationBalance - 1000
require.NoError(t, st.SetBalances(bals))
err := electra.QueueEntireBalanceAndResetValidator(st, 0)
require.NoError(t, err)
pbd, err := st.PendingBalanceDeposits()
require.NoError(t, err)
require.Equal(t, 1, len(pbd))
require.Equal(t, params.BeaconConfig().MinActivationBalance-1000, pbd[0].Amount)
bal, err := st.BalanceAtIndex(0)
require.NoError(t, err)
require.Equal(t, uint64(0), bal)
}

View File

@@ -1,21 +1,33 @@
package electra
import (
"bytes"
"context"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/validators"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/time/slots"
log "github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// ProcessExecutionLayerWithdrawRequests processes the validator withdrawals from the provided execution payload
// ProcessWithdrawalRequests processes the validator withdrawals from the provided execution payload
// into the beacon state triggered by the execution layer.
//
// Spec pseudocode definition:
//
// def process_execution_layer_withdrawal_request(
// def process_withdrawal_request(
//
// state: BeaconState,
// execution_layer_withdrawal_request: ExecutionLayerWithdrawalRequest
// withdrawal_request: WithdrawalRequest
//
// ) -> None:
// amount = execution_layer_withdrawal_request.amount
@@ -74,7 +86,109 @@ import (
// amount=to_withdraw,
// withdrawable_epoch=withdrawable_epoch,
// ))
func ProcessExecutionLayerWithdrawRequests(ctx context.Context, st state.BeaconState, wrs []*enginev1.ExecutionLayerWithdrawalRequest) (state.BeaconState, error) {
// TODO: replace with real implementation
func ProcessWithdrawalRequests(ctx context.Context, st state.BeaconState, wrs []*enginev1.WithdrawalRequest) (state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "electra.ProcessWithdrawalRequests")
defer span.End()
currentEpoch := slots.ToEpoch(st.Slot())
for _, wr := range wrs {
if wr == nil {
return nil, errors.New("nil execution layer withdrawal request")
}
amount := wr.Amount
isFullExitRequest := amount == params.BeaconConfig().FullExitRequestAmount
// If the partial withdrawal queue is full, only full exits are processed
if n, err := st.NumPendingPartialWithdrawals(); err != nil {
return nil, err
} else if n == params.BeaconConfig().PendingPartialWithdrawalsLimit && !isFullExitRequest {
// if the PendingPartialWithdrawalsLimit is met, the user would have paid for a partial withdrawal that's not included
log.Debugln("Skipping execution layer withdrawal request, PendingPartialWithdrawalsLimit reached")
continue
}
vIdx, exists := st.ValidatorIndexByPubkey(bytesutil.ToBytes48(wr.ValidatorPubkey))
if !exists {
log.Debugf("Skipping execution layer withdrawal request, validator index for %s not found", hexutil.Encode(wr.ValidatorPubkey))
continue
}
validator, err := st.ValidatorAtIndex(vIdx)
if err != nil {
return nil, err
}
// Verify withdrawal credentials
hasCorrectCredential := helpers.HasExecutionWithdrawalCredentials(validator)
isCorrectSourceAddress := bytes.Equal(validator.WithdrawalCredentials[12:], wr.SourceAddress)
if !hasCorrectCredential || !isCorrectSourceAddress {
log.Debugln("Skipping execution layer withdrawal request, wrong withdrawal credentials")
continue
}
// Verify the validator is active.
if !helpers.IsActiveValidator(validator, currentEpoch) {
log.Debugln("Skipping execution layer withdrawal request, validator not active")
continue
}
// Verify the validator has not yet submitted an exit.
if validator.ExitEpoch != params.BeaconConfig().FarFutureEpoch {
log.Debugln("Skipping execution layer withdrawal request, validator has submitted an exit already")
continue
}
// Verify the validator has been active long enough.
if currentEpoch < validator.ActivationEpoch.AddEpoch(params.BeaconConfig().ShardCommitteePeriod) {
log.Debugln("Skipping execution layer withdrawal request, validator has not been active long enough")
continue
}
pendingBalanceToWithdraw, err := st.PendingBalanceToWithdraw(vIdx)
if err != nil {
return nil, err
}
if isFullExitRequest {
// Only exit validator if it has no pending withdrawals in the queue
if pendingBalanceToWithdraw == 0 {
maxExitEpoch, churn := validators.MaxExitEpochAndChurn(st)
var err error
st, _, err = validators.InitiateValidatorExit(ctx, st, vIdx, maxExitEpoch, churn)
if err != nil {
return nil, err
}
}
continue
}
hasSufficientEffectiveBalance := validator.EffectiveBalance >= params.BeaconConfig().MinActivationBalance
vBal, err := st.BalanceAtIndex(vIdx)
if err != nil {
return nil, err
}
hasExcessBalance := vBal > params.BeaconConfig().MinActivationBalance+pendingBalanceToWithdraw
// Only allow partial withdrawals with compounding withdrawal credentials
if helpers.HasCompoundingWithdrawalCredential(validator) && hasSufficientEffectiveBalance && hasExcessBalance {
// Spec definition:
// to_withdraw = min(
// state.balances[index] - MIN_ACTIVATION_BALANCE - pending_balance_to_withdraw,
// amount
// )
// note: you can safely subtract these values because hasExcessBalance is checked
toWithdraw := min(vBal-params.BeaconConfig().MinActivationBalance-pendingBalanceToWithdraw, amount)
exitQueueEpoch, err := st.ExitEpochAndUpdateChurn(primitives.Gwei(toWithdraw))
if err != nil {
return nil, err
}
// Safely add the delay to avoid uint64 overflow.
withdrawableEpoch, err := exitQueueEpoch.SafeAddEpoch(params.BeaconConfig().MinValidatorWithdrawabilityDelay)
if err != nil {
return nil, errors.Wrap(err, "failed to add withdrawability delay to exit queue epoch")
}
if err := st.AppendPendingPartialWithdrawal(&ethpb.PendingPartialWithdrawal{
Index: vIdx,
Amount: toWithdraw,
WithdrawableEpoch: withdrawableEpoch,
}); err != nil {
return nil, err
}
}
}
return st, nil
}
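To make the new partial-withdrawal branch concrete, here is a self-contained sketch of the to_withdraw computation with made-up Gwei values; the constant stands in for MIN_ACTIVATION_BALANCE.
package main

import "fmt"

const minActivationBalance uint64 = 32_000_000_000 // MIN_ACTIVATION_BALANCE in Gwei

func main() {
	bal := minActivationBalance + 500 // validator's current balance
	pending := uint64(200)            // balance already queued for withdrawal
	requested := uint64(400)          // amount carried by the withdrawal request
	// to_withdraw = min(balance - MIN_ACTIVATION_BALANCE - pending_balance_to_withdraw, amount)
	toWithdraw := min(bal-minActivationBalance-pending, requested)
	fmt.Println(toWithdraw) // 300: capped by the excess balance, not by the request
}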

View File

@@ -0,0 +1,295 @@
package electra_test
import (
"context"
"testing"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/sirupsen/logrus"
"github.com/sirupsen/logrus/hooks/test"
)
func TestProcessWithdrawRequests(t *testing.T) {
logHook := test.NewGlobal()
source, err := hexutil.Decode("0xb20a608c624Ca5003905aA834De7156C68b2E1d0")
require.NoError(t, err)
st, _ := util.DeterministicGenesisStateElectra(t, 1)
currentSlot := primitives.Slot(uint64(params.BeaconConfig().SlotsPerEpoch)*uint64(params.BeaconConfig().ShardCommitteePeriod) + 1)
require.NoError(t, st.SetSlot(currentSlot))
val, err := st.ValidatorAtIndex(0)
require.NoError(t, err)
type args struct {
st state.BeaconState
wrs []*enginev1.WithdrawalRequest
}
tests := []struct {
name string
args args
wantFn func(t *testing.T, got state.BeaconState)
wantErr bool
}{
{
name: "happy path exit and withdrawal only",
args: args{
st: func() state.BeaconState {
preSt := st.Copy()
require.NoError(t, preSt.AppendPendingPartialWithdrawal(&eth.PendingPartialWithdrawal{
Index: 0,
Amount: params.BeaconConfig().FullExitRequestAmount,
WithdrawableEpoch: 0,
}))
v, err := preSt.ValidatorAtIndex(0)
require.NoError(t, err)
prefix := make([]byte, 12)
prefix[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
v.WithdrawalCredentials = append(prefix, source...)
require.NoError(t, preSt.SetValidators([]*eth.Validator{v}))
return preSt
}(),
wrs: []*enginev1.WithdrawalRequest{
{
SourceAddress: source,
ValidatorPubkey: bytesutil.SafeCopyBytes(val.PublicKey),
Amount: params.BeaconConfig().FullExitRequestAmount,
},
},
},
wantFn: func(t *testing.T, got state.BeaconState) {
wantPostSt := st.Copy()
v, err := wantPostSt.ValidatorAtIndex(0)
require.NoError(t, err)
prefix := make([]byte, 12)
prefix[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
v.WithdrawalCredentials = append(prefix, source...)
v.ExitEpoch = 261
v.WithdrawableEpoch = 517
require.NoError(t, wantPostSt.SetValidators([]*eth.Validator{v}))
require.NoError(t, wantPostSt.AppendPendingPartialWithdrawal(&eth.PendingPartialWithdrawal{
Index: 0,
Amount: params.BeaconConfig().FullExitRequestAmount,
WithdrawableEpoch: 0,
}))
_, err = wantPostSt.ExitEpochAndUpdateChurn(primitives.Gwei(v.EffectiveBalance))
require.NoError(t, err)
require.DeepEqual(t, wantPostSt.Validators(), got.Validators())
webc, err := wantPostSt.ExitBalanceToConsume()
require.NoError(t, err)
gebc, err := got.ExitBalanceToConsume()
require.NoError(t, err)
require.DeepEqual(t, webc, gebc)
weee, err := wantPostSt.EarliestExitEpoch()
require.NoError(t, err)
geee, err := got.EarliestExitEpoch()
require.NoError(t, err)
require.DeepEqual(t, weee, geee)
},
},
{
name: "happy path has compounding",
args: args{
st: func() state.BeaconState {
preSt := st.Copy()
require.NoError(t, preSt.AppendPendingPartialWithdrawal(&eth.PendingPartialWithdrawal{
Index: 0,
Amount: params.BeaconConfig().FullExitRequestAmount,
WithdrawableEpoch: 0,
}))
v, err := preSt.ValidatorAtIndex(0)
require.NoError(t, err)
prefix := make([]byte, 12)
prefix[0] = params.BeaconConfig().CompoundingWithdrawalPrefixByte
v.WithdrawalCredentials = append(prefix, source...)
require.NoError(t, preSt.SetValidators([]*eth.Validator{v}))
bal, err := preSt.BalanceAtIndex(0)
require.NoError(t, err)
bal += 200
require.NoError(t, preSt.SetBalances([]uint64{bal}))
require.NoError(t, preSt.AppendPendingPartialWithdrawal(&eth.PendingPartialWithdrawal{
Index: 0,
Amount: 100,
WithdrawableEpoch: 0,
}))
return preSt
}(),
wrs: []*enginev1.WithdrawalRequest{
{
SourceAddress: source,
ValidatorPubkey: bytesutil.SafeCopyBytes(val.PublicKey),
Amount: 100,
},
},
},
wantFn: func(t *testing.T, got state.BeaconState) {
wantPostSt := st.Copy()
v, err := wantPostSt.ValidatorAtIndex(0)
require.NoError(t, err)
prefix := make([]byte, 12)
prefix[0] = params.BeaconConfig().CompoundingWithdrawalPrefixByte
v.WithdrawalCredentials = append(prefix, source...)
require.NoError(t, wantPostSt.SetValidators([]*eth.Validator{v}))
bal, err := wantPostSt.BalanceAtIndex(0)
require.NoError(t, err)
bal += 200
require.NoError(t, wantPostSt.SetBalances([]uint64{bal}))
require.NoError(t, wantPostSt.AppendPendingPartialWithdrawal(&eth.PendingPartialWithdrawal{
Index: 0,
Amount: 0,
WithdrawableEpoch: 0,
}))
require.NoError(t, wantPostSt.AppendPendingPartialWithdrawal(&eth.PendingPartialWithdrawal{
Index: 0,
Amount: 100,
WithdrawableEpoch: 0,
}))
require.NoError(t, wantPostSt.AppendPendingPartialWithdrawal(&eth.PendingPartialWithdrawal{
Index: 0,
Amount: 100,
WithdrawableEpoch: 517,
}))
wnppw, err := wantPostSt.NumPendingPartialWithdrawals()
require.NoError(t, err)
gnppw, err := got.NumPendingPartialWithdrawals()
require.NoError(t, err)
require.Equal(t, wnppw, gnppw)
wece, err := wantPostSt.EarliestConsolidationEpoch()
require.NoError(t, err)
gece, err := got.EarliestConsolidationEpoch()
require.NoError(t, err)
require.Equal(t, wece, gece)
_, err = wantPostSt.ExitEpochAndUpdateChurn(primitives.Gwei(100))
require.NoError(t, err)
require.DeepEqual(t, wantPostSt.Validators(), got.Validators())
webc, err := wantPostSt.ExitBalanceToConsume()
require.NoError(t, err)
gebc, err := got.ExitBalanceToConsume()
require.NoError(t, err)
require.DeepEqual(t, webc, gebc)
},
},
{
name: "validator already submitted exit",
args: args{
st: func() state.BeaconState {
preSt := st.Copy()
v, err := preSt.ValidatorAtIndex(0)
require.NoError(t, err)
prefix := make([]byte, 12)
prefix[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
v.WithdrawalCredentials = append(prefix, source...)
v.ExitEpoch = 1000
require.NoError(t, preSt.SetValidators([]*eth.Validator{v}))
return preSt
}(),
wrs: []*enginev1.WithdrawalRequest{
{
SourceAddress: source,
ValidatorPubkey: bytesutil.SafeCopyBytes(val.PublicKey),
Amount: params.BeaconConfig().FullExitRequestAmount,
},
},
},
wantFn: func(t *testing.T, got state.BeaconState) {
wantPostSt := st.Copy()
v, err := wantPostSt.ValidatorAtIndex(0)
require.NoError(t, err)
prefix := make([]byte, 12)
prefix[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
v.WithdrawalCredentials = append(prefix, source...)
v.ExitEpoch = 1000
require.NoError(t, wantPostSt.SetValidators([]*eth.Validator{v}))
eee, err := got.EarliestExitEpoch()
require.NoError(t, err)
require.Equal(t, eee, primitives.Epoch(0))
require.DeepEqual(t, wantPostSt.Validators(), got.Validators())
},
},
{
name: "validator too new",
args: args{
st: func() state.BeaconState {
preSt := st.Copy()
require.NoError(t, preSt.SetSlot(0))
v, err := preSt.ValidatorAtIndex(0)
require.NoError(t, err)
prefix := make([]byte, 12)
prefix[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
v.WithdrawalCredentials = append(prefix, source...)
require.NoError(t, preSt.SetValidators([]*eth.Validator{v}))
return preSt
}(),
wrs: []*enginev1.WithdrawalRequest{
{
SourceAddress: source,
ValidatorPubkey: bytesutil.SafeCopyBytes(val.PublicKey),
Amount: params.BeaconConfig().FullExitRequestAmount,
},
},
},
wantFn: func(t *testing.T, got state.BeaconState) {
wantPostSt := st.Copy()
require.NoError(t, wantPostSt.SetSlot(0))
v, err := wantPostSt.ValidatorAtIndex(0)
require.NoError(t, err)
prefix := make([]byte, 12)
prefix[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
v.WithdrawalCredentials = append(prefix, source...)
require.NoError(t, wantPostSt.SetValidators([]*eth.Validator{v}))
eee, err := got.EarliestExitEpoch()
require.NoError(t, err)
require.Equal(t, eee, primitives.Epoch(0))
require.DeepEqual(t, wantPostSt.Validators(), got.Validators())
},
},
{
name: "PendingPartialWithdrawalsLimit reached with partial withdrawal results in a skip",
args: args{
st: func() state.BeaconState {
cfg := params.BeaconConfig().Copy()
cfg.PendingPartialWithdrawalsLimit = 1
params.OverrideBeaconConfig(cfg)
logrus.SetLevel(logrus.DebugLevel)
preSt := st.Copy()
require.NoError(t, preSt.AppendPendingPartialWithdrawal(&eth.PendingPartialWithdrawal{
Index: 0,
Amount: params.BeaconConfig().FullExitRequestAmount,
WithdrawableEpoch: 0,
}))
return preSt
}(),
wrs: []*enginev1.WithdrawalRequest{
{
SourceAddress: source,
ValidatorPubkey: bytesutil.SafeCopyBytes(val.PublicKey),
Amount: 100,
},
},
},
wantFn: func(t *testing.T, got state.BeaconState) {
assert.LogsContain(t, logHook, "Skipping execution layer withdrawal request, PendingPartialWithdrawalsLimit reached")
params.SetupTestConfigCleanup(t)
logrus.SetLevel(logrus.InfoLevel) // reset
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := electra.ProcessWithdrawalRequests(context.Background(), tt.args.st, tt.args.wrs)
if (err != nil) != tt.wantErr {
t.Errorf("ProcessWithdrawalRequests() error = %v, wantErr %v", err, tt.wantErr)
return
}
tt.wantFn(t, got)
})
}
}

View File

@@ -32,9 +32,6 @@ const (
// AttesterSlashingReceived is sent after an attester slashing is received from gossip or rpc
AttesterSlashingReceived = 8
// DataColumnSidecarReceived is sent after a data column sidecar is received from gossip or rpc.
DataColumnSidecarReceived = 9
)
// UnAggregatedAttReceivedData is the data sent with UnaggregatedAttReceived events.
@@ -80,7 +77,3 @@ type ProposerSlashingReceivedData struct {
type AttesterSlashingReceivedData struct {
AttesterSlashing ethpb.AttSlashing
}
type DataColumnSidecarReceivedData struct {
DataColumn *blocks.VerifiedRODataColumn
}

View File

@@ -10,6 +10,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/hash"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
prysmTime "github.com/prysmaticlabs/prysm/v5/time"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
@@ -91,6 +92,14 @@ func IsAggregated(attestation ethpb.Att) bool {
//
// return uint64((committees_since_epoch_start + committee_index) % ATTESTATION_SUBNET_COUNT)
func ComputeSubnetForAttestation(activeValCount uint64, att ethpb.Att) uint64 {
if att.Version() >= version.Electra {
committeeIndex := 0
committeeIndices := att.CommitteeBitsVal().BitIndices()
if len(committeeIndices) > 0 {
committeeIndex = committeeIndices[0]
}
return ComputeSubnetFromCommitteeAndSlot(activeValCount, primitives.CommitteeIndex(committeeIndex), att.GetData().Slot)
}
return ComputeSubnetFromCommitteeAndSlot(activeValCount, att.GetData().CommitteeIndex, att.GetData().Slot)
}
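The subnet formula itself is unchanged across forks; only the source of the committee index differs. A standalone sketch of the spec computation, assuming one committee per slot and 32 slots per epoch, which reproduces the subnet 6 expected by the adjacent test:
package main

import "fmt"

const attestationSubnetCount = 64 // ATTESTATION_SUBNET_COUNT

// computeSubnet mirrors compute_subnet_for_attestation from the spec.
func computeSubnet(committeesPerSlot, slotsPerEpoch, slot, committeeIndex uint64) uint64 {
	slotsSinceEpochStart := slot % slotsPerEpoch
	committeesSinceEpochStart := committeesPerSlot * slotsSinceEpochStart
	return (committeesSinceEpochStart + committeeIndex) % attestationSubnetCount
}

func main() {
	// Slot 34 is the third slot of its epoch (34 % 32 == 2).
	fmt.Println(computeSubnet(1, 32, 34, 4)) // (1*2 + 4) % 64 = 6
}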

View File

@@ -73,21 +73,37 @@ func TestAttestation_ComputeSubnetForAttestation(t *testing.T) {
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
require.NoError(t, err)
att := &ethpb.Attestation{
AggregationBits: []byte{'A'},
Data: &ethpb.AttestationData{
Slot: 34,
CommitteeIndex: 4,
BeaconBlockRoot: []byte{'C'},
Source: nil,
Target: nil,
},
Signature: []byte{'B'},
}
valCount, err := helpers.ActiveValidatorCount(context.Background(), state, slots.ToEpoch(att.Data.Slot))
valCount, err := helpers.ActiveValidatorCount(context.Background(), state, slots.ToEpoch(34))
require.NoError(t, err)
sub := helpers.ComputeSubnetForAttestation(valCount, att)
assert.Equal(t, uint64(6), sub, "Did not get correct subnet for attestation")
t.Run("Phase 0", func(t *testing.T) {
att := &ethpb.Attestation{
AggregationBits: []byte{'A'},
Data: &ethpb.AttestationData{
Slot: 34,
CommitteeIndex: 4,
BeaconBlockRoot: []byte{'C'},
},
Signature: []byte{'B'},
}
sub := helpers.ComputeSubnetForAttestation(valCount, att)
assert.Equal(t, uint64(6), sub, "Did not get correct subnet for attestation")
})
t.Run("Electra", func(t *testing.T) {
cb := primitives.NewAttestationCommitteeBits()
cb.SetBitAt(4, true)
att := &ethpb.AttestationElectra{
AggregationBits: []byte{'A'},
CommitteeBits: cb,
Data: &ethpb.AttestationData{
Slot: 34,
BeaconBlockRoot: []byte{'C'},
},
Signature: []byte{'B'},
}
sub := helpers.ComputeSubnetForAttestation(valCount, att)
assert.Equal(t, uint64(6), sub, "Did not get correct subnet for attestation")
})
}
func Test_ValidateAttestationTime(t *testing.T) {

View File

@@ -500,11 +500,11 @@ func LastActivatedValidatorIndex(ctx context.Context, st state.ReadOnlyBeaconSta
// hasETH1WithdrawalCredential returns whether the validator has an ETH1
// Withdrawal prefix. It assumes that the caller has a lock on the state
func HasETH1WithdrawalCredential(val *ethpb.Validator) bool {
func HasETH1WithdrawalCredential(val interfaces.WithWithdrawalCredentials) bool {
if val == nil {
return false
}
return isETH1WithdrawalCredential(val.WithdrawalCredentials)
return isETH1WithdrawalCredential(val.GetWithdrawalCredentials())
}
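The signature change above swaps the concrete validator type for a small read-only interface, so any view exposing withdrawal credentials can be checked. A sketch of the pattern with illustrative types; 0x01 is the spec's ETH1_ADDRESS_WITHDRAWAL_PREFIX:
package main

import "fmt"

// withWithdrawalCredentials is an illustrative stand-in for the
// interfaces.WithWithdrawalCredentials read-only view.
type withWithdrawalCredentials interface {
	GetWithdrawalCredentials() []byte
}

type validator struct{ creds []byte }

func (v *validator) GetWithdrawalCredentials() []byte { return v.creds }

func hasETH1Prefix(val withWithdrawalCredentials) bool {
	if val == nil {
		return false
	}
	creds := val.GetWithdrawalCredentials()
	return len(creds) > 0 && creds[0] == 0x01
}

func main() {
	fmt.Println(hasETH1Prefix(&validator{creds: []byte{0x01}})) // true
	fmt.Println(hasETH1Prefix(&validator{creds: []byte{0x00}})) // false
}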
func isETH1WithdrawalCredential(creds []byte) bool {
@@ -674,68 +674,3 @@ func ValidatorMaxEffectiveBalance(val *ethpb.Validator) uint64 {
}
return params.BeaconConfig().MinActivationBalance
}
// QueueExcessActiveBalance queues validators with balances above the min activation balance and adds to pending balance deposit.
//
// Spec definition:
//
// def queue_excess_active_balance(state: BeaconState, index: ValidatorIndex) -> None:
// balance = state.balances[index]
// if balance > MIN_ACTIVATION_BALANCE:
// excess_balance = balance - MIN_ACTIVATION_BALANCE
// state.balances[index] = MIN_ACTIVATION_BALANCE
// state.pending_balance_deposits.append(
// PendingBalanceDeposit(index=index, amount=excess_balance)
// )
func QueueExcessActiveBalance(s state.BeaconState, idx primitives.ValidatorIndex) error {
bal, err := s.BalanceAtIndex(idx)
if err != nil {
return err
}
if bal > params.BeaconConfig().MinActivationBalance {
excessBalance := bal - params.BeaconConfig().MinActivationBalance
if err := s.UpdateBalancesAtIndex(idx, params.BeaconConfig().MinActivationBalance); err != nil {
return err
}
return s.AppendPendingBalanceDeposit(idx, excessBalance)
}
return nil
}
// QueueEntireBalanceAndResetValidator queues the entire balance and resets the validator. This is used in electra fork logic.
//
// Spec definition:
//
// def queue_entire_balance_and_reset_validator(state: BeaconState, index: ValidatorIndex) -> None:
// balance = state.balances[index]
// validator = state.validators[index]
// state.balances[index] = 0
// validator.effective_balance = 0
// validator.activation_eligibility_epoch = FAR_FUTURE_EPOCH
// state.pending_balance_deposits.append(
// PendingBalanceDeposit(index=index, amount=balance)
// )
func QueueEntireBalanceAndResetValidator(s state.BeaconState, idx primitives.ValidatorIndex) error {
bal, err := s.BalanceAtIndex(idx)
if err != nil {
return err
}
if err := s.UpdateBalancesAtIndex(idx, 0); err != nil {
return err
}
v, err := s.ValidatorAtIndex(idx)
if err != nil {
return err
}
v.EffectiveBalance = 0
v.ActivationEligibilityEpoch = params.BeaconConfig().FarFutureEpoch
if err := s.UpdateValidatorAtIndex(idx, v); err != nil {
return err
}
return s.AppendPendingBalanceDeposit(idx, bal)
}

View File

@@ -18,7 +18,6 @@ import (
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
func TestIsActiveValidator_OK(t *testing.T) {
@@ -1120,40 +1119,3 @@ func TestValidatorMaxEffectiveBalance(t *testing.T) {
// Sanity check that MinActivationBalance equals (pre-electra) MaxEffectiveBalance
assert.Equal(t, params.BeaconConfig().MinActivationBalance, params.BeaconConfig().MaxEffectiveBalance)
}
func TestQueueExcessActiveBalance_Ok(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, params.BeaconConfig().MaxValidatorsPerCommittee)
bals := st.Balances()
bals[0] = params.BeaconConfig().MinActivationBalance + 1000
require.NoError(t, st.SetBalances(bals))
err := helpers.QueueExcessActiveBalance(st, 0)
require.NoError(t, err)
pbd, err := st.PendingBalanceDeposits()
require.NoError(t, err)
require.Equal(t, uint64(1000), pbd[0].Amount)
bals = st.Balances()
require.Equal(t, params.BeaconConfig().MinActivationBalance, bals[0])
}
func TestQueueEntireBalanceAndResetValidator_Ok(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, params.BeaconConfig().MaxValidatorsPerCommittee)
val, err := st.ValidatorAtIndex(0)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MaxEffectiveBalance, val.EffectiveBalance)
pbd, err := st.PendingBalanceDeposits()
require.NoError(t, err)
require.Equal(t, 0, len(pbd))
err = helpers.QueueEntireBalanceAndResetValidator(st, 0)
require.NoError(t, err)
pbd, err = st.PendingBalanceDeposits()
require.NoError(t, err)
require.Equal(t, 1, len(pbd))
val, err = st.ValidatorAtIndex(0)
require.NoError(t, err)
require.Equal(t, uint64(0), val.EffectiveBalance)
}

View File

@@ -1,36 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["helpers.go"],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas",
visibility = ["//visibility:public"],
deps = [
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_ethereum_c_kzg_4844//bindings/go:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_holiman_uint256//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["helpers_test.go"],
deps = [
":go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//consensus-types/blocks:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_consensys_gnark_crypto//ecc/bls12-381/fr:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_ethereum_c_kzg_4844//bindings/go:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)

View File

@@ -1,302 +0,0 @@
package peerdas
import (
"encoding/binary"
"math"
cKzg4844 "github.com/ethereum/c-kzg-4844/bindings/go"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/holiman/uint256"
errors "github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/crypto/hash"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
// Bytes per cell
const bytesPerCell = cKzg4844.FieldElementsPerCell * cKzg4844.BytesPerFieldElement
var (
// Custom errors
errCustodySubnetCountTooLarge = errors.New("custody subnet count larger than data column sidecar subnet count")
errIndexTooLarge = errors.New("column index is larger than the specified number of columns")
errMismatchLength = errors.New("mismatch in the length of the commitments and proofs")
// maxUint256 is the maximum value of a uint256.
maxUint256 = &uint256.Int{math.MaxUint64, math.MaxUint64, math.MaxUint64, math.MaxUint64}
)
// CustodyColumnSubnets computes the subnets the node should participate in for custody.
func CustodyColumnSubnets(nodeId enode.ID, custodySubnetCount uint64) (map[uint64]bool, error) {
dataColumnSidecarSubnetCount := params.BeaconConfig().DataColumnSidecarSubnetCount
// Check if the custody subnet count is larger than the data column sidecar subnet count.
if custodySubnetCount > dataColumnSidecarSubnetCount {
return nil, errCustodySubnetCountTooLarge
}
// First, compute the subnet IDs that the node should participate in.
subnetIds := make(map[uint64]bool, custodySubnetCount)
one := uint256.NewInt(1)
for currentId := new(uint256.Int).SetBytes(nodeId.Bytes()); uint64(len(subnetIds)) < custodySubnetCount; currentId.Add(currentId, one) {
// Convert to big endian bytes.
currentIdBytesBigEndian := currentId.Bytes32()
// Convert to little endian.
currentIdBytesLittleEndian := bytesutil.ReverseByteOrder(currentIdBytesBigEndian[:])
// Hash the result.
hashedCurrentId := hash.Hash(currentIdBytesLittleEndian)
// Get the subnet ID.
subnetId := binary.LittleEndian.Uint64(hashedCurrentId[:8]) % dataColumnSidecarSubnetCount
// Add the subnet to the map.
subnetIds[subnetId] = true
// Overflow prevention.
if currentId.Cmp(maxUint256) == 0 {
currentId = uint256.NewInt(0)
}
}
return subnetIds, nil
}
// CustodyColumns computes the columns the node should custody.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/_features/eip7594/das-core.md#helper-functions
func CustodyColumns(nodeId enode.ID, custodySubnetCount uint64) (map[uint64]bool, error) {
dataColumnSidecarSubnetCount := params.BeaconConfig().DataColumnSidecarSubnetCount
// Compute the custodied subnets.
subnetIds, err := CustodyColumnSubnets(nodeId, custodySubnetCount)
if err != nil {
return nil, errors.Wrap(err, "custody subnets")
}
columnsPerSubnet := cKzg4844.CellsPerExtBlob / dataColumnSidecarSubnetCount
// Knowing the subnet ID and the number of columns per subnet, select all the columns the node should custody.
// Columns belonging to the same subnet recur at a stride of dataColumnSidecarSubnetCount.
columnIndices := make(map[uint64]bool, custodySubnetCount*columnsPerSubnet)
for i := uint64(0); i < columnsPerSubnet; i++ {
for subnetId := range subnetIds {
columnIndex := dataColumnSidecarSubnetCount*i + subnetId
columnIndices[columnIndex] = true
}
}
return columnIndices, nil
}
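The subnet-to-column mapping above can be checked by hand. With deliberately small, hypothetical parameters (4 subnets, 8 columns), a node custodying subnet 1 gets columns 1 and 5, one per stride of the subnet count:
package main

import "fmt"

func main() {
	// Hypothetical small parameters for illustration only.
	const subnetCount, numColumns uint64 = 4, 8
	columnsPerSubnet := numColumns / subnetCount
	subnetID := uint64(1)
	for i := uint64(0); i < columnsPerSubnet; i++ {
		fmt.Println(subnetCount*i + subnetID) // prints 1, then 5
	}
}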
// DataColumnSidecars computes the data column sidecars from the signed block and blobs.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/_features/eip7594/das-core.md#recover_matrix
func DataColumnSidecars(signedBlock interfaces.ReadOnlySignedBeaconBlock, blobs []cKzg4844.Blob) ([]*ethpb.DataColumnSidecar, error) {
blobsCount := len(blobs)
if blobsCount == 0 {
return nil, nil
}
// Get the signed block header.
signedBlockHeader, err := signedBlock.Header()
if err != nil {
return nil, errors.Wrap(err, "signed block header")
}
// Get the block body.
block := signedBlock.Block()
blockBody := block.Body()
// Get the blob KZG commitments.
blobKzgCommitments, err := blockBody.BlobKzgCommitments()
if err != nil {
return nil, errors.Wrap(err, "blob KZG commitments")
}
// Compute the KZG commitments inclusion proof.
kzgCommitmentsInclusionProof, err := blocks.MerkleProofKZGCommitments(blockBody)
if err != nil {
return nil, errors.Wrap(err, "merkle proof KZG commitments")
}
// Compute cells and proofs.
cells := make([][cKzg4844.CellsPerExtBlob]cKzg4844.Cell, 0, blobsCount)
proofs := make([][cKzg4844.CellsPerExtBlob]cKzg4844.KZGProof, 0, blobsCount)
for i := range blobs {
blob := &blobs[i]
blobCells, blobProofs, err := cKzg4844.ComputeCellsAndKZGProofs(blob)
if err != nil {
return nil, errors.Wrap(err, "compute cells and KZG proofs")
}
cells = append(cells, blobCells)
proofs = append(proofs, blobProofs)
}
// Get the column sidecars.
sidecars := make([]*ethpb.DataColumnSidecar, 0, cKzg4844.CellsPerExtBlob)
for columnIndex := uint64(0); columnIndex < cKzg4844.CellsPerExtBlob; columnIndex++ {
column := make([]cKzg4844.Cell, 0, blobsCount)
kzgProofOfColumn := make([]cKzg4844.KZGProof, 0, blobsCount)
for rowIndex := 0; rowIndex < blobsCount; rowIndex++ {
cell := cells[rowIndex][columnIndex]
column = append(column, cell)
kzgProof := proofs[rowIndex][columnIndex]
kzgProofOfColumn = append(kzgProofOfColumn, kzgProof)
}
columnBytes := make([][]byte, 0, blobsCount)
for i := range column {
cell := column[i]
cellBytes := make([]byte, 0, bytesPerCell)
for _, fieldElement := range cell {
copiedElem := fieldElement
cellBytes = append(cellBytes, copiedElem[:]...)
}
columnBytes = append(columnBytes, cellBytes)
}
kzgProofOfColumnBytes := make([][]byte, 0, blobsCount)
for _, kzgProof := range kzgProofOfColumn {
copiedProof := kzgProof
kzgProofOfColumnBytes = append(kzgProofOfColumnBytes, copiedProof[:])
}
sidecar := &ethpb.DataColumnSidecar{
ColumnIndex: columnIndex,
DataColumn: columnBytes,
KzgCommitments: blobKzgCommitments,
KzgProof: kzgProofOfColumnBytes,
SignedBlockHeader: signedBlockHeader,
KzgCommitmentsInclusionProof: kzgCommitmentsInclusionProof,
}
sidecars = append(sidecars, sidecar)
}
return sidecars, nil
}
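The nested loops above are a row-to-column transpose: cell [row][col] of each blob ends up in the sidecar for col. A toy sketch of the same reshaping, with strings in place of KZG cells:
package main

import "fmt"

func main() {
	// Toy matrix: 2 blobs (rows), 4 cells per extended blob (columns).
	cells := [][]string{
		{"a0", "a1", "a2", "a3"},
		{"b0", "b1", "b2", "b3"},
	}
	// Each sidecar gathers one column across all blobs.
	for col := 0; col < 4; col++ {
		column := make([]string, 0, len(cells))
		for row := range cells {
			column = append(column, cells[row][col])
		}
		fmt.Println(col, column) // 0 [a0 b0], 1 [a1 b1], ...
	}
}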
// DataColumnSidecarsForReconstruct is a TEMPORARY function until there is an official specification for it.
// It is scheduled for deletion.
func DataColumnSidecarsForReconstruct(
blobKzgCommitments [][]byte,
signedBlockHeader *ethpb.SignedBeaconBlockHeader,
kzgCommitmentsInclusionProof [][]byte,
blobs []cKzg4844.Blob,
) ([]*ethpb.DataColumnSidecar, error) {
blobsCount := len(blobs)
if blobsCount == 0 {
return nil, nil
}
// Compute cells and proofs.
cells := make([][cKzg4844.CellsPerExtBlob]cKzg4844.Cell, 0, blobsCount)
proofs := make([][cKzg4844.CellsPerExtBlob]cKzg4844.KZGProof, 0, blobsCount)
for i := range blobs {
blob := &blobs[i]
blobCells, blobProofs, err := cKzg4844.ComputeCellsAndKZGProofs(blob)
if err != nil {
return nil, errors.Wrap(err, "compute cells and KZG proofs")
}
cells = append(cells, blobCells)
proofs = append(proofs, blobProofs)
}
// Get the column sidecars.
sidecars := make([]*ethpb.DataColumnSidecar, 0, cKzg4844.CellsPerExtBlob)
for columnIndex := uint64(0); columnIndex < cKzg4844.CellsPerExtBlob; columnIndex++ {
column := make([]cKzg4844.Cell, 0, blobsCount)
kzgProofOfColumn := make([]cKzg4844.KZGProof, 0, blobsCount)
for rowIndex := 0; rowIndex < blobsCount; rowIndex++ {
cell := cells[rowIndex][columnIndex]
column = append(column, cell)
kzgProof := proofs[rowIndex][columnIndex]
kzgProofOfColumn = append(kzgProofOfColumn, kzgProof)
}
columnBytes := make([][]byte, 0, blobsCount)
for i := range column {
cell := column[i]
cellBytes := make([]byte, 0, bytesPerCell)
for _, fieldElement := range cell {
copiedElem := fieldElement
cellBytes = append(cellBytes, copiedElem[:]...)
}
columnBytes = append(columnBytes, cellBytes)
}
kzgProofOfColumnBytes := make([][]byte, 0, blobsCount)
for _, kzgProof := range kzgProofOfColumn {
copiedProof := kzgProof
kzgProofOfColumnBytes = append(kzgProofOfColumnBytes, copiedProof[:])
}
sidecar := &ethpb.DataColumnSidecar{
ColumnIndex: columnIndex,
DataColumn: columnBytes,
KzgCommitments: blobKzgCommitments,
KzgProof: kzgProofOfColumnBytes,
SignedBlockHeader: signedBlockHeader,
KzgCommitmentsInclusionProof: kzgCommitmentsInclusionProof,
}
sidecars = append(sidecars, sidecar)
}
return sidecars, nil
}
// VerifyDataColumnSidecarKZGProofs verifies the provided KZG Proofs for the particular
// data column.
func VerifyDataColumnSidecarKZGProofs(sc *ethpb.DataColumnSidecar) (bool, error) {
if sc.ColumnIndex >= params.BeaconConfig().NumberOfColumns {
return false, errIndexTooLarge
}
if len(sc.DataColumn) != len(sc.KzgCommitments) || len(sc.KzgCommitments) != len(sc.KzgProof) {
return false, errMismatchLength
}
blobsCount := len(sc.DataColumn)
rowIdx := make([]uint64, 0, blobsCount)
colIdx := make([]uint64, 0, blobsCount)
for i := 0; i < len(sc.DataColumn); i++ {
copiedI := uint64(i)
rowIdx = append(rowIdx, copiedI)
colI := sc.ColumnIndex
colIdx = append(colIdx, colI)
}
ckzgComms := make([]cKzg4844.Bytes48, 0, len(sc.KzgCommitments))
for _, com := range sc.KzgCommitments {
ckzgComms = append(ckzgComms, cKzg4844.Bytes48(com))
}
var cells []cKzg4844.Cell
for _, ce := range sc.DataColumn {
var newCell []cKzg4844.Bytes32
for i := 0; i < len(ce); i += 32 {
newCell = append(newCell, cKzg4844.Bytes32(ce[i:i+32]))
}
cells = append(cells, cKzg4844.Cell(newCell))
}
var proofs []cKzg4844.Bytes48
for _, p := range sc.KzgProof {
proofs = append(proofs, cKzg4844.Bytes48(p))
}
return cKzg4844.VerifyCellKZGProofBatch(ckzgComms, rowIdx, colIdx, cells, proofs)
}

View File

@@ -1,91 +0,0 @@
package peerdas_test
import (
"bytes"
"crypto/sha256"
"encoding/binary"
"fmt"
"testing"
"github.com/consensys/gnark-crypto/ecc/bls12-381/fr"
GoKZG "github.com/crate-crypto/go-kzg-4844"
ckzg4844 "github.com/ethereum/c-kzg-4844/bindings/go"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/kzg"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/sirupsen/logrus"
)
func deterministicRandomness(seed int64) [32]byte {
// Converts an int64 to a byte slice
buf := new(bytes.Buffer)
err := binary.Write(buf, binary.BigEndian, seed)
if err != nil {
logrus.WithError(err).Error("Failed to write int64 to bytes buffer")
return [32]byte{}
}
bytes := buf.Bytes()
return sha256.Sum256(bytes)
}
// Returns a serialized random field element in big-endian
func GetRandFieldElement(seed int64) [32]byte {
bytes := deterministicRandomness(seed)
var r fr.Element
r.SetBytes(bytes[:])
return GoKZG.SerializeScalar(r)
}
// Returns a random blob using the passed seed as entropy
func GetRandBlob(seed int64) ckzg4844.Blob {
var blob ckzg4844.Blob
bytesPerBlob := GoKZG.ScalarsPerBlob * GoKZG.SerializedScalarSize
for i := 0; i < bytesPerBlob; i += GoKZG.SerializedScalarSize {
fieldElementBytes := GetRandFieldElement(seed + int64(i))
copy(blob[i:i+GoKZG.SerializedScalarSize], fieldElementBytes[:])
}
return blob
}
func GenerateCommitmentAndProof(blob ckzg4844.Blob) (ckzg4844.KZGCommitment, ckzg4844.KZGProof, error) {
commitment, err := ckzg4844.BlobToKZGCommitment(&blob)
if err != nil {
return ckzg4844.KZGCommitment{}, ckzg4844.KZGProof{}, err
}
proof, err := ckzg4844.ComputeBlobKZGProof(&blob, ckzg4844.Bytes48(commitment))
if err != nil {
return ckzg4844.KZGCommitment{}, ckzg4844.KZGProof{}, err
}
return commitment, proof, err
}
func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {
dbBlock := util.NewBeaconBlockDeneb()
require.NoError(t, kzg.Start())
comms := [][]byte{}
blobs := []ckzg4844.Blob{}
for i := int64(0); i < 6; i++ {
blob := GetRandBlob(i)
commitment, _, err := GenerateCommitmentAndProof(blob)
require.NoError(t, err)
comms = append(comms, commitment[:])
blobs = append(blobs, blob)
}
dbBlock.Block.Body.BlobKzgCommitments = comms
sBlock, err := blocks.NewSignedBeaconBlock(dbBlock)
require.NoError(t, err)
sCars, err := peerdas.DataColumnSidecars(sBlock, blobs)
require.NoError(t, err)
for i, sidecar := range sCars {
verified, err := peerdas.VerifyDataColumnSidecarKZGProofs(sidecar)
require.NoError(t, err)
require.Equal(t, true, verified, fmt.Sprintf("sidecar %d failed", i))
}
}

View File

@@ -226,7 +226,7 @@ func TestProcessEpoch_BadBalanceAltair(t *testing.T) {
epochParticipation[0] = participation
assert.NoError(t, s.SetCurrentParticipationBits(epochParticipation))
assert.NoError(t, s.SetPreviousParticipationBits(epochParticipation))
_, err = altair.ProcessEpoch(context.Background(), s)
err = altair.ProcessEpoch(context.Background(), s)
assert.ErrorContains(t, "addition overflows", err)
}

View File

@@ -216,7 +216,7 @@ func TestProcessEpoch_BadBalanceBellatrix(t *testing.T) {
epochParticipation[0] = participation
assert.NoError(t, s.SetCurrentParticipationBits(epochParticipation))
assert.NoError(t, s.SetPreviousParticipationBits(epochParticipation))
_, err = altair.ProcessEpoch(context.Background(), s)
err = altair.ProcessEpoch(context.Background(), s)
assert.ErrorContains(t, "addition overflows", err)
}

View File

@@ -257,14 +257,12 @@ func ProcessSlots(ctx context.Context, state state.BeaconState, slot primitives.
return nil, errors.Wrap(err, "could not process epoch with optimizations")
}
} else if state.Version() <= version.Deneb {
state, err = altair.ProcessEpoch(ctx, state)
if err != nil {
if err = altair.ProcessEpoch(ctx, state); err != nil {
tracing.AnnotateError(span, err)
return nil, errors.Wrap(err, fmt.Sprintf("could not process %s epoch", version.String(state.Version())))
}
} else {
state, err = electra.ProcessEpoch(ctx, state)
if err != nil {
if err = electra.ProcessEpoch(ctx, state); err != nil {
tracing.AnnotateError(span, err)
return nil, errors.Wrap(err, fmt.Sprintf("could not process %s epoch", version.String(state.Version())))
}

View File

@@ -227,7 +227,7 @@ func ProcessBlockNoVerifyAnySig(
// def process_operations(state: BeaconState, body: BeaconBlockBody) -> None:
// # [Modified in Electra:EIP6110]
// # Disable former deposit mechanism once all prior deposits are processed
// eth1_deposit_index_limit = min(state.eth1_data.deposit_count, state.deposit_receipts_start_index)
// eth1_deposit_index_limit = min(state.eth1_data.deposit_count, state.deposit_requests_start_index)
// if state.eth1_deposit_index < eth1_deposit_index_limit:
// assert len(body.deposits) == min(MAX_DEPOSITS, eth1_deposit_index_limit - state.eth1_deposit_index)
// else:
@@ -245,7 +245,7 @@ func ProcessBlockNoVerifyAnySig(
// for_ops(body.bls_to_execution_changes, process_bls_to_execution_change)
// # [New in Electra:EIP7002:EIP7251]
// for_ops(body.execution_payload.withdrawal_requests, process_execution_layer_withdrawal_request)
// for_ops(body.execution_payload.deposit_receipts, process_deposit_receipt) # [New in Electra:EIP6110]
// for_ops(body.execution_payload.deposit_requests, process_deposit_requests) # [New in Electra:EIP6110]
// for_ops(body.consolidations, process_consolidation) # [New in Electra:EIP7251]
func ProcessOperationsNoVerifyAttsSigs(
ctx context.Context,
@@ -401,7 +401,7 @@ func VerifyBlobCommitmentCount(blk interfaces.ReadOnlyBeaconBlock) error {
// def process_operations(state: BeaconState, body: BeaconBlockBody) -> None:
// # [Modified in Electra:EIP6110]
// # Disable former deposit mechanism once all prior deposits are processed
// eth1_deposit_index_limit = min(state.eth1_data.deposit_count, state.deposit_receipts_start_index)
// eth1_deposit_index_limit = min(state.eth1_data.deposit_count, state.deposit_requests_start_index)
// if state.eth1_deposit_index < eth1_deposit_index_limit:
// assert len(body.deposits) == min(MAX_DEPOSITS, eth1_deposit_index_limit - state.eth1_deposit_index)
// else:
@@ -419,7 +419,7 @@ func VerifyBlobCommitmentCount(blk interfaces.ReadOnlyBeaconBlock) error {
// for_ops(body.bls_to_execution_changes, process_bls_to_execution_change)
// # [New in Electra:EIP7002:EIP7251]
// for_ops(body.execution_payload.withdrawal_requests, process_execution_layer_withdrawal_request)
// for_ops(body.execution_payload.deposit_receipts, process_deposit_receipt) # [New in Electra:EIP6110]
// for_ops(body.execution_payload.deposit_requests, process_deposit_requests) # [New in Electra:EIP6110]
// for_ops(body.consolidations, process_consolidation) # [New in Electra:EIP7251]
func electraOperations(
ctx context.Context,
@@ -445,11 +445,12 @@ func electraOperations(
if !ok {
return nil, errors.New("could not cast execution data to electra execution data")
}
st, err = electra.ProcessExecutionLayerWithdrawRequests(ctx, st, exe.WithdrawalRequests())
st, err = electra.ProcessWithdrawalRequests(ctx, st, exe.WithdrawalRequests())
if err != nil {
return nil, errors.Wrap(err, "could not process execution layer withdrawal requests")
}
st, err = electra.ProcessDepositReceipts(ctx, st, exe.DepositReceipts())
st, err = electra.ProcessDepositRequests(ctx, st, exe.DepositRequests()) // TODO: EIP-6110 deposit changes.
if err != nil {
return nil, errors.Wrap(err, "could not process deposit receipts")
}

View File

@@ -4,7 +4,6 @@ go_library(
name = "go_default_library",
srcs = [
"availability.go",
"availability_columns.go",
"cache.go",
"iface.go",
"mock.go",
@@ -21,7 +20,6 @@ go_library(
"//runtime/logging:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],

View File

@@ -1,151 +0,0 @@
package das
import (
"context"
"fmt"
"github.com/ethereum/go-ethereum/p2p/enode"
errors "github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
log "github.com/sirupsen/logrus"
)
// LazilyPersistentStoreColumn is an implementation of AvailabilityStore to be used when batch syncing data columns.
// This implementation will hold any data columns passed to PersistColumns until IsDataAvailable is called for their
// block, at which time they will undergo full verification and be saved to disk.
type LazilyPersistentStoreColumn struct {
store *filesystem.BlobStorage
cache *cache
verifier ColumnBatchVerifier
nodeID enode.ID
}
type ColumnBatchVerifier interface {
VerifiedRODataColumns(ctx context.Context, blk blocks.ROBlock, sc []blocks.RODataColumn) ([]blocks.VerifiedRODataColumn, error)
}
func NewLazilyPersistentStoreColumn(store *filesystem.BlobStorage, verifier ColumnBatchVerifier, id enode.ID) *LazilyPersistentStoreColumn {
return &LazilyPersistentStoreColumn{
store: store,
cache: newCache(),
verifier: verifier,
nodeID: id,
}
}
// TODO: Very ugly; change the interface to allow for both columns and blobs.
func (s *LazilyPersistentStoreColumn) Persist(current primitives.Slot, sc ...blocks.ROBlob) error {
return nil
}
// PersistColumns adds columns to the working column cache. Columns stored in this cache will be persisted
// for at least as long as the node is running. Once IsDataAvailable succeeds, all columns referenced
// by the given block are guaranteed to be persisted for the remainder of the retention period.
func (s *LazilyPersistentStoreColumn) PersistColumns(current primitives.Slot, sc ...blocks.RODataColumn) error {
if len(sc) == 0 {
return nil
}
if len(sc) > 1 {
first := sc[0].BlockRoot()
for i := 1; i < len(sc); i++ {
if first != sc[i].BlockRoot() {
return errMixedRoots
}
}
}
if !params.WithinDAPeriod(slots.ToEpoch(sc[0].Slot()), slots.ToEpoch(current)) {
return nil
}
key := keyFromColumn(sc[0])
entry := s.cache.ensure(key)
for i := range sc {
if err := entry.stashColumns(&sc[i]); err != nil {
return err
}
}
return nil
}
// IsDataAvailable returns nil if all the commitments in the given block are persisted to the db and have been verified.
// Sidecars already in the db are assumed to have been previously verified against the block.
func (s *LazilyPersistentStoreColumn) IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error {
blockCommitments, err := fullCommitmentsToCheck(b, current)
if err != nil {
return errors.Wrapf(err, "could not check data availability for block %#x", b.Root())
}
// Return early for blocks that are pre-deneb or which do not have any commitments.
if blockCommitments.count() == 0 {
return nil
}
key := keyFromBlock(b)
entry := s.cache.ensure(key)
defer s.cache.delete(key)
root := b.Root()
sumz, err := s.store.WaitForSummarizer(ctx)
if err != nil {
log.WithField("root", fmt.Sprintf("%#x", b.Root())).
WithError(err).
Debug("Failed to receive BlobStorageSummarizer within IsDataAvailable")
} else {
entry.setDiskSummary(sumz.Summary(root))
}
// Verify we have all the expected sidecars, and fail fast if any are missing or inconsistent.
// We don't try to salvage problematic batches because this indicates a misbehaving peer and we'd rather
// ignore their response and decrease their peer score.
sidecars, err := entry.filterColumns(root, blockCommitments)
if err != nil {
return errors.Wrap(err, "incomplete BlobSidecar batch")
}
// Do thorough verifications of each BlobSidecar for the block.
// Same as above, we don't save BlobSidecars if there are any problems with the batch.
vscs, err := s.verifier.VerifiedRODataColumns(ctx, b, sidecars)
if err != nil {
var me verification.VerificationMultiError
ok := errors.As(err, &me)
if ok {
fails := me.Failures()
lf := make(log.Fields, len(fails))
for i := range fails {
lf[fmt.Sprintf("fail_%d", i)] = fails[i].Error()
}
log.WithFields(lf).
Debug("invalid ColumnSidecars received")
}
return errors.Wrapf(err, "invalid ColumnSidecars received for block %#x", root)
}
// Ensure that each column sidecar is written to disk.
for i := range vscs {
if err := s.store.SaveDataColumn(vscs[i]); err != nil {
return errors.Wrapf(err, "failed to save ColumnSidecar index %d for block %#x", vscs[i].ColumnIndex, root)
}
}
// All ColumnSidecars are persisted - da check succeeds.
return nil
}
func fullCommitmentsToCheck(b blocks.ROBlock, current primitives.Slot) (safeCommitmentsArray, error) {
var ar safeCommitmentsArray
if b.Version() < version.Deneb {
return ar, nil
}
// We are only required to check within MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS
if !params.WithinDAPeriod(slots.ToEpoch(b.Block().Slot()), slots.ToEpoch(current)) {
return ar, nil
}
kc, err := b.Block().Body().BlobKzgCommitments()
if err != nil {
return ar, err
}
for i := range ar {
ar[i] = kc
}
return ar, nil
}

View File

@@ -2,7 +2,6 @@ package das
import (
"bytes"
"reflect"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
@@ -39,10 +38,6 @@ func keyFromSidecar(sc blocks.ROBlob) cacheKey {
return cacheKey{slot: sc.Slot(), root: sc.BlockRoot()}
}
func keyFromColumn(sc blocks.RODataColumn) cacheKey {
return cacheKey{slot: sc.Slot(), root: sc.BlockRoot()}
}
// keyFromBlock is a convenience method for constructing a cacheKey from a ROBlock value.
func keyFromBlock(b blocks.ROBlock) cacheKey {
return cacheKey{slot: b.Block().Slot(), root: b.Root()}
@@ -66,7 +61,6 @@ func (c *cache) delete(key cacheKey) {
// cacheEntry holds a fixed-length cache of BlobSidecars.
type cacheEntry struct {
scs [fieldparams.MaxBlobsPerBlock]*blocks.ROBlob
colScs [fieldparams.NumberOfColumns]*blocks.RODataColumn
diskSummary filesystem.BlobStorageSummary
}
@@ -88,17 +82,6 @@ func (e *cacheEntry) stash(sc *blocks.ROBlob) error {
return nil
}
func (e *cacheEntry) stashColumns(sc *blocks.RODataColumn) error {
if sc.ColumnIndex >= fieldparams.NumberOfColumns {
return errors.Wrapf(errIndexOutOfBounds, "index=%d", sc.ColumnIndex)
}
if e.colScs[sc.ColumnIndex] != nil {
return errors.Wrapf(ErrDuplicateSidecar, "root=%#x, index=%d, commitment=%#x", sc.BlockRoot(), sc.ColumnIndex, sc.KzgCommitments)
}
e.colScs[sc.ColumnIndex] = sc
return nil
}
// filter evicts sidecars that are not committed to by the block and returns custom
// errors if the cache is missing any of the commitments, or if the commitments in
// the cache do not match those found in the block. If err is nil, then all expected
@@ -134,35 +117,6 @@ func (e *cacheEntry) filter(root [32]byte, kc safeCommitmentArray) ([]blocks.ROB
return scs, nil
}
func (e *cacheEntry) filterColumns(root [32]byte, kc safeCommitmentsArray) ([]blocks.RODataColumn, error) {
if e.diskSummary.AllAvailable(kc.count()) {
return nil, nil
}
scs := make([]blocks.RODataColumn, 0, kc.count())
for i := uint64(0); i < fieldparams.NumberOfColumns; i++ {
// We already have this column; we don't need to write it or validate it.
if e.diskSummary.HasIndex(i) {
continue
}
if kc[i] == nil {
if e.colScs[i] != nil {
return nil, errors.Wrapf(errCommitmentMismatch, "root=%#x, index=%#x, commitment=%#x, no block commitment", root, i, e.colScs[i].KzgCommitments)
}
continue
}
if e.colScs[i] == nil {
return nil, errors.Wrapf(errMissingSidecar, "root=%#x, index=%#x", root, i)
}
if !reflect.DeepEqual(kc[i], e.colScs[i].KzgCommitments) {
return nil, errors.Wrapf(errCommitmentMismatch, "root=%#x, index=%#x, commitment=%#x, block commitment=%#x", root, i, e.colScs[i].KzgCommitments, kc[i])
}
scs = append(scs, *e.colScs[i])
}
return scs, nil
}
// safeCommitmentArray is a fixed size array of commitment byte slices. This is helpful for avoiding
// gratuitous bounds checks.
type safeCommitmentArray [fieldparams.MaxBlobsPerBlock][]byte
@@ -175,14 +129,3 @@ func (s safeCommitmentArray) count() int {
}
return fieldparams.MaxBlobsPerBlock
}
type safeCommitmentsArray [fieldparams.NumberOfColumns][][]byte
func (s safeCommitmentsArray) count() int {
for i := range s {
if s[i] == nil {
return i
}
}
return fieldparams.NumberOfColumns
}

View File

@@ -13,7 +13,6 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem",
visibility = ["//visibility:public"],
deps = [
"//async/event:go_default_library",
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",

View File

@@ -12,7 +12,6 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/async/event"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
@@ -40,15 +39,8 @@ const (
directoryPermissions = 0700
)
type (
// BlobStorageOption is a functional option for configuring a BlobStorage.
BlobStorageOption func(*BlobStorage) error
RootIndexPair struct {
Root [fieldparams.RootLength]byte
Index uint64
}
)
// BlobStorageOption is a functional option for configuring a BlobStorage.
type BlobStorageOption func(*BlobStorage) error
// WithBasePath is a required option that sets the base path of blob storage.
func WithBasePath(base string) BlobStorageOption {
@@ -78,10 +70,7 @@ func WithSaveFsync(fsync bool) BlobStorageOption {
// attempt to hold a file lock to guarantee exclusive control of the blob storage directory, so this should only be
// initialized once per beacon node.
func NewBlobStorage(opts ...BlobStorageOption) (*BlobStorage, error) {
b := &BlobStorage{
DataColumnFeed: new(event.Feed),
}
b := &BlobStorage{}
for _, o := range opts {
if err := o(b); err != nil {
return nil, errors.Wrap(err, "failed to create blob storage")
@@ -110,7 +99,6 @@ type BlobStorage struct {
fsync bool
fs afero.Fs
pruner *blobPruner
DataColumnFeed *event.Feed
}
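
As a usage note, the functional options above compose at construction time. A minimal caller sketch, assuming only the WithBasePath and WithSaveFsync options shown in this diff:

package main

import (
	"log"

	"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
)

func main() {
	// WithBasePath is required; WithSaveFsync opts into fsync-on-save durability.
	bs, err := filesystem.NewBlobStorage(
		filesystem.WithBasePath("/var/lib/prysm/blobs"),
		filesystem.WithSaveFsync(true),
	)
	if err != nil {
		log.Fatalf("could not create blob storage: %v", err)
	}
	_ = bs
}
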
// WarmCache runs the prune routine with an expiration slot of 0, so nothing will be pruned, but the pruner's cache
@@ -233,110 +221,6 @@ func (bs *BlobStorage) Save(sidecar blocks.VerifiedROBlob) error {
return nil
}
// SaveDataColumn saves a data column to our local filesystem.
func (bs *BlobStorage) SaveDataColumn(column blocks.VerifiedRODataColumn) error {
startTime := time.Now()
fname := namerForDataColumn(column)
sszPath := fname.path()
exists, err := afero.Exists(bs.fs, sszPath)
if err != nil {
return err
}
if exists {
log.Trace("Ignoring a duplicate data column sidecar save attempt")
return nil
}
if bs.pruner != nil {
hRoot, err := column.SignedBlockHeader.Header.HashTreeRoot()
if err != nil {
return err
}
if err := bs.pruner.notify(hRoot, column.SignedBlockHeader.Header.Slot, column.ColumnIndex); err != nil {
return errors.Wrapf(err, "problem maintaining pruning cache/metrics for sidecar with root=%#x", hRoot)
}
}
// Serialize the ethpb.DataColumnSidecar to binary data using SSZ.
sidecarData, err := column.MarshalSSZ()
if err != nil {
return errors.Wrap(err, "failed to serialize sidecar data")
} else if len(sidecarData) == 0 {
return errSidecarEmptySSZData
}
if err := bs.fs.MkdirAll(fname.dir(), directoryPermissions); err != nil {
return err
}
partPath := fname.partPath(fmt.Sprintf("%p", sidecarData))
partialMoved := false
// Ensure the partial file is deleted.
defer func() {
if partialMoved {
return
}
// Removing the partial file is expected to fail if the save succeeded, since it was renamed.
err = bs.fs.Remove(partPath)
if err == nil {
log.WithFields(logrus.Fields{
"partPath": partPath,
}).Debugf("Removed partial file")
}
}()
// Create a partial file and write the serialized data to it.
partialFile, err := bs.fs.Create(partPath)
if err != nil {
return errors.Wrap(err, "failed to create partial file")
}
n, err := partialFile.Write(sidecarData)
if err != nil {
closeErr := partialFile.Close()
if closeErr != nil {
return closeErr
}
return errors.Wrap(err, "failed to write to partial file")
}
if bs.fsync {
if err := partialFile.Sync(); err != nil {
return err
}
}
if err := partialFile.Close(); err != nil {
return err
}
if n != len(sidecarData) {
return fmt.Errorf("failed to write the full bytes of sidecarData, wrote only %d of %d bytes", n, len(sidecarData))
}
if n == 0 {
return errEmptyBlobWritten
}
// Atomically rename the partial file to its final name.
err = bs.fs.Rename(partPath, sszPath)
if err != nil {
return errors.Wrap(err, "failed to rename partial file to final name")
}
partialMoved = true
// Notify data column feed subscribers that a new data column has been saved.
bs.DataColumnFeed.Send(RootIndexPair{
Root: column.BlockRoot(),
Index: column.ColumnIndex,
})
// TODO: Use new metrics for data columns
blobsWrittenCounter.Inc()
blobSaveLatency.Observe(float64(time.Since(startTime).Milliseconds()))
return nil
}
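
SaveDataColumn above uses a write-to-partial-file-then-rename scheme so a crash never leaves a half-written sidecar at the final path. A generic sketch of that pattern using only the standard library (paths and helper names here are illustrative, not Prysm's):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// atomicWrite writes data to a temporary ".part" file and renames it into
// place, so readers never observe a partially written file at finalPath.
func atomicWrite(finalPath string, data []byte, fsync bool) error {
	partPath := finalPath + ".part"
	f, err := os.Create(partPath)
	if err != nil {
		return fmt.Errorf("create partial file: %w", err)
	}
	// Best-effort cleanup; Remove fails harmlessly after a successful rename.
	defer os.Remove(partPath)

	if _, err := f.Write(data); err != nil {
		f.Close()
		return fmt.Errorf("write partial file: %w", err)
	}
	if fsync {
		if err := f.Sync(); err != nil {
			f.Close()
			return err
		}
	}
	if err := f.Close(); err != nil {
		return err
	}
	// Rename is atomic on POSIX filesystems within the same volume.
	return os.Rename(partPath, finalPath)
}

func main() {
	dir, _ := os.MkdirTemp("", "blobs")
	p := filepath.Join(dir, "0.ssz")
	if err := atomicWrite(p, []byte{0x01}, true); err != nil {
		panic(err)
	}
}
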
// Get retrieves a single BlobSidecar by its root and index.
// Since BlobStorage only writes blobs that have undergone full verification, the return
// value is always a VerifiedROBlob.
@@ -362,20 +246,6 @@ func (bs *BlobStorage) Get(root [32]byte, idx uint64) (blocks.VerifiedROBlob, er
return verification.BlobSidecarNoop(ro)
}
// GetColumn retrieves a single DataColumnSidecar by its root and index.
func (bs *BlobStorage) GetColumn(root [32]byte, idx uint64) (*ethpb.DataColumnSidecar, error) {
expected := blobNamer{root: root, index: idx}
encoded, err := afero.ReadFile(bs.fs, expected.path())
if err != nil {
return nil, err
}
s := &ethpb.DataColumnSidecar{}
if err := s.UnmarshalSSZ(encoded); err != nil {
return nil, err
}
return s, nil
}
// Remove removes all blobs for a given root.
func (bs *BlobStorage) Remove(root [32]byte) error {
rootDir := blobNamer{root: root}.dir()
@@ -419,61 +289,6 @@ func (bs *BlobStorage) Indices(root [32]byte) ([fieldparams.MaxBlobsPerBlock]boo
return mask, nil
}
// ColumnIndices retrieves the stored column indices from the filesystem.
func (bs *BlobStorage) ColumnIndices(root [32]byte) (map[uint64]bool, error) {
custody := make(map[uint64]bool, fieldparams.NumberOfColumns)
// Get all the files in the directory.
rootDir := blobNamer{root: root}.dir()
entries, err := afero.ReadDir(bs.fs, rootDir)
if err != nil {
// If the directory does not exist, we do not custody any columns.
if os.IsNotExist(err) {
return nil, nil
}
return nil, errors.Wrap(err, "read directory")
}
// Iterate over all the entries in the directory.
for _, entry := range entries {
// If the entry is a directory, skip it.
if entry.IsDir() {
continue
}
// If the entry does not have the correct extension, skip it.
name := entry.Name()
if !strings.HasSuffix(name, sszExt) {
continue
}
// The file should be in the `<index>.<extension>` format.
// Skip the file if it does not match the format.
parts := strings.Split(name, ".")
if len(parts) != 2 {
continue
}
// Get the column index from the file name.
columnIndexStr := parts[0]
columnIndex, err := strconv.ParseUint(columnIndexStr, 10, 64)
if err != nil {
return nil, errors.Wrapf(err, "unexpected directory entry breaks listing, %s", parts[0])
}
// If the column index is out of bounds, return an error.
if columnIndex >= fieldparams.NumberOfColumns {
return nil, errors.Wrapf(errIndexOutOfBounds, "invalid index %d", columnIndex)
}
// Mark the column index as in custody.
custody[columnIndex] = true
}
return custody, nil
}
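
The directory walk above recovers the column index purely from the `<index>.ssz` filename. A stripped-down sketch of the same parse using only the standard library (numberOfColumns again stands in for fieldparams.NumberOfColumns):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

const numberOfColumns = 128 // stand-in for fieldparams.NumberOfColumns

// parseColumnIndex extracts the column index from an "<index>.ssz" filename,
// rejecting anything that does not match the expected layout or range.
func parseColumnIndex(name string) (uint64, bool) {
	parts := strings.Split(name, ".")
	if len(parts) != 2 || parts[1] != "ssz" {
		return 0, false
	}
	idx, err := strconv.ParseUint(parts[0], 10, 64)
	if err != nil || idx >= numberOfColumns {
		return 0, false
	}
	return idx, true
}

func main() {
	for _, name := range []string{"17.ssz", "17.part", "999.ssz", "x.ssz"} {
		idx, ok := parseColumnIndex(name)
		fmt.Println(name, idx, ok)
	}
}
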
// Clear deletes all files on the filesystem.
func (bs *BlobStorage) Clear() error {
dirs, err := listDir(bs.fs, ".")
@@ -506,10 +321,6 @@ func namerForSidecar(sc blocks.VerifiedROBlob) blobNamer {
return blobNamer{root: sc.BlockRoot(), index: sc.Index}
}
func namerForDataColumn(col blocks.VerifiedRODataColumn) blobNamer {
return blobNamer{root: col.BlockRoot(), index: col.ColumnIndex}
}
func (p blobNamer) dir() string {
return rootString(p.root)
}


@@ -9,7 +9,7 @@ import (
)
// blobIndexMask is a bitmask representing the set of blob indices that are currently set.
type blobIndexMask [fieldparams.NumberOfColumns]bool
type blobIndexMask [fieldparams.MaxBlobsPerBlock]bool
// BlobStorageSummary represents cached information about the BlobSidecars on disk for each root the cache knows about.
type BlobStorageSummary struct {
@@ -68,12 +68,9 @@ func (s *blobStorageCache) Summary(root [32]byte) BlobStorageSummary {
}
func (s *blobStorageCache) ensure(key [32]byte, slot primitives.Slot, idx uint64) error {
// TODO: Separate blob index checks from data column index checks
/*
if idx >= fieldparams.MaxBlobsPerBlock {
return errIndexOutOfBounds
}
*/
if idx >= fieldparams.MaxBlobsPerBlock {
return errIndexOutOfBounds
}
s.mu.Lock()
defer s.mu.Unlock()
v := s.cache[key]


@@ -9,7 +9,6 @@ import (
)
func TestSlotByRoot_Summary(t *testing.T) {
t.Skip("Use new test for data columns")
var noneSet, allSet, firstSet, lastSet, oneSet blobIndexMask
firstSet[0] = true
lastSet[len(lastSet)-1] = true


@@ -92,14 +92,16 @@ func windowMin(latest, offset primitives.Slot) primitives.Slot {
func (p *blobPruner) warmCache() error {
p.Lock()
defer p.Unlock()
defer func() {
if !p.warmed {
p.warmed = true
close(p.cacheReady)
}
p.Unlock()
}()
if err := p.prune(0); err != nil {
return err
}
if !p.warmed {
p.warmed = true
close(p.cacheReady)
}
return nil
}
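
The revision above moves the readiness signal into a defer, so goroutines waiting on cacheReady are released even when the initial prune fails. A minimal sketch of that signal-on-any-exit pattern (locking elided):

package main

import (
	"errors"
	"fmt"
)

type pruner struct {
	warmed     bool
	cacheReady chan struct{}
}

// warm signals readiness on every exit path, success or failure, so that
// goroutines blocked on cacheReady are never stranded by an early error.
func (p *pruner) warm(doPrune func() error) error {
	defer func() {
		if !p.warmed {
			p.warmed = true
			close(p.cacheReady)
		}
	}()
	return doPrune()
}

func main() {
	p := &pruner{cacheReady: make(chan struct{})}
	err := p.warm(func() error { return errors.New("boom") })
	<-p.cacheReady // does not block even though warm-up failed
	fmt.Println(err)
}
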


@@ -2,16 +2,19 @@ package filesystem
import (
"bytes"
"context"
"fmt"
"math"
"os"
"path"
"sort"
"testing"
"time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/spf13/afero"
@@ -34,6 +37,34 @@ func TestTryPruneDir_CachedNotExpired(t *testing.T) {
require.Equal(t, 0, pruned)
}
func TestCacheWarmFail(t *testing.T) {
fs := afero.NewMemMapFs()
n := blobNamer{root: bytesutil.ToBytes32([]byte("derp")), index: 0}
bp := n.path()
mkdir := path.Dir(bp)
require.NoError(t, fs.MkdirAll(mkdir, directoryPermissions))
// Create an empty blob index in the fs by touching the file at a seemingly valid path.
fi, err := fs.Create(bp)
require.NoError(t, err)
require.NoError(t, fi.Close())
// Cache warm should fail due to the unexpected EOF.
pr, err := newBlobPruner(fs, 0)
require.NoError(t, err)
require.ErrorIs(t, pr.warmCache(), errPruningFailures)
// The cache warm has finished, so calling waitForCache with a super short deadline
// should not block or hit the context deadline.
ctx := context.Background()
ctx, cancel := context.WithDeadline(ctx, time.Now().Add(1*time.Millisecond))
defer cancel()
c, err := pr.waitForCache(ctx)
// We will get an error and a nil value for the cache if we hit the deadline.
require.NoError(t, err)
require.NotNil(t, c)
}
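
The non-blocking behavior this test relies on comes from selecting between the readiness channel and the context. A sketch of a waitForCache-style helper under that assumption (the real signature and cache type are assumptions here):

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

type pruner struct {
	cacheReady chan struct{}
	cache      *int // toy stand-in for the real cache type
}

// waitForCache blocks until the cache is warmed or the context expires.
func (p *pruner) waitForCache(ctx context.Context) (*int, error) {
	select {
	case <-p.cacheReady:
		return p.cache, nil
	case <-ctx.Done():
		return nil, errors.New("timed out waiting for cache warm-up")
	}
}

func main() {
	v := 42
	p := &pruner{cacheReady: make(chan struct{}), cache: &v}
	close(p.cacheReady) // warm-up already finished
	ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(time.Millisecond))
	defer cancel()
	c, err := p.waitForCache(ctx)
	fmt.Println(*c, err) // 42 <nil>: returns immediately since cacheReady is already closed
}
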
func TestTryPruneDir_CachedExpired(t *testing.T) {
t.Run("empty directory", func(t *testing.T) {
fs := afero.NewMemMapFs()


@@ -23,10 +23,10 @@ import (
"go.opencensus.io/trace"
)
// Used to represent errors for inconsistent slot ranges.
// used to represent errors for inconsistent slot ranges.
var errInvalidSlotRange = errors.New("invalid end slot and start slot provided")
// Block retrieval by root. Return nil if block is not found.
// Block retrieval by root.
func (s *Store) Block(ctx context.Context, blockRoot [32]byte) (interfaces.ReadOnlySignedBeaconBlock, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.Block")
defer span.End()


@@ -149,7 +149,7 @@ func TestState_CanSaveRetrieve(t *testing.T) {
BlockHash: make([]byte, 32),
TransactionsRoot: make([]byte, 32),
WithdrawalsRoot: make([]byte, 32),
DepositReceiptsRoot: make([]byte, 32),
DepositRequestsRoot: make([]byte, 32),
WithdrawalRequestsRoot: make([]byte, 32),
})
require.NoError(t, err)


@@ -631,7 +631,7 @@ func fullPayloadFromPayloadBody(
Withdrawals: body.Withdrawals,
ExcessBlobGas: ebg,
BlobGasUsed: bgu,
DepositReceipts: dr,
DepositRequests: dr,
WithdrawalRequests: wr,
}) // We can't get the block value and don't care about the block value for this instance
default:
@@ -780,8 +780,8 @@ func buildEmptyExecutionPayload(v int) (proto.Message, error) {
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
Withdrawals: make([]*pb.Withdrawal, 0),
WithdrawalRequests: make([]*pb.ExecutionLayerWithdrawalRequest, 0),
DepositReceipts: make([]*pb.DepositReceipt, 0),
WithdrawalRequests: make([]*pb.WithdrawalRequest, 0),
DepositRequests: make([]*pb.DepositRequest, 0),
}, nil
default:
return nil, errors.Wrapf(ErrUnsupportedVersion, "version=%s", version.String(v))


@@ -1559,7 +1559,7 @@ func fixturesStruct() *payloadFixtures {
Withdrawals: []*pb.Withdrawal{},
BlobGasUsed: 2,
ExcessBlobGas: 3,
DepositReceipts: dr,
DepositRequests: dr,
WithdrawalRequests: wr,
}
hexUint := hexutil.Uint64(1)


@@ -66,7 +66,7 @@ func payloadToBody(t *testing.T, ed interfaces.ExecutionData) *pb.ExecutionPaylo
}
eed, isElectra := ed.(interfaces.ExecutionDataElectra)
if isElectra {
body.DepositRequests = pb.ProtoDepositRequestsToJson(eed.DepositReceipts())
body.DepositRequests = pb.ProtoDepositRequestsToJson(eed.DepositRequests())
body.WithdrawalRequests = pb.ProtoWithdrawalRequestsToJson(eed.WithdrawalRequests())
}
return body


@@ -16,7 +16,7 @@ go_library(
],
deps = [
"//api/gateway:go_default_library",
"//api/server:go_default_library",
"//api/server/middleware:go_default_library",
"//async/event:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/builder:go_default_library",
@@ -69,7 +69,6 @@ go_library(
"@com_github_gorilla_mux//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_urfave_cli_v2//:go_default_library",
],


@@ -4,7 +4,6 @@ import (
"fmt"
"github.com/ethereum/go-ethereum/common"
fastssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/v5/cmd"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v5/config/params"
@@ -197,7 +196,3 @@ func configureExecutionSetting(cliCtx *cli.Context) error {
" Default fee recipient will be used as a fall back", checksumAddress.Hex())
return params.SetActive(c)
}
func configureFastSSZHashingAlgorithm() {
fastssz.EnableVectorizedHTR = true
}


@@ -20,7 +20,7 @@ import (
"github.com/gorilla/mux"
"github.com/pkg/errors"
apigateway "github.com/prysmaticlabs/prysm/v5/api/gateway"
"github.com/prysmaticlabs/prysm/v5/api/server"
"github.com/prysmaticlabs/prysm/v5/api/server/middleware"
"github.com/prysmaticlabs/prysm/v5/async/event"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/builder"
@@ -120,7 +120,6 @@ type BeaconNode struct {
initialSyncComplete chan struct{}
BlobStorage *filesystem.BlobStorage
BlobStorageOptions []filesystem.BlobStorageOption
blobRetentionEpochs primitives.Epoch
verifyInitWaiter *verification.InitializerWaiter
syncChecker *initialsync.SyncChecker
}
@@ -278,8 +277,6 @@ func configureBeacon(cliCtx *cli.Context) error {
return errors.Wrap(err, "could not configure execution setting")
}
configureFastSSZHashingAlgorithm()
return nil
}
@@ -409,8 +406,8 @@ func newRouter(cliCtx *cli.Context) *mux.Router {
allowedOrigins = strings.Split(flags.GPRCGatewayCorsDomain.Value, ",")
}
r := mux.NewRouter()
r.Use(server.NormalizeQueryValuesHandler)
r.Use(server.CorsHandler(allowedOrigins))
r.Use(middleware.NormalizeQueryValuesHandler)
r.Use(middleware.CorsHandler(allowedOrigins))
return r
}
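
The r.Use calls above chain gorilla/mux middleware around every handler. A toy sketch of that chaining shape; corsHandler here is a hypothetical stand-in, not Prysm's middleware.CorsHandler, whose real behavior lives in the api/server/middleware package:

package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"

	"github.com/gorilla/mux"
)

// corsHandler is a middleware factory: it closes over its configuration and
// returns a mux.MiddlewareFunc that wraps the next handler.
func corsHandler(allowedOrigins []string) mux.MiddlewareFunc {
	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			if len(allowedOrigins) > 0 {
				w.Header().Set("Access-Control-Allow-Origin", allowedOrigins[0])
			}
			next.ServeHTTP(w, r)
		})
	}
}

func main() {
	r := mux.NewRouter()
	r.Use(corsHandler([]string{"*"}))
	r.HandleFunc("/health", func(w http.ResponseWriter, _ *http.Request) {
		fmt.Fprintln(w, "ok")
	})
	srv := httptest.NewServer(r)
	defer srv.Close()
	resp, err := http.Get(srv.URL + "/health")
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Header.Get("Access-Control-Allow-Origin")) // *
}
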
@@ -965,9 +962,8 @@ func (b *BeaconNode) registerRPCService(router *mux.Router) error {
cert := b.cliCtx.String(flags.CertFlag.Name)
key := b.cliCtx.String(flags.KeyFlag.Name)
mockEth1DataVotes := b.cliCtx.Bool(flags.InteropMockEth1DataVotesFlag.Name)
maxMsgSize := b.cliCtx.Int(cmd.GrpcMaxCallRecvMsgSizeFlag.Name)
enableDebugRPCEndpoints := b.cliCtx.Bool(flags.EnableDebugRPCEndpoints.Name)
enableDebugRPCEndpoints := !b.cliCtx.Bool(flags.DisableDebugRPCEndpoints.Name)
p2pService := b.fetchP2P()
rpcService := rpc.NewService(b.ctx, &rpc.Config{
@@ -992,7 +988,6 @@ func (b *BeaconNode) registerRPCService(router *mux.Router) error {
FinalizationFetcher: chainService,
BlockReceiver: chainService,
BlobReceiver: chainService,
DataColumnReceiver: chainService,
AttestationReceiver: chainService,
GenesisTimeFetcher: chainService,
GenesisFetcher: chainService,
@@ -1057,11 +1052,10 @@ func (b *BeaconNode) registerGRPCGateway(router *mux.Router) error {
gatewayPort := b.cliCtx.Int(flags.GRPCGatewayPort.Name)
rpcHost := b.cliCtx.String(flags.RPCHost.Name)
rpcPort := b.cliCtx.Int(flags.RPCPort.Name)
enableDebugRPCEndpoints := !b.cliCtx.Bool(flags.DisableDebugRPCEndpoints.Name)
selfAddress := net.JoinHostPort(rpcHost, strconv.Itoa(rpcPort))
gatewayAddress := net.JoinHostPort(gatewayHost, strconv.Itoa(gatewayPort))
allowedOrigins := strings.Split(b.cliCtx.String(flags.GPRCGatewayCorsDomain.Name), ",")
enableDebugRPCEndpoints := b.cliCtx.Bool(flags.EnableDebugRPCEndpoints.Name)
selfCert := b.cliCtx.String(flags.CertFlag.Name)
maxCallSize := b.cliCtx.Uint64(cmd.GrpcMaxCallRecvMsgSizeFlag.Name)
httpModules := b.cliCtx.String(flags.HTTPModules.Name)


@@ -5,7 +5,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/builder"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/execution"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
)
// Option for beacon node configuration.
@@ -51,11 +50,3 @@ func WithBlobStorageOptions(opt ...filesystem.BlobStorageOption) Option {
return nil
}
}
// WithBlobRetentionEpochs sets the blobRetentionEpochs value, used in kv store initialization.
func WithBlobRetentionEpochs(e primitives.Epoch) Option {
return func(bn *BeaconNode) error {
bn.blobRetentionEpochs = e
return nil
}
}


@@ -21,12 +21,13 @@ go_library(
"//config/features:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/hash:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//proto/prysm/v1alpha1/attestation/aggregation/attestations:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_hashicorp_golang_lru//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",


@@ -16,9 +16,10 @@ go_library(
"//beacon-chain/core/helpers:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/hash:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//proto/prysm/v1alpha1/attestation/aggregation/attestations:go_default_library",
"//runtime/version:go_default_library",
"@com_github_patrickmn_go_cache//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
@@ -39,14 +40,15 @@ go_test(
embed = [":go_default_library"],
deps = [
"//config/fieldparams:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_patrickmn_go_cache//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
],
)


@@ -9,7 +9,9 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
attaggregation "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation/aggregation/attestations"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
log "github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
@@ -32,28 +34,28 @@ func (c *AttCaches) aggregateUnaggregatedAtts(ctx context.Context, unaggregatedA
_, span := trace.StartSpan(ctx, "operations.attestations.kv.aggregateUnaggregatedAtts")
defer span.End()
attsByDataRoot := make(map[[32]byte][]ethpb.Att, len(unaggregatedAtts))
attsByVerAndDataRoot := make(map[attestation.Id][]ethpb.Att, len(unaggregatedAtts))
for _, att := range unaggregatedAtts {
attDataRoot, err := att.GetData().HashTreeRoot()
id, err := attestation.NewId(att, attestation.Data)
if err != nil {
return err
return errors.Wrap(err, "could not create attestation ID")
}
attsByDataRoot[attDataRoot] = append(attsByDataRoot[attDataRoot], att)
attsByVerAndDataRoot[id] = append(attsByVerAndDataRoot[id], att)
}
// Aggregate unaggregated attestations from the pool and save them in the pool.
// Track the unaggregated attestations that could not be aggregated.
leftOverUnaggregatedAtt := make(map[[32]byte]bool)
leftOverUnaggregatedAtt := make(map[attestation.Id]bool)
leftOverUnaggregatedAtt = c.aggregateParallel(attsByDataRoot, leftOverUnaggregatedAtt)
leftOverUnaggregatedAtt = c.aggregateParallel(attsByVerAndDataRoot, leftOverUnaggregatedAtt)
// Remove the unaggregated attestations from the pool that were successfully aggregated.
for _, att := range unaggregatedAtts {
h, err := hashFn(att)
id, err := attestation.NewId(att, attestation.Full)
if err != nil {
return err
return errors.Wrap(err, "could not create attestation ID")
}
if leftOverUnaggregatedAtt[h] {
if leftOverUnaggregatedAtt[id] {
continue
}
if err := c.DeleteUnaggregatedAttestation(att); err != nil {
@@ -66,7 +68,7 @@ func (c *AttCaches) aggregateUnaggregatedAtts(ctx context.Context, unaggregatedA
// aggregateParallel aggregates attestations in parallel for `atts`, saves them in the pool,
// and returns the unaggregated attestations that could not be aggregated.
// Given `n` CPU cores, it creates a channel of size `n` and spawns `n` goroutines to aggregate attestations.
func (c *AttCaches) aggregateParallel(atts map[[32]byte][]ethpb.Att, leftOver map[[32]byte]bool) map[[32]byte]bool {
func (c *AttCaches) aggregateParallel(atts map[attestation.Id][]ethpb.Att, leftOver map[attestation.Id]bool) map[attestation.Id]bool {
var leftoverLock sync.Mutex
wg := sync.WaitGroup{}
@@ -92,13 +94,13 @@ func (c *AttCaches) aggregateParallel(atts map[[32]byte][]ethpb.Att, leftOver ma
continue
}
} else {
h, err := hashFn(aggregated)
id, err := attestation.NewId(aggregated, attestation.Full)
if err != nil {
log.WithError(err).Error("could not hash attestation")
log.WithError(err).Error("Could not create attestation ID")
continue
}
leftoverLock.Lock()
leftOver[h] = true
leftOver[id] = true
leftoverLock.Unlock()
}
}
@@ -139,17 +141,18 @@ func (c *AttCaches) SaveAggregatedAttestation(att ethpb.Att) error {
return nil
}
r, err := hashFn(att.GetData())
id, err := attestation.NewId(att, attestation.Data)
if err != nil {
return errors.Wrap(err, "could not tree hash attestation")
return errors.Wrap(err, "could not create attestation ID")
}
copiedAtt := att.Copy()
c.aggregatedAttLock.Lock()
defer c.aggregatedAttLock.Unlock()
atts, ok := c.aggregatedAtt[r]
atts, ok := c.aggregatedAtt[id]
if !ok {
atts := []ethpb.Att{copiedAtt}
c.aggregatedAtt[r] = atts
c.aggregatedAtt[id] = atts
return nil
}
@@ -157,7 +160,7 @@ func (c *AttCaches) SaveAggregatedAttestation(att ethpb.Att) error {
if err != nil {
return err
}
c.aggregatedAtt[r] = atts
c.aggregatedAtt[id] = atts
return nil
}
@@ -191,17 +194,56 @@ func (c *AttCaches) AggregatedAttestations() []ethpb.Att {
// AggregatedAttestationsBySlotIndex returns the aggregated attestations in cache,
// filtered by committee index and slot.
func (c *AttCaches) AggregatedAttestationsBySlotIndex(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) []ethpb.Att {
func (c *AttCaches) AggregatedAttestationsBySlotIndex(
ctx context.Context,
slot primitives.Slot,
committeeIndex primitives.CommitteeIndex,
) []*ethpb.Attestation {
_, span := trace.StartSpan(ctx, "operations.attestations.kv.AggregatedAttestationsBySlotIndex")
defer span.End()
atts := make([]ethpb.Att, 0)
atts := make([]*ethpb.Attestation, 0)
c.aggregatedAttLock.RLock()
defer c.aggregatedAttLock.RUnlock()
for _, a := range c.aggregatedAtt {
if slot == a[0].GetData().Slot && committeeIndex == a[0].GetData().CommitteeIndex {
atts = append(atts, a...)
for _, as := range c.aggregatedAtt {
if as[0].Version() == version.Phase0 && slot == as[0].GetData().Slot && committeeIndex == as[0].GetData().CommitteeIndex {
for _, a := range as {
att, ok := a.(*ethpb.Attestation)
// This will never fail in practice because we asserted the version
if ok {
atts = append(atts, att)
}
}
}
}
return atts
}
// AggregatedAttestationsBySlotIndexElectra returns the aggregated attestations in cache,
// filtered by committee index and slot.
func (c *AttCaches) AggregatedAttestationsBySlotIndexElectra(
ctx context.Context,
slot primitives.Slot,
committeeIndex primitives.CommitteeIndex,
) []*ethpb.AttestationElectra {
_, span := trace.StartSpan(ctx, "operations.attestations.kv.AggregatedAttestationsBySlotIndexElectra")
defer span.End()
atts := make([]*ethpb.AttestationElectra, 0)
c.aggregatedAttLock.RLock()
defer c.aggregatedAttLock.RUnlock()
for _, as := range c.aggregatedAtt {
if as[0].Version() == version.Electra && slot == as[0].GetData().Slot && as[0].CommitteeBitsVal().BitAt(uint64(committeeIndex)) {
for _, a := range as {
att, ok := a.(*ethpb.AttestationElectra)
// This will never fail in practice because we asserted the version
if ok {
atts = append(atts, att)
}
}
}
}
@@ -216,18 +258,19 @@ func (c *AttCaches) DeleteAggregatedAttestation(att ethpb.Att) error {
if !helpers.IsAggregated(att) {
return errors.New("attestation is not aggregated")
}
r, err := hashFn(att.GetData())
if err != nil {
return errors.Wrap(err, "could not tree hash attestation data")
}
if err := c.insertSeenBit(att); err != nil {
return err
}
id, err := attestation.NewId(att, attestation.Data)
if err != nil {
return errors.Wrap(err, "could not create attestation ID")
}
c.aggregatedAttLock.Lock()
defer c.aggregatedAttLock.Unlock()
attList, ok := c.aggregatedAtt[r]
attList, ok := c.aggregatedAtt[id]
if !ok {
return nil
}
@@ -241,9 +284,9 @@ func (c *AttCaches) DeleteAggregatedAttestation(att ethpb.Att) error {
}
}
if len(filtered) == 0 {
delete(c.aggregatedAtt, r)
delete(c.aggregatedAtt, id)
} else {
c.aggregatedAtt[r] = filtered
c.aggregatedAtt[id] = filtered
}
return nil
@@ -254,14 +297,15 @@ func (c *AttCaches) HasAggregatedAttestation(att ethpb.Att) (bool, error) {
if err := helpers.ValidateNilAttestation(att); err != nil {
return false, err
}
r, err := hashFn(att.GetData())
id, err := attestation.NewId(att, attestation.Data)
if err != nil {
return false, errors.Wrap(err, "could not tree hash attestation")
return false, errors.Wrap(err, "could not create attestation ID")
}
c.aggregatedAttLock.RLock()
defer c.aggregatedAttLock.RUnlock()
if atts, ok := c.aggregatedAtt[r]; ok {
if atts, ok := c.aggregatedAtt[id]; ok {
for _, a := range atts {
if c, err := a.GetAggregationBits().Contains(att.GetAggregationBits()); err != nil {
return false, err
@@ -273,7 +317,7 @@ func (c *AttCaches) HasAggregatedAttestation(att ethpb.Att) (bool, error) {
c.blockAttLock.RLock()
defer c.blockAttLock.RUnlock()
if atts, ok := c.blockAtt[r]; ok {
if atts, ok := c.blockAtt[id]; ok {
for _, a := range atts {
if c, err := a.GetAggregationBits().Contains(att.GetAggregationBits()); err != nil {
return false, err


@@ -7,10 +7,11 @@ import (
c "github.com/patrickmn/go-cache"
"github.com/pkg/errors"
fssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
@@ -69,7 +70,7 @@ func TestKV_Aggregated_SaveAggregatedAttestation(t *testing.T) {
}),
AggregationBits: bitfield.Bitlist{0b10111},
},
wantErrString: "could not tree hash attestation: --.BeaconBlockRoot (" + fssz.ErrBytesLength.Error() + ")",
wantErrString: "could not create attestation ID",
},
{
name: "already seen",
@@ -92,15 +93,13 @@ func TestKV_Aggregated_SaveAggregatedAttestation(t *testing.T) {
count: 1,
},
}
r, err := hashFn(util.HydrateAttestationData(&ethpb.AttestationData{
Slot: 100,
}))
id, err := attestation.NewId(util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 100}}), attestation.Data)
require.NoError(t, err)
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
cache := NewAttCaches()
cache.seenAtt.Set(string(r[:]), []bitfield.Bitlist{{0xff}}, c.DefaultExpiration)
cache.seenAtt.Set(id.String(), []bitfield.Bitlist{{0xff}}, c.DefaultExpiration)
assert.Equal(t, 0, len(cache.unAggregatedAtt), "Invalid start pool, atts: %d", len(cache.unAggregatedAtt))
err := cache.SaveAggregatedAttestation(tt.att)
@@ -230,7 +229,7 @@ func TestKV_Aggregated_DeleteAggregatedAttestation(t *testing.T) {
},
}
err := cache.DeleteAggregatedAttestation(att)
wantErr := "could not tree hash attestation data: --.BeaconBlockRoot (" + fssz.ErrBytesLength.Error() + ")"
wantErr := "could not create attestation ID"
assert.ErrorContains(t, wantErr, err)
})
@@ -500,3 +499,49 @@ func TestKV_Aggregated_DuplicateAggregatedAttestations(t *testing.T) {
assert.DeepSSZEqual(t, att2, returned[0], "Did not receive correct aggregated atts")
assert.Equal(t, 1, len(returned), "Did not receive correct aggregated atts")
}
func TestKV_Aggregated_AggregatedAttestationsBySlotIndex(t *testing.T) {
cache := NewAttCaches()
att1 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1, CommitteeIndex: 1}, AggregationBits: bitfield.Bitlist{0b1011}})
att2 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1, CommitteeIndex: 2}, AggregationBits: bitfield.Bitlist{0b1101}})
att3 := util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2, CommitteeIndex: 1}, AggregationBits: bitfield.Bitlist{0b1101}})
atts := []*ethpb.Attestation{att1, att2, att3}
for _, att := range atts {
require.NoError(t, cache.SaveAggregatedAttestation(att))
}
ctx := context.Background()
returned := cache.AggregatedAttestationsBySlotIndex(ctx, 1, 1)
assert.DeepEqual(t, []*ethpb.Attestation{att1}, returned)
returned = cache.AggregatedAttestationsBySlotIndex(ctx, 1, 2)
assert.DeepEqual(t, []*ethpb.Attestation{att2}, returned)
returned = cache.AggregatedAttestationsBySlotIndex(ctx, 2, 1)
assert.DeepEqual(t, []*ethpb.Attestation{att3}, returned)
}
func TestKV_Aggregated_AggregatedAttestationsBySlotIndexElectra(t *testing.T) {
cache := NewAttCaches()
committeeBits := primitives.NewAttestationCommitteeBits()
committeeBits.SetBitAt(1, true)
att1 := util.HydrateAttestationElectra(&ethpb.AttestationElectra{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1011}, CommitteeBits: committeeBits})
committeeBits = primitives.NewAttestationCommitteeBits()
committeeBits.SetBitAt(2, true)
att2 := util.HydrateAttestationElectra(&ethpb.AttestationElectra{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b1101}, CommitteeBits: committeeBits})
committeeBits = primitives.NewAttestationCommitteeBits()
committeeBits.SetBitAt(1, true)
att3 := util.HydrateAttestationElectra(&ethpb.AttestationElectra{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b1101}, CommitteeBits: committeeBits})
atts := []*ethpb.AttestationElectra{att1, att2, att3}
for _, att := range atts {
require.NoError(t, cache.SaveAggregatedAttestation(att))
}
ctx := context.Background()
returned := cache.AggregatedAttestationsBySlotIndexElectra(ctx, 1, 1)
assert.DeepEqual(t, []*ethpb.AttestationElectra{att1}, returned)
returned = cache.AggregatedAttestationsBySlotIndexElectra(ctx, 1, 2)
assert.DeepEqual(t, []*ethpb.AttestationElectra{att2}, returned)
returned = cache.AggregatedAttestationsBySlotIndexElectra(ctx, 2, 1)
assert.DeepEqual(t, []*ethpb.AttestationElectra{att3}, returned)
}


@@ -3,6 +3,7 @@ package kv
import (
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
)
// SaveBlockAttestation saves a block attestation in the cache.
@@ -10,14 +11,15 @@ func (c *AttCaches) SaveBlockAttestation(att ethpb.Att) error {
if att == nil {
return nil
}
r, err := hashFn(att.GetData())
id, err := attestation.NewId(att, attestation.Data)
if err != nil {
return errors.Wrap(err, "could not tree hash attestation")
return errors.Wrap(err, "could not create attestation ID")
}
c.blockAttLock.Lock()
defer c.blockAttLock.Unlock()
atts, ok := c.blockAtt[r]
atts, ok := c.blockAtt[id]
if !ok {
atts = make([]ethpb.Att, 0, 1)
}
@@ -31,7 +33,7 @@ func (c *AttCaches) SaveBlockAttestation(att ethpb.Att) error {
}
}
c.blockAtt[r] = append(atts, att.Copy())
c.blockAtt[id] = append(atts, att.Copy())
return nil
}
@@ -54,14 +56,15 @@ func (c *AttCaches) DeleteBlockAttestation(att ethpb.Att) error {
if att == nil {
return nil
}
r, err := hashFn(att.GetData())
id, err := attestation.NewId(att, attestation.Data)
if err != nil {
return errors.Wrap(err, "could not tree hash attestation")
return errors.Wrap(err, "could not create attestation ID")
}
c.blockAttLock.Lock()
defer c.blockAttLock.Unlock()
delete(c.blockAtt, r)
delete(c.blockAtt, id)
return nil
}


@@ -3,6 +3,7 @@ package kv
import (
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
)
// SaveForkchoiceAttestation saves a forkchoice attestation in the cache.
@@ -10,14 +11,15 @@ func (c *AttCaches) SaveForkchoiceAttestation(att ethpb.Att) error {
if att == nil {
return nil
}
r, err := hashFn(att)
id, err := attestation.NewId(att, attestation.Full)
if err != nil {
return errors.Wrap(err, "could not tree hash attestation")
return errors.Wrap(err, "could not create attestation ID")
}
c.forkchoiceAttLock.Lock()
defer c.forkchoiceAttLock.Unlock()
c.forkchoiceAtt[r] = att.Copy()
c.forkchoiceAtt[id] = att
return nil
}
@@ -51,14 +53,15 @@ func (c *AttCaches) DeleteForkchoiceAttestation(att ethpb.Att) error {
if att == nil {
return nil
}
r, err := hashFn(att)
id, err := attestation.NewId(att, attestation.Full)
if err != nil {
return errors.Wrap(err, "could not tree hash attestation")
return errors.Wrap(err, "could not create attestation ID")
}
c.forkchoiceAttLock.Lock()
defer c.forkchoiceAttLock.Unlock()
delete(c.forkchoiceAtt, r)
delete(c.forkchoiceAtt, id)
return nil
}


@@ -9,24 +9,22 @@ import (
"github.com/patrickmn/go-cache"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/crypto/hash"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
)
var hashFn = hash.Proto
// AttCaches defines the caches used to satisfy the attestation pool interface.
// These caches are KV stores for various kinds of attestations,
// such as unaggregated, aggregated, or attestations within a block.
type AttCaches struct {
aggregatedAttLock sync.RWMutex
aggregatedAtt map[[32]byte][]ethpb.Att
aggregatedAtt map[attestation.Id][]ethpb.Att
unAggregateAttLock sync.RWMutex
unAggregatedAtt map[[32]byte]ethpb.Att
unAggregatedAtt map[attestation.Id]ethpb.Att
forkchoiceAttLock sync.RWMutex
forkchoiceAtt map[[32]byte]ethpb.Att
forkchoiceAtt map[attestation.Id]ethpb.Att
blockAttLock sync.RWMutex
blockAtt map[[32]byte][]ethpb.Att
blockAtt map[attestation.Id][]ethpb.Att
seenAtt *cache.Cache
}
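
Keying these maps by attestation.Id instead of a raw [32]byte data root lets attestations of different versions with identical data coexist without colliding. Judging by the attsByVerAndDataRoot name earlier in this diff, the Id combines the version with the data root; a toy sketch under that assumption (the real type and its layout live in Prysm's attestation package):

package main

import (
	"crypto/sha256"
	"fmt"
)

// id is a toy stand-in for attestation.Id: one version byte followed by a
// 32-byte data root. The layout is an assumption for illustration only.
type id [33]byte

func newID(version byte, data []byte) id {
	var out id
	out[0] = version
	root := sha256.Sum256(data) // stand-in for the SSZ hash tree root
	copy(out[1:], root[:])
	return out
}

func main() {
	data := []byte("same attestation data")
	phase0 := newID(0, data)
	electra := newID(5, data) // version numbers are illustrative
	// Same data root, different versions: distinct map keys, no collision.
	pool := map[id]string{phase0: "phase0", electra: "electra"}
	fmt.Println(len(pool)) // 2
}
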
@@ -36,10 +34,10 @@ func NewAttCaches() *AttCaches {
secsInEpoch := time.Duration(params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().SecondsPerSlot))
c := cache.New(secsInEpoch*time.Second, 2*secsInEpoch*time.Second)
pool := &AttCaches{
unAggregatedAtt: make(map[[32]byte]ethpb.Att),
aggregatedAtt: make(map[[32]byte][]ethpb.Att),
forkchoiceAtt: make(map[[32]byte]ethpb.Att),
blockAtt: make(map[[32]byte][]ethpb.Att),
unAggregatedAtt: make(map[attestation.Id]ethpb.Att),
aggregatedAtt: make(map[attestation.Id][]ethpb.Att),
forkchoiceAtt: make(map[attestation.Id]ethpb.Att),
blockAtt: make(map[attestation.Id][]ethpb.Att),
seenAtt: c,
}


@@ -1,21 +1,20 @@
package kv
import (
"fmt"
"github.com/patrickmn/go-cache"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
)
func (c *AttCaches) insertSeenBit(att ethpb.Att) error {
r, err := hashFn(att.GetData())
id, err := attestation.NewId(att, attestation.Data)
if err != nil {
return err
return errors.Wrap(err, "could not create attestation ID")
}
v, ok := c.seenAtt.Get(string(r[:]))
v, ok := c.seenAtt.Get(id.String())
if ok {
seenBits, ok := v.([]bitfield.Bitlist)
if !ok {
@@ -24,7 +23,7 @@ func (c *AttCaches) insertSeenBit(att ethpb.Att) error {
alreadyExists := false
for _, bit := range seenBits {
if c, err := bit.Contains(att.GetAggregationBits()); err != nil {
return fmt.Errorf("failed to check seen bits on attestation when inserting bit: %w", err)
return err
} else if c {
alreadyExists = true
break
@@ -33,21 +32,21 @@ func (c *AttCaches) insertSeenBit(att ethpb.Att) error {
if !alreadyExists {
seenBits = append(seenBits, att.GetAggregationBits())
}
c.seenAtt.Set(string(r[:]), seenBits, cache.DefaultExpiration /* one epoch */)
c.seenAtt.Set(id.String(), seenBits, cache.DefaultExpiration /* one epoch */)
return nil
}
c.seenAtt.Set(string(r[:]), []bitfield.Bitlist{att.GetAggregationBits()}, cache.DefaultExpiration /* one epoch */)
c.seenAtt.Set(id.String(), []bitfield.Bitlist{att.GetAggregationBits()}, cache.DefaultExpiration /* one epoch */)
return nil
}
func (c *AttCaches) hasSeenBit(att ethpb.Att) (bool, error) {
r, err := hashFn(att.GetData())
id, err := attestation.NewId(att, attestation.Data)
if err != nil {
return false, err
return false, errors.Wrap(err, "could not create attestation ID")
}
v, ok := c.seenAtt.Get(string(r[:]))
v, ok := c.seenAtt.Get(id.String())
if ok {
seenBits, ok := v.([]bitfield.Bitlist)
if !ok {
@@ -55,7 +54,7 @@ func (c *AttCaches) hasSeenBit(att ethpb.Att) (bool, error) {
}
for _, bit := range seenBits {
if c, err := bit.Contains(att.GetAggregationBits()); err != nil {
return false, fmt.Errorf("failed to check seen bits on attestation when reading bit: %w", err)
return false, err
} else if c {
return true, nil
}


@@ -5,6 +5,7 @@ import (
"github.com/prysmaticlabs/go-bitfield"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
@@ -39,18 +40,18 @@ func TestAttCaches_hasSeenBit(t *testing.T) {
func TestAttCaches_insertSeenBitDuplicates(t *testing.T) {
c := NewAttCaches()
att1 := util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b10000011}})
r, err := hashFn(att1.Data)
id, err := attestation.NewId(att1, attestation.Data)
require.NoError(t, err)
require.NoError(t, c.insertSeenBit(att1))
require.Equal(t, 1, c.seenAtt.ItemCount())
_, expirationTime1, ok := c.seenAtt.GetWithExpiration(string(r[:]))
_, expirationTime1, ok := c.seenAtt.GetWithExpiration(id.String())
require.Equal(t, true, ok)
// Make sure that duplicates are not inserted, but expiration time gets updated.
require.NoError(t, c.insertSeenBit(att1))
require.Equal(t, 1, c.seenAtt.ItemCount())
_, expirationprysmTime, ok := c.seenAtt.GetWithExpiration(string(r[:]))
_, expirationprysmTime, ok := c.seenAtt.GetWithExpiration(id.String())
require.Equal(t, true, ok)
require.Equal(t, true, expirationprysmTime.After(expirationTime1), "Expiration time is not updated")
}


@@ -7,6 +7,8 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"go.opencensus.io/trace"
)
@@ -27,13 +29,14 @@ func (c *AttCaches) SaveUnaggregatedAttestation(att ethpb.Att) error {
return nil
}
r, err := hashFn(att)
id, err := attestation.NewId(att, attestation.Full)
if err != nil {
return errors.Wrap(err, "could not tree hash attestation")
return errors.Wrap(err, "could not create attestation ID")
}
c.unAggregateAttLock.Lock()
defer c.unAggregateAttLock.Unlock()
c.unAggregatedAtt[r] = att.Copy()
c.unAggregatedAtt[id] = att
return nil
}
@@ -69,19 +72,56 @@ func (c *AttCaches) UnaggregatedAttestations() ([]ethpb.Att, error) {
// UnaggregatedAttestationsBySlotIndex returns the unaggregated attestations in cache,
// filtered by committee index and slot.
func (c *AttCaches) UnaggregatedAttestationsBySlotIndex(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) []ethpb.Att {
func (c *AttCaches) UnaggregatedAttestationsBySlotIndex(
ctx context.Context,
slot primitives.Slot,
committeeIndex primitives.CommitteeIndex,
) []*ethpb.Attestation {
_, span := trace.StartSpan(ctx, "operations.attestations.kv.UnaggregatedAttestationsBySlotIndex")
defer span.End()
atts := make([]ethpb.Att, 0)
atts := make([]*ethpb.Attestation, 0)
c.unAggregateAttLock.RLock()
defer c.unAggregateAttLock.RUnlock()
unAggregatedAtts := c.unAggregatedAtt
for _, a := range unAggregatedAtts {
if slot == a.GetData().Slot && committeeIndex == a.GetData().CommitteeIndex {
atts = append(atts, a)
if a.Version() == version.Phase0 && slot == a.GetData().Slot && committeeIndex == a.GetData().CommitteeIndex {
att, ok := a.(*ethpb.Attestation)
// This will never fail in practice because we asserted the version
if ok {
atts = append(atts, att)
}
}
}
return atts
}
// UnaggregatedAttestationsBySlotIndexElectra returns the unaggregated attestations in cache,
// filtered by committee index and slot.
func (c *AttCaches) UnaggregatedAttestationsBySlotIndexElectra(
ctx context.Context,
slot primitives.Slot,
committeeIndex primitives.CommitteeIndex,
) []*ethpb.AttestationElectra {
_, span := trace.StartSpan(ctx, "operations.attestations.kv.UnaggregatedAttestationsBySlotIndexElectra")
defer span.End()
atts := make([]*ethpb.AttestationElectra, 0)
c.unAggregateAttLock.RLock()
defer c.unAggregateAttLock.RUnlock()
unAggregatedAtts := c.unAggregatedAtt
for _, a := range unAggregatedAtts {
if a.Version() == version.Electra && slot == a.GetData().Slot && a.CommitteeBitsVal().BitAt(uint64(committeeIndex)) {
att, ok := a.(*ethpb.AttestationElectra)
// This will never fail in practice because we asserted the version
if ok {
atts = append(atts, att)
}
}
}
@@ -101,14 +141,14 @@ func (c *AttCaches) DeleteUnaggregatedAttestation(att ethpb.Att) error {
return err
}
r, err := hashFn(att)
id, err := attestation.NewId(att, attestation.Full)
if err != nil {
return errors.Wrap(err, "could not tree hash attestation")
return errors.Wrap(err, "could not create attestation ID")
}
c.unAggregateAttLock.Lock()
defer c.unAggregateAttLock.Unlock()
delete(c.unAggregatedAtt, r)
delete(c.unAggregatedAtt, id)
return nil
}


@@ -7,10 +7,11 @@ import (
"testing"
c "github.com/patrickmn/go-cache"
fssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/go-bitfield"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
@@ -39,7 +40,7 @@ func TestKV_Unaggregated_SaveUnaggregatedAttestation(t *testing.T) {
BeaconBlockRoot: []byte{0b0},
},
},
wantErrString: fssz.ErrBytesLength.Error(),
wantErrString: "could not create attestation ID",
},
{
name: "normal save",
@@ -57,13 +58,13 @@ func TestKV_Unaggregated_SaveUnaggregatedAttestation(t *testing.T) {
count: 0,
},
}
r, err := hashFn(util.HydrateAttestationData(&ethpb.AttestationData{Slot: 100}))
id, err := attestation.NewId(util.HydrateAttestation(&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 100}}), attestation.Data)
require.NoError(t, err)
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
cache := NewAttCaches()
cache.seenAtt.Set(string(r[:]), []bitfield.Bitlist{{0xff}}, c.DefaultExpiration)
cache.seenAtt.Set(id.String(), []bitfield.Bitlist{{0xff}}, c.DefaultExpiration)
assert.Equal(t, 0, len(cache.unAggregatedAtt), "Invalid start pool, atts: %d", len(cache.unAggregatedAtt))
if tt.att != nil && tt.att.GetSignature() == nil {
@@ -246,9 +247,35 @@ func TestKV_Unaggregated_UnaggregatedAttestationsBySlotIndex(t *testing.T) {
}
ctx := context.Background()
returned := cache.UnaggregatedAttestationsBySlotIndex(ctx, 1, 1)
assert.DeepEqual(t, []ethpb.Att{att1}, returned)
assert.DeepEqual(t, []*ethpb.Attestation{att1}, returned)
returned = cache.UnaggregatedAttestationsBySlotIndex(ctx, 1, 2)
assert.DeepEqual(t, []ethpb.Att{att2}, returned)
assert.DeepEqual(t, []*ethpb.Attestation{att2}, returned)
returned = cache.UnaggregatedAttestationsBySlotIndex(ctx, 2, 1)
assert.DeepEqual(t, []ethpb.Att{att3}, returned)
assert.DeepEqual(t, []*ethpb.Attestation{att3}, returned)
}
func TestKV_Unaggregated_UnaggregatedAttestationsBySlotIndexElectra(t *testing.T) {
cache := NewAttCaches()
committeeBits := primitives.NewAttestationCommitteeBits()
committeeBits.SetBitAt(1, true)
att1 := util.HydrateAttestationElectra(&ethpb.AttestationElectra{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b101}, CommitteeBits: committeeBits})
committeeBits = primitives.NewAttestationCommitteeBits()
committeeBits.SetBitAt(2, true)
att2 := util.HydrateAttestationElectra(&ethpb.AttestationElectra{Data: &ethpb.AttestationData{Slot: 1}, AggregationBits: bitfield.Bitlist{0b110}, CommitteeBits: committeeBits})
committeeBits = primitives.NewAttestationCommitteeBits()
committeeBits.SetBitAt(1, true)
att3 := util.HydrateAttestationElectra(&ethpb.AttestationElectra{Data: &ethpb.AttestationData{Slot: 2}, AggregationBits: bitfield.Bitlist{0b110}, CommitteeBits: committeeBits})
atts := []*ethpb.AttestationElectra{att1, att2, att3}
for _, att := range atts {
require.NoError(t, cache.SaveUnaggregatedAttestation(att))
}
ctx := context.Background()
returned := cache.UnaggregatedAttestationsBySlotIndexElectra(ctx, 1, 1)
assert.DeepEqual(t, []*ethpb.AttestationElectra{att1}, returned)
returned = cache.UnaggregatedAttestationsBySlotIndexElectra(ctx, 1, 2)
assert.DeepEqual(t, []*ethpb.AttestationElectra{att2}, returned)
returned = cache.UnaggregatedAttestationsBySlotIndexElectra(ctx, 2, 1)
assert.DeepEqual(t, []*ethpb.AttestationElectra{att3}, returned)
}


@@ -18,7 +18,8 @@ type Pool interface {
SaveAggregatedAttestation(att ethpb.Att) error
SaveAggregatedAttestations(atts []ethpb.Att) error
AggregatedAttestations() []ethpb.Att
AggregatedAttestationsBySlotIndex(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) []ethpb.Att
AggregatedAttestationsBySlotIndex(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) []*ethpb.Attestation
AggregatedAttestationsBySlotIndexElectra(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) []*ethpb.AttestationElectra
DeleteAggregatedAttestation(att ethpb.Att) error
HasAggregatedAttestation(att ethpb.Att) (bool, error)
AggregatedAttestationCount() int
@@ -26,7 +27,8 @@ type Pool interface {
SaveUnaggregatedAttestation(att ethpb.Att) error
SaveUnaggregatedAttestations(atts []ethpb.Att) error
UnaggregatedAttestations() ([]ethpb.Att, error)
UnaggregatedAttestationsBySlotIndex(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) []ethpb.Att
UnaggregatedAttestationsBySlotIndex(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) []*ethpb.Attestation
UnaggregatedAttestationsBySlotIndexElectra(ctx context.Context, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) []*ethpb.AttestationElectra
DeleteUnaggregatedAttestation(att ethpb.Att) error
DeleteSeenUnaggregatedAttestations() (int, error)
UnaggregatedAttestationCount() int


@@ -3,14 +3,14 @@ package attestations
import (
"bytes"
"context"
"errors"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/config/features"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/crypto/hash"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
attaggregation "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation/aggregation/attestations"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"go.opencensus.io/trace"
@@ -67,7 +67,7 @@ func (s *Service) batchForkChoiceAtts(ctx context.Context) error {
atts := append(s.cfg.Pool.AggregatedAttestations(), s.cfg.Pool.BlockAttestations()...)
atts = append(atts, s.cfg.Pool.ForkchoiceAttestations()...)
attsByDataRoot := make(map[[32]byte][]ethpb.Att, len(atts))
attsByVerAndDataRoot := make(map[attestation.Id][]ethpb.Att, len(atts))
// Consolidate attestations by aggregating them by similar data root.
for _, att := range atts {
@@ -79,14 +79,14 @@ func (s *Service) batchForkChoiceAtts(ctx context.Context) error {
continue
}
attDataRoot, err := att.GetData().HashTreeRoot()
id, err := attestation.NewId(att, attestation.Data)
if err != nil {
return err
return errors.Wrap(err, "could not create attestation ID")
}
attsByDataRoot[attDataRoot] = append(attsByDataRoot[attDataRoot], att)
attsByVerAndDataRoot[id] = append(attsByVerAndDataRoot[id], att)
}
for _, atts := range attsByDataRoot {
for _, atts := range attsByVerAndDataRoot {
if err := s.aggregateAndSaveForkChoiceAtts(atts); err != nil {
return err
}
@@ -119,12 +119,12 @@ func (s *Service) aggregateAndSaveForkChoiceAtts(atts []ethpb.Att) error {
// This checks if the attestation has previously been aggregated for fork choice.
// It returns true if so and false otherwise.
func (s *Service) seen(att ethpb.Att) (bool, error) {
attRoot, err := hash.Proto(att.GetData())
id, err := attestation.NewId(att, attestation.Data)
if err != nil {
return false, err
return false, errors.Wrap(err, "could not create attestation ID")
}
incomingBits := att.GetAggregationBits()
savedBits, ok := s.forkChoiceProcessedRoots.Get(attRoot)
savedBits, ok := s.forkChoiceProcessedAtts.Get(id)
if ok {
savedBitlist, ok := savedBits.(bitfield.Bitlist)
if !ok {
@@ -149,6 +149,6 @@ func (s *Service) seen(att ethpb.Att) (bool, error) {
}
}
s.forkChoiceProcessedRoots.Add(attRoot, incomingBits)
s.forkChoiceProcessedAtts.Add(id, incomingBits)
return false, nil
}
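
For context, the containment check above asks whether a previously recorded bitlist already covers every bit of the incoming one. A stripped-down sketch of that dedup flow, assuming the hashicorp LRU and go-bitfield APIs used elsewhere in this diff:

package main

import (
	"fmt"

	lru "github.com/hashicorp/golang-lru"
	"github.com/prysmaticlabs/go-bitfield"
)

// seen reports whether the bits for this key were already covered by a
// previously recorded bitlist, mirroring the check in Service.seen above.
func seen(c *lru.Cache, key [33]byte, incoming bitfield.Bitlist) (bool, error) {
	if v, ok := c.Get(key); ok {
		saved, ok := v.(bitfield.Bitlist)
		if !ok {
			return false, fmt.Errorf("unexpected cache value type %T", v)
		}
		covered, err := saved.Contains(incoming)
		if err != nil {
			return false, err
		}
		if covered {
			return true, nil
		}
	}
	c.Add(key, incoming)
	return false, nil
}

func main() {
	c, _ := lru.New(1 << 10)
	var key [33]byte
	bits := bitfield.Bitlist{0b1001}
	fmt.Println(seen(c, key, bits)) // false <nil> on first sight
	fmt.Println(seen(c, key, bits)) // true <nil> once recorded
}
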


@@ -13,16 +13,16 @@ import (
"github.com/prysmaticlabs/prysm/v5/config/params"
)
var forkChoiceProcessedRootsSize = 1 << 16
var forkChoiceProcessedAttsSize = 1 << 16
// Service of attestation pool operations.
type Service struct {
cfg *Config
ctx context.Context
cancel context.CancelFunc
err error
forkChoiceProcessedRoots *lru.Cache
genesisTime uint64
cfg *Config
ctx context.Context
cancel context.CancelFunc
err error
forkChoiceProcessedAtts *lru.Cache
genesisTime uint64
}
// Config options for the service.
@@ -35,7 +35,7 @@ type Config struct {
// NewService instantiates a new attestation pool service instance that will
// be registered into a running beacon node.
func NewService(ctx context.Context, cfg *Config) (*Service, error) {
cache := lruwrpr.New(forkChoiceProcessedRootsSize)
cache := lruwrpr.New(forkChoiceProcessedAttsSize)
if cfg.pruneInterval == 0 {
// Prune expired attestations from the pool every slot interval.
@@ -44,10 +44,10 @@ func NewService(ctx context.Context, cfg *Config) (*Service, error) {
ctx, cancel := context.WithCancel(ctx)
return &Service{
cfg: cfg,
ctx: ctx,
cancel: cancel,
forkChoiceProcessedRoots: cache,
cfg: cfg,
ctx: ctx,
cancel: cancel,
forkChoiceProcessedAtts: cache,
}, nil
}


@@ -7,7 +7,6 @@ go_library(
"broadcaster.go",
"config.go",
"connection_gater.go",
"custody.go",
"dial_relay_node.go",
"discovery.go",
"doc.go",
@@ -47,7 +46,6 @@ go_library(
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/p2p/encoder:go_default_library",
@@ -58,7 +56,6 @@ go_library(
"//beacon-chain/startup:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/wrapper:go_default_library",
@@ -77,12 +74,9 @@ go_library(
"//runtime/version:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_btcsuite_btcd_btcec_v2//:go_default_library",
"@com_github_ethereum_go_ethereum//crypto:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/discover:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
"@com_github_ferranbt_fastssz//:go_default_library",
"@com_github_holiman_uint256//:go_default_library",
"@com_github_kr_pretty//:go_default_library",
"@com_github_libp2p_go_libp2p//:go_default_library",


@@ -11,7 +11,6 @@ import (
ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/crypto/hash"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing"
@@ -97,12 +96,7 @@ func (s *Service) BroadcastSyncCommitteeMessage(ctx context.Context, subnet uint
return nil
}
func (s *Service) internalBroadcastAttestation(
ctx context.Context,
subnet uint64,
att ethpb.Att,
forkDigest [fieldparams.VersionLength]byte,
) {
func (s *Service) internalBroadcastAttestation(ctx context.Context, subnet uint64, att ethpb.Att, forkDigest [4]byte) {
_, span := trace.StartSpan(ctx, "p2p.internalBroadcastAttestation")
defer span.End()
ctx = trace.NewContext(context.Background(), span) // clear parent context / deadline.
@@ -148,7 +142,7 @@ func (s *Service) internalBroadcastAttestation(
log.WithFields(logrus.Fields{
"attestationSlot": att.GetData().Slot,
"currentSlot": currSlot,
}).WithError(err).Warning("Attestation is too old to broadcast, discarding it")
}).WithError(err).Debug("Attestation is too old to broadcast, discarding it")
return
}
@@ -158,7 +152,7 @@ func (s *Service) internalBroadcastAttestation(
}
}
func (s *Service) broadcastSyncCommittee(ctx context.Context, subnet uint64, sMsg *ethpb.SyncCommitteeMessage, forkDigest [fieldparams.VersionLength]byte) {
func (s *Service) broadcastSyncCommittee(ctx context.Context, subnet uint64, sMsg *ethpb.SyncCommitteeMessage, forkDigest [4]byte) {
_, span := trace.StartSpan(ctx, "p2p.broadcastSyncCommittee")
defer span.End()
ctx = trace.NewContext(context.Background(), span) // clear parent context / deadline.
@@ -234,12 +228,7 @@ func (s *Service) BroadcastBlob(ctx context.Context, subnet uint64, blob *ethpb.
return nil
}
func (s *Service) internalBroadcastBlob(
ctx context.Context,
subnet uint64,
blobSidecar *ethpb.BlobSidecar,
forkDigest [fieldparams.VersionLength]byte,
) {
func (s *Service) internalBroadcastBlob(ctx context.Context, subnet uint64, blobSidecar *ethpb.BlobSidecar, forkDigest [4]byte) {
_, span := trace.StartSpan(ctx, "p2p.internalBroadcastBlob")
defer span.End()
ctx = trace.NewContext(context.Background(), span) // clear parent context / deadline.
@@ -254,7 +243,7 @@ func (s *Service) internalBroadcastBlob(
s.subnetLocker(wrappedSubIdx).RUnlock()
if !hasPeer {
blobSidecarBroadcastAttempts.Inc()
blobSidecarCommitteeBroadcastAttempts.Inc()
if err := func() error {
s.subnetLocker(wrappedSubIdx).Lock()
defer s.subnetLocker(wrappedSubIdx).Unlock()
@@ -263,7 +252,7 @@ func (s *Service) internalBroadcastBlob(
return err
}
if ok {
blobSidecarBroadcasts.Inc()
blobSidecarCommitteeBroadcasts.Inc()
return nil
}
return errors.New("failed to find peers for subnet")
@@ -279,99 +268,6 @@ func (s *Service) internalBroadcastBlob(
}
}
// BroadcastDataColumn broadcasts a data column to the p2p network. The message is assumed to be
// broadcast to the current fork and to the input column subnet.
// TODO: Add tests
func (s *Service) BroadcastDataColumn(ctx context.Context, columnSubnet uint64, dataColumnSidecar *ethpb.DataColumnSidecar) error {
// Add tracing to the function.
ctx, span := trace.StartSpan(ctx, "p2p.BroadcastBlob")
defer span.End()
// Ensure the data column sidecar is not nil.
if dataColumnSidecar == nil {
return errors.Errorf("attempted to broadcast nil data column sidecar at subnet %d", columnSubnet)
}
// Retrieve the current fork digest.
forkDigest, err := s.currentForkDigest()
if err != nil {
err := errors.Wrap(err, "current fork digest")
tracing.AnnotateError(span, err)
return err
}
// Non-blocking broadcast, with attempts to discover a column subnet peer if none available.
go s.internalBroadcastDataColumn(ctx, columnSubnet, dataColumnSidecar, forkDigest)
return nil
}
func (s *Service) internalBroadcastDataColumn(
ctx context.Context,
columnSubnet uint64,
dataColumnSidecar *ethpb.DataColumnSidecar,
forkDigest [fieldparams.VersionLength]byte,
) {
// Add tracing to the function.
_, span := trace.StartSpan(ctx, "p2p.internalBroadcastDataColumn")
defer span.End()
// Increase the number of broadcast attempts.
dataColumnSidecarBroadcastAttempts.Inc()
// Clear parent context / deadline.
ctx = trace.NewContext(context.Background(), span)
// Define a one-slot length context timeout.
oneSlot := time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second
ctx, cancel := context.WithTimeout(ctx, oneSlot)
defer cancel()
// Build the topic corresponding to this column subnet and this fork digest.
topic := dataColumnSubnetToTopic(columnSubnet, forkDigest)
// Compute the wrapped subnet index.
wrappedSubIdx := columnSubnet + dataColumnSubnetVal
// Check if we have peers with this subnet.
hasPeer := func() bool {
s.subnetLocker(wrappedSubIdx).RLock()
defer s.subnetLocker(wrappedSubIdx).RUnlock()
return s.hasPeerWithSubnet(topic)
}()
// If no peers are found, attempt to find peers with this subnet.
if !hasPeer {
if err := func() error {
s.subnetLocker(wrappedSubIdx).Lock()
defer s.subnetLocker(wrappedSubIdx).Unlock()
ok, err := s.FindPeersWithSubnet(ctx, topic, columnSubnet, 1 /*threshold*/)
if err != nil {
return errors.Wrap(err, "find peers for subnet")
}
if ok {
return nil
}
return errors.New("failed to find peers for subnet")
}(); err != nil {
log.WithError(err).Error("Failed to find peers")
tracing.AnnotateError(span, err)
}
}
// Broadcast the data column sidecar to the network.
if err := s.broadcastObject(ctx, dataColumnSidecar, topic); err != nil {
log.WithError(err).Error("Failed to broadcast data column sidecar")
tracing.AnnotateError(span, err)
}
// Increase the number of successful broadcasts (note: this reuses the blob sidecar success counter).
blobSidecarBroadcasts.Inc()
}
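The deleted broadcast helpers above all share one shape: check for a subnet peer under a read lock, search for one under a write lock if none is found, then publish. A minimal sketch of that pattern, with hypothetical helper signatures standing in for Prysm's internals:

package p2psketch

import (
	"context"
	"errors"
	"sync"
)

// ensureSubnetPeerAndPublish mirrors the broadcast pattern above; hasPeer,
// findPeers, and publish are hypothetical stand-ins for the real methods.
func ensureSubnetPeerAndPublish(
	ctx context.Context,
	lock *sync.RWMutex,
	topic string,
	hasPeer func(topic string) bool,
	findPeers func(ctx context.Context, topic string) (bool, error),
	publish func(ctx context.Context, topic string) error,
) error {
	lock.RLock()
	found := hasPeer(topic)
	lock.RUnlock()
	if !found {
		lock.Lock()
		ok, err := findPeers(ctx, topic)
		lock.Unlock()
		if err != nil {
			return err
		}
		if !ok {
			return errors.New("failed to find peers for subnet")
		}
	}
	return publish(ctx, topic)
}

Taking the exclusive lock around peer discovery keeps concurrent broadcasts for the same subnet from launching duplicate searches.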
// broadcastObject broadcasts a message to other peers in our gossip mesh.
func (s *Service) broadcastObject(ctx context.Context, obj ssz.Marshaler, topic string) error {
ctx, span := trace.StartSpan(ctx, "p2p.broadcastObject")
@@ -401,18 +297,14 @@ func (s *Service) broadcastObject(ctx context.Context, obj ssz.Marshaler, topic
return nil
}
func attestationToTopic(subnet uint64, forkDigest [fieldparams.VersionLength]byte) string {
func attestationToTopic(subnet uint64, forkDigest [4]byte) string {
return fmt.Sprintf(AttestationSubnetTopicFormat, forkDigest, subnet)
}
func syncCommitteeToTopic(subnet uint64, forkDigest [fieldparams.VersionLength]byte) string {
func syncCommitteeToTopic(subnet uint64, forkDigest [4]byte) string {
return fmt.Sprintf(SyncCommitteeSubnetTopicFormat, forkDigest, subnet)
}
func blobSubnetToTopic(subnet uint64, forkDigest [fieldparams.VersionLength]byte) string {
func blobSubnetToTopic(subnet uint64, forkDigest [4]byte) string {
return fmt.Sprintf(BlobSubnetTopicFormat, forkDigest, subnet)
}
func dataColumnSubnetToTopic(subnet uint64, forkDigest [fieldparams.VersionLength]byte) string {
return fmt.Sprintf(DataColumnSubnetTopicFormat, forkDigest, subnet)
}
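For reference, the topic helpers above only render a format string with the fork digest and subnet index. A sketch, under the assumption that the format resembles the attestation subnet topic (the real constants are defined elsewhere in this package):

package p2psketch

import "fmt"

// Assumption: the actual AttestationSubnetTopicFormat lives elsewhere in this
// package; this value is illustrative of its shape.
const attestationSubnetTopicFormat = "/eth2/%x/beacon_attestation_%d"

// exampleAttestationTopic renders digest 0xdeadbeef and subnet 3 as
// "/eth2/deadbeef/beacon_attestation_3".
func exampleAttestationTopic(subnet uint64, forkDigest [4]byte) string {
	return fmt.Sprintf(attestationSubnetTopicFormat, forkDigest, subnet)
}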


@@ -336,7 +336,7 @@ func TestService_InterceptAddrDial_Public(t *testing.T) {
}),
}
var err error
//test with public filter
// test with public filter
cidr := "public"
ip := "212.67.10.122"
s.addrFilter, err = configureFilter(&Config{AllowListCIDR: cidr})
@@ -348,7 +348,7 @@ func TestService_InterceptAddrDial_Public(t *testing.T) {
t.Errorf("Expected multiaddress with ip %s to not be rejected since we allow public addresses", ip)
}
ip = "192.168.1.0" //this is private and should fail
ip = "192.168.1.0" // this is private and should fail
multiAddress, err = ma.NewMultiaddr(fmt.Sprintf("/ip4/%s/tcp/%d", ip, 3000))
require.NoError(t, err)
valid = s.InterceptAddrDial("", multiAddress)
@@ -356,7 +356,7 @@ func TestService_InterceptAddrDial_Public(t *testing.T) {
t.Errorf("Expected multiaddress with ip %s to be rejected since we are only allowing public addresses", ip)
}
//test with public allow filter, with a public address added to the deny list
// test with public allow filter, with a public address added to the deny list
invalidPublicIp := "212.67.10.122"
validPublicIp := "91.65.69.69"
s.addrFilter, err = configureFilter(&Config{AllowListCIDR: "public", DenyListCIDR: []string{"212.67.89.112/16"}})
@@ -384,7 +384,7 @@ func TestService_InterceptAddrDial_Private(t *testing.T) {
}),
}
var err error
//test with private filter
// test with private filter
cidr := "private"
s.addrFilter, err = configureFilter(&Config{DenyListCIDR: []string{cidr}})
require.NoError(t, err)
@@ -413,7 +413,7 @@ func TestService_InterceptAddrDial_AllowPrivate(t *testing.T) {
}),
}
var err error
//test with private filter
// test with private filter
cidr := "private"
s.addrFilter, err = configureFilter(&Config{AllowListCIDR: cidr})
require.NoError(t, err)
@@ -442,7 +442,7 @@ func TestService_InterceptAddrDial_DenyPublic(t *testing.T) {
}),
}
var err error
//test with private filter
// test with private filter
cidr := "public"
s.addrFilter, err = configureFilter(&Config{DenyListCIDR: []string{cidr}})
require.NoError(t, err)
@@ -471,7 +471,7 @@ func TestService_InterceptAddrDial_AllowConflict(t *testing.T) {
}),
}
var err error
//test with private filter
// test with private filter
cidr := "public"
s.addrFilter, err = configureFilter(&Config{DenyListCIDR: []string{cidr, "192.168.0.0/16"}})
require.NoError(t, err)
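These tests exercise allow/deny CIDR filtering of dial addresses. A self-contained sketch of that kind of check built on the standard library (not Prysm's actual filter, which also understands the special "public" and "private" keywords):

package p2psketch

import (
	"fmt"
	"net"
)

// allowedToDial sketches allow/deny semantics: deny rules win, and an empty
// allow list means "allow by default".
func allowedToDial(ipStr string, allowCIDRs, denyCIDRs []string) (bool, error) {
	ip := net.ParseIP(ipStr)
	if ip == nil {
		return false, fmt.Errorf("invalid ip: %s", ipStr)
	}
	for _, c := range denyCIDRs {
		_, ipNet, err := net.ParseCIDR(c)
		if err != nil {
			return false, err
		}
		if ipNet.Contains(ip) {
			return false, nil
		}
	}
	for _, c := range allowCIDRs {
		_, ipNet, err := net.ParseCIDR(c)
		if err != nil {
			return false, err
		}
		if ipNet.Contains(ip) {
			return true, nil
		}
	}
	return len(allowCIDRs) == 0, nil
}

With the deny list from the test, allowedToDial("212.67.10.122", nil, []string{"212.67.89.112/16"}) returns false, because the /16 mask places the address inside 212.67.0.0/16.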


@@ -1,83 +0,0 @@
package p2p
import (
ssz "github.com/ferranbt/fastssz"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/sirupsen/logrus"
)
// GetValidCustodyPeers filters the given peers down to those whose advertised
// custody columns cover every column this node custodies itself.
func (s *Service) GetValidCustodyPeers(peers []peer.ID) ([]peer.ID, error) {
custodiedSubnetCount := params.BeaconConfig().CustodyRequirement
if flags.Get().SubscribeToAllSubnets {
custodiedSubnetCount = params.BeaconConfig().DataColumnSidecarSubnetCount
}
custodiedColumns, err := peerdas.CustodyColumns(s.NodeID(), custodiedSubnetCount)
if err != nil {
return nil, err
}
var validPeers []peer.ID
for _, pid := range peers {
remoteCount := s.CustodyCountFromRemotePeer(pid)
nodeId, err := ConvertPeerIDToNodeID(pid)
if err != nil {
return nil, errors.Wrap(err, "convert peer ID to node ID")
}
remoteCustodiedColumns, err := peerdas.CustodyColumns(nodeId, remoteCount)
if err != nil {
return nil, errors.Wrap(err, "custody columns")
}
invalidPeer := false
for c := range custodiedColumns {
if !remoteCustodiedColumns[c] {
invalidPeer = true
break
}
}
if invalidPeer {
continue
}
copiedId := pid
// Add valid peer to list
validPeers = append(validPeers, copiedId)
}
return validPeers, nil
}
// CustodyCountFromRemotePeer returns the number of column subnets the given peer
// advertises custody for in its ENR, falling back to the minimum requirement.
func (s *Service) CustodyCountFromRemotePeer(pid peer.ID) uint64 {
// By default, we assume the peer custodies the minimum number of subnets.
peerCustodyCount := params.BeaconConfig().CustodyRequirement
// Retrieve the ENR of the peer.
peerRecord, err := s.peers.ENR(pid)
if err != nil {
log.WithError(err).WithField("peerID", pid).Error("Failed to retrieve ENR for peer")
return peerCustodyCount
}
if peerRecord == nil {
// This is expected for inbound peers, so we don't log an error here.
log.WithField("peerID", pid).Debug("No ENR found for peer")
return peerCustodyCount
}
// Load the `custody_subnet_count` entry from the ENR.
custodyObj := CustodySubnetCount(make([]byte, 8))
if err := peerRecord.Load(&custodyObj); err != nil {
log.WithError(err).WithField("peerID", pid).Error("Cannot load the custody_subnet_count from peer")
return peerCustodyCount
}
// Unmarshal the custody count from the peer's ENR.
peerCustodyCountFromRecord := ssz.UnmarshallUint64(custodyObj)
log.WithFields(logrus.Fields{
"peerID": pid,
"custodyCount": peerCustodyCountFromRecord,
}).Debug("Custody count read from peer's ENR")
return peerCustodyCountFromRecord
}
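The deleted CustodyCountFromRemotePeer reads a custom uint64 entry out of the peer's ENR. A sketch of that read path using go-ethereum's enr package, with encoding/binary standing in for fastssz (SSZ uint64 values are little-endian):

package p2psketch

import (
	"encoding/binary"

	"github.com/ethereum/go-ethereum/p2p/enr"
)

// custodySubnetCount mirrors the deleted CustodySubnetCount type: a raw byte
// entry stored under the "custody_subnet_count" ENR key.
type custodySubnetCount []byte

func (custodySubnetCount) ENRKey() string { return "custody_subnet_count" }

// custodyCountFromRecord returns the custody subnet count advertised in record,
// falling back to def when the entry is missing or malformed.
func custodyCountFromRecord(record *enr.Record, def uint64) uint64 {
	obj := custodySubnetCount(make([]byte, 8))
	if err := record.Load(&obj); err != nil {
		return def
	}
	if len(obj) < 8 {
		return def
	}
	// SSZ encodes uint64 little-endian, so this matches ssz.UnmarshallUint64.
	return binary.LittleEndian.Uint64(obj)
}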


@@ -14,7 +14,6 @@ import (
"github.com/libp2p/go-libp2p/core/peer"
ma "github.com/multiformats/go-multiaddr"
"github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
@@ -43,22 +42,16 @@ const (
udp6
)
type (
quicProtocol uint16
CustodySubnetCount []byte
)
type quicProtocol uint16
// quicProtocol is the "quic" key, which holds the QUIC port of the node.
func (quicProtocol) ENRKey() string { return "quic" }
// https://github.com/ethereum/consensus-specs/blob/dev/specs/_features/eip7594/p2p-interface.md#the-discovery-domain-discv5
func (CustodySubnetCount) ENRKey() string { return "custody_subnet_count" }
// RefreshPersistentSubnets checks that we are tracking our local persistent subnets for a variety of gossip topics.
// This routine checks for our attestation, sync committee and data column subnets and updates them if they have
// been rotated.
func (s *Service) RefreshPersistentSubnets() {
// return early if discv5 isnt running
// RefreshENR uses an epoch to refresh the enr entry for our node
// with the tracked committee ids for the epoch, allowing our node
// to be dynamically discoverable by others given our tracked committee ids.
func (s *Service) RefreshENR() {
// return early if discv5 isn't running
if s.dv5Listener == nil || !s.isInitialized() {
return
}
@@ -67,10 +60,6 @@ func (s *Service) RefreshPersistentSubnets() {
log.WithError(err).Error("Could not initialize persistent subnets")
return
}
if err := initializePersistentColumnSubnets(s.dv5Listener.LocalNode().ID()); err != nil {
log.WithError(err).Error("Could not initialize persistent column subnets")
return
}
bitV := bitfield.NewBitvector64()
committees := cache.SubnetIDs.GetAllSubnets()
@@ -269,20 +258,6 @@ func (s *Service) createLocalNode(
localNode.Set(quicEntry)
}
if features.Get().EnablePeerDAS {
var custodyBytes []byte
custodyBytes = ssz.MarshalUint64(custodyBytes, params.BeaconConfig().CustodyRequirement)
custodySubnetEntry := CustodySubnetCount(custodyBytes)
if flags.Get().SubscribeToAllSubnets {
var allCustodyBytes []byte
allCustodyBytes = ssz.MarshalUint64(allCustodyBytes, params.BeaconConfig().DataColumnSidecarSubnetCount)
custodySubnetEntry = CustodySubnetCount(allCustodyBytes)
}
localNode.Set(custodySubnetEntry)
}
localNode.SetFallbackIP(ipAddr)
localNode.SetFallbackUDP(udpPort)
@@ -371,8 +346,6 @@ func (s *Service) filterPeer(node *enode.Node) bool {
// Ignore nodes that are already active.
if s.peers.IsActive(peerData.ID) {
// Constantly update enr for known peers
s.peers.UpdateENR(node.Record(), peerData.ID)
return false
}
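The removed createLocalNode lines are the write side of the same ENR entry: when PeerDAS is enabled, the node advertises its own custody subnet count. A sketch, reusing the custodySubnetCount entry type from the read-side sketch above (little-endian bytes stand in for fastssz's ssz.MarshalUint64):

package p2psketch

import (
	"encoding/binary"

	"github.com/ethereum/go-ethereum/p2p/enode"
)

// setCustodyCount writes the node's custody subnet count into its own ENR,
// as the removed createLocalNode code did.
func setCustodyCount(localNode *enode.LocalNode, count uint64) {
	buf := make([]byte, 8)
	binary.LittleEndian.PutUint64(buf, count)
	localNode.Set(custodySubnetCount(buf))
}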


@@ -601,7 +601,7 @@ func TestRefreshENR_ForkBoundaries(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
s := tt.svcBuilder(t)
s.RefreshPersistentSubnets()
s.RefreshENR()
tt.postValidation(t, s)
s.dv5Listener.Close()
cache.SubnetIDs.EmptyAllCaches()


@@ -121,7 +121,7 @@ func (s *Service) topicScoreParams(topic string) (*pubsub.TopicScoreParams, erro
return defaultAttesterSlashingTopicParams(), nil
case strings.Contains(topic, GossipBlsToExecutionChangeMessage):
return defaultBlsToExecutionChangeTopicParams(), nil
case strings.Contains(topic, GossipBlobSidecarMessage), strings.Contains(topic, GossipDataColumnSidecarMessage):
case strings.Contains(topic, GossipBlobSidecarMessage):
// TODO(Deneb): Using the default block scoring. But this should be updated.
return defaultBlockTopicParams(), nil
default:


@@ -22,13 +22,13 @@ var gossipTopicMappings = map[string]proto.Message{
SyncCommitteeSubnetTopicFormat: &ethpb.SyncCommitteeMessage{},
BlsToExecutionChangeSubnetTopicFormat: &ethpb.SignedBLSToExecutionChange{},
BlobSubnetTopicFormat: &ethpb.BlobSidecar{},
DataColumnSubnetTopicFormat: &ethpb.DataColumnSidecar{},
}
// GossipTopicMappings is a function to return the assigned data type
// versioned by epoch.
func GossipTopicMappings(topic string, epoch primitives.Epoch) proto.Message {
if topic == BlockSubnetTopicFormat {
switch topic {
case BlockSubnetTopicFormat:
if epoch >= params.BeaconConfig().ElectraForkEpoch {
return &ethpb.SignedBeaconBlockElectra{}
}
@@ -44,8 +44,25 @@ func GossipTopicMappings(topic string, epoch primitives.Epoch) proto.Message {
if epoch >= params.BeaconConfig().AltairForkEpoch {
return &ethpb.SignedBeaconBlockAltair{}
}
return gossipTopicMappings[topic]
case AttestationSubnetTopicFormat:
if epoch >= params.BeaconConfig().ElectraForkEpoch {
return &ethpb.AttestationElectra{}
}
return gossipTopicMappings[topic]
case AttesterSlashingSubnetTopicFormat:
if epoch >= params.BeaconConfig().ElectraForkEpoch {
return &ethpb.AttesterSlashingElectra{}
}
return gossipTopicMappings[topic]
case AggregateAndProofSubnetTopicFormat:
if epoch >= params.BeaconConfig().ElectraForkEpoch {
return &ethpb.SignedAggregateAttestationAndProofElectra{}
}
return gossipTopicMappings[topic]
default:
return gossipTopicMappings[topic]
}
return gossipTopicMappings[topic]
}
// AllTopics returns all topics stored in our
@@ -76,4 +93,7 @@ func init() {
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockDeneb{})] = BlockSubnetTopicFormat
// Specially handle Electra objects.
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockElectra{})] = BlockSubnetTopicFormat
GossipTypeMapping[reflect.TypeOf(&ethpb.AttestationElectra{})] = AttestationSubnetTopicFormat
GossipTypeMapping[reflect.TypeOf(&ethpb.AttesterSlashingElectra{})] = AttesterSlashingSubnetTopicFormat
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedAggregateAttestationAndProofElectra{})] = AggregateAndProofSubnetTopicFormat
}
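GossipTopicMappings hands back shared template instances, so callers that decode network data typically clone the mapped value first. A usage sketch (freshGossipMessage is hypothetical; proto here is google.golang.org/protobuf/proto):

// freshGossipMessage shows the intended use of the mapping: clone the shared
// template before unmarshaling gossip data into it.
func freshGossipMessage(topic string, epoch primitives.Epoch) (proto.Message, error) {
	base := GossipTopicMappings(topic, epoch)
	if base == nil {
		return nil, fmt.Errorf("no message type mapped for topic %s", topic)
	}
	return proto.Clone(base), nil
}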


@@ -22,20 +22,20 @@ func TestMappingHasNoDuplicates(t *testing.T) {
}
}
func TestGossipTopicMappings_CorrectBlockType(t *testing.T) {
func TestGossipTopicMappings_CorrectType(t *testing.T) {
params.SetupTestConfigCleanup(t)
bCfg := params.BeaconConfig().Copy()
altairForkEpoch := primitives.Epoch(100)
BellatrixForkEpoch := primitives.Epoch(200)
CapellaForkEpoch := primitives.Epoch(300)
DenebForkEpoch := primitives.Epoch(400)
ElectraForkEpoch := primitives.Epoch(500)
bellatrixForkEpoch := primitives.Epoch(200)
capellaForkEpoch := primitives.Epoch(300)
denebForkEpoch := primitives.Epoch(400)
electraForkEpoch := primitives.Epoch(500)
bCfg.AltairForkEpoch = altairForkEpoch
bCfg.BellatrixForkEpoch = BellatrixForkEpoch
bCfg.CapellaForkEpoch = CapellaForkEpoch
bCfg.DenebForkEpoch = DenebForkEpoch
bCfg.ElectraForkEpoch = ElectraForkEpoch
bCfg.BellatrixForkEpoch = bellatrixForkEpoch
bCfg.CapellaForkEpoch = capellaForkEpoch
bCfg.DenebForkEpoch = denebForkEpoch
bCfg.ElectraForkEpoch = electraForkEpoch
bCfg.ForkVersionSchedule[bytesutil.ToBytes4(bCfg.AltairForkVersion)] = primitives.Epoch(100)
bCfg.ForkVersionSchedule[bytesutil.ToBytes4(bCfg.BellatrixForkVersion)] = primitives.Epoch(200)
bCfg.ForkVersionSchedule[bytesutil.ToBytes4(bCfg.CapellaForkVersion)] = primitives.Epoch(300)
@@ -47,29 +47,83 @@ func TestGossipTopicMappings_CorrectBlockType(t *testing.T) {
pMessage := GossipTopicMappings(BlockSubnetTopicFormat, 0)
_, ok := pMessage.(*ethpb.SignedBeaconBlock)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttestationSubnetTopicFormat, 0)
_, ok = pMessage.(*ethpb.Attestation)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttesterSlashingSubnetTopicFormat, 0)
_, ok = pMessage.(*ethpb.AttesterSlashing)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AggregateAndProofSubnetTopicFormat, 0)
_, ok = pMessage.(*ethpb.SignedAggregateAttestationAndProof)
assert.Equal(t, true, ok)
// Altair Fork
pMessage = GossipTopicMappings(BlockSubnetTopicFormat, altairForkEpoch)
_, ok = pMessage.(*ethpb.SignedBeaconBlockAltair)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttestationSubnetTopicFormat, altairForkEpoch)
_, ok = pMessage.(*ethpb.Attestation)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttesterSlashingSubnetTopicFormat, altairForkEpoch)
_, ok = pMessage.(*ethpb.AttesterSlashing)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AggregateAndProofSubnetTopicFormat, altairForkEpoch)
_, ok = pMessage.(*ethpb.SignedAggregateAttestationAndProof)
assert.Equal(t, true, ok)
// Bellatrix Fork
pMessage = GossipTopicMappings(BlockSubnetTopicFormat, BellatrixForkEpoch)
pMessage = GossipTopicMappings(BlockSubnetTopicFormat, bellatrixForkEpoch)
_, ok = pMessage.(*ethpb.SignedBeaconBlockBellatrix)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttestationSubnetTopicFormat, bellatrixForkEpoch)
_, ok = pMessage.(*ethpb.Attestation)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttesterSlashingSubnetTopicFormat, bellatrixForkEpoch)
_, ok = pMessage.(*ethpb.AttesterSlashing)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AggregateAndProofSubnetTopicFormat, bellatrixForkEpoch)
_, ok = pMessage.(*ethpb.SignedAggregateAttestationAndProof)
assert.Equal(t, true, ok)
// Capella Fork
pMessage = GossipTopicMappings(BlockSubnetTopicFormat, CapellaForkEpoch)
pMessage = GossipTopicMappings(BlockSubnetTopicFormat, capellaForkEpoch)
_, ok = pMessage.(*ethpb.SignedBeaconBlockCapella)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttestationSubnetTopicFormat, capellaForkEpoch)
_, ok = pMessage.(*ethpb.Attestation)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttesterSlashingSubnetTopicFormat, capellaForkEpoch)
_, ok = pMessage.(*ethpb.AttesterSlashing)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AggregateAndProofSubnetTopicFormat, capellaForkEpoch)
_, ok = pMessage.(*ethpb.SignedAggregateAttestationAndProof)
assert.Equal(t, true, ok)
// Deneb Fork
pMessage = GossipTopicMappings(BlockSubnetTopicFormat, DenebForkEpoch)
pMessage = GossipTopicMappings(BlockSubnetTopicFormat, denebForkEpoch)
_, ok = pMessage.(*ethpb.SignedBeaconBlockDeneb)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttestationSubnetTopicFormat, denebForkEpoch)
_, ok = pMessage.(*ethpb.Attestation)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttesterSlashingSubnetTopicFormat, denebForkEpoch)
_, ok = pMessage.(*ethpb.AttesterSlashing)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AggregateAndProofSubnetTopicFormat, denebForkEpoch)
_, ok = pMessage.(*ethpb.SignedAggregateAttestationAndProof)
assert.Equal(t, true, ok)
// Electra Fork
pMessage = GossipTopicMappings(BlockSubnetTopicFormat, ElectraForkEpoch)
pMessage = GossipTopicMappings(BlockSubnetTopicFormat, electraForkEpoch)
_, ok = pMessage.(*ethpb.SignedBeaconBlockElectra)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttestationSubnetTopicFormat, electraForkEpoch)
_, ok = pMessage.(*ethpb.AttestationElectra)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AttesterSlashingSubnetTopicFormat, electraForkEpoch)
_, ok = pMessage.(*ethpb.AttesterSlashingElectra)
assert.Equal(t, true, ok)
pMessage = GossipTopicMappings(AggregateAndProofSubnetTopicFormat, electraForkEpoch)
_, ok = pMessage.(*ethpb.SignedAggregateAttestationAndProofElectra)
assert.Equal(t, true, ok)
}
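The per-fork assertions above are deliberately explicit; for a single topic they could equally be expressed table-driven. A sketch (assuming reflect and proto.Message are imported in this test file):

for _, tc := range []struct {
	epoch primitives.Epoch
	want  proto.Message
}{
	{0, &ethpb.Attestation{}},
	{denebForkEpoch, &ethpb.Attestation{}},
	{electraForkEpoch, &ethpb.AttestationElectra{}},
} {
	pMessage := GossipTopicMappings(AttestationSubnetTopicFormat, tc.epoch)
	assert.Equal(t, reflect.TypeOf(tc.want), reflect.TypeOf(pMessage))
}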


@@ -3,7 +3,6 @@ package p2p
import (
"context"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/core/connmgr"
@@ -29,12 +28,6 @@ type P2P interface {
ConnectionHandler
PeersProvider
MetadataProvider
CustodyHandler
}
type Acceser interface {
Broadcaster
PeerManager
}
// Broadcaster broadcasts messages to peers over the p2p pubsub protocol.
@@ -43,7 +36,6 @@ type Broadcaster interface {
BroadcastAttestation(ctx context.Context, subnet uint64, att ethpb.Att) error
BroadcastSyncCommitteeMessage(ctx context.Context, subnet uint64, sMsg *ethpb.SyncCommitteeMessage) error
BroadcastBlob(ctx context.Context, subnet uint64, blob *ethpb.BlobSidecar) error
BroadcastDataColumn(ctx context.Context, columnSubnet uint64, dataColumnSidecar *ethpb.DataColumnSidecar) error
}
// SetStreamHandler configures p2p to handle streams of a certain topic ID.
@@ -89,9 +81,8 @@ type PeerManager interface {
PeerID() peer.ID
Host() host.Host
ENR() *enr.Record
NodeID() enode.ID
DiscoveryAddresses() ([]multiaddr.Multiaddr, error)
RefreshPersistentSubnets()
RefreshENR()
FindPeersWithSubnet(ctx context.Context, topic string, subIndex uint64, threshold int) (bool, error)
AddPingMethod(reqFunc func(ctx context.Context, id peer.ID) error)
}
@@ -111,8 +102,3 @@ type MetadataProvider interface {
Metadata() metadata.Metadata
MetadataSeq() uint64
}
type CustodyHandler interface {
CustodyCountFromRemotePeer(peer.ID) uint64
GetValidCustodyPeers([]peer.ID) ([]peer.ID, error)
}
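One payoff of composing P2P from small interfaces is that consumers can depend on only the capability they use. A sketch of a caller that needs nothing beyond Broadcaster (broadcastWithRetry and its naive retry loop are illustrative, not Prysm code):

// broadcastWithRetry needs only the Broadcaster capability, not the full P2P
// composite, so it is trivial to mock in tests.
func broadcastWithRetry(ctx context.Context, b Broadcaster, subnet uint64, att ethpb.Att, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = b.BroadcastAttestation(ctx, subnet, att); err == nil {
			return nil
		}
	}
	return err
}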


@@ -60,21 +60,17 @@ var (
"the subnet. The beacon node increments this counter when the broadcast is blocked " +
"until a subnet peer can be found.",
})
blobSidecarBroadcasts = promauto.NewCounter(prometheus.CounterOpts{
blobSidecarCommitteeBroadcasts = promauto.NewCounter(prometheus.CounterOpts{
Name: "p2p_blob_sidecar_committee_broadcasts",
Help: "The number of blob sidecar messages that were broadcast with no peer on.",
Help: "The number of blob sidecar committee messages that were broadcast with no peer on.",
})
syncCommitteeBroadcastAttempts = promauto.NewCounter(prometheus.CounterOpts{
Name: "p2p_sync_committee_subnet_attempted_broadcasts",
Help: "The number of sync committee that were attempted to be broadcast.",
})
blobSidecarBroadcastAttempts = promauto.NewCounter(prometheus.CounterOpts{
blobSidecarCommitteeBroadcastAttempts = promauto.NewCounter(prometheus.CounterOpts{
Name: "p2p_blob_sidecar_committee_attempted_broadcasts",
Help: "The number of blob sidecar messages that were attempted to be broadcast.",
})
dataColumnSidecarBroadcastAttempts = promauto.NewCounter(prometheus.CounterOpts{
Name: "p2p_data_column_sidecar_attempted_broadcasts",
Help: "The number of data column sidecar messages that were attempted to be broadcast.",
Help: "The number of blob sidecar committee messages that were attempted to be broadcast.",
})
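These counters follow the standard promauto pattern: register once as a side effect of package initialization, increment at the call site. A minimal self-contained sketch:

package p2psketch

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// exampleBroadcastAttempts is registered against the default registry at
// package initialization, exactly like the counters above.
var exampleBroadcastAttempts = promauto.NewCounter(prometheus.CounterOpts{
	Name: "p2p_example_attempted_broadcasts",
	Help: "The number of example messages that were attempted to be broadcast.",
})

func recordBroadcastAttempt() {
	exampleBroadcastAttempts.Inc()
}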
// Gossip Tracer Metrics


@@ -159,14 +159,6 @@ func (p *Status) Add(record *enr.Record, pid peer.ID, address ma.Multiaddr, dire
p.addIpToTracker(pid)
}
func (p *Status) UpdateENR(record *enr.Record, pid peer.ID) {
p.store.Lock()
defer p.store.Unlock()
if peerData, ok := p.store.PeerData(pid); ok {
peerData.Enr = record
}
}
// Address returns the multiaddress of the given remote peer.
// This will error if the peer does not exist.
func (p *Status) Address(pid peer.ID) (ma.Multiaddr, error) {

Some files were not shown because too many files have changed in this diff.