Compare commits

...

56 Commits

Author SHA1 Message Date
james-prysm
e9ba7c000e removing error package alias 2026-01-28 16:43:06 -06:00
james-prysm
2ef6a3c25e Merge branch 'develop' into gRPC-fallback 2026-01-28 13:44:29 -08:00
james-prysm
919bd5d6aa Update health endpoint to include sync and optimistic checks (#16294)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

A node that is syncing or in optimistic status isn't fully ready yet. We
don't have an "is ready" endpoint, but I think having the gRPC health
check match
[/eth/v1/node/health](https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Node/getHealth)
more closely would be good. This endpoint is only used internally as far
as I can tell.

this is prerequisite to https://github.com/OffchainLabs/prysm/pull/16215

tested via grpcurl against a syncing hoodi node
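
A rough client-side sketch of the new behavior (the endpoint address is a
placeholder, and the generated `NewNodeClient` constructor name is an
assumption): callers can treat `codes.Unavailable` as "not ready".

```go
package main

import (
	"context"
	"log"

	ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
)

func main() {
	ctx := context.Background()
	conn, err := grpc.DialContext(ctx, "localhost:4000",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// GetHealth now succeeds only when the node is synced and not optimistic.
	_, err = ethpb.NewNodeClient(conn).GetHealth(ctx, &ethpb.HealthRequest{})
	switch status.Code(err) {
	case codes.OK:
		log.Println("node is synced and not optimistic")
	case codes.Unavailable:
		log.Println("node is syncing, optimistic, or otherwise not ready")
	default:
		log.Printf("health check failed: %v", err)
	}
}
```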

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-01-28 21:13:30 +00:00
fernantho
0476eeda57 SSZ-QL: custom Generic Merkle Proofs building the tree and collecting the hashes in one sweep (#16177)
<!-- Thanks for sending a PR! Before submitting:

1. If this is your first PR, check out our contribution guide here
https://docs.prylabs.network/docs/contribute/contribution-guidelines
You will then need to sign our Contributor License Agreement (CLA),
which will show up as a comment from a bot in this pull request after
you open it. We cannot review code without a signed CLA.
2. Please file an associated tracking issue if this pull request is
non-trivial and requires context for our team to understand. All
features and most bug fixes should have
an associated issue with a design discussed and decided upon. Small bug
   fixes and documentation improvements don't need issues.
3. New features and bug fixes must have tests. Documentation may need to
be updated. If you're unsure what to update, send the PR, and we'll
discuss
   in review.
4. Note that PRs updating dependencies and new Go versions are not
accepted.
   Please file an issue instead.
5. A changelog entry is required for user facing issues.
-->

**What type of PR is this?**
Feature

**What does this PR do? Why is it needed?**
This PR replaces the previous PR
https://github.com/OffchainLabs/prysm/pull/16121, which built the entire
Merkle tree and generated proofs only after the tree was complete. In
this PR, the Merkle proof is produced by collecting hashes while the
Merkle tree is being built. This approach has proven to be more
efficient than the one in
https://github.com/OffchainLabs/prysm/pull/16121.
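
As a rough illustration of the idea (not this PR's implementation), a
self-contained sketch that hashes a small power-of-two leaf set and collects
the proof siblings in the same bottom-up pass, rather than building the full
tree first:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// hashPair hashes two 32-byte nodes, SSZ-style.
func hashPair(a, b [32]byte) [32]byte {
	return sha256.Sum256(append(a[:], b[:]...))
}

// proveWhileMerkleizing hashes a power-of-two list of leaves and, in the same
// pass, records the sibling hashes along the path from the target generalized
// index to the root.
func proveWhileMerkleizing(leaves [][32]byte, target uint64) ([32]byte, [][32]byte) {
	// Siblings needed for the proof: at every level, the node at gindex target^1.
	needed := map[uint64]struct{}{}
	for g := target; g > 1; g /= 2 {
		needed[g^1] = struct{}{}
	}
	collected := map[uint64][32]byte{}

	level := leaves
	base := uint64(len(leaves)) // gindex of the first node at the current level
	for len(level) > 1 {
		for i, node := range level {
			if _, ok := needed[base+uint64(i)]; ok {
				collected[base+uint64(i)] = node
			}
		}
		next := make([][32]byte, len(level)/2)
		for i := range next {
			next[i] = hashPair(level[2*i], level[2*i+1])
		}
		level = next
		base /= 2
	}

	// Order the proof bottom-up along the target path.
	var proof [][32]byte
	for g := target; g > 1; g /= 2 {
		proof = append(proof, collected[g^1])
	}
	return level[0], proof
}

func main() {
	leaves := make([][32]byte, 4)
	for i := range leaves {
		leaves[i][0] = byte(i + 1)
	}
	// In a 4-leaf tree, leaves[2] sits at generalized index 6.
	root, proof := proveWhileMerkleizing(leaves, 6)
	fmt.Printf("root=%x, proof hashes=%d\n", root[:4], len(proof))
}
```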

- **ProofCollector**:
  - New `ProofCollector` type in `encoding/ssz/query/proof_collector.go`: collects sibling hashes and leaves needed for Merkle proofs during merkleization.
  - Multiproof-ready design with `requiredSiblings`/`requiredLeaves` maps for registering target gindices before merkleization.
  - Thread-safe: read-only required maps during merkleization, mutex-protected writes to `siblings`/`leaves`.
  - `AddTarget(gindex)` registers a target leaf and computes all required sibling gindices along the path to root.
  - `toProof()` converts collected data into a `fastssz.Proof` structure.
  - Parallel execution in `merkleizeVectorBody` for composite elements with a worker pool pattern.
- Optimized container hashing: generalizes the `stateutil.OptimizedValidatorRoots` pattern for any SSZ container type:
  - `optimizedContainerRoots`: parallelized field root computation plus level-by-level vectorized hashing via `VectorizedSha256`.
  - `hashContainerHelper`: worker goroutine for processing container subsets.
  - `containerFieldRoots`: computes field roots for a single container using reflection and SszInfo metadata.

- **`Prove(gindex)` method** in `encoding/ssz/query/merkle_proof.go`:
Entry point for generating SSZ Merkle proofs for a given generalized
index.

- **Testing**
  - Added `merkle_proof_test.go` and `proof_collector_test.go` to test and benchmark this feature.

The main outcomes of the optimizations are here:
```
❯ go test ./encoding/ssz/query -run=^$ -bench='Benchmark(OptimizedContainerRoots|OptimizedValidatorRoots|ProofCollectorMerkleize)$' -benchmem
goos: darwin
goarch: arm64
pkg: github.com/OffchainLabs/prysm/v7/encoding/ssz/query
cpu: Apple M2 Pro
BenchmarkOptimizedValidatorRoots-10         3237            361029 ns/op          956858 B/op       6024 allocs/op
BenchmarkOptimizedContainerRoots-10         1138            969002 ns/op         3245223 B/op      11024 allocs/op
BenchmarkProofCollectorMerkleize-10          522           2262066 ns/op         3216000 B/op      19000 allocs/op
PASS
ok      github.com/OffchainLabs/prysm/v7/encoding/ssz/query     4.619s
```
Since `OptimizedValidatorRoots` already implements very effective
optimizations, `OptimizedContainerRoots` mimics them.
In the benchmark we can see that `OptimizedValidatorRoots` remains the
most performant and sets the baseline here:
- `ProofCollectorMerkleize` is **~6.3× slower**, uses **~3.4× more
memory** (B/op), and performs **~3.2× more allocations**.
- `OptimizedContainerRoots` sits in between: it’s **~2.7× slower** than
`OptimizedValidatorRoots` (and **~3.4× higher B/op**, **~1.8× more
allocations**), but it is a clear win over `ProofCollectorMerkleize` for
lists/vectors: **~2.3× faster** with **~1.7× fewer allocations** (and
essentially the same memory footprint).

The main drawback is that `OptimizedContainerRoots` can only be applied
to vector/list subtrees where we don’t need to collect any sibling/leaf
data (i.e., no proof targets within that subtree); integrating it into
the recursive merkleize(...) flow when targets are outside the subtree
is expected to land in a follow-up PR.
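
For reviewers, a minimal usage sketch of the new entry point, mirroring the
`proveAndVerify` test helper added in this PR (the package name here is
illustrative):

```go
package sszqlexample

import (
	"fmt"

	"github.com/OffchainLabs/prysm/v7/encoding/ssz/query"
	fastssz "github.com/prysmaticlabs/fastssz"
)

// ProveField generates and verifies a Merkle proof for a single SSZ path,
// following the same flow as the proveAndVerify test helper in this PR.
func ProveField(obj query.SSZObject, pathStr string) (*fastssz.Proof, error) {
	info, err := query.AnalyzeObject(obj)
	if err != nil {
		return nil, err
	}
	path, err := query.ParsePath(pathStr)
	if err != nil {
		return nil, err
	}
	gi, err := query.GetGeneralizedIndexFromPath(info, path)
	if err != nil {
		return nil, err
	}
	proof, err := info.Prove(gi)
	if err != nil {
		return nil, err
	}
	root, err := obj.HashTreeRoot()
	if err != nil {
		return nil, err
	}
	ok, err := fastssz.VerifyProof(root[:], proof)
	if err != nil {
		return nil, err
	}
	if !ok {
		return nil, fmt.Errorf("proof for %s did not verify", pathStr)
	}
	return proof, nil
}
```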

**Which issues(s) does this PR fix?**
Partially https://github.com/OffchainLabs/prysm/issues/15598

**Other notes for review**
In this [write-up](https://hackmd.io/@fernantho/BJbZ1xmmbg), I describe
the process of arriving at this solution.

Future improvements:
- Defensive check that the gindex is not too big, described [here](
https://github.com/OffchainLabs/prysm/pull/16177#discussion_r2671684100).
- Integrate optimizedContainerRoots into the recursive merkleize(...)
flow when proof targets are not within the subtree (skip full traversal
for container lists).
- Add multiproofs.
- Connect `proofCollector` to SSZ-QL endpoints (direct integration of
`proofCollector` for BeaconBlock endpoint and "hybrid" approach for
BeaconState endpoint).

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>
Co-authored-by: Jun Song <87601811+syjn99@users.noreply.github.com>
2026-01-28 13:01:22 +00:00
james-prysm
e6e199fa0d making old connection close async 2026-01-27 15:31:21 -06:00
james-prysm
c8eec6ec24 move mock connection to test package 2026-01-27 15:06:56 -06:00
james-prysm
4f659b236e removing node connection builder 2026-01-27 14:24:59 -06:00
james-prysm
d448ebef45 simplifying locks, and interface for unit test 2026-01-27 14:10:50 -06:00
james-prysm
87739d92ad fixing test and places renaming set host to switch host 2026-01-27 10:34:41 -06:00
james-prysm
c3bfc611a2 manu's lock comment 2026-01-27 10:20:14 -06:00
james-prysm
ed2e58ad6c resolving manu's suggestion on endpoints 2026-01-27 09:49:18 -06:00
james-prysm
dcc67bb61e Update api/grpc/grpc_connection_provider.go
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2026-01-27 08:49:29 -06:00
james-prysm
922aaa4f0f Merge branch 'develop' into gRPC-fallback 2026-01-26 09:09:54 -08:00
james-prysm
63cd7ac88e most of radek's feedback 2026-01-26 11:00:02 -06:00
james-prysm
a030d88d42 Update rest_connection_provider.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2026-01-24 21:33:10 -08:00
james-prysm
d30ef5a30f Update rest_connection_provider.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2026-01-24 21:32:55 -08:00
james-prysm
545f450f70 Update grpc_connection_provider.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2026-01-24 21:32:06 -08:00
james-prysm
9a5f5ce733 Update grpc_connection_provider.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2026-01-24 21:31:59 -08:00
james-prysm
fe7b2bf20e removing unneeded function from gRPC provider 2026-01-20 23:10:08 -06:00
james-prysm
7cc6ded31a use lazy grpc connections instead of cached 2026-01-20 16:22:43 -06:00
james-prysm
5fd3300fdb use functional parameters for new node connection 2026-01-20 15:47:08 -06:00
james-prysm
f74a9cb3ec self review 2026-01-20 15:00:19 -06:00
james-prysm
3d903d5d75 migrating rest to api package 2026-01-20 14:26:48 -06:00
james-prysm
5f335b1b58 refactoring node connection to have a rest connection provider 2026-01-20 14:10:12 -06:00
james-prysm
1f7f7c6833 migrating grpc connection provider to api package 2026-01-20 13:17:17 -06:00
james-prysm
4e952354d1 self review items 2026-01-20 11:52:17 -06:00
james-prysm
5268da43f1 fixing suffixes 2026-01-20 10:28:09 -06:00
james-prysm
a6fc327cfb lock feedback 2026-01-20 10:24:37 -06:00
james-prysm
cf04b457a6 radek's idea to just use grpc connection provider whether you have 1 host or 2 2026-01-20 10:17:46 -06:00
james-prysm
4586b0accf Update validator/client/validator.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2026-01-20 09:29:10 -06:00
james-prysm
3a71ad2ec1 Update validator/client/validator.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2026-01-20 09:28:58 -06:00
james-prysm
588766e520 Update changelog/james-prysm_grpc-fallback.md
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2026-01-20 09:28:18 -06:00
james-prysm
ecc19bc6ed Merge branch 'develop' into gRPC-fallback 2026-01-16 08:51:58 -08:00
james-prysm
164d2d50fd improving logs based on feedback 2026-01-16 10:30:39 -06:00
james-prysm
1355a9ff4d adding more unit tests 2026-01-15 21:01:52 -06:00
james-prysm
34478f30c8 Merge branch 'develop' into gRPC-fallback 2026-01-15 14:26:10 -08:00
james-prysm
214b4428e6 improving log condition 2026-01-15 16:24:17 -06:00
james-prysm
db82f3cc9d bad commit 2026-01-14 15:47:53 -06:00
james-prysm
9c874037d1 Merge branch 'develop' into gRPC-fallback 2026-01-14 13:13:24 -08:00
james-prysm
9bb231fb3b gaz 2026-01-12 16:37:16 -06:00
james-prysm
6432140603 fixing test 2026-01-12 16:17:26 -06:00
james-prysm
5e0a9ff992 removing multipleEndpointsGrpcResolverBuilder resolver 2026-01-12 15:55:20 -06:00
james-prysm
0bfd661baf changelog 2026-01-12 15:13:39 -06:00
james-prysm
b21acc0bbb gofmt 2026-01-12 14:34:47 -06:00
james-prysm
f6f65987c6 Fix compilation error in grpc_node_client.go
Use getClient() instead of undefined nodeClient field to ensure
client stub is properly recreated when connection provider switches hosts.

Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
2026-01-12 13:51:47 -06:00
james-prysm
d26cdd74ee linting 2026-01-12 13:39:51 -06:00
james-prysm
d1905cb018 Merge branch 'develop' into gRPC-fallback 2026-01-12 11:29:45 -08:00
james-prysm
9f828bdd88 Merge branch 'develop' into gRPC-fallback 2026-01-07 12:26:17 -08:00
james-prysm
17413b52ed linting 2026-01-05 23:03:16 -06:00
james-prysm
a651e7f0ac gaz 2026-01-05 22:56:47 -06:00
james-prysm
3e1cb45e92 more feedback 2026-01-05 20:40:26 -06:00
james-prysm
fc2dcb0e88 having gRPC map more closely to rest implementation as well as only providing sync access to synced nodes 2026-01-05 16:08:45 -06:00
james-prysm
888db581dd more cleanup 2026-01-05 15:33:09 -06:00
james-prysm
f1d2ee72e2 adding cleanup 2026-01-05 15:12:52 -06:00
james-prysm
31f18b9f60 adding cleanup 2026-01-05 15:00:23 -06:00
james-prysm
6462c997e9 poc grpc fallback improvements 2026-01-05 11:50:14 -06:00
69 changed files with 3167 additions and 580 deletions

View File

@@ -3,13 +3,16 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"grpc_connection_provider.go",
"grpcutils.go",
"log.go",
"mock_grpc_provider.go",
"parameters.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/api/grpc",
visibility = ["//visibility:public"],
deps = [
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@org_golang_google_grpc//:go_default_library",
"@org_golang_google_grpc//metadata:go_default_library",
@@ -18,12 +21,17 @@ go_library(
go_test(
name = "go_default_test",
srcs = ["grpcutils_test.go"],
srcs = [
"grpc_connection_provider_test.go",
"grpcutils_test.go",
],
embed = [":go_default_library"],
deps = [
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@org_golang_google_grpc//:go_default_library",
"@org_golang_google_grpc//credentials/insecure:go_default_library",
"@org_golang_google_grpc//metadata:go_default_library",
],
)

View File

@@ -0,0 +1,173 @@
package grpc
import (
"context"
"strings"
"sync"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"google.golang.org/grpc"
)
// GrpcConnectionProvider manages gRPC connections for failover support.
// It allows switching between different beacon node endpoints when the current one becomes unavailable.
// Only one connection is maintained at a time - when switching hosts, the old connection is closed.
type GrpcConnectionProvider interface {
// CurrentConn returns the currently active gRPC connection.
// The connection is created lazily on first call.
// Returns nil if the provider has been closed.
CurrentConn() *grpc.ClientConn
// CurrentHost returns the address of the currently active endpoint.
CurrentHost() string
// Hosts returns all configured endpoint addresses.
Hosts() []string
// SwitchHost switches to the endpoint at the given index.
// The new connection is created lazily on next CurrentConn() call.
SwitchHost(index int) error
// Close closes the current connection.
Close()
}
type grpcConnectionProvider struct {
// Immutable after construction - no lock needed for reads
endpoints []string
ctx context.Context
dialOpts []grpc.DialOption
// Current connection state (protected by mutex)
currentIndex uint64
conn *grpc.ClientConn
mu sync.Mutex
closed bool
}
// NewGrpcConnectionProvider creates a new connection provider that manages gRPC connections.
// The endpoint parameter can be a comma-separated list of addresses (e.g., "host1:4000,host2:4000").
// Only one connection is maintained at a time, created lazily on first use.
func NewGrpcConnectionProvider(
ctx context.Context,
endpoint string,
dialOpts []grpc.DialOption,
) (GrpcConnectionProvider, error) {
endpoints := parseEndpoints(endpoint)
if len(endpoints) == 0 {
return nil, errors.New("no gRPC endpoints provided")
}
log.WithFields(logrus.Fields{
"endpoints": endpoints,
"count": len(endpoints),
}).Info("Initialized gRPC connection provider")
return &grpcConnectionProvider{
endpoints: endpoints,
ctx: ctx,
dialOpts: dialOpts,
}, nil
}
// parseEndpoints splits a comma-separated endpoint string into individual endpoints.
func parseEndpoints(endpoint string) []string {
if endpoint == "" {
return nil
}
endpoints := make([]string, 0, 1)
for p := range strings.SplitSeq(endpoint, ",") {
if p = strings.TrimSpace(p); p != "" {
endpoints = append(endpoints, p)
}
}
return endpoints
}
func (p *grpcConnectionProvider) CurrentConn() *grpc.ClientConn {
p.mu.Lock()
defer p.mu.Unlock()
if p.closed {
return nil
}
// Return existing connection if available
if p.conn != nil {
return p.conn
}
// Create connection lazily
ep := p.endpoints[p.currentIndex]
conn, err := grpc.DialContext(p.ctx, ep, p.dialOpts...)
if err != nil {
log.WithError(err).WithField("endpoint", ep).Error("Failed to create gRPC connection")
return nil
}
p.conn = conn
log.WithField("endpoint", ep).Debug("Created gRPC connection")
return conn
}
func (p *grpcConnectionProvider) CurrentHost() string {
p.mu.Lock()
defer p.mu.Unlock()
return p.endpoints[p.currentIndex]
}
func (p *grpcConnectionProvider) Hosts() []string {
// Return a copy to maintain immutability
hosts := make([]string, len(p.endpoints))
copy(hosts, p.endpoints)
return hosts
}
func (p *grpcConnectionProvider) SwitchHost(index int) error {
if index < 0 || index >= len(p.endpoints) {
return errors.Errorf("invalid host index %d, must be between 0 and %d", index, len(p.endpoints)-1)
}
p.mu.Lock()
defer p.mu.Unlock()
if uint64(index) == p.currentIndex {
return nil // Already on this host
}
oldHost := p.endpoints[p.currentIndex]
oldConn := p.conn
p.conn = nil // Clear immediately - new connection created lazily
p.currentIndex = uint64(index)
// Close old connection asynchronously to avoid blocking the caller
if oldConn != nil {
go func() {
if err := oldConn.Close(); err != nil {
log.WithError(err).WithField("endpoint", oldHost).Debug("Failed to close previous connection")
}
}()
}
log.WithFields(logrus.Fields{
"previousHost": oldHost,
"newHost": p.endpoints[index],
}).Debug("Switched gRPC endpoint")
return nil
}
func (p *grpcConnectionProvider) Close() {
p.mu.Lock()
defer p.mu.Unlock()
if p.closed {
return
}
p.closed = true
if p.conn != nil {
if err := p.conn.Close(); err != nil {
log.WithError(err).WithField("endpoint", p.endpoints[p.currentIndex]).Debug("Failed to close gRPC connection")
}
p.conn = nil
}
}
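
A minimal sketch of how a caller might drive failover with this provider;
the endpoint addresses and the simple retry loop below are illustrative,
not part of this PR:

```go
package main

import (
	"context"
	"log"

	prysmgrpc "github.com/OffchainLabs/prysm/v7/api/grpc"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	ctx := context.Background()
	dialOpts := []grpc.DialOption{grpc.WithTransportCredentials(insecure.NewCredentials())}

	// Comma-separated endpoints; one connection is held at a time and created lazily.
	provider, err := prysmgrpc.NewGrpcConnectionProvider(ctx, "host1:4000,host2:4000", dialOpts)
	if err != nil {
		log.Fatal(err)
	}
	defer provider.Close()

	// Illustrative failover loop: try each configured host in turn until one
	// yields a usable connection.
	for i := range provider.Hosts() {
		if err := provider.SwitchHost(i); err != nil {
			log.Fatal(err)
		}
		if conn := provider.CurrentConn(); conn != nil {
			log.Printf("using %s", provider.CurrentHost())
			// ... build client stubs from conn ...
			break
		}
	}
}
```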

View File

@@ -0,0 +1,207 @@
package grpc
import (
"context"
"net"
"reflect"
"strings"
"testing"
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials/insecure"
)
func TestParseEndpoints(t *testing.T) {
tests := []struct {
name string
input string
expected []string
}{
{"single endpoint", "localhost:4000", []string{"localhost:4000"}},
{"multiple endpoints", "host1:4000,host2:4000,host3:4000", []string{"host1:4000", "host2:4000", "host3:4000"}},
{"endpoints with spaces", "host1:4000, host2:4000 , host3:4000", []string{"host1:4000", "host2:4000", "host3:4000"}},
{"empty string", "", nil},
{"only commas", ",,,", []string{}},
{"trailing comma", "host1:4000,host2:4000,", []string{"host1:4000", "host2:4000"}},
{"leading comma", ",host1:4000,host2:4000", []string{"host1:4000", "host2:4000"}},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := parseEndpoints(tt.input)
if !reflect.DeepEqual(tt.expected, got) {
t.Errorf("parseEndpoints(%q) = %v, want %v", tt.input, got, tt.expected)
}
})
}
}
func TestNewGrpcConnectionProvider_Errors(t *testing.T) {
t.Run("no endpoints", func(t *testing.T) {
dialOpts := []grpc.DialOption{grpc.WithTransportCredentials(insecure.NewCredentials())}
_, err := NewGrpcConnectionProvider(context.Background(), "", dialOpts)
require.ErrorContains(t, "no gRPC endpoints provided", err)
})
}
func TestGrpcConnectionProvider_LazyConnection(t *testing.T) {
// Start only one server but configure provider with two endpoints
lis, err := net.Listen("tcp", "127.0.0.1:0")
require.NoError(t, err)
server := grpc.NewServer()
go func() { _ = server.Serve(lis) }()
defer server.Stop()
validAddr := lis.Addr().String()
invalidAddr := "127.0.0.1:1" // Port 1 is unlikely to be listening
// Provider should succeed even though second endpoint is invalid (lazy connections)
endpoint := validAddr + "," + invalidAddr
ctx := context.Background()
dialOpts := []grpc.DialOption{grpc.WithTransportCredentials(insecure.NewCredentials())}
provider, err := NewGrpcConnectionProvider(ctx, endpoint, dialOpts)
require.NoError(t, err, "Provider creation should succeed with lazy connections")
defer func() { provider.Close() }()
// First endpoint should work
conn := provider.CurrentConn()
assert.NotNil(t, conn, "First connection should be created lazily")
}
func TestGrpcConnectionProvider_SingleConnectionModel(t *testing.T) {
// Create provider with 3 endpoints
var addrs []string
var servers []*grpc.Server
for range 3 {
lis, err := net.Listen("tcp", "127.0.0.1:0")
require.NoError(t, err)
server := grpc.NewServer()
go func() { _ = server.Serve(lis) }()
addrs = append(addrs, lis.Addr().String())
servers = append(servers, server)
}
defer func() {
for _, s := range servers {
s.Stop()
}
}()
endpoint := strings.Join(addrs, ",")
ctx := context.Background()
dialOpts := []grpc.DialOption{grpc.WithTransportCredentials(insecure.NewCredentials())}
provider, err := NewGrpcConnectionProvider(ctx, endpoint, dialOpts)
require.NoError(t, err)
defer func() { provider.Close() }()
// Access the internal state to verify single connection behavior
p := provider.(*grpcConnectionProvider)
// Initially no connection
p.mu.Lock()
assert.Equal(t, (*grpc.ClientConn)(nil), p.conn, "Connection should be nil before access")
p.mu.Unlock()
// Access connection - should create one
conn0 := provider.CurrentConn()
assert.NotNil(t, conn0)
p.mu.Lock()
assert.NotNil(t, p.conn, "Connection should be created after CurrentConn()")
firstConn := p.conn
p.mu.Unlock()
// Call CurrentConn again - should return same connection
conn0Again := provider.CurrentConn()
assert.Equal(t, conn0, conn0Again, "Should return same connection")
// Switch to different host - old connection should be closed, new one created lazily
require.NoError(t, provider.SwitchHost(1))
p.mu.Lock()
assert.Equal(t, (*grpc.ClientConn)(nil), p.conn, "Connection should be nil after SwitchHost (lazy)")
p.mu.Unlock()
// Get new connection
conn1 := provider.CurrentConn()
assert.NotNil(t, conn1)
assert.NotEqual(t, firstConn, conn1, "Should be a different connection after switching hosts")
}
// testProvider creates a provider with n test servers and returns cleanup function.
func testProvider(t *testing.T, n int) (GrpcConnectionProvider, []string, func()) {
var addrs []string
var cleanups []func()
for range n {
lis, err := net.Listen("tcp", "127.0.0.1:0")
require.NoError(t, err)
server := grpc.NewServer()
go func() { _ = server.Serve(lis) }()
addrs = append(addrs, lis.Addr().String())
cleanups = append(cleanups, server.Stop)
}
endpoint := strings.Join(addrs, ",")
ctx := context.Background()
dialOpts := []grpc.DialOption{grpc.WithTransportCredentials(insecure.NewCredentials())}
provider, err := NewGrpcConnectionProvider(ctx, endpoint, dialOpts)
require.NoError(t, err)
cleanup := func() {
provider.Close()
for _, c := range cleanups {
c()
}
}
return provider, addrs, cleanup
}
func TestGrpcConnectionProvider(t *testing.T) {
provider, addrs, cleanup := testProvider(t, 3)
defer cleanup()
t.Run("initial state", func(t *testing.T) {
assert.Equal(t, 3, len(provider.Hosts()))
assert.Equal(t, addrs[0], provider.CurrentHost())
assert.NotNil(t, provider.CurrentConn())
})
t.Run("SwitchHost", func(t *testing.T) {
require.NoError(t, provider.SwitchHost(1))
assert.Equal(t, addrs[1], provider.CurrentHost())
assert.NotNil(t, provider.CurrentConn()) // New connection created lazily
require.NoError(t, provider.SwitchHost(0))
assert.Equal(t, addrs[0], provider.CurrentHost())
require.ErrorContains(t, "invalid host index", provider.SwitchHost(-1))
require.ErrorContains(t, "invalid host index", provider.SwitchHost(3))
})
t.Run("SwitchHost circular", func(t *testing.T) {
// Test round-robin style switching using SwitchHost with manual index
indices := []int{1, 2, 0, 1} // Simulate circular switching
for i, idx := range indices {
require.NoError(t, provider.SwitchHost(idx))
assert.Equal(t, addrs[idx], provider.CurrentHost(), "iteration %d", i)
}
})
t.Run("Hosts returns copy", func(t *testing.T) {
hosts := provider.Hosts()
original := hosts[0]
hosts[0] = "modified"
assert.Equal(t, original, provider.Hosts()[0])
})
}
func TestGrpcConnectionProvider_Close(t *testing.T) {
provider, _, cleanup := testProvider(t, 1)
defer cleanup()
assert.NotNil(t, provider.CurrentConn())
provider.Close()
assert.Equal(t, (*grpc.ClientConn)(nil), provider.CurrentConn())
provider.Close() // Double close is safe
}

View File

@@ -0,0 +1,20 @@
package grpc
import "google.golang.org/grpc"
// MockGrpcProvider implements GrpcConnectionProvider for testing.
type MockGrpcProvider struct {
MockConn *grpc.ClientConn
MockHosts []string
}
func (m *MockGrpcProvider) CurrentConn() *grpc.ClientConn { return m.MockConn }
func (m *MockGrpcProvider) CurrentHost() string {
if len(m.MockHosts) > 0 {
return m.MockHosts[0]
}
return ""
}
func (m *MockGrpcProvider) Hosts() []string { return m.MockHosts }
func (m *MockGrpcProvider) SwitchHost(int) error { return nil }
func (m *MockGrpcProvider) Close() {}

api/rest/BUILD.bazel
View File

@@ -0,0 +1,34 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"log.go",
"mock_rest_provider.go",
"rest_connection_provider.go",
"rest_handler.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/api/rest",
visibility = ["//visibility:public"],
deps = [
"//api:go_default_library",
"//api/apiutil:go_default_library",
"//api/client:go_default_library",
"//config/params:go_default_library",
"//network/httputil:go_default_library",
"//runtime/version:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opentelemetry_go_contrib_instrumentation_net_http_otelhttp//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["rest_connection_provider_test.go"],
embed = [":go_default_library"],
deps = [
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
],
)

api/rest/log.go
View File

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package rest
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "api/rest")

View File

@@ -0,0 +1,49 @@
package rest
import (
"bytes"
"context"
"net/http"
)
// MockRestProvider implements RestConnectionProvider for testing.
type MockRestProvider struct {
MockClient *http.Client
MockHandler RestHandler
MockHosts []string
HostIndex int
}
func (m *MockRestProvider) HttpClient() *http.Client { return m.MockClient }
func (m *MockRestProvider) RestHandler() RestHandler { return m.MockHandler }
func (m *MockRestProvider) CurrentHost() string {
if len(m.MockHosts) > 0 {
return m.MockHosts[m.HostIndex%len(m.MockHosts)]
}
return ""
}
func (m *MockRestProvider) Hosts() []string { return m.MockHosts }
func (m *MockRestProvider) SwitchHost(index int) error { m.HostIndex = index; return nil }
// MockRestHandler implements RestHandler for testing.
type MockRestHandler struct {
MockHost string
MockClient *http.Client
}
func (m *MockRestHandler) Get(_ context.Context, _ string, _ any) error { return nil }
func (m *MockRestHandler) GetStatusCode(_ context.Context, _ string) (int, error) {
return http.StatusOK, nil
}
func (m *MockRestHandler) GetSSZ(_ context.Context, _ string) ([]byte, http.Header, error) {
return nil, nil, nil
}
func (m *MockRestHandler) Post(_ context.Context, _ string, _ map[string]string, _ *bytes.Buffer, _ any) error {
return nil
}
func (m *MockRestHandler) PostSSZ(_ context.Context, _ string, _ map[string]string, _ *bytes.Buffer) ([]byte, http.Header, error) {
return nil, nil, nil
}
func (m *MockRestHandler) HttpClient() *http.Client { return m.MockClient }
func (m *MockRestHandler) Host() string { return m.MockHost }
func (m *MockRestHandler) SwitchHost(host string) { m.MockHost = host }

View File

@@ -0,0 +1,158 @@
package rest
import (
"net/http"
"strings"
"sync/atomic"
"time"
"github.com/OffchainLabs/prysm/v7/api/client"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)
// RestConnectionProvider manages HTTP client configuration for REST API with failover support.
// It allows switching between different beacon node REST endpoints when the current one becomes unavailable.
type RestConnectionProvider interface {
// HttpClient returns the configured HTTP client with headers, timeout, and optional tracing.
HttpClient() *http.Client
// RestHandler returns the REST handler for making API requests.
RestHandler() RestHandler
// CurrentHost returns the current REST API endpoint URL.
CurrentHost() string
// Hosts returns all configured REST API endpoint URLs.
Hosts() []string
// SwitchHost switches to the endpoint at the given index.
SwitchHost(index int) error
}
// RestConnectionProviderOption is a functional option for configuring the REST connection provider.
type RestConnectionProviderOption func(*restConnectionProvider)
// WithHttpTimeout sets the HTTP client timeout.
func WithHttpTimeout(timeout time.Duration) RestConnectionProviderOption {
return func(p *restConnectionProvider) {
p.timeout = timeout
}
}
// WithHttpHeaders sets custom HTTP headers to include in all requests.
func WithHttpHeaders(headers map[string][]string) RestConnectionProviderOption {
return func(p *restConnectionProvider) {
p.headers = headers
}
}
// WithTracing enables OpenTelemetry tracing for HTTP requests.
func WithTracing() RestConnectionProviderOption {
return func(p *restConnectionProvider) {
p.enableTracing = true
}
}
type restConnectionProvider struct {
endpoints []string
httpClient *http.Client
restHandler RestHandler
currentIndex atomic.Uint64
timeout time.Duration
headers map[string][]string
enableTracing bool
}
// NewRestConnectionProvider creates a new REST connection provider that manages HTTP client configuration.
// The endpoint parameter can be a comma-separated list of URLs (e.g., "http://host1:3500,http://host2:3500").
func NewRestConnectionProvider(endpoint string, opts ...RestConnectionProviderOption) (RestConnectionProvider, error) {
endpoints := parseEndpoints(endpoint)
if len(endpoints) == 0 {
return nil, errors.New("no REST API endpoints provided")
}
p := &restConnectionProvider{
endpoints: endpoints,
}
for _, opt := range opts {
opt(p)
}
// Build the HTTP transport chain
var transport http.RoundTripper = http.DefaultTransport
// Add custom headers if configured
if len(p.headers) > 0 {
transport = client.NewCustomHeadersTransport(transport, p.headers)
}
// Add tracing if enabled
if p.enableTracing {
transport = otelhttp.NewTransport(transport)
}
p.httpClient = &http.Client{
Timeout: p.timeout,
Transport: transport,
}
// Create the REST handler with the HTTP client and initial host
p.restHandler = newRestHandler(*p.httpClient, endpoints[0])
log.WithFields(logrus.Fields{
"endpoints": endpoints,
"count": len(endpoints),
}).Info("Initialized REST connection provider")
return p, nil
}
// parseEndpoints splits a comma-separated endpoint string into individual endpoints.
func parseEndpoints(endpoint string) []string {
if endpoint == "" {
return nil
}
endpoints := make([]string, 0, 1)
for p := range strings.SplitSeq(endpoint, ",") {
if p = strings.TrimSpace(p); p != "" {
endpoints = append(endpoints, p)
}
}
return endpoints
}
func (p *restConnectionProvider) HttpClient() *http.Client {
return p.httpClient
}
func (p *restConnectionProvider) RestHandler() RestHandler {
return p.restHandler
}
func (p *restConnectionProvider) CurrentHost() string {
return p.endpoints[p.currentIndex.Load()]
}
func (p *restConnectionProvider) Hosts() []string {
// Return a copy to maintain immutability
hosts := make([]string, len(p.endpoints))
copy(hosts, p.endpoints)
return hosts
}
func (p *restConnectionProvider) SwitchHost(index int) error {
if index < 0 || index >= len(p.endpoints) {
return errors.Errorf("invalid host index %d, must be between 0 and %d", index, len(p.endpoints)-1)
}
oldIdx := p.currentIndex.Load()
p.currentIndex.Store(uint64(index))
// Update the rest handler's host
p.restHandler.SwitchHost(p.endpoints[index])
log.WithFields(logrus.Fields{
"previousHost": p.endpoints[oldIdx],
"newHost": p.endpoints[index],
}).Debug("Switched REST endpoint")
return nil
}
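
A minimal sketch of how a caller might configure this provider and fail over;
the endpoints, headers, and timeout below are placeholders:

```go
package main

import (
	"log"
	"time"

	"github.com/OffchainLabs/prysm/v7/api/rest"
)

func main() {
	provider, err := rest.NewRestConnectionProvider(
		"http://host1:3500,http://host2:3500",
		rest.WithHttpTimeout(30*time.Second),
		rest.WithHttpHeaders(map[string][]string{"Authorization": {"Bearer token"}}),
	)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("current host: %s", provider.CurrentHost())

	// The shared RestHandler tracks the provider's current host.
	handler := provider.RestHandler()

	// On failure of the current endpoint, a caller could fail over like this;
	// SwitchHost also repoints the handler at the new host.
	if err := provider.SwitchHost(1); err != nil {
		log.Fatal(err)
	}
	log.Printf("handler now targets: %s", handler.Host())
}
```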

View File

@@ -0,0 +1,80 @@
package rest
import (
"reflect"
"testing"
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
)
func TestParseEndpoints(t *testing.T) {
tests := []struct {
name string
input string
expected []string
}{
{"single endpoint", "http://localhost:3500", []string{"http://localhost:3500"}},
{"multiple endpoints", "http://host1:3500,http://host2:3500,http://host3:3500", []string{"http://host1:3500", "http://host2:3500", "http://host3:3500"}},
{"endpoints with spaces", "http://host1:3500, http://host2:3500 , http://host3:3500", []string{"http://host1:3500", "http://host2:3500", "http://host3:3500"}},
{"empty string", "", nil},
{"only commas", ",,,", []string{}},
{"trailing comma", "http://host1:3500,http://host2:3500,", []string{"http://host1:3500", "http://host2:3500"}},
{"leading comma", ",http://host1:3500,http://host2:3500", []string{"http://host1:3500", "http://host2:3500"}},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := parseEndpoints(tt.input)
if !reflect.DeepEqual(tt.expected, got) {
t.Errorf("parseEndpoints(%q) = %v, want %v", tt.input, got, tt.expected)
}
})
}
}
func TestNewRestConnectionProvider_Errors(t *testing.T) {
t.Run("no endpoints", func(t *testing.T) {
_, err := NewRestConnectionProvider("")
require.ErrorContains(t, "no REST API endpoints provided", err)
})
}
func TestRestConnectionProvider(t *testing.T) {
provider, err := NewRestConnectionProvider("http://host1:3500,http://host2:3500,http://host3:3500")
require.NoError(t, err)
t.Run("initial state", func(t *testing.T) {
assert.Equal(t, 3, len(provider.Hosts()))
assert.Equal(t, "http://host1:3500", provider.CurrentHost())
assert.NotNil(t, provider.HttpClient())
})
t.Run("SwitchHost", func(t *testing.T) {
require.NoError(t, provider.SwitchHost(1))
assert.Equal(t, "http://host2:3500", provider.CurrentHost())
require.NoError(t, provider.SwitchHost(0))
assert.Equal(t, "http://host1:3500", provider.CurrentHost())
require.ErrorContains(t, "invalid host index", provider.SwitchHost(-1))
require.ErrorContains(t, "invalid host index", provider.SwitchHost(3))
})
t.Run("Hosts returns copy", func(t *testing.T) {
hosts := provider.Hosts()
original := hosts[0]
hosts[0] = "modified"
assert.Equal(t, original, provider.Hosts()[0])
})
}
func TestRestConnectionProvider_WithOptions(t *testing.T) {
headers := map[string][]string{"Authorization": {"Bearer token"}}
provider, err := NewRestConnectionProvider(
"http://localhost:3500",
WithHttpHeaders(headers),
WithHttpTimeout(30000000000), // 30 seconds in nanoseconds
WithTracing(),
)
require.NoError(t, err)
assert.NotNil(t, provider.HttpClient())
assert.Equal(t, "http://localhost:3500", provider.CurrentHost())
}

View File

@@ -1,4 +1,4 @@
package beacon_api
package rest
import (
"bytes"
@@ -21,6 +21,7 @@ import (
type reqOption func(*http.Request)
// RestHandler defines the interface for making REST API requests.
type RestHandler interface {
Get(ctx context.Context, endpoint string, resp any) error
GetStatusCode(ctx context.Context, endpoint string) (int, error)
@@ -29,29 +30,34 @@ type RestHandler interface {
PostSSZ(ctx context.Context, endpoint string, headers map[string]string, data *bytes.Buffer) ([]byte, http.Header, error)
HttpClient() *http.Client
Host() string
SetHost(host string)
SwitchHost(host string)
}
type BeaconApiRestHandler struct {
type restHandler struct {
client http.Client
host string
reqOverrides []reqOption
}
// NewBeaconApiRestHandler returns a RestHandler
func NewBeaconApiRestHandler(client http.Client, host string) RestHandler {
brh := &BeaconApiRestHandler{
// newRestHandler returns a RestHandler (internal use)
func newRestHandler(client http.Client, host string) RestHandler {
return NewRestHandler(client, host)
}
// NewRestHandler returns a RestHandler
func NewRestHandler(client http.Client, host string) RestHandler {
rh := &restHandler{
client: client,
host: host,
}
brh.appendAcceptOverride()
return brh
rh.appendAcceptOverride()
return rh
}
// appendAcceptOverride enables the Accept header to be customized at runtime via an environment variable.
// This is specified as an env var because it is a niche option that prysm may use for performance testing or debugging,
// which users are unlikely to need. Using an env var keeps the set of user-facing flags cleaner.
func (c *BeaconApiRestHandler) appendAcceptOverride() {
func (c *restHandler) appendAcceptOverride() {
if accept := os.Getenv(params.EnvNameOverrideAccept); accept != "" {
c.reqOverrides = append(c.reqOverrides, func(req *http.Request) {
req.Header.Set("Accept", accept)
@@ -60,18 +66,18 @@ func (c *BeaconApiRestHandler) appendAcceptOverride() {
}
// HttpClient returns the underlying HTTP client of the handler
func (c *BeaconApiRestHandler) HttpClient() *http.Client {
func (c *restHandler) HttpClient() *http.Client {
return &c.client
}
// Host returns the underlying HTTP host
func (c *BeaconApiRestHandler) Host() string {
func (c *restHandler) Host() string {
return c.host
}
// Get sends a GET request and decodes the response body as a JSON object into the passed in object.
// If an HTTP error is returned, the body is decoded as a DefaultJsonError JSON object and returned as the first return value.
func (c *BeaconApiRestHandler) Get(ctx context.Context, endpoint string, resp any) error {
func (c *restHandler) Get(ctx context.Context, endpoint string, resp any) error {
url := c.host + endpoint
req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
if err != nil {
@@ -94,7 +100,7 @@ func (c *BeaconApiRestHandler) Get(ctx context.Context, endpoint string, resp an
// GetStatusCode sends a GET request and returns only the HTTP status code.
// This is useful for endpoints like /eth/v1/node/health that communicate status via HTTP codes
// (200 = ready, 206 = syncing, 503 = unavailable) rather than response bodies.
func (c *BeaconApiRestHandler) GetStatusCode(ctx context.Context, endpoint string) (int, error) {
func (c *restHandler) GetStatusCode(ctx context.Context, endpoint string) (int, error) {
url := c.host + endpoint
req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
if err != nil {
@@ -113,7 +119,7 @@ func (c *BeaconApiRestHandler) GetStatusCode(ctx context.Context, endpoint strin
return httpResp.StatusCode, nil
}
func (c *BeaconApiRestHandler) GetSSZ(ctx context.Context, endpoint string) ([]byte, http.Header, error) {
func (c *restHandler) GetSSZ(ctx context.Context, endpoint string) ([]byte, http.Header, error) {
url := c.host + endpoint
req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
if err != nil {
@@ -168,7 +174,7 @@ func (c *BeaconApiRestHandler) GetSSZ(ctx context.Context, endpoint string) ([]b
// Post sends a POST request and decodes the response body as a JSON object into the passed in object.
// If an HTTP error is returned, the body is decoded as a DefaultJsonError JSON object and returned as the first return value.
func (c *BeaconApiRestHandler) Post(
func (c *restHandler) Post(
ctx context.Context,
apiEndpoint string,
headers map[string]string,
@@ -204,7 +210,7 @@ func (c *BeaconApiRestHandler) Post(
}
// PostSSZ sends a POST request and prefers an SSZ (application/octet-stream) response body.
func (c *BeaconApiRestHandler) PostSSZ(
func (c *restHandler) PostSSZ(
ctx context.Context,
apiEndpoint string,
headers map[string]string,
@@ -305,6 +311,6 @@ func decodeResp(httpResp *http.Response, resp any) error {
return nil
}
func (c *BeaconApiRestHandler) SetHost(host string) {
func (c *restHandler) SwitchHost(host string) {
c.host = host
}

View File

@@ -48,6 +48,7 @@ go_test(
"@com_github_ethereum_go_ethereum//crypto:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@org_golang_google_grpc//:go_default_library",
"@org_golang_google_grpc//metadata:go_default_library",
"@org_golang_google_grpc//reflection:go_default_library",
"@org_golang_google_protobuf//types/known/emptypb:go_default_library",
"@org_golang_google_protobuf//types/known/timestamppb:go_default_library",

View File

@@ -35,18 +35,19 @@ import (
// providing RPC endpoints for verifying a beacon node's sync status, genesis and
// version information, and services the node implements and runs.
type Server struct {
LogsStreamer logs.Streamer
StreamLogsBufferSize int
SyncChecker sync.Checker
Server *grpc.Server
BeaconDB db.ReadOnlyDatabase
PeersFetcher p2p.PeersProvider
PeerManager p2p.PeerManager
GenesisTimeFetcher blockchain.TimeFetcher
GenesisFetcher blockchain.GenesisFetcher
POWChainInfoFetcher execution.ChainInfoFetcher
BeaconMonitoringHost string
BeaconMonitoringPort int
LogsStreamer logs.Streamer
StreamLogsBufferSize int
SyncChecker sync.Checker
Server *grpc.Server
BeaconDB db.ReadOnlyDatabase
PeersFetcher p2p.PeersProvider
PeerManager p2p.PeerManager
GenesisTimeFetcher blockchain.TimeFetcher
GenesisFetcher blockchain.GenesisFetcher
POWChainInfoFetcher execution.ChainInfoFetcher
BeaconMonitoringHost string
BeaconMonitoringPort int
OptimisticModeFetcher blockchain.OptimisticModeFetcher
}
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.
@@ -61,21 +62,28 @@ func (ns *Server) GetHealth(ctx context.Context, request *ethpb.HealthRequest) (
ctx, cancel := context.WithTimeout(ctx, timeoutDuration)
defer cancel() // Important to avoid a context leak
if ns.SyncChecker.Synced() {
// Check optimistic status - validators should not participate when optimistic
isOptimistic, err := ns.OptimisticModeFetcher.IsOptimistic(ctx)
if err != nil {
return &empty.Empty{}, status.Errorf(codes.Internal, "Could not check optimistic status: %v", err)
}
if ns.SyncChecker.Synced() && !isOptimistic {
return &empty.Empty{}, nil
}
if ns.SyncChecker.Syncing() || ns.SyncChecker.Initialized() {
if request.SyncingStatus != 0 {
// override the 200 success with the provided request status
if err := grpc.SetHeader(ctx, metadata.Pairs("x-http-code", strconv.FormatUint(request.SyncingStatus, 10))); err != nil {
return &empty.Empty{}, status.Errorf(codes.Internal, "Could not set custom success code header: %v", err)
}
return &empty.Empty{}, nil
}
// Set header for REST API clients (via gRPC-gateway)
if err := grpc.SetHeader(ctx, metadata.Pairs("x-http-code", strconv.FormatUint(http.StatusPartialContent, 10))); err != nil {
return &empty.Empty{}, status.Errorf(codes.Internal, "Could not set custom success code header: %v", err)
return &empty.Empty{}, status.Errorf(codes.Internal, "Could not set status code header: %v", err)
}
return &empty.Empty{}, nil
return &empty.Empty{}, status.Error(codes.Unavailable, "node is syncing")
}
if isOptimistic {
// Set header for REST API clients (via gRPC-gateway)
if err := grpc.SetHeader(ctx, metadata.Pairs("x-http-code", strconv.FormatUint(http.StatusPartialContent, 10))); err != nil {
return &empty.Empty{}, status.Errorf(codes.Internal, "Could not set status code header: %v", err)
}
return &empty.Empty{}, status.Error(codes.Unavailable, "node is optimistic")
}
return &empty.Empty{}, status.Errorf(codes.Unavailable, "service unavailable")
}

View File

@@ -2,6 +2,7 @@ package node
import (
"errors"
"maps"
"testing"
"time"
@@ -21,6 +22,7 @@ import (
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p/enode"
"google.golang.org/grpc"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/reflection"
"google.golang.org/protobuf/types/known/emptypb"
"google.golang.org/protobuf/types/known/timestamppb"
@@ -187,32 +189,71 @@ func TestNodeServer_GetETH1ConnectionStatus(t *testing.T) {
assert.Equal(t, errStr, res.CurrentConnectionError)
}
// mockServerTransportStream implements grpc.ServerTransportStream for testing
type mockServerTransportStream struct {
headers map[string][]string
}
func (m *mockServerTransportStream) Method() string { return "" }
func (m *mockServerTransportStream) SetHeader(md metadata.MD) error {
maps.Copy(m.headers, md)
return nil
}
func (m *mockServerTransportStream) SendHeader(metadata.MD) error { return nil }
func (m *mockServerTransportStream) SetTrailer(metadata.MD) error { return nil }
func TestNodeServer_GetHealth(t *testing.T) {
tests := []struct {
name string
input *mockSync.Sync
customStatus uint64
isOptimistic bool
wantedErr string
}{
{
name: "happy path",
input: &mockSync.Sync{IsSyncing: false, IsSynced: true},
name: "happy path - synced and not optimistic",
input: &mockSync.Sync{IsSyncing: false, IsSynced: true},
isOptimistic: false,
},
{
name: "syncing",
input: &mockSync.Sync{IsSyncing: false},
wantedErr: "service unavailable",
name: "returns error when not synced and not syncing",
input: &mockSync.Sync{IsSyncing: false, IsSynced: false},
isOptimistic: false,
wantedErr: "service unavailable",
},
{
name: "returns error when syncing",
input: &mockSync.Sync{IsSyncing: true, IsSynced: false},
isOptimistic: false,
wantedErr: "node is syncing",
},
{
name: "returns error when synced but optimistic",
input: &mockSync.Sync{IsSyncing: false, IsSynced: true},
isOptimistic: true,
wantedErr: "node is optimistic",
},
{
name: "returns error when syncing and optimistic",
input: &mockSync.Sync{IsSyncing: true, IsSynced: false},
isOptimistic: true,
wantedErr: "node is syncing",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
server := grpc.NewServer()
ns := &Server{
SyncChecker: tt.input,
SyncChecker: tt.input,
OptimisticModeFetcher: &mock.ChainService{Optimistic: tt.isOptimistic},
}
ethpb.RegisterNodeServer(server, ns)
reflection.Register(server)
_, err := ns.GetHealth(t.Context(), &ethpb.HealthRequest{SyncingStatus: tt.customStatus})
// Create context with mock transport stream so grpc.SetHeader works
stream := &mockServerTransportStream{headers: make(map[string][]string)}
ctx := grpc.NewContextWithServerTransportStream(t.Context(), stream)
_, err := ns.GetHealth(ctx, &ethpb.HealthRequest{})
if tt.wantedErr == "" {
require.NoError(t, err)
return

View File

@@ -259,18 +259,19 @@ func NewService(ctx context.Context, cfg *Config) *Service {
}
s.validatorServer = validatorServer
nodeServer := &nodev1alpha1.Server{
LogsStreamer: logs.NewStreamServer(),
StreamLogsBufferSize: 1000, // Enough to handle bursts of beacon node logs for gRPC streaming.
BeaconDB: s.cfg.BeaconDB,
Server: s.grpcServer,
SyncChecker: s.cfg.SyncService,
GenesisTimeFetcher: s.cfg.GenesisTimeFetcher,
PeersFetcher: s.cfg.PeersFetcher,
PeerManager: s.cfg.PeerManager,
GenesisFetcher: s.cfg.GenesisFetcher,
POWChainInfoFetcher: s.cfg.ExecutionChainInfoFetcher,
BeaconMonitoringHost: s.cfg.BeaconMonitoringHost,
BeaconMonitoringPort: s.cfg.BeaconMonitoringPort,
LogsStreamer: logs.NewStreamServer(),
StreamLogsBufferSize: 1000, // Enough to handle bursts of beacon node logs for gRPC streaming.
BeaconDB: s.cfg.BeaconDB,
Server: s.grpcServer,
SyncChecker: s.cfg.SyncService,
GenesisTimeFetcher: s.cfg.GenesisTimeFetcher,
PeersFetcher: s.cfg.PeersFetcher,
PeerManager: s.cfg.PeerManager,
GenesisFetcher: s.cfg.GenesisFetcher,
POWChainInfoFetcher: s.cfg.ExecutionChainInfoFetcher,
BeaconMonitoringHost: s.cfg.BeaconMonitoringHost,
BeaconMonitoringPort: s.cfg.BeaconMonitoringPort,
OptimisticModeFetcher: s.cfg.OptimisticModeFetcher,
}
beaconChainServer := &beaconv1alpha1.Server{
Ctx: s.ctx,

View File

@@ -0,0 +1,6 @@
### Added
- Added a new proofCollector type to ssz-query
### Ignored
- Added tests covering Merkle proof generation from a Phase0 beacon state, benchmarked against a real Hoodi beacon state (Fulu version)

View File

@@ -0,0 +1,7 @@
### Changed
- gRPC fallback now matches the REST API implementation and will only connect to synced nodes.
### Removed
- gRPC resolver for load balancing; the new implementation matches the REST API's, so the resolver is removed and fallback is handled the same way for consistency.

View File

@@ -0,0 +1,3 @@
### Changed
- The gRPC health endpoint now returns an error when the node is syncing or optimistic, indicating that it is unavailable.

View File

@@ -163,3 +163,18 @@ func Uint256ToSSZBytes(num string) ([]byte, error) {
}
return PadTo(ReverseByteOrder(uint256.Bytes()), 32), nil
}
// PutLittleEndian writes an unsigned integer value in little-endian format.
// Supports sizes 1, 2, 4, or 8 bytes for uint8/16/32/64 respectively.
func PutLittleEndian(dst []byte, val uint64, size int) {
switch size {
case 1:
dst[0] = byte(val)
case 2:
binary.LittleEndian.PutUint16(dst, uint16(val))
case 4:
binary.LittleEndian.PutUint32(dst, uint32(val))
case 8:
binary.LittleEndian.PutUint64(dst, val)
}
}
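
A small usage sketch, assuming this helper lands in the `bytesutil` package
(as the new `//encoding/bytesutil` dependency in the query BUILD file
suggests): pack a uint64 basic value into a 32-byte SSZ chunk.

```go
package main

import (
	"fmt"

	"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
)

func main() {
	// Serialize a uint64 into the first 8 bytes of a zero-padded 32-byte chunk.
	chunk := make([]byte, 32)
	bytesutil.PutLittleEndian(chunk[:8], 12345, 8)
	fmt.Printf("%x\n", chunk)
}
```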

View File

@@ -9,7 +9,9 @@ go_library(
"container.go",
"generalized_index.go",
"list.go",
"merkle_proof.go",
"path.go",
"proof_collector.go",
"query.go",
"ssz_info.go",
"ssz_object.go",
@@ -20,7 +22,12 @@ go_library(
importpath = "github.com/OffchainLabs/prysm/v7/encoding/ssz/query",
visibility = ["//visibility:public"],
deps = [
"//container/trie:go_default_library",
"//crypto/hash/htr:go_default_library",
"//encoding/bytesutil:go_default_library",
"//encoding/ssz:go_default_library",
"//math:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
],
)
@@ -29,15 +36,24 @@ go_test(
name = "go_default_test",
srcs = [
"generalized_index_test.go",
"merkle_proof_test.go",
"path_test.go",
"proof_collector_test.go",
"query_test.go",
"tag_parser_test.go",
],
embed = [":go_default_library"],
deps = [
":go_default_library",
"//beacon-chain/state/stateutil:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/ssz:go_default_library",
"//encoding/ssz/query/testutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/ssz_query/testing:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
],
)

View File

@@ -0,0 +1,34 @@
package query
import (
"fmt"
"reflect"
fastssz "github.com/prysmaticlabs/fastssz"
)
// Prove is the entrypoint to generate an SSZ Merkle proof for the given generalized index.
// Parameters:
// - gindex: the generalized index of the node to prove inclusion for.
// Returns:
// - fastssz.Proof: the Merkle proof containing the leaf, index, and sibling hashes.
// - error: any error encountered during proof generation.
func (info *SszInfo) Prove(gindex uint64) (*fastssz.Proof, error) {
if info == nil {
return nil, fmt.Errorf("nil SszInfo")
}
collector := newProofCollector()
collector.addTarget(gindex)
// info.source is guaranteed to be valid and dereferenced by AnalyzeObject
v := reflect.ValueOf(info.source).Elem()
// Start the merkleization and proof collection process.
// In SSZ generalized indices, the root is always at index 1.
if _, err := collector.merkleize(info, v, 1); err != nil {
return nil, err
}
return collector.toProof()
}

View File

@@ -0,0 +1,163 @@
package query_test
import (
"testing"
"github.com/OffchainLabs/go-bitfield"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/ssz/query"
eth "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
ssz "github.com/prysmaticlabs/fastssz"
)
func TestProve_FixedTestContainer(t *testing.T) {
obj := createFixedTestContainer()
tests := []string{
".field_uint32",
".nested.value2",
".vector_field[3]",
".bitvector64_field",
".trailing_field",
}
for _, tc := range tests {
t.Run(tc, func(t *testing.T) {
proveAndVerify(t, obj, tc)
})
}
}
func TestProve_VariableTestContainer(t *testing.T) {
obj := createVariableTestContainer()
tests := []string{
".leading_field",
".field_list_uint64[2]",
"len(field_list_uint64)",
".nested.nested_list_field[1]",
".variable_container_list[0].inner_1.field_list_uint64[1]",
}
for _, tc := range tests {
t.Run(tc, func(t *testing.T) {
proveAndVerify(t, obj, tc)
})
}
}
func TestProve_BeaconBlock(t *testing.T) {
randaoReveal := make([]byte, 96)
for i := range randaoReveal {
randaoReveal[i] = 0x42
}
root32 := make([]byte, 32)
for i := range root32 {
root32[i] = 0x24
}
sig := make([]byte, 96)
for i := range sig {
sig[i] = 0x99
}
att := &eth.Attestation{
AggregationBits: bitfield.Bitlist{0x01},
Data: &eth.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: root32,
Source: &eth.Checkpoint{
Epoch: 1,
Root: root32,
},
Target: &eth.Checkpoint{
Epoch: 1,
Root: root32,
},
},
Signature: sig,
}
b := util.NewBeaconBlock()
b.Block.Slot = 123
b.Block.Body.RandaoReveal = randaoReveal
b.Block.Body.Attestations = []*eth.Attestation{att}
sb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
protoBlock, err := sb.Block().Proto()
require.NoError(t, err)
obj, ok := protoBlock.(query.SSZObject)
require.Equal(t, true, ok, "block proto does not implement query.SSZObject")
tests := []string{
".slot",
".body.randao_reveal",
".body.attestations[0].data.slot",
"len(body.attestations)",
}
for _, tc := range tests {
t.Run(tc, func(t *testing.T) {
proveAndVerify(t, obj, tc)
})
}
}
func TestProve_BeaconState(t *testing.T) {
st, _ := util.DeterministicGenesisState(t, 16)
require.NoError(t, st.SetSlot(primitives.Slot(42)))
sszObj, ok := st.ToProtoUnsafe().(query.SSZObject)
require.Equal(t, true, ok, "state proto does not implement query.SSZObject")
tests := []string{
".slot",
".latest_block_header",
".validators[0].effective_balance",
"len(validators)",
}
for _, tc := range tests {
t.Run(tc, func(t *testing.T) {
proveAndVerify(t, sszObj, tc)
})
}
}
// proveAndVerify helper to analyze an object, generate a merkle proof for the given path,
// and verify the proof against the object's root.
func proveAndVerify(t *testing.T, obj query.SSZObject, pathStr string) {
t.Helper()
info, err := query.AnalyzeObject(obj)
require.NoError(t, err)
path, err := query.ParsePath(pathStr)
require.NoError(t, err)
gi, err := query.GetGeneralizedIndexFromPath(info, path)
require.NoError(t, err)
proof, err := info.Prove(gi)
require.NoError(t, err)
require.Equal(t, int(gi), proof.Index)
root, err := obj.HashTreeRoot()
require.NoError(t, err)
ok, err := ssz.VerifyProof(root[:], proof)
require.NoError(t, err)
require.Equal(t, true, ok, "merkle proof verification failed")
require.Equal(t, 32, len(proof.Leaf))
for i, h := range proof.Hashes {
require.Equal(t, 32, len(h), "proof hash %d is not 32 bytes", i)
}
}

View File

@@ -0,0 +1,672 @@
package query
import (
"encoding/binary"
"errors"
"fmt"
"math/bits"
"reflect"
"runtime"
"slices"
"sync"
"github.com/OffchainLabs/go-bitfield"
"github.com/OffchainLabs/prysm/v7/container/trie"
"github.com/OffchainLabs/prysm/v7/crypto/hash/htr"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
ssz "github.com/OffchainLabs/prysm/v7/encoding/ssz"
"github.com/OffchainLabs/prysm/v7/math"
fastssz "github.com/prysmaticlabs/fastssz"
)
// proofCollector collects sibling hashes and leaves needed for Merkle proofs.
//
// Multiproof-ready design:
// - requiredSiblings/requiredLeaves store which gindices we want to collect (registered before merkleization).
// - siblings/leaves store the actual collected hashes.
//
// Concurrency:
// - required* maps are read-only during merkleization.
// - siblings/leaves writes are protected by mutex.
type proofCollector struct {
sync.Mutex
// Required gindices (registered before merkleization)
requiredSiblings map[uint64]struct{}
requiredLeaves map[uint64]struct{}
// Collected hashes
siblings map[uint64][32]byte
leaves map[uint64][32]byte
}
func newProofCollector() *proofCollector {
return &proofCollector{
requiredSiblings: make(map[uint64]struct{}),
requiredLeaves: make(map[uint64]struct{}),
siblings: make(map[uint64][32]byte),
leaves: make(map[uint64][32]byte),
}
}
func (pc *proofCollector) reset() {
pc.Lock()
defer pc.Unlock()
pc.requiredSiblings = make(map[uint64]struct{})
pc.requiredLeaves = make(map[uint64]struct{})
pc.siblings = make(map[uint64][32]byte)
pc.leaves = make(map[uint64][32]byte)
}
// addTarget registers the target leaf and its required sibling nodes for proof construction.
// Registration should happen before merkleization begins.
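//
// For example, addTarget(5) registers leaf 5 and siblings 4 (5^1) and 3 (2^1),
// since the path from gindex 5 to the root walks 5 -> 2 -> 1.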
func (pc *proofCollector) addTarget(gindex uint64) {
pc.Lock()
defer pc.Unlock()
pc.requiredLeaves[gindex] = struct{}{}
// Walk from the target leaf up to (but not including) the root (gindex=1).
// At each step, register the sibling node required to prove inclusion.
nodeGindex := gindex
for nodeGindex > 1 {
siblingGindex := nodeGindex ^ 1 // flip the last bit: left<->right sibling
pc.requiredSiblings[siblingGindex] = struct{}{}
// Move to parent
nodeGindex /= 2
}
}
// toProof converts the collected siblings and leaves into a fastssz.Proof structure.
// Current behavior expects a single target leaf (single proof).
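//
// Example (single proof for target gindex 5), assuming merkleization has already run and
// the collector has seen gindices 5, 4 and 3:
//
// pc := newProofCollector()
// pc.addTarget(5)
// // ... merkleize the object ...
// proof, err := pc.toProof() // proof.Index == 5; proof.Hashes are ordered leaf-to-root: sibling 4, then sibling 3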
func (pc *proofCollector) toProof() (*fastssz.Proof, error) {
pc.Lock()
defer pc.Unlock()
proof := &fastssz.Proof{}
if len(pc.leaves) == 0 {
return nil, errors.New("no leaves collected: add target leaves before merkleization")
}
leafGindices := make([]uint64, 0, len(pc.leaves))
for g := range pc.leaves {
leafGindices = append(leafGindices, g)
}
slices.Sort(leafGindices)
// For a single proof, the target gindex is leafGindices[0] (the smallest collected leaf).
targetGindex := leafGindices[0]
proofIndex, err := math.Int(targetGindex)
if err != nil {
return nil, fmt.Errorf("gindex %d overflows int: %w", targetGindex, err)
}
proof.Index = proofIndex
// store the leaf
leaf := pc.leaves[targetGindex]
leafBuf := make([]byte, 32)
copy(leafBuf, leaf[:])
proof.Leaf = leafBuf
// Walk from target up to root, collecting siblings.
steps := bits.Len64(targetGindex) - 1
proof.Hashes = make([][]byte, 0, steps)
for targetGindex > 1 {
sib := targetGindex ^ 1
h, ok := pc.siblings[sib]
if !ok {
return nil, fmt.Errorf("missing sibling hash for gindex %d", sib)
}
proof.Hashes = append(proof.Hashes, h[:])
targetGindex /= 2
}
return proof, nil
}
// collectLeaf checks if the given gindex is a required leaf for the proof,
// and if so, stores the provided leaf hash in the collector.
func (pc *proofCollector) collectLeaf(gindex uint64, leaf [32]byte) {
if _, ok := pc.requiredLeaves[gindex]; !ok {
return
}
pc.Lock()
pc.leaves[gindex] = leaf
pc.Unlock()
}
// collectSibling stores the hash for a sibling node identified by gindex.
// It only stores the hash if gindex was pre-registered via addTarget (present in requiredSiblings).
// Writes to the collected siblings map are protected by the collector mutex.
func (pc *proofCollector) collectSibling(gindex uint64, hash [32]byte) {
if _, ok := pc.requiredSiblings[gindex]; !ok {
return
}
pc.Lock()
pc.siblings[gindex] = hash
pc.Unlock()
}
// Merkleizers and proof collection methods
// merkleize recursively traverses an SSZ info and computes the Merkle root of the subtree.
//
// Proof collection:
// - During traversal it calls collectLeaf/collectSibling with the SSZ generalized indices (gindices)
// of visited nodes.
// - The collector only stores hashes for gindices that were pre-registered via addTarget
// (requiredLeaves/requiredSiblings). This makes the traversal multiproof-ready: you can register
// multiple targets before calling merkleize.
//
// SSZ types handled: basic types, containers, lists, vectors, bitlists, and bitvectors.
//
// Parameters:
// - info: SSZ type metadata for the current value.
// - v: reflect.Value of the current value.
// - currentGindex: generalized index of the current subtree root.
//
// Returns:
// - [32]byte: Merkle root of the current subtree.
// - error: any error encountered during traversal/merkleization.
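//
// Callers typically register targets via addTarget first and start traversal at the object
// root, e.g. pc.merkleize(info, reflect.ValueOf(obj), 1).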
func (pc *proofCollector) merkleize(info *SszInfo, v reflect.Value, currentGindex uint64) ([32]byte, error) {
if info.sszType.isBasic() {
return pc.merkleizeBasicType(info.sszType, v, currentGindex)
}
switch info.sszType {
case Container:
return pc.merkleizeContainer(info, v, currentGindex)
case List:
return pc.merkleizeList(info, v, currentGindex)
case Vector:
return pc.merkleizeVector(info, v, currentGindex)
case Bitlist:
return pc.merkleizeBitlist(info, v, currentGindex)
case Bitvector:
return pc.merkleizeBitvector(info, v, currentGindex)
default:
return [32]byte{}, fmt.Errorf("unsupported SSZ type: %v", info.sszType)
}
}
// merkleizeBasicType serializes a basic SSZ value into a 32-byte leaf chunk (little-endian, zero-padded).
//
// Proof collection:
// - It calls collectLeaf(currentGindex, leaf) and stores the leaf if currentGindex was pre-registered via addTarget.
//
// Parameters:
// - t: the SSZType (basic).
// - v: the reflect.Value of the basic value.
// - currentGindex: the generalized index (gindex) of this leaf.
//
// Returns:
// - [32]byte: the 32-byte SSZ leaf chunk.
// - error: if the SSZType is not a supported basic type.
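//
// For example, a Uint64 value 0x8877665544332211 produces a leaf whose first eight bytes are
// 0x11, 0x22, ..., 0x88 (little-endian) followed by 24 zero bytes.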
func (pc *proofCollector) merkleizeBasicType(t SSZType, v reflect.Value, currentGindex uint64) ([32]byte, error) {
var leaf [32]byte
// Serialize the value into a 32-byte chunk (little-endian, zero-padded)
switch t {
case Uint8:
leaf[0] = uint8(v.Uint())
case Uint16:
binary.LittleEndian.PutUint16(leaf[:2], uint16(v.Uint()))
case Uint32:
binary.LittleEndian.PutUint32(leaf[:4], uint32(v.Uint()))
case Uint64:
binary.LittleEndian.PutUint64(leaf[:8], v.Uint())
case Boolean:
if v.Bool() {
leaf[0] = 1
}
default:
return [32]byte{}, fmt.Errorf("unexpected basic type: %v", t)
}
pc.collectLeaf(currentGindex, leaf)
return leaf, nil
}
// merkleizeContainer computes the Merkle root of an SSZ container by:
// 1. Merkleizing each field into a 32-byte subtree root
// 2. Merkleizing the field roots into the container root (padding to the next power-of-2)
//
// Generalized indices (gindices): depth = ssz.Depth(uint64(N)) and field i has gindex = (currentGindex << depth) + uint64(i).
// Proof collection: merkleize() computes each field root, merkleizeVectorAndCollect collects required siblings, and collectLeaf stores the container root if registered.
//
// Parameters:
// - info: SSZ type metadata for the container.
// - v: reflect.Value of the container value.
// - currentGindex: generalized index (gindex) of the container root.
//
// Returns:
// - [32]byte: Merkle root of the container.
// - error: any error encountered while merkleizing fields.
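//
// For example, an 8-field container has depth 3, so field i=5 of a container rooted at
// gindex 2 gets gindex (2<<3)+5 = 21.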
func (pc *proofCollector) merkleizeContainer(info *SszInfo, v reflect.Value, currentGindex uint64) ([32]byte, error) {
// If the container root itself is the target, compute directly and return early.
// This avoids full subtree merkleization when we only need the root.
if _, ok := pc.requiredLeaves[currentGindex]; ok {
root, err := info.HashTreeRoot()
if err != nil {
return [32]byte{}, err
}
pc.collectLeaf(currentGindex, root)
return root, nil
}
ci, err := info.ContainerInfo()
if err != nil {
return [32]byte{}, err
}
v = dereferencePointer(v)
// Calculate depth: how many levels from container root to field leaves
numFields := len(ci.order)
depth := ssz.Depth(uint64(numFields))
// Step 1: Compute HTR for each subtree (field)
fieldRoots := make([][32]byte, numFields)
for i, name := range ci.order {
fieldInfo := ci.fields[name]
fieldVal := v.FieldByName(fieldInfo.goFieldName)
// Field i's gindex: shift currentGindex left by depth, then add the field index
fieldGindex := currentGindex<<depth + uint64(i)
htr, err := pc.merkleize(fieldInfo.sszInfo, fieldVal, fieldGindex)
if err != nil {
return [32]byte{}, fmt.Errorf("field %s: %w", name, err)
}
fieldRoots[i] = htr
}
// Step 2: Merkleize the field hashes into the container root,
// collecting sibling hashes if target is within this subtree
root := pc.merkleizeVectorAndCollect(fieldRoots, currentGindex, uint64(depth))
return root, nil
}
// merkleizeVectorBody computes the Merkle root of the "data" subtree for vector-like SSZ types
// (vectors and the data-part of lists/bitlists).
//
// Generalized indices (gindices): depth = ssz.Depth(limit); leafBase = subtreeRootGindex << depth; element/chunk i gindex = leafBase + uint64(i).
// Proof collection: merkleize() is called for composite elements; merkleizeVectorAndCollect collects required siblings at this layer.
// Padding: merkleizeVectorAndCollect uses trie.ZeroHashes as needed.
//
// Parameters:
// - elemInfo: SSZ type metadata for the element.
// - v: reflect.Value of the vector/list data.
// - length: number of actual elements present.
// - limit: virtual leaf capacity used for padding/Depth (fixed length for vectors, limit for lists).
// - subtreeRootGindex: gindex of the data subtree root.
//
// Returns:
// - [32]byte: Merkle root of the data subtree.
// - error: any error encountered while merkleizing composite elements.
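//
// For example, uint64 elements pack four values per 32-byte chunk via ssz.PackByChunk; with a
// limit of 8 chunks the depth is 3, and chunk i under a data root at gindex 2 sits at gindex (2<<3)+i.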
func (pc *proofCollector) merkleizeVectorBody(elemInfo *SszInfo, v reflect.Value, length int, limit uint64, subtreeRootGindex uint64) ([32]byte, error) {
depth := uint64(ssz.Depth(limit))
var chunks [][32]byte
if elemInfo.sszType.isBasic() {
// Serialize basic elements and pack into 32-byte chunks using ssz.PackByChunk.
elemSize, err := math.Int(itemLength(elemInfo))
if err != nil {
return [32]byte{}, fmt.Errorf("element size %d overflows int: %w", itemLength(elemInfo), err)
}
serialized := make([][]byte, length)
// Single contiguous allocation for all element data
allData := make([]byte, length*elemSize)
for i := range length {
buf := allData[i*elemSize : (i+1)*elemSize]
elem := v.Index(i)
if elemInfo.sszType == Boolean && elem.Bool() {
buf[0] = 1
} else {
bytesutil.PutLittleEndian(buf, elem.Uint(), elemSize)
}
serialized[i] = buf
}
chunks, err = ssz.PackByChunk(serialized)
if err != nil {
return [32]byte{}, err
}
} else {
// Composite elements: compute each element root (no padding here; merkleizeVectorAndCollect pads).
chunks = make([][32]byte, length)
// Fall back to per-element merkleization with proper gindices for proof collection.
// Parallel execution
workerCount := min(runtime.GOMAXPROCS(0), length)
jobs := make(chan int, workerCount*16)
errCh := make(chan error, 1) // only need the first error
stopCh := make(chan struct{})
var stopOnce sync.Once
var wg sync.WaitGroup
worker := func() {
defer wg.Done()
for idx := range jobs {
select {
case <-stopCh:
return
default:
}
elemGindex := subtreeRootGindex<<depth + uint64(idx)
htr, err := pc.merkleize(elemInfo, v.Index(idx), elemGindex)
if err != nil {
stopOnce.Do(func() { close(stopCh) })
select {
case errCh <- fmt.Errorf("index %d: %w", idx, err):
default:
}
return
}
chunks[idx] = htr
}
}
wg.Add(workerCount)
for range workerCount {
go worker()
}
// Enqueue jobs; stop early if any worker reports an error.
enqueue:
for i := range length {
select {
case <-stopCh:
break enqueue
case jobs <- i:
}
}
close(jobs)
wg.Wait()
select {
case err := <-errCh:
return [32]byte{}, err
default:
}
}
root := pc.merkleizeVectorAndCollect(chunks, subtreeRootGindex, depth)
return root, nil
}
// merkleizeVector computes the Merkle root of an SSZ vector (fixed-length).
//
// Generalized indices (gindices): currentGindex is the gindex of the vector root; element/chunk gindices are derived
// inside merkleizeVectorBody using leafBase = currentGindex << ssz.Depth(leaves).
//
// Proof collection: merkleizeVectorBody performs element/chunk merkleization and collects required siblings at the
// vector layer; collectLeaf stores the vector root if currentGindex was registered via addTarget.
//
// Parameters:
// - info: SSZ type metadata for the vector.
// - v: reflect.Value of the vector value.
// - currentGindex: generalized index (gindex) of the vector root.
//
// Returns:
// - [32]byte: Merkle root of the vector.
// - error: any error encountered while merkleizing composite elements.
func (pc *proofCollector) merkleizeVector(info *SszInfo, v reflect.Value, currentGindex uint64) ([32]byte, error) {
vi, err := info.VectorInfo()
if err != nil {
return [32]byte{}, err
}
length, err := math.Int(vi.Length())
if err != nil {
return [32]byte{}, fmt.Errorf("vector length %d overflows int: %w", vi.Length(), err)
}
elemInfo := vi.element
// Determine the virtual leaf capacity for the vector.
leaves, err := getChunkCount(info)
if err != nil {
return [32]byte{}, err
}
root, err := pc.merkleizeVectorBody(elemInfo, v, length, leaves, currentGindex)
if err != nil {
return [32]byte{}, err
}
// If the vector root itself is the target
pc.collectLeaf(currentGindex, root)
return root, nil
}
// merkleizeList computes the Merkle root of an SSZ list by merkleizing its data subtree and mixing in the length.
//
// Generalized indices (gindices): dataRoot is the left child of the list root (dataRootGindex = currentGindex*2); the length mixin is the right child (currentGindex*2+1).
// Proof collection: merkleizeVectorBody computes the data root (collecting required siblings in the data subtree), and mixinLengthAndCollect collects required siblings at the length-mixin level; collectLeaf stores the list root if registered.
//
// Parameters:
// - info: SSZ type metadata for the list.
// - v: reflect.Value of the list value.
// - currentGindex: generalized index (gindex) of the list root.
//
// Returns:
// - [32]byte: Merkle root of the list.
// - error: any error encountered while merkleizing the data subtree.
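//
// For example, a list rooted at gindex 3 has its data subtree root at gindex 6 (3*2) and its
// length mixin at gindex 7 (3*2+1).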
func (pc *proofCollector) merkleizeList(info *SszInfo, v reflect.Value, currentGindex uint64) ([32]byte, error) {
li, err := info.ListInfo()
if err != nil {
return [32]byte{}, err
}
length := v.Len()
elemInfo := li.element
chunks := make([][32]byte, 2)
// Compute the length hash (little-endian uint256)
binary.LittleEndian.PutUint64(chunks[1][:8], uint64(length))
// Data subtree root is the left child of the list root.
dataRootGindex := currentGindex * 2
// Compute virtual leaf capacity for the data subtree.
leaves, err := getChunkCount(info)
if err != nil {
return [32]byte{}, err
}
chunks[0], err = pc.merkleizeVectorBody(elemInfo, v, length, leaves, dataRootGindex)
if err != nil {
return [32]byte{}, err
}
// Handle the length mixin level (and proof bookkeeping at this level).
// Compute the final list root: hash(dataRoot || lengthHash)
root := pc.mixinLengthAndCollect(currentGindex, chunks)
// If the list root itself is the target
pc.collectLeaf(currentGindex, root)
return root, nil
}
// merkleizeBitvectorBody computes the Merkle root of a bitvector-like byte sequence by packing it into 32-byte chunks
// and merkleizing those chunks as a fixed-capacity vector (padding with trie.ZeroHashes as needed).
//
// Generalized indices (gindices): depth = ssz.Depth(chunkLimit); leafBase = subtreeRootGindex << depth; chunk i uses gindex = leafBase + uint64(i).
// Proof collection: merkleizeVectorAndCollect collects required sibling hashes at the chunk-merkleization layer.
//
// Parameters:
// - data: raw byte sequence representing the bitvector payload.
// - chunkLimit: fixed/limit number of 32-byte chunks (used for padding/Depth).
// - subtreeRootGindex: gindex of the bitvector data subtree root.
//
// Returns:
// - [32]byte: Merkle root of the bitvector data subtree.
// - error: any error encountered while packing data into chunks.
func (pc *proofCollector) merkleizeBitvectorBody(data []byte, chunkLimit uint64, subtreeRootGindex uint64) ([32]byte, error) {
depth := ssz.Depth(chunkLimit)
chunks, err := ssz.PackByChunk([][]byte{data})
if err != nil {
return [32]byte{}, err
}
root := pc.merkleizeVectorAndCollect(chunks, subtreeRootGindex, uint64(depth))
return root, nil
}
// merkleizeBitvector computes the Merkle root of a fixed-length SSZ bitvector and collects proof nodes for targets.
//
// Parameters:
// - info: SSZ type metadata for the bitvector.
// - v: reflect.Value of the bitvector value.
// - currentGindex: generalized index (gindex) of the bitvector root.
//
// Returns:
// - [32]byte: Merkle root of the bitvector.
// - error: any error encountered during packing or merkleization.
func (pc *proofCollector) merkleizeBitvector(info *SszInfo, v reflect.Value, currentGindex uint64) ([32]byte, error) {
bitvectorBytes := v.Bytes()
if len(bitvectorBytes) == 0 {
return [32]byte{}, fmt.Errorf("bitvector field is uninitialized (nil or empty slice)")
}
// Compute virtual leaf capacity for the bitvector.
numChunks, err := getChunkCount(info)
if err != nil {
return [32]byte{}, err
}
root, err := pc.merkleizeBitvectorBody(bitvectorBytes, numChunks, currentGindex)
if err != nil {
return [32]byte{}, err
}
pc.collectLeaf(currentGindex, root)
return root, nil
}
// merkleizeBitlist computes the Merkle root of an SSZ bitlist by merkleizing its data chunks and mixing in the bit length.
//
// Generalized indices (gindices): dataRoot is the left child (dataRootGindex = currentGindex*2) and the length mixin is the right child (currentGindex*2+1).
// Proof collection: merkleizeBitvectorBody computes the data root (collecting required siblings under dataRootGindex), and mixinLengthAndCollect collects required siblings at the length-mixin level; collectLeaf stores the bitlist root if registered.
//
// Parameters:
// - info: SSZ type metadata for the bitlist.
// - v: reflect.Value of the bitlist value.
// - currentGindex: generalized index (gindex) of the bitlist root.
//
// Returns:
// - [32]byte: Merkle root of the bitlist.
// - error: any error encountered while merkleizing the data subtree.
func (pc *proofCollector) merkleizeBitlist(info *SszInfo, v reflect.Value, currentGindex uint64) ([32]byte, error) {
bi, err := info.BitlistInfo()
if err != nil {
return [32]byte{}, err
}
bitlistBytes := v.Bytes()
// Use go-bitfield to get bytes with termination bit cleared
bl := bitfield.Bitlist(bitlistBytes)
data := bl.BytesNoTrim()
// Get the bit length from bitlistInfo
bitLength := bi.Length()
// Get the chunk limit from getChunkCount
limitChunks, err := getChunkCount(info)
if err != nil {
return [32]byte{}, err
}
chunks := make([][32]byte, 2)
// Compute the length hash (little-endian uint256)
binary.LittleEndian.PutUint64(chunks[1][:8], uint64(bitLength))
dataRootGindex := currentGindex * 2
chunks[0], err = pc.merkleizeBitvectorBody(data, limitChunks, dataRootGindex)
if err != nil {
return [32]byte{}, err
}
// Handle the length mixin level (and proof bookkeeping at this level).
root := pc.mixinLengthAndCollect(currentGindex, chunks)
pc.collectLeaf(currentGindex, root)
return root, nil
}
// merkleizeVectorAndCollect merkleizes a slice of 32-byte leaf nodes into a subtree root, padding to a virtual size of 2^depth.
//
// Generalized indices (gindices): at layer i (0-based), nodes have gindices levelBase = subtreeGeneralizedIndex << (depth-i) and node gindex = levelBase + idx.
// Proof collection: for each layer it calls collectSibling(nodeGindex, nodeHash) and stores only those gindices registered via addTarget.
//
// Parameters:
// - elements: leaf-level hashes (may be shorter than 2^depth; padding is applied with trie.ZeroHashes).
// - subtreeGeneralizedIndex: gindex of the subtree root.
// - depth: number of merkleization layers from subtree root to leaves.
//
// Returns:
// - [32]byte: Merkle root of the subtree.
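//
// For example, with depth = 1 and a subtree root at gindex 3, two leaf elements sit at
// gindices 6 and 7 and are hashed together into the root reported for gindex 3.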
func (pc *proofCollector) merkleizeVectorAndCollect(elements [][32]byte, subtreeGeneralizedIndex uint64, depth uint64) [32]byte {
// Empty input: return the zero hash at this depth.
if len(elements) == 0 {
return trie.ZeroHashes[depth]
}
for i := range depth {
layerLen := len(elements)
oddNodeLength := layerLen%2 == 1
if oddNodeLength {
zerohash := trie.ZeroHashes[i]
elements = append(elements, zerohash)
}
levelBaseGindex := subtreeGeneralizedIndex << (depth - i)
for idx := range elements {
gindex := levelBaseGindex + uint64(idx)
pc.collectSibling(gindex, elements[idx])
pc.collectLeaf(gindex, elements[idx])
}
elements = htr.VectorizedSha256(elements)
}
return elements[0]
}
// mixinLengthAndCollect computes the final mix-in root for list/bitlist values:
//
// root = hash(dataRoot, lengthHash)
//
// where chunks[0] is dataRoot and chunks[1] is the 32-byte length hash.
//
// Generalized indices (gindices): dataRoot is the left child (dataRootGindex = currentGindex*2) and lengthHash is the right child (lengthHashGindex = currentGindex*2+1).
// Proof collection: it calls collectSibling/collectLeaf for both child gindices; the collector stores them only if they were registered via addTarget.
//
// Parameters:
// - currentGindex: gindex of the parent node (list/bitlist root).
// - chunks: two 32-byte nodes: [dataRoot, lengthHash].
//
// Returns:
// - [32]byte: mixed-in Merkle root.
func (pc *proofCollector) mixinLengthAndCollect(currentGindex uint64, chunks [][32]byte) [32]byte {
dataRoot, lengthHash := chunks[0], chunks[1]
dataRootGindex, lengthHashGindex := currentGindex*2, currentGindex*2+1
pc.collectSibling(dataRootGindex, dataRoot)
pc.collectSibling(lengthHashGindex, lengthHash)
pc.collectLeaf(dataRootGindex, dataRoot)
pc.collectLeaf(lengthHashGindex, lengthHash)
return ssz.MixInLength(dataRoot, lengthHash[:])
}

View File

@@ -0,0 +1,531 @@
package query
import (
"crypto/sha256"
"encoding/binary"
"reflect"
"slices"
"testing"
"github.com/OffchainLabs/go-bitfield"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state/stateutil"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ssz "github.com/OffchainLabs/prysm/v7/encoding/ssz"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
sszquerypb "github.com/OffchainLabs/prysm/v7/proto/ssz_query/testing"
"github.com/OffchainLabs/prysm/v7/testing/require"
)
func TestProofCollector_New(t *testing.T) {
pc := newProofCollector()
require.NotNil(t, pc)
require.Equal(t, 0, len(pc.requiredSiblings))
require.Equal(t, 0, len(pc.requiredLeaves))
require.Equal(t, 0, len(pc.siblings))
require.Equal(t, 0, len(pc.leaves))
}
func TestProofCollector_Reset(t *testing.T) {
pc := newProofCollector()
pc.requiredSiblings[3] = struct{}{}
pc.requiredLeaves[5] = struct{}{}
pc.siblings[3] = [32]byte{1}
pc.leaves[5] = [32]byte{2}
pc.reset()
require.Equal(t, 0, len(pc.requiredSiblings))
require.Equal(t, 0, len(pc.requiredLeaves))
require.Equal(t, 0, len(pc.siblings))
require.Equal(t, 0, len(pc.leaves))
}
func TestProofCollector_AddTarget(t *testing.T) {
pc := newProofCollector()
pc.addTarget(5)
_, hasLeaf := pc.requiredLeaves[5]
_, hasSibling4 := pc.requiredSiblings[4]
_, hasSibling3 := pc.requiredSiblings[3]
_, hasSibling1 := pc.requiredSiblings[1] // GI 1 is the root
require.Equal(t, true, hasLeaf)
require.Equal(t, true, hasSibling4)
require.Equal(t, true, hasSibling3)
require.Equal(t, false, hasSibling1)
}
func TestProofCollector_ToProof(t *testing.T) {
pc := newProofCollector()
pc.addTarget(5)
leaf := [32]byte{9}
sibling4 := [32]byte{4}
sibling3 := [32]byte{3}
pc.collectLeaf(5, leaf)
pc.collectSibling(4, sibling4)
pc.collectSibling(3, sibling3)
proof, err := pc.toProof()
require.NoError(t, err)
require.Equal(t, 5, proof.Index)
require.DeepEqual(t, leaf[:], proof.Leaf)
require.Equal(t, 2, len(proof.Hashes))
require.DeepEqual(t, sibling4[:], proof.Hashes[0])
require.DeepEqual(t, sibling3[:], proof.Hashes[1])
}
func TestProofCollector_ToProof_NoLeaves(t *testing.T) {
pc := newProofCollector()
_, err := pc.toProof()
require.NotNil(t, err)
}
func TestProofCollector_CollectLeaf(t *testing.T) {
pc := newProofCollector()
leaf := [32]byte{7}
pc.collectLeaf(10, leaf)
require.Equal(t, 0, len(pc.leaves))
pc.addTarget(10)
pc.collectLeaf(10, leaf)
stored, ok := pc.leaves[10]
require.Equal(t, true, ok)
require.Equal(t, leaf, stored)
}
func TestProofCollector_CollectSibling(t *testing.T) {
pc := newProofCollector()
hash := [32]byte{5}
pc.collectSibling(4, hash)
require.Equal(t, 0, len(pc.siblings))
pc.addTarget(5)
pc.collectSibling(4, hash)
stored, ok := pc.siblings[4]
require.Equal(t, true, ok)
require.Equal(t, hash, stored)
}
func TestProofCollector_Merkleize_BasicTypes(t *testing.T) {
testCases := []struct {
name string
sszType SSZType
value any
expected [32]byte
}{
{
name: "uint8",
sszType: Uint8,
value: uint8(0x11),
expected: func() [32]byte {
var leaf [32]byte
leaf[0] = 0x11
return leaf
}(),
},
{
name: "uint16",
sszType: Uint16,
value: uint16(0x2211),
expected: func() [32]byte {
var leaf [32]byte
binary.LittleEndian.PutUint16(leaf[:2], 0x2211)
return leaf
}(),
},
{
name: "uint32",
sszType: Uint32,
value: uint32(0x44332211),
expected: func() [32]byte {
var leaf [32]byte
binary.LittleEndian.PutUint32(leaf[:4], 0x44332211)
return leaf
}(),
},
{
name: "uint64",
sszType: Uint64,
value: uint64(0x8877665544332211),
expected: func() [32]byte {
var leaf [32]byte
binary.LittleEndian.PutUint64(leaf[:8], 0x8877665544332211)
return leaf
}(),
},
{
name: "bool",
sszType: Boolean,
value: true,
expected: func() [32]byte {
var leaf [32]byte
leaf[0] = 1
return leaf
}(),
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
pc := newProofCollector()
gindex := uint64(3)
pc.addTarget(gindex)
leaf, err := pc.merkleizeBasicType(tc.sszType, reflect.ValueOf(tc.value), gindex)
require.NoError(t, err)
require.Equal(t, tc.expected, leaf)
stored, ok := pc.leaves[gindex]
require.Equal(t, true, ok)
require.Equal(t, tc.expected, stored)
})
}
}
func TestProofCollector_Merkleize_Container(t *testing.T) {
container := makeFixedTestContainer()
info, err := AnalyzeObject(container)
require.NoError(t, err)
pc := newProofCollector()
pc.addTarget(1)
root, err := pc.merkleize(info, reflect.ValueOf(container), 1)
require.NoError(t, err)
expected, err := container.HashTreeRoot()
require.NoError(t, err)
require.Equal(t, expected, root)
stored, ok := pc.leaves[1]
require.Equal(t, true, ok)
require.Equal(t, expected, stored)
}
func TestProofCollector_Merkleize_Vector(t *testing.T) {
container := makeFixedTestContainer()
info, err := AnalyzeObject(container)
require.NoError(t, err)
ci, err := info.ContainerInfo()
require.NoError(t, err)
field := ci.fields["vector_field"]
pc := newProofCollector()
root, err := pc.merkleizeVector(field.sszInfo, reflect.ValueOf(container.VectorField), 1)
require.NoError(t, err)
serialized := make([][]byte, len(container.VectorField))
for i, v := range container.VectorField {
buf := make([]byte, 8)
binary.LittleEndian.PutUint64(buf, v)
serialized[i] = buf
}
chunks, err := ssz.PackByChunk(serialized)
require.NoError(t, err)
limit, err := getChunkCount(field.sszInfo)
require.NoError(t, err)
expected := ssz.MerkleizeVector(chunks, limit)
require.Equal(t, expected, root)
}
func TestProofCollector_Merkleize_List(t *testing.T) {
list := []*sszquerypb.FixedNestedContainer{
makeFixedNestedContainer(1),
makeFixedNestedContainer(2),
}
container := makeVariableTestContainer(list, bitfield.NewBitlist(1))
info, err := AnalyzeObject(container)
require.NoError(t, err)
ci, err := info.ContainerInfo()
require.NoError(t, err)
field := ci.fields["field_list_container"]
pc := newProofCollector()
root, err := pc.merkleizeList(field.sszInfo, reflect.ValueOf(list), 1)
require.NoError(t, err)
listInfo, err := field.sszInfo.ListInfo()
require.NoError(t, err)
expected, err := ssz.MerkleizeListSSZ(list, listInfo.Limit())
require.NoError(t, err)
require.Equal(t, expected, root)
}
func TestProofCollector_Merkleize_Bitvector(t *testing.T) {
container := makeFixedTestContainer()
info, err := AnalyzeObject(container)
require.NoError(t, err)
ci, err := info.ContainerInfo()
require.NoError(t, err)
field := ci.fields["bitvector64_field"]
pc := newProofCollector()
root, err := pc.merkleizeBitvector(field.sszInfo, reflect.ValueOf(container.Bitvector64Field), 1)
require.NoError(t, err)
expected, err := ssz.MerkleizeByteSliceSSZ([]byte(container.Bitvector64Field))
require.NoError(t, err)
require.Equal(t, expected, root)
}
func TestProofCollector_Merkleize_Bitlist(t *testing.T) {
bitlist := bitfield.NewBitlist(16)
bitlist.SetBitAt(3, true)
bitlist.SetBitAt(8, true)
container := makeVariableTestContainer(nil, bitlist)
info, err := AnalyzeObject(container)
require.NoError(t, err)
ci, err := info.ContainerInfo()
require.NoError(t, err)
field := ci.fields["bitlist_field"]
pc := newProofCollector()
root, err := pc.merkleizeBitlist(field.sszInfo, reflect.ValueOf(container.BitlistField), 1)
require.NoError(t, err)
bitlistInfo, err := field.sszInfo.BitlistInfo()
require.NoError(t, err)
expected, err := ssz.BitlistRoot(bitfield.Bitlist(bitlist), bitlistInfo.Limit())
require.NoError(t, err)
require.Equal(t, expected, root)
}
func TestProofCollector_MerkleizeVectorBody_Basic(t *testing.T) {
container := makeFixedTestContainer()
info, err := AnalyzeObject(container)
require.NoError(t, err)
ci, err := info.ContainerInfo()
require.NoError(t, err)
field := ci.fields["vector_field"]
vectorInfo, err := field.sszInfo.VectorInfo()
require.NoError(t, err)
length := len(container.VectorField)
limit, err := getChunkCount(field.sszInfo)
require.NoError(t, err)
pc := newProofCollector()
root, err := pc.merkleizeVectorBody(vectorInfo.element, reflect.ValueOf(container.VectorField), length, limit, 2)
require.NoError(t, err)
serialized := make([][]byte, len(container.VectorField))
for i, v := range container.VectorField {
buf := make([]byte, 8)
binary.LittleEndian.PutUint64(buf, v)
serialized[i] = buf
}
chunks, err := ssz.PackByChunk(serialized)
require.NoError(t, err)
expected := ssz.MerkleizeVector(chunks, limit)
require.Equal(t, expected, root)
}
func TestProofCollector_MerkleizeVectorAndCollect(t *testing.T) {
pc := newProofCollector()
pc.addTarget(6)
elements := [][32]byte{{1}, {2}}
expected := ssz.MerkleizeVector(slices.Clone(elements), 2)
root := pc.merkleizeVectorAndCollect(elements, 3, 1)
storedLeaf, hasLeaf := pc.leaves[6]
storedSibling, hasSibling := pc.siblings[7]
require.Equal(t, true, hasLeaf)
require.Equal(t, true, hasSibling)
require.Equal(t, elements[0], storedLeaf)
require.Equal(t, elements[1], storedSibling)
require.Equal(t, expected, root)
}
func TestProofCollector_MixinLengthAndCollect(t *testing.T) {
list := []*sszquerypb.FixedNestedContainer{
makeFixedNestedContainer(1),
makeFixedNestedContainer(2),
}
container := makeVariableTestContainer(list, bitfield.NewBitlist(1))
info, err := AnalyzeObject(container)
require.NoError(t, err)
ci, err := info.ContainerInfo()
require.NoError(t, err)
field := ci.fields["field_list_container"]
// Target gindex 2 (data root) - sibling at gindex 3 (length hash) should be collected
pc := newProofCollector()
pc.addTarget(2)
root, err := pc.merkleizeList(field.sszInfo, reflect.ValueOf(list), 1)
require.NoError(t, err)
listInfo, err := field.sszInfo.ListInfo()
require.NoError(t, err)
expected, err := ssz.MerkleizeListSSZ(list, listInfo.Limit())
require.NoError(t, err)
require.Equal(t, expected, root)
// Verify data root is collected as leaf at gindex 2
storedLeaf, hasLeaf := pc.leaves[2]
require.Equal(t, true, hasLeaf)
// Verify length hash is collected as sibling at gindex 3
storedSibling, hasSibling := pc.siblings[3]
require.Equal(t, true, hasSibling)
// Verify the root is hash(dataRoot || lengthHash)
expectedBuf := append(storedLeaf[:], storedSibling[:]...)
expectedRoot := sha256.Sum256(expectedBuf)
require.Equal(t, expectedRoot, root)
}
func BenchmarkOptimizedValidatorRoots(b *testing.B) {
validators := make([]*ethpb.Validator, 1000)
for i := range validators {
validators[i] = makeTestValidator(i)
}
b.ResetTimer()
for b.Loop() {
_, err := stateutil.OptimizedValidatorRoots(validators)
if err != nil {
b.Fatal(err)
}
}
}
func BenchmarkProofCollectorMerkleize(b *testing.B) {
validators := make([]*ethpb.Validator, 1000)
for i := range validators {
validators[i] = makeTestValidator(i)
}
info, err := AnalyzeObject(validators[0])
require.NoError(b, err)
b.ResetTimer()
for b.Loop() {
for _, val := range validators {
pc := newProofCollector()
v := reflect.ValueOf(val)
_, err := pc.merkleize(info, v, 1)
if err != nil {
b.Fatal(err)
}
}
}
}
func makeTestValidator(i int) *ethpb.Validator {
pubkey := make([]byte, 48)
for j := range pubkey {
pubkey[j] = byte(i + j)
}
withdrawalCredentials := make([]byte, 32)
for j := range withdrawalCredentials {
withdrawalCredentials[j] = byte(255 - ((i + j) % 256))
}
return &ethpb.Validator{
PublicKey: pubkey,
WithdrawalCredentials: withdrawalCredentials,
EffectiveBalance: uint64(32000000000 + i),
Slashed: i%2 == 0,
ActivationEligibilityEpoch: primitives.Epoch(i),
ActivationEpoch: primitives.Epoch(i + 1),
ExitEpoch: primitives.Epoch(i + 2),
WithdrawableEpoch: primitives.Epoch(i + 3),
}
}
func makeFixedNestedContainer(value uint64) *sszquerypb.FixedNestedContainer {
value2 := make([]byte, 32)
for i := range value2 {
value2[i] = byte(i)
}
return &sszquerypb.FixedNestedContainer{
Value1: value,
Value2: value2,
}
}
func makeFixedTestContainer() *sszquerypb.FixedTestContainer {
fieldBytes32 := make([]byte, 32)
for i := range fieldBytes32 {
fieldBytes32[i] = byte(i)
}
vectorField := make([]uint64, 24)
for i := range vectorField {
vectorField[i] = uint64(i)
}
rows := make([][]byte, 5)
for i := range rows {
row := make([]byte, 32)
for j := range row {
row[j] = byte(i) + byte(j)
}
rows[i] = row
}
bitvector64 := bitfield.NewBitvector64()
bitvector64.SetBitAt(1, true)
bitvector512 := bitfield.NewBitvector512()
bitvector512.SetBitAt(10, true)
trailing := make([]byte, 56)
for i := range trailing {
trailing[i] = byte(i)
}
return &sszquerypb.FixedTestContainer{
FieldUint32: 1,
FieldUint64: 2,
FieldBool: true,
FieldBytes32: fieldBytes32,
Nested: makeFixedNestedContainer(3),
VectorField: vectorField,
TwoDimensionBytesField: rows,
Bitvector64Field: bitvector64,
Bitvector512Field: bitvector512,
TrailingField: trailing,
}
}
func makeVariableTestContainer(list []*sszquerypb.FixedNestedContainer, bitlist bitfield.Bitlist) *sszquerypb.VariableTestContainer {
leading := make([]byte, 32)
for i := range leading {
leading[i] = byte(i)
}
trailing := make([]byte, 56)
for i := range trailing {
trailing[i] = byte(255 - i)
}
if bitlist == nil {
bitlist = bitfield.NewBitlist(0)
}
return &sszquerypb.VariableTestContainer{
LeadingField: leading,
FieldListContainer: list,
BitlistField: bitlist,
TrailingField: trailing,
}
}

View File

@@ -389,6 +389,7 @@ func TestHashTreeRoot(t *testing.T) {
require.NoError(t, err, "HashTreeRoot should not return an error")
expectedHashTreeRoot, err := tt.obj.HashTreeRoot()
require.NoError(t, err, "HashTreeRoot on original object should not return an error")
// Verify the Merkle tree root matches with the SSZ generated HashTreeRoot
require.Equal(t, expectedHashTreeRoot, hashTreeRoot, "HashTreeRoot from sszInfo should match original object's HashTreeRoot")
})
}

View File

@@ -283,16 +283,16 @@ func (mr *MockValidatorClientMockRecorder) ProposeExit(ctx, in any) *gomock.Call
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ProposeExit", reflect.TypeOf((*MockValidatorClient)(nil).ProposeExit), ctx, in)
}
// SetHost mocks base method.
func (m *MockValidatorClient) SetHost(host string) {
// SwitchHost mocks base method.
func (m *MockValidatorClient) SwitchHost(host string) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetHost", host)
m.ctrl.Call(m, "SwitchHost", host)
}
// SetHost indicates an expected call of SetHost.
func (mr *MockValidatorClientMockRecorder) SetHost(host any) *gomock.Call {
// SwitchHost indicates an expected call of SwitchHost.
func (mr *MockValidatorClientMockRecorder) SwitchHost(host any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetHost", reflect.TypeOf((*MockValidatorClient)(nil).SetHost), host)
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SwitchHost", reflect.TypeOf((*MockValidatorClient)(nil).SwitchHost), host)
}
// StartEventStream mocks base method.

View File

@@ -25,6 +25,7 @@ go_library(
],
deps = [
"//api/grpc:go_default_library",
"//api/rest:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//cmd/validator/flags:go_default_library",
"//config/fieldparams:go_default_library",

View File

@@ -3,14 +3,13 @@ package accounts
import (
"context"
"io"
"net/http"
"os"
"time"
grpcutil "github.com/OffchainLabs/prysm/v7/api/grpc"
"github.com/OffchainLabs/prysm/v7/api/rest"
"github.com/OffchainLabs/prysm/v7/crypto/bls"
"github.com/OffchainLabs/prysm/v7/validator/accounts/wallet"
beaconApi "github.com/OffchainLabs/prysm/v7/validator/client/beacon-api"
iface "github.com/OffchainLabs/prysm/v7/validator/client/iface"
nodeClientFactory "github.com/OffchainLabs/prysm/v7/validator/client/node-client-factory"
validatorClientFactory "github.com/OffchainLabs/prysm/v7/validator/client/validator-client-factory"
@@ -77,22 +76,17 @@ func (acm *CLIManager) prepareBeaconClients(ctx context.Context) (*iface.Validat
}
ctx = grpcutil.AppendHeaders(ctx, acm.grpcHeaders)
grpcConn, err := grpc.DialContext(ctx, acm.beaconRPCProvider, acm.dialOpts...)
if err != nil {
return nil, nil, errors.Wrapf(err, "could not dial endpoint %s", acm.beaconRPCProvider)
}
conn := validatorHelpers.NewNodeConnection(
grpcConn,
acm.beaconApiEndpoint,
validatorHelpers.WithBeaconApiTimeout(acm.beaconApiTimeout),
)
restHandler := beaconApi.NewBeaconApiRestHandler(
http.Client{Timeout: acm.beaconApiTimeout},
acm.beaconApiEndpoint,
conn, err := validatorHelpers.NewNodeConnection(
validatorHelpers.WithGRPC(ctx, acm.beaconRPCProvider, acm.dialOpts),
validatorHelpers.WithREST(acm.beaconApiEndpoint, rest.WithHttpTimeout(acm.beaconApiTimeout)),
)
validatorClient := validatorClientFactory.NewValidatorClient(conn, restHandler)
nodeClient := nodeClientFactory.NewNodeClient(conn, restHandler)
if err != nil {
return nil, nil, err
}
validatorClient := validatorClientFactory.NewValidatorClient(conn)
nodeClient := nodeClientFactory.NewNodeClient(conn)
return &validatorClient, &nodeClient, nil
}

View File

@@ -10,7 +10,6 @@ go_library(
"log.go",
"log_helpers.go",
"metrics.go",
"multiple_endpoints_grpc_resolver.go",
"propose.go",
"registration.go",
"runner.go",
@@ -29,6 +28,7 @@ go_library(
"//api/client:go_default_library",
"//api/client/event:go_default_library",
"//api/grpc:go_default_library",
"//api/rest:go_default_library",
"//api/server/structs:go_default_library",
"//async:go_default_library",
"//async/event:go_default_library",
@@ -58,7 +58,6 @@ go_library(
"//time/slots:go_default_library",
"//validator/accounts/iface:go_default_library",
"//validator/accounts/wallet:go_default_library",
"//validator/client/beacon-api:go_default_library",
"//validator/client/beacon-chain-client-factory:go_default_library",
"//validator/client/iface:go_default_library",
"//validator/client/node-client-factory:go_default_library",
@@ -86,13 +85,11 @@ go_library(
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opentelemetry_go_contrib_instrumentation_google_golang_org_grpc_otelgrpc//:go_default_library",
"@io_opentelemetry_go_contrib_instrumentation_net_http_otelhttp//:go_default_library",
"@io_opentelemetry_go_otel_trace//:go_default_library",
"@org_golang_google_grpc//:go_default_library",
"@org_golang_google_grpc//codes:go_default_library",
"@org_golang_google_grpc//credentials:go_default_library",
"@org_golang_google_grpc//metadata:go_default_library",
"@org_golang_google_grpc//resolver:go_default_library",
"@org_golang_google_grpc//status:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
"@org_golang_google_protobuf//types/known/emptypb:go_default_library",
@@ -124,6 +121,8 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//api/grpc:go_default_library",
"//api/rest:go_default_library",
"//api/server/structs:go_default_library",
"//async/event:go_default_library",
"//beacon-chain/core/signing:go_default_library",

View File

@@ -26,7 +26,6 @@ go_library(
"propose_exit.go",
"prysm_beacon_chain_client.go",
"registration.go",
"rest_handler_client.go",
"state_validators.go",
"status.go",
"stream_blocks.go",
@@ -43,6 +42,7 @@ go_library(
"//api:go_default_library",
"//api/apiutil:go_default_library",
"//api/client/event:go_default_library",
"//api/rest:go_default_library",
"//api/server/structs:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
@@ -111,6 +111,7 @@ go_test(
deps = [
"//api:go_default_library",
"//api/apiutil:go_default_library",
"//api/rest:go_default_library",
"//api/server/structs:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/rpc/eth/shared/testing:go_default_library",

View File

@@ -5,6 +5,7 @@ import (
"reflect"
"strconv"
"github.com/OffchainLabs/prysm/v7/api/rest"
"github.com/OffchainLabs/prysm/v7/api/server/structs"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
@@ -17,7 +18,7 @@ import (
type beaconApiChainClient struct {
fallbackClient iface.ChainClient
jsonRestHandler RestHandler
jsonRestHandler rest.RestHandler
stateValidatorsProvider StateValidatorsProvider
}
@@ -327,7 +328,7 @@ func (c beaconApiChainClient) ValidatorParticipation(ctx context.Context, in *et
return nil, errors.New("beaconApiChainClient.ValidatorParticipation is not implemented. To use a fallback client, pass a fallback client as the last argument of NewBeaconApiChainClientWithFallback.")
}
func NewBeaconApiChainClientWithFallback(jsonRestHandler RestHandler, fallbackClient iface.ChainClient) iface.ChainClient {
func NewBeaconApiChainClientWithFallback(jsonRestHandler rest.RestHandler, fallbackClient iface.ChainClient) iface.ChainClient {
return &beaconApiChainClient{
jsonRestHandler: jsonRestHandler,
fallbackClient: fallbackClient,

View File

@@ -5,6 +5,7 @@ import (
"net/http"
"strconv"
"github.com/OffchainLabs/prysm/v7/api/rest"
"github.com/OffchainLabs/prysm/v7/api/server/structs"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/validator/client/iface"
@@ -20,7 +21,7 @@ var (
type beaconApiNodeClient struct {
fallbackClient iface.NodeClient
jsonRestHandler RestHandler
jsonRestHandler rest.RestHandler
genesisProvider GenesisProvider
}
@@ -115,7 +116,7 @@ func (c *beaconApiNodeClient) IsReady(ctx context.Context) bool {
return statusCode == http.StatusOK
}
func NewNodeClientWithFallback(jsonRestHandler RestHandler, fallbackClient iface.NodeClient) iface.NodeClient {
func NewNodeClientWithFallback(jsonRestHandler rest.RestHandler, fallbackClient iface.NodeClient) iface.NodeClient {
b := &beaconApiNodeClient{
jsonRestHandler: jsonRestHandler,
fallbackClient: fallbackClient,

View File

@@ -6,6 +6,7 @@ import (
"time"
"github.com/OffchainLabs/prysm/v7/api/client/event"
"github.com/OffchainLabs/prysm/v7/api/rest"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
@@ -22,13 +23,13 @@ type beaconApiValidatorClient struct {
genesisProvider GenesisProvider
dutiesProvider dutiesProvider
stateValidatorsProvider StateValidatorsProvider
jsonRestHandler RestHandler
jsonRestHandler rest.RestHandler
beaconBlockConverter BeaconBlockConverter
prysmChainClient iface.PrysmChainClient
isEventStreamRunning bool
}
func NewBeaconApiValidatorClient(jsonRestHandler RestHandler, opts ...ValidatorClientOpt) iface.ValidatorClient {
func NewBeaconApiValidatorClient(jsonRestHandler rest.RestHandler, opts ...ValidatorClientOpt) iface.ValidatorClient {
c := &beaconApiValidatorClient{
genesisProvider: &beaconApiGenesisProvider{jsonRestHandler: jsonRestHandler},
dutiesProvider: beaconApiDutiesProvider{jsonRestHandler: jsonRestHandler},
@@ -331,6 +332,6 @@ func (c *beaconApiValidatorClient) Host() string {
return c.jsonRestHandler.Host()
}
func (c *beaconApiValidatorClient) SetHost(host string) {
c.jsonRestHandler.SetHost(host)
func (c *beaconApiValidatorClient) SwitchHost(host string) {
c.jsonRestHandler.SwitchHost(host)
}

View File

@@ -549,7 +549,7 @@ func TestBeaconApiValidatorClient_Host(t *testing.T) {
hosts := []string{"http://localhost:8080", "http://localhost:8081"}
jsonRestHandler := mock.NewMockJsonRestHandler(ctrl)
jsonRestHandler.EXPECT().SetHost(
jsonRestHandler.EXPECT().SwitchHost(
hosts[0],
).Times(1)
jsonRestHandler.EXPECT().Host().Return(
@@ -557,17 +557,17 @@ func TestBeaconApiValidatorClient_Host(t *testing.T) {
).Times(1)
validatorClient := beaconApiValidatorClient{jsonRestHandler: jsonRestHandler}
validatorClient.SetHost(hosts[0])
validatorClient.SwitchHost(hosts[0])
host := validatorClient.Host()
require.Equal(t, hosts[0], host)
jsonRestHandler.EXPECT().SetHost(
jsonRestHandler.EXPECT().SwitchHost(
hosts[1],
).Times(1)
jsonRestHandler.EXPECT().Host().Return(
hosts[1],
).Times(1)
validatorClient.SetHost(hosts[1])
validatorClient.SwitchHost(hosts[1])
host = validatorClient.Host()
require.Equal(t, hosts[1], host)
}

View File

@@ -9,6 +9,7 @@ import (
"strconv"
"github.com/OffchainLabs/prysm/v7/api/apiutil"
"github.com/OffchainLabs/prysm/v7/api/rest"
"github.com/OffchainLabs/prysm/v7/api/server/structs"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
@@ -27,7 +28,7 @@ type dutiesProvider interface {
}
type beaconApiDutiesProvider struct {
jsonRestHandler RestHandler
jsonRestHandler rest.RestHandler
}
type attesterDuty struct {

View File

@@ -7,6 +7,7 @@ import (
"sync"
"time"
"github.com/OffchainLabs/prysm/v7/api/rest"
"github.com/OffchainLabs/prysm/v7/api/server/structs"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
@@ -20,7 +21,7 @@ type GenesisProvider interface {
}
type beaconApiGenesisProvider struct {
jsonRestHandler RestHandler
jsonRestHandler rest.RestHandler
genesis *structs.Genesis
once sync.Once
}

View File

@@ -150,14 +150,14 @@ func (mr *MockRestHandlerMockRecorder) PostSSZ(ctx, endpoint, headers, data any)
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "PostSSZ", reflect.TypeOf((*MockRestHandler)(nil).PostSSZ), ctx, endpoint, headers, data)
}
// SetHost mocks base method.
func (m *MockRestHandler) SetHost(host string) {
// SwitchHost mocks base method.
func (m *MockRestHandler) SwitchHost(host string) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetHost", host)
m.ctrl.Call(m, "SwitchHost", host)
}
// SetHost indicates an expected call of SetHost.
func (mr *MockRestHandlerMockRecorder) SetHost(host any) *gomock.Call {
// SwitchHost indicates an expected call of SwitchHost.
func (mr *MockRestHandlerMockRecorder) SwitchHost(host any) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetHost", reflect.TypeOf((*MockRestHandler)(nil).SetHost), host)
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SwitchHost", reflect.TypeOf((*MockRestHandler)(nil).SwitchHost), host)
}

View File

@@ -10,6 +10,7 @@ import (
"strings"
"github.com/OffchainLabs/prysm/v7/api/apiutil"
"github.com/OffchainLabs/prysm/v7/api/rest"
"github.com/OffchainLabs/prysm/v7/api/server/structs"
validator2 "github.com/OffchainLabs/prysm/v7/consensus-types/validator"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
@@ -18,7 +19,7 @@ import (
)
// NewPrysmChainClient returns implementation of iface.PrysmChainClient.
func NewPrysmChainClient(jsonRestHandler RestHandler, nodeClient iface.NodeClient) iface.PrysmChainClient {
func NewPrysmChainClient(jsonRestHandler rest.RestHandler, nodeClient iface.NodeClient) iface.PrysmChainClient {
return prysmChainClient{
jsonRestHandler: jsonRestHandler,
nodeClient: nodeClient,
@@ -26,7 +27,7 @@ func NewPrysmChainClient(jsonRestHandler RestHandler, nodeClient iface.NodeClien
}
type prysmChainClient struct {
jsonRestHandler RestHandler
jsonRestHandler rest.RestHandler
nodeClient iface.NodeClient
}

View File

@@ -12,13 +12,12 @@ import (
"time"
"github.com/OffchainLabs/prysm/v7/api"
"github.com/OffchainLabs/prysm/v7/api/rest"
"github.com/OffchainLabs/prysm/v7/api/server/structs"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/network/httputil"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/sirupsen/logrus/hooks/test"
)
@@ -45,10 +44,7 @@ func TestGet(t *testing.T) {
server := httptest.NewServer(mux)
defer server.Close()
jsonRestHandler := BeaconApiRestHandler{
client: http.Client{Timeout: time.Second * 5},
host: server.URL,
}
jsonRestHandler := rest.NewRestHandler(http.Client{Timeout: time.Second * 5}, server.URL)
resp := &structs.GetGenesisResponse{}
require.NoError(t, jsonRestHandler.Get(ctx, endpoint+"?arg1=abc&arg2=def", resp))
assert.DeepEqual(t, genesisJson, resp)
@@ -79,10 +75,7 @@ func TestGetSSZ(t *testing.T) {
server := httptest.NewServer(mux)
defer server.Close()
jsonRestHandler := BeaconApiRestHandler{
client: http.Client{Timeout: time.Second * 5},
host: server.URL,
}
jsonRestHandler := rest.NewRestHandler(http.Client{Timeout: time.Second * 5}, server.URL)
body, header, err := jsonRestHandler.GetSSZ(ctx, endpoint)
require.NoError(t, err)
@@ -108,10 +101,7 @@ func TestGetSSZ(t *testing.T) {
server := httptest.NewServer(mux)
defer server.Close()
jsonRestHandler := BeaconApiRestHandler{
client: http.Client{Timeout: time.Second * 5},
host: server.URL,
}
jsonRestHandler := rest.NewRestHandler(http.Client{Timeout: time.Second * 5}, server.URL)
body, header, err := jsonRestHandler.GetSSZ(ctx, endpoint)
require.NoError(t, err)
@@ -136,10 +126,7 @@ func TestGetSSZ(t *testing.T) {
server := httptest.NewServer(mux)
defer server.Close()
jsonRestHandler := BeaconApiRestHandler{
client: http.Client{Timeout: time.Second * 5},
host: server.URL,
}
jsonRestHandler := rest.NewRestHandler(http.Client{Timeout: time.Second * 5}, server.URL)
_, _, err := jsonRestHandler.GetSSZ(ctx, endpoint)
require.NoError(t, err)
@@ -161,7 +148,7 @@ func TestAcceptOverrideSSZ(t *testing.T) {
require.NoError(t, err)
}))
defer srv.Close()
c := NewBeaconApiRestHandler(http.Client{Timeout: time.Second * 5}, srv.URL)
c := rest.NewRestHandler(http.Client{Timeout: time.Second * 5}, srv.URL)
_, _, err := c.GetSSZ(t.Context(), "/test")
require.NoError(t, err)
}
@@ -204,162 +191,12 @@ func TestPost(t *testing.T) {
server := httptest.NewServer(mux)
defer server.Close()
jsonRestHandler := BeaconApiRestHandler{
client: http.Client{Timeout: time.Second * 5},
host: server.URL,
}
jsonRestHandler := rest.NewRestHandler(http.Client{Timeout: time.Second * 5}, server.URL)
resp := &structs.GetGenesisResponse{}
require.NoError(t, jsonRestHandler.Post(ctx, endpoint, headers, bytes.NewBuffer(dataBytes), resp))
assert.DeepEqual(t, genesisJson, resp)
}
func Test_decodeResp(t *testing.T) {
type j struct {
Foo string `json:"foo"`
}
t.Run("200 JSON with charset", func(t *testing.T) {
body := bytes.Buffer{}
r := &http.Response{
Status: "200",
StatusCode: http.StatusOK,
Body: io.NopCloser(&body),
Header: map[string][]string{"Content-Type": {"application/json; charset=utf-8"}},
}
require.NoError(t, decodeResp(r, nil))
})
t.Run("200 non-JSON", func(t *testing.T) {
body := bytes.Buffer{}
r := &http.Response{
Status: "200",
StatusCode: http.StatusOK,
Body: io.NopCloser(&body),
Header: map[string][]string{"Content-Type": {api.OctetStreamMediaType}},
}
require.NoError(t, decodeResp(r, nil))
})
t.Run("204 non-JSON", func(t *testing.T) {
body := bytes.Buffer{}
r := &http.Response{
Status: "204",
StatusCode: http.StatusNoContent,
Body: io.NopCloser(&body),
Header: map[string][]string{"Content-Type": {api.OctetStreamMediaType}},
}
require.NoError(t, decodeResp(r, nil))
})
t.Run("500 non-JSON", func(t *testing.T) {
body := bytes.Buffer{}
_, err := body.WriteString("foo")
require.NoError(t, err)
r := &http.Response{
Status: "500",
StatusCode: http.StatusInternalServerError,
Body: io.NopCloser(&body),
Header: map[string][]string{"Content-Type": {api.OctetStreamMediaType}},
}
err = decodeResp(r, nil)
errJson := &httputil.DefaultJsonError{}
require.Equal(t, true, errors.As(err, &errJson))
assert.Equal(t, http.StatusInternalServerError, errJson.Code)
assert.Equal(t, "foo", errJson.Message)
})
t.Run("200 JSON with resp", func(t *testing.T) {
body := bytes.Buffer{}
b, err := json.Marshal(&j{Foo: "foo"})
require.NoError(t, err)
body.Write(b)
r := &http.Response{
Status: "200",
StatusCode: http.StatusOK,
Body: io.NopCloser(&body),
Header: map[string][]string{"Content-Type": {api.JsonMediaType}},
}
resp := &j{}
require.NoError(t, decodeResp(r, resp))
assert.Equal(t, "foo", resp.Foo)
})
t.Run("200 JSON without resp", func(t *testing.T) {
body := bytes.Buffer{}
r := &http.Response{
Status: "200",
StatusCode: http.StatusOK,
Body: io.NopCloser(&body),
Header: map[string][]string{"Content-Type": {api.JsonMediaType}},
}
require.NoError(t, decodeResp(r, nil))
})
t.Run("204 JSON", func(t *testing.T) {
body := bytes.Buffer{}
r := &http.Response{
Status: "204",
StatusCode: http.StatusNoContent,
Body: io.NopCloser(&body),
Header: map[string][]string{"Content-Type": {api.JsonMediaType}},
}
require.NoError(t, decodeResp(r, nil))
})
t.Run("500 JSON", func(t *testing.T) {
body := bytes.Buffer{}
b, err := json.Marshal(&httputil.DefaultJsonError{Code: http.StatusInternalServerError, Message: "error"})
require.NoError(t, err)
body.Write(b)
r := &http.Response{
Status: "500",
StatusCode: http.StatusInternalServerError,
Body: io.NopCloser(&body),
Header: map[string][]string{"Content-Type": {api.JsonMediaType}},
}
err = decodeResp(r, nil)
errJson := &httputil.DefaultJsonError{}
require.Equal(t, true, errors.As(err, &errJson))
assert.Equal(t, http.StatusInternalServerError, errJson.Code)
assert.Equal(t, "error", errJson.Message)
})
t.Run("200 JSON cannot decode", func(t *testing.T) {
body := bytes.Buffer{}
_, err := body.WriteString("foo")
require.NoError(t, err)
r := &http.Response{
Status: "200",
StatusCode: http.StatusOK,
Body: io.NopCloser(&body),
Header: map[string][]string{"Content-Type": {api.JsonMediaType}},
Request: &http.Request{},
}
resp := &j{}
err = decodeResp(r, resp)
assert.ErrorContains(t, "failed to decode response body into json", err)
})
t.Run("500 JSON cannot decode", func(t *testing.T) {
body := bytes.Buffer{}
_, err := body.WriteString("foo")
require.NoError(t, err)
r := &http.Response{
Status: "500",
StatusCode: http.StatusInternalServerError,
Body: io.NopCloser(&body),
Header: map[string][]string{"Content-Type": {api.JsonMediaType}},
Request: &http.Request{},
}
err = decodeResp(r, nil)
assert.ErrorContains(t, "failed to decode response body into error json", err)
})
t.Run("500 not JSON", func(t *testing.T) {
body := bytes.Buffer{}
_, err := body.WriteString("foo")
require.NoError(t, err)
r := &http.Response{
Status: "500",
StatusCode: http.StatusInternalServerError,
Body: io.NopCloser(&body),
Header: map[string][]string{"Content-Type": {"text/plain"}},
Request: &http.Request{},
}
err = decodeResp(r, nil)
assert.ErrorContains(t, "HTTP request unsuccessful (500: foo)", err)
})
}
func TestGetStatusCode(t *testing.T) {
ctx := t.Context()
const endpoint = "/eth/v1/node/health"
@@ -401,10 +238,7 @@ func TestGetStatusCode(t *testing.T) {
server := httptest.NewServer(mux)
defer server.Close()
jsonRestHandler := BeaconApiRestHandler{
client: http.Client{Timeout: time.Second * 5},
host: server.URL,
}
jsonRestHandler := rest.NewRestHandler(http.Client{Timeout: time.Second * 5}, server.URL)
statusCode, err := jsonRestHandler.GetStatusCode(ctx, endpoint)
require.NoError(t, err)
@@ -413,10 +247,7 @@ func TestGetStatusCode(t *testing.T) {
}
t.Run("returns error on connection failure", func(t *testing.T) {
jsonRestHandler := BeaconApiRestHandler{
client: http.Client{Timeout: time.Millisecond * 100},
host: "http://localhost:99999", // Invalid port
}
jsonRestHandler := rest.NewRestHandler(http.Client{Timeout: time.Millisecond * 100}, "http://localhost:99999")
_, err := jsonRestHandler.GetStatusCode(ctx, endpoint)
require.ErrorContains(t, "failed to perform request", err)

View File

@@ -9,6 +9,7 @@ import (
"strconv"
"github.com/OffchainLabs/prysm/v7/api/apiutil"
"github.com/OffchainLabs/prysm/v7/api/rest"
"github.com/OffchainLabs/prysm/v7/api/server/structs"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/pkg/errors"
@@ -21,7 +22,7 @@ type StateValidatorsProvider interface {
}
type beaconApiStateValidatorsProvider struct {
jsonRestHandler RestHandler
jsonRestHandler rest.RestHandler
}
func (c beaconApiStateValidatorsProvider) StateValidators(

View File

@@ -9,19 +9,17 @@ import (
validatorHelpers "github.com/OffchainLabs/prysm/v7/validator/helpers"
)
func NewChainClient(validatorConn validatorHelpers.NodeConnection, jsonRestHandler beaconApi.RestHandler) iface.ChainClient {
grpcClient := grpcApi.NewGrpcChainClient(validatorConn.GetGrpcClientConn())
func NewChainClient(validatorConn validatorHelpers.NodeConnection) iface.ChainClient {
grpcClient := grpcApi.NewGrpcChainClient(validatorConn)
if features.Get().EnableBeaconRESTApi {
return beaconApi.NewBeaconApiChainClientWithFallback(jsonRestHandler, grpcClient)
} else {
return grpcClient
return beaconApi.NewBeaconApiChainClientWithFallback(validatorConn.GetRestHandler(), grpcClient)
}
return grpcClient
}
func NewPrysmChainClient(validatorConn validatorHelpers.NodeConnection, jsonRestHandler beaconApi.RestHandler) iface.PrysmChainClient {
func NewPrysmChainClient(validatorConn validatorHelpers.NodeConnection) iface.PrysmChainClient {
if features.Get().EnableBeaconRESTApi {
return beaconApi.NewPrysmChainClient(jsonRestHandler, nodeClientFactory.NewNodeClient(validatorConn, jsonRestHandler))
} else {
return grpcApi.NewGrpcPrysmChainClient(validatorConn.GetGrpcClientConn())
return beaconApi.NewPrysmChainClient(validatorConn.GetRestHandler(), nodeClientFactory.NewNodeClient(validatorConn))
}
return grpcApi.NewGrpcPrysmChainClient(validatorConn)
}
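
Both factories now build the gRPC client unconditionally and, when the REST API is enabled, hand it to the fallback wrapper together with the connection's REST handler. The wrapper itself lives in the beacon-api package and is not shown in this diff; purely to illustrate the intended pattern, a fallback call might look roughly like the sketch below (type name and method body are assumptions, not the real NewBeaconApiChainClientWithFallback; imports as in the surrounding files).

```go
// Illustrative sketch only; not the actual beacon-api fallback client.
type chainClientWithFallback struct {
	rest iface.ChainClient // REST-backed primary
	grpc iface.ChainClient // gRPC fallback
}

func (c *chainClientWithFallback) ChainHead(ctx context.Context, in *empty.Empty) (*ethpb.ChainHead, error) {
	head, err := c.rest.ChainHead(ctx, in)
	if err == nil {
		return head, nil
	}
	// Assumed behavior: retry the same request over gRPC when REST fails.
	return c.grpc.ChainHead(ctx, in)
}
```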

View File

@@ -4,6 +4,7 @@ go_library(
name = "go_default_library",
srcs = [
"grpc_beacon_chain_client.go",
"grpc_client_manager.go",
"grpc_node_client.go",
"grpc_prysm_beacon_chain_client.go",
"grpc_validator_client.go",
@@ -25,6 +26,7 @@ go_library(
"//proto/eth/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//validator/client/iface:go_default_library",
"//validator/helpers:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_golang_protobuf//ptypes/empty",
"@com_github_pkg_errors//:go_default_library",
@@ -39,6 +41,8 @@ go_test(
name = "go_default_test",
size = "small",
srcs = [
"grpc_client_manager_test.go",
"grpc_node_client_test.go",
"grpc_prysm_beacon_chain_client_test.go",
"grpc_validator_client_test.go",
],
@@ -56,7 +60,11 @@ go_test(
"//testing/util:go_default_library",
"//testing/validator-mock:go_default_library",
"//validator/client/iface:go_default_library",
"//validator/helpers:go_default_library",
"//validator/testing:go_default_library",
"@com_github_golang_protobuf//ptypes/empty",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@org_golang_google_grpc//:go_default_library",
"@org_golang_google_protobuf//types/known/emptypb:go_default_library",
"@org_uber_go_mock//gomock:go_default_library",
],

View File

@@ -5,38 +5,42 @@ import (
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/validator/client/iface"
validatorHelpers "github.com/OffchainLabs/prysm/v7/validator/helpers"
"github.com/golang/protobuf/ptypes/empty"
"google.golang.org/grpc"
)
type grpcChainClient struct {
beaconChainClient ethpb.BeaconChainClient
*grpcClientManager[ethpb.BeaconChainClient]
}
func (c *grpcChainClient) ChainHead(ctx context.Context, in *empty.Empty) (*ethpb.ChainHead, error) {
return c.beaconChainClient.GetChainHead(ctx, in)
return c.getClient().GetChainHead(ctx, in)
}
func (c *grpcChainClient) ValidatorBalances(ctx context.Context, in *ethpb.ListValidatorBalancesRequest) (*ethpb.ValidatorBalances, error) {
return c.beaconChainClient.ListValidatorBalances(ctx, in)
return c.getClient().ListValidatorBalances(ctx, in)
}
func (c *grpcChainClient) Validators(ctx context.Context, in *ethpb.ListValidatorsRequest) (*ethpb.Validators, error) {
return c.beaconChainClient.ListValidators(ctx, in)
return c.getClient().ListValidators(ctx, in)
}
func (c *grpcChainClient) ValidatorQueue(ctx context.Context, in *empty.Empty) (*ethpb.ValidatorQueue, error) {
return c.beaconChainClient.GetValidatorQueue(ctx, in)
return c.getClient().GetValidatorQueue(ctx, in)
}
func (c *grpcChainClient) ValidatorPerformance(ctx context.Context, in *ethpb.ValidatorPerformanceRequest) (*ethpb.ValidatorPerformanceResponse, error) {
return c.beaconChainClient.GetValidatorPerformance(ctx, in)
return c.getClient().GetValidatorPerformance(ctx, in)
}
func (c *grpcChainClient) ValidatorParticipation(ctx context.Context, in *ethpb.GetValidatorParticipationRequest) (*ethpb.ValidatorParticipationResponse, error) {
return c.beaconChainClient.GetValidatorParticipation(ctx, in)
return c.getClient().GetValidatorParticipation(ctx, in)
}
func NewGrpcChainClient(cc grpc.ClientConnInterface) iface.ChainClient {
return &grpcChainClient{ethpb.NewBeaconChainClient(cc)}
// NewGrpcChainClient creates a new gRPC chain client that supports
// dynamic connection switching via the NodeConnection's GrpcConnectionProvider.
func NewGrpcChainClient(conn validatorHelpers.NodeConnection) iface.ChainClient {
return &grpcChainClient{
grpcClientManager: newGrpcClientManager(conn, ethpb.NewBeaconChainClient),
}
}

View File

@@ -0,0 +1,44 @@
package grpc_api
import (
"sync"
validatorHelpers "github.com/OffchainLabs/prysm/v7/validator/helpers"
"google.golang.org/grpc"
)
// grpcClientManager handles dynamic gRPC client recreation when the connection changes.
// It uses generics to work with any gRPC client type.
type grpcClientManager[T any] struct {
mu sync.Mutex
conn validatorHelpers.NodeConnection
client T
lastHost string
newClient func(grpc.ClientConnInterface) T
}
// newGrpcClientManager creates a new client manager with the given connection and client constructor.
func newGrpcClientManager[T any](
conn validatorHelpers.NodeConnection,
newClient func(grpc.ClientConnInterface) T,
) *grpcClientManager[T] {
return &grpcClientManager[T]{
conn: conn,
newClient: newClient,
client: newClient(conn.GetGrpcClientConn()),
lastHost: conn.GetGrpcConnectionProvider().CurrentHost(),
}
}
// getClient returns the current client, recreating it if the connection has changed.
func (m *grpcClientManager[T]) getClient() T {
m.mu.Lock()
defer m.mu.Unlock()
currentHost := m.conn.GetGrpcConnectionProvider().CurrentHost()
if m.lastHost != currentHost {
m.client = m.newClient(m.conn.GetGrpcClientConn())
m.lastHost = currentHost
}
return m.client
}
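
The manager keys its cached stub on the provider's current host, so a host switch is picked up lazily on the next RPC rather than by eagerly rebuilding every client. A minimal usage sketch (assuming a NodeConnection carrying a GrpcConnectionProvider with at least two hosts; imports as in the surrounding files):

```go
chain := NewGrpcChainClient(conn)

// Served through the provider's current host.
_, _ = chain.ChainHead(ctx, &empty.Empty{})

// Elsewhere, e.g. after a failed health check, switch to the second host.
_ = conn.GetGrpcConnectionProvider().SwitchHost(1)

// The next call sees the changed host and recreates the underlying stub.
_, _ = chain.ChainHead(ctx, &empty.Empty{})
```

Holding the mutex only while reading or replacing the cached stub keeps the RPC itself outside the critical section.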

View File

@@ -0,0 +1,168 @@
package grpc_api
import (
"sync"
"testing"
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
validatorHelpers "github.com/OffchainLabs/prysm/v7/validator/helpers"
"google.golang.org/grpc"
)
// mockProvider implements grpcutil.GrpcConnectionProvider for testing.
type mockProvider struct {
hosts []string
currentIndex int
mu sync.Mutex
}
func (m *mockProvider) CurrentConn() *grpc.ClientConn { return nil }
func (m *mockProvider) Hosts() []string { return m.hosts }
func (m *mockProvider) Close() {}
func (m *mockProvider) CurrentHost() string {
m.mu.Lock()
defer m.mu.Unlock()
return m.hosts[m.currentIndex]
}
func (m *mockProvider) SwitchHost(index int) error {
m.mu.Lock()
defer m.mu.Unlock()
m.currentIndex = index
return nil
}
// nextHost is a test helper for round-robin simulation (not part of the interface).
func (m *mockProvider) nextHost() {
m.mu.Lock()
defer m.mu.Unlock()
m.currentIndex = (m.currentIndex + 1) % len(m.hosts)
}
// testClient is a simple type for testing the generic client manager.
type testClient struct{ id int }
// testManager creates a manager with client creation counting.
func testManager(t *testing.T, provider *mockProvider) (*grpcClientManager[*testClient], *int) {
conn, err := validatorHelpers.NewNodeConnection(validatorHelpers.WithGRPCProvider(provider))
require.NoError(t, err)
clientCount := new(int)
newClient := func(grpc.ClientConnInterface) *testClient {
*clientCount++
return &testClient{id: *clientCount}
}
manager := newGrpcClientManager(conn, newClient)
require.NotNil(t, manager)
return manager, clientCount
}
func TestGrpcClientManager(t *testing.T) {
t.Run("tracks host", func(t *testing.T) {
provider := &mockProvider{hosts: []string{"host1:4000", "host2:4000"}}
manager, count := testManager(t, provider)
assert.Equal(t, 1, *count)
assert.Equal(t, "host1:4000", manager.lastHost)
})
t.Run("same host returns same client", func(t *testing.T) {
provider := &mockProvider{hosts: []string{"host1:4000", "host2:4000"}}
manager, count := testManager(t, provider)
c1, c2, c3 := manager.getClient(), manager.getClient(), manager.getClient()
assert.Equal(t, 1, *count)
assert.Equal(t, c1, c2)
assert.Equal(t, c2, c3)
})
t.Run("host change recreates client", func(t *testing.T) {
provider := &mockProvider{hosts: []string{"host1:4000", "host2:4000"}}
manager, count := testManager(t, provider)
c1 := manager.getClient()
assert.Equal(t, 1, c1.id)
provider.nextHost()
c2 := manager.getClient()
assert.Equal(t, 2, *count)
assert.Equal(t, 2, c2.id)
// Same host again - no recreation
c3 := manager.getClient()
assert.Equal(t, 2, *count)
assert.Equal(t, c2, c3)
})
t.Run("multiple host switches", func(t *testing.T) {
provider := &mockProvider{hosts: []string{"host1:4000", "host2:4000", "host3:4000"}}
manager, count := testManager(t, provider)
assert.Equal(t, 1, *count)
for expected := 2; expected <= 4; expected++ {
provider.nextHost()
_ = manager.getClient()
assert.Equal(t, expected, *count)
}
})
}
func TestGrpcClientManager_Concurrent(t *testing.T) {
t.Run("concurrent access same host", func(t *testing.T) {
provider := &mockProvider{hosts: []string{"host1:4000", "host2:4000"}}
manager, _ := testManager(t, provider)
var clientCount int
var countMu sync.Mutex
// Override with thread-safe counter
manager.newClient = func(grpc.ClientConnInterface) *testClient {
countMu.Lock()
clientCount++
id := clientCount
countMu.Unlock()
return &testClient{id: id}
}
manager.client = manager.newClient(nil)
clientCount = 1
var wg sync.WaitGroup
for range 100 {
wg.Go(func() { _ = manager.getClient() })
}
wg.Wait()
countMu.Lock()
assert.Equal(t, 1, clientCount)
countMu.Unlock()
})
t.Run("concurrent with host changes", func(t *testing.T) {
provider := &mockProvider{hosts: []string{"host1:4000", "host2:4000"}}
manager, _ := testManager(t, provider)
var clientCount int
var countMu sync.Mutex
manager.newClient = func(grpc.ClientConnInterface) *testClient {
countMu.Lock()
clientCount++
id := clientCount
countMu.Unlock()
return &testClient{id: id}
}
manager.client = manager.newClient(nil)
clientCount = 1
var wg sync.WaitGroup
for range 50 {
wg.Go(func() { _ = manager.getClient() })
wg.Go(func() { provider.nextHost() })
}
wg.Wait()
countMu.Lock()
assert.NotEqual(t, 0, clientCount, "Should have created at least one client")
countMu.Unlock()
})
}

View File

@@ -5,8 +5,8 @@ import (
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/validator/client/iface"
validatorHelpers "github.com/OffchainLabs/prysm/v7/validator/helpers"
"github.com/golang/protobuf/ptypes/empty"
"google.golang.org/grpc"
)
var (
@@ -14,35 +14,40 @@ var (
)
type grpcNodeClient struct {
nodeClient ethpb.NodeClient
*grpcClientManager[ethpb.NodeClient]
}
func (c *grpcNodeClient) SyncStatus(ctx context.Context, in *empty.Empty) (*ethpb.SyncStatus, error) {
return c.nodeClient.GetSyncStatus(ctx, in)
return c.getClient().GetSyncStatus(ctx, in)
}
func (c *grpcNodeClient) Genesis(ctx context.Context, in *empty.Empty) (*ethpb.Genesis, error) {
return c.nodeClient.GetGenesis(ctx, in)
return c.getClient().GetGenesis(ctx, in)
}
func (c *grpcNodeClient) Version(ctx context.Context, in *empty.Empty) (*ethpb.Version, error) {
return c.nodeClient.GetVersion(ctx, in)
return c.getClient().GetVersion(ctx, in)
}
func (c *grpcNodeClient) Peers(ctx context.Context, in *empty.Empty) (*ethpb.Peers, error) {
return c.nodeClient.ListPeers(ctx, in)
return c.getClient().ListPeers(ctx, in)
}
func (c *grpcNodeClient) IsReady(ctx context.Context) bool {
_, err := c.nodeClient.GetHealth(ctx, &ethpb.HealthRequest{})
// GetHealth returns 200 OK only if node is synced and not optimistic.
// Otherwise, an error is returned.
_, err := c.getClient().GetHealth(ctx, &ethpb.HealthRequest{})
if err != nil {
log.WithError(err).Error("Failed to get health of node")
log.WithError(err).Debug("Node is not ready")
return false
}
return true
}
func NewNodeClient(cc grpc.ClientConnInterface) iface.NodeClient {
g := &grpcNodeClient{nodeClient: ethpb.NewNodeClient(cc)}
return g
// NewNodeClient creates a new gRPC node client that supports
// dynamic connection switching via the NodeConnection's GrpcConnectionProvider.
func NewNodeClient(conn validatorHelpers.NodeConnection) iface.NodeClient {
return &grpcNodeClient{
grpcClientManager: newGrpcClientManager(conn, ethpb.NewNodeClient),
}
}
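
Since GetHealth now succeeds only for a synced, non-optimistic node, IsReady doubles as a "fully ready" signal. A minimal sketch of a caller polling it (assumed code, not part of this diff):

```go
nodeClient := NewNodeClient(conn) // conn built with a gRPC provider

// Block until the beacon node reports fully synced and non-optimistic.
for !nodeClient.IsReady(ctx) {
	time.Sleep(time.Second) // assumed back-off; the validator itself uses FindHealthyHost
}
```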

View File

@@ -0,0 +1,72 @@
package grpc_api
import (
"errors"
"testing"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/mock"
"github.com/OffchainLabs/prysm/v7/testing/require"
validatorHelpers "github.com/OffchainLabs/prysm/v7/validator/helpers"
"github.com/golang/protobuf/ptypes/empty"
"go.uber.org/mock/gomock"
"google.golang.org/grpc"
)
func TestGrpcNodeClient_IsReady(t *testing.T) {
// The IsReady function now relies on GetHealth which returns:
// - 200 OK (nil error) only if node is synced AND not optimistic
// - 206 Partial Content (error) if syncing or optimistic
// - 503 Unavailable (error) if unavailable
testCases := []struct {
name string
healthErr error
expectedResult bool
}{
{
name: "returns true when health check succeeds (synced and not optimistic)",
healthErr: nil,
expectedResult: true,
},
{
name: "returns false when health check fails (syncing, optimistic, or unavailable)",
healthErr: errors.New("node not ready"),
expectedResult: false,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
ctrl := gomock.NewController(t)
defer ctrl.Finish()
ctx := t.Context()
mockNodeClient := mock.NewMockNodeClient(ctrl)
// Set up health check expectation
mockNodeClient.EXPECT().GetHealth(
gomock.Any(),
gomock.Any(),
).Return(&empty.Empty{}, tc.healthErr)
// Create a mock provider
provider := &mockProvider{hosts: []string{"host1:4000"}}
conn, err := validatorHelpers.NewNodeConnection(validatorHelpers.WithGRPCProvider(provider))
require.NoError(t, err)
// Create client with injected mock
client := &grpcNodeClient{
grpcClientManager: &grpcClientManager[ethpb.NodeClient]{
conn: conn,
client: mockNodeClient,
lastHost: "host1:4000",
newClient: func(grpc.ClientConnInterface) ethpb.NodeClient { return mockNodeClient },
},
}
result := client.IsReady(ctx)
assert.Equal(t, tc.expectedResult, result)
})
}
}

View File

@@ -12,9 +12,9 @@ import (
eth "github.com/OffchainLabs/prysm/v7/proto/eth/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/validator/client/iface"
validatorHelpers "github.com/OffchainLabs/prysm/v7/validator/helpers"
"github.com/golang/protobuf/ptypes/empty"
"github.com/pkg/errors"
"google.golang.org/grpc"
)
type grpcPrysmChainClient struct {
@@ -95,6 +95,8 @@ func (c *grpcPrysmChainClient) ValidatorPerformance(ctx context.Context, in *eth
return c.chainClient.ValidatorPerformance(ctx, in)
}
func NewGrpcPrysmChainClient(cc grpc.ClientConnInterface) iface.PrysmChainClient {
return &grpcPrysmChainClient{chainClient: &grpcChainClient{ethpb.NewBeaconChainClient(cc)}}
// NewGrpcPrysmChainClient creates a new gRPC Prysm chain client that supports
// dynamic connection switching via the NodeConnection's GrpcConnectionProvider.
func NewGrpcPrysmChainClient(conn validatorHelpers.NodeConnection) iface.PrysmChainClient {
return &grpcPrysmChainClient{chainClient: NewGrpcChainClient(conn)}
}

View File

@@ -14,24 +14,24 @@ import (
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/validator/client/iface"
validatorHelpers "github.com/OffchainLabs/prysm/v7/validator/helpers"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/golang/protobuf/ptypes/empty"
"github.com/pkg/errors"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
type grpcValidatorClient struct {
beaconNodeValidatorClient ethpb.BeaconNodeValidatorClient
isEventStreamRunning bool
*grpcClientManager[ethpb.BeaconNodeValidatorClient]
isEventStreamRunning bool
}
func (c *grpcValidatorClient) Duties(ctx context.Context, in *ethpb.DutiesRequest) (*ethpb.ValidatorDutiesContainer, error) {
if features.Get().DisableDutiesV2 {
return c.getDuties(ctx, in)
}
dutiesResponse, err := c.beaconNodeValidatorClient.GetDutiesV2(ctx, in)
dutiesResponse, err := c.getClient().GetDutiesV2(ctx, in)
if err != nil {
if status.Code(err) == codes.Unimplemented {
log.Warn("GetDutiesV2 returned status code unavailable, falling back to GetDuties")
@@ -47,7 +47,7 @@ func (c *grpcValidatorClient) Duties(ctx context.Context, in *ethpb.DutiesReques
// getDuties is calling the v1 of get duties
func (c *grpcValidatorClient) getDuties(ctx context.Context, in *ethpb.DutiesRequest) (*ethpb.ValidatorDutiesContainer, error) {
dutiesResponse, err := c.beaconNodeValidatorClient.GetDuties(ctx, in)
dutiesResponse, err := c.getClient().GetDuties(ctx, in)
if err != nil {
return nil, errors.Wrap(
client.ErrConnectionIssue,
@@ -147,108 +147,108 @@ func toValidatorDutyV2(duty *ethpb.DutiesV2Response_Duty) (*ethpb.ValidatorDuty,
}
func (c *grpcValidatorClient) CheckDoppelGanger(ctx context.Context, in *ethpb.DoppelGangerRequest) (*ethpb.DoppelGangerResponse, error) {
return c.beaconNodeValidatorClient.CheckDoppelGanger(ctx, in)
return c.getClient().CheckDoppelGanger(ctx, in)
}
func (c *grpcValidatorClient) DomainData(ctx context.Context, in *ethpb.DomainRequest) (*ethpb.DomainResponse, error) {
return c.beaconNodeValidatorClient.DomainData(ctx, in)
return c.getClient().DomainData(ctx, in)
}
func (c *grpcValidatorClient) AttestationData(ctx context.Context, in *ethpb.AttestationDataRequest) (*ethpb.AttestationData, error) {
return c.beaconNodeValidatorClient.GetAttestationData(ctx, in)
return c.getClient().GetAttestationData(ctx, in)
}
func (c *grpcValidatorClient) BeaconBlock(ctx context.Context, in *ethpb.BlockRequest) (*ethpb.GenericBeaconBlock, error) {
return c.beaconNodeValidatorClient.GetBeaconBlock(ctx, in)
return c.getClient().GetBeaconBlock(ctx, in)
}
func (c *grpcValidatorClient) FeeRecipientByPubKey(ctx context.Context, in *ethpb.FeeRecipientByPubKeyRequest) (*ethpb.FeeRecipientByPubKeyResponse, error) {
return c.beaconNodeValidatorClient.GetFeeRecipientByPubKey(ctx, in)
return c.getClient().GetFeeRecipientByPubKey(ctx, in)
}
func (c *grpcValidatorClient) SyncCommitteeContribution(ctx context.Context, in *ethpb.SyncCommitteeContributionRequest) (*ethpb.SyncCommitteeContribution, error) {
return c.beaconNodeValidatorClient.GetSyncCommitteeContribution(ctx, in)
return c.getClient().GetSyncCommitteeContribution(ctx, in)
}
func (c *grpcValidatorClient) SyncMessageBlockRoot(ctx context.Context, in *empty.Empty) (*ethpb.SyncMessageBlockRootResponse, error) {
return c.beaconNodeValidatorClient.GetSyncMessageBlockRoot(ctx, in)
return c.getClient().GetSyncMessageBlockRoot(ctx, in)
}
func (c *grpcValidatorClient) SyncSubcommitteeIndex(ctx context.Context, in *ethpb.SyncSubcommitteeIndexRequest) (*ethpb.SyncSubcommitteeIndexResponse, error) {
return c.beaconNodeValidatorClient.GetSyncSubcommitteeIndex(ctx, in)
return c.getClient().GetSyncSubcommitteeIndex(ctx, in)
}
func (c *grpcValidatorClient) MultipleValidatorStatus(ctx context.Context, in *ethpb.MultipleValidatorStatusRequest) (*ethpb.MultipleValidatorStatusResponse, error) {
return c.beaconNodeValidatorClient.MultipleValidatorStatus(ctx, in)
return c.getClient().MultipleValidatorStatus(ctx, in)
}
func (c *grpcValidatorClient) PrepareBeaconProposer(ctx context.Context, in *ethpb.PrepareBeaconProposerRequest) (*empty.Empty, error) {
return c.beaconNodeValidatorClient.PrepareBeaconProposer(ctx, in)
return c.getClient().PrepareBeaconProposer(ctx, in)
}
func (c *grpcValidatorClient) ProposeAttestation(ctx context.Context, in *ethpb.Attestation) (*ethpb.AttestResponse, error) {
return c.beaconNodeValidatorClient.ProposeAttestation(ctx, in)
return c.getClient().ProposeAttestation(ctx, in)
}
func (c *grpcValidatorClient) ProposeAttestationElectra(ctx context.Context, in *ethpb.SingleAttestation) (*ethpb.AttestResponse, error) {
return c.beaconNodeValidatorClient.ProposeAttestationElectra(ctx, in)
return c.getClient().ProposeAttestationElectra(ctx, in)
}
func (c *grpcValidatorClient) ProposeBeaconBlock(ctx context.Context, in *ethpb.GenericSignedBeaconBlock) (*ethpb.ProposeResponse, error) {
return c.beaconNodeValidatorClient.ProposeBeaconBlock(ctx, in)
return c.getClient().ProposeBeaconBlock(ctx, in)
}
func (c *grpcValidatorClient) ProposeExit(ctx context.Context, in *ethpb.SignedVoluntaryExit) (*ethpb.ProposeExitResponse, error) {
return c.beaconNodeValidatorClient.ProposeExit(ctx, in)
return c.getClient().ProposeExit(ctx, in)
}
func (c *grpcValidatorClient) StreamBlocksAltair(ctx context.Context, in *ethpb.StreamBlocksRequest) (ethpb.BeaconNodeValidator_StreamBlocksAltairClient, error) {
return c.beaconNodeValidatorClient.StreamBlocksAltair(ctx, in)
return c.getClient().StreamBlocksAltair(ctx, in)
}
func (c *grpcValidatorClient) SubmitAggregateSelectionProof(ctx context.Context, in *ethpb.AggregateSelectionRequest, _ primitives.ValidatorIndex, _ uint64) (*ethpb.AggregateSelectionResponse, error) {
return c.beaconNodeValidatorClient.SubmitAggregateSelectionProof(ctx, in)
return c.getClient().SubmitAggregateSelectionProof(ctx, in)
}
func (c *grpcValidatorClient) SubmitAggregateSelectionProofElectra(ctx context.Context, in *ethpb.AggregateSelectionRequest, _ primitives.ValidatorIndex, _ uint64) (*ethpb.AggregateSelectionElectraResponse, error) {
return c.beaconNodeValidatorClient.SubmitAggregateSelectionProofElectra(ctx, in)
return c.getClient().SubmitAggregateSelectionProofElectra(ctx, in)
}
func (c *grpcValidatorClient) SubmitSignedAggregateSelectionProof(ctx context.Context, in *ethpb.SignedAggregateSubmitRequest) (*ethpb.SignedAggregateSubmitResponse, error) {
return c.beaconNodeValidatorClient.SubmitSignedAggregateSelectionProof(ctx, in)
return c.getClient().SubmitSignedAggregateSelectionProof(ctx, in)
}
func (c *grpcValidatorClient) SubmitSignedAggregateSelectionProofElectra(ctx context.Context, in *ethpb.SignedAggregateSubmitElectraRequest) (*ethpb.SignedAggregateSubmitResponse, error) {
return c.beaconNodeValidatorClient.SubmitSignedAggregateSelectionProofElectra(ctx, in)
return c.getClient().SubmitSignedAggregateSelectionProofElectra(ctx, in)
}
func (c *grpcValidatorClient) SubmitSignedContributionAndProof(ctx context.Context, in *ethpb.SignedContributionAndProof) (*empty.Empty, error) {
return c.beaconNodeValidatorClient.SubmitSignedContributionAndProof(ctx, in)
return c.getClient().SubmitSignedContributionAndProof(ctx, in)
}
func (c *grpcValidatorClient) SubmitSyncMessage(ctx context.Context, in *ethpb.SyncCommitteeMessage) (*empty.Empty, error) {
return c.beaconNodeValidatorClient.SubmitSyncMessage(ctx, in)
return c.getClient().SubmitSyncMessage(ctx, in)
}
func (c *grpcValidatorClient) SubmitValidatorRegistrations(ctx context.Context, in *ethpb.SignedValidatorRegistrationsV1) (*empty.Empty, error) {
return c.beaconNodeValidatorClient.SubmitValidatorRegistrations(ctx, in)
return c.getClient().SubmitValidatorRegistrations(ctx, in)
}
func (c *grpcValidatorClient) SubscribeCommitteeSubnets(ctx context.Context, in *ethpb.CommitteeSubnetsSubscribeRequest, _ []*ethpb.ValidatorDuty) (*empty.Empty, error) {
return c.beaconNodeValidatorClient.SubscribeCommitteeSubnets(ctx, in)
return c.getClient().SubscribeCommitteeSubnets(ctx, in)
}
func (c *grpcValidatorClient) ValidatorIndex(ctx context.Context, in *ethpb.ValidatorIndexRequest) (*ethpb.ValidatorIndexResponse, error) {
return c.beaconNodeValidatorClient.ValidatorIndex(ctx, in)
return c.getClient().ValidatorIndex(ctx, in)
}
func (c *grpcValidatorClient) ValidatorStatus(ctx context.Context, in *ethpb.ValidatorStatusRequest) (*ethpb.ValidatorStatusResponse, error) {
return c.beaconNodeValidatorClient.ValidatorStatus(ctx, in)
return c.getClient().ValidatorStatus(ctx, in)
}
// Deprecated: Do not use.
func (c *grpcValidatorClient) WaitForChainStart(ctx context.Context, in *empty.Empty) (*ethpb.ChainStartResponse, error) {
stream, err := c.beaconNodeValidatorClient.WaitForChainStart(ctx, in)
stream, err := c.getClient().WaitForChainStart(ctx, in)
if err != nil {
return nil, errors.Wrap(
client.ErrConnectionIssue,
@@ -260,13 +260,13 @@ func (c *grpcValidatorClient) WaitForChainStart(ctx context.Context, in *empty.E
}
func (c *grpcValidatorClient) AssignValidatorToSubnet(ctx context.Context, in *ethpb.AssignValidatorToSubnetRequest) (*empty.Empty, error) {
return c.beaconNodeValidatorClient.AssignValidatorToSubnet(ctx, in)
return c.getClient().AssignValidatorToSubnet(ctx, in)
}
func (c *grpcValidatorClient) AggregatedSigAndAggregationBits(
ctx context.Context,
in *ethpb.AggregatedSigAndAggregationBitsRequest,
) (*ethpb.AggregatedSigAndAggregationBitsResponse, error) {
return c.beaconNodeValidatorClient.AggregatedSigAndAggregationBits(ctx, in)
return c.getClient().AggregatedSigAndAggregationBits(ctx, in)
}
func (*grpcValidatorClient) AggregatedSelections(context.Context, []iface.BeaconCommitteeSelection) ([]iface.BeaconCommitteeSelection, error) {
@@ -277,8 +277,12 @@ func (*grpcValidatorClient) AggregatedSyncSelections(context.Context, []iface.Sy
return nil, iface.ErrNotSupported
}
func NewGrpcValidatorClient(cc grpc.ClientConnInterface) iface.ValidatorClient {
return &grpcValidatorClient{ethpb.NewBeaconNodeValidatorClient(cc), false}
// NewGrpcValidatorClient creates a new gRPC validator client that supports
// dynamic connection switching via the NodeConnection's GrpcConnectionProvider.
func NewGrpcValidatorClient(conn validatorHelpers.NodeConnection) iface.ValidatorClient {
return &grpcValidatorClient{
grpcClientManager: newGrpcClientManager(conn, ethpb.NewBeaconNodeValidatorClient),
}
}
func (c *grpcValidatorClient) StartEventStream(ctx context.Context, topics []string, eventsChannel chan<- *eventClient.Event) {
@@ -308,7 +312,7 @@ func (c *grpcValidatorClient) StartEventStream(ctx context.Context, topics []str
log.Warn("gRPC only supports the head topic, other topics will be ignored")
}
stream, err := c.beaconNodeValidatorClient.StreamSlots(ctx, &ethpb.StreamSlotsRequest{VerifiedOnly: true})
stream, err := c.getClient().StreamSlots(ctx, &ethpb.StreamSlotsRequest{VerifiedOnly: true})
if err != nil {
eventsChannel <- &eventClient.Event{
EventType: eventClient.EventConnectionError,
@@ -374,11 +378,20 @@ func (c *grpcValidatorClient) EventStreamIsRunning() bool {
return c.isEventStreamRunning
}
func (*grpcValidatorClient) Host() string {
log.Warn(iface.ErrNotSupported)
return ""
func (c *grpcValidatorClient) Host() string {
return c.grpcClientManager.conn.GetGrpcConnectionProvider().CurrentHost()
}
func (*grpcValidatorClient) SetHost(_ string) {
log.Warn(iface.ErrNotSupported)
func (c *grpcValidatorClient) SwitchHost(host string) {
provider := c.grpcClientManager.conn.GetGrpcConnectionProvider()
// Find the index of the requested host and switch to it
for i, h := range provider.Hosts() {
if h == host {
if err := provider.SwitchHost(i); err != nil {
log.WithError(err).WithField("host", host).Error("Failed to set gRPC host")
}
return
}
}
log.WithField("host", host).Warn("Requested gRPC host not found in configured endpoints")
}
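
The gRPC validator client now implements the same Host/SwitchHost contract as the REST client, so the validator's failover code no longer needs to special-case transports. A small sketch of a manual switch (assumed caller code):

```go
vc := NewGrpcValidatorClient(conn) // conn configured with several gRPC hosts

fmt.Println("current host:", vc.Host())

// Switch to a named backup host; an unknown host only logs a warning.
vc.SwitchHost("host2:4000")
```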

View File

@@ -14,8 +14,10 @@ import (
"github.com/OffchainLabs/prysm/v7/testing/assert"
mock2 "github.com/OffchainLabs/prysm/v7/testing/mock"
"github.com/OffchainLabs/prysm/v7/testing/require"
validatorTesting "github.com/OffchainLabs/prysm/v7/validator/testing"
logTest "github.com/sirupsen/logrus/hooks/test"
"go.uber.org/mock/gomock"
"google.golang.org/grpc"
"google.golang.org/protobuf/types/known/emptypb"
)
@@ -133,7 +135,15 @@ func TestWaitForChainStart_StreamSetupFails(t *testing.T) {
gomock.Any(),
).Return(nil, errors.New("failed stream"))
validatorClient := &grpcValidatorClient{beaconNodeValidatorClient, true}
validatorClient := &grpcValidatorClient{
grpcClientManager: newGrpcClientManager(
validatorTesting.MockNodeConnection(),
func(_ grpc.ClientConnInterface) eth.BeaconNodeValidatorClient {
return beaconNodeValidatorClient
},
),
isEventStreamRunning: true,
}
_, err := validatorClient.WaitForChainStart(t.Context(), &emptypb.Empty{})
want := "could not setup beacon chain ChainStart streaming client"
assert.ErrorContains(t, want, err)
@@ -146,7 +156,15 @@ func TestStartEventStream(t *testing.T) {
ctrl := gomock.NewController(t)
defer ctrl.Finish()
beaconNodeValidatorClient := mock2.NewMockBeaconNodeValidatorClient(ctrl)
grpcClient := &grpcValidatorClient{beaconNodeValidatorClient, true}
grpcClient := &grpcValidatorClient{
grpcClientManager: newGrpcClientManager(
validatorTesting.MockNodeConnection(),
func(_ grpc.ClientConnInterface) eth.BeaconNodeValidatorClient {
return beaconNodeValidatorClient
},
),
isEventStreamRunning: true,
}
tests := []struct {
name string
topics []string

View File

@@ -152,5 +152,5 @@ type ValidatorClient interface {
AggregatedSelections(ctx context.Context, selections []BeaconCommitteeSelection) ([]BeaconCommitteeSelection, error)
AggregatedSyncSelections(ctx context.Context, selections []SyncCommitteeSelection) ([]SyncCommitteeSelection, error)
Host() string
SetHost(host string)
SwitchHost(host string)
}

View File

@@ -1,53 +0,0 @@
package client
import (
"strings"
"google.golang.org/grpc/resolver"
)
// Modification of a default grpc passthrough resolver (google.golang.org/grpc/resolver/passthrough) allowing to use multiple addresses
// in grpc endpoint. Example:
// conn, err := grpc.DialContext(ctx, "127.0.0.1:4000,127.0.0.1:4001", grpc.WithInsecure(), grpc.WithResolvers(&multipleEndpointsGrpcResolverBuilder{}))
// It can be used with any grpc load balancer (pick_first, round_robin). Default is pick_first.
// Round robin can be used by adding the following option:
// grpc.WithDefaultServiceConfig("{\"loadBalancingConfig\":[{\"round_robin\":{}}]}")
type multipleEndpointsGrpcResolverBuilder struct{}
// Build creates and starts multiple endpoints resolver.
func (*multipleEndpointsGrpcResolverBuilder) Build(target resolver.Target, cc resolver.ClientConn, _ resolver.BuildOptions) (resolver.Resolver, error) {
r := &multipleEndpointsGrpcResolver{
target: target,
cc: cc,
}
r.start()
return r, nil
}
// Scheme returns default scheme.
func (*multipleEndpointsGrpcResolverBuilder) Scheme() string {
return resolver.GetDefaultScheme()
}
type multipleEndpointsGrpcResolver struct {
target resolver.Target
cc resolver.ClientConn
}
func (r *multipleEndpointsGrpcResolver) start() {
ep := r.target.Endpoint()
endpoints := strings.Split(ep, ",")
var addrs []resolver.Address
for _, endpoint := range endpoints {
addrs = append(addrs, resolver.Address{Addr: endpoint, ServerName: endpoint})
}
if err := r.cc.UpdateState(resolver.State{Addresses: addrs}); err != nil {
log.WithError(err).Error("Failed to update grpc connection state")
}
}
// ResolveNow --
func (*multipleEndpointsGrpcResolver) ResolveNow(_ resolver.ResolveNowOptions) {}
// Close --
func (*multipleEndpointsGrpcResolver) Close() {}

View File

@@ -8,11 +8,10 @@ import (
validatorHelpers "github.com/OffchainLabs/prysm/v7/validator/helpers"
)
func NewNodeClient(validatorConn validatorHelpers.NodeConnection, jsonRestHandler beaconApi.RestHandler) iface.NodeClient {
grpcClient := grpcApi.NewNodeClient(validatorConn.GetGrpcClientConn())
func NewNodeClient(validatorConn validatorHelpers.NodeConnection) iface.NodeClient {
grpcClient := grpcApi.NewNodeClient(validatorConn)
if features.Get().EnableBeaconRESTApi {
return beaconApi.NewNodeClientWithFallback(jsonRestHandler, grpcClient)
} else {
return grpcClient
return beaconApi.NewNodeClientWithFallback(validatorConn.GetRestHandler(), grpcClient)
}
return grpcClient
}

View File

@@ -2,13 +2,11 @@ package client
import (
"context"
"net/http"
"strings"
"time"
api "github.com/OffchainLabs/prysm/v7/api/client"
eventClient "github.com/OffchainLabs/prysm/v7/api/client/event"
grpcutil "github.com/OffchainLabs/prysm/v7/api/grpc"
"github.com/OffchainLabs/prysm/v7/api/rest"
"github.com/OffchainLabs/prysm/v7/async/event"
lruwrpr "github.com/OffchainLabs/prysm/v7/cache/lru"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
@@ -17,7 +15,6 @@ import (
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/validator/accounts/wallet"
beaconApi "github.com/OffchainLabs/prysm/v7/validator/client/beacon-api"
beaconChainClientFactory "github.com/OffchainLabs/prysm/v7/validator/client/beacon-chain-client-factory"
"github.com/OffchainLabs/prysm/v7/validator/client/iface"
nodeclientfactory "github.com/OffchainLabs/prysm/v7/validator/client/node-client-factory"
@@ -35,7 +32,6 @@ import (
grpcprometheus "github.com/grpc-ecosystem/go-grpc-prometheus"
"github.com/pkg/errors"
"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
"google.golang.org/protobuf/proto"
@@ -72,6 +68,7 @@ type Config struct {
DB db.Database
Wallet *wallet.Wallet
WalletInitializedFeed *event.Feed
Conn validatorHelpers.NodeConnection // Optional: pre-built connection (if nil, built from endpoint configs)
MaxHealthChecks int
GRPCMaxCallRecvMsgSize int
GRPCRetries uint
@@ -122,6 +119,12 @@ func NewValidatorService(ctx context.Context, cfg *Config) (*ValidatorService, e
maxHealthChecks: cfg.MaxHealthChecks,
}
// Use pre-built connection if provided
if cfg.Conn != nil {
s.conn = cfg.Conn
return s, nil
}
dialOpts := ConstructDialOptions(
cfg.GRPCMaxCallRecvMsgSize,
cfg.BeaconNodeCert,
@@ -134,19 +137,21 @@ func NewValidatorService(ctx context.Context, cfg *Config) (*ValidatorService, e
s.ctx = grpcutil.AppendHeaders(ctx, cfg.GRPCHeaders)
grpcConn, err := grpc.DialContext(ctx, cfg.BeaconNodeGRPCEndpoint, dialOpts...)
conn, err := validatorHelpers.NewNodeConnection(
validatorHelpers.WithGRPC(s.ctx, cfg.BeaconNodeGRPCEndpoint, dialOpts),
validatorHelpers.WithREST(cfg.BeaconApiEndpoint,
rest.WithHttpHeaders(cfg.BeaconApiHeaders),
rest.WithHttpTimeout(cfg.BeaconApiTimeout),
rest.WithTracing(),
),
)
if err != nil {
return s, err
}
if cfg.BeaconNodeCert != "" {
if cfg.BeaconNodeCert != "" && cfg.BeaconNodeGRPCEndpoint != "" {
log.Info("Established secure gRPC connection")
}
s.conn = validatorHelpers.NewNodeConnection(
grpcConn,
cfg.BeaconApiEndpoint,
validatorHelpers.WithBeaconApiHeaders(cfg.BeaconApiHeaders),
validatorHelpers.WithBeaconApiTimeout(cfg.BeaconApiTimeout),
)
s.conn = conn
return s, nil
}
@@ -181,20 +186,13 @@ func (v *ValidatorService) Start() {
return
}
u := strings.ReplaceAll(v.conn.GetBeaconApiUrl(), " ", "")
hosts := strings.Split(u, ",")
if len(hosts) == 0 {
log.WithError(err).Error("No API hosts provided")
restProvider := v.conn.GetRestConnectionProvider()
if restProvider == nil || len(restProvider.Hosts()) == 0 {
log.Error("No REST API hosts provided")
return
}
headersTransport := api.NewCustomHeadersTransport(http.DefaultTransport, v.conn.GetBeaconApiHeaders())
restHandler := beaconApi.NewBeaconApiRestHandler(
http.Client{Timeout: v.conn.GetBeaconApiTimeout(), Transport: otelhttp.NewTransport(headersTransport)},
hosts[0],
)
validatorClient := validatorclientfactory.NewValidatorClient(v.conn, restHandler)
validatorClient := validatorclientfactory.NewValidatorClient(v.conn)
v.validator = &validator{
slotFeed: new(event.Feed),
@@ -208,12 +206,12 @@ func (v *ValidatorService) Start() {
graffiti: v.graffiti,
graffitiStruct: v.graffitiStruct,
graffitiOrderedIndex: graffitiOrderedIndex,
beaconNodeHosts: hosts,
conn: v.conn,
currentHostIndex: 0,
validatorClient: validatorClient,
chainClient: beaconChainClientFactory.NewChainClient(v.conn, restHandler),
nodeClient: nodeclientfactory.NewNodeClient(v.conn, restHandler),
prysmChainClient: beaconChainClientFactory.NewPrysmChainClient(v.conn, restHandler),
chainClient: beaconChainClientFactory.NewChainClient(v.conn),
nodeClient: nodeclientfactory.NewNodeClient(v.conn),
prysmChainClient: beaconChainClientFactory.NewPrysmChainClient(v.conn),
db: v.db,
km: nil,
web3SignerConfig: v.web3SignerConfig,
@@ -369,7 +367,6 @@ func ConstructDialOptions(
grpcprometheus.StreamClientInterceptor,
grpcretry.StreamClientInterceptor(),
),
grpc.WithResolvers(&multipleEndpointsGrpcResolverBuilder{}),
}
dialOpts = append(dialOpts, extraOpts...)

View File

@@ -33,7 +33,10 @@ func TestStop_CancelsContext(t *testing.T) {
func TestNew_Insecure(t *testing.T) {
hook := logTest.NewGlobal()
_, err := NewValidatorService(t.Context(), &Config{})
_, err := NewValidatorService(t.Context(), &Config{
BeaconNodeGRPCEndpoint: "localhost:4000",
BeaconApiEndpoint: "http://localhost:3500",
})
require.NoError(t, err)
require.LogsContain(t, hook, "You are using an insecure gRPC connection")
}
@@ -58,7 +61,11 @@ func TestStart_GrpcHeaders(t *testing.T) {
"Authorization", "this is a valid value",
},
} {
cfg := &Config{GRPCHeaders: strings.Split(input, ",")}
cfg := &Config{
BeaconNodeGRPCEndpoint: "localhost:4000",
BeaconApiEndpoint: "http://localhost:3500",
GRPCHeaders: strings.Split(input, ","),
}
validatorService, err := NewValidatorService(ctx, cfg)
require.NoError(t, err)
md, _ := metadata.FromOutgoingContext(validatorService.ctx)

View File

@@ -10,12 +10,10 @@ import (
func NewValidatorClient(
validatorConn validatorHelpers.NodeConnection,
jsonRestHandler beaconApi.RestHandler,
opt ...beaconApi.ValidatorClientOpt,
) iface.ValidatorClient {
if features.Get().EnableBeaconRESTApi {
return beaconApi.NewBeaconApiValidatorClient(jsonRestHandler, opt...)
} else {
return grpcApi.NewGrpcValidatorClient(validatorConn.GetGrpcClientConn())
return beaconApi.NewBeaconApiValidatorClient(validatorConn.GetRestHandler(), opt...)
}
return grpcApi.NewGrpcValidatorClient(validatorConn)
}

View File

@@ -38,6 +38,7 @@ import (
"github.com/OffchainLabs/prysm/v7/validator/db"
dbCommon "github.com/OffchainLabs/prysm/v7/validator/db/common"
"github.com/OffchainLabs/prysm/v7/validator/graffiti"
validatorHelpers "github.com/OffchainLabs/prysm/v7/validator/helpers"
"github.com/OffchainLabs/prysm/v7/validator/keymanager"
"github.com/OffchainLabs/prysm/v7/validator/keymanager/local"
remoteweb3signer "github.com/OffchainLabs/prysm/v7/validator/keymanager/remote-web3signer"
@@ -101,9 +102,9 @@ type validator struct {
pubkeyToStatus map[[fieldparams.BLSPubkeyLength]byte]*validatorStatus
wallet *wallet.Wallet
walletInitializedChan chan *wallet.Wallet
currentHostIndex uint64
walletInitializedFeed *event.Feed
graffitiOrderedIndex uint64
conn validatorHelpers.NodeConnection
submittedAtts map[submittedAttKey]*submittedAtt
validatorsRegBatchSize int
validatorClient iface.ValidatorClient
@@ -114,7 +115,7 @@ type validator struct {
km keymanager.IKeymanager
accountChangedSub event.Subscription
ticker slots.Ticker
beaconNodeHosts []string
currentHostIndex uint64
genesisTime time.Time
graffiti []byte
voteStats voteStats
@@ -1311,34 +1312,64 @@ func (v *validator) Host() string {
}
func (v *validator) changeHost() {
next := (v.currentHostIndex + 1) % uint64(len(v.beaconNodeHosts))
hosts := v.hosts()
if len(hosts) <= 1 {
return
}
next := (v.currentHostIndex + 1) % uint64(len(hosts))
log.WithFields(logrus.Fields{
"currentHost": v.beaconNodeHosts[v.currentHostIndex],
"nextHost": v.beaconNodeHosts[next],
"currentHost": hosts[v.currentHostIndex],
"nextHost": hosts[next],
}).Warn("Beacon node is not responding, switching host")
v.validatorClient.SetHost(v.beaconNodeHosts[next])
v.validatorClient.SwitchHost(hosts[next])
v.currentHostIndex = next
}
// hosts returns the list of configured beacon node hosts.
func (v *validator) hosts() []string {
if features.Get().EnableBeaconRESTApi {
return v.conn.GetRestConnectionProvider().Hosts()
}
return v.conn.GetGrpcConnectionProvider().Hosts()
}
// numHosts returns the number of configured beacon node hosts.
func (v *validator) numHosts() int {
return len(v.hosts())
}
func (v *validator) FindHealthyHost(ctx context.Context) bool {
// Tail-recursive closure keeps retry count private.
var check func(remaining int) bool
check = func(remaining int) bool {
if v.nodeClient.IsReady(ctx) { // ready → done
numHosts := v.numHosts()
startingHost := v.Host()
attemptedHosts := []string{}
// Check all hosts for a fully synced node
for i := range numHosts {
if v.nodeClient.IsReady(ctx) {
if len(attemptedHosts) > 0 {
log.WithFields(logrus.Fields{
"previousHost": startingHost,
"newHost": v.Host(),
"failedAttempts": attemptedHosts,
}).Info("Failover succeeded: connected to healthy beacon node")
}
return true
}
if len(v.beaconNodeHosts) == 1 && features.Get().EnableBeaconRESTApi {
log.WithField("host", v.Host()).Warn("Beacon node is not responding, no backup node configured")
return false
log.WithField("host", v.Host()).Debug("Beacon node not fully synced")
attemptedHosts = append(attemptedHosts, v.Host())
// Try next host if not the last iteration
if i < numHosts-1 {
v.changeHost()
}
if remaining == 0 || !features.Get().EnableBeaconRESTApi {
return false // exhausted or REST disabled
}
v.changeHost()
return check(remaining - 1) // recurse
}
return check(len(v.beaconNodeHosts))
if numHosts == 1 {
log.WithField("host", v.Host()).Warn("Beacon node is not fully synced, no backup node configured")
} else {
log.Warn("No fully synced beacon node found")
}
return false
}
func (v *validator) filterAndCacheActiveKeys(ctx context.Context, pubkeys [][fieldparams.BLSPubkeyLength]byte, slot primitives.Slot) ([][fieldparams.BLSPubkeyLength]byte, error) {

View File

@@ -16,6 +16,8 @@ import (
"testing"
"time"
grpcutil "github.com/OffchainLabs/prysm/v7/api/grpc"
"github.com/OffchainLabs/prysm/v7/api/rest"
"github.com/OffchainLabs/prysm/v7/api/server/structs"
"github.com/OffchainLabs/prysm/v7/async/event"
"github.com/OffchainLabs/prysm/v7/cmd/validator/flags"
@@ -37,6 +39,7 @@ import (
"github.com/OffchainLabs/prysm/v7/validator/accounts/wallet"
"github.com/OffchainLabs/prysm/v7/validator/client/iface"
dbTest "github.com/OffchainLabs/prysm/v7/validator/db/testing"
validatorHelpers "github.com/OffchainLabs/prysm/v7/validator/helpers"
"github.com/OffchainLabs/prysm/v7/validator/keymanager"
"github.com/OffchainLabs/prysm/v7/validator/keymanager/local"
remoteweb3signer "github.com/OffchainLabs/prysm/v7/validator/keymanager/remote-web3signer"
@@ -2792,18 +2795,27 @@ func TestValidator_Host(t *testing.T) {
}
func TestValidator_ChangeHost(t *testing.T) {
// Enable REST API mode for this test since changeHost only calls SwitchHost in REST API mode
resetCfg := features.InitWithReset(&features.Flags{EnableBeaconRESTApi: true})
defer resetCfg()
ctrl := gomock.NewController(t)
defer ctrl.Finish()
hosts := []string{"http://localhost:8080", "http://localhost:8081"}
restProvider := &rest.MockRestProvider{MockHosts: hosts}
conn, err := validatorHelpers.NewNodeConnection(validatorHelpers.WithRestProvider(restProvider))
require.NoError(t, err)
client := validatormock.NewMockValidatorClient(ctrl)
v := validator{
validatorClient: client,
beaconNodeHosts: []string{"http://localhost:8080", "http://localhost:8081"},
conn: conn,
currentHostIndex: 0,
}
client.EXPECT().SetHost(v.beaconNodeHosts[1])
client.EXPECT().SetHost(v.beaconNodeHosts[0])
client.EXPECT().SwitchHost(hosts[1])
client.EXPECT().SwitchHost(hosts[0])
v.changeHost()
assert.Equal(t, uint64(1), v.currentHostIndex)
v.changeHost()
@@ -2838,12 +2850,16 @@ func TestUpdateValidatorStatusCache(t *testing.T) {
gomock.Any(),
gomock.Any()).Return(mockResponse, nil)
mockProvider := &grpcutil.MockGrpcProvider{MockHosts: []string{"localhost:4000", "localhost:4001"}}
conn, err := validatorHelpers.NewNodeConnection(validatorHelpers.WithGRPCProvider(mockProvider))
require.NoError(t, err)
v := &validator{
validatorClient: client,
beaconNodeHosts: []string{"http://localhost:8080", "http://localhost:8081"},
conn: conn,
currentHostIndex: 0,
pubkeyToStatus: map[[fieldparams.BLSPubkeyLength]byte]*validatorStatus{
[fieldparams.BLSPubkeyLength]byte{0x03}: &validatorStatus{ // add non existent key and status to cache, should be fully removed on update
[fieldparams.BLSPubkeyLength]byte{0x03}: { // add non existent key and status to cache, should be fully removed on update
publicKey: []byte{0x03},
status: &ethpb.ValidatorStatusResponse{
Status: ethpb.ValidatorStatus_ACTIVE,
@@ -2853,7 +2869,7 @@ func TestUpdateValidatorStatusCache(t *testing.T) {
},
}
err := v.updateValidatorStatusCache(ctx, pubkeys)
err = v.updateValidatorStatusCache(ctx, pubkeys)
assert.NoError(t, err)
// make sure the nonexistent key is fully removed

View File

@@ -10,6 +10,8 @@ go_library(
importpath = "github.com/OffchainLabs/prysm/v7/validator/helpers",
visibility = ["//visibility:public"],
deps = [
"//api/grpc:go_default_library",
"//api/rest:go_default_library",
"//config/fieldparams:go_default_library",
"//consensus-types/primitives:go_default_library",
"//validator/db/iface:go_default_library",
@@ -24,18 +26,23 @@ go_test(
srcs = [
"converts_test.go",
"metadata_test.go",
"node_connection_test.go",
],
embed = [":go_default_library"],
deps = [
"//api/grpc:go_default_library",
"//api/rest:go_default_library",
"//config/fieldparams:go_default_library",
"//config/proposer:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//validator/db/common:go_default_library",
"//validator/db/iface:go_default_library",
"//validator/slashing-protection-history/format:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@org_golang_google_grpc//:go_default_library",
],
)

View File

@@ -1,78 +1,120 @@
package helpers
import (
"time"
"context"
grpcutil "github.com/OffchainLabs/prysm/v7/api/grpc"
"github.com/OffchainLabs/prysm/v7/api/rest"
"github.com/pkg/errors"
"google.golang.org/grpc"
)
// Use an interface with a private dummy function to force all other packages to call NewNodeConnection
// NodeConnection provides access to both gRPC and REST API connections to a beacon node.
type NodeConnection interface {
// GetGrpcClientConn returns the current gRPC client connection.
// Returns nil if no gRPC provider is configured.
GetGrpcClientConn() *grpc.ClientConn
GetBeaconApiUrl() string
GetBeaconApiHeaders() map[string][]string
setBeaconApiHeaders(map[string][]string)
GetBeaconApiTimeout() time.Duration
setBeaconApiTimeout(time.Duration)
dummy()
// GetGrpcConnectionProvider returns the gRPC connection provider.
GetGrpcConnectionProvider() grpcutil.GrpcConnectionProvider
// GetRestConnectionProvider returns the REST connection provider.
GetRestConnectionProvider() rest.RestConnectionProvider
// GetRestHandler returns the REST handler for making API requests.
// Returns nil if no REST provider is configured.
GetRestHandler() rest.RestHandler
}
type nodeConnection struct {
grpcClientConn *grpc.ClientConn
beaconApiUrl string
beaconApiHeaders map[string][]string
beaconApiTimeout time.Duration
}
// NodeConnectionOption is a functional option for configuring the node connection.
type NodeConnectionOption func(nc NodeConnection)
// WithBeaconApiHeaders sets the HTTP headers that should be sent to the server along with each request.
func WithBeaconApiHeaders(headers map[string][]string) NodeConnectionOption {
return func(nc NodeConnection) {
nc.setBeaconApiHeaders(headers)
}
}
// WithBeaconApiTimeout sets the HTTP request timeout.
func WithBeaconApiTimeout(timeout time.Duration) NodeConnectionOption {
return func(nc NodeConnection) {
nc.setBeaconApiTimeout(timeout)
}
grpcConnectionProvider grpcutil.GrpcConnectionProvider
restConnectionProvider rest.RestConnectionProvider
}
func (c *nodeConnection) GetGrpcClientConn() *grpc.ClientConn {
return c.grpcClientConn
}
func (c *nodeConnection) GetBeaconApiUrl() string {
return c.beaconApiUrl
}
func (c *nodeConnection) GetBeaconApiHeaders() map[string][]string {
return c.beaconApiHeaders
}
func (c *nodeConnection) setBeaconApiHeaders(headers map[string][]string) {
c.beaconApiHeaders = headers
}
func (c *nodeConnection) GetBeaconApiTimeout() time.Duration {
return c.beaconApiTimeout
}
func (c *nodeConnection) setBeaconApiTimeout(timeout time.Duration) {
c.beaconApiTimeout = timeout
}
func (*nodeConnection) dummy() {}
func NewNodeConnection(grpcConn *grpc.ClientConn, beaconApiUrl string, opts ...NodeConnectionOption) NodeConnection {
conn := &nodeConnection{}
conn.grpcClientConn = grpcConn
conn.beaconApiUrl = beaconApiUrl
for _, opt := range opts {
opt(conn)
if c.grpcConnectionProvider == nil {
return nil
}
return conn
return c.grpcConnectionProvider.CurrentConn()
}
func (c *nodeConnection) GetGrpcConnectionProvider() grpcutil.GrpcConnectionProvider {
return c.grpcConnectionProvider
}
func (c *nodeConnection) GetRestConnectionProvider() rest.RestConnectionProvider {
return c.restConnectionProvider
}
func (c *nodeConnection) GetRestHandler() rest.RestHandler {
if c.restConnectionProvider == nil {
return nil
}
return c.restConnectionProvider.RestHandler()
}
// NodeConnectionOption is a functional option for configuring a NodeConnection.
type NodeConnectionOption func(*nodeConnection) error
// WithGRPC configures a gRPC connection provider for the NodeConnection.
// If endpoint is empty, this option is a no-op.
func WithGRPC(ctx context.Context, endpoint string, dialOpts []grpc.DialOption) NodeConnectionOption {
return func(c *nodeConnection) error {
if endpoint == "" {
return nil
}
provider, err := grpcutil.NewGrpcConnectionProvider(ctx, endpoint, dialOpts)
if err != nil {
return errors.Wrap(err, "failed to create gRPC connection provider")
}
c.grpcConnectionProvider = provider
return nil
}
}
// WithREST configures a REST connection provider for the NodeConnection.
// If endpoint is empty, this option is a no-op.
func WithREST(endpoint string, opts ...rest.RestConnectionProviderOption) NodeConnectionOption {
return func(c *nodeConnection) error {
if endpoint == "" {
return nil
}
provider, err := rest.NewRestConnectionProvider(endpoint, opts...)
if err != nil {
return errors.Wrap(err, "failed to create REST connection provider")
}
c.restConnectionProvider = provider
return nil
}
}
// WithGRPCProvider sets a pre-built gRPC connection provider.
func WithGRPCProvider(provider grpcutil.GrpcConnectionProvider) NodeConnectionOption {
return func(c *nodeConnection) error {
c.grpcConnectionProvider = provider
return nil
}
}
// WithRestProvider sets a pre-built REST connection provider.
func WithRestProvider(provider rest.RestConnectionProvider) NodeConnectionOption {
return func(c *nodeConnection) error {
c.restConnectionProvider = provider
return nil
}
}
// NewNodeConnection creates a new NodeConnection with the given options.
// At least one provider (gRPC or REST) must be configured via options.
// Returns an error if no providers are configured.
func NewNodeConnection(opts ...NodeConnectionOption) (NodeConnection, error) {
c := &nodeConnection{}
for _, opt := range opts {
if err := opt(c); err != nil {
return nil, err
}
}
if c.grpcConnectionProvider == nil && c.restConnectionProvider == nil {
return nil, errors.New("at least one beacon node endpoint must be provided (--beacon-rpc-provider or --beacon-rest-api-provider)")
}
return c, nil
}
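
Putting the options together, a caller builds the connection in one step and pulls whichever provider it needs. A minimal sketch (endpoints, dial options, and timeout are placeholders):

```go
conn, err := NewNodeConnection(
	WithGRPC(ctx, "localhost:4000", dialOpts),
	WithREST("http://localhost:3500",
		rest.WithHttpTimeout(10*time.Second),
	),
)
if err != nil {
	return err // no usable endpoint was configured or a provider failed to build
}

// REST-first consumers take the handler; gRPC consumers take the provider.
restHandler := conn.GetRestHandler()
grpcProvider := conn.GetGrpcConnectionProvider()
_, _ = restHandler, grpcProvider
```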

View File

@@ -0,0 +1,101 @@
package helpers
import (
"context"
"testing"
grpcutil "github.com/OffchainLabs/prysm/v7/api/grpc"
"github.com/OffchainLabs/prysm/v7/api/rest"
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
"google.golang.org/grpc"
)
func TestNewNodeConnection(t *testing.T) {
t.Run("with both providers", func(t *testing.T) {
grpcProvider := &grpcutil.MockGrpcProvider{MockHosts: []string{"localhost:4000"}}
restProvider := &rest.MockRestProvider{MockHosts: []string{"http://localhost:3500"}}
conn, err := NewNodeConnection(
WithGRPCProvider(grpcProvider),
WithRestProvider(restProvider),
)
require.NoError(t, err)
assert.Equal(t, grpcProvider, conn.GetGrpcConnectionProvider())
assert.Equal(t, restProvider, conn.GetRestConnectionProvider())
})
t.Run("with only rest provider", func(t *testing.T) {
restProvider := &rest.MockRestProvider{MockHosts: []string{"http://localhost:3500"}}
conn, err := NewNodeConnection(WithRestProvider(restProvider))
require.NoError(t, err)
assert.Equal(t, (grpcutil.GrpcConnectionProvider)(nil), conn.GetGrpcConnectionProvider())
assert.Equal(t, (*grpc.ClientConn)(nil), conn.GetGrpcClientConn())
assert.Equal(t, restProvider, conn.GetRestConnectionProvider())
})
t.Run("with only grpc provider", func(t *testing.T) {
grpcProvider := &grpcutil.MockGrpcProvider{MockHosts: []string{"localhost:4000"}}
conn, err := NewNodeConnection(WithGRPCProvider(grpcProvider))
require.NoError(t, err)
assert.Equal(t, grpcProvider, conn.GetGrpcConnectionProvider())
assert.Equal(t, (rest.RestConnectionProvider)(nil), conn.GetRestConnectionProvider())
assert.Equal(t, (rest.RestHandler)(nil), conn.GetRestHandler())
})
t.Run("with no providers returns error", func(t *testing.T) {
conn, err := NewNodeConnection()
require.ErrorContains(t, "at least one beacon node endpoint must be provided", err)
assert.Equal(t, (NodeConnection)(nil), conn)
})
t.Run("with empty endpoints is no-op", func(t *testing.T) {
// Empty endpoints should be skipped, resulting in no providers
conn, err := NewNodeConnection(
WithGRPC(context.Background(), "", nil),
WithREST(""),
)
require.ErrorContains(t, "at least one beacon node endpoint must be provided", err)
assert.Equal(t, (NodeConnection)(nil), conn)
})
}
func TestNodeConnection_GetGrpcClientConn(t *testing.T) {
t.Run("delegates to provider", func(t *testing.T) {
// We can't easily create a real grpc.ClientConn in tests,
// but we can verify the delegation works with nil
grpcProvider := &grpcutil.MockGrpcProvider{MockConn: nil, MockHosts: []string{"localhost:4000"}}
conn, err := NewNodeConnection(WithGRPCProvider(grpcProvider))
require.NoError(t, err)
// Should delegate to provider.CurrentConn()
assert.Equal(t, grpcProvider.CurrentConn(), conn.GetGrpcClientConn())
})
t.Run("returns nil when provider is nil", func(t *testing.T) {
restProvider := &rest.MockRestProvider{MockHosts: []string{"http://localhost:3500"}}
conn, err := NewNodeConnection(WithRestProvider(restProvider))
require.NoError(t, err)
assert.Equal(t, (*grpc.ClientConn)(nil), conn.GetGrpcClientConn())
})
}
func TestNodeConnection_GetRestHandler(t *testing.T) {
t.Run("delegates to provider", func(t *testing.T) {
mockHandler := &rest.MockRestHandler{}
restProvider := &rest.MockRestProvider{MockHandler: mockHandler, MockHosts: []string{"http://localhost:3500"}}
conn, err := NewNodeConnection(WithRestProvider(restProvider))
require.NoError(t, err)
assert.Equal(t, mockHandler, conn.GetRestHandler())
})
t.Run("returns nil when provider is nil", func(t *testing.T) {
grpcProvider := &grpcutil.MockGrpcProvider{MockHosts: []string{"localhost:4000"}}
conn, err := NewNodeConnection(WithGRPCProvider(grpcProvider))
require.NoError(t, err)
assert.Equal(t, (rest.RestHandler)(nil), conn.GetRestHandler())
})
}

View File

@@ -41,6 +41,8 @@ func TestNode_Builds(t *testing.T) {
set.String("wallet-password-file", passwordFile, "path to wallet password")
set.String("keymanager-kind", "imported", "keymanager kind")
set.String("verbosity", "debug", "log verbosity")
set.String("beacon-rpc-provider", "localhost:4000", "beacon node RPC endpoint")
set.String("beacon-rest-api-provider", "http://localhost:3500", "beacon node REST API endpoint")
require.NoError(t, set.Set(flags.WalletPasswordFileFlag.Name, passwordFile))
ctx := cli.NewContext(&app, set, nil)
opts := []accounts.Option{

View File

@@ -23,9 +23,9 @@ go_library(
],
deps = [
"//api:go_default_library",
"//api/client:go_default_library",
"//api/grpc:go_default_library",
"//api/pagination:go_default_library",
"//api/rest:go_default_library",
"//api/server:go_default_library",
"//api/server/httprest:go_default_library",
"//api/server/middleware:go_default_library",
@@ -55,7 +55,6 @@ go_library(
"//validator/accounts/petnames:go_default_library",
"//validator/accounts/wallet:go_default_library",
"//validator/client:go_default_library",
"//validator/client/beacon-api:go_default_library",
"//validator/client/beacon-chain-client-factory:go_default_library",
"//validator/client/iface:go_default_library",
"//validator/client/node-client-factory:go_default_library",
@@ -79,7 +78,6 @@ go_library(
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_wealdtech_go_eth2_wallet_encryptor_keystorev4//:go_default_library",
"@io_opentelemetry_go_contrib_instrumentation_net_http_otelhttp//:go_default_library",
"@org_golang_google_grpc//:go_default_library",
"@org_golang_google_grpc//codes:go_default_library",
"@org_golang_google_grpc//metadata:go_default_library",
@@ -106,6 +104,7 @@ go_test(
embed = [":go_default_library"],
deps = [
"//api:go_default_library",
"//api/grpc:go_default_library",
"//async/event:go_default_library",
"//cmd/validator/flags:go_default_library",
"//config/features:go_default_library",

View File

@@ -1,13 +1,10 @@
package rpc
import (
"net/http"
api "github.com/OffchainLabs/prysm/v7/api/client"
grpcutil "github.com/OffchainLabs/prysm/v7/api/grpc"
"github.com/OffchainLabs/prysm/v7/api/rest"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/validator/client"
beaconApi "github.com/OffchainLabs/prysm/v7/validator/client/beacon-api"
beaconChainClientFactory "github.com/OffchainLabs/prysm/v7/validator/client/beacon-chain-client-factory"
nodeClientFactory "github.com/OffchainLabs/prysm/v7/validator/client/node-client-factory"
validatorClientFactory "github.com/OffchainLabs/prysm/v7/validator/client/validator-client-factory"
@@ -17,7 +14,6 @@ import (
grpcopentracing "github.com/grpc-ecosystem/go-grpc-middleware/tracing/opentracing"
grpcprometheus "github.com/grpc-ecosystem/go-grpc-prometheus"
"github.com/pkg/errors"
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
"google.golang.org/grpc"
)
@@ -41,30 +37,26 @@ func (s *Server) registerBeaconClient() error {
s.ctx = grpcutil.AppendHeaders(s.ctx, s.grpcHeaders)
- grpcConn, err := grpc.DialContext(s.ctx, s.beaconNodeEndpoint, dialOpts...)
+ conn, err := validatorHelpers.NewNodeConnection(
+ validatorHelpers.WithGRPC(s.ctx, s.beaconNodeEndpoint, dialOpts),
+ validatorHelpers.WithREST(s.beaconApiEndpoint,
+ rest.WithHttpHeaders(s.beaconApiHeaders),
+ rest.WithHttpTimeout(s.beaconApiTimeout),
+ rest.WithTracing(),
+ ),
+ )
if err != nil {
- return errors.Wrapf(err, "could not dial endpoint: %s", s.beaconNodeEndpoint)
+ return err
}
- if s.beaconNodeCert != "" {
+ if s.beaconNodeCert != "" && s.beaconNodeEndpoint != "" {
log.Info("Established secure gRPC connection")
}
- s.healthClient = ethpb.NewHealthClient(grpcConn)
+ if grpcConn := conn.GetGrpcClientConn(); grpcConn != nil {
+ s.healthClient = ethpb.NewHealthClient(grpcConn)
+ }
- conn := validatorHelpers.NewNodeConnection(
- grpcConn,
- s.beaconApiEndpoint,
- validatorHelpers.WithBeaconApiHeaders(s.beaconApiHeaders),
- validatorHelpers.WithBeaconApiTimeout(s.beaconApiTimeout),
- )
- headersTransport := api.NewCustomHeadersTransport(http.DefaultTransport, conn.GetBeaconApiHeaders())
- restHandler := beaconApi.NewBeaconApiRestHandler(
- http.Client{Timeout: s.beaconApiTimeout, Transport: otelhttp.NewTransport(headersTransport)},
- s.beaconApiEndpoint,
- )
- s.chainClient = beaconChainClientFactory.NewChainClient(conn, restHandler)
- s.nodeClient = nodeClientFactory.NewNodeClient(conn, restHandler)
- s.beaconNodeValidatorClient = validatorClientFactory.NewValidatorClient(conn, restHandler)
+ s.chainClient = beaconChainClientFactory.NewChainClient(conn)
+ s.nodeClient = nodeClientFactory.NewNodeClient(conn)
+ s.beaconNodeValidatorClient = validatorClientFactory.NewValidatorClient(conn)
return nil
}

View File

@@ -3,19 +3,17 @@ package rpc
import (
"testing"
grpcutil "github.com/OffchainLabs/prysm/v7/api/grpc"
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
"google.golang.org/grpc/metadata"
)
func TestGrpcHeaders(t *testing.T) {
- s := &Server{
- ctx: t.Context(),
- grpcHeaders: []string{"first=value1", "second=value2"},
- }
- err := s.registerBeaconClient()
- require.NoError(t, err)
- md, _ := metadata.FromOutgoingContext(s.ctx)
+ ctx := t.Context()
+ grpcHeaders := []string{"first=value1", "second=value2"}
+ ctx = grpcutil.AppendHeaders(ctx, grpcHeaders)
+ md, _ := metadata.FromOutgoingContext(ctx)
require.Equal(t, 2, md.Len(), "MetadataV0 contains wrong number of values")
assert.Equal(t, "value1", md.Get("first")[0])
assert.Equal(t, "value2", md.Get("second")[0])

View File

@@ -22,6 +22,7 @@ import (
"github.com/OffchainLabs/prysm/v7/validator/client"
"github.com/OffchainLabs/prysm/v7/validator/client/testutil"
"github.com/OffchainLabs/prysm/v7/validator/keymanager"
validatorTesting "github.com/OffchainLabs/prysm/v7/validator/testing"
"github.com/google/uuid"
"github.com/tyler-smith/go-bip39"
keystorev4 "github.com/wealdtech/go-eth2-wallet-encryptor-keystorev4"
@@ -46,6 +47,7 @@ func TestServer_CreateWallet_Local(t *testing.T) {
km, err := w.InitializeKeymanager(ctx, iface.InitKeymanagerConfig{ListenForChanges: false})
require.NoError(t, err)
vs, err := client.NewValidatorService(ctx, &client.Config{
Conn: validatorTesting.MockNodeConnection(),
Wallet: w,
Validator: &testutil.FakeValidator{
Km: km,
@@ -443,6 +445,7 @@ func TestServer_WalletConfig(t *testing.T) {
require.NoError(t, err)
s.wallet = w
vs, err := client.NewValidatorService(ctx, &client.Config{
Conn: validatorTesting.MockNodeConnection(),
Wallet: w,
Validator: &testutil.FakeValidator{
Km: km,

View File

@@ -53,6 +53,7 @@ func TestServer_ListAccounts(t *testing.T) {
km, err := w.InitializeKeymanager(ctx, iface.InitKeymanagerConfig{ListenForChanges: false})
require.NoError(t, err)
vs, err := client.NewValidatorService(ctx, &client.Config{
Conn: constant.MockNodeConnection(),
Wallet: w,
Validator: &testutil.FakeValidator{
Km: km,
@@ -158,6 +159,7 @@ func TestServer_BackupAccounts(t *testing.T) {
km, err := w.InitializeKeymanager(ctx, iface.InitKeymanagerConfig{ListenForChanges: false})
require.NoError(t, err)
vs, err := client.NewValidatorService(ctx, &client.Config{
Conn: constant.MockNodeConnection(),
Wallet: w,
Validator: &testutil.FakeValidator{
Km: km,
@@ -282,6 +284,7 @@ func TestServer_VoluntaryExit(t *testing.T) {
require.NoError(t, err)
require.NoError(t, err)
vs, err := client.NewValidatorService(ctx, &client.Config{
Conn: constant.MockNodeConnection(),
Wallet: w,
Validator: &testutil.FakeValidator{
Km: km,

View File

@@ -52,6 +52,7 @@ func TestServer_ListKeystores(t *testing.T) {
t.Run("wallet not ready", func(t *testing.T) {
m := &testutil.FakeValidator{}
vs, err := client.NewValidatorService(ctx, &client.Config{
Conn: mocks.MockNodeConnection(),
Validator: m,
})
require.NoError(t, err)
@@ -81,6 +82,7 @@ func TestServer_ListKeystores(t *testing.T) {
km, err := w.InitializeKeymanager(ctx, iface.InitKeymanagerConfig{ListenForChanges: false})
require.NoError(t, err)
vs, err := client.NewValidatorService(ctx, &client.Config{
Conn: mocks.MockNodeConnection(),
Wallet: w,
Validator: &testutil.FakeValidator{
Km: km,
@@ -147,6 +149,7 @@ func TestServer_ImportKeystores(t *testing.T) {
km, err := w.InitializeKeymanager(ctx, iface.InitKeymanagerConfig{ListenForChanges: false})
require.NoError(t, err)
vs, err := client.NewValidatorService(ctx, &client.Config{
Conn: mocks.MockNodeConnection(),
Wallet: w,
Validator: &testutil.FakeValidator{
Km: km,
@@ -368,6 +371,7 @@ func TestServer_ImportKeystores_WrongKeymanagerKind(t *testing.T) {
}})
require.NoError(t, err)
vs, err := client.NewValidatorService(ctx, &client.Config{
Conn: mocks.MockNodeConnection(),
Wallet: w,
Validator: &testutil.FakeValidator{
Km: km,
@@ -652,6 +656,7 @@ func TestServer_DeleteKeystores_WrongKeymanagerKind(t *testing.T) {
}})
require.NoError(t, err)
vs, err := client.NewValidatorService(ctx, &client.Config{
Conn: mocks.MockNodeConnection(),
Wallet: w,
Validator: &testutil.FakeValidator{
Km: km,
@@ -695,6 +700,7 @@ func setupServerWithWallet(t testing.TB) *Server {
km, err := w.InitializeKeymanager(ctx, iface.InitKeymanagerConfig{ListenForChanges: false})
require.NoError(t, err)
vs, err := client.NewValidatorService(ctx, &client.Config{
Conn: mocks.MockNodeConnection(),
Wallet: w,
Validator: &testutil.FakeValidator{
Km: km,
@@ -730,6 +736,7 @@ func TestServer_SetVoluntaryExit(t *testing.T) {
m := &testutil.FakeValidator{Km: km}
vs, err := client.NewValidatorService(ctx, &client.Config{
Conn: mocks.MockNodeConnection(),
Validator: m,
})
require.NoError(t, err)
@@ -953,6 +960,7 @@ func TestServer_GetGasLimit(t *testing.T) {
err := m.SetProposerSettings(ctx, tt.args)
require.NoError(t, err)
vs, err := client.NewValidatorService(ctx, &client.Config{
Conn: mocks.MockNodeConnection(),
Validator: m,
})
require.NoError(t, err)
@@ -1111,6 +1119,7 @@ func TestServer_SetGasLimit(t *testing.T) {
require.NoError(t, err)
validatorDB := dbtest.SetupDB(t, t.TempDir(), [][fieldparams.BLSPubkeyLength]byte{}, isSlashingProtectionMinimal)
vs, err := client.NewValidatorService(ctx, &client.Config{
Conn: mocks.MockNodeConnection(),
Validator: m,
DB: validatorDB,
})
@@ -1300,6 +1309,7 @@ func TestServer_DeleteGasLimit(t *testing.T) {
require.NoError(t, err)
validatorDB := dbtest.SetupDB(t, t.TempDir(), [][fieldparams.BLSPubkeyLength]byte{}, isSlashingProtectionMinimal)
vs, err := client.NewValidatorService(ctx, &client.Config{
Conn: mocks.MockNodeConnection(),
Validator: m,
DB: validatorDB,
})
@@ -1348,6 +1358,7 @@ func TestServer_ListRemoteKeys(t *testing.T) {
km, err := w.InitializeKeymanager(ctx, iface.InitKeymanagerConfig{ListenForChanges: false, Web3SignerConfig: config})
require.NoError(t, err)
vs, err := client.NewValidatorService(ctx, &client.Config{
Conn: mocks.MockNodeConnection(),
Wallet: w,
Validator: &testutil.FakeValidator{
Km: km,
@@ -1404,6 +1415,7 @@ func TestServer_ImportRemoteKeys(t *testing.T) {
km, err := w.InitializeKeymanager(ctx, iface.InitKeymanagerConfig{ListenForChanges: false, Web3SignerConfig: config})
require.NoError(t, err)
vs, err := client.NewValidatorService(ctx, &client.Config{
Conn: mocks.MockNodeConnection(),
Wallet: w,
Validator: &testutil.FakeValidator{
Km: km,
@@ -1466,6 +1478,7 @@ func TestServer_DeleteRemoteKeys(t *testing.T) {
km, err := w.InitializeKeymanager(ctx, iface.InitKeymanagerConfig{ListenForChanges: false, Web3SignerConfig: config})
require.NoError(t, err)
vs, err := client.NewValidatorService(ctx, &client.Config{
Conn: mocks.MockNodeConnection(),
Wallet: w,
Validator: &testutil.FakeValidator{
Km: km,
@@ -1567,6 +1580,7 @@ func TestServer_ListFeeRecipientByPubkey(t *testing.T) {
require.NoError(t, err)
vs, err := client.NewValidatorService(ctx, &client.Config{
Conn: mocks.MockNodeConnection(),
Validator: m,
})
require.NoError(t, err)
@@ -1591,6 +1605,7 @@ func TestServer_ListFeeRecipientByPubKey_NoFeeRecipientSet(t *testing.T) {
ctx := t.Context()
vs, err := client.NewValidatorService(ctx, &client.Config{
Conn: mocks.MockNodeConnection(),
Validator: &testutil.FakeValidator{},
})
require.NoError(t, err)
@@ -1780,6 +1795,7 @@ func TestServer_FeeRecipientByPubkey(t *testing.T) {
// save a default here
vs, err := client.NewValidatorService(ctx, &client.Config{
Conn: mocks.MockNodeConnection(),
Validator: m,
DB: validatorDB,
})
@@ -1890,6 +1906,7 @@ func TestServer_DeleteFeeRecipientByPubkey(t *testing.T) {
require.NoError(t, err)
validatorDB := dbtest.SetupDB(t, t.TempDir(), [][fieldparams.BLSPubkeyLength]byte{}, isSlashingProtectionMinimal)
vs, err := client.NewValidatorService(ctx, &client.Config{
Conn: mocks.MockNodeConnection(),
Validator: m,
DB: validatorDB,
})
@@ -1940,6 +1957,7 @@ func TestServer_Graffiti(t *testing.T) {
graffiti := "graffiti"
m := &testutil.FakeValidator{}
vs, err := client.NewValidatorService(t.Context(), &client.Config{
Conn: mocks.MockNodeConnection(),
Validator: m,
})
require.NoError(t, err)

View File

@@ -5,6 +5,7 @@ go_library(
testonly = True,
srcs = [
"constants.go",
"mock_node_connection.go",
"mock_protector.go",
"protection_history.go",
],
@@ -14,6 +15,7 @@ go_library(
"//validator:__subpackages__",
],
deps = [
"//api/grpc:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
@@ -22,6 +24,7 @@ go_library(
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//validator/db/common:go_default_library",
"//validator/helpers:go_default_library",
"//validator/slashing-protection-history/format:go_default_library",
],
)

View File

@@ -0,0 +1,16 @@
package testing
import (
grpcutil "github.com/OffchainLabs/prysm/v7/api/grpc"
"github.com/OffchainLabs/prysm/v7/validator/helpers"
)
// MockNodeConnection creates a minimal NodeConnection for testing.
func MockNodeConnection() helpers.NodeConnection {
conn, _ := helpers.NewNodeConnection(
helpers.WithGRPCProvider(&grpcutil.MockGrpcProvider{
MockHosts: []string{"mock:4000"},
}),
)
return conn
}
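
For orientation, a minimal usage sketch of this helper, mirroring the Conn wiring added to the test diffs above (the client.Config fields and FakeValidator come from those tests, and the import alias for this testing package varies per file, e.g. validatorTesting, mocks, or constant):

// Sketch only: wire the mock connection into a validator service under test.
vs, err := client.NewValidatorService(ctx, &client.Config{
	Conn:      validatorTesting.MockNodeConnection(), // gRPC-only provider stubbed with host "mock:4000"
	Validator: &testutil.FakeValidator{},
})
require.NoError(t, err)
_ = vs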