Mirror of https://github.com/OffchainLabs/prysm.git (synced 2026-01-09 21:38:05 -05:00)

Compare commits: rcmgrMetri...fcTesting2 (50 commits)
| Author | SHA1 | Date |
|---|---|---|
| | 0ffccc9c98 | |
| | 1b915b51b0 | |
| | d35461affd | |
| | ee159f3380 | |
| | 6b3d18cb77 | |
| | 07955c891b | |
| | 57f97feb84 | |
| | 2bf0560dc7 | |
| | cf0505b8db | |
| | a40f903f76 | |
| | 3a9764d3af | |
| | d1d3edc7fe | |
| | ba55ae8cea | |
| | 27aac105d7 | |
| | 115d565f49 | |
| | 019e0b56e2 | |
| | 0efb038984 | |
| | 63d81144e9 | |
| | 6edbfa3128 | |
| | 194b3b1c5e | |
| | 996ec67229 | |
| | c7b2c011d8 | |
| | d15122fae2 | |
| | 3e17dbb532 | |
| | a75e78ddb4 | |
| | 1862422db9 | |
| | 152d21059e | |
| | 2b410893a0 | |
| | 826267310e | |
| | d5057cfb42 | |
| | 8d01cf2ec1 | |
| | e4e315da94 | |
| | 0a4e42545e | |
| | 6fa2d768b5 | |
| | 0f228896b0 | |
| | 6896b41963 | |
| | 3bf6abe27c | |
| | bd0d7478b3 | |
| | b6a1da21f4 | |
| | 180058ed48 | |
| | f7a567d1d3 | |
| | 6d02c9ae12 | |
| | 6c2e6ca855 | |
| | fbdccf8055 | |
| | 83cfe11ca0 | |
| | 135e9f51ec | |
| | d33c1974da | |
| | 88a2e3d953 | |
| | cea42a4b7d | |
| | 9971d71bc5 | |
@@ -1,6 +1,6 @@
# Contribution Guidelines

Note: The latest and most up to date documenation can be found on our [docs portal](https://docs.prylabs.network/docs/contribute/contribution-guidelines).
Note: The latest and most up-to-date documentation can be found on our [docs portal](https://docs.prylabs.network/docs/contribute/contribution-guidelines).

Excited by our work and want to get involved in building out our sharding releases? Or maybe you haven't learned as much about the Ethereum protocol but are a savvy developer?

@@ -10,9 +10,9 @@ You can explore our [Open Issues](https://github.com/prysmaticlabs/prysm/issues)

**1. Set up Prysm following the instructions in README.md.**

**2. Fork the prysm repo.**
**2. Fork the Prysm repo.**

Sign in to your Github account or create a new account if you do not have one already. Then navigate your browser to https://github.com/prysmaticlabs/prysm/. In the upper right hand corner of the page, click “fork”. This will create a copy of the Prysm repo in your account.
Sign in to your GitHub account or create a new account if you do not have one already. Then navigate your browser to https://github.com/prysmaticlabs/prysm/. In the upper right hand corner of the page, click “fork”. This will create a copy of the Prysm repo in your account.

**3. Create a local clone of Prysm.**

@@ -23,7 +23,7 @@ $ git clone https://github.com/prysmaticlabs/prysm.git
$ cd $GOPATH/src/github.com/prysmaticlabs/prysm
```

**4. Link your local clone to the fork on your Github repo.**
**4. Link your local clone to the fork on your GitHub repo.**

```
$ git remote add myprysmrepo https://github.com/<your_github_user_name>/prysm.git
@@ -68,7 +68,7 @@ $ go test <file_you_are_working_on>
$ git add --all
```

This command stages all of the files that you have changed. You can add individual files by specifying the file name or names and eliminating the “-- all”.
This command stages all the files that you have changed. You can add individual files by specifying the file name or names and eliminating the “-- all”.

**11. Commit the file or files.**

@@ -96,8 +96,7 @@ If there are conflicts between your edits and those made by others since you sta
$ git status
```

Open those files one at a time and you
will see lines inserted by Git that identify the conflicts:
Open those files one at a time, and you will see lines inserted by Git that identify the conflicts:

```
<<<<<< HEAD
@@ -119,7 +118,7 @@ $ git push myrepo feature-in-progress-branch

**15. Check to be sure your fork of the Prysm repo contains your feature branch with the latest edits.**

Navigate to your fork of the repo on Github. On the upper left where the current branch is listed, change the branch to your feature-in-progress-branch. Open the files that you have worked on and check to make sure they include your changes.
Navigate to your fork of the repo on GitHub. On the upper left where the current branch is listed, change the branch to your feature-in-progress-branch. Open the files that you have worked on and check to make sure they include your changes.

**16. Create a pull request.**

@@ -151,7 +150,7 @@ pick hash fix a bug
pick hash add a feature
```

Replace the word pick with the word “squash” for every line but the first so you end with ….
Replace the word pick with the word “squash” for every line but the first, so you end with ….

```
pick hash do some work
@@ -178,7 +177,7 @@ We consider two types of contributions to our repo and categorize them as follow
Anyone can become a part-time contributor and help out on implementing Ethereum consensus. The responsibilities of a part-time contributor include:

- Engaging in Gitter conversations, asking the questions on how to begin contributing to the project
- Opening up github issues to express interest in code to implement
- Opening up GitHub issues to express interest in code to implement
- Opening up PRs referencing any open issue in the repo. PRs should include:
  - Detailed context of what would be required for merge
  - Tests that are consistent with how other tests are written in our implementation
@@ -188,12 +187,12 @@ Anyone can become a part-time contributor and help out on implementing Ethereum

### Core Contributors

Core contributors are remote contractors of Prysmatic Labs, LLC. and are considered critical team members of our organization. Core devs have all of the responsibilities of part-time contributors plus the majority of the following:
Core contributors are remote contractors of Prysmatic Labs, LLC. and are considered critical team members of our organization. Core devs have all the responsibilities of part-time contributors plus the majority of the following:

- Stay up to date on the latest beacon chain specification
- Monitor github issues and PR’s to make sure owner, labels, descriptions are correct
- Monitor GitHub issues and PR’s to make sure owner, labels, descriptions are correct
- Formulate independent ideas, suggest new work to do, point out improvements to existing approaches
- Participate in code review, ensure code quality is excellent, and have ensure high code coverage
- Participate in code review, ensure code quality is excellent, and ensure high code coverage
- Help with social media presence, write bi-weekly development update
- Represent Prysmatic Labs at events to help spread the word on scalability research and solutions

@@ -11,6 +11,7 @@ import (
    "github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
    "github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
    "github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
    "github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
    "github.com/prysmaticlabs/prysm/v4/encoding/ssz/detect"
    "github.com/prysmaticlabs/prysm/v4/io/file"
    "github.com/prysmaticlabs/prysm/v4/runtime/version"
@@ -19,6 +20,8 @@ import (
    "golang.org/x/mod/semver"
)

var errCheckpointBlockMismatch = errors.New("mismatch between checkpoint sync state and block")

// OriginData represents the BeaconState and ReadOnlySignedBeaconBlock necessary to start an empty Beacon Node
// using Checkpoint Sync.
type OriginData struct {
@@ -75,37 +78,40 @@ func DownloadFinalizedData(ctx context.Context, client *Client) (*OriginData, er
    if err != nil {
        return nil, errors.Wrap(err, "error unmarshaling finalized state to correct version")
    }
    if s.Slot() != s.LatestBlockHeader().Slot {
        return nil, fmt.Errorf("finalized state slot does not match latest block header slot %d != %d", s.Slot(), s.LatestBlockHeader().Slot)
    }

    sr, err := s.HashTreeRoot(ctx)
    slot := s.LatestBlockHeader().Slot
    bb, err := client.GetBlock(ctx, IdFromSlot(slot))
    if err != nil {
        return nil, errors.Wrapf(err, "failed to compute htr for finalized state at slot=%d", s.Slot())
    }
    header := s.LatestBlockHeader()
    header.StateRoot = sr[:]
    br, err := header.HashTreeRoot()
    if err != nil {
        return nil, errors.Wrap(err, "error while computing block root using state data")
    }

    bb, err := client.GetBlock(ctx, IdFromRoot(br))
    if err != nil {
        return nil, errors.Wrapf(err, "error requesting block by root = %#x", br)
        return nil, errors.Wrapf(err, "error requesting block by slot = %d", slot)
    }
    b, err := vu.UnmarshalBeaconBlock(bb)
    if err != nil {
        return nil, errors.Wrap(err, "unable to unmarshal block to a supported type using the detected fork schedule")
    }
    realBlockRoot, err := b.Block().HashTreeRoot()
    br, err := b.Block().HashTreeRoot()
    if err != nil {
        return nil, errors.Wrap(err, "error computing hash_tree_root of retrieved block")
    }
    bodyRoot, err := b.Block().Body().HashTreeRoot()
    if err != nil {
        return nil, errors.Wrap(err, "error computing hash_tree_root of retrieved block body")
    }

    log.Printf("BeaconState slot=%d, Block slot=%d", s.Slot(), b.Block().Slot())
    log.Printf("BeaconState htr=%#x, Block state_root=%#x", sr, b.Block().StateRoot())
    log.Printf("BeaconState latest_block_header htr=%#x, block htr=%#x", br, realBlockRoot)
    sbr := bytesutil.ToBytes32(s.LatestBlockHeader().BodyRoot)
    if sbr != bodyRoot {
        return nil, errors.Wrapf(errCheckpointBlockMismatch, "state body root = %#x, block body root = %#x", sbr, bodyRoot)
    }
    sr, err := s.HashTreeRoot(ctx)
    if err != nil {
        return nil, errors.Wrapf(err, "failed to compute htr for finalized state at slot=%d", s.Slot())
    }

    log.
        WithField("block_slot", b.Block().Slot()).
        WithField("state_slot", s.Slot()).
        WithField("state_root", sr).
        WithField("block_root", br).
        Info("Downloaded checkpoint sync state and block.")
    return &OriginData{
        st: s,
        b: b,

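The rewritten DownloadFinalizedData above fetches the block by the slot recorded in the state's latest block header and then cross-checks the pair before returning it: the state must already have that header applied, and the block's body root must equal the body root the header commits to. A minimal sketch of that consistency check, using hypothetical stand-in types rather than Prysm's real state and block interfaces:

```go
package main

import (
	"errors"
	"fmt"
)

// errCheckpointMismatch mirrors the kind of sentinel error the diff introduces
// (errCheckpointBlockMismatch); the types below are simplified stand-ins, not
// Prysm's real interfaces.
var errCheckpointMismatch = errors.New("mismatch between checkpoint sync state and block")

// stateSummary is a hypothetical, minimal view of the fields the check needs.
type stateSummary struct {
	Slot                 uint64
	LatestHeaderSlot     uint64
	LatestHeaderBodyRoot [32]byte
}

// blockSummary is a hypothetical, minimal view of the downloaded block.
type blockSummary struct {
	BodyRoot [32]byte
}

// verifyCheckpointPair applies the two consistency checks the new code path relies on:
// the state slot must match its latest block header's slot, and the block fetched by
// that slot must carry the body root the state committed to.
func verifyCheckpointPair(st stateSummary, blk blockSummary) error {
	if st.Slot != st.LatestHeaderSlot {
		return fmt.Errorf("state slot %d does not match latest block header slot %d", st.Slot, st.LatestHeaderSlot)
	}
	if st.LatestHeaderBodyRoot != blk.BodyRoot {
		return fmt.Errorf("%w: state body root = %#x, block body root = %#x",
			errCheckpointMismatch, st.LatestHeaderBodyRoot, blk.BodyRoot)
	}
	return nil
}

func main() {
	root := [32]byte{0x01}
	st := stateSummary{Slot: 100, LatestHeaderSlot: 100, LatestHeaderBodyRoot: root}
	blk := blockSummary{BodyRoot: root}
	fmt.Println(verifyCheckpointPair(st, blk)) // <nil>
}
```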
@@ -440,7 +440,7 @@ func TestDownloadFinalizedData(t *testing.T) {
    case renderGetStatePath(IdFinalized):
        res.StatusCode = http.StatusOK
        res.Body = io.NopCloser(bytes.NewBuffer(ms))
    case renderGetBlockPath(IdFromRoot(br)):
    case renderGetBlockPath(IdFromSlot(b.Block().Slot())):
        res.StatusCode = http.StatusOK
        res.Body = io.NopCloser(bytes.NewBuffer(mb))
    default:

@@ -3,7 +3,6 @@ package builder
import (
    "math/big"

    "github.com/pkg/errors"
    ssz "github.com/prysmaticlabs/fastssz"
    consensus_types "github.com/prysmaticlabs/prysm/v4/consensus-types"
    "github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
@@ -162,9 +161,6 @@ func WrappedBuilderBidCapella(p *ethpb.BuilderBidCapella) (Bid, error) {

// Header returns the execution data interface.
func (b builderBidCapella) Header() (interfaces.ExecutionData, error) {
    if b.p == nil {
        return nil, errors.New("builder bid is nil")
    }
    // We have to convert big endian to little endian because the value is coming from the execution layer.
    v := big.NewInt(0).SetBytes(bytesutil.ReverseByteOrder(b.p.Value))
    return blocks.WrappedExecutionPayloadHeaderCapella(b.p.Header, math.WeiToGwei(v))

@@ -14,14 +14,17 @@ import (
|
||||
eth "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
|
||||
)
|
||||
|
||||
// SignedValidatorRegistration a struct for signed validator registrations.
|
||||
type SignedValidatorRegistration struct {
|
||||
*eth.SignedValidatorRegistrationV1
|
||||
}
|
||||
|
||||
// ValidatorRegistration a struct for validator registrations.
|
||||
type ValidatorRegistration struct {
|
||||
*eth.ValidatorRegistrationV1
|
||||
}
|
||||
|
||||
// MarshalJSON returns a json representation copy of signed validator registration.
|
||||
func (r *SignedValidatorRegistration) MarshalJSON() ([]byte, error) {
|
||||
return json.Marshal(struct {
|
||||
Message *ValidatorRegistration `json:"message"`
|
||||
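These builder types follow one pattern throughout: embed the generated protobuf message in a local wrapper and give the wrapper its own MarshalJSON/UnmarshalJSON so the wire format matches the builder API (hex strings for byte fields, decimal strings for numbers). A compact sketch of the pattern with hypothetical stand-in types (the real messages come from the eth protobuf package):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ValidatorRegistrationV1 is a hypothetical stand-in for the generated protobuf
// message; the real one lives in the eth protobuf package.
type ValidatorRegistrationV1 struct {
	FeeRecipient []byte
	GasLimit     uint64
}

// ValidatorRegistration embeds the proto and overrides its JSON form, which is
// the wrapping pattern these builder types use.
type ValidatorRegistration struct {
	*ValidatorRegistrationV1
}

// MarshalJSON emits the builder-API shape: hex-encoded bytes and stringified numbers.
func (r *ValidatorRegistration) MarshalJSON() ([]byte, error) {
	return json.Marshal(struct {
		FeeRecipient string `json:"fee_recipient"`
		GasLimit     string `json:"gas_limit"`
	}{
		FeeRecipient: fmt.Sprintf("%#x", r.FeeRecipient),
		GasLimit:     fmt.Sprintf("%d", r.GasLimit),
	})
}

func main() {
	r := &ValidatorRegistration{&ValidatorRegistrationV1{
		FeeRecipient: []byte{0xab, 0xcd},
		GasLimit:     30000000,
	}}
	out, _ := json.Marshal(r)
	fmt.Println(string(out)) // {"fee_recipient":"0xabcd","gas_limit":"30000000"}
}
```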
@@ -32,6 +35,7 @@ func (r *SignedValidatorRegistration) MarshalJSON() ([]byte, error) {
|
||||
})
|
||||
}
|
||||
|
||||
// UnmarshalJSON returns a byte representation of signed validator registration from json.
|
||||
func (r *SignedValidatorRegistration) UnmarshalJSON(b []byte) error {
|
||||
if r.SignedValidatorRegistrationV1 == nil {
|
||||
r.SignedValidatorRegistrationV1 = ð.SignedValidatorRegistrationV1{}
|
||||
@@ -48,6 +52,7 @@ func (r *SignedValidatorRegistration) UnmarshalJSON(b []byte) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// MarshalJSON returns a json representation copy of validator registration.
|
||||
func (r *ValidatorRegistration) MarshalJSON() ([]byte, error) {
|
||||
return json.Marshal(struct {
|
||||
FeeRecipient hexutil.Bytes `json:"fee_recipient"`
|
||||
@@ -62,6 +67,7 @@ func (r *ValidatorRegistration) MarshalJSON() ([]byte, error) {
|
||||
})
|
||||
}
|
||||
|
||||
// UnmarshalJSON returns a byte representation of validator registration from json.
|
||||
func (r *ValidatorRegistration) UnmarshalJSON(b []byte) error {
|
||||
if r.ValidatorRegistrationV1 == nil {
|
||||
r.ValidatorRegistrationV1 = ð.ValidatorRegistrationV1{}
|
||||
@@ -92,6 +98,7 @@ func (r *ValidatorRegistration) UnmarshalJSON(b []byte) error {
|
||||
var errInvalidUint256 = errors.New("invalid Uint256")
|
||||
var errDecodeUint256 = errors.New("unable to decode into Uint256")
|
||||
|
||||
// Uint256 a wrapper representation of big.Int
|
||||
type Uint256 struct {
|
||||
*big.Int
|
||||
}
|
||||
@@ -118,7 +125,7 @@ func sszBytesToUint256(b []byte) (Uint256, error) {
    return Uint256{Int: bi}, nil
}

// SSZBytes creates an ssz-style (little-endian byte slice) representation of the Uint256
// SSZBytes creates an ssz-style (little-endian byte slice) representation of the Uint256.
func (s Uint256) SSZBytes() []byte {
    if !isValidUint256(s.Int) {
        return []byte{}
@@ -126,18 +133,19 @@ func (s Uint256) SSZBytes() []byte {
    return bytesutil.PadTo(bytesutil.ReverseByteOrder(s.Int.Bytes()), 32)
}

// UnmarshalJSON takes in a byte array and unmarshals the value in Uint256
func (s *Uint256) UnmarshalJSON(t []byte) error {
    start := 0
    end := len(t)
    if t[0] == '"' {
        start += 1
    if len(t) < 2 {
        return errors.Errorf("provided Uint256 json string is too short: %s", string(t))
    }
    if t[end-1] == '"' {
        end -= 1
    if t[0] != '"' || t[end-1] != '"' {
        return errors.Errorf("provided Uint256 json string is malformed: %s", string(t))
    }
    return s.UnmarshalText(t[start:end])
    return s.UnmarshalText(t[1 : end-1])
}

// UnmarshalText takes in a byte array and unmarshals the text in Uint256
func (s *Uint256) UnmarshalText(t []byte) error {
    if s.Int == nil {
        s.Int = big.NewInt(0)
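The rewritten Uint256.UnmarshalJSON above rejects inputs that are too short or not properly quoted before handing the inner text to UnmarshalText, instead of silently trimming whatever quotes happen to be present. A self-contained sketch of the same validation on a hypothetical stand-in type (the real type wraps big.Int the same way but uses the repo's error helpers):

```go
package main

import (
	"fmt"
	"math/big"
)

// U256 is a hypothetical stand-in for the Uint256 wrapper: a big.Int that is
// encoded in JSON as a quoted decimal string.
type U256 struct{ *big.Int }

// UnmarshalJSON mirrors the stricter logic in the diff: the payload must be at
// least two bytes long and must both start and end with a double quote before
// the inner text is parsed.
func (u *U256) UnmarshalJSON(t []byte) error {
	end := len(t)
	if end < 2 {
		return fmt.Errorf("provided json string is too short: %s", string(t))
	}
	if t[0] != '"' || t[end-1] != '"' {
		return fmt.Errorf("provided json string is malformed: %s", string(t))
	}
	return u.UnmarshalText(t[1 : end-1])
}

// UnmarshalText parses the unquoted decimal text into the embedded big.Int.
func (u *U256) UnmarshalText(t []byte) error {
	if u.Int == nil {
		u.Int = new(big.Int)
	}
	if _, ok := u.Int.SetString(string(t), 10); !ok {
		return fmt.Errorf("unable to decode %q into a big integer", string(t))
	}
	return nil
}

func main() {
	var v U256
	fmt.Println(v.UnmarshalJSON([]byte(`"12345"`)), v.String()) // <nil> 12345
	fmt.Println(v.UnmarshalJSON([]byte(`"`)))                   // too short
	fmt.Println(v.UnmarshalJSON([]byte(`"12`)))                 // malformed
}
```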
@@ -153,6 +161,7 @@ func (s *Uint256) UnmarshalText(t []byte) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// MarshalJSON returns a json byte representation of Uint256.
|
||||
func (s Uint256) MarshalJSON() ([]byte, error) {
|
||||
t, err := s.MarshalText()
|
||||
if err != nil {
|
||||
@@ -163,6 +172,7 @@ func (s Uint256) MarshalJSON() ([]byte, error) {
|
||||
return t, nil
|
||||
}
|
||||
|
||||
// MarshalText returns a text byte representation of Uint256.
|
||||
func (s Uint256) MarshalText() ([]byte, error) {
|
||||
if !isValidUint256(s.Int) {
|
||||
return nil, errors.Wrapf(errInvalidUint256, "value=%s", s.Int)
|
||||
@@ -170,22 +180,27 @@ func (s Uint256) MarshalText() ([]byte, error) {
|
||||
return []byte(s.String()), nil
|
||||
}
|
||||
|
||||
// Uint64String is a custom type that allows marshalling from text to uint64 and vice versa.
|
||||
type Uint64String uint64
|
||||
|
||||
// UnmarshalText takes a byte array and unmarshals the text in Uint64String.
|
||||
func (s *Uint64String) UnmarshalText(t []byte) error {
|
||||
u, err := strconv.ParseUint(string(t), 10, 64)
|
||||
*s = Uint64String(u)
|
||||
return err
|
||||
}
|
||||
|
||||
// MarshalText returns a byte representation of the text from Uint64String.
|
||||
func (s Uint64String) MarshalText() ([]byte, error) {
|
||||
return []byte(fmt.Sprintf("%d", s)), nil
|
||||
}
|
||||
|
||||
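Uint64String above exists because the builder API, like most beacon-node JSON APIs, encodes 64-bit integers as decimal strings. Implementing encoding.TextMarshaler and TextUnmarshaler is enough for encoding/json to quote the value on the way out and parse it on the way in. A self-contained sketch with a hypothetical stand-in type:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

// U64Str is a hypothetical stand-in for Uint64String: a uint64 that is encoded
// as a decimal string in JSON.
type U64Str uint64

// UnmarshalText parses the decimal text back into a uint64.
func (s *U64Str) UnmarshalText(t []byte) error {
	u, err := strconv.ParseUint(string(t), 10, 64)
	*s = U64Str(u)
	return err
}

// MarshalText renders the value as decimal text; encoding/json then quotes it.
func (s U64Str) MarshalText() ([]byte, error) {
	return []byte(strconv.FormatUint(uint64(s), 10)), nil
}

// header is a hypothetical struct showing the type in use.
type header struct {
	GasLimit U64Str `json:"gas_limit"`
}

func main() {
	out, _ := json.Marshal(header{GasLimit: 30000000})
	fmt.Println(string(out)) // {"gas_limit":"30000000"}

	var h header
	_ = json.Unmarshal([]byte(`{"gas_limit":"25000000"}`), &h)
	fmt.Println(h.GasLimit) // 25000000
}
```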
// VersionResponse is a JSON representation of a field in the builder API header response.
|
||||
type VersionResponse struct {
|
||||
Version string `json:"version"`
|
||||
}
|
||||
|
||||
// ExecHeaderResponse is a JSON representation of the builder API header response for Bellatrix.
|
||||
type ExecHeaderResponse struct {
|
||||
Version string `json:"version"`
|
||||
Data struct {
|
||||
@@ -194,6 +209,7 @@ type ExecHeaderResponse struct {
|
||||
} `json:"data"`
|
||||
}
|
||||
|
||||
// ToProto returns a SignedBuilderBid from ExecHeaderResponse for Bellatrix.
|
||||
func (ehr *ExecHeaderResponse) ToProto() (*eth.SignedBuilderBid, error) {
|
||||
bb, err := ehr.Data.Message.ToProto()
|
||||
if err != nil {
|
||||
@@ -205,6 +221,7 @@ func (ehr *ExecHeaderResponse) ToProto() (*eth.SignedBuilderBid, error) {
|
||||
}, nil
|
||||
}
|
||||
|
||||
// ToProto returns a BuilderBid Proto for Bellatrix.
|
||||
func (bb *BuilderBid) ToProto() (*eth.BuilderBid, error) {
|
||||
header, err := bb.Header.ToProto()
|
||||
if err != nil {
|
||||
@@ -217,31 +234,34 @@ func (bb *BuilderBid) ToProto() (*eth.BuilderBid, error) {
|
||||
}, nil
|
||||
}
|
||||
|
||||
// ToProto returns a ExecutionPayloadHeader for Bellatrix.
|
||||
func (h *ExecutionPayloadHeader) ToProto() (*v1.ExecutionPayloadHeader, error) {
|
||||
return &v1.ExecutionPayloadHeader{
|
||||
ParentHash: h.ParentHash,
|
||||
FeeRecipient: h.FeeRecipient,
|
||||
StateRoot: h.StateRoot,
|
||||
ReceiptsRoot: h.ReceiptsRoot,
|
||||
LogsBloom: h.LogsBloom,
|
||||
PrevRandao: h.PrevRandao,
|
||||
ParentHash: bytesutil.SafeCopyBytes(h.ParentHash),
|
||||
FeeRecipient: bytesutil.SafeCopyBytes(h.FeeRecipient),
|
||||
StateRoot: bytesutil.SafeCopyBytes(h.StateRoot),
|
||||
ReceiptsRoot: bytesutil.SafeCopyBytes(h.ReceiptsRoot),
|
||||
LogsBloom: bytesutil.SafeCopyBytes(h.LogsBloom),
|
||||
PrevRandao: bytesutil.SafeCopyBytes(h.PrevRandao),
|
||||
BlockNumber: uint64(h.BlockNumber),
|
||||
GasLimit: uint64(h.GasLimit),
|
||||
GasUsed: uint64(h.GasUsed),
|
||||
Timestamp: uint64(h.Timestamp),
|
||||
ExtraData: h.ExtraData,
|
||||
BaseFeePerGas: h.BaseFeePerGas.SSZBytes(),
|
||||
BlockHash: h.BlockHash,
|
||||
TransactionsRoot: h.TransactionsRoot,
|
||||
ExtraData: bytesutil.SafeCopyBytes(h.ExtraData),
|
||||
BaseFeePerGas: bytesutil.SafeCopyBytes(h.BaseFeePerGas.SSZBytes()),
|
||||
BlockHash: bytesutil.SafeCopyBytes(h.BlockHash),
|
||||
TransactionsRoot: bytesutil.SafeCopyBytes(h.TransactionsRoot),
|
||||
}, nil
|
||||
}
|
||||
|
||||
// BuilderBid is part of ExecHeaderResponse for Bellatrix.
|
||||
type BuilderBid struct {
|
||||
Header *ExecutionPayloadHeader `json:"header"`
|
||||
Value Uint256 `json:"value"`
|
||||
Pubkey hexutil.Bytes `json:"pubkey"`
|
||||
}
|
||||
|
||||
// ExecutionPayloadHeader is a field in BuilderBid.
|
||||
type ExecutionPayloadHeader struct {
|
||||
ParentHash hexutil.Bytes `json:"parent_hash"`
|
||||
FeeRecipient hexutil.Bytes `json:"fee_recipient"`
|
||||
@@ -260,6 +280,7 @@ type ExecutionPayloadHeader struct {
|
||||
*v1.ExecutionPayloadHeader
|
||||
}
|
||||
|
||||
// MarshalJSON returns the JSON bytes representation of ExecutionPayloadHeader.
|
||||
func (h *ExecutionPayloadHeader) MarshalJSON() ([]byte, error) {
|
||||
type MarshalCaller ExecutionPayloadHeader
|
||||
baseFeePerGas, err := sszBytesToUint256(h.ExecutionPayloadHeader.BaseFeePerGas)
|
||||
@@ -284,6 +305,7 @@ func (h *ExecutionPayloadHeader) MarshalJSON() ([]byte, error) {
|
||||
})
|
||||
}
|
||||
|
||||
// UnmarshalJSON takes in a JSON byte array and sets ExecutionPayloadHeader.
|
||||
func (h *ExecutionPayloadHeader) UnmarshalJSON(b []byte) error {
|
||||
type UnmarshalCaller ExecutionPayloadHeader
|
||||
uc := &UnmarshalCaller{}
|
||||
@@ -297,11 +319,13 @@ func (h *ExecutionPayloadHeader) UnmarshalJSON(b []byte) error {
|
||||
return err
|
||||
}
|
||||
|
||||
// ExecPayloadResponse is the builder API /eth/v1/builder/blinded_blocks for Bellatrix.
|
||||
type ExecPayloadResponse struct {
|
||||
Version string `json:"version"`
|
||||
Data ExecutionPayload `json:"data"`
|
||||
}
|
||||
|
||||
// ExecutionPayload is a field of ExecPayloadResponse
|
||||
type ExecutionPayload struct {
|
||||
ParentHash hexutil.Bytes `json:"parent_hash"`
|
||||
FeeRecipient hexutil.Bytes `json:"fee_recipient"`
|
||||
@@ -319,29 +343,31 @@ type ExecutionPayload struct {
|
||||
Transactions []hexutil.Bytes `json:"transactions"`
|
||||
}
|
||||
|
||||
// ToProto returns a ExecutionPayload Proto from ExecPayloadResponse
|
||||
func (r *ExecPayloadResponse) ToProto() (*v1.ExecutionPayload, error) {
|
||||
return r.Data.ToProto()
|
||||
}
|
||||
|
||||
// ToProto returns a ExecutionPayload Proto
|
||||
func (p *ExecutionPayload) ToProto() (*v1.ExecutionPayload, error) {
|
||||
txs := make([][]byte, len(p.Transactions))
|
||||
for i := range p.Transactions {
|
||||
txs[i] = p.Transactions[i]
|
||||
txs[i] = bytesutil.SafeCopyBytes(p.Transactions[i])
|
||||
}
|
||||
return &v1.ExecutionPayload{
|
||||
ParentHash: p.ParentHash,
|
||||
FeeRecipient: p.FeeRecipient,
|
||||
StateRoot: p.StateRoot,
|
||||
ReceiptsRoot: p.ReceiptsRoot,
|
||||
LogsBloom: p.LogsBloom,
|
||||
PrevRandao: p.PrevRandao,
|
||||
ParentHash: bytesutil.SafeCopyBytes(p.ParentHash),
|
||||
FeeRecipient: bytesutil.SafeCopyBytes(p.FeeRecipient),
|
||||
StateRoot: bytesutil.SafeCopyBytes(p.StateRoot),
|
||||
ReceiptsRoot: bytesutil.SafeCopyBytes(p.ReceiptsRoot),
|
||||
LogsBloom: bytesutil.SafeCopyBytes(p.LogsBloom),
|
||||
PrevRandao: bytesutil.SafeCopyBytes(p.PrevRandao),
|
||||
BlockNumber: uint64(p.BlockNumber),
|
||||
GasLimit: uint64(p.GasLimit),
|
||||
GasUsed: uint64(p.GasUsed),
|
||||
Timestamp: uint64(p.Timestamp),
|
||||
ExtraData: p.ExtraData,
|
||||
BaseFeePerGas: p.BaseFeePerGas.SSZBytes(),
|
||||
BlockHash: p.BlockHash,
|
||||
ExtraData: bytesutil.SafeCopyBytes(p.ExtraData),
|
||||
BaseFeePerGas: bytesutil.SafeCopyBytes(p.BaseFeePerGas.SSZBytes()),
|
||||
BlockHash: bytesutil.SafeCopyBytes(p.BlockHash),
|
||||
Transactions: txs,
|
||||
}, nil
|
||||
}
|
||||
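The ToProto and FromProto conversions above now pass every byte field through bytesutil.SafeCopyBytes. The motivation is aliasing: without a copy, the returned proto shares backing arrays with the JSON-decoded struct, so mutating one later silently changes the other. A small sketch of the idea, with hypothetical stand-in types and a local helper assumed to behave like SafeCopyBytes:

```go
package main

import "fmt"

// safeCopyBytes sketches the defensive copy the diff swaps in: the converted
// message gets its own backing array instead of aliasing the source buffer.
func safeCopyBytes(b []byte) []byte {
	if b == nil {
		return nil
	}
	out := make([]byte, len(b))
	copy(out, b)
	return out
}

// jsonPayload / protoPayload are hypothetical stand-ins for the JSON struct
// and the protobuf message used in the real conversion.
type jsonPayload struct{ BlockHash []byte }
type protoPayload struct{ BlockHash []byte }

// toProto copies each byte field so the proto owns its data.
func toProto(p *jsonPayload) *protoPayload {
	return &protoPayload{BlockHash: safeCopyBytes(p.BlockHash)}
}

func main() {
	src := &jsonPayload{BlockHash: []byte{0xaa, 0xbb}}
	dst := toProto(src)
	src.BlockHash[0] = 0xff           // mutate the source buffer after conversion
	fmt.Printf("%x\n", dst.BlockHash) // aabb: the proto is unaffected
}
```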
@@ -355,22 +381,22 @@ func FromProto(payload *v1.ExecutionPayload) (ExecutionPayload, error) {
|
||||
}
|
||||
txs := make([]hexutil.Bytes, len(payload.Transactions))
|
||||
for i := range payload.Transactions {
|
||||
txs[i] = payload.Transactions[i]
|
||||
txs[i] = bytesutil.SafeCopyBytes(payload.Transactions[i])
|
||||
}
|
||||
return ExecutionPayload{
|
||||
ParentHash: payload.ParentHash,
|
||||
FeeRecipient: payload.FeeRecipient,
|
||||
StateRoot: payload.StateRoot,
|
||||
ReceiptsRoot: payload.ReceiptsRoot,
|
||||
LogsBloom: payload.LogsBloom,
|
||||
PrevRandao: payload.PrevRandao,
|
||||
ParentHash: bytesutil.SafeCopyBytes(payload.ParentHash),
|
||||
FeeRecipient: bytesutil.SafeCopyBytes(payload.FeeRecipient),
|
||||
StateRoot: bytesutil.SafeCopyBytes(payload.StateRoot),
|
||||
ReceiptsRoot: bytesutil.SafeCopyBytes(payload.ReceiptsRoot),
|
||||
LogsBloom: bytesutil.SafeCopyBytes(payload.LogsBloom),
|
||||
PrevRandao: bytesutil.SafeCopyBytes(payload.PrevRandao),
|
||||
BlockNumber: Uint64String(payload.BlockNumber),
|
||||
GasLimit: Uint64String(payload.GasLimit),
|
||||
GasUsed: Uint64String(payload.GasUsed),
|
||||
Timestamp: Uint64String(payload.Timestamp),
|
||||
ExtraData: payload.ExtraData,
|
||||
ExtraData: bytesutil.SafeCopyBytes(payload.ExtraData),
|
||||
BaseFeePerGas: bFee,
|
||||
BlockHash: payload.BlockHash,
|
||||
BlockHash: bytesutil.SafeCopyBytes(payload.BlockHash),
|
||||
Transactions: txs,
|
||||
}, nil
|
||||
}
|
||||
@@ -384,36 +410,37 @@ func FromProtoCapella(payload *v1.ExecutionPayloadCapella) (ExecutionPayloadCape
|
||||
}
|
||||
txs := make([]hexutil.Bytes, len(payload.Transactions))
|
||||
for i := range payload.Transactions {
|
||||
txs[i] = payload.Transactions[i]
|
||||
txs[i] = bytesutil.SafeCopyBytes(payload.Transactions[i])
|
||||
}
|
||||
withdrawals := make([]Withdrawal, len(payload.Withdrawals))
|
||||
for i, w := range payload.Withdrawals {
|
||||
withdrawals[i] = Withdrawal{
|
||||
Index: Uint256{Int: big.NewInt(0).SetUint64(w.Index)},
|
||||
ValidatorIndex: Uint256{Int: big.NewInt(0).SetUint64(uint64(w.ValidatorIndex))},
|
||||
Address: w.Address,
|
||||
Address: bytesutil.SafeCopyBytes(w.Address),
|
||||
Amount: Uint256{Int: big.NewInt(0).SetUint64(w.Amount)},
|
||||
}
|
||||
}
|
||||
return ExecutionPayloadCapella{
|
||||
ParentHash: payload.ParentHash,
|
||||
FeeRecipient: payload.FeeRecipient,
|
||||
StateRoot: payload.StateRoot,
|
||||
ReceiptsRoot: payload.ReceiptsRoot,
|
||||
LogsBloom: payload.LogsBloom,
|
||||
PrevRandao: payload.PrevRandao,
|
||||
ParentHash: bytesutil.SafeCopyBytes(payload.ParentHash),
|
||||
FeeRecipient: bytesutil.SafeCopyBytes(payload.FeeRecipient),
|
||||
StateRoot: bytesutil.SafeCopyBytes(payload.StateRoot),
|
||||
ReceiptsRoot: bytesutil.SafeCopyBytes(payload.ReceiptsRoot),
|
||||
LogsBloom: bytesutil.SafeCopyBytes(payload.LogsBloom),
|
||||
PrevRandao: bytesutil.SafeCopyBytes(payload.PrevRandao),
|
||||
BlockNumber: Uint64String(payload.BlockNumber),
|
||||
GasLimit: Uint64String(payload.GasLimit),
|
||||
GasUsed: Uint64String(payload.GasUsed),
|
||||
Timestamp: Uint64String(payload.Timestamp),
|
||||
ExtraData: payload.ExtraData,
|
||||
ExtraData: bytesutil.SafeCopyBytes(payload.ExtraData),
|
||||
BaseFeePerGas: bFee,
|
||||
BlockHash: payload.BlockHash,
|
||||
BlockHash: bytesutil.SafeCopyBytes(payload.BlockHash),
|
||||
Transactions: txs,
|
||||
Withdrawals: withdrawals,
|
||||
}, nil
|
||||
}
|
||||
|
||||
// ExecHeaderResponseCapella is the response of builder API /eth/v1/builder/header/{slot}/{parent_hash}/{pubkey} for Capella.
|
||||
type ExecHeaderResponseCapella struct {
|
||||
Data struct {
|
||||
Signature hexutil.Bytes `json:"signature"`
|
||||
@@ -421,6 +448,7 @@ type ExecHeaderResponseCapella struct {
|
||||
} `json:"data"`
|
||||
}
|
||||
|
||||
// ToProto returns a SignedBuilderBidCapella Proto from ExecHeaderResponseCapella.
|
||||
func (ehr *ExecHeaderResponseCapella) ToProto() (*eth.SignedBuilderBidCapella, error) {
|
||||
bb, err := ehr.Data.Message.ToProto()
|
||||
if err != nil {
|
||||
@@ -428,10 +456,11 @@ func (ehr *ExecHeaderResponseCapella) ToProto() (*eth.SignedBuilderBidCapella, e
|
||||
}
|
||||
return ð.SignedBuilderBidCapella{
|
||||
Message: bb,
|
||||
Signature: ehr.Data.Signature,
|
||||
Signature: bytesutil.SafeCopyBytes(ehr.Data.Signature),
|
||||
}, nil
|
||||
}
|
||||
|
||||
// ToProto returns a BuilderBidCapella Proto.
|
||||
func (bb *BuilderBidCapella) ToProto() (*eth.BuilderBidCapella, error) {
|
||||
header, err := bb.Header.ToProto()
|
||||
if err != nil {
|
||||
@@ -439,37 +468,40 @@ func (bb *BuilderBidCapella) ToProto() (*eth.BuilderBidCapella, error) {
|
||||
}
|
||||
return ð.BuilderBidCapella{
|
||||
Header: header,
|
||||
Value: bb.Value.SSZBytes(),
|
||||
Pubkey: bb.Pubkey,
|
||||
Value: bytesutil.SafeCopyBytes(bb.Value.SSZBytes()),
|
||||
Pubkey: bytesutil.SafeCopyBytes(bb.Pubkey),
|
||||
}, nil
|
||||
}
|
||||
|
||||
// ToProto returns a ExecutionPayloadHeaderCapella Proto
|
||||
func (h *ExecutionPayloadHeaderCapella) ToProto() (*v1.ExecutionPayloadHeaderCapella, error) {
|
||||
return &v1.ExecutionPayloadHeaderCapella{
|
||||
ParentHash: h.ParentHash,
|
||||
FeeRecipient: h.FeeRecipient,
|
||||
StateRoot: h.StateRoot,
|
||||
ReceiptsRoot: h.ReceiptsRoot,
|
||||
LogsBloom: h.LogsBloom,
|
||||
PrevRandao: h.PrevRandao,
|
||||
ParentHash: bytesutil.SafeCopyBytes(h.ParentHash),
|
||||
FeeRecipient: bytesutil.SafeCopyBytes(h.FeeRecipient),
|
||||
StateRoot: bytesutil.SafeCopyBytes(h.StateRoot),
|
||||
ReceiptsRoot: bytesutil.SafeCopyBytes(h.ReceiptsRoot),
|
||||
LogsBloom: bytesutil.SafeCopyBytes(h.LogsBloom),
|
||||
PrevRandao: bytesutil.SafeCopyBytes(h.PrevRandao),
|
||||
BlockNumber: uint64(h.BlockNumber),
|
||||
GasLimit: uint64(h.GasLimit),
|
||||
GasUsed: uint64(h.GasUsed),
|
||||
Timestamp: uint64(h.Timestamp),
|
||||
ExtraData: h.ExtraData,
|
||||
BaseFeePerGas: h.BaseFeePerGas.SSZBytes(),
|
||||
BlockHash: h.BlockHash,
|
||||
TransactionsRoot: h.TransactionsRoot,
|
||||
WithdrawalsRoot: h.WithdrawalsRoot,
|
||||
ExtraData: bytesutil.SafeCopyBytes(h.ExtraData),
|
||||
BaseFeePerGas: bytesutil.SafeCopyBytes(h.BaseFeePerGas.SSZBytes()),
|
||||
BlockHash: bytesutil.SafeCopyBytes(h.BlockHash),
|
||||
TransactionsRoot: bytesutil.SafeCopyBytes(h.TransactionsRoot),
|
||||
WithdrawalsRoot: bytesutil.SafeCopyBytes(h.WithdrawalsRoot),
|
||||
}, nil
|
||||
}
|
||||
|
||||
// BuilderBidCapella is field of ExecHeaderResponseCapella.
|
||||
type BuilderBidCapella struct {
|
||||
Header *ExecutionPayloadHeaderCapella `json:"header"`
|
||||
Value Uint256 `json:"value"`
|
||||
Pubkey hexutil.Bytes `json:"pubkey"`
|
||||
}
|
||||
|
||||
// ExecutionPayloadHeaderCapella is a field in BuilderBidCapella.
|
||||
type ExecutionPayloadHeaderCapella struct {
|
||||
ParentHash hexutil.Bytes `json:"parent_hash"`
|
||||
FeeRecipient hexutil.Bytes `json:"fee_recipient"`
|
||||
@@ -489,6 +521,7 @@ type ExecutionPayloadHeaderCapella struct {
|
||||
*v1.ExecutionPayloadHeaderCapella
|
||||
}
|
||||
|
||||
// MarshalJSON returns a JSON byte representation of ExecutionPayloadHeaderCapella.
|
||||
func (h *ExecutionPayloadHeaderCapella) MarshalJSON() ([]byte, error) {
|
||||
type MarshalCaller ExecutionPayloadHeaderCapella
|
||||
baseFeePerGas, err := sszBytesToUint256(h.ExecutionPayloadHeaderCapella.BaseFeePerGas)
|
||||
@@ -514,6 +547,7 @@ func (h *ExecutionPayloadHeaderCapella) MarshalJSON() ([]byte, error) {
|
||||
})
|
||||
}
|
||||
|
||||
// UnmarshalJSON takes a JSON byte array and sets ExecutionPayloadHeaderCapella.
|
||||
func (h *ExecutionPayloadHeaderCapella) UnmarshalJSON(b []byte) error {
|
||||
type UnmarshalCaller ExecutionPayloadHeaderCapella
|
||||
uc := &UnmarshalCaller{}
|
||||
@@ -527,11 +561,13 @@ func (h *ExecutionPayloadHeaderCapella) UnmarshalJSON(b []byte) error {
|
||||
return err
|
||||
}
|
||||
|
||||
// ExecPayloadResponseCapella is the builder API /eth/v1/builder/blinded_blocks for Capella.
|
||||
type ExecPayloadResponseCapella struct {
|
||||
Version string `json:"version"`
|
||||
Data ExecutionPayloadCapella `json:"data"`
|
||||
}
|
||||
|
||||
// ExecutionPayloadCapella is a field of ExecPayloadResponseCapella.
|
||||
type ExecutionPayloadCapella struct {
|
||||
ParentHash hexutil.Bytes `json:"parent_hash"`
|
||||
FeeRecipient hexutil.Bytes `json:"fee_recipient"`
|
||||
@@ -550,43 +586,46 @@ type ExecutionPayloadCapella struct {
|
||||
Withdrawals []Withdrawal `json:"withdrawals"`
|
||||
}
|
||||
|
||||
// ToProto returns a ExecutionPayloadCapella Proto.
|
||||
func (r *ExecPayloadResponseCapella) ToProto() (*v1.ExecutionPayloadCapella, error) {
|
||||
return r.Data.ToProto()
|
||||
}
|
||||
|
||||
// ToProto returns a ExecutionPayloadCapella Proto.
|
||||
func (p *ExecutionPayloadCapella) ToProto() (*v1.ExecutionPayloadCapella, error) {
|
||||
txs := make([][]byte, len(p.Transactions))
|
||||
for i := range p.Transactions {
|
||||
txs[i] = p.Transactions[i]
|
||||
txs[i] = bytesutil.SafeCopyBytes(p.Transactions[i])
|
||||
}
|
||||
withdrawals := make([]*v1.Withdrawal, len(p.Withdrawals))
|
||||
for i, w := range p.Withdrawals {
|
||||
withdrawals[i] = &v1.Withdrawal{
|
||||
Index: w.Index.Uint64(),
|
||||
ValidatorIndex: types.ValidatorIndex(w.ValidatorIndex.Uint64()),
|
||||
Address: w.Address,
|
||||
Address: bytesutil.SafeCopyBytes(w.Address),
|
||||
Amount: w.Amount.Uint64(),
|
||||
}
|
||||
}
|
||||
return &v1.ExecutionPayloadCapella{
|
||||
ParentHash: p.ParentHash,
|
||||
FeeRecipient: p.FeeRecipient,
|
||||
StateRoot: p.StateRoot,
|
||||
ReceiptsRoot: p.ReceiptsRoot,
|
||||
LogsBloom: p.LogsBloom,
|
||||
PrevRandao: p.PrevRandao,
|
||||
ParentHash: bytesutil.SafeCopyBytes(p.ParentHash),
|
||||
FeeRecipient: bytesutil.SafeCopyBytes(p.FeeRecipient),
|
||||
StateRoot: bytesutil.SafeCopyBytes(p.StateRoot),
|
||||
ReceiptsRoot: bytesutil.SafeCopyBytes(p.ReceiptsRoot),
|
||||
LogsBloom: bytesutil.SafeCopyBytes(p.LogsBloom),
|
||||
PrevRandao: bytesutil.SafeCopyBytes(p.PrevRandao),
|
||||
BlockNumber: uint64(p.BlockNumber),
|
||||
GasLimit: uint64(p.GasLimit),
|
||||
GasUsed: uint64(p.GasUsed),
|
||||
Timestamp: uint64(p.Timestamp),
|
||||
ExtraData: p.ExtraData,
|
||||
BaseFeePerGas: p.BaseFeePerGas.SSZBytes(),
|
||||
BlockHash: p.BlockHash,
|
||||
ExtraData: bytesutil.SafeCopyBytes(p.ExtraData),
|
||||
BaseFeePerGas: bytesutil.SafeCopyBytes(p.BaseFeePerGas.SSZBytes()),
|
||||
BlockHash: bytesutil.SafeCopyBytes(p.BlockHash),
|
||||
Transactions: txs,
|
||||
Withdrawals: withdrawals,
|
||||
}, nil
|
||||
}
|
||||
|
||||
// Withdrawal is a field of ExecutionPayloadCapella.
|
||||
type Withdrawal struct {
|
||||
Index Uint256 `json:"index"`
|
||||
ValidatorIndex Uint256 `json:"validator_index"`
|
||||
@@ -594,18 +633,22 @@ type Withdrawal struct {
|
||||
Amount Uint256 `json:"amount"`
|
||||
}
|
||||
|
||||
// SignedBlindedBeaconBlockBellatrix is the request object for builder API /eth/v1/builder/blinded_blocks.
|
||||
type SignedBlindedBeaconBlockBellatrix struct {
|
||||
*eth.SignedBlindedBeaconBlockBellatrix
|
||||
}
|
||||
|
||||
// BlindedBeaconBlockBellatrix is a field in SignedBlindedBeaconBlockBellatrix.
|
||||
type BlindedBeaconBlockBellatrix struct {
|
||||
*eth.BlindedBeaconBlockBellatrix
|
||||
}
|
||||
|
||||
// BlindedBeaconBlockBodyBellatrix is a field in BlindedBeaconBlockBellatrix.
|
||||
type BlindedBeaconBlockBodyBellatrix struct {
|
||||
*eth.BlindedBeaconBlockBodyBellatrix
|
||||
}
|
||||
|
||||
// MarshalJSON returns a JSON byte array representation of SignedBlindedBeaconBlockBellatrix.
|
||||
func (r *SignedBlindedBeaconBlockBellatrix) MarshalJSON() ([]byte, error) {
|
||||
return json.Marshal(struct {
|
||||
Message *BlindedBeaconBlockBellatrix `json:"message"`
|
||||
@@ -616,6 +659,7 @@ func (r *SignedBlindedBeaconBlockBellatrix) MarshalJSON() ([]byte, error) {
|
||||
})
|
||||
}
|
||||
|
||||
// MarshalJSON returns a JSON byte array representation of BlindedBeaconBlockBellatrix.
|
||||
func (b *BlindedBeaconBlockBellatrix) MarshalJSON() ([]byte, error) {
|
||||
return json.Marshal(struct {
|
||||
Slot string `json:"slot"`
|
||||
@@ -632,10 +676,12 @@ func (b *BlindedBeaconBlockBellatrix) MarshalJSON() ([]byte, error) {
|
||||
})
|
||||
}
|
||||
|
||||
// ProposerSlashing is a field in BlindedBeaconBlockBodyCapella.
|
||||
type ProposerSlashing struct {
|
||||
*eth.ProposerSlashing
|
||||
}
|
||||
|
||||
// MarshalJSON returns a JSON byte array representation of ProposerSlashing.
|
||||
func (s *ProposerSlashing) MarshalJSON() ([]byte, error) {
|
||||
return json.Marshal(struct {
|
||||
SignedHeader1 *SignedBeaconBlockHeader `json:"signed_header_1"`
|
||||
@@ -646,10 +692,12 @@ func (s *ProposerSlashing) MarshalJSON() ([]byte, error) {
|
||||
})
|
||||
}
|
||||
|
||||
// SignedBeaconBlockHeader is a field of ProposerSlashing.
|
||||
type SignedBeaconBlockHeader struct {
|
||||
*eth.SignedBeaconBlockHeader
|
||||
}
|
||||
|
||||
// MarshalJSON returns a JSON byte array representation of SignedBeaconBlockHeader.
|
||||
func (h *SignedBeaconBlockHeader) MarshalJSON() ([]byte, error) {
|
||||
return json.Marshal(struct {
|
||||
Header *BeaconBlockHeader `json:"message"`
|
||||
@@ -660,10 +708,12 @@ func (h *SignedBeaconBlockHeader) MarshalJSON() ([]byte, error) {
|
||||
})
|
||||
}
|
||||
|
||||
// BeaconBlockHeader is a field of SignedBeaconBlockHeader.
|
||||
type BeaconBlockHeader struct {
|
||||
*eth.BeaconBlockHeader
|
||||
}
|
||||
|
||||
// MarshalJSON returns a JSON byte array representation of BeaconBlockHeader.
|
||||
func (h *BeaconBlockHeader) MarshalJSON() ([]byte, error) {
|
||||
return json.Marshal(struct {
|
||||
Slot string `json:"slot"`
|
||||
@@ -680,10 +730,12 @@ func (h *BeaconBlockHeader) MarshalJSON() ([]byte, error) {
|
||||
})
|
||||
}
|
||||
|
||||
// IndexedAttestation is a field of AttesterSlashing.
|
||||
type IndexedAttestation struct {
|
||||
*eth.IndexedAttestation
|
||||
}
|
||||
|
||||
// MarshalJSON returns a JSON byte array representation of IndexedAttestation.
|
||||
func (a *IndexedAttestation) MarshalJSON() ([]byte, error) {
|
||||
indices := make([]string, len(a.IndexedAttestation.AttestingIndices))
|
||||
for i := range a.IndexedAttestation.AttestingIndices {
|
||||
@@ -700,10 +752,12 @@ func (a *IndexedAttestation) MarshalJSON() ([]byte, error) {
|
||||
})
|
||||
}
|
||||
|
||||
// AttesterSlashing is a field of a Beacon Block Body.
|
||||
type AttesterSlashing struct {
|
||||
*eth.AttesterSlashing
|
||||
}
|
||||
|
||||
// MarshalJSON returns a JSON byte array representation of AttesterSlashing.
|
||||
func (s *AttesterSlashing) MarshalJSON() ([]byte, error) {
|
||||
return json.Marshal(struct {
|
||||
Attestation1 *IndexedAttestation `json:"attestation_1"`
|
||||
@@ -714,10 +768,12 @@ func (s *AttesterSlashing) MarshalJSON() ([]byte, error) {
|
||||
})
|
||||
}
|
||||
|
||||
// Checkpoint is a field of AttestationData.
|
||||
type Checkpoint struct {
|
||||
*eth.Checkpoint
|
||||
}
|
||||
|
||||
// MarshalJSON returns a JSON byte array representation of Checkpoint.
|
||||
func (c *Checkpoint) MarshalJSON() ([]byte, error) {
|
||||
return json.Marshal(struct {
|
||||
Epoch string `json:"epoch"`
|
||||
@@ -728,10 +784,12 @@ func (c *Checkpoint) MarshalJSON() ([]byte, error) {
|
||||
})
|
||||
}
|
||||
|
||||
// AttestationData is a field of IndexedAttestation.
|
||||
type AttestationData struct {
|
||||
*eth.AttestationData
|
||||
}
|
||||
|
||||
// MarshalJSON returns a JSON byte array representation of AttestationData.
|
||||
func (a *AttestationData) MarshalJSON() ([]byte, error) {
|
||||
return json.Marshal(struct {
|
||||
Slot string `json:"slot"`
|
||||
@@ -748,10 +806,12 @@ func (a *AttestationData) MarshalJSON() ([]byte, error) {
|
||||
})
|
||||
}
|
||||
|
||||
// Attestation is a field of Beacon Block Body.
|
||||
type Attestation struct {
|
||||
*eth.Attestation
|
||||
}
|
||||
|
||||
// MarshalJSON returns a JSON byte array representation of Attestation.
|
||||
func (a *Attestation) MarshalJSON() ([]byte, error) {
|
||||
return json.Marshal(struct {
|
||||
AggregationBits hexutil.Bytes `json:"aggregation_bits"`
|
||||
@@ -764,10 +824,12 @@ func (a *Attestation) MarshalJSON() ([]byte, error) {
|
||||
})
|
||||
}
|
||||
|
||||
// DepositData is a field of Deposit.
|
||||
type DepositData struct {
|
||||
*eth.Deposit_Data
|
||||
}
|
||||
|
||||
// MarshalJSON returns a JSON byte array representation of DepositData.
|
||||
func (d *DepositData) MarshalJSON() ([]byte, error) {
|
||||
return json.Marshal(struct {
|
||||
PublicKey hexutil.Bytes `json:"pubkey"`
|
||||
@@ -782,10 +844,12 @@ func (d *DepositData) MarshalJSON() ([]byte, error) {
|
||||
})
|
||||
}
|
||||
|
||||
// Deposit is a field of Beacon Block Body.
|
||||
type Deposit struct {
|
||||
*eth.Deposit
|
||||
}
|
||||
|
||||
// MarshalJSON returns a JSON byte array representation of Deposit.
|
||||
func (d *Deposit) MarshalJSON() ([]byte, error) {
|
||||
proof := make([]hexutil.Bytes, len(d.Proof))
|
||||
for i := range d.Proof {
|
||||
@@ -800,10 +864,12 @@ func (d *Deposit) MarshalJSON() ([]byte, error) {
|
||||
})
|
||||
}
|
||||
|
||||
// SignedVoluntaryExit is a field of Beacon Block Body.
|
||||
type SignedVoluntaryExit struct {
|
||||
*eth.SignedVoluntaryExit
|
||||
}
|
||||
|
||||
// MarshalJSON returns a JSON byte array representation of SignedVoluntaryExit.
|
||||
func (sve *SignedVoluntaryExit) MarshalJSON() ([]byte, error) {
|
||||
return json.Marshal(struct {
|
||||
Message *VoluntaryExit `json:"message"`
|
||||
@@ -814,10 +880,12 @@ func (sve *SignedVoluntaryExit) MarshalJSON() ([]byte, error) {
|
||||
})
|
||||
}
|
||||
|
||||
// VoluntaryExit is a field in SignedVoluntaryExit
|
||||
type VoluntaryExit struct {
|
||||
*eth.VoluntaryExit
|
||||
}
|
||||
|
||||
// MarshalJSON returns a JSON byte array representation of VoluntaryExit
|
||||
func (ve *VoluntaryExit) MarshalJSON() ([]byte, error) {
|
||||
return json.Marshal(struct {
|
||||
Epoch string `json:"epoch"`
|
||||
@@ -828,10 +896,12 @@ func (ve *VoluntaryExit) MarshalJSON() ([]byte, error) {
|
||||
})
|
||||
}
|
||||
|
||||
// SyncAggregate is a field of Beacon Block Body.
|
||||
type SyncAggregate struct {
|
||||
*eth.SyncAggregate
|
||||
}
|
||||
|
||||
// MarshalJSON returns a JSON byte array representation of SyncAggregate.
|
||||
func (s *SyncAggregate) MarshalJSON() ([]byte, error) {
|
||||
return json.Marshal(struct {
|
||||
SyncCommitteeBits hexutil.Bytes `json:"sync_committee_bits"`
|
||||
@@ -842,10 +912,12 @@ func (s *SyncAggregate) MarshalJSON() ([]byte, error) {
|
||||
})
|
||||
}
|
||||
|
||||
// Eth1Data is a field of Beacon Block Body.
|
||||
type Eth1Data struct {
|
||||
*eth.Eth1Data
|
||||
}
|
||||
|
||||
// MarshalJSON returns a JSON byte array representation of Eth1Data.
|
||||
func (e *Eth1Data) MarshalJSON() ([]byte, error) {
|
||||
return json.Marshal(struct {
|
||||
DepositRoot hexutil.Bytes `json:"deposit_root"`
|
||||
@@ -858,6 +930,7 @@ func (e *Eth1Data) MarshalJSON() ([]byte, error) {
|
||||
})
|
||||
}
|
||||
|
||||
// MarshalJSON returns a JSON byte array representation of BlindedBeaconBlockBodyBellatrix.
|
||||
func (b *BlindedBeaconBlockBodyBellatrix) MarshalJSON() ([]byte, error) {
|
||||
sve := make([]*SignedVoluntaryExit, len(b.BlindedBeaconBlockBodyBellatrix.VoluntaryExits))
|
||||
for i := range b.BlindedBeaconBlockBodyBellatrix.VoluntaryExits {
|
||||
@@ -904,10 +977,12 @@ func (b *BlindedBeaconBlockBodyBellatrix) MarshalJSON() ([]byte, error) {
|
||||
})
|
||||
}
|
||||
|
||||
// SignedBLSToExecutionChange is a field in Beacon Block Body for capella and above.
|
||||
type SignedBLSToExecutionChange struct {
|
||||
*eth.SignedBLSToExecutionChange
|
||||
}
|
||||
|
||||
// MarshalJSON returns a JSON byte array representation of SignedBLSToExecutionChange.
|
||||
func (ch *SignedBLSToExecutionChange) MarshalJSON() ([]byte, error) {
|
||||
return json.Marshal(struct {
|
||||
Message *BLSToExecutionChange `json:"message"`
|
||||
@@ -918,10 +993,12 @@ func (ch *SignedBLSToExecutionChange) MarshalJSON() ([]byte, error) {
|
||||
})
|
||||
}
|
||||
|
||||
// BLSToExecutionChange is a field in SignedBLSToExecutionChange.
|
||||
type BLSToExecutionChange struct {
|
||||
*eth.BLSToExecutionChange
|
||||
}
|
||||
|
||||
// MarshalJSON returns a JSON byte array representation of BLSToExecutionChange.
|
||||
func (ch *BLSToExecutionChange) MarshalJSON() ([]byte, error) {
|
||||
return json.Marshal(struct {
|
||||
ValidatorIndex string `json:"validator_index"`
|
||||
@@ -934,18 +1011,22 @@ func (ch *BLSToExecutionChange) MarshalJSON() ([]byte, error) {
|
||||
})
|
||||
}
|
||||
|
||||
// SignedBlindedBeaconBlockCapella is part of the request object sent to builder API /eth/v1/builder/blinded_blocks for Capella.
|
||||
type SignedBlindedBeaconBlockCapella struct {
|
||||
*eth.SignedBlindedBeaconBlockCapella
|
||||
}
|
||||
|
||||
// BlindedBeaconBlockCapella is a field in SignedBlindedBeaconBlockCapella.
|
||||
type BlindedBeaconBlockCapella struct {
|
||||
*eth.BlindedBeaconBlockCapella
|
||||
}
|
||||
|
||||
// BlindedBeaconBlockBodyCapella is a field in BlindedBeaconBlockCapella.
|
||||
type BlindedBeaconBlockBodyCapella struct {
|
||||
*eth.BlindedBeaconBlockBodyCapella
|
||||
}
|
||||
|
||||
// MarshalJSON returns a JSON byte array representation of SignedBlindedBeaconBlockCapella.
|
||||
func (b *SignedBlindedBeaconBlockCapella) MarshalJSON() ([]byte, error) {
|
||||
return json.Marshal(struct {
|
||||
Message *BlindedBeaconBlockCapella `json:"message"`
|
||||
@@ -956,6 +1037,7 @@ func (b *SignedBlindedBeaconBlockCapella) MarshalJSON() ([]byte, error) {
|
||||
})
|
||||
}
|
||||
|
||||
// MarshalJSON returns a JSON byte array representation of BlindedBeaconBlockCapella
|
||||
func (b *BlindedBeaconBlockCapella) MarshalJSON() ([]byte, error) {
|
||||
return json.Marshal(struct {
|
||||
Slot string `json:"slot"`
|
||||
@@ -972,6 +1054,7 @@ func (b *BlindedBeaconBlockCapella) MarshalJSON() ([]byte, error) {
|
||||
})
|
||||
}
|
||||
|
||||
// MarshalJSON returns a JSON byte array representation of BlindedBeaconBlockBodyCapella
|
||||
func (b *BlindedBeaconBlockBodyCapella) MarshalJSON() ([]byte, error) {
|
||||
sve := make([]*SignedVoluntaryExit, len(b.VoluntaryExits))
|
||||
for i := range b.VoluntaryExits {
|
||||
@@ -1024,6 +1107,7 @@ func (b *BlindedBeaconBlockBodyCapella) MarshalJSON() ([]byte, error) {
|
||||
})
|
||||
}
|
||||
|
||||
// ErrorMessage is a JSON representation of the builder API's returned error message.
|
||||
type ErrorMessage struct {
|
||||
Code int `json:"code"`
|
||||
Message string `json:"message"`
|
||||
|
||||
@@ -1156,6 +1156,14 @@ func TestUint256Unmarshal(t *testing.T) {
    require.Equal(t, expected, string(m))
}

func TestUint256Unmarshal_BadData(t *testing.T) {
    var bigNum Uint256

    assert.ErrorContains(t, "provided Uint256 json string is too short", bigNum.UnmarshalJSON([]byte{'"'}))
    assert.ErrorContains(t, "provided Uint256 json string is malformed", bigNum.UnmarshalJSON([]byte{'"', '1', '2'}))

}

func TestUint256UnmarshalNegative(t *testing.T) {
    m := "-1"
    var value Uint256

@@ -340,7 +340,13 @@ func (s *Service) IsOptimistic(_ context.Context) (bool, error) {
    }
    s.headLock.RLock()
    headRoot := s.head.root
    headSlot := s.head.slot
    headOptimistic := s.head.optimistic
    s.headLock.RUnlock()
    // we trust the head package for recent head slots, otherwise fallback to forkchoice
    if headSlot+2 >= s.CurrentSlot() {
        return headOptimistic, nil
    }

    s.cfg.ForkChoiceStore.RLock()
    defer s.cfg.ForkChoiceStore.RUnlock()

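The new IsOptimistic fast path above answers from the cached head whenever it is within two slots of the current slot, and only falls back to the forkchoice store otherwise. A minimal sketch of that pattern with hypothetical stand-in types (not the real Service):

```go
package main

import (
	"fmt"
	"sync"
)

// service is a hypothetical, minimal version of the chain service: the head's
// slot and optimistic status are cached under headLock when the head is set.
type service struct {
	headLock       sync.RWMutex
	headSlot       uint64
	headOptimistic bool

	currentSlot func() uint64 // stand-in for the wall-clock slot
	forkchoice  func() bool   // stand-in for the slower forkchoice lookup
}

// isOptimistic mirrors the shape of the new fast path: read the cached values
// under the read lock, trust them if recent, otherwise ask forkchoice.
func (s *service) isOptimistic() bool {
	s.headLock.RLock()
	slot, opt := s.headSlot, s.headOptimistic
	s.headLock.RUnlock()
	if slot+2 >= s.currentSlot() {
		return opt // recent enough: trust the value saved with the head
	}
	return s.forkchoice() // stale head: ask forkchoice instead
}

func main() {
	s := &service{
		headSlot:       104,
		headOptimistic: false,
		currentSlot:    func() uint64 { return 105 },
		forkchoice:     func() bool { return true },
	}
	fmt.Println(s.isOptimistic()) // false: served from the cached head

	s.headSlot = 90
	fmt.Println(s.isOptimistic()) // true: stale head, answered by forkchoice
}
```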
@@ -422,6 +422,12 @@ func TestService_IsOptimistic(t *testing.T) {
|
||||
|
||||
opt, err := c.IsOptimistic(ctx)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, primitives.Slot(0), c.CurrentSlot())
|
||||
require.Equal(t, false, opt)
|
||||
|
||||
c.SetGenesisTime(time.Now().Add(-time.Second * time.Duration(4*params.BeaconConfig().SecondsPerSlot)))
|
||||
opt, err = c.IsOptimistic(ctx)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, true, opt)
|
||||
}
|
||||
|
||||
|
||||
@@ -71,7 +71,6 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *notifyForkcho
|
||||
|
||||
nextSlot := s.CurrentSlot() + 1 // Cache payload ID for next slot proposer.
|
||||
hasAttr, attr, proposerId := s.getPayloadAttribute(ctx, arg.headState, nextSlot, arg.headRoot[:])
|
||||
|
||||
payloadID, lastValidHash, err := s.cfg.ExecutionEngineCaller.ForkchoiceUpdated(ctx, fcs, attr)
|
||||
if err != nil {
|
||||
switch err {
|
||||
@@ -154,7 +153,7 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *notifyForkcho
|
||||
var pId [8]byte
|
||||
copy(pId[:], payloadID[:])
|
||||
s.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(nextSlot, proposerId, pId, arg.headRoot)
|
||||
} else if hasAttr && payloadID == nil {
|
||||
} else if hasAttr && payloadID == nil && !features.Get().PrepareAllPayloads {
|
||||
log.WithFields(logrus.Fields{
|
||||
"blockHash": fmt.Sprintf("%#x", headPayload.BlockHash()),
|
||||
"slot": headBlk.Slot(),
|
||||
|
||||
@@ -47,9 +47,11 @@ func (s *Service) UpdateAndSaveHeadWithBalances(ctx context.Context) error {

// This defines the current chain service's view of head.
type head struct {
    root [32]byte // current head root.
    block interfaces.ReadOnlySignedBeaconBlock // current head block.
    state state.BeaconState // current head state.
    root [32]byte // current head root.
    block interfaces.ReadOnlySignedBeaconBlock // current head block.
    state state.BeaconState // current head state.
    slot primitives.Slot // the head block slot number
    optimistic bool // optimistic status when saved head
}

// This saves head info to the local service cache, it also saves the
@@ -94,6 +96,10 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
|
||||
return errors.Wrap(err, "could not get old head root")
|
||||
}
|
||||
oldHeadRoot := bytesutil.ToBytes32(r)
|
||||
isOptimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(newHeadRoot)
|
||||
if err != nil {
|
||||
log.WithError(err).Error("could not check if node is optimistically synced")
|
||||
}
|
||||
if headBlock.Block().ParentRoot() != oldHeadRoot {
|
||||
// A chain re-org occurred, so we fire an event notifying the rest of the services.
|
||||
commonRoot, forkSlot, err := s.cfg.ForkChoiceStore.CommonAncestor(ctx, oldHeadRoot, newHeadRoot)
|
||||
@@ -125,10 +131,6 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
|
||||
reorgDistance.Observe(float64(dis))
|
||||
reorgDepth.Observe(float64(dep))
|
||||
|
||||
isOptimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(newHeadRoot)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not check if node is optimistically synced")
|
||||
}
|
||||
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
|
||||
Type: statefeed.Reorg,
|
||||
Data: ðpbv1.EventChainReorg{
|
||||
@@ -150,7 +152,14 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
|
||||
}
|
||||
|
||||
// Cache the new head info.
|
||||
if err := s.setHead(newHeadRoot, headBlock, headState); err != nil {
|
||||
newHead := &head{
|
||||
root: newHeadRoot,
|
||||
block: headBlock,
|
||||
state: headState,
|
||||
optimistic: isOptimistic,
|
||||
slot: headBlock.Block().Slot(),
|
||||
}
|
||||
if err := s.setHead(newHead); err != nil {
|
||||
return errors.Wrap(err, "could not set head")
|
||||
}
|
||||
|
||||
@@ -195,20 +204,22 @@ func (s *Service) saveHeadNoDB(ctx context.Context, b interfaces.ReadOnlySignedB
    return nil
}

// This sets head view object which is used to track the head slot, root, block and state.
func (s *Service) setHead(root [32]byte, block interfaces.ReadOnlySignedBeaconBlock, state state.BeaconState) error {
// This sets head view object which is used to track the head slot, root, block, state and optimistic status
func (s *Service) setHead(newHead *head) error {
    s.headLock.Lock()
    defer s.headLock.Unlock()

    // This does a full copy of the block and state.
    bCp, err := block.Copy()
    bCp, err := newHead.block.Copy()
    if err != nil {
        return err
    }
    s.head = &head{
        root: root,
        block: bCp,
        state: state.Copy(),
        root: newHead.root,
        block: bCp,
        state: newHead.state.Copy(),
        optimistic: newHead.optimistic,
        slot: newHead.slot,
    }
    return nil
}

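setHead now receives a fully populated *head but still deep-copies the block and state under the write lock, so the cached view cannot be mutated through references the caller retains. A hedged sketch of the copy-on-set pattern with hypothetical stand-in types:

```go
package main

import (
	"fmt"
	"sync"
)

// beaconState is a hypothetical stand-in for the real BeaconState interface;
// only the Copy method matters for this sketch.
type beaconState struct{ slot uint64 }

func (b *beaconState) Copy() *beaconState { return &beaconState{slot: b.slot} }

// head mirrors the shape of the cached head view from the diff.
type head struct {
	root       [32]byte
	state      *beaconState
	slot       uint64
	optimistic bool
}

type service struct {
	headLock sync.Mutex
	head     *head
}

// setHead caches a private copy of the new head view under the lock.
func (s *service) setHead(newHead *head) {
	s.headLock.Lock()
	defer s.headLock.Unlock()
	s.head = &head{
		root:       newHead.root,
		state:      newHead.state.Copy(), // full copy: later edits to the caller's state are invisible here
		slot:       newHead.slot,
		optimistic: newHead.optimistic,
	}
}

func main() {
	s := &service{}
	st := &beaconState{slot: 10}
	s.setHead(&head{root: [32]byte{1}, state: st, slot: 10, optimistic: true})
	st.slot = 11                   // mutate the caller's copy after setting the head
	fmt.Println(s.head.state.slot) // 10: the cached head kept its own copy
}
```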
@@ -157,7 +157,11 @@ func (s *Service) getSyncCommitteeHeadState(ctx context.Context, slot primitives
|
||||
if headState == nil || headState.IsNil() {
|
||||
return nil, errors.New("nil state")
|
||||
}
|
||||
headState, err = transition.ProcessSlotsIfPossible(ctx, headState, slot)
|
||||
headRoot, err := s.HeadRoot(ctx)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
headState, err = transition.ProcessSlotsUsingNextSlotCache(ctx, headState, headRoot, slot)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
@@ -6,6 +6,7 @@ import (
|
||||
|
||||
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
|
||||
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/signing"
|
||||
dbTest "github.com/prysmaticlabs/prysm/v4/beacon-chain/db/testing"
|
||||
"github.com/prysmaticlabs/prysm/v4/config/params"
|
||||
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
|
||||
"github.com/prysmaticlabs/prysm/v4/testing/require"
|
||||
@@ -15,7 +16,7 @@ import (
|
||||
|
||||
func TestService_HeadSyncCommitteeIndices(t *testing.T) {
|
||||
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
|
||||
c := &Service{}
|
||||
c := &Service{cfg: &config{BeaconDB: dbTest.SetupDB(t)}}
|
||||
c.head = &head{state: s}
|
||||
|
||||
// Current period
|
||||
@@ -38,7 +39,7 @@ func TestService_HeadSyncCommitteeIndices(t *testing.T) {
|
||||
|
||||
func TestService_headCurrentSyncCommitteeIndices(t *testing.T) {
|
||||
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
|
||||
c := &Service{}
|
||||
c := &Service{cfg: &config{BeaconDB: dbTest.SetupDB(t)}}
|
||||
c.head = &head{state: s}
|
||||
|
||||
// Process slot up to `EpochsPerSyncCommitteePeriod` so it can `ProcessSyncCommitteeUpdates`.
|
||||
@@ -66,7 +67,7 @@ func TestService_headNextSyncCommitteeIndices(t *testing.T) {
|
||||
|
||||
func TestService_HeadSyncCommitteePubKeys(t *testing.T) {
|
||||
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
|
||||
c := &Service{}
|
||||
c := &Service{cfg: &config{BeaconDB: dbTest.SetupDB(t)}}
|
||||
c.head = &head{state: s}
|
||||
|
||||
// Process slot up to 2 * `EpochsPerSyncCommitteePeriod` so it can run `ProcessSyncCommitteeUpdates` twice.
|
||||
@@ -81,7 +82,7 @@ func TestService_HeadSyncCommitteePubKeys(t *testing.T) {
|
||||
|
||||
func TestService_HeadSyncCommitteeDomain(t *testing.T) {
|
||||
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
|
||||
c := &Service{}
|
||||
c := &Service{cfg: &config{BeaconDB: dbTest.SetupDB(t)}}
|
||||
c.head = &head{state: s}
|
||||
|
||||
wanted, err := signing.Domain(s.Fork(), slots.ToEpoch(s.Slot()), params.BeaconConfig().DomainSyncCommittee, s.GenesisValidatorsRoot())
|
||||
|
||||
@@ -120,7 +120,7 @@ func logPayload(block interfaces.ReadOnlyBeaconBlock) error {
    fields := logrus.Fields{
        "blockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash())),
        "parentHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.ParentHash())),
        "blockNumber": payload.BlockNumber,
        "blockNumber": payload.BlockNumber(),
        "gasUtilized": fmt.Sprintf("%.2f", gasUtilized),
    }
    if block.Version() >= version.Capella {

@@ -172,3 +172,10 @@ func WithClockSynchronizer(gs *startup.ClockSynchronizer) Option {
        return nil
    }
}

func WithSyncComplete(c chan struct{}) Option {
    return func(s *Service) error {
        s.syncComplete = c
        return nil
    }
}

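WithSyncComplete follows the functional-options pattern already used for the blockchain service's other constructor options: each option is a closure that mutates the service and can report an error. A minimal sketch of the pattern (hypothetical names, not the real constructor):

```go
package main

import "fmt"

// Service and Option are hypothetical stand-ins mirroring the shape of the
// real blockchain service's option plumbing.
type Service struct {
	syncComplete chan struct{}
}

type Option func(*Service) error

// WithSyncComplete wires in the channel that is closed once initial sync finishes.
func WithSyncComplete(c chan struct{}) Option {
	return func(s *Service) error {
		s.syncComplete = c
		return nil
	}
}

// NewService applies each option in order and fails fast on the first error.
func NewService(opts ...Option) (*Service, error) {
	s := &Service{}
	for _, o := range opts {
		if err := o(s); err != nil {
			return nil, err
		}
	}
	return s, nil
}

func main() {
	done := make(chan struct{})
	s, err := NewService(WithSyncComplete(done))
	fmt.Println(s.syncComplete != nil, err) // true <nil>
}
```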
@@ -136,7 +136,7 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.ReadOnlySignedB
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not validate new payload")
|
||||
}
|
||||
if isValidPayload {
|
||||
if signed.Version() < version.Capella && isValidPayload {
|
||||
if err := s.validateMergeTransitionBlock(ctx, preStateVersion, preStateHeader, signed); err != nil {
|
||||
return err
|
||||
}
|
||||
@@ -285,7 +285,7 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.ReadOnlySignedB
|
||||
}()
|
||||
}
|
||||
defer reportAttestationInclusion(b)
|
||||
if err := s.handleEpochBoundary(ctx, postState); err != nil {
|
||||
if err := s.handleEpochBoundary(ctx, postState, blockRoot[:]); err != nil {
|
||||
return err
|
||||
}
|
||||
onBlockProcessingTime.Observe(float64(time.Since(startTime).Milliseconds()))
|
||||
@@ -483,14 +483,14 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []interfaces.ReadOnlySi
 }
 
 // Epoch boundary bookkeeping such as logging epoch summaries.
-func (s *Service) handleEpochBoundary(ctx context.Context, postState state.BeaconState) error {
+func (s *Service) handleEpochBoundary(ctx context.Context, postState state.BeaconState, blockRoot []byte) error {
 	ctx, span := trace.StartSpan(ctx, "blockChain.handleEpochBoundary")
 	defer span.End()
 
 	var err error
 	if postState.Slot()+1 == s.nextEpochBoundarySlot {
 		copied := postState.Copy()
-		copied, err := transition.ProcessSlots(ctx, copied, copied.Slot()+1)
+		copied, err := transition.ProcessSlotsUsingNextSlotCache(ctx, copied, blockRoot, copied.Slot()+1)
 		if err != nil {
 			return err
 		}
@@ -501,7 +501,28 @@ func (s *Service) handleEpochBoundary(ctx context.Context, postState state.Beaco
 		if err := helpers.UpdateProposerIndicesInCache(ctx, copied); err != nil {
 			return err
 		}
+		if s.nextEpochBoundarySlot != 0 {
+			ep := slots.ToEpoch(s.nextEpochBoundarySlot)
+			_, nextProposerIndexToSlots, err := helpers.CommitteeAssignments(ctx, copied, ep)
+			if err != nil {
+				return err
+			}
+			for k, v := range nextProposerIndexToSlots {
+				s.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(v[0], k, [8]byte{}, [32]byte{})
+			}
+		}
 	} else if postState.Slot() >= s.nextEpochBoundarySlot {
+		postState = postState.Copy()
+		if s.nextEpochBoundarySlot != 0 {
+			ep := slots.ToEpoch(s.nextEpochBoundarySlot)
+			_, nextProposerIndexToSlots, err := helpers.CommitteeAssignments(ctx, postState, ep)
+			if err != nil {
+				return err
+			}
+			for k, v := range nextProposerIndexToSlots {
+				s.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(v[0], k, [8]byte{}, [32]byte{})
+			}
+		}
 		s.nextEpochBoundarySlot, err = slots.EpochStart(coreTime.NextEpoch(postState))
 		if err != nil {
 			return err
@@ -652,18 +673,17 @@ func (s *Service) validateMergeTransitionBlock(ctx context.Context, stateVersion
 // This routine checks if there is a cached proposer payload ID available for the next slot proposer.
 // If there is not, it will call forkchoice updated with the correct payload attribute then cache the payload ID.
 func (s *Service) runLateBlockTasks() {
-	_, err := s.clockWaiter.WaitForClock(s.ctx)
-	if err != nil {
-		log.WithError(err).Error("runLateBlockTasks encountered an error waiting for initialization")
+	if err := s.waitForSync(); err != nil {
+		log.WithError(err).Error("failed to wait for initial sync")
 		return
 	}
 
 	attThreshold := params.BeaconConfig().SecondsPerSlot / 3
 	ticker := slots.NewSlotTickerWithOffset(s.genesisTime, time.Duration(attThreshold)*time.Second, params.BeaconConfig().SecondsPerSlot)
 	for {
 		select {
 		case <-ticker.C():
 			s.lateBlockTasks(s.ctx)
 
 		case <-s.ctx.Done():
 			log.Debug("Context closed, exiting routine")
 			return
@@ -720,3 +740,13 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
 		log.WithError(err).Debug("could not perform late block tasks: failed to update forkchoice with engine")
 	}
 }
+
+// waitForSync blocks until the node is synced to the head.
+func (s *Service) waitForSync() error {
+	select {
+	case <-s.syncComplete:
+		return nil
+	case <-s.ctx.Done():
+		return errors.New("context closed, exiting goroutine")
+	}
+}
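One detail worth calling out in runLateBlockTasks: after waiting for initial sync it ticks a third of the way into every slot (SecondsPerSlot / 3, i.e. 4 seconds on mainnet's 12-second slots), which is the attestation deadline. A rough, self-contained sketch of that offset-ticker shape, using a plain time.Ticker in place of Prysm's slots.NewSlotTickerWithOffset:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Mainnet numbers; the real code reads them from params.BeaconConfig().
	const secondsPerSlot = 12
	const attThreshold = secondsPerSlot / 3 // 4 seconds into each slot

	fmt.Printf("late-block tasks fire %ds into every %ds slot\n", attThreshold, secondsPerSlot)

	// Rough stand-in for slots.NewSlotTickerWithOffset(genesisTime, offset, secondsPerSlot):
	// shift by the offset once, then tick once per slot.
	time.Sleep(attThreshold * time.Second)
	ticker := time.NewTicker(secondsPerSlot * time.Second)
	defer ticker.Stop()
	for i := 0; i < 3; i++ { // a few ticks, just for illustration
		fmt.Println("run late block tasks at", <-ticker.C)
	}
}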
@@ -636,7 +636,7 @@ func TestHandleEpochBoundary_UpdateFirstSlot(t *testing.T) {
|
||||
s, _ := util.DeterministicGenesisState(t, 1024)
|
||||
service.head = &head{state: s}
|
||||
require.NoError(t, s.SetSlot(2*params.BeaconConfig().SlotsPerEpoch))
|
||||
require.NoError(t, service.handleEpochBoundary(ctx, s))
|
||||
require.NoError(t, service.handleEpochBoundary(ctx, s, []byte{}))
|
||||
require.Equal(t, 3*params.BeaconConfig().SlotsPerEpoch, service.nextEpochBoundarySlot)
|
||||
}
|
||||
|
||||
|
||||
@@ -94,8 +94,10 @@ func (s *Service) spawnProcessAttestationsRoutine() {
|
||||
case <-s.ctx.Done():
|
||||
return
|
||||
case <-pat.C():
|
||||
log.Infof("proposer_mocker: calling updated head via offset ticker")
|
||||
s.UpdateHead(s.ctx, s.CurrentSlot()+1)
|
||||
case <-st.C():
|
||||
log.Infof("proposer_mocker: calling updated head via normal slot ticker in spawn atts")
|
||||
s.cfg.ForkChoiceStore.Lock()
|
||||
if err := s.cfg.ForkChoiceStore.NewSlot(s.ctx, s.CurrentSlot()); err != nil {
|
||||
log.WithError(err).Error("could not process new slot")
|
||||
@@ -124,6 +126,8 @@ func (s *Service) UpdateHead(ctx context.Context, proposingSlot primitives.Slot)
|
||||
}
|
||||
s.processAttestations(ctx, disparity)
|
||||
|
||||
log.Infof("proposer_mocker: process attestations in fc took %s", time.Since(start).String())
|
||||
|
||||
processAttsElapsedTime.Observe(float64(time.Since(start).Milliseconds()))
|
||||
|
||||
start = time.Now()
|
||||
@@ -136,11 +140,14 @@ func (s *Service) UpdateHead(ctx context.Context, proposingSlot primitives.Slot)
|
||||
s.headLock.RUnlock()
|
||||
}
|
||||
newAttHeadElapsedTime.Observe(float64(time.Since(start).Milliseconds()))
|
||||
log.Infof("proposer_mocker: head root in fc took %s", time.Since(start).String())
|
||||
|
||||
changed, err := s.forkchoiceUpdateWithExecution(s.ctx, newHeadRoot, proposingSlot)
|
||||
if err != nil {
|
||||
log.WithError(err).Error("could not update forkchoice")
|
||||
}
|
||||
log.Infof("proposer_mocker: fcu call in fc took %s", time.Since(start).String())
|
||||
|
||||
if changed {
|
||||
s.headLock.RLock()
|
||||
log.WithFields(logrus.Fields{
|
||||
|
||||
@@ -60,6 +60,7 @@ type Service struct {
|
||||
wsVerifier *WeakSubjectivityVerifier
|
||||
clockSetter startup.ClockSetter
|
||||
clockWaiter startup.ClockWaiter
|
||||
syncComplete chan struct{}
|
||||
}
|
||||
|
||||
// config options for the service.
|
||||
@@ -307,7 +308,13 @@ func (s *Service) initializeHeadFromDB(ctx context.Context) error {
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not get finalized block")
|
||||
}
|
||||
if err := s.setHead(finalizedRoot, finalizedBlock, finalizedState); err != nil {
|
||||
if err := s.setHead(&head{
|
||||
finalizedRoot,
|
||||
finalizedBlock,
|
||||
finalizedState,
|
||||
finalizedBlock.Block().Slot(),
|
||||
false,
|
||||
}); err != nil {
|
||||
return errors.Wrap(err, "could not set head")
|
||||
}
|
||||
|
||||
@@ -439,7 +446,13 @@ func (s *Service) saveGenesisData(ctx context.Context, genesisState state.Beacon
|
||||
}
|
||||
s.cfg.ForkChoiceStore.SetGenesisTime(uint64(s.genesisTime.Unix()))
|
||||
|
||||
if err := s.setHead(genesisBlkRoot, genesisBlk, genesisState); err != nil {
|
||||
if err := s.setHead(&head{
|
||||
genesisBlkRoot,
|
||||
genesisBlk,
|
||||
genesisState,
|
||||
genesisBlk.Block().Slot(),
|
||||
false,
|
||||
}); err != nil {
|
||||
log.WithError(err).Fatal("Could not set head")
|
||||
}
|
||||
return nil
|
||||
|
||||
@@ -2,6 +2,7 @@ package altair
|
||||
|
||||
import (
|
||||
"context"
|
||||
goErrors "errors"
|
||||
"fmt"
|
||||
"time"
|
||||
|
||||
@@ -22,6 +23,10 @@ import (
|
||||
|
||||
const maxRandomByte = uint64(1<<8 - 1)
|
||||
|
||||
var (
|
||||
ErrTooLate = errors.New("sync message is too late")
|
||||
)
|
||||
|
||||
// ValidateNilSyncContribution validates the following fields are not nil:
|
||||
// -the contribution and proof itself
|
||||
// -the message within contribution and proof
|
||||
@@ -217,7 +222,7 @@ func ValidateSyncMessageTime(slot primitives.Slot, genesisTime time.Time, clockD
|
||||
upperBound := time.Now().Add(clockDisparity)
|
||||
// Verify sync message slot is within the time range.
|
||||
if messageTime.Before(lowerBound) || messageTime.After(upperBound) {
|
||||
return fmt.Errorf(
|
||||
syncErr := fmt.Errorf(
|
||||
"sync message time %v (slot %d) not within allowable range of %v (slot %d) to %v (slot %d)",
|
||||
messageTime,
|
||||
slot,
|
||||
@@ -226,6 +231,11 @@ func ValidateSyncMessageTime(slot primitives.Slot, genesisTime time.Time, clockD
|
||||
upperBound,
|
||||
uint64(upperBound.Unix()-genesisTime.Unix())/params.BeaconConfig().SecondsPerSlot,
|
||||
)
|
||||
// Wrap error message if sync message is too late.
|
||||
if messageTime.Before(lowerBound) {
|
||||
syncErr = goErrors.Join(ErrTooLate, syncErr)
|
||||
}
|
||||
return syncErr
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -38,6 +38,7 @@ go_library(
|
||||
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
|
||||
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
|
||||
"@com_github_sirupsen_logrus//:go_default_library",
|
||||
"@io_opencensus_go//trace:go_default_library",
|
||||
],
|
||||
)
|
||||
|
||||
|
||||
@@ -14,6 +14,10 @@ import (
|
||||
"github.com/prysmaticlabs/prysm/v4/time/slots"
|
||||
)
|
||||
|
||||
var (
|
||||
ErrTooLate = errors.New("attestation is too late")
|
||||
)
|
||||
|
||||
// ValidateNilAttestation checks if any composite field of input attestation is nil.
|
||||
// Access to these nil fields will result in run time panic,
|
||||
// it is recommended to run these checks as first line of defense.
|
||||
@@ -164,7 +168,7 @@ func ValidateAttestationTime(attSlot primitives.Slot, genesisTime time.Time, clo
|
||||
)
|
||||
if attTime.Before(lowerBounds) {
|
||||
attReceivedTooLateCount.Inc()
|
||||
return attError
|
||||
return errors.Join(ErrTooLate, attError)
|
||||
}
|
||||
if attTime.After(upperBounds) {
|
||||
attReceivedTooEarlyCount.Inc()
|
||||
|
||||
@@ -17,6 +17,7 @@ import (
|
||||
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
|
||||
"github.com/prysmaticlabs/prysm/v4/time/slots"
|
||||
log "github.com/sirupsen/logrus"
|
||||
"go.opencensus.io/trace"
|
||||
)
|
||||
|
||||
var CommitteeCacheInProgressHit = promauto.NewCounter(prometheus.CounterOpts{
|
||||
@@ -396,3 +397,22 @@ func isEligibleForActivation(activationEligibilityEpoch, activationEpoch, finali
|
||||
return activationEligibilityEpoch <= finalizedEpoch &&
|
||||
activationEpoch == params.BeaconConfig().FarFutureEpoch
|
||||
}
|
||||
|
||||
// LastActivatedValidatorIndex provides the last activated validator given a state
|
||||
func LastActivatedValidatorIndex(ctx context.Context, st state.ReadOnlyBeaconState) (primitives.ValidatorIndex, error) {
|
||||
_, span := trace.StartSpan(ctx, "helpers.LastActivatedValidatorIndex")
|
||||
defer span.End()
|
||||
var lastActivatedvalidatorIndex primitives.ValidatorIndex
|
||||
// linear search because status are not sorted
|
||||
for j := st.NumValidators() - 1; j >= 0; j-- {
|
||||
val, err := st.ValidatorAtIndexReadOnly(primitives.ValidatorIndex(j))
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
if IsActiveValidatorUsingTrie(val, time.CurrentEpoch(st)) {
|
||||
lastActivatedvalidatorIndex = primitives.ValidatorIndex(j)
|
||||
break
|
||||
}
|
||||
}
|
||||
return lastActivatedvalidatorIndex, nil
|
||||
}
|
||||
|
||||
@@ -727,3 +727,26 @@ func computeProposerIndexWithValidators(validators []*ethpb.Validator, activeInd
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestLastActivatedValidatorIndex_OK(t *testing.T) {
|
||||
beaconState, err := state_native.InitializeFromProtoPhase0(ðpb.BeaconState{})
|
||||
require.NoError(t, err)
|
||||
|
||||
validators := make([]*ethpb.Validator, 4)
|
||||
balances := make([]uint64, len(validators))
|
||||
for i := uint64(0); i < 4; i++ {
|
||||
validators[i] = ðpb.Validator{
|
||||
PublicKey: make([]byte, params.BeaconConfig().BLSPubkeyLength),
|
||||
WithdrawalCredentials: make([]byte, 32),
|
||||
EffectiveBalance: 32 * 1e9,
|
||||
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
|
||||
}
|
||||
balances[i] = validators[i].EffectiveBalance
|
||||
}
|
||||
require.NoError(t, beaconState.SetValidators(validators))
|
||||
require.NoError(t, beaconState.SetBalances(balances))
|
||||
|
||||
index, err := LastActivatedValidatorIndex(context.Background(), beaconState)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, index, primitives.ValidatorIndex(3))
|
||||
}
|
||||
|
||||
@@ -115,28 +115,32 @@ func FuzzExchangeTransitionConfiguration(f *testing.F) {
|
||||
|
||||
func FuzzExecutionPayload(f *testing.F) {
|
||||
logsBloom := [256]byte{'j', 'u', 'n', 'k'}
|
||||
execData := &engine.ExecutableData{
|
||||
ParentHash: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
|
||||
FeeRecipient: common.Address([20]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF}),
|
||||
StateRoot: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
|
||||
ReceiptsRoot: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
|
||||
LogsBloom: logsBloom[:],
|
||||
Random: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
|
||||
Number: math.MaxUint64,
|
||||
GasLimit: math.MaxUint64,
|
||||
GasUsed: math.MaxUint64,
|
||||
Timestamp: 100,
|
||||
ExtraData: nil,
|
||||
BaseFeePerGas: big.NewInt(math.MaxInt),
|
||||
BlockHash: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
|
||||
Transactions: [][]byte{{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}},
|
||||
execData := &engine.ExecutionPayloadEnvelope{
|
||||
ExecutionPayload: &engine.ExecutableData{
|
||||
ParentHash: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
|
||||
FeeRecipient: common.Address([20]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF}),
|
||||
StateRoot: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
|
||||
ReceiptsRoot: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
|
||||
LogsBloom: logsBloom[:],
|
||||
Random: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
|
||||
Number: math.MaxUint64,
|
||||
GasLimit: math.MaxUint64,
|
||||
GasUsed: math.MaxUint64,
|
||||
Timestamp: 100,
|
||||
ExtraData: nil,
|
||||
BaseFeePerGas: big.NewInt(math.MaxInt),
|
||||
BlockHash: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
|
||||
Transactions: [][]byte{{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}},
|
||||
Withdrawals: []*types.Withdrawal{},
|
||||
},
|
||||
BlockValue: nil,
|
||||
}
|
||||
output, err := json.Marshal(execData)
|
||||
assert.NoError(f, err)
|
||||
f.Add(output)
|
||||
f.Fuzz(func(t *testing.T, jsonBlob []byte) {
|
||||
gethResp := &engine.ExecutableData{}
|
||||
prysmResp := &pb.ExecutionPayload{}
|
||||
gethResp := &engine.ExecutionPayloadEnvelope{}
|
||||
prysmResp := &pb.ExecutionPayloadCapellaWithValue{}
|
||||
gethErr := json.Unmarshal(jsonBlob, gethResp)
|
||||
prysmErr := json.Unmarshal(jsonBlob, prysmResp)
|
||||
assert.Equal(t, gethErr != nil, prysmErr != nil, fmt.Sprintf("geth and prysm unmarshaller return inconsistent errors. %v and %v", gethErr, prysmErr))
|
||||
@@ -147,10 +151,10 @@ func FuzzExecutionPayload(f *testing.F) {
|
||||
gethBlob, gethErr := json.Marshal(gethResp)
|
||||
prysmBlob, prysmErr := json.Marshal(prysmResp)
|
||||
assert.Equal(t, gethErr != nil, prysmErr != nil, "geth and prysm unmarshaller return inconsistent errors")
|
||||
newGethResp := &engine.ExecutableData{}
|
||||
newGethResp := &engine.ExecutionPayloadEnvelope{}
|
||||
newGethErr := json.Unmarshal(prysmBlob, newGethResp)
|
||||
assert.NoError(t, newGethErr)
|
||||
newGethResp2 := &engine.ExecutableData{}
|
||||
newGethResp2 := &engine.ExecutionPayloadEnvelope{}
|
||||
newGethErr = json.Unmarshal(gethBlob, newGethResp2)
|
||||
assert.NoError(t, newGethErr)
|
||||
|
||||
|
||||
@@ -39,7 +39,7 @@ func New() *ForkChoice {
|
||||
|
||||
b := make([]uint64, 0)
|
||||
v := make([]Vote, 0)
|
||||
return &ForkChoice{store: s, balances: b, votes: v}
|
||||
return &ForkChoice{store: s, balances: b, votes: v, fcLock: new(fcLock)}
|
||||
}
|
||||
|
||||
// NodeCount returns the current number of nodes in the Store.
|
||||
|
||||
@@ -1,7 +1,11 @@
 package doublylinkedtree
 
 import (
+	"bytes"
+	"runtime/debug"
+	"runtime/pprof"
 	"sync"
+	"time"
 
 	"github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice"
 	forkchoicetypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/types"
@@ -11,7 +15,7 @@ import (
 
 // ForkChoice defines the overall fork choice store which includes all block nodes, validator's latest votes and balances.
 type ForkChoice struct {
-	sync.RWMutex
+	*fcLock
 	store    *Store
 	votes    []Vote   // tracks individual validator's last vote.
 	balances []uint64 // tracks individual validator's balances last accounted in votes.
@@ -68,3 +72,52 @@ type Vote struct {
 	nextRoot  [fieldparams.RootLength]byte // next voting root.
 	nextEpoch primitives.Epoch             // epoch of next voting period.
 }
+
+type fcLock struct {
+	lk       sync.RWMutex
+	t        time.Time
+	currChan chan int
+}
+
+func (f *fcLock) Lock() {
+	f.lk.Lock()
+	f.t = time.Now()
+	f.currChan = make(chan int)
+	go func(t time.Time, c chan int) {
+		tim := time.NewTimer(3 * time.Second)
+		select {
+		case <-c:
+			tim.Stop()
+		case <-tim.C:
+			tim.Stop()
+			pfile := pprof.Lookup("goroutine")
+			bf := bytes.NewBuffer([]byte{})
+			err := pfile.WriteTo(bf, 1)
+			_ = err
+			log.Warnf("FC lock is taking longer than 3 seconds with the complete stack of %s", bf.String())
+		}
+	}(time.Now(), f.currChan)
+}
+
+func (f *fcLock) Unlock() {
+	t := time.Since(f.t)
+	f.t = time.Time{}
+	close(f.currChan)
+	f.lk.Unlock()
+	if t > time.Second {
+		log.Warnf("FC lock is taking longer than 1 second: %s with the complete stack of %s", t.String(), string(debug.Stack()))
+	}
+}
+
+func (f *fcLock) RLock() {
+	t := time.Now()
+	f.lk.RLock()
+	dt := time.Since(t)
+	if dt > time.Second {
+		log.Warnf("FC Rlock is taking longer than 1 second: %s with stack %s", dt.String(), string(debug.Stack()))
+	}
+}
+
+func (f *fcLock) RUnlock() {
+	f.lk.RUnlock()
+}
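The fcLock type above only instruments the embedded RWMutex: a write lock still held after 3 seconds triggers a goroutine dump through pprof, and any hold longer than 1 second is logged on release, while readers are only timed on how long they wait to acquire. A minimal sketch, assuming the fcLock type from this diff, of what exercising those paths looks like (this test is illustrative and not part of the changeset):

package doublylinkedtree

import (
	"testing"
	"time"
)

// TestFcLockSlowHoldWarning is an illustrative test, not part of this
// changeset. Holding the write lock past the 3-second watchdog inside Lock()
// makes the background goroutine dump every goroutine stack via pprof, and
// because the hold also exceeds 1 second, Unlock() logs a second warning.
func TestFcLockSlowHoldWarning(t *testing.T) {
	l := new(fcLock)

	l.Lock()
	time.Sleep(4 * time.Second) // longer than the 3s watchdog
	l.Unlock()                  // held > 1s, so the release path warns too

	// Readers only pay for the time they spend waiting; an uncontended RLock
	// stays silent.
	l.RLock()
	l.RUnlock()
}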
@@ -6,6 +6,7 @@ import (
|
||||
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
|
||||
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
|
||||
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
|
||||
"github.com/prysmaticlabs/prysm/v4/runtime/version"
|
||||
"github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
@@ -29,6 +30,9 @@ func (s *Service) processSyncAggregate(state state.BeaconState, blk interfaces.R
|
||||
if blk == nil || blk.Body() == nil {
|
||||
return
|
||||
}
|
||||
if blk.Version() == version.Phase0 {
|
||||
return
|
||||
}
|
||||
bits, err := blk.Body().SyncAggregate()
|
||||
if err != nil {
|
||||
log.WithError(err).Error("Could not get SyncAggregate")
|
||||
|
||||
@@ -230,13 +230,13 @@ func New(cliCtx *cli.Context, opts ...Option) (*BeaconNode, error) {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
log.Debugln("Registering Determinstic Genesis Service")
|
||||
if err := beacon.registerDeterminsticGenesisService(); err != nil {
|
||||
log.Debugln("Registering Deterministic Genesis Service")
|
||||
if err := beacon.registerDeterministicGenesisService(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
log.Debugln("Registering Blockchain Service")
|
||||
if err := beacon.registerBlockchainService(beacon.forkChoicer, synchronizer); err != nil {
|
||||
if err := beacon.registerBlockchainService(beacon.forkChoicer, synchronizer, beacon.initialSyncComplete); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
@@ -590,7 +590,7 @@ func (b *BeaconNode) registerAttestationPool() error {
|
||||
return b.services.RegisterService(s)
|
||||
}
|
||||
|
||||
func (b *BeaconNode) registerBlockchainService(fc forkchoice.ForkChoicer, gs *startup.ClockSynchronizer) error {
|
||||
func (b *BeaconNode) registerBlockchainService(fc forkchoice.ForkChoicer, gs *startup.ClockSynchronizer, syncComplete chan struct{}) error {
|
||||
var web3Service *execution.Service
|
||||
if err := b.services.FetchService(&web3Service); err != nil {
|
||||
return err
|
||||
@@ -621,6 +621,7 @@ func (b *BeaconNode) registerBlockchainService(fc forkchoice.ForkChoicer, gs *st
|
||||
blockchain.WithFinalizedStateAtStartUp(b.finalizedStateAtStartUp),
|
||||
blockchain.WithProposerIdsCache(b.proposerIdsCache),
|
||||
blockchain.WithClockSynchronizer(gs),
|
||||
blockchain.WithSyncComplete(syncComplete),
|
||||
)
|
||||
|
||||
blockchainService, err := blockchain.NewService(b.ctx, opts...)
|
||||
@@ -923,7 +924,7 @@ func (b *BeaconNode) registerGRPCGateway(router *mux.Router) error {
|
||||
return b.services.RegisterService(g)
|
||||
}
|
||||
|
||||
func (b *BeaconNode) registerDeterminsticGenesisService() error {
|
||||
func (b *BeaconNode) registerDeterministicGenesisService() error {
|
||||
genesisTime := b.cliCtx.Uint64(flags.InteropGenesisTimeFlag.Name)
|
||||
genesisValidators := b.cliCtx.Uint64(flags.InteropNumValidatorsFlag.Name)
|
||||
|
||||
|
||||
@@ -460,6 +460,19 @@ func convertToUdpMultiAddr(node *enode.Node) ([]ma.Multiaddr, error) {
|
||||
return addresses, nil
|
||||
}
|
||||
|
||||
func peerIdsFromMultiAddrs(addrs []ma.Multiaddr) []peer.ID {
|
||||
peers := []peer.ID{}
|
||||
for _, a := range addrs {
|
||||
info, err := peer.AddrInfoFromP2pAddr(a)
|
||||
if err != nil {
|
||||
log.WithError(err).Errorf("Could not derive peer info from multiaddress %s", a.String())
|
||||
continue
|
||||
}
|
||||
peers = append(peers, info.ID)
|
||||
}
|
||||
return peers
|
||||
}
|
||||
|
||||
func multiAddrFromString(address string) (ma.Multiaddr, error) {
|
||||
return ma.NewMultiaddr(address)
|
||||
}
|
||||
|
||||
@@ -38,9 +38,10 @@ type StoreConfig struct {
|
||||
// the mutex when accessing data.
|
||||
type Store struct {
|
||||
sync.RWMutex
|
||||
ctx context.Context
|
||||
config *StoreConfig
|
||||
peers map[peer.ID]*PeerData
|
||||
ctx context.Context
|
||||
config *StoreConfig
|
||||
peers map[peer.ID]*PeerData
|
||||
trustedPeers map[peer.ID]bool
|
||||
}
|
||||
|
||||
// PeerData aggregates protocol and application level info about a single peer.
|
||||
@@ -69,9 +70,10 @@ type PeerData struct {
|
||||
// NewStore creates new peer data store.
|
||||
func NewStore(ctx context.Context, config *StoreConfig) *Store {
|
||||
return &Store{
|
||||
ctx: ctx,
|
||||
config: config,
|
||||
peers: make(map[peer.ID]*PeerData),
|
||||
ctx: ctx,
|
||||
config: config,
|
||||
peers: make(map[peer.ID]*PeerData),
|
||||
trustedPeers: make(map[peer.ID]bool),
|
||||
}
|
||||
}
|
||||
|
||||
@@ -105,12 +107,25 @@ func (s *Store) DeletePeerData(pid peer.ID) {
|
||||
delete(s.peers, pid)
|
||||
}
|
||||
|
||||
// SetTrustedPeers sets our desired trusted peer set.
|
||||
func (s *Store) SetTrustedPeers(peers []peer.ID) {
|
||||
for _, p := range peers {
|
||||
s.trustedPeers[p] = true
|
||||
}
|
||||
}
|
||||
|
||||
// Peers returns map of peer data objects.
|
||||
// Important: it is assumed that store mutex is locked when calling this method.
|
||||
func (s *Store) Peers() map[peer.ID]*PeerData {
|
||||
return s.peers
|
||||
}
|
||||
|
||||
// IsTrustedPeer checks that the provided peer
|
||||
// is in our trusted peer set.
|
||||
func (s *Store) IsTrustedPeer(p peer.ID) bool {
|
||||
return s.trustedPeers[p]
|
||||
}
|
||||
|
||||
// Config exposes store configuration params.
|
||||
func (s *Store) Config() *StoreConfig {
|
||||
return s.config
|
||||
|
||||
@@ -80,3 +80,20 @@ func TestStore_PeerDataGetOrCreate(t *testing.T) {
|
||||
assert.Equal(t, uint64(0), peerData.ProcessedBlocks)
|
||||
require.Equal(t, 1, len(store.Peers()))
|
||||
}
|
||||
|
||||
func TestStore_TrustedPeers(t *testing.T) {
|
||||
store := peerdata.NewStore(context.Background(), &peerdata.StoreConfig{
|
||||
MaxPeers: 12,
|
||||
})
|
||||
|
||||
pid1 := peer.ID("00001")
|
||||
pid2 := peer.ID("00002")
|
||||
pid3 := peer.ID("00003")
|
||||
|
||||
tPeers := []peer.ID{pid1, pid2, pid3}
|
||||
store.SetTrustedPeers(tPeers)
|
||||
|
||||
assert.Equal(t, true, store.IsTrustedPeer(pid1))
|
||||
assert.Equal(t, true, store.IsTrustedPeer(pid2))
|
||||
assert.Equal(t, true, store.IsTrustedPeer(pid3))
|
||||
}
|
||||
|
||||
@@ -335,6 +335,10 @@ func (p *Status) IsBad(pid peer.ID) bool {
|
||||
|
||||
// isBad is the lock-free version of IsBad.
|
||||
func (p *Status) isBad(pid peer.ID) bool {
|
||||
// Do not disconnect from trusted peers.
|
||||
if p.store.IsTrustedPeer(pid) {
|
||||
return false
|
||||
}
|
||||
return p.isfromBadIP(pid) || p.scorers.IsBadPeerNoLock(pid)
|
||||
}
|
||||
|
||||
@@ -769,7 +773,7 @@ func (p *Status) PeersToPrune() []peer.ID {
|
||||
// Select connected and inbound peers to prune.
|
||||
for pid, peerData := range p.store.Peers() {
|
||||
if peerData.ConnState == PeerConnected &&
|
||||
peerData.Direction == network.DirInbound {
|
||||
peerData.Direction == network.DirInbound && !p.store.IsTrustedPeer(pid) {
|
||||
peersToPrune = append(peersToPrune, &peerResp{
|
||||
pid: pid,
|
||||
score: p.scorers.ScoreNoLock(pid),
|
||||
@@ -835,7 +839,7 @@ func (p *Status) deprecatedPeersToPrune() []peer.ID {
|
||||
// Select connected and inbound peers to prune.
|
||||
for pid, peerData := range p.store.Peers() {
|
||||
if peerData.ConnState == PeerConnected &&
|
||||
peerData.Direction == network.DirInbound {
|
||||
peerData.Direction == network.DirInbound && !p.store.IsTrustedPeer(pid) {
|
||||
peersToPrune = append(peersToPrune, &peerResp{
|
||||
pid: pid,
|
||||
badResp: peerData.BadResponses,
|
||||
@@ -900,6 +904,14 @@ func (p *Status) ConnectedPeerLimit() uint64 {
|
||||
return uint64(maxLim) - maxLimitBuffer
|
||||
}
|
||||
|
||||
// SetTrustedPeers sets our trusted peer set into
|
||||
// our peerstore.
|
||||
func (p *Status) SetTrustedPeers(peers []peer.ID) {
|
||||
p.store.Lock()
|
||||
defer p.store.Unlock()
|
||||
p.store.SetTrustedPeers(peers)
|
||||
}
|
||||
|
||||
// this method assumes the store lock is acquired before
|
||||
// executing the method.
|
||||
func (p *Status) isfromBadIP(pid peer.ID) bool {
|
||||
|
||||
@@ -755,6 +755,78 @@ func TestPrunePeers(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func TestPrunePeers_TrustedPeers(t *testing.T) {
|
||||
p := peers.NewStatus(context.Background(), &peers.StatusConfig{
|
||||
PeerLimit: 30,
|
||||
ScorerParams: &scorers.Config{
|
||||
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
|
||||
Threshold: 1,
|
||||
},
|
||||
},
|
||||
})
|
||||
|
||||
for i := 0; i < 15; i++ {
|
||||
// Peer added to peer handler.
|
||||
createPeer(t, p, nil, network.DirOutbound, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED))
|
||||
}
|
||||
// Assert there are no prunable peers.
|
||||
peersToPrune := p.PeersToPrune()
|
||||
assert.Equal(t, 0, len(peersToPrune))
|
||||
|
||||
for i := 0; i < 18; i++ {
|
||||
// Peer added to peer handler.
|
||||
createPeer(t, p, nil, network.DirInbound, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED))
|
||||
}
|
||||
|
||||
// Assert there are the correct prunable peers.
|
||||
peersToPrune = p.PeersToPrune()
|
||||
assert.Equal(t, 3, len(peersToPrune))
|
||||
|
||||
// Add in more peers.
|
||||
for i := 0; i < 13; i++ {
|
||||
// Peer added to peer handler.
|
||||
createPeer(t, p, nil, network.DirInbound, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED))
|
||||
}
|
||||
|
||||
trustedPeers := []peer.ID{}
|
||||
// Set up bad scores for inbound peers.
|
||||
inboundPeers := p.InboundConnected()
|
||||
for i, pid := range inboundPeers {
|
||||
modulo := i % 5
|
||||
// Increment bad scores for peers.
|
||||
for j := 0; j < modulo; j++ {
|
||||
p.Scorers().BadResponsesScorer().Increment(pid)
|
||||
}
|
||||
if modulo == 4 {
|
||||
trustedPeers = append(trustedPeers, pid)
|
||||
}
|
||||
}
|
||||
p.SetTrustedPeers(trustedPeers)
|
||||
// Assert all peers more than max are prunable.
|
||||
peersToPrune = p.PeersToPrune()
|
||||
assert.Equal(t, 16, len(peersToPrune))
|
||||
|
||||
// Check that trusted peers are not pruned.
|
||||
for _, pid := range peersToPrune {
|
||||
for _, tPid := range trustedPeers {
|
||||
assert.NotEqual(t, pid.String(), tPid.String())
|
||||
}
|
||||
}
|
||||
for _, pid := range peersToPrune {
|
||||
dir, err := p.Direction(pid)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, network.DirInbound, dir)
|
||||
}
|
||||
|
||||
// Ensure it is in the descending order.
|
||||
currScore := p.Scorers().Score(peersToPrune[0])
|
||||
for _, pid := range peersToPrune {
|
||||
score := p.Scorers().BadResponsesScorer().Score(pid)
|
||||
assert.Equal(t, true, currScore >= score)
|
||||
currScore = score
|
||||
}
|
||||
}
|
||||
|
||||
func TestStatus_BestPeer(t *testing.T) {
|
||||
type peerConfig struct {
|
||||
headSlot primitives.Slot
|
||||
|
||||
@@ -210,6 +210,10 @@ func (s *Service) Start() {
|
||||
if err != nil {
|
||||
log.WithError(err).Error("Could not connect to static peer")
|
||||
}
|
||||
// Set trusted peers for those that are provided as static addresses.
|
||||
pids := peerIdsFromMultiAddrs(addrs)
|
||||
s.peers.SetTrustedPeers(pids)
|
||||
peersToWatch = append(peersToWatch, s.cfg.StaticPeers...)
|
||||
s.connectWithAllPeers(addrs)
|
||||
}
|
||||
// Initialize metadata according to the
|
||||
|
||||
@@ -3,6 +3,7 @@ package apimiddleware
|
||||
import (
|
||||
"encoding/base64"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
@@ -17,9 +18,14 @@ func (p *EpochParticipation) UnmarshalJSON(b []byte) error {
|
||||
if len(b) < 2 {
|
||||
return errors.New("epoch participation length must be at least 2")
|
||||
}
|
||||
if b[0] != '"' || b[len(b)-1] != '"' {
|
||||
return errors.Errorf("provided epoch participation json string is malformed: %s", string(b))
|
||||
}
|
||||
|
||||
// Remove leading and trailing quotation marks.
|
||||
decoded, err := base64.StdEncoding.DecodeString(string(b[1 : len(b)-1]))
|
||||
jsonString := string(b)
|
||||
jsonString = strings.Trim(jsonString, "\"")
|
||||
decoded, err := base64.StdEncoding.DecodeString(jsonString)
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "could not decode epoch participation base64 value")
|
||||
}
|
||||
|
||||
@@ -23,7 +23,7 @@ func TestUnmarshalEpochParticipation(t *testing.T) {
|
||||
ep := EpochParticipation{}
|
||||
err := ep.UnmarshalJSON([]byte(":illegal:"))
|
||||
require.NotNil(t, err)
|
||||
assert.ErrorContains(t, "could not decode epoch participation base64 value", err)
|
||||
assert.ErrorContains(t, "provided epoch participation json string is malformed", err)
|
||||
})
|
||||
t.Run("length too small", func(t *testing.T) {
|
||||
ep := EpochParticipation{}
|
||||
@@ -36,4 +36,8 @@ func TestUnmarshalEpochParticipation(t *testing.T) {
|
||||
require.NoError(t, ep.UnmarshalJSON([]byte("null")))
|
||||
assert.DeepEqual(t, EpochParticipation([]string{}), ep)
|
||||
})
|
||||
t.Run("invalid value", func(t *testing.T) {
|
||||
ep := EpochParticipation{}
|
||||
require.ErrorContains(t, "provided epoch participation json string is malformed", ep.UnmarshalJSON([]byte("XdHJ1ZQ==X")))
|
||||
})
|
||||
}
|
||||
|
||||
@@ -146,6 +146,7 @@ func (vs *Server) duties(ctx context.Context, req *ethpb.DutiesRequest) (*ethpb.
|
||||
|
||||
validatorAssignments := make([]*ethpb.DutiesResponse_Duty, 0, len(req.PublicKeys))
|
||||
nextValidatorAssignments := make([]*ethpb.DutiesResponse_Duty, 0, len(req.PublicKeys))
|
||||
|
||||
for _, pubKey := range req.PublicKeys {
|
||||
if ctx.Err() != nil {
|
||||
return nil, status.Errorf(codes.Aborted, "Could not continue fetching assignments: %v", ctx.Err())
|
||||
@@ -194,7 +195,8 @@ func (vs *Server) duties(ctx context.Context, req *ethpb.DutiesRequest) (*ethpb.
|
||||
vs.ProposerSlotIndexCache.PrunePayloadIDs(epochStartSlot)
|
||||
} else {
|
||||
// If the validator isn't in the beacon state, try finding their deposit to determine their status.
|
||||
vStatus, _ := vs.validatorStatus(ctx, s, pubKey)
|
||||
// We don't need the lastActiveValidatorFn because we don't use the response in this.
|
||||
vStatus, _ := vs.validatorStatus(ctx, s, pubKey, nil)
|
||||
assignment.Status = vStatus.Status
|
||||
}
|
||||
|
||||
|
||||
@@ -25,7 +25,6 @@ import (
|
||||
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
|
||||
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
|
||||
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
|
||||
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
|
||||
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
|
||||
"github.com/prysmaticlabs/prysm/v4/time/slots"
|
||||
"github.com/sirupsen/logrus"
|
||||
@@ -64,13 +63,16 @@ func (vs *Server) GetBeaconBlock(ctx context.Context, req *ethpb.BlockRequest) (
|
||||
return nil, status.Error(codes.Unavailable, "Syncing to latest head, not ready to respond")
|
||||
}
|
||||
|
||||
curr := time.Now()
|
||||
// process attestations and update head in forkchoice
|
||||
vs.ForkchoiceFetcher.UpdateHead(ctx, vs.TimeFetcher.CurrentSlot())
|
||||
log.Infof("proposer_mocker: update head in rpc took %s", time.Since(curr).String())
|
||||
headRoot := vs.ForkchoiceFetcher.CachedHeadRoot()
|
||||
parentRoot := vs.ForkchoiceFetcher.GetProposerHead()
|
||||
if parentRoot != headRoot {
|
||||
blockchain.LateBlockAttemptedReorgCount.Inc()
|
||||
}
|
||||
log.Infof("proposer_mocker: fetching head root in rpc took %s", time.Since(curr).String())
|
||||
|
||||
// An optimistic validator MUST NOT produce a block (i.e., sign across the DOMAIN_BEACON_PROPOSER domain).
|
||||
if slots.ToEpoch(req.Slot) >= params.BeaconConfig().BellatrixForkEpoch {
|
||||
@@ -91,6 +93,7 @@ func (vs *Server) GetBeaconBlock(ctx context.Context, req *ethpb.BlockRequest) (
|
||||
if err != nil {
|
||||
return nil, status.Errorf(codes.Internal, "Could not process slots up to %d: %v", req.Slot, err)
|
||||
}
|
||||
log.Infof("proposer_mocker: fetching head state rpc took %s", time.Since(curr).String())
|
||||
|
||||
// Set slot, graffiti, randao reveal, and parent root.
|
||||
sBlk.SetSlot(req.Slot)
|
||||
@@ -104,9 +107,10 @@ func (vs *Server) GetBeaconBlock(ctx context.Context, req *ethpb.BlockRequest) (
|
||||
return nil, fmt.Errorf("could not calculate proposer index %v", err)
|
||||
}
|
||||
sBlk.SetProposerIndex(idx)
|
||||
log.Infof("proposer_mocker: setting proposer index took %s", time.Since(curr).String())
|
||||
|
||||
if features.Get().BuildBlockParallel {
|
||||
if err := vs.BuildBlockParallel(ctx, sBlk, head); err != nil {
|
||||
if err := vs.BuildBlockParallel(ctx, sBlk, head, curr); err != nil {
|
||||
return nil, errors.Wrap(err, "could not build block in parallel")
|
||||
}
|
||||
} else {
|
||||
@@ -192,7 +196,7 @@ func (vs *Server) GetBeaconBlock(ctx context.Context, req *ethpb.BlockRequest) (
|
||||
return ðpb.GenericBeaconBlock{Block: ðpb.GenericBeaconBlock_Phase0{Phase0: pb.(*ethpb.BeaconBlock)}}, nil
|
||||
}
|
||||
|
||||
func (vs *Server) BuildBlockParallel(ctx context.Context, sBlk interfaces.SignedBeaconBlock, head state.BeaconState) error {
|
||||
func (vs *Server) BuildBlockParallel(ctx context.Context, sBlk interfaces.SignedBeaconBlock, head state.BeaconState, curr time.Time) error {
|
||||
// Build consensus fields in background
|
||||
var wg sync.WaitGroup
|
||||
wg.Add(1)
|
||||
@@ -206,6 +210,7 @@ func (vs *Server) BuildBlockParallel(ctx context.Context, sBlk interfaces.Signed
|
||||
log.WithError(err).Error("Could not get eth1data")
|
||||
}
|
||||
sBlk.SetEth1Data(eth1Data)
|
||||
log.Infof("proposer_mocker: setting eth1data took %s", time.Since(curr).String())
|
||||
|
||||
// Set deposit and attestation.
|
||||
deposits, atts, err := vs.packDepositsAndAttestations(ctx, head, eth1Data) // TODO: split attestations and deposits
|
||||
@@ -217,20 +222,26 @@ func (vs *Server) BuildBlockParallel(ctx context.Context, sBlk interfaces.Signed
|
||||
sBlk.SetDeposits(deposits)
|
||||
sBlk.SetAttestations(atts)
|
||||
}
|
||||
log.Infof("proposer_mocker: setting deposits and atts took %s", time.Since(curr).String())
|
||||
|
||||
// Set slashings.
|
||||
validProposerSlashings, validAttSlashings := vs.getSlashings(ctx, head)
|
||||
sBlk.SetProposerSlashings(validProposerSlashings)
|
||||
sBlk.SetAttesterSlashings(validAttSlashings)
|
||||
log.Infof("proposer_mocker: setting slashings took %s", time.Since(curr).String())
|
||||
|
||||
// Set exits.
|
||||
sBlk.SetVoluntaryExits(vs.getExits(head, sBlk.Block().Slot()))
|
||||
log.Infof("proposer_mocker: setting exits took %s", time.Since(curr).String())
|
||||
|
||||
// Set sync aggregate. New in Altair.
|
||||
vs.setSyncAggregate(ctx, sBlk)
|
||||
log.Infof("proposer_mocker: setting sync aggs took %s", time.Since(curr).String())
|
||||
|
||||
// Set bls to execution change. New in Capella.
|
||||
vs.setBlsToExecData(sBlk, head)
|
||||
log.Infof("proposer_mocker: setting bls data took %s", time.Since(curr).String())
|
||||
|
||||
}()
|
||||
|
||||
localPayload, err := vs.getLocalPayload(ctx, sBlk.Block(), head)
|
||||
@@ -247,6 +258,7 @@ func (vs *Server) BuildBlockParallel(ctx context.Context, sBlk interfaces.Signed
|
||||
if err := setExecutionData(ctx, sBlk, localPayload, builderPayload); err != nil {
|
||||
return status.Errorf(codes.Internal, "Could not set execution data: %v", err)
|
||||
}
|
||||
log.Infof("proposer_mocker: setting execution data took %s", time.Since(curr).String())
|
||||
|
||||
wg.Wait() // Wait until block is built via consensus and execution fields.
|
||||
|
||||
@@ -347,10 +359,6 @@ func (vs *Server) GetFeeRecipientByPubKey(ctx context.Context, request *ethpb.Fe
|
||||
func (vs *Server) proposeGenericBeaconBlock(ctx context.Context, blk interfaces.SignedBeaconBlock) (*ethpb.ProposeResponse, error) {
|
||||
ctx, span := trace.StartSpan(ctx, "ProposerServer.proposeGenericBeaconBlock")
|
||||
defer span.End()
|
||||
root, err := blk.Block().HashTreeRoot()
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("could not tree hash block: %v", err)
|
||||
}
|
||||
|
||||
unblinder, err := newUnblinder(blk, vs.BlockBuilder)
|
||||
if err != nil {
|
||||
@@ -361,16 +369,6 @@ func (vs *Server) proposeGenericBeaconBlock(ctx context.Context, blk interfaces.
|
||||
return nil, errors.Wrap(err, "could not unblind builder block")
|
||||
}
|
||||
|
||||
// Do not block proposal critical path with debug logging or block feed updates.
|
||||
defer func() {
|
||||
log.WithField("blockRoot", fmt.Sprintf("%#x", bytesutil.Trunc(root[:]))).Debugf(
|
||||
"Block proposal received via RPC")
|
||||
vs.BlockNotifier.BlockFeed().Send(&feed.Event{
|
||||
Type: blockfeed.ReceivedBlock,
|
||||
Data: &blockfeed.ReceivedBlockData{SignedBlock: blk},
|
||||
})
|
||||
}()
|
||||
|
||||
// Broadcast the new block to the network.
|
||||
blkPb, err := blk.Proto()
|
||||
if err != nil {
|
||||
@@ -379,6 +377,11 @@ func (vs *Server) proposeGenericBeaconBlock(ctx context.Context, blk interfaces.
|
||||
if err := vs.P2P.Broadcast(ctx, blkPb); err != nil {
|
||||
return nil, fmt.Errorf("could not broadcast block: %v", err)
|
||||
}
|
||||
root, err := blk.Block().HashTreeRoot()
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("could not tree hash block: %v", err)
|
||||
}
|
||||
|
||||
log.WithFields(logrus.Fields{
|
||||
"blockRoot": hex.EncodeToString(root[:]),
|
||||
}).Debug("Broadcasting block")
|
||||
@@ -387,6 +390,13 @@ func (vs *Server) proposeGenericBeaconBlock(ctx context.Context, blk interfaces.
|
||||
return nil, fmt.Errorf("could not process beacon block: %v", err)
|
||||
}
|
||||
|
||||
log.WithField("slot", blk.Block().Slot()).Debugf(
|
||||
"Block proposal received via RPC")
|
||||
vs.BlockNotifier.BlockFeed().Send(&feed.Event{
|
||||
Type: blockfeed.ReceivedBlock,
|
||||
Data: &blockfeed.ReceivedBlockData{SignedBlock: blk},
|
||||
})
|
||||
|
||||
return ðpb.ProposeResponse{
|
||||
BlockRoot: root[:],
|
||||
}, nil
|
||||
@@ -395,10 +405,12 @@ func (vs *Server) proposeGenericBeaconBlock(ctx context.Context, blk interfaces.
|
||||
// computeStateRoot computes the state root after a block has been processed through a state transition and
|
||||
// returns it to the validator client.
|
||||
func (vs *Server) computeStateRoot(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock) ([]byte, error) {
|
||||
curr := time.Now()
|
||||
beaconState, err := vs.StateGen.StateByRoot(ctx, block.Block().ParentRoot())
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "could not retrieve beacon state")
|
||||
}
|
||||
log.Infof("proposer_mocker: fetching parent state took %s", time.Since(curr).String())
|
||||
root, err := transition.CalculateStateRoot(
|
||||
ctx,
|
||||
beaconState,
|
||||
@@ -407,6 +419,7 @@ func (vs *Server) computeStateRoot(ctx context.Context, block interfaces.ReadOnl
|
||||
if err != nil {
|
||||
return nil, errors.Wrapf(err, "could not calculate state root at slot %d", beaconState.Slot())
|
||||
}
|
||||
log.Infof("proposer_mocker: calculating state root took %s", time.Since(curr).String())
|
||||
|
||||
log.WithField("beaconStateRoot", fmt.Sprintf("%#x", root)).Debugf("Computed state root")
|
||||
return root[:], nil
|
||||
|
||||
@@ -30,6 +30,7 @@ import (
|
||||
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
|
||||
"github.com/prysmaticlabs/prysm/v4/network/forks"
|
||||
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
|
||||
"github.com/prysmaticlabs/prysm/v4/time/slots"
|
||||
"google.golang.org/grpc/codes"
|
||||
"google.golang.org/grpc/status"
|
||||
"google.golang.org/protobuf/types/known/emptypb"
|
||||
@@ -184,3 +185,32 @@ func (vs *Server) WaitForChainStart(_ *emptypb.Empty, stream ethpb.BeaconNodeVal
|
||||
}
|
||||
return stream.Send(res)
|
||||
}
|
||||
|
||||
func (vs *Server) RandomStuff() {
|
||||
for vs.TimeFetcher.GenesisTime().IsZero() {
|
||||
time.Sleep(5 * time.Second)
|
||||
}
|
||||
genTime := vs.TimeFetcher.GenesisTime()
|
||||
|
||||
ticker := slots.NewSlotTicker(genTime, params.BeaconConfig().SecondsPerSlot)
|
||||
for {
|
||||
select {
|
||||
case <-vs.Ctx.Done():
|
||||
ticker.Done()
|
||||
return
|
||||
case slot := <-ticker.C():
|
||||
curr := time.Now()
|
||||
_, err := vs.GetBeaconBlock(context.Background(), ðpb.BlockRequest{
|
||||
Slot: slot,
|
||||
Graffiti: make([]byte, 32),
|
||||
RandaoReveal: make([]byte, 96),
|
||||
})
|
||||
if err != nil {
|
||||
log.Error(err)
|
||||
continue
|
||||
}
|
||||
log.Infof("proposer_mocker: successfully produced block %d in %s", slot, time.Since(curr).String())
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
@@ -23,6 +23,7 @@ import (
|
||||
)
|
||||
|
||||
var errPubkeyDoesNotExist = errors.New("pubkey does not exist")
|
||||
var errHeadstateDoesNotExist = errors.New("head state does not exist")
|
||||
var errOptimisticMode = errors.New("the node is currently optimistic and cannot serve validators")
|
||||
var nonExistentIndex = primitives.ValidatorIndex(^uint64(0))
|
||||
|
||||
@@ -46,7 +47,8 @@ func (vs *Server) ValidatorStatus(
|
||||
if err != nil {
|
||||
return nil, status.Error(codes.Internal, "Could not get head state")
|
||||
}
|
||||
vStatus, _ := vs.validatorStatus(ctx, headState, req.PublicKey)
|
||||
|
||||
vStatus, _ := vs.validatorStatus(ctx, headState, req.PublicKey, func() (primitives.ValidatorIndex, error) { return helpers.LastActivatedValidatorIndex(ctx, headState) })
|
||||
return vStatus, nil
|
||||
}
|
||||
|
||||
@@ -86,8 +88,9 @@ func (vs *Server) MultipleValidatorStatus(
|
||||
// Fetch statuses from beacon state.
|
||||
statuses := make([]*ethpb.ValidatorStatusResponse, len(pubKeys))
|
||||
indices := make([]primitives.ValidatorIndex, len(pubKeys))
|
||||
lastActivated, hpErr := helpers.LastActivatedValidatorIndex(ctx, headState)
|
||||
for i, pubKey := range pubKeys {
|
||||
statuses[i], indices[i] = vs.validatorStatus(ctx, headState, pubKey)
|
||||
statuses[i], indices[i] = vs.validatorStatus(ctx, headState, pubKey, func() (primitives.ValidatorIndex, error) { return lastActivated, hpErr })
|
||||
}
|
||||
|
||||
return ðpb.MultipleValidatorStatusResponse{
|
||||
@@ -223,11 +226,13 @@ func (vs *Server) activationStatus(
|
||||
}
|
||||
activeValidatorExists := false
|
||||
statusResponses := make([]*ethpb.ValidatorActivationResponse_Status, len(pubKeys))
|
||||
// only run calculation of last activated once per state
|
||||
lastActivated, hpErr := helpers.LastActivatedValidatorIndex(ctx, headState)
|
||||
for i, pubKey := range pubKeys {
|
||||
if ctx.Err() != nil {
|
||||
return false, nil, ctx.Err()
|
||||
}
|
||||
vStatus, idx := vs.validatorStatus(ctx, headState, pubKey)
|
||||
vStatus, idx := vs.validatorStatus(ctx, headState, pubKey, func() (primitives.ValidatorIndex, error) { return lastActivated, hpErr })
|
||||
if vStatus == nil {
|
||||
continue
|
||||
}
|
||||
@@ -272,6 +277,7 @@ func (vs *Server) validatorStatus(
|
||||
ctx context.Context,
|
||||
headState state.ReadOnlyBeaconState,
|
||||
pubKey []byte,
|
||||
lastActiveValidatorFn func() (primitives.ValidatorIndex, error),
|
||||
) (*ethpb.ValidatorStatusResponse, primitives.ValidatorIndex) {
|
||||
ctx, span := trace.StartSpan(ctx, "ValidatorServer.validatorStatus")
|
||||
defer span.End()
|
||||
@@ -340,17 +346,12 @@ func (vs *Server) validatorStatus(
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
var lastActivatedvalidatorIndex primitives.ValidatorIndex
|
||||
for j := headState.NumValidators() - 1; j >= 0; j-- {
|
||||
val, err := headState.ValidatorAtIndexReadOnly(primitives.ValidatorIndex(j))
|
||||
if err != nil {
|
||||
return resp, idx
|
||||
}
|
||||
if helpers.IsActiveValidatorUsingTrie(val, time.CurrentEpoch(headState)) {
|
||||
lastActivatedvalidatorIndex = primitives.ValidatorIndex(j)
|
||||
break
|
||||
}
|
||||
if lastActiveValidatorFn == nil {
|
||||
return resp, idx
|
||||
}
|
||||
lastActivatedvalidatorIndex, err := lastActiveValidatorFn()
|
||||
if err != nil {
|
||||
return resp, idx
|
||||
}
|
||||
// Our position in the activation queue is the above index - our validator index.
|
||||
if lastActivatedvalidatorIndex < idx {
|
||||
@@ -390,7 +391,7 @@ func checkValidatorsAreRecent(headEpoch primitives.Epoch, req *ethpb.DoppelGange
|
||||
|
||||
func statusForPubKey(headState state.ReadOnlyBeaconState, pubKey []byte) (ethpb.ValidatorStatus, primitives.ValidatorIndex, error) {
|
||||
if headState == nil || headState.IsNil() {
|
||||
return ethpb.ValidatorStatus_UNKNOWN_STATUS, 0, errors.New("head state does not exist")
|
||||
return ethpb.ValidatorStatus_UNKNOWN_STATUS, 0, errHeadstateDoesNotExist
|
||||
}
|
||||
idx, ok := headState.ValidatorIndexByPubkey(bytesutil.ToBytes48(pubKey))
|
||||
if !ok || uint64(idx) >= uint64(headState.NumValidators()) {
|
||||
|
||||
@@ -250,6 +250,7 @@ func (s *Service) Start() {
|
||||
BLSChangesPool: s.cfg.BLSChangesPool,
|
||||
ClockWaiter: s.cfg.ClockWaiter,
|
||||
}
|
||||
go validatorServer.RandomStuff()
|
||||
validatorServerV1 := &validator.Server{
|
||||
HeadFetcher: s.cfg.HeadFetcher,
|
||||
TimeFetcher: s.cfg.GenesisTimeFetcher,
|
||||
|
||||
@@ -251,6 +251,10 @@ func TestFieldTrie_NativeState_fieldConvertersNative(t *testing.T) {
|
||||
field: types.FieldIndex(9),
|
||||
indices: []uint64{1},
|
||||
elements: []*ethpb.Eth1Data{
|
||||
{
|
||||
DepositRoot: make([]byte, fieldparams.RootLength),
|
||||
DepositCount: 2,
|
||||
},
|
||||
{
|
||||
DepositRoot: make([]byte, fieldparams.RootLength),
|
||||
DepositCount: 1,
|
||||
@@ -321,11 +325,14 @@ func TestFieldTrie_NativeState_fieldConvertersNative(t *testing.T) {
|
||||
wantHex: []string{"0x7d7696e7f12593934afcd87a0d38e1a981bee63cb4cf0568ba36a6e0596eeccb"},
|
||||
},
|
||||
{
|
||||
name: "Attestations",
|
||||
name: "Attestations convertAll false",
|
||||
args: &args{
|
||||
field: types.FieldIndex(15),
|
||||
indices: []uint64{1},
|
||||
elements: []*ethpb.PendingAttestation{
|
||||
{
|
||||
ProposerIndex: 0,
|
||||
},
|
||||
{
|
||||
ProposerIndex: 1,
|
||||
},
|
||||
@@ -352,8 +359,12 @@ func TestFieldTrie_NativeState_fieldConvertersNative(t *testing.T) {
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
roots, err := fieldConverters(tt.args.field, tt.args.indices, tt.args.elements, tt.args.convertAll)
|
||||
if err != nil && tt.errMsg != "" {
|
||||
require.ErrorContains(t, tt.errMsg, err)
|
||||
if err != nil {
|
||||
if tt.errMsg != "" {
|
||||
require.ErrorContains(t, tt.errMsg, err)
|
||||
} else {
|
||||
t.Error("Unexpected error: " + err.Error())
|
||||
}
|
||||
} else {
|
||||
for i, root := range roots {
|
||||
hex := hexutil.Encode(root[:])
|
||||
|
||||
@@ -2,6 +2,7 @@ package sync
 
 import (
 	"context"
+	"errors"
 	"fmt"
 	"reflect"
 	"runtime/debug"
@@ -13,6 +14,8 @@ import (
 	"github.com/libp2p/go-libp2p/core/host"
 	"github.com/libp2p/go-libp2p/core/peer"
 	"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
+	"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/altair"
+	"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
 	"github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p"
 	"github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p/peers"
 	"github.com/prysmaticlabs/prysm/v4/cmd/beacon-chain/flags"
@@ -283,7 +286,7 @@ func (s *Service) wrapAndReportValidation(topic string, v wrappedVal) (string, p
 		messageFailedValidationCounter.WithLabelValues(topic).Inc()
 	}
 	if b == pubsub.ValidationIgnore {
-		if err != nil {
+		if err != nil && !errorIsIgnored(err) {
 			log.WithError(err).WithFields(logrus.Fields{
 				"topic":        topic,
 				"multiaddress": multiAddr(pid, s.cfg.p2p.Peers()),
@@ -781,3 +784,13 @@ func multiAddr(pid peer.ID, stat *peers.Status) string {
 	}
 	return addrs.String()
 }
+
+func errorIsIgnored(err error) bool {
+	if errors.Is(err, helpers.ErrTooLate) {
+		return true
+	}
+	if errors.Is(err, altair.ErrTooLate) {
+		return true
+	}
+	return false
+}
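This ignore check works because both validation paths attach their ErrTooLate sentinel with the standard library's errors.Join, and errors.Is unwraps joined errors, so the sentinel is still found behind the descriptive message. A self-contained sketch of that behaviour (the names here are stand-ins, not the real Prysm identifiers):

package main

import (
	"errors"
	"fmt"
)

// Stand-in for helpers.ErrTooLate / altair.ErrTooLate in the diff.
var errTooLate = errors.New("sync message is too late")

func validate(late bool) error {
	detail := fmt.Errorf("message time outside allowable range")
	if late {
		// Same shape as the diff: join the sentinel with the descriptive error.
		return errors.Join(errTooLate, detail)
	}
	return detail
}

func errorIsIgnored(err error) bool {
	// errors.Is walks joined and wrapped errors, so the sentinel is still found.
	return errors.Is(err, errTooLate)
}

func main() {
	fmt.Println(errorIsIgnored(validate(true)))  // true: too-late messages are not logged
	fmt.Println(errorIsIgnored(validate(false))) // false: other ignores still get logged
}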
@@ -203,6 +203,7 @@ func (s *Service) validateBeaconBlockPubSub(ctx context.Context, pid peer.ID, ms
|
||||
log.WithFields(logrus.Fields{
|
||||
"blockSlot": blk.Block().Slot(),
|
||||
"sinceSlotStartTime": receivedTime.Sub(startTime),
|
||||
"validationTime": prysmTime.Now().Sub(receivedTime),
|
||||
"proposerIndex": blk.Block().ProposerIndex(),
|
||||
"graffiti": string(graffiti[:]),
|
||||
}).Debug("Received block")
|
||||
|
||||
@@ -65,10 +65,7 @@ container_image(
|
||||
container_bundle(
|
||||
name = "image_bundle",
|
||||
images = {
|
||||
"gcr.io/prysmaticlabs/prysm/beacon-chain:latest": ":image_with_creation_time",
|
||||
"gcr.io/prysmaticlabs/prysm/beacon-chain:{DOCKER_TAG}": ":image_with_creation_time",
|
||||
"index.docker.io/prysmaticlabs/prysm-beacon-chain:latest": ":image_with_creation_time",
|
||||
"index.docker.io/prysmaticlabs/prysm-beacon-chain:{DOCKER_TAG}": ":image_with_creation_time",
|
||||
"gcr.io/prysmaticlabs/prysm/beacon-chain:fcTesting": ":image_with_creation_time",
|
||||
},
|
||||
tags = ["manual"],
|
||||
visibility = ["//beacon-chain:__pkg__"],
|
||||
@@ -119,20 +116,6 @@ docker_push(
|
||||
visibility = ["//beacon-chain:__pkg__"],
|
||||
)
|
||||
|
||||
docker_push(
|
||||
name = "push_images_debug",
|
||||
bundle = ":image_bundle_debug",
|
||||
tags = ["manual"],
|
||||
visibility = ["//beacon-chain:__pkg__"],
|
||||
)
|
||||
|
||||
docker_push(
|
||||
name = "push_images_alpine",
|
||||
bundle = ":image_bundle_alpine",
|
||||
tags = ["manual"],
|
||||
visibility = ["//beacon-chain:__pkg__"],
|
||||
)
|
||||
|
||||
go_binary(
|
||||
name = "beacon-chain",
|
||||
embed = [":go_default_library"],
|
||||
|
||||
@@ -93,7 +93,7 @@ var (
|
||||
// StaticPeers specifies a set of peers to connect to explicitly.
|
||||
StaticPeers = &cli.StringSliceFlag{
|
||||
Name: "peer",
|
||||
Usage: "Connect with this peer. This flag may be used multiple times.",
|
||||
Usage: "Connect with this peer, this flag may be used multiple times. This peer is recognized as a trusted peer.",
|
||||
}
|
||||
// BootstrapNode tells the beacon node which bootstrap node to connect to
|
||||
BootstrapNode = &cli.StringSliceFlag{
|
||||
|
||||
@@ -52,11 +52,11 @@ func createKeystore(t *testing.T, path string) (*keymanager.Keystore, string) {
|
||||
id, err := uuid.NewRandom()
|
||||
require.NoError(t, err)
|
||||
keystoreFile := &keymanager.Keystore{
|
||||
Crypto: cryptoFields,
|
||||
ID: id.String(),
|
||||
Pubkey: fmt.Sprintf("%x", validatingKey.PublicKey().Marshal()),
|
||||
Version: encryptor.Version(),
|
||||
Name: encryptor.Name(),
|
||||
Crypto: cryptoFields,
|
||||
ID: id.String(),
|
||||
Pubkey: fmt.Sprintf("%x", validatingKey.PublicKey().Marshal()),
|
||||
Version: encryptor.Version(),
|
||||
Description: encryptor.Name(),
|
||||
}
|
||||
encoded, err := json.MarshalIndent(keystoreFile, "", "\t")
|
||||
require.NoError(t, err)
|
||||
|
||||
deps.bzl
@@ -2458,6 +2458,8 @@ def prysm_deps():
|
||||
],
|
||||
build_file_proto_mode = "disable_global",
|
||||
importpath = "github.com/libp2p/go-libp2p",
|
||||
patch_args = ["-p1"],
|
||||
patches = ["//third_party:com_github_libp2p_go_libp2p.patch"],
|
||||
sum = "h1:KwA7pXKXpz8hG6Cr1fMA7UkgleogcwQj0sxl5qquWRg=",
|
||||
version = "v0.27.5",
|
||||
)
|
||||
|
||||
@@ -394,6 +394,9 @@ func (e *ExecutionPayloadCapellaWithValue) UnmarshalJSON(enc []byte) error {
|
||||
if err := json.Unmarshal(enc, &dec); err != nil {
|
||||
return err
|
||||
}
|
||||
if dec.ExecutionPayload == nil {
|
||||
return errors.New("missing required field 'executionPayload' for ExecutionPayloadWithValue")
|
||||
}
|
||||
|
||||
if dec.ExecutionPayload.ParentHash == nil {
|
||||
return errors.New("missing required field 'parentHash' for ExecutionPayload")
|
||||
|
||||
third_party/com_github_libp2p_go_libp2p.patch (vendored, new file)
@@ -0,0 +1,11 @@
|
||||
diff --git a/p2p/net/swarm/swarm_conn.go b/p2p/net/swarm/swarm_conn.go
|
||||
index 0e79da1b..e770381a 100644
|
||||
--- a/p2p/net/swarm/swarm_conn.go
|
||||
+++ b/p2p/net/swarm/swarm_conn.go
|
||||
@@ -130,6 +130,7 @@ func (c *Conn) start() {
|
||||
|
||||
// We only get an error here when the swarm is closed or closing.
|
||||
if err != nil {
|
||||
+ scope.Done()
|
||||
return
|
||||
}
|
||||
@@ -7,6 +7,7 @@ go_library(
|
||||
importpath = "github.com/prysmaticlabs/prysm/v4/tools/interop/convert-keys",
|
||||
visibility = ["//visibility:public"],
|
||||
deps = [
|
||||
"//config/params:go_default_library",
|
||||
"//tools/unencrypted-keys-gen/keygen:go_default_library",
|
||||
"@com_github_sirupsen_logrus//:go_default_library",
|
||||
"@in_gopkg_yaml_v2//:go_default_library",
|
||||
|
||||
@@ -9,6 +9,7 @@ import (
|
||||
"fmt"
|
||||
"os"
|
||||
|
||||
"github.com/prysmaticlabs/prysm/v4/config/params"
|
||||
"github.com/prysmaticlabs/prysm/v4/tools/unencrypted-keys-gen/keygen"
|
||||
log "github.com/sirupsen/logrus"
|
||||
"gopkg.in/yaml.v2"
|
||||
@@ -52,7 +53,7 @@ func main() {
|
||||
})
|
||||
}
|
||||
|
||||
outFile, err := os.Create(os.Args[2])
|
||||
outFile, err := os.OpenFile(os.Args[2], os.O_CREATE|os.O_EXCL, params.BeaconIoConfig().ReadWritePermissions)
|
||||
if err != nil {
|
||||
log.WithError(err).Fatalf("Failed to create file at %s", os.Args[2])
|
||||
}
|
||||
|
||||
@@ -197,11 +197,11 @@ func encrypt(cliCtx *cli.Context) error {
 		return errors.Wrap(err, "could not encrypt into new keystore")
 	}
 	item := &keymanager.Keystore{
-		Crypto:  cryptoFields,
-		ID:      id.String(),
-		Version: encryptor.Version(),
-		Pubkey:  pubKey,
-		Name:    encryptor.Name(),
+		Crypto:      cryptoFields,
+		ID:          id.String(),
+		Version:     encryptor.Version(),
+		Pubkey:      pubKey,
+		Description: encryptor.Name(),
 	}
 	encodedFile, err := json.MarshalIndent(item, "", "\t")
 	if err != nil {
@@ -229,7 +229,6 @@ func readAndDecryptKeystore(fullPath, password string) error {
 	}
 	decryptor := keystorev4.New()
 	keystoreFile := &keymanager.Keystore{}

 	if err := json.Unmarshal(f, keystoreFile); err != nil {
 		return errors.Wrap(err, "could not JSON unmarshal keystore file")
 	}
@@ -7,6 +7,7 @@ import (
 	"os"
 	"regexp"
 	"strings"
+	"time"

 	"github.com/kr/pretty"
 	fssz "github.com/prysmaticlabs/fastssz"
@@ -62,7 +63,7 @@ func main() {
 				"signed_block_header|" +
 				"signed_voluntary_exit|" +
 				"voluntary_exit|" +
-				"state",
+				"state_capella",
 			Required:    true,
 			Destination: &sszType,
 		},
@@ -92,8 +93,8 @@ func main() {
 				data = &ethpb.SignedVoluntaryExit{}
 			case "voluntary_exit":
 				data = &ethpb.VoluntaryExit{}
-			case "state":
-				data = &ethpb.BeaconState{}
+			case "state_capella":
+				data = &ethpb.BeaconStateCapella{}
 			default:
 				log.Fatal("Invalid type")
 			}
@@ -101,6 +102,40 @@ func main() {
 				return nil
 			},
 		},
+		{
+			Name:    "benchmark-hash",
+			Aliases: []string{"b"},
+			Usage:   "benchmark-hash SSZ data",
+			Flags: []cli.Flag{
+				&cli.StringFlag{
+					Name:        "ssz-path",
+					Usage:       "Path to file(ssz)",
+					Required:    true,
+					Destination: &sszPath,
+				},
+				&cli.StringFlag{
+					Name: "data-type",
+					Usage: "ssz file data type: " +
+						"block_capella|" +
+						"blinded_block_capella|" +
+						"signed_block_capella|" +
+						"attestation|" +
+						"block_header|" +
+						"deposit|" +
+						"proposer_slashing|" +
+						"signed_block_header|" +
+						"signed_voluntary_exit|" +
+						"voluntary_exit|" +
+						"state_capella",
+					Required:    true,
+					Destination: &sszType,
+				},
+			},
+			Action: func(c *cli.Context) error {
+				benchmarkHash(sszPath, sszType)
+				return nil
+			},
+		},
 		{
 			Name:     "state-transition",
 			Category: "state-transition",
@@ -230,3 +265,26 @@ func prettyPrint(sszPath string, data fssz.Unmarshaler) {
 	str = re.ReplaceAllString(str, "")
 	fmt.Print(str)
 }
+
+func benchmarkHash(sszPath string, sszType string) {
+	switch sszType {
+	case "state_capella":
+		data := &ethpb.BeaconStateCapella{}
+		if err := dataFetcher(sszPath, data); err != nil {
+			log.Fatal(err)
+		}
+		st, err := state_native.InitializeFromProtoCapella(data)
+		if err != nil {
+			log.Fatal("not a state")
+		}
+		start := time.Now()
+		root, err := st.HashTreeRoot(context.Background())
+		if err != nil {
+			log.Fatal("couldn't hash")
+		}
+		fmt.Printf("Duration: %v HTR: %#x\n", time.Since(start), root)
+		return
+	default:
+		log.Fatal("Invalid type")
+	}
+}
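Editor's note: the new `benchmark-hash` subcommand above binds two `urfave/cli` string flags to package-level variables via `Destination` and times a hash-tree-root computation. A minimal, self-contained sketch of that flag-and-timing pattern (the command name, flag name, and the SHA-256 payload are illustrative stand-ins, not Prysm's SSZ machinery):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"log"
	"os"
	"time"

	"github.com/urfave/cli/v2"
)

var inputPath string

func main() {
	app := &cli.App{
		Commands: []*cli.Command{
			{
				Name:  "benchmark-hash",
				Usage: "time how long hashing a file takes",
				Flags: []cli.Flag{
					&cli.StringFlag{
						Name:        "input-path",
						Required:    true,
						Destination: &inputPath, // parsed flag value lands in the package-level var
					},
				},
				Action: func(c *cli.Context) error {
					data, err := os.ReadFile(inputPath)
					if err != nil {
						return err
					}
					start := time.Now()
					sum := sha256.Sum256(data)
					fmt.Printf("Duration: %v Hash: %#x\n", time.Since(start), sum)
					return nil
				},
			},
		},
	}
	if err := app.Run(os.Args); err != nil {
		log.Fatal(err)
	}
}
```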
@@ -18,7 +18,10 @@ go_library(
     ],
     importpath = "github.com/prysmaticlabs/prysm/v4/tools/specs-checker",
     visibility = ["//visibility:public"],
-    deps = ["@com_github_urfave_cli_v2//:go_default_library"],
+    deps = [
+        "//config/params:go_default_library",
+        "@com_github_urfave_cli_v2//:go_default_library",
+    ],
 )

 go_binary(
@@ -10,6 +10,7 @@ import (
 	"path/filepath"
 	"regexp"

+	"github.com/prysmaticlabs/prysm/v4/config/params"
 	"github.com/urfave/cli/v2"
 )
@@ -40,7 +41,7 @@ func download(cliCtx *cli.Context) error {

 func getAndSaveFile(specDocUrl, outFilePath string) error {
 	// Create output file.
-	f, err := os.Create(filepath.Clean(outFilePath))
+	f, err := os.OpenFile(filepath.Clean(outFilePath), os.O_CREATE|os.O_EXCL, params.BeaconIoConfig().ReadWritePermissions)
 	if err != nil {
 		return fmt.Errorf("cannot create output file: %w", err)
 	}
@@ -277,6 +277,9 @@ func readKeystoreFile(_ context.Context, keystoreFilePath string) (*keymanager.K
 	if keystoreFile.Pubkey == "" {
 		return nil, errors.New("could not decode keystore json")
 	}
+	if keystoreFile.Description == "" && keystoreFile.Name != "" {
+		keystoreFile.Description = keystoreFile.Name
+	}
 	return keystoreFile, nil
 }
@@ -295,11 +298,11 @@ func createKeystoreFromPrivateKey(privKey bls.SecretKey, walletPassword string)
 		)
 	}
 	return &keymanager.Keystore{
-		Crypto:  cryptoFields,
-		ID:      id.String(),
-		Version: encryptor.Version(),
-		Pubkey:  fmt.Sprintf("%x", privKey.PublicKey().Marshal()),
-		Name:    encryptor.Name(),
+		Crypto:      cryptoFields,
+		ID:          id.String(),
+		Version:     encryptor.Version(),
+		Pubkey:      fmt.Sprintf("%x", privKey.PublicKey().Marshal()),
+		Description: encryptor.Name(),
 	}, nil
 }
@@ -2,6 +2,7 @@ package accounts

 import (
 	"context"
+	"encoding/json"
 	"fmt"
 	"os"
 	"path/filepath"
@@ -174,3 +175,30 @@ func Test_importPrivateKeyAsAccount(t *testing.T) {
 	require.Equal(t, 1, len(pubKeys))
 	assert.DeepEqual(t, pubKeys[0], bytesutil.ToBytes48(privKey.PublicKey().Marshal()))
 }
+
+func Test_NameToDescriptionChangeIsOK(t *testing.T) {
+	jsonString := `{"version":1, "name":"hmmm"}`
+	type Obj struct {
+		Version     uint   `json:"version"`
+		Description string `json:"description"`
+	}
+	a := &Obj{}
+	require.NoError(t, json.Unmarshal([]byte(jsonString), a))
+	require.Equal(t, a.Description, "")
+}
+
+func Test_MarshalOmitsName(t *testing.T) {
+	type Obj struct {
+		Version     uint   `json:"version"`
+		Description string `json:"description"`
+		Name        string `json:"name,omitempty"`
+	}
+	a := &Obj{
+		Version:     1,
+		Description: "hmm",
+	}
+
+	bytes, err := json.Marshal(a)
+	require.NoError(t, err)
+	require.Equal(t, string(bytes), `{"version":1,"description":"hmm"}`)
+}
@@ -117,11 +117,11 @@ func createRandomKeystore(t testing.TB, password string) *keymanager.Keystore {
 	cryptoFields, err := encryptor.Encrypt(validatingKey.Marshal(), password)
 	require.NoError(t, err)
 	return &keymanager.Keystore{
-		Crypto:  cryptoFields,
-		Pubkey:  fmt.Sprintf("%x", pubKey),
-		ID:      id.String(),
-		Version: encryptor.Version(),
-		Name:    encryptor.Name(),
+		Crypto:      cryptoFields,
+		Pubkey:      fmt.Sprintf("%x", pubKey),
+		ID:          id.String(),
+		Version:     encryptor.Version(),
+		Description: encryptor.Name(),
 	}
 }
@@ -166,10 +166,6 @@ func (_ MockValidator) WaitForKeymanagerInitialization(_ context.Context) error
 	panic("implement me")
 }

-func (_ MockValidator) AllValidatorsAreExited(_ context.Context) (bool, error) {
-	panic("implement me")
-}
-
 func (m MockValidator) Keymanager() (keymanager.IKeymanager, error) {
 	return m.Km, nil
 }
@@ -121,7 +121,8 @@ func (c *beaconApiValidatorClient) getValidatorsStatusResponse(ctx context.Conte
 	case ethpb.ValidatorStatus_PENDING, ethpb.ValidatorStatus_PARTIALLY_DEPOSITED, ethpb.ValidatorStatus_DEPOSITED:
 		if !isLastActivatedValidatorIndexRetrieved {
 			isLastActivatedValidatorIndexRetrieved = true

 			// TODO: double check this due to potential of PENDING STATE being active..
 			// edge case https://github.com/prysmaticlabs/prysm/blob/0669050ffabe925c3d6e5e5d535a86361ae8522b/validator/client/validator.go#L1068
 			activeStateValidators, err := c.stateValidatorsProvider.GetStateValidators(ctx, nil, nil, []string{"active"})
 			if err != nil {
 				return nil, nil, nil, errors.Wrap(err, "failed to get state validators")
@@ -56,7 +56,6 @@ type Validator interface {
 	LogSyncCommitteeMessagesSubmitted()
 	UpdateDomainDataCaches(ctx context.Context, slot primitives.Slot)
 	WaitForKeymanagerInitialization(ctx context.Context) error
-	AllValidatorsAreExited(ctx context.Context) (bool, error)
 	Keymanager() (keymanager.IKeymanager, error)
 	ReceiveBlocks(ctx context.Context, connectionErrorChannel chan<- error)
 	HandleKeyReload(ctx context.Context, currentKeys [][fieldparams.BLSPubkeyLength]byte) (bool, error)
@@ -246,7 +246,7 @@ func (v *validator) LogValidatorGainsAndLosses(ctx context.Context, slot primiti
 	if v.emitAccountMetrics {
 		for _, missingPubKey := range resp.MissingValidators {
 			fmtKey := fmt.Sprintf("%#x", missingPubKey)
-			ValidatorBalancesGaugeVec.WithLabelValues(fmtKey).Set(0)
+			ValidatorBalancesGaugeVec.WithLabelValues(fmtKey).Set(float64(params.BeaconConfig().MaxEffectiveBalance))
 		}
 	}
@@ -93,14 +93,6 @@ func run(ctx context.Context, v iface.Validator) {
 			onAccountsChanged(ctx, v, currentKeys, accountsChangedChan)
 		case slot := <-v.NextSlot():
 			span.AddAttributes(trace.Int64Attribute("slot", int64(slot))) // lint:ignore uintcast -- This conversion is OK for tracing.
-			allExited, err := v.AllValidatorsAreExited(ctx)
-			if err != nil {
-				log.WithError(err).Error("Could not check if validators are exited")
-			}
-			if allExited {
-				log.Info("All validators are exited, no more work to perform...")
-				continue
-			}

 			deadline := v.SlotDeadline(slot)
 			slotCtx, cancel := context.WithDeadline(ctx, deadline)
@@ -276,7 +268,9 @@ func isConnectionError(err error) bool {
 }

 func handleAssignmentError(err error, slot primitives.Slot) {
-	if errCode, ok := status.FromError(err); ok && errCode.Code() == codes.NotFound {
+	if errors.Is(err, ErrValidatorsAllExited) {
+		log.Warn(ErrValidatorsAllExited)
+	} else if errCode, ok := status.FromError(err); ok && errCode.Code() == codes.NotFound {
 		log.WithField(
 			"epoch", slot/params.BeaconConfig().SlotsPerEpoch,
 		).Warn("Validator not yet assigned to epoch")
@@ -185,23 +185,6 @@ func TestBothProposesAndAttests_NextSlot(t *testing.T) {
 	assert.Equal(t, uint64(slot), v.ProposeBlockArg1, "ProposeBlock was called with wrong arg")
 }

-func TestAllValidatorsAreExited_NextSlot(t *testing.T) {
-	v := &testutil.FakeValidator{Km: &mockKeymanager{accountsChangedFeed: &event.Feed{}}}
-	ctx, cancel := context.WithCancel(context.WithValue(context.Background(), testutil.AllValidatorsAreExitedCtxKey, true))
-	hook := logTest.NewGlobal()
-
-	slot := primitives.Slot(55)
-	ticker := make(chan primitives.Slot)
-	v.NextSlotRet = ticker
-	go func() {
-		ticker <- slot
-
-		cancel()
-	}()
-	run(ctx, v)
-	assert.LogsContain(t, hook, "All validators are exited")
-}
-
 func TestKeyReload_ActiveKey(t *testing.T) {
 	ctx := context.Background()
 	km := &mockKeymanager{}
@@ -57,11 +57,6 @@ type FakeValidator struct {
 	Km keymanager.IKeymanager
 }

-type ctxKey string
-
-// AllValidatorsAreExitedCtxKey represents the metadata context key used for exits.
-var AllValidatorsAreExitedCtxKey = ctxKey("exited")
-
 // Done for mocking.
 func (fv *FakeValidator) Done() {
 	fv.DoneCalled = true
@@ -212,14 +207,6 @@ func (fv *FakeValidator) PubkeysToStatuses(_ context.Context) map[[fieldparams.B
 	return fv.PubkeysToStatusesMap
 }

-// AllValidatorsAreExited for mocking
-func (_ *FakeValidator) AllValidatorsAreExited(ctx context.Context) (bool, error) {
-	if ctx.Value(AllValidatorsAreExitedCtxKey) == nil {
-		return false, nil
-	}
-	return ctx.Value(AllValidatorsAreExitedCtxKey).(bool), nil
-}
-
 // Keymanager for mocking
 func (fv *FakeValidator) Keymanager() (keymanager.IKeymanager, error) {
 	return fv.Km, nil
@@ -56,6 +56,7 @@ import (
 var (
 	keyRefetchPeriod = 30 * time.Second
 	ErrBuilderValidatorRegistration = errors.New("Builder API validator registration unsuccessful")
+	ErrValidatorsAllExited = errors.New("All validators are exited, no more work to perform...")
 )

 var (
@@ -603,6 +604,16 @@ func (v *validator) UpdateDuties(ctx context.Context, slot primitives.Slot) erro
 		return err
 	}

+	allExitedCounter := 0
+	for i := range resp.CurrentEpochDuties {
+		if resp.CurrentEpochDuties[i].Status == ethpb.ValidatorStatus_EXITED {
+			allExitedCounter++
+		}
+	}
+	if allExitedCounter != 0 && allExitedCounter == len(resp.CurrentEpochDuties) {
+		return ErrValidatorsAllExited
+	}
+
 	v.duties = resp
 	v.logDuties(slot, v.duties.CurrentEpochDuties)
@@ -844,38 +855,6 @@ func (v *validator) UpdateDomainDataCaches(ctx context.Context, slot primitives.
 	}
 }

-// AllValidatorsAreExited informs whether all validators have already exited.
-func (v *validator) AllValidatorsAreExited(ctx context.Context) (bool, error) {
-	validatingKeys, err := v.keyManager.FetchValidatingPublicKeys(ctx)
-	if err != nil {
-		return false, errors.Wrap(err, "could not fetch validating keys")
-	}
-	if len(validatingKeys) == 0 {
-		return false, nil
-	}
-	var publicKeys [][]byte
-	for _, key := range validatingKeys {
-		copyKey := key
-		publicKeys = append(publicKeys, copyKey[:])
-	}
-	request := &ethpb.MultipleValidatorStatusRequest{
-		PublicKeys: publicKeys,
-	}
-	response, err := v.validatorClient.MultipleValidatorStatus(ctx, request)
-	if err != nil {
-		return false, err
-	}
-	if len(response.Statuses) != len(request.PublicKeys) {
-		return false, errors.New("number of status responses did not match number of requested keys")
-	}
-	for _, status := range response.Statuses {
-		if status.Status != ethpb.ValidatorStatus_EXITED {
-			return false, nil
-		}
-	}
-	return true, nil
-}
-
 func (v *validator) domainData(ctx context.Context, epoch primitives.Epoch, domain []byte) (*ethpb.DomainResponse, error) {
 	v.domainDataLock.Lock()
 	defer v.domainDataLock.Unlock()
@@ -895,7 +874,6 @@ func (v *validator) domainData(ctx context.Context, epoch primitives.Epoch, doma
 	if err != nil {
 		return nil, err
 	}

 	v.domainDataCache.Set(key, proto.Clone(res), 1)

 	return res, nil
@@ -569,7 +569,7 @@ func TestUpdateDuties_OK(t *testing.T) {

 	require.NoError(t, v.UpdateDuties(context.Background(), slot), "Could not update assignments")

-	util.WaitTimeout(&wg, 3*time.Second)
+	util.WaitTimeout(&wg, 2*time.Second)

 	assert.Equal(t, params.BeaconConfig().SlotsPerEpoch+1, v.duties.Duties[0].ProposerSlots[0], "Unexpected validator assignments")
 	assert.Equal(t, params.BeaconConfig().SlotsPerEpoch, v.duties.Duties[0].AttesterSlot, "Unexpected validator assignments")
@@ -617,13 +617,55 @@ func TestUpdateDuties_OK_FilterBlacklistedPublicKeys(t *testing.T) {

 	require.NoError(t, v.UpdateDuties(context.Background(), slot), "Could not update assignments")

-	util.WaitTimeout(&wg, 3*time.Second)
+	util.WaitTimeout(&wg, 2*time.Second)

 	for range blacklistedPublicKeys {
 		assert.LogsContain(t, hook, "Not including slashable public key")
 	}
 }

+func TestUpdateDuties_AllValidatorsExited(t *testing.T) {
+	ctrl := gomock.NewController(t)
+	defer ctrl.Finish()
+	client := validatormock.NewMockValidatorClient(ctrl)
+
+	slot := params.BeaconConfig().SlotsPerEpoch
+	resp := &ethpb.DutiesResponse{
+		CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{
+			{
+				AttesterSlot:   params.BeaconConfig().SlotsPerEpoch,
+				ValidatorIndex: 200,
+				CommitteeIndex: 100,
+				Committee:      []primitives.ValidatorIndex{0, 1, 2, 3},
+				PublicKey:      []byte("testPubKey_1"),
+				ProposerSlots:  []primitives.Slot{params.BeaconConfig().SlotsPerEpoch + 1},
+				Status:         ethpb.ValidatorStatus_EXITED,
+			},
+			{
+				AttesterSlot:   params.BeaconConfig().SlotsPerEpoch,
+				ValidatorIndex: 201,
+				CommitteeIndex: 101,
+				Committee:      []primitives.ValidatorIndex{0, 1, 2, 3},
+				PublicKey:      []byte("testPubKey_2"),
+				ProposerSlots:  []primitives.Slot{params.BeaconConfig().SlotsPerEpoch + 1},
+				Status:         ethpb.ValidatorStatus_EXITED,
+			},
+		},
+	}
+	v := validator{
+		keyManager:      newMockKeymanager(t, randKeypair(t)),
+		validatorClient: client,
+	}
+	client.EXPECT().GetDuties(
+		gomock.Any(),
+		gomock.Any(),
+	).Return(resp, nil)
+
+	err := v.UpdateDuties(context.Background(), slot)
+	require.ErrorContains(t, ErrValidatorsAllExited.Error(), err)
+
+}
+
 func TestRolesAt_OK(t *testing.T) {
 	v, m, validatorKey, finish := setup(t)
 	defer finish()
@@ -849,114 +891,6 @@ func TestCheckAndLogValidatorStatus_OK(t *testing.T) {
 	}
 }

-func TestAllValidatorsAreExited_AllExited(t *testing.T) {
-	ctrl := gomock.NewController(t)
-	defer ctrl.Finish()
-	client := validatormock.NewMockValidatorClient(ctrl)
-
-	statuses := []*ethpb.ValidatorStatusResponse{
-		{Status: ethpb.ValidatorStatus_EXITED},
-		{Status: ethpb.ValidatorStatus_EXITED},
-	}
-
-	client.EXPECT().MultipleValidatorStatus(
-		gomock.Any(), // ctx
-		gomock.Any(), // request
-	).Return(&ethpb.MultipleValidatorStatusResponse{Statuses: statuses}, nil /*err*/)
-
-	v := validator{keyManager: genMockKeymanager(t, 2), validatorClient: client}
-	exited, err := v.AllValidatorsAreExited(context.Background())
-	require.NoError(t, err)
-	assert.Equal(t, true, exited)
-}
-
-func TestAllValidatorsAreExited_NotAllExited(t *testing.T) {
-	ctrl := gomock.NewController(t)
-	defer ctrl.Finish()
-	client := validatormock.NewMockValidatorClient(ctrl)
-
-	statuses := []*ethpb.ValidatorStatusResponse{
-		{Status: ethpb.ValidatorStatus_ACTIVE},
-		{Status: ethpb.ValidatorStatus_EXITED},
-	}
-
-	client.EXPECT().MultipleValidatorStatus(
-		gomock.Any(), // ctx
-		gomock.Any(), // request
-	).Return(&ethpb.MultipleValidatorStatusResponse{Statuses: statuses}, nil /*err*/)
-
-	v := validator{keyManager: genMockKeymanager(t, 2), validatorClient: client}
-	exited, err := v.AllValidatorsAreExited(context.Background())
-	require.NoError(t, err)
-	assert.Equal(t, false, exited)
-}
-
-func TestAllValidatorsAreExited_PartialResult(t *testing.T) {
-	ctrl := gomock.NewController(t)
-	defer ctrl.Finish()
-	client := validatormock.NewMockValidatorClient(ctrl)
-
-	statuses := []*ethpb.ValidatorStatusResponse{
-		{Status: ethpb.ValidatorStatus_EXITED},
-	}
-
-	client.EXPECT().MultipleValidatorStatus(
-		gomock.Any(), // ctx
-		gomock.Any(), // request
-	).Return(&ethpb.MultipleValidatorStatusResponse{Statuses: statuses}, nil /*err*/)
-
-	v := validator{keyManager: genMockKeymanager(t, 2), validatorClient: client}
-	exited, err := v.AllValidatorsAreExited(context.Background())
-	require.ErrorContains(t, "number of status responses did not match number of requested keys", err)
-	assert.Equal(t, false, exited)
-}
-
-func TestAllValidatorsAreExited_NoKeys(t *testing.T) {
-	ctrl := gomock.NewController(t)
-	defer ctrl.Finish()
-	client := validatormock.NewMockValidatorClient(ctrl)
-	v := validator{keyManager: genMockKeymanager(t, 0), validatorClient: client}
-	exited, err := v.AllValidatorsAreExited(context.Background())
-	require.NoError(t, err)
-	assert.Equal(t, false, exited)
-}
-
-// TestAllValidatorsAreExited_CorrectRequest is a regression test that checks if the request contains the correct keys
-func TestAllValidatorsAreExited_CorrectRequest(t *testing.T) {
-	ctrl := gomock.NewController(t)
-	defer ctrl.Finish()
-	client := validatormock.NewMockValidatorClient(ctrl)
-
-	// Create two different public keys
-	pubKey0 := [fieldparams.BLSPubkeyLength]byte{1, 2, 3, 4}
-	pubKey1 := [fieldparams.BLSPubkeyLength]byte{6, 7, 8, 9}
-	// This is the request expected from AllValidatorsAreExited()
-	request := &ethpb.MultipleValidatorStatusRequest{
-		PublicKeys: [][]byte{
-			pubKey0[:],
-			pubKey1[:],
-		},
-	}
-	statuses := []*ethpb.ValidatorStatusResponse{
-		{Status: ethpb.ValidatorStatus_ACTIVE},
-		{Status: ethpb.ValidatorStatus_EXITED},
-	}
-
-	client.EXPECT().MultipleValidatorStatus(
-		gomock.Any(), // ctx
-		request,      // request
-	).Return(&ethpb.MultipleValidatorStatusResponse{Statuses: statuses}, nil /*err*/)
-
-	// If AllValidatorsAreExited does not create the expected request, this test will fail
-	v := validator{
-		keyManager:      newMockKeymanager(t, keypair{pub: pubKey0}, keypair{pub: pubKey1}),
-		validatorClient: client,
-	}
-	exited, err := v.AllValidatorsAreExited(context.Background())
-	require.NoError(t, err)
-	assert.Equal(t, false, exited)
-}
-
 func TestService_ReceiveBlocks_NilBlock(t *testing.T) {
 	ctrl := gomock.NewController(t)
 	defer ctrl.Finish()
@@ -15,7 +15,7 @@ import (
 // ExtractKeystores retrieves the secret keys for specified public keys
 // in the function input, encrypts them using the specified password,
 // and returns their respective EIP-2335 keystores.
-func (_ *Keymanager) ExtractKeystores(
+func (*Keymanager) ExtractKeystores(
 	_ context.Context, publicKeys []bls.PublicKey, password string,
 ) ([]*keymanager.Keystore, error) {
 	lock.Lock()
@@ -44,11 +44,11 @@ func (_ *Keymanager) ExtractKeystores(
 			return nil, err
 		}
 		keystores[i] = &keymanager.Keystore{
-			Crypto:  cryptoFields,
-			ID:      id.String(),
-			Pubkey:  fmt.Sprintf("%x", pubKeyBytes),
-			Version: encryptor.Version(),
-			Name:    encryptor.Name(),
+			Crypto:      cryptoFields,
+			ID:          id.String(),
+			Pubkey:      fmt.Sprintf("%x", pubKeyBytes),
+			Version:     encryptor.Version(),
+			Description: encryptor.Name(),
 		}
 	}
 	return keystores, nil
@@ -31,11 +31,11 @@ func createRandomKeystore(t testing.TB, password string) *keymanager.Keystore {
 	cryptoFields, err := encryptor.Encrypt(validatingKey.Marshal(), password)
 	require.NoError(t, err)
 	return &keymanager.Keystore{
-		Crypto:  cryptoFields,
-		Pubkey:  fmt.Sprintf("%x", pubKey),
-		ID:      id.String(),
-		Version: encryptor.Version(),
-		Name:    encryptor.Name(),
+		Crypto:      cryptoFields,
+		Pubkey:      fmt.Sprintf("%x", pubKey),
+		ID:          id.String(),
+		Version:     encryptor.Version(),
+		Description: encryptor.Name(),
 	}
 }
@@ -83,12 +83,13 @@ type AccountLister interface {

 // Keystore json file representation as a Go struct.
 type Keystore struct {
-	Crypto  map[string]interface{} `json:"crypto"`
-	ID      string                 `json:"uuid"`
-	Pubkey  string                 `json:"pubkey"`
-	Version uint                   `json:"version"`
-	Name    string                 `json:"name"`
-	Path    string                 `json:"path"`
+	Crypto      map[string]interface{} `json:"crypto"`
+	ID          string                 `json:"uuid"`
+	Pubkey      string                 `json:"pubkey"`
+	Version     uint                   `json:"version"`
+	Description string                 `json:"description"`
+	Name        string                 `json:"name,omitempty"` // field deprecated in favor of description, EIP2335
+	Path        string                 `json:"path"`
 }

 // Kind defines an enum for either local, derived, or remote-signing
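Editor's note: this hunk renames the keystore's display field. JSON output now carries `description`, while the legacy `name` key is kept only as a deprecated, `omitempty` field so older EIP-2335 files still parse, and the earlier hunks fall back to `Name` when `Description` is empty. A small self-contained sketch of that read-old/write-new migration pattern (the `keystoreMeta` type is illustrative, not Prysm's):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type keystoreMeta struct {
	Version     uint   `json:"version"`
	Description string `json:"description"`
	// Deprecated legacy key: still accepted on input, omitted on output when empty.
	Name string `json:"name,omitempty"`
}

func normalize(m *keystoreMeta) {
	// Mirrors the fallback used when importing old keystores.
	if m.Description == "" && m.Name != "" {
		m.Description = m.Name
	}
}

func main() {
	old := []byte(`{"version":4,"name":"my validator"}`)
	m := &keystoreMeta{}
	_ = json.Unmarshal(old, m)
	normalize(m)
	m.Name = "" // drop the legacy key before re-encoding

	out, _ := json.Marshal(m)
	fmt.Println(string(out)) // {"version":4,"description":"my validator"}
}
```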
@@ -89,6 +89,9 @@ func (s *Server) ImportKeystores(
 	for i := 0; i < len(req.Keystores); i++ {
 		k := &keymanager.Keystore{}
 		err = json.Unmarshal([]byte(req.Keystores[i]), k)
+		if k.Description == "" && k.Name != "" {
+			k.Description = k.Name
+		}
 		if err != nil {
 			// we want to ignore unmarshal errors for now, proper status in importKeystore
 			k.Pubkey = "invalid format"
@@ -534,11 +534,11 @@ func createRandomKeystore(t testing.TB, password string) *keymanager.Keystore {
 	cryptoFields, err := encryptor.Encrypt(validatingKey.Marshal(), password)
 	require.NoError(t, err)
 	return &keymanager.Keystore{
-		Crypto:  cryptoFields,
-		Pubkey:  fmt.Sprintf("%x", pubKey),
-		ID:      id.String(),
-		Version: encryptor.Version(),
-		Name:    encryptor.Name(),
+		Crypto:      cryptoFields,
+		Pubkey:      fmt.Sprintf("%x", pubKey),
+		ID:          id.String(),
+		Version:     encryptor.Version(),
+		Description: encryptor.Name(),
 	}
 }
@@ -243,6 +243,9 @@ func (*Server) ValidateKeystores(
 		if err := json.Unmarshal([]byte(encoded), &keystore); err != nil {
 			return nil, status.Errorf(codes.InvalidArgument, "Not a valid EIP-2335 keystore JSON file: %v", err)
 		}
+		if keystore.Description == "" && keystore.Name != "" {
+			keystore.Description = keystore.Name
+		}
 		if _, err := decryptor.Decrypt(keystore.Crypto, req.KeystoresPassword); err != nil {
 			doesNotDecrypt := strings.Contains(err.Error(), keymanager.IncorrectPasswordErrMsg)
 			if doesNotDecrypt {
@@ -92,11 +92,11 @@ func TestServer_CreateWallet_Local(t *testing.T) {
 	cryptoFields, err := encryptor.Encrypt(privKey.Marshal(), strongPass)
 	require.NoError(t, err)
 	item := &keymanager.Keystore{
-		Crypto:  cryptoFields,
-		ID:      id.String(),
-		Version: encryptor.Version(),
-		Pubkey:  pubKey,
-		Name:    encryptor.Name(),
+		Crypto:      cryptoFields,
+		ID:          id.String(),
+		Version:     encryptor.Version(),
+		Pubkey:      pubKey,
+		Description: encryptor.Name(),
 	}
 	encodedFile, err := json.MarshalIndent(item, "", "\t")
 	require.NoError(t, err)
@@ -241,11 +241,11 @@ func TestServer_ValidateKeystores_OK(t *testing.T) {
 	cryptoFields, err := encryptor.Encrypt(privKey.Marshal(), strongPass)
 	require.NoError(t, err)
 	item := &keymanager.Keystore{
-		Crypto:  cryptoFields,
-		ID:      id.String(),
-		Version: encryptor.Version(),
-		Pubkey:  pubKey,
-		Name:    encryptor.Name(),
+		Crypto:      cryptoFields,
+		ID:          id.String(),
+		Version:     encryptor.Version(),
+		Pubkey:      pubKey,
+		Description: encryptor.Name(),
 	}
 	encodedFile, err := json.MarshalIndent(item, "", "\t")
 	require.NoError(t, err)
@@ -278,11 +278,11 @@ func TestServer_ValidateKeystores_OK(t *testing.T) {
 	cryptoFields, err := encryptor.Encrypt(privKey.Marshal(), differentPassword)
 	require.NoError(t, err)
 	item := &keymanager.Keystore{
-		Crypto:  cryptoFields,
-		ID:      id.String(),
-		Version: encryptor.Version(),
-		Pubkey:  pubKey,
-		Name:    encryptor.Name(),
+		Crypto:      cryptoFields,
+		ID:          id.String(),
+		Version:     encryptor.Version(),
+		Pubkey:      pubKey,
+		Description: encryptor.Name(),
 	}
 	encodedFile, err := json.MarshalIndent(item, "", "\t")
 	keystores = append(keystores, string(encodedFile))