Mirror of https://github.com/OffchainLabs/prysm.git (synced 2026-01-11 06:18:05 -05:00)

Compare commits: scenario_l...fcTesting (17 commits)
| SHA1 |
|---|
| cf0505b8db |
| 3a9764d3af |
| d1d3edc7fe |
| ba55ae8cea |
| 27aac105d7 |
| 115d565f49 |
| 019e0b56e2 |
| 0efb038984 |
| 63d81144e9 |
| 6edbfa3128 |
| 194b3b1c5e |
| 996ec67229 |
| c7b2c011d8 |
| d15122fae2 |
| 3e17dbb532 |
| a75e78ddb4 |
| 1862422db9 |
@@ -1,6 +1,6 @@
 # Contribution Guidelines
 
-Note: The latest and most up to date documenation can be found on our [docs portal](https://docs.prylabs.network/docs/contribute/contribution-guidelines).
+Note: The latest and most up-to-date documentation can be found on our [docs portal](https://docs.prylabs.network/docs/contribute/contribution-guidelines).
 
 Excited by our work and want to get involved in building out our sharding releases? Or maybe you haven't learned as much about the Ethereum protocol but are a savvy developer?
@@ -10,9 +10,9 @@ You can explore our [Open Issues](https://github.com/prysmaticlabs/prysm/issues)
 
 **1. Set up Prysm following the instructions in README.md.**
 
-**2. Fork the prysm repo.**
+**2. Fork the Prysm repo.**
 
-Sign in to your Github account or create a new account if you do not have one already. Then navigate your browser to https://github.com/prysmaticlabs/prysm/. In the upper right hand corner of the page, click “fork”. This will create a copy of the Prysm repo in your account.
+Sign in to your GitHub account or create a new account if you do not have one already. Then navigate your browser to https://github.com/prysmaticlabs/prysm/. In the upper right hand corner of the page, click “fork”. This will create a copy of the Prysm repo in your account.
 
 **3. Create a local clone of Prysm.**
 
@@ -23,7 +23,7 @@ $ git clone https://github.com/prysmaticlabs/prysm.git
 $ cd $GOPATH/src/github.com/prysmaticlabs/prysm
 ```
 
-**4. Link your local clone to the fork on your Github repo.**
+**4. Link your local clone to the fork on your GitHub repo.**
 
 ```
 $ git remote add myprysmrepo https://github.com/<your_github_user_name>/prysm.git
@@ -68,7 +68,7 @@ $ go test <file_you_are_working_on>
 $ git add --all
 ```
 
-This command stages all of the files that you have changed. You can add individual files by specifying the file name or names and eliminating the “-- all”.
+This command stages all the files that you have changed. You can add individual files by specifying the file name or names and eliminating the “-- all”.
 
 **11. Commit the file or files.**
 
@@ -96,8 +96,7 @@ If there are conflicts between your edits and those made by others since you sta
 $ git status
 ```
 
-Open those files one at a time and you
-will see lines inserted by Git that identify the conflicts:
+Open those files one at a time, and you will see lines inserted by Git that identify the conflicts:
 
 ```
 <<<<<< HEAD
@@ -119,7 +118,7 @@ $ git push myrepo feature-in-progress-branch
 
 **15. Check to be sure your fork of the Prysm repo contains your feature branch with the latest edits.**
 
-Navigate to your fork of the repo on Github. On the upper left where the current branch is listed, change the branch to your feature-in-progress-branch. Open the files that you have worked on and check to make sure they include your changes.
+Navigate to your fork of the repo on GitHub. On the upper left where the current branch is listed, change the branch to your feature-in-progress-branch. Open the files that you have worked on and check to make sure they include your changes.
 
 **16. Create a pull request.**
 
@@ -151,7 +150,7 @@ pick hash fix a bug
 pick hash add a feature
 ```
 
-Replace the word pick with the word “squash” for every line but the first so you end with ….
+Replace the word pick with the word “squash” for every line but the first, so you end with ….
 
 ```
 pick hash do some work
@@ -178,7 +177,7 @@ We consider two types of contributions to our repo and categorize them as follow
 Anyone can become a part-time contributor and help out on implementing Ethereum consensus. The responsibilities of a part-time contributor include:
 
 - Engaging in Gitter conversations, asking the questions on how to begin contributing to the project
-- Opening up github issues to express interest in code to implement
+- Opening up GitHub issues to express interest in code to implement
 - Opening up PRs referencing any open issue in the repo. PRs should include:
   - Detailed context of what would be required for merge
   - Tests that are consistent with how other tests are written in our implementation
@@ -188,12 +187,12 @@ Anyone can become a part-time contributor and help out on implementing Ethereum
 
 ### Core Contributors
 
-Core contributors are remote contractors of Prysmatic Labs, LLC. and are considered critical team members of our organization. Core devs have all of the responsibilities of part-time contributors plus the majority of the following:
+Core contributors are remote contractors of Prysmatic Labs, LLC. and are considered critical team members of our organization. Core devs have all the responsibilities of part-time contributors plus the majority of the following:
 
 - Stay up to date on the latest beacon chain specification
-- Monitor github issues and PR’s to make sure owner, labels, descriptions are correct
+- Monitor GitHub issues and PR’s to make sure owner, labels, descriptions are correct
 - Formulate independent ideas, suggest new work to do, point out improvements to existing approaches
-- Participate in code review, ensure code quality is excellent, and have ensure high code coverage
+- Participate in code review, ensure code quality is excellent, and ensure high code coverage
 - Help with social media presence, write bi-weekly development update
 - Represent Prysmatic Labs at events to help spread the word on scalability research and solutions
@@ -135,15 +135,14 @@ func (s Uint256) SSZBytes() []byte {
 
 // UnmarshalJSON takes in a byte array and unmarshals the value in Uint256
 func (s *Uint256) UnmarshalJSON(t []byte) error {
-    start := 0
     end := len(t)
-    if t[0] == '"' {
-        start += 1
+    if len(t) < 2 {
+        return errors.Errorf("provided Uint256 json string is too short: %s", string(t))
     }
-    if t[end-1] == '"' {
-        end -= 1
+    if t[0] != '"' || t[end-1] != '"' {
+        return errors.Errorf("provided Uint256 json string is malformed: %s", string(t))
     }
-    return s.UnmarshalText(t[start:end])
+    return s.UnmarshalText(t[1 : end-1])
 }
 
 // UnmarshalText takes in a byte array and unmarshals the text in Uint256
@@ -1156,6 +1156,14 @@ func TestUint256Unmarshal(t *testing.T) {
     require.Equal(t, expected, string(m))
 }
 
+func TestUint256Unmarshal_BadData(t *testing.T) {
+    var bigNum Uint256
+
+    assert.ErrorContains(t, "provided Uint256 json string is too short", bigNum.UnmarshalJSON([]byte{'"'}))
+    assert.ErrorContains(t, "provided Uint256 json string is malformed", bigNum.UnmarshalJSON([]byte{'"', '1', '2'}))
+
+}
+
 func TestUint256UnmarshalNegative(t *testing.T) {
     m := "-1"
     var value Uint256
@@ -119,7 +119,6 @@ go_test(
         "process_block_test.go",
         "receive_attestation_test.go",
         "receive_block_test.go",
-        "scenarios_test.go",
         "service_test.go",
         "setup_test.go",
         "weak_subjectivity_checks_test.go",
@@ -154,7 +154,7 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *notifyForkcho
         var pId [8]byte
         copy(pId[:], payloadID[:])
         s.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(nextSlot, proposerId, pId, arg.headRoot)
-    } else if hasAttr && payloadID == nil {
+    } else if hasAttr && payloadID == nil && !features.Get().PrepareAllPayloads {
         log.WithFields(logrus.Fields{
             "blockHash": fmt.Sprintf("%#x", headPayload.BlockHash()),
             "slot": headBlk.Slot(),
@@ -120,7 +120,7 @@ func logPayload(block interfaces.ReadOnlyBeaconBlock) error {
     fields := logrus.Fields{
         "blockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash())),
         "parentHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.ParentHash())),
-        "blockNumber": payload.BlockNumber,
+        "blockNumber": payload.BlockNumber(),
         "gasUtilized": fmt.Sprintf("%.2f", gasUtilized),
     }
     if block.Version() >= version.Capella {
@@ -172,3 +172,10 @@ func WithClockSynchronizer(gs *startup.ClockSynchronizer) Option {
         return nil
     }
 }
+
+func WithSyncComplete(c chan struct{}) Option {
+    return func(s *Service) error {
+        s.syncComplete = c
+        return nil
+    }
+}
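A minimal, self-contained sketch of the functional-options pattern the new WithSyncComplete option follows; the Service, Option, and NewService names below are illustrative stand-ins, not the Prysm definitions:

```go
package main

import "fmt"

// Service is a stand-in for the blockchain service configured above.
type Service struct {
	syncComplete chan struct{}
}

// Option mutates a Service during construction, mirroring the pattern in the diff.
type Option func(*Service) error

// WithSyncComplete wires in the channel the initial-sync service will close.
func WithSyncComplete(c chan struct{}) Option {
	return func(s *Service) error {
		s.syncComplete = c
		return nil
	}
}

// NewService applies every option in order and fails fast on the first error.
func NewService(opts ...Option) (*Service, error) {
	s := &Service{}
	for _, o := range opts {
		if err := o(s); err != nil {
			return nil, err
		}
	}
	return s, nil
}

func main() {
	done := make(chan struct{})
	s, err := NewService(WithSyncComplete(done))
	fmt.Println(s.syncComplete != nil, err) // true <nil>
}
```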
@@ -136,7 +136,7 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.ReadOnlySignedB
     if err != nil {
         return errors.Wrap(err, "could not validate new payload")
     }
-    if isValidPayload {
+    if signed.Version() < version.Capella && isValidPayload {
         if err := s.validateMergeTransitionBlock(ctx, preStateVersion, preStateHeader, signed); err != nil {
             return err
         }
@@ -652,18 +652,17 @@ func (s *Service) validateMergeTransitionBlock(ctx context.Context, stateVersion
 // This routine checks if there is a cached proposer payload ID available for the next slot proposer.
 // If there is not, it will call forkchoice updated with the correct payload attribute then cache the payload ID.
 func (s *Service) runLateBlockTasks() {
-    _, err := s.clockWaiter.WaitForClock(s.ctx)
-    if err != nil {
-        log.WithError(err).Error("runLateBlockTasks encountered an error waiting for initialization")
+    if err := s.waitForSync(); err != nil {
+        log.WithError(err).Error("failed to wait for initial sync")
         return
     }
 
     attThreshold := params.BeaconConfig().SecondsPerSlot / 3
     ticker := slots.NewSlotTickerWithOffset(s.genesisTime, time.Duration(attThreshold)*time.Second, params.BeaconConfig().SecondsPerSlot)
     for {
         select {
         case <-ticker.C():
             s.lateBlockTasks(s.ctx)
 
         case <-s.ctx.Done():
             log.Debug("Context closed, exiting routine")
             return
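The ticker above fires attThreshold seconds into every slot. Assuming mainnet's 12-second slots (an assumption; the code reads the value from the beacon config), that offset works out to 4 seconds, the same point at which attestations are due:

```go
package main

import "fmt"

func main() {
	// Assuming SecondsPerSlot = 12 (mainnet); the diff reads this from params.BeaconConfig().
	secondsPerSlot := uint64(12)
	attThreshold := secondsPerSlot / 3
	fmt.Println(attThreshold) // 4 — late-block tasks fire 4s into each slot.
}
```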
@@ -720,3 +719,13 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
         log.WithError(err).Debug("could not perform late block tasks: failed to update forkchoice with engine")
     }
 }
+
+// waitForSync blocks until the node is synced to the head.
+func (s *Service) waitForSync() error {
+    select {
+    case <-s.syncComplete:
+        return nil
+    case <-s.ctx.Done():
+        return errors.New("context closed, exiting goroutine")
+    }
+}
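waitForSync blocks on a channel that the initial-sync service is expected to close once it finishes. A self-contained sketch of that hand-off; the wiring and names here are hypothetical, not the actual Prysm plumbing:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitForSync mirrors the select in the diff: it returns when syncComplete is
// closed, or errors out if the context is cancelled first.
func waitForSync(ctx context.Context, syncComplete chan struct{}) error {
	select {
	case <-syncComplete:
		return nil
	case <-ctx.Done():
		return errors.New("context closed, exiting goroutine")
	}
}

func main() {
	syncComplete := make(chan struct{})
	go func() {
		time.Sleep(100 * time.Millisecond) // pretend initial sync finishes
		close(syncComplete)                // closing unblocks every waiter at once
	}()
	fmt.Println(waitForSync(context.Background(), syncComplete)) // <nil>
}
```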
@@ -1,363 +0,0 @@
package blockchain

import (
    "os"
    "testing"
    "time"

    "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
    mockExecution "github.com/prysmaticlabs/prysm/v4/beacon-chain/execution/testing"
    forkchoicetypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/types"
    "github.com/prysmaticlabs/prysm/v4/beacon-chain/startup"
    state_native "github.com/prysmaticlabs/prysm/v4/beacon-chain/state/state-native"
    "github.com/prysmaticlabs/prysm/v4/config/params"
    consensus_blocks "github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
    "github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
    ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
    "github.com/prysmaticlabs/prysm/v4/testing/require"
    "github.com/prysmaticlabs/prysm/v4/time/slots"
    logTest "github.com/sirupsen/logrus/hooks/test"
)

type Unmarshaler interface {
    UnmarshalSSZ(buf []byte) error
}

// dataFetcher fetches and unmarshals data from file to provided data structure.
func dataFetcher(fPath string, data Unmarshaler) error {
    rawFile, err := os.ReadFile(fPath) // #nosec G304
    if err != nil {
        return err
    }
    return data.UnmarshalSSZ(rawFile)
}

// Test setup:
// Start with a head at slot 6653310
// We import a timely block 6653311 this should warm up the caches
// We import a late block 6653312 (epoch boundary). This should advance the NCS.
// We import an early block 6653313 (slot 1). This should cause a reorg.
//
// This test does not run a full node, but rather runs the full blockchain
// package without execution engine deadlocks. If you suspect that a delay comes
// from the EL you should not use this template. This test also does not fully
// cover the DB paths, it tricks the DB in thinking that we are doing a
// checkpoint sync from the justified state so that it will fail to migrate old
// states to cold.
func TestLateSlot1(t *testing.T) {
    hook := logTest.NewGlobal()
    slot32 := primitives.Slot(6653312)
    baseDir := "/home/heluani/Documents/code/ethereum/prysm/beacon-chain/blockchain/testing/scenario/"
    // finalizedState is the current finalized state at slot 30.
    finalizedStateData := &ethpb.BeaconStateCapella{}
    sszPathFinalizedState := baseDir + "state_finalized.ssz"
    require.NoError(t, dataFetcher(sszPathFinalizedState, finalizedStateData))
    finalizedState, err := state_native.InitializeFromProtoCapella(finalizedStateData)
    require.NoError(t, err)
    require.Equal(t, slot32, finalizedState.Slot()+96)

    // finalizedBlock is the beacon block with the root that was finalized
    // checkpoint root.
    finalizedBlockData := &ethpb.SignedBeaconBlockCapella{}
    sszPathFinalizedBlock := baseDir + "block_finalized.ssz"
    require.NoError(t, dataFetcher(sszPathFinalizedBlock, finalizedBlockData))
    finalizedBlock, err := consensus_blocks.NewSignedBeaconBlock(finalizedBlockData)
    require.NoError(t, err)
    require.Equal(t, slot32, finalizedBlock.Block().Slot()+96)

    // justifiedState is the current justified state at slot 30, we will
    // consider this to be our genesis. It is at slot32 - 64.
    justifiedStateData := &ethpb.BeaconStateCapella{}
    sszPathJustifiedState := baseDir + "state_justified.ssz"
    require.NoError(t, dataFetcher(sszPathJustifiedState, justifiedStateData))
    justifiedState, err := state_native.InitializeFromProtoCapella(justifiedStateData)
    require.NoError(t, err)
    require.Equal(t, slot32, justifiedState.Slot()+64)

    // justifiedBlock is the beacon block with the root that was justified
    // checkpoint root.
    justifiedBlockData := &ethpb.SignedBeaconBlockCapella{}
    sszPathJustifiedBlock := baseDir + "block_justified.ssz"
    require.NoError(t, dataFetcher(sszPathJustifiedBlock, justifiedBlockData))
    justifiedBlock, err := consensus_blocks.NewSignedBeaconBlock(justifiedBlockData)
    require.NoError(t, err)
    require.Equal(t, slot32, justifiedBlock.Block().Slot()+64)

    // finalizedJustifiedState is the beacon state at the justification
    // checkpoint of the finalized checkpoint of the state at slot 30,
    // It is needed to obtain the right balances when starting
    finalizedJustifiedStateData := &ethpb.BeaconStateCapella{}
    sszPathFinalizedJustifiedState := baseDir + "state_finalized_justified.ssz"
    require.NoError(t, dataFetcher(sszPathFinalizedJustifiedState, finalizedJustifiedStateData))
    finalizedJustifiedState, err := state_native.InitializeFromProtoCapella(finalizedJustifiedStateData)
    require.NoError(t, err)
    require.Equal(t, slot32, finalizedJustifiedState.Slot()+128)

    // finalizedJustifiedBlock is the beacon block with the root that was
    // the finalizedJustified checkpoint root.
    finalizedJustifiedBlockData := &ethpb.SignedBeaconBlockCapella{}
    sszPathFinalizedJustifiedBlock := baseDir + "block_finalized_justified.ssz"
    require.NoError(t, dataFetcher(sszPathFinalizedJustifiedBlock, finalizedJustifiedBlockData))
    finalizedJustifiedBlock, err := consensus_blocks.NewSignedBeaconBlock(finalizedJustifiedBlockData)
    require.NoError(t, err)
    require.Equal(t, slot32, finalizedJustifiedBlock.Block().Slot()+128)

    // postState30 is the state at slot 30 right after applying the block at
    // slot 30.
    postState30Data := &ethpb.BeaconStateCapella{}
    sszPathPostState30 := baseDir + "post_state_30.ssz"
    require.NoError(t, dataFetcher(sszPathPostState30, postState30Data))
    postState30, err := state_native.InitializeFromProtoCapella(postState30Data)
    require.NoError(t, err)
    require.Equal(t, slot32, postState30.Slot()+2)

    // block30 is the beacon block that arrived at slot 30. It's the
    // starting headRoot for the test
    beaconBlock30Data := &ethpb.SignedBeaconBlockCapella{}
    sszPathBlock30 := baseDir + "block30.ssz"
    require.NoError(t, dataFetcher(sszPathBlock30, beaconBlock30Data))
    block30, err := consensus_blocks.NewSignedBeaconBlock(beaconBlock30Data)
    require.NoError(t, err)
    require.Equal(t, slot32, block30.Block().Slot()+2)

    // block31 is the beacon block that arrived at slot 31.
    beaconBlock31Data := &ethpb.SignedBeaconBlockCapella{}
    sszPathBlock31 := baseDir + "block31.ssz"
    require.NoError(t, dataFetcher(sszPathBlock31, beaconBlock31Data))
    block31, err := consensus_blocks.NewSignedBeaconBlock(beaconBlock31Data)
    require.NoError(t, err)
    require.Equal(t, slot32, block31.Block().Slot()+1)

    // block32 is the beacon block that arrived at slot 32.
    beaconBlock32Data := &ethpb.SignedBeaconBlockCapella{}
    sszPathBlock32 := baseDir + "block32.ssz"
    require.NoError(t, dataFetcher(sszPathBlock32, beaconBlock32Data))
    block32, err := consensus_blocks.NewSignedBeaconBlock(beaconBlock32Data)
    require.NoError(t, err)
    require.Equal(t, slot32, block32.Block().Slot())

    // block33 is the beacon block that arrived at slot 33, this block is
    // reorged
    beaconBlock33Data := &ethpb.SignedBeaconBlockCapella{}
    sszPathBlock33 := baseDir + "block33.ssz"
    require.NoError(t, dataFetcher(sszPathBlock33, beaconBlock33Data))
    block33, err := consensus_blocks.NewSignedBeaconBlock(beaconBlock33Data)
    require.NoError(t, err)
    require.Equal(t, slot32+1, block33.Block().Slot())
    require.Equal(t, block32.Block().ParentRoot(), block33.Block().ParentRoot())

    // newJustifiedBlock is the beacon block that arrived at slot 0, it's the
    // block that becomes justified at the epoch boundary.
    newJustifiedBlockData := &ethpb.SignedBeaconBlockCapella{}
    newJustifiedBlockPath := baseDir + "new_justified_block.ssz"
    require.NoError(t, dataFetcher(newJustifiedBlockPath, newJustifiedBlockData))
    newJustifiedBlock, err := consensus_blocks.NewSignedBeaconBlock(newJustifiedBlockData)
    require.NoError(t, err)
    require.Equal(t, slot32, newJustifiedBlock.Block().Slot()+32)

    // newJustifiedState is the beacon State at slot 0. It is the state that
    // gets justified at the epoch boundary.
    newJustifiedStateData := &ethpb.BeaconStateCapella{}
    newJustifiedStatePath := baseDir + "new_justified_state.ssz"
    require.NoError(t, dataFetcher(newJustifiedStatePath, newJustifiedStateData))
    newJustifiedState, err := state_native.InitializeFromProtoCapella(newJustifiedStateData)
    require.NoError(t, err)
    require.Equal(t, slot32, newJustifiedState.Slot()+32)

    // Setup the service
    service, tr := minimalTestService(t)
    ctx, beaconDB, fcs := tr.ctx, tr.db, tr.fcs

    // Save the justified and finalized states and blocks.
    require.NoError(t, beaconDB.SaveBlock(ctx, finalizedBlock))
    finalizedRoot, err := finalizedBlock.Block().HashTreeRoot()
    require.NoError(t, err)
    require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, finalizedState.Copy(), finalizedRoot))
    require.NoError(t, beaconDB.SaveBlock(ctx, justifiedBlock))
    justifiedRoot, err := justifiedBlock.Block().HashTreeRoot()
    require.NoError(t, err)
    require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, justifiedState.Copy(), justifiedRoot))
    fcp := postState30.FinalizedCheckpoint()
    require.Equal(t, [32]byte(fcp.Root), finalizedRoot)
    jcp := postState30.CurrentJustifiedCheckpoint()
    require.Equal(t, [32]byte(jcp.Root), justifiedRoot)
    finalizedSummary := &ethpb.StateSummary{Slot: finalizedState.Slot(), Root: finalizedRoot[:]}
    require.NoError(t, beaconDB.SaveStateSummary(ctx, finalizedSummary))
    justifiedSummary := &ethpb.StateSummary{Slot: justifiedState.Slot(), Root: justifiedRoot[:]}
    require.NoError(t, beaconDB.SaveStateSummary(ctx, justifiedSummary))

    // Forkchoice requires justified balances,
    // For this we need to save the justified checkpoint of the
    // corresponding finalized state.
    require.NoError(t, beaconDB.SaveBlock(ctx, finalizedJustifiedBlock))
    finalizedJustifiedRoot, err := finalizedJustifiedBlock.Block().HashTreeRoot()
    require.NoError(t, err)
    require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, finalizedJustifiedState.Copy(), finalizedJustifiedRoot))
    fjcs := finalizedState.CurrentJustifiedCheckpoint()
    require.Equal(t, [32]byte(fjcs.Root), finalizedJustifiedRoot)
    finalizedJustifiedSummary := &ethpb.StateSummary{Slot: finalizedJustifiedState.Slot(), Root: finalizedJustifiedRoot[:]}
    require.NoError(t, beaconDB.SaveStateSummary(ctx, finalizedJustifiedSummary))

    // Setup forkchoice to have the two checkpoints
    execution, err := finalizedBlock.Block().Body().Execution()
    require.NoError(t, err)
    payloadHash := [32]byte(execution.BlockHash())
    // We use a fake finalized checkpoint for the finalized checkpoint node
    // it's self-referencing as if it was genesis
    st, _, err := prepareForkchoiceState(ctx, slot32-96, finalizedRoot, [32]byte{}, payloadHash, fjcs, fcp)
    require.NoError(t, err)
    require.NoError(t, fcs.InsertNode(ctx, st, finalizedRoot))

    // We fake the justified checkpoint to have as parent the finalized
    // checkpoint
    execution, err = justifiedBlock.Block().Body().Execution()
    require.NoError(t, err)
    payloadHash = [32]byte(execution.BlockHash())
    st, _, err = prepareForkchoiceState(ctx, slot32-64, justifiedRoot, finalizedRoot, payloadHash, fcp, fcp)
    require.NoError(t, err)
    require.NoError(t, fcs.InsertNode(ctx, st, justifiedRoot))

    require.NoError(t, fcs.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: fcp.Epoch, Root: [32]byte(fcp.Root)}))
    require.NoError(t, fcs.UpdateJustifiedCheckpoint(ctx, &forkchoicetypes.Checkpoint{Epoch: jcp.Epoch, Root: [32]byte(jcp.Root)}))

    // Save the new justified state/block to DB since it will be justified
    // in the epoch pre-compute, we need to insert this block as well to
    // forkchoice
    require.NoError(t, beaconDB.SaveBlock(ctx, newJustifiedBlock))
    newJustifiedRoot, err := newJustifiedBlock.Block().HashTreeRoot()
    require.NoError(t, err)
    require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, newJustifiedState.Copy(), newJustifiedRoot))
    newJustifiedSummary := &ethpb.StateSummary{Slot: newJustifiedState.Slot(), Root: newJustifiedRoot[:]}
    require.NoError(t, beaconDB.SaveStateSummary(ctx, newJustifiedSummary))
    st, _, err = prepareForkchoiceState(ctx, slot32-32, newJustifiedRoot, justifiedRoot, [32]byte{}, jcp, fcp)
    require.NoError(t, err)
    require.NoError(t, fcs.InsertNode(ctx, st, newJustifiedRoot))

    // We fake the forkchoice node at 30 since we won't be able to import
    // the block 31 otherwise. We need to use a full state to be able to
    // import later block 31. When importing this block we need to set
    // genesis time so that it is timely
    secondsSinceGenesis, err := (slot32 - 2).SafeMul(params.BeaconConfig().SecondsPerSlot)
    require.NoError(t, err)
    genesisTime := time.Now().Add(-time.Duration(uint64(secondsSinceGenesis)) * time.Second)
    service.SetGenesisTime(genesisTime)
    fcs.SetGenesisTime(uint64(genesisTime.Unix()))

    // Start the service that will run the late block tasks and advance
    // slots
    var vr [32]byte
    require.NoError(t, service.clockSetter.SetClock(startup.NewClock(time.Now(), vr)))
    go service.runLateBlockTasks()
    go service.spawnProcessAttestationsRoutine()

    block30Root := block31.Block().ParentRoot()
    fakeState30 := postState30.Copy()
    bh := fakeState30.LatestBlockHeader()
    copy(bh.ParentRoot, newJustifiedRoot[:])
    require.NoError(t, fakeState30.SetLatestBlockHeader(bh))
    require.NoError(t, fcs.InsertNode(ctx, fakeState30, block30Root))
    fcHead, err := fcs.Head(ctx)
    require.NoError(t, err)
    require.Equal(t, block30Root, fcHead)

    // Check blockchain package's checkpoints
    bfcp := service.FinalizedCheckpt()
    require.Equal(t, fcp.Epoch, bfcp.Epoch)
    require.Equal(t, [32]byte(bfcp.Root), [32]byte(fcp.Root))
    bjcp := service.CurrentJustifiedCheckpt()
    require.DeepEqual(t, jcp, bjcp)

    // Set up the node's head
    require.NoError(t, beaconDB.SaveBlock(ctx, block30))
    require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, postState30.Copy(), block30Root))
    block30Summary := &ethpb.StateSummary{Slot: slot32 - 2, Root: block30Root[:]}
    require.NoError(t, beaconDB.SaveStateSummary(ctx, block30Summary))
    require.NoError(t, service.setHead(block30Root, block30, postState30))
    require.NoError(t, beaconDB.SaveHeadBlockRoot(ctx, block30Root))
    headRoot, err := service.HeadRoot(ctx)
    require.NoError(t, err)
    require.Equal(t, block30Root, [32]byte(headRoot))
    headState, err := service.HeadState(ctx)
    require.NoError(t, err)
    require.Equal(t, headState.Slot(), slot32-2)

    // Update the node's next slot cache
    require.NoError(t, transition.UpdateNextSlotCache(ctx, block30Root[:], postState30))
    preState31 := transition.NextSlotState(block30Root[:], slot32-1)
    require.NotNil(t, preState31)

    // Mock the execution engine to accept any block as valid
    executionEngine := &mockExecution.EngineClient{}
    service.cfg.ExecutionEngineCaller = executionEngine

    // Save the origin checkpoint root to prevent the db from walking up the
    // ancestry. We are asserting that the justified checkpoint root (that
    // is at slot32 - 64 is the init checkpoint
    require.NoError(t, service.cfg.BeaconDB.SaveOriginCheckpointBlockRoot(ctx, justifiedRoot))

    // Wait until slot 31 to import it. We wait so that the late blocks
    // tasks are run.
    slot31Start := slots.StartTime(uint64(service.genesisTime.Unix()), slot32-1)
    time.Sleep(slot31Start.Add(time.Second).Sub(time.Now()))

    // Import block 31 check that headRoot and NSC are right
    block31Root := block32.Block().ParentRoot()
    require.NoError(t, service.ReceiveBlock(ctx, block31, block31Root))
    headRoot, err = service.HeadRoot(ctx)
    require.NoError(t, err)
    require.Equal(t, block30Root, block31.Block().ParentRoot())
    require.Equal(t, block31Root, [32]byte(headRoot))
    headState, err = service.HeadState(ctx)
    require.NoError(t, err)
    require.Equal(t, headState.Slot(), slot32-1)

    // Why do we need to sleep here to let the NSC to be updated? state copies take long it seems
    time.Sleep(200 * time.Millisecond)
    preState32 := transition.NextSlotState(block31Root[:], slot32)
    require.NotNil(t, preState32)
    require.Equal(t, slot32, preState32.Slot())

    // Wait until the start of slot 32 + 5 seconds so that slot 32 is late
    // Import block 32
    slot32Start := slots.StartTime(uint64(service.genesisTime.Unix()), slot32)
    time.Sleep(slot32Start.Add(5 * time.Second).Sub(time.Now()))

    block32Root, err := block32.Block().HashTreeRoot()
    require.NoError(t, err)
    require.NoError(t, service.ReceiveBlock(ctx, block32, block32Root))
    headRoot, err = service.HeadRoot(ctx)
    require.NoError(t, err)
    require.Equal(t, block32Root, [32]byte(headRoot))
    headState, err = service.HeadState(ctx)
    require.NoError(t, err)
    require.Equal(t, headState.Slot(), slot32)

    preState33 := transition.NextSlotState(block32Root[:], slot32+1)
    require.NotNil(t, preState33)
    require.Equal(t, slot32+1, preState33.Slot())

    // We should also have advanced the NSC for the blockroot of 31
    preState33Alt := transition.NextSlotState(block31Root[:], slot32+1)
    require.NotNil(t, preState33Alt)
    require.Equal(t, slot32+1, preState33Alt.Slot())

    // Wait until the start of slot 33 and import it early
    slot33Start := slots.StartTime(uint64(service.genesisTime.Unix()), slot32+1)
    time.Sleep(slot33Start.Add(time.Second).Sub(time.Now()))

    block33Root, err := block33.Block().HashTreeRoot()
    require.NoError(t, err)
    require.NoError(t, service.ReceiveBlock(ctx, block33, block33Root))
    headRoot, err = service.HeadRoot(ctx)
    require.NoError(t, err)
    require.Equal(t, block33Root, [32]byte(headRoot))
    headState, err = service.HeadState(ctx)
    require.NoError(t, err)
    require.Equal(t, headState.Slot(), slot32+1)

    preState34 := transition.NextSlotState(block33Root[:], slot32+2)
    require.NotNil(t, preState34)
    require.Equal(t, slot32+2, preState34.Slot())

    require.LogsContain(t, hook, "pingo")
}
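The deleted test's setup comments describe slot 6653312 as an epoch boundary and 6653313 as slot 1 of its epoch. Assuming 32 slots per epoch (the mainnet value, not stated in the file itself), a quick check of that arithmetic:

```go
package main

import "fmt"

func main() {
	const slotsPerEpoch = 32 // mainnet SLOTS_PER_EPOCH (assumption for this sketch)
	for _, slot := range []uint64{6653310, 6653311, 6653312, 6653313} {
		fmt.Printf("slot %d -> epoch %d, position %d\n", slot, slot/slotsPerEpoch, slot%slotsPerEpoch)
	}
	// slot 6653310 -> epoch 207915, position 30
	// slot 6653311 -> epoch 207915, position 31
	// slot 6653312 -> epoch 207916, position 0   (epoch boundary)
	// slot 6653313 -> epoch 207916, position 1
}
```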
@@ -60,6 +60,7 @@ type Service struct {
     wsVerifier *WeakSubjectivityVerifier
     clockSetter startup.ClockSetter
     clockWaiter startup.ClockWaiter
+    syncComplete chan struct{}
 }
 
 // config options for the service.
@@ -2,6 +2,7 @@ package altair
 
 import (
     "context"
+    goErrors "errors"
     "fmt"
     "time"
@@ -22,6 +23,10 @@ import (
 
 const maxRandomByte = uint64(1<<8 - 1)
 
+var (
+    ErrTooLate = errors.New("sync message is too late")
+)
+
 // ValidateNilSyncContribution validates the following fields are not nil:
 // -the contribution and proof itself
 // -the message within contribution and proof
@@ -217,7 +222,7 @@ func ValidateSyncMessageTime(slot primitives.Slot, genesisTime time.Time, clockD
     upperBound := time.Now().Add(clockDisparity)
     // Verify sync message slot is within the time range.
     if messageTime.Before(lowerBound) || messageTime.After(upperBound) {
-        return fmt.Errorf(
+        syncErr := fmt.Errorf(
             "sync message time %v (slot %d) not within allowable range of %v (slot %d) to %v (slot %d)",
             messageTime,
             slot,
@@ -226,6 +231,11 @@ func ValidateSyncMessageTime(slot primitives.Slot, genesisTime time.Time, clockD
             upperBound,
             uint64(upperBound.Unix()-genesisTime.Unix())/params.BeaconConfig().SecondsPerSlot,
         )
+        // Wrap error message if sync message is too late.
+        if messageTime.Before(lowerBound) {
+            syncErr = goErrors.Join(ErrTooLate, syncErr)
+        }
+        return syncErr
     }
     return nil
 }
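goErrors.Join keeps the detailed message while letting callers match the ErrTooLate sentinel with errors.Is. A minimal sketch of that behavior, using only the standard library (Go 1.20+):

```go
package main

import (
	"errors"
	"fmt"
)

var ErrTooLate = errors.New("sync message is too late")

func main() {
	detail := fmt.Errorf("sync message time 12:00:05 (slot 100) not within allowable range")
	joined := errors.Join(ErrTooLate, detail)

	fmt.Println(errors.Is(joined, ErrTooLate)) // true -> the gossip layer can silently ignore it
	fmt.Println(joined)                        // both messages, newline-separated
}
```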
@@ -38,6 +38,7 @@ go_library(
         "@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
         "@com_github_prysmaticlabs_go_bitfield//:go_default_library",
+        "@com_github_sirupsen_logrus//:go_default_library",
         "@io_opencensus_go//trace:go_default_library",
     ],
 )
@@ -14,6 +14,10 @@ import (
     "github.com/prysmaticlabs/prysm/v4/time/slots"
 )
 
+var (
+    ErrTooLate = errors.New("attestation is too late")
+)
+
 // ValidateNilAttestation checks if any composite field of input attestation is nil.
 // Access to these nil fields will result in run time panic,
 // it is recommended to run these checks as first line of defense.
@@ -164,7 +168,7 @@ func ValidateAttestationTime(attSlot primitives.Slot, genesisTime time.Time, clo
     )
     if attTime.Before(lowerBounds) {
         attReceivedTooLateCount.Inc()
-        return attError
+        return errors.Join(ErrTooLate, attError)
     }
     if attTime.After(upperBounds) {
         attReceivedTooEarlyCount.Inc()
@@ -17,6 +17,7 @@ import (
     ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
     "github.com/prysmaticlabs/prysm/v4/time/slots"
+    log "github.com/sirupsen/logrus"
     "go.opencensus.io/trace"
 )
 
 var CommitteeCacheInProgressHit = promauto.NewCounter(prometheus.CounterOpts{
@@ -396,3 +397,22 @@ func isEligibleForActivation(activationEligibilityEpoch, activationEpoch, finali
     return activationEligibilityEpoch <= finalizedEpoch &&
         activationEpoch == params.BeaconConfig().FarFutureEpoch
 }
+
+// LastActivatedValidatorIndex provides the last activated validator given a state
+func LastActivatedValidatorIndex(ctx context.Context, st state.ReadOnlyBeaconState) (primitives.ValidatorIndex, error) {
+    _, span := trace.StartSpan(ctx, "helpers.LastActivatedValidatorIndex")
+    defer span.End()
+    var lastActivatedvalidatorIndex primitives.ValidatorIndex
+    // linear search because status are not sorted
+    for j := st.NumValidators() - 1; j >= 0; j-- {
+        val, err := st.ValidatorAtIndexReadOnly(primitives.ValidatorIndex(j))
+        if err != nil {
+            return 0, err
+        }
+        if IsActiveValidatorUsingTrie(val, time.CurrentEpoch(st)) {
+            lastActivatedvalidatorIndex = primitives.ValidatorIndex(j)
+            break
+        }
+    }
+    return lastActivatedvalidatorIndex, nil
+}
@@ -727,3 +727,26 @@ func computeProposerIndexWithValidators(validators []*ethpb.Validator, activeInd
         }
     }
 }
+
+func TestLastActivatedValidatorIndex_OK(t *testing.T) {
+    beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{})
+    require.NoError(t, err)
+
+    validators := make([]*ethpb.Validator, 4)
+    balances := make([]uint64, len(validators))
+    for i := uint64(0); i < 4; i++ {
+        validators[i] = &ethpb.Validator{
+            PublicKey: make([]byte, params.BeaconConfig().BLSPubkeyLength),
+            WithdrawalCredentials: make([]byte, 32),
+            EffectiveBalance: 32 * 1e9,
+            ExitEpoch: params.BeaconConfig().FarFutureEpoch,
+        }
+        balances[i] = validators[i].EffectiveBalance
+    }
+    require.NoError(t, beaconState.SetValidators(validators))
+    require.NoError(t, beaconState.SetBalances(balances))
+
+    index, err := LastActivatedValidatorIndex(context.Background(), beaconState)
+    require.NoError(t, err)
+    require.Equal(t, index, primitives.ValidatorIndex(3))
+}
@@ -107,7 +107,6 @@ type HeadAccessDatabase interface {
 
     // initialization method needed for origin checkpoint sync
     SaveOrigin(ctx context.Context, serState, serBlock []byte) error
     SaveOriginCheckpointBlockRoot(ctx context.Context, blockRoot [32]byte) error
-    SaveBackfillBlockRoot(ctx context.Context, blockRoot [32]byte) error
 }
@@ -115,28 +115,32 @@ func FuzzExchangeTransitionConfiguration(f *testing.F) {
 
 func FuzzExecutionPayload(f *testing.F) {
     logsBloom := [256]byte{'j', 'u', 'n', 'k'}
-    execData := &engine.ExecutableData{
-        ParentHash: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
-        FeeRecipient: common.Address([20]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF}),
-        StateRoot: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
-        ReceiptsRoot: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
-        LogsBloom: logsBloom[:],
-        Random: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
-        Number: math.MaxUint64,
-        GasLimit: math.MaxUint64,
-        GasUsed: math.MaxUint64,
-        Timestamp: 100,
-        ExtraData: nil,
-        BaseFeePerGas: big.NewInt(math.MaxInt),
-        BlockHash: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
-        Transactions: [][]byte{{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}},
+    execData := &engine.ExecutionPayloadEnvelope{
+        ExecutionPayload: &engine.ExecutableData{
+            ParentHash: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
+            FeeRecipient: common.Address([20]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF}),
+            StateRoot: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
+            ReceiptsRoot: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
+            LogsBloom: logsBloom[:],
+            Random: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
+            Number: math.MaxUint64,
+            GasLimit: math.MaxUint64,
+            GasUsed: math.MaxUint64,
+            Timestamp: 100,
+            ExtraData: nil,
+            BaseFeePerGas: big.NewInt(math.MaxInt),
+            BlockHash: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
+            Transactions: [][]byte{{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}, {0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}},
+            Withdrawals: []*types.Withdrawal{},
+        },
+        BlockValue: nil,
     }
     output, err := json.Marshal(execData)
     assert.NoError(f, err)
     f.Add(output)
     f.Fuzz(func(t *testing.T, jsonBlob []byte) {
-        gethResp := &engine.ExecutableData{}
-        prysmResp := &pb.ExecutionPayload{}
+        gethResp := &engine.ExecutionPayloadEnvelope{}
+        prysmResp := &pb.ExecutionPayloadCapellaWithValue{}
         gethErr := json.Unmarshal(jsonBlob, gethResp)
         prysmErr := json.Unmarshal(jsonBlob, prysmResp)
         assert.Equal(t, gethErr != nil, prysmErr != nil, fmt.Sprintf("geth and prysm unmarshaller return inconsistent errors. %v and %v", gethErr, prysmErr))
@@ -147,10 +151,10 @@ func FuzzExecutionPayload(f *testing.F) {
         gethBlob, gethErr := json.Marshal(gethResp)
         prysmBlob, prysmErr := json.Marshal(prysmResp)
         assert.Equal(t, gethErr != nil, prysmErr != nil, "geth and prysm unmarshaller return inconsistent errors")
-        newGethResp := &engine.ExecutableData{}
+        newGethResp := &engine.ExecutionPayloadEnvelope{}
         newGethErr := json.Unmarshal(prysmBlob, newGethResp)
         assert.NoError(t, newGethErr)
-        newGethResp2 := &engine.ExecutableData{}
+        newGethResp2 := &engine.ExecutionPayloadEnvelope{}
         newGethErr = json.Unmarshal(gethBlob, newGethResp2)
         assert.NoError(t, newGethErr)
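The fuzz target now round-trips the envelope type rather than the bare payload: the payload fields sit under an "executionPayload" key with a sibling "blockValue". A rough sketch of that JSON shape; the field names follow the engine-API convention and are an assumption for illustration, not the exact go-ethereum struct:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// envelope approximates the wrapper the fuzzer decodes into on both the geth
// and Prysm sides; only the two top-level keys matter for this illustration.
type envelope struct {
	ExecutionPayload map[string]any `json:"executionPayload"`
	BlockValue       *string        `json:"blockValue"`
}

func main() {
	blob := []byte(`{"executionPayload":{"blockNumber":"0x1","gasLimit":"0x1c9c380"},"blockValue":"0x0"}`)
	var e envelope
	err := json.Unmarshal(blob, &e)
	fmt.Println(err, e.ExecutionPayload["blockNumber"], *e.BlockValue) // <nil> 0x1 0x0
}
```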
@@ -39,7 +39,7 @@ func New() *ForkChoice {
 
     b := make([]uint64, 0)
     v := make([]Vote, 0)
-    return &ForkChoice{store: s, balances: b, votes: v}
+    return &ForkChoice{store: s, balances: b, votes: v, fcLock: new(fcLock)}
 }
 
 // NodeCount returns the current number of nodes in the Store.
@@ -1,7 +1,11 @@
 package doublylinkedtree
 
 import (
+    "bytes"
+    "runtime/debug"
+    "runtime/pprof"
     "sync"
+    "time"
 
     "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice"
     forkchoicetypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/types"
@@ -11,7 +15,7 @@ import (
 
 // ForkChoice defines the overall fork choice store which includes all block nodes, validator's latest votes and balances.
 type ForkChoice struct {
-    sync.RWMutex
+    *fcLock
     store *Store
     votes []Vote // tracks individual validator's last vote.
     balances []uint64 // tracks individual validator's balances last accounted in votes.
@@ -68,3 +72,52 @@ type Vote struct {
     nextRoot [fieldparams.RootLength]byte // next voting root.
     nextEpoch primitives.Epoch // epoch of next voting period.
 }
+
+type fcLock struct {
+    lk sync.RWMutex
+    t time.Time
+    currChan chan int
+}
+
+func (f *fcLock) Lock() {
+    f.lk.Lock()
+    f.t = time.Now()
+    f.currChan = make(chan int)
+    go func(t time.Time, c chan int) {
+        tim := time.NewTimer(3 * time.Second)
+        select {
+        case <-c:
+            tim.Stop()
+        case <-tim.C:
+            tim.Stop()
+            pfile := pprof.Lookup("goroutine")
+            bf := bytes.NewBuffer([]byte{})
+            err := pfile.WriteTo(bf, 1)
+            _ = err
+            log.Warnf("FC lock is taking longer than 3 seconds with the complete stack of %s", bf.String())
+        }
+    }(time.Now(), f.currChan)
+}
+
+func (f *fcLock) Unlock() {
+    t := time.Since(f.t)
+    f.t = time.Time{}
+    close(f.currChan)
+    f.lk.Unlock()
+    if t > time.Second {
+        log.Warnf("FC lock is taking longer than 1 second: %s with the complete stack of %s", t.String(), string(debug.Stack()))
+    }
+}
+
+func (f *fcLock) RLock() {
+    t := time.Now()
+    f.lk.RLock()
+    dt := time.Since(t)
+    if dt > time.Second {
+        log.Warnf("FC Rlock is taking longer than 1 second: %s with stack %s", dt.String(), string(debug.Stack()))
+    }
+}
+
+func (f *fcLock) RUnlock() {
+    f.lk.RUnlock()
+}
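Because ForkChoice now embeds *fcLock instead of sync.RWMutex, existing Lock/Unlock call sites keep compiling unchanged; the embedded methods only add timing and a goroutine-dump watchdog. A stripped-down sketch of the same watchdog idea, with a shortened threshold and names that are illustrative rather than Prysm's:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// slowLock wraps a mutex and complains if a holder keeps it too long,
// mirroring the fcLock pattern above (threshold shortened for the example).
type slowLock struct {
	mu   sync.Mutex
	done chan struct{}
}

func (l *slowLock) Lock() {
	l.mu.Lock()
	l.done = make(chan struct{})
	go func(done chan struct{}) {
		select {
		case <-done: // released in time, nothing to report
		case <-time.After(50 * time.Millisecond):
			fmt.Println("lock held longer than 50ms") // fcLock dumps goroutine stacks here
		}
	}(l.done)
}

func (l *slowLock) Unlock() {
	close(l.done)
	l.mu.Unlock()
}

func main() {
	var l slowLock
	l.Lock()
	time.Sleep(100 * time.Millisecond) // simulate a slow critical section
	l.Unlock()
	time.Sleep(10 * time.Millisecond) // give the watchdog goroutine time to print
}
```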
@@ -236,7 +236,7 @@ func New(cliCtx *cli.Context, opts ...Option) (*BeaconNode, error) {
     }
 
     log.Debugln("Registering Blockchain Service")
-    if err := beacon.registerBlockchainService(beacon.forkChoicer, synchronizer); err != nil {
+    if err := beacon.registerBlockchainService(beacon.forkChoicer, synchronizer, beacon.initialSyncComplete); err != nil {
         return nil, err
     }
@@ -590,7 +590,7 @@ func (b *BeaconNode) registerAttestationPool() error {
     return b.services.RegisterService(s)
 }
 
-func (b *BeaconNode) registerBlockchainService(fc forkchoice.ForkChoicer, gs *startup.ClockSynchronizer) error {
+func (b *BeaconNode) registerBlockchainService(fc forkchoice.ForkChoicer, gs *startup.ClockSynchronizer, syncComplete chan struct{}) error {
     var web3Service *execution.Service
     if err := b.services.FetchService(&web3Service); err != nil {
         return err
@@ -621,6 +621,7 @@ func (b *BeaconNode) registerBlockchainService(fc forkchoice.ForkChoicer, gs *st
         blockchain.WithFinalizedStateAtStartUp(b.finalizedStateAtStartUp),
         blockchain.WithProposerIdsCache(b.proposerIdsCache),
         blockchain.WithClockSynchronizer(gs),
+        blockchain.WithSyncComplete(syncComplete),
     )
 
     blockchainService, err := blockchain.NewService(b.ctx, opts...)
@@ -3,6 +3,7 @@ package apimiddleware
 
 import (
     "encoding/base64"
     "strconv"
+    "strings"
 
     "github.com/pkg/errors"
 )
@@ -17,9 +18,14 @@ func (p *EpochParticipation) UnmarshalJSON(b []byte) error {
     if len(b) < 2 {
         return errors.New("epoch participation length must be at least 2")
     }
+    if b[0] != '"' || b[len(b)-1] != '"' {
+        return errors.Errorf("provided epoch participation json string is malformed: %s", string(b))
+    }
 
     // Remove leading and trailing quotation marks.
-    decoded, err := base64.StdEncoding.DecodeString(string(b[1 : len(b)-1]))
+    jsonString := string(b)
+    jsonString = strings.Trim(jsonString, "\"")
+    decoded, err := base64.StdEncoding.DecodeString(jsonString)
     if err != nil {
         return errors.Wrapf(err, "could not decode epoch participation base64 value")
     }
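strings.Trim removes every leading and trailing quote character rather than exactly one, so the explicit b[0]/b[len(b)-1] check added above is what actually enforces well-formed input before trimming. A quick standalone illustration:

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

func main() {
	quoted := `"dHJ1ZQ=="`              // a JSON string holding base64 for "true"
	trimmed := strings.Trim(quoted, `"`) // -> dHJ1ZQ==
	decoded, err := base64.StdEncoding.DecodeString(trimmed)
	fmt.Println(trimmed, string(decoded), err) // dHJ1ZQ== true <nil>

	// Without the added prefix/suffix check, input such as XdHJ1ZQ==X would reach
	// the decoder unchanged, since Trim only strips quote characters.
	fmt.Println(strings.Trim("XdHJ1ZQ==X", `"`)) // XdHJ1ZQ==X
}
```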
@@ -23,7 +23,7 @@ func TestUnmarshalEpochParticipation(t *testing.T) {
         ep := EpochParticipation{}
         err := ep.UnmarshalJSON([]byte(":illegal:"))
         require.NotNil(t, err)
-        assert.ErrorContains(t, "could not decode epoch participation base64 value", err)
+        assert.ErrorContains(t, "provided epoch participation json string is malformed", err)
     })
     t.Run("length too small", func(t *testing.T) {
         ep := EpochParticipation{}
@@ -36,4 +36,8 @@ func TestUnmarshalEpochParticipation(t *testing.T) {
         require.NoError(t, ep.UnmarshalJSON([]byte("null")))
         assert.DeepEqual(t, EpochParticipation([]string{}), ep)
     })
+    t.Run("invalid value", func(t *testing.T) {
+        ep := EpochParticipation{}
+        require.ErrorContains(t, "provided epoch participation json string is malformed", ep.UnmarshalJSON([]byte("XdHJ1ZQ==X")))
+    })
 }
@@ -146,6 +146,7 @@ func (vs *Server) duties(ctx context.Context, req *ethpb.DutiesRequest) (*ethpb.
 
     validatorAssignments := make([]*ethpb.DutiesResponse_Duty, 0, len(req.PublicKeys))
     nextValidatorAssignments := make([]*ethpb.DutiesResponse_Duty, 0, len(req.PublicKeys))
 
     for _, pubKey := range req.PublicKeys {
         if ctx.Err() != nil {
             return nil, status.Errorf(codes.Aborted, "Could not continue fetching assignments: %v", ctx.Err())
@@ -194,7 +195,8 @@ func (vs *Server) duties(ctx context.Context, req *ethpb.DutiesRequest) (*ethpb.
             vs.ProposerSlotIndexCache.PrunePayloadIDs(epochStartSlot)
         } else {
             // If the validator isn't in the beacon state, try finding their deposit to determine their status.
-            vStatus, _ := vs.validatorStatus(ctx, s, pubKey)
+            // We don't need the lastActiveValidatorFn because we don't use the response in this.
+            vStatus, _ := vs.validatorStatus(ctx, s, pubKey, nil)
             assignment.Status = vStatus.Status
         }
@@ -347,16 +347,6 @@ func (vs *Server) proposeGenericBeaconBlock(ctx context.Context, blk interfaces.
     ctx, span := trace.StartSpan(ctx, "ProposerServer.proposeGenericBeaconBlock")
     defer span.End()
 
-    // Do not block proposal critical path with debug logging or block feed updates.
-    defer func() {
-        log.WithField("slot", blk.Block().Slot()).Debugf(
-            "Block proposal received via RPC")
-        vs.BlockNotifier.BlockFeed().Send(&feed.Event{
-            Type: blockfeed.ReceivedBlock,
-            Data: &blockfeed.ReceivedBlockData{SignedBlock: blk},
-        })
-    }()
-
     unblinder, err := newUnblinder(blk, vs.BlockBuilder)
     if err != nil {
         return nil, errors.Wrap(err, "could not create unblinder")
@@ -387,6 +377,13 @@ func (vs *Server) proposeGenericBeaconBlock(ctx context.Context, blk interfaces.
         return nil, fmt.Errorf("could not process beacon block: %v", err)
     }
 
+    log.WithField("slot", blk.Block().Slot()).Debugf(
+        "Block proposal received via RPC")
+    vs.BlockNotifier.BlockFeed().Send(&feed.Event{
+        Type: blockfeed.ReceivedBlock,
+        Data: &blockfeed.ReceivedBlockData{SignedBlock: blk},
+    })
+
     return &ethpb.ProposeResponse{
         BlockRoot: root[:],
     }, nil
@@ -23,6 +23,7 @@ import (
 )
 
 var errPubkeyDoesNotExist = errors.New("pubkey does not exist")
+var errHeadstateDoesNotExist = errors.New("head state does not exist")
 var errOptimisticMode = errors.New("the node is currently optimistic and cannot serve validators")
 var nonExistentIndex = primitives.ValidatorIndex(^uint64(0))
@@ -46,7 +47,8 @@ func (vs *Server) ValidatorStatus(
     if err != nil {
         return nil, status.Error(codes.Internal, "Could not get head state")
     }
-    vStatus, _ := vs.validatorStatus(ctx, headState, req.PublicKey)
+
+    vStatus, _ := vs.validatorStatus(ctx, headState, req.PublicKey, func() (primitives.ValidatorIndex, error) { return helpers.LastActivatedValidatorIndex(ctx, headState) })
     return vStatus, nil
 }
@@ -86,8 +88,9 @@ func (vs *Server) MultipleValidatorStatus(
     // Fetch statuses from beacon state.
     statuses := make([]*ethpb.ValidatorStatusResponse, len(pubKeys))
     indices := make([]primitives.ValidatorIndex, len(pubKeys))
+    lastActivated, hpErr := helpers.LastActivatedValidatorIndex(ctx, headState)
     for i, pubKey := range pubKeys {
-        statuses[i], indices[i] = vs.validatorStatus(ctx, headState, pubKey)
+        statuses[i], indices[i] = vs.validatorStatus(ctx, headState, pubKey, func() (primitives.ValidatorIndex, error) { return lastActivated, hpErr })
     }
 
     return &ethpb.MultipleValidatorStatusResponse{
@@ -223,11 +226,13 @@ func (vs *Server) activationStatus(
     }
     activeValidatorExists := false
     statusResponses := make([]*ethpb.ValidatorActivationResponse_Status, len(pubKeys))
+    // only run calculation of last activated once per state
+    lastActivated, hpErr := helpers.LastActivatedValidatorIndex(ctx, headState)
     for i, pubKey := range pubKeys {
         if ctx.Err() != nil {
             return false, nil, ctx.Err()
         }
-        vStatus, idx := vs.validatorStatus(ctx, headState, pubKey)
+        vStatus, idx := vs.validatorStatus(ctx, headState, pubKey, func() (primitives.ValidatorIndex, error) { return lastActivated, hpErr })
         if vStatus == nil {
             continue
         }
@@ -272,6 +277,7 @@ func (vs *Server) validatorStatus(
     ctx context.Context,
     headState state.ReadOnlyBeaconState,
     pubKey []byte,
+    lastActiveValidatorFn func() (primitives.ValidatorIndex, error),
 ) (*ethpb.ValidatorStatusResponse, primitives.ValidatorIndex) {
     ctx, span := trace.StartSpan(ctx, "ValidatorServer.validatorStatus")
     defer span.End()
@@ -340,17 +346,12 @@ func (vs *Server) validatorStatus(
             }
         }
     }
 
-    var lastActivatedvalidatorIndex primitives.ValidatorIndex
-    for j := headState.NumValidators() - 1; j >= 0; j-- {
-        val, err := headState.ValidatorAtIndexReadOnly(primitives.ValidatorIndex(j))
-        if err != nil {
-            return resp, idx
-        }
-        if helpers.IsActiveValidatorUsingTrie(val, time.CurrentEpoch(headState)) {
-            lastActivatedvalidatorIndex = primitives.ValidatorIndex(j)
-            break
-        }
-    }
+    if lastActiveValidatorFn == nil {
+        return resp, idx
+    }
+    lastActivatedvalidatorIndex, err := lastActiveValidatorFn()
+    if err != nil {
+        return resp, idx
+    }
     // Our position in the activation queue is the above index - our validator index.
     if lastActivatedvalidatorIndex < idx {
@@ -390,7 +391,7 @@ func checkValidatorsAreRecent(headEpoch primitives.Epoch, req *ethpb.DoppelGange
 
 func statusForPubKey(headState state.ReadOnlyBeaconState, pubKey []byte) (ethpb.ValidatorStatus, primitives.ValidatorIndex, error) {
     if headState == nil || headState.IsNil() {
-        return ethpb.ValidatorStatus_UNKNOWN_STATUS, 0, errors.New("head state does not exist")
+        return ethpb.ValidatorStatus_UNKNOWN_STATUS, 0, errHeadstateDoesNotExist
     }
     idx, ok := headState.ValidatorIndexByPubkey(bytesutil.ToBytes48(pubKey))
     if !ok || uint64(idx) >= uint64(headState.NumValidators()) {
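Passing the lookup as a closure lets each caller decide how it is evaluated: MultipleValidatorStatus and activationStatus run the linear scan once per state and hand every key the cached result, single-status callers defer it until it is actually needed, and duties passes nil because it never reads the queue position. A compressed sketch of that compute-once pattern, with illustrative names rather than the Prysm signatures:

```go
package main

import "fmt"

// expensiveScan stands in for helpers.LastActivatedValidatorIndex.
func expensiveScan() (uint64, error) {
	fmt.Println("scanning validators…") // runs exactly once in this sketch
	return 3, nil
}

func statusFor(pubKey string, lastActiveFn func() (uint64, error)) string {
	if lastActiveFn == nil { // mirrors the nil-check added to validatorStatus
		return pubKey + ": unknown queue position"
	}
	last, err := lastActiveFn()
	if err != nil {
		return pubKey + ": error"
	}
	return fmt.Sprintf("%s: last activated index %d", pubKey, last)
}

func main() {
	// Compute once, then hand every key the same cached result via a closure.
	last, err := expensiveScan()
	cached := func() (uint64, error) { return last, err }
	for _, k := range []string{"key-a", "key-b", "key-c"} {
		fmt.Println(statusFor(k, cached))
	}
}
```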
@@ -2,6 +2,7 @@ package sync
 
 import (
     "context"
+    "errors"
     "fmt"
     "reflect"
     "runtime/debug"
@@ -13,6 +14,8 @@ import (
     "github.com/libp2p/go-libp2p/core/host"
     "github.com/libp2p/go-libp2p/core/peer"
     "github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
+    "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/altair"
+    "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
     "github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p"
     "github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p/peers"
     "github.com/prysmaticlabs/prysm/v4/cmd/beacon-chain/flags"
@@ -283,7 +286,7 @@ func (s *Service) wrapAndReportValidation(topic string, v wrappedVal) (string, p
             messageFailedValidationCounter.WithLabelValues(topic).Inc()
         }
         if b == pubsub.ValidationIgnore {
-            if err != nil {
+            if err != nil && !errorIsIgnored(err) {
                 log.WithError(err).WithFields(logrus.Fields{
                     "topic": topic,
                     "multiaddress": multiAddr(pid, s.cfg.p2p.Peers()),
@@ -781,3 +784,13 @@ func multiAddr(pid peer.ID, stat *peers.Status) string {
     }
     return addrs.String()
 }
+
+func errorIsIgnored(err error) bool {
+    if errors.Is(err, helpers.ErrTooLate) {
+        return true
+    }
+    if errors.Is(err, altair.ErrTooLate) {
+        return true
+    }
+    return false
+}
@@ -65,10 +65,7 @@ container_image(
 container_bundle(
     name = "image_bundle",
     images = {
-        "gcr.io/prysmaticlabs/prysm/beacon-chain:latest": ":image_with_creation_time",
-        "gcr.io/prysmaticlabs/prysm/beacon-chain:{DOCKER_TAG}": ":image_with_creation_time",
-        "index.docker.io/prysmaticlabs/prysm-beacon-chain:latest": ":image_with_creation_time",
-        "index.docker.io/prysmaticlabs/prysm-beacon-chain:{DOCKER_TAG}": ":image_with_creation_time",
+        "gcr.io/prysmaticlabs/prysm/beacon-chain:fcTesting": ":image_with_creation_time",
     },
     tags = ["manual"],
     visibility = ["//beacon-chain:__pkg__"],
@@ -119,20 +116,6 @@ docker_push(
     visibility = ["//beacon-chain:__pkg__"],
 )
 
-docker_push(
-    name = "push_images_debug",
-    bundle = ":image_bundle_debug",
-    tags = ["manual"],
-    visibility = ["//beacon-chain:__pkg__"],
-)
-
-docker_push(
-    name = "push_images_alpine",
-    bundle = ":image_bundle_alpine",
-    tags = ["manual"],
-    visibility = ["//beacon-chain:__pkg__"],
-)
-
 go_binary(
     name = "beacon-chain",
     embed = [":go_default_library"],
@@ -52,11 +52,11 @@ func createKeystore(t *testing.T, path string) (*keymanager.Keystore, string) {
     id, err := uuid.NewRandom()
     require.NoError(t, err)
     keystoreFile := &keymanager.Keystore{
-        Crypto: cryptoFields,
-        ID: id.String(),
-        Pubkey: fmt.Sprintf("%x", validatingKey.PublicKey().Marshal()),
-        Version: encryptor.Version(),
-        Name: encryptor.Name(),
+        Crypto: cryptoFields,
+        ID: id.String(),
+        Pubkey: fmt.Sprintf("%x", validatingKey.PublicKey().Marshal()),
+        Version: encryptor.Version(),
+        Description: encryptor.Name(),
     }
     encoded, err := json.MarshalIndent(keystoreFile, "", "\t")
     require.NoError(t, err)
@@ -394,6 +394,9 @@ func (e *ExecutionPayloadCapellaWithValue) UnmarshalJSON(enc []byte) error {
 	if err := json.Unmarshal(enc, &dec); err != nil {
 		return err
 	}
+	if dec.ExecutionPayload == nil {
+		return errors.New("missing required field 'executionPayload' for ExecutionPayloadWithValue")
+	}
 
 	if dec.ExecutionPayload.ParentHash == nil {
 		return errors.New("missing required field 'parentHash' for ExecutionPayload")
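The added check matters because the decoding target holds a pointer: when the JSON omits `executionPayload`, the field stays nil and the later `dec.ExecutionPayload.ParentHash` dereference would panic. A self-contained sketch of the pattern with simplified, hypothetical types:

```
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

type payload struct {
	ParentHash []byte `json:"parentHash"`
}

type payloadWithValue struct {
	ExecutionPayload *payload `json:"executionPayload"`
}

func (p *payloadWithValue) UnmarshalJSON(enc []byte) error {
	// The alias type avoids recursing back into this method.
	type alias payloadWithValue
	var dec alias
	if err := json.Unmarshal(enc, &dec); err != nil {
		return err
	}
	// Without this nil check, the ParentHash access below would panic
	// whenever the field is absent from the input.
	if dec.ExecutionPayload == nil {
		return errors.New("missing required field 'executionPayload'")
	}
	if dec.ExecutionPayload.ParentHash == nil {
		return errors.New("missing required field 'parentHash'")
	}
	*p = payloadWithValue(dec)
	return nil
}

func main() {
	var p payloadWithValue
	err := json.Unmarshal([]byte(`{}`), &p)
	fmt.Println(err) // missing required field 'executionPayload'
}
```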
@@ -7,6 +7,7 @@ go_library(
     importpath = "github.com/prysmaticlabs/prysm/v4/tools/interop/convert-keys",
     visibility = ["//visibility:public"],
     deps = [
+        "//config/params:go_default_library",
         "//tools/unencrypted-keys-gen/keygen:go_default_library",
         "@com_github_sirupsen_logrus//:go_default_library",
         "@in_gopkg_yaml_v2//:go_default_library",
@@ -9,6 +9,7 @@ import (
 	"fmt"
 	"os"
 
+	"github.com/prysmaticlabs/prysm/v4/config/params"
 	"github.com/prysmaticlabs/prysm/v4/tools/unencrypted-keys-gen/keygen"
 	log "github.com/sirupsen/logrus"
 	"gopkg.in/yaml.v2"
@@ -52,7 +53,7 @@ func main() {
 		})
 	}
 
-	outFile, err := os.Create(os.Args[2])
+	outFile, err := os.OpenFile(os.Args[2], os.O_CREATE|os.O_EXCL, params.BeaconIoConfig().ReadWritePermissions)
 	if err != nil {
 		log.WithError(err).Fatalf("Failed to create file at %s", os.Args[2])
 	}
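Both this tool and the specs-checker change below move from `os.Create`, which truncates any existing file and applies 0666 permissions (minus umask), to `os.OpenFile` with `O_CREATE|O_EXCL` and an explicit mode, so existing output is never silently overwritten. A rough sketch of the distinction, using 0600 as an assumed stand-in for `params.BeaconIoConfig().ReadWritePermissions` and adding `O_WRONLY` so the sketch can actually write:

```
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	const path = "keys.yaml"

	// os.Create(path) is equivalent to:
	//   os.OpenFile(path, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0666)
	// i.e. it truncates an existing file and relies on the umask for permissions.

	// Exclusive create: fails with os.ErrExist if the file already exists,
	// and applies an explicit, restrictive mode (0600 assumed here).
	f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0o600)
	if err != nil {
		if os.IsExist(err) {
			log.Fatalf("refusing to overwrite existing file %s", path)
		}
		log.Fatal(err)
	}
	defer f.Close()

	fmt.Fprintln(f, "# generated output")
}
```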
@@ -197,11 +197,11 @@ func encrypt(cliCtx *cli.Context) error {
 		return errors.Wrap(err, "could not encrypt into new keystore")
 	}
 	item := &keymanager.Keystore{
-		Crypto:  cryptoFields,
-		ID:      id.String(),
-		Version: encryptor.Version(),
-		Pubkey:  pubKey,
-		Name:    encryptor.Name(),
+		Crypto:      cryptoFields,
+		ID:          id.String(),
+		Version:     encryptor.Version(),
+		Pubkey:      pubKey,
+		Description: encryptor.Name(),
 	}
 	encodedFile, err := json.MarshalIndent(item, "", "\t")
 	if err != nil {
@@ -229,7 +229,6 @@ func readAndDecryptKeystore(fullPath, password string) error {
 	}
 	decryptor := keystorev4.New()
 	keystoreFile := &keymanager.Keystore{}
-
 	if err := json.Unmarshal(f, keystoreFile); err != nil {
 		return errors.Wrap(err, "could not JSON unmarshal keystore file")
 	}
@@ -18,7 +18,10 @@ go_library(
     ],
     importpath = "github.com/prysmaticlabs/prysm/v4/tools/specs-checker",
     visibility = ["//visibility:public"],
-    deps = ["@com_github_urfave_cli_v2//:go_default_library"],
+    deps = [
+        "//config/params:go_default_library",
+        "@com_github_urfave_cli_v2//:go_default_library",
+    ],
 )
 
 go_binary(
@@ -10,6 +10,7 @@ import (
 	"path/filepath"
 	"regexp"
 
+	"github.com/prysmaticlabs/prysm/v4/config/params"
 	"github.com/urfave/cli/v2"
 )
 
@@ -40,7 +41,7 @@ func download(cliCtx *cli.Context) error {
 
 func getAndSaveFile(specDocUrl, outFilePath string) error {
 	// Create output file.
-	f, err := os.Create(filepath.Clean(outFilePath))
+	f, err := os.OpenFile(filepath.Clean(outFilePath), os.O_CREATE|os.O_EXCL, params.BeaconIoConfig().ReadWritePermissions)
 	if err != nil {
 		return fmt.Errorf("cannot create output file: %w", err)
 	}
@@ -277,6 +277,9 @@ func readKeystoreFile(_ context.Context, keystoreFilePath string) (*keymanager.K
 	if keystoreFile.Pubkey == "" {
 		return nil, errors.New("could not decode keystore json")
 	}
+	if keystoreFile.Description == "" && keystoreFile.Name != "" {
+		keystoreFile.Description = keystoreFile.Name
+	}
 	return keystoreFile, nil
 }
 
@@ -295,11 +298,11 @@ func createKeystoreFromPrivateKey(privKey bls.SecretKey, walletPassword string)
 		)
 	}
 	return &keymanager.Keystore{
-		Crypto:  cryptoFields,
-		ID:      id.String(),
-		Version: encryptor.Version(),
-		Pubkey:  fmt.Sprintf("%x", privKey.PublicKey().Marshal()),
-		Name:    encryptor.Name(),
+		Crypto:      cryptoFields,
+		ID:          id.String(),
+		Version:     encryptor.Version(),
+		Pubkey:      fmt.Sprintf("%x", privKey.PublicKey().Marshal()),
+		Description: encryptor.Name(),
 	}, nil
 }
@@ -2,6 +2,7 @@ package accounts
 
 import (
	"context"
+	"encoding/json"
 	"fmt"
 	"os"
 	"path/filepath"
@@ -174,3 +175,30 @@ func Test_importPrivateKeyAsAccount(t *testing.T) {
 	require.Equal(t, 1, len(pubKeys))
 	assert.DeepEqual(t, pubKeys[0], bytesutil.ToBytes48(privKey.PublicKey().Marshal()))
 }
+
+func Test_NameToDescriptionChangeIsOK(t *testing.T) {
+	jsonString := `{"version":1, "name":"hmmm"}`
+	type Obj struct {
+		Version     uint   `json:"version"`
+		Description string `json:"description"`
+	}
+	a := &Obj{}
+	require.NoError(t, json.Unmarshal([]byte(jsonString), a))
+	require.Equal(t, a.Description, "")
+}
+
+func Test_MarshalOmitsName(t *testing.T) {
+	type Obj struct {
+		Version     uint   `json:"version"`
+		Description string `json:"description"`
+		Name        string `json:"name,omitempty"`
+	}
+	a := &Obj{
+		Version:     1,
+		Description: "hmm",
+	}
+
+	bytes, err := json.Marshal(a)
+	require.NoError(t, err)
+	require.Equal(t, string(bytes), `{"version":1,"description":"hmm"}`)
+}
@@ -117,11 +117,11 @@ func createRandomKeystore(t testing.TB, password string) *keymanager.Keystore {
 	cryptoFields, err := encryptor.Encrypt(validatingKey.Marshal(), password)
 	require.NoError(t, err)
 	return &keymanager.Keystore{
-		Crypto:  cryptoFields,
-		Pubkey:  fmt.Sprintf("%x", pubKey),
-		ID:      id.String(),
-		Version: encryptor.Version(),
-		Name:    encryptor.Name(),
+		Crypto:      cryptoFields,
+		Pubkey:      fmt.Sprintf("%x", pubKey),
+		ID:          id.String(),
+		Version:     encryptor.Version(),
+		Description: encryptor.Name(),
 	}
 }
@@ -121,7 +121,8 @@ func (c *beaconApiValidatorClient) getValidatorsStatusResponse(ctx context.Conte
 	case ethpb.ValidatorStatus_PENDING, ethpb.ValidatorStatus_PARTIALLY_DEPOSITED, ethpb.ValidatorStatus_DEPOSITED:
 		if !isLastActivatedValidatorIndexRetrieved {
 			isLastActivatedValidatorIndexRetrieved = true
-
+			// TODO: double check this due to potential of PENDING STATE being active..
+			// edge case https://github.com/prysmaticlabs/prysm/blob/0669050ffabe925c3d6e5e5d535a86361ae8522b/validator/client/validator.go#L1068
 			activeStateValidators, err := c.stateValidatorsProvider.GetStateValidators(ctx, nil, nil, []string{"active"})
 			if err != nil {
 				return nil, nil, nil, errors.Wrap(err, "failed to get state validators")
@@ -246,7 +246,7 @@ func (v *validator) LogValidatorGainsAndLosses(ctx context.Context, slot primiti
 	if v.emitAccountMetrics {
 		for _, missingPubKey := range resp.MissingValidators {
 			fmtKey := fmt.Sprintf("%#x", missingPubKey)
-			ValidatorBalancesGaugeVec.WithLabelValues(fmtKey).Set(0)
+			ValidatorBalancesGaugeVec.WithLabelValues(fmtKey).Set(float64(params.BeaconConfig().MaxEffectiveBalance))
 		}
 	}
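Publishing `MaxEffectiveBalance` rather than `0` for validators missing from the balances response keeps dashboards from showing a spurious drop to zero. A small sketch of the metric update with a locally declared gauge vector; the 32 ETH (in Gwei) constant is an assumed stand-in for `params.BeaconConfig().MaxEffectiveBalance`:

```
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

// Stand-in for the validator client's ValidatorBalancesGaugeVec.
var validatorBalancesGauge = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Name: "validator_balance",
		Help: "Last reported balance per validator public key, in Gwei.",
	},
	[]string{"pubkey"},
)

const maxEffectiveBalanceGwei = 32_000_000_000 // assumed stand-in value

func main() {
	prometheus.MustRegister(validatorBalancesGauge)

	missing := [][]byte{{0x01, 0x02}, {0x03, 0x04}}
	for _, missingPubKey := range missing {
		fmtKey := fmt.Sprintf("%#x", missingPubKey)
		// Report the ceiling rather than 0 so a missing entry does not look
		// like a slashed or drained validator on dashboards.
		validatorBalancesGauge.WithLabelValues(fmtKey).Set(float64(maxEffectiveBalanceGwei))
	}
}
```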
@@ -15,7 +15,7 @@ import (
 // ExtractKeystores retrieves the secret keys for specified public keys
 // in the function input, encrypts them using the specified password,
 // and returns their respective EIP-2335 keystores.
-func (_ *Keymanager) ExtractKeystores(
+func (*Keymanager) ExtractKeystores(
 	_ context.Context, publicKeys []bls.PublicKey, password string,
 ) ([]*keymanager.Keystore, error) {
 	lock.Lock()
@@ -44,11 +44,11 @@ func (_ *Keymanager) ExtractKeystores(
 			return nil, err
 		}
 		keystores[i] = &keymanager.Keystore{
-			Crypto:  cryptoFields,
-			ID:      id.String(),
-			Pubkey:  fmt.Sprintf("%x", pubKeyBytes),
-			Version: encryptor.Version(),
-			Name:    encryptor.Name(),
+			Crypto:      cryptoFields,
+			ID:          id.String(),
+			Pubkey:      fmt.Sprintf("%x", pubKeyBytes),
+			Version:     encryptor.Version(),
+			Description: encryptor.Name(),
 		}
 	}
 	return keystores, nil
@@ -31,11 +31,11 @@ func createRandomKeystore(t testing.TB, password string) *keymanager.Keystore {
 	cryptoFields, err := encryptor.Encrypt(validatingKey.Marshal(), password)
 	require.NoError(t, err)
 	return &keymanager.Keystore{
-		Crypto:  cryptoFields,
-		Pubkey:  fmt.Sprintf("%x", pubKey),
-		ID:      id.String(),
-		Version: encryptor.Version(),
-		Name:    encryptor.Name(),
+		Crypto:      cryptoFields,
+		Pubkey:      fmt.Sprintf("%x", pubKey),
+		ID:          id.String(),
+		Version:     encryptor.Version(),
+		Description: encryptor.Name(),
 	}
 }
@@ -83,12 +83,13 @@ type AccountLister interface {
 
 // Keystore json file representation as a Go struct.
 type Keystore struct {
-	Crypto  map[string]interface{} `json:"crypto"`
-	ID      string                 `json:"uuid"`
-	Pubkey  string                 `json:"pubkey"`
-	Version uint                   `json:"version"`
-	Name    string                 `json:"name"`
-	Path    string                 `json:"path"`
+	Crypto      map[string]interface{} `json:"crypto"`
+	ID          string                 `json:"uuid"`
+	Pubkey      string                 `json:"pubkey"`
+	Version     uint                   `json:"version"`
+	Description string                 `json:"description"`
+	Name        string                 `json:"name,omitempty"` // field deprecated in favor of description, EIP2335
+	Path        string                 `json:"path"`
 }
 
 // Kind defines an enum for either local, derived, or remote-signing
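Combined with the fallbacks added in the import and validation paths, this struct change keeps keystores compatible in both directions: legacy files that only carry `name` are still read, while newly written files serialize `description` and drop the deprecated `name` field via `omitempty`, in line with EIP-2335. A minimal round-trip sketch of that behavior:

```
package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed-down copy of the keystore fields relevant to the rename.
type keystore struct {
	Version     uint   `json:"version"`
	Description string `json:"description"`
	Name        string `json:"name,omitempty"` // deprecated in favor of description
}

func main() {
	// Reading a legacy keystore that only has "name".
	var k keystore
	_ = json.Unmarshal([]byte(`{"version":4,"name":"validator keys"}`), &k)
	if k.Description == "" && k.Name != "" {
		k.Description = k.Name // same fallback the import paths apply
	}

	// Writing it back: "name" is dropped by omitempty once cleared.
	k.Name = ""
	out, _ := json.Marshal(k)
	fmt.Println(string(out)) // {"version":4,"description":"validator keys"}
}
```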
@@ -89,6 +89,9 @@ func (s *Server) ImportKeystores(
 	for i := 0; i < len(req.Keystores); i++ {
 		k := &keymanager.Keystore{}
 		err = json.Unmarshal([]byte(req.Keystores[i]), k)
+		if k.Description == "" && k.Name != "" {
+			k.Description = k.Name
+		}
 		if err != nil {
 			// we want to ignore unmarshal errors for now, proper status in importKeystore
 			k.Pubkey = "invalid format"
@@ -534,11 +534,11 @@ func createRandomKeystore(t testing.TB, password string) *keymanager.Keystore {
 	cryptoFields, err := encryptor.Encrypt(validatingKey.Marshal(), password)
 	require.NoError(t, err)
 	return &keymanager.Keystore{
-		Crypto:  cryptoFields,
-		Pubkey:  fmt.Sprintf("%x", pubKey),
-		ID:      id.String(),
-		Version: encryptor.Version(),
-		Name:    encryptor.Name(),
+		Crypto:      cryptoFields,
+		Pubkey:      fmt.Sprintf("%x", pubKey),
+		ID:          id.String(),
+		Version:     encryptor.Version(),
+		Description: encryptor.Name(),
 	}
 }
@@ -243,6 +243,9 @@ func (*Server) ValidateKeystores(
 		if err := json.Unmarshal([]byte(encoded), &keystore); err != nil {
 			return nil, status.Errorf(codes.InvalidArgument, "Not a valid EIP-2335 keystore JSON file: %v", err)
 		}
+		if keystore.Description == "" && keystore.Name != "" {
+			keystore.Description = keystore.Name
+		}
 		if _, err := decryptor.Decrypt(keystore.Crypto, req.KeystoresPassword); err != nil {
 			doesNotDecrypt := strings.Contains(err.Error(), keymanager.IncorrectPasswordErrMsg)
 			if doesNotDecrypt {
@@ -92,11 +92,11 @@ func TestServer_CreateWallet_Local(t *testing.T) {
 	cryptoFields, err := encryptor.Encrypt(privKey.Marshal(), strongPass)
 	require.NoError(t, err)
 	item := &keymanager.Keystore{
-		Crypto:  cryptoFields,
-		ID:      id.String(),
-		Version: encryptor.Version(),
-		Pubkey:  pubKey,
-		Name:    encryptor.Name(),
+		Crypto:      cryptoFields,
+		ID:          id.String(),
+		Version:     encryptor.Version(),
+		Pubkey:      pubKey,
+		Description: encryptor.Name(),
 	}
 	encodedFile, err := json.MarshalIndent(item, "", "\t")
 	require.NoError(t, err)
@@ -241,11 +241,11 @@ func TestServer_ValidateKeystores_OK(t *testing.T) {
 	cryptoFields, err := encryptor.Encrypt(privKey.Marshal(), strongPass)
 	require.NoError(t, err)
 	item := &keymanager.Keystore{
-		Crypto:  cryptoFields,
-		ID:      id.String(),
-		Version: encryptor.Version(),
-		Pubkey:  pubKey,
-		Name:    encryptor.Name(),
+		Crypto:      cryptoFields,
+		ID:          id.String(),
+		Version:     encryptor.Version(),
+		Pubkey:      pubKey,
+		Description: encryptor.Name(),
 	}
 	encodedFile, err := json.MarshalIndent(item, "", "\t")
 	require.NoError(t, err)
@@ -278,11 +278,11 @@ func TestServer_ValidateKeystores_OK(t *testing.T) {
 	cryptoFields, err := encryptor.Encrypt(privKey.Marshal(), differentPassword)
 	require.NoError(t, err)
 	item := &keymanager.Keystore{
-		Crypto:  cryptoFields,
-		ID:      id.String(),
-		Version: encryptor.Version(),
-		Pubkey:  pubKey,
-		Name:    encryptor.Name(),
+		Crypto:      cryptoFields,
+		ID:          id.String(),
+		Version:     encryptor.Version(),
+		Pubkey:      pubKey,
+		Description: encryptor.Name(),
 	}
 	encodedFile, err := json.MarshalIndent(item, "", "\t")
 	keystores = append(keystores, string(encodedFile))